The present study assessed the feasibility of conducting a study on the effects of a classroom-based attention training intervention (NeuroTracker) targeting academic skills in adolescents diagnosed with developmental disabilities and IQs in the extremely low range. The evaluation of feasibility included the following three objectives: (1) assessing the appropriateness of a classroom-based cognitive training program for this population, (2) assessing the appropriateness of using the NeuroTracker and the active control task with this population, and (3) assessing the validity of using standardized measures to assess far-transfer effects of the intervention on academic performance. The recruitment rate and low attrition demonstrate the feasibility of conducting a cognitive training study in a school setting. In addition, our results demonstrate that all participants in the treatment condition adhered to the training protocol, which suggests that using this intervention is feasible with this population. Finally, we found that despite some limitations due to floor effects, the standardized reading and math tasks were sensitive to individual differences in performance.
Feasibility of implementing a cognitive training program
Our first objective assessed the feasibility of implementing a classroom-based cognitive training program with participants with extremely low cognitive functioning. Given the relatively low incidence of this type of cognitive profile, it was critical to determine whether a sufficient proportion of parents, teachers, and adolescents with intellectual challenges would be interested in the implementation of cognitive training interventions in their classroom. Our results indicate that slightly over half (58%) of the families contacted consented to participation in a school-based cognitive training study and that the adolescents generally consented to receiving the intervention. To our knowledge, this is the first study documenting willingness to participate in a cognitive training intervention study for this population. While our results are limited to a specific setting (i.e., a private school providing specialized services), they confirm that school principals, teachers, and parents of adolescents with intellectual challenges perceive a need for interventions that target deficits in cognitive abilities such as attention. Our findings also provide valuable information regarding the recruitment rate that can be expected in a larger-scale study.
In addition to initial interest, it was also critical to assess attrition, as it can pose a significant obstacle to evaluating the efficacy of a cognitive training paradigm [31]. Setting aside participants who did not meet the inclusion criteria, the retention rate from baseline to post-intervention assessments (83%) was similar to that of previous cognitive training studies conducted with participants with intellectual challenges [20, 22]. This low attrition rate suggests that the results of the study can be interpreted fairly, as they were not biased by the factors that can lead to attrition (e.g., motivation, harm, perceived cost-benefit). We hypothesize that the school-based nature of this study contributed to the high retention rate, as it facilitated access to the participants and monitoring throughout the study. The low attrition rate also suggests that our inclusion criteria were effective in identifying the participants who would be able to complete all stages of the study.
With regard to inclusion criteria, our study addressed a gap in the current cognitive training literature, which typically excludes participants with intellectual functioning below the average range (see [14,15,16] for examples). Our results are consistent with the few previous studies showing that it is feasible to assess the effects of cognitive training in students with similar cognitive profiles [20,21,22]. While our eligibility criteria were similar to those used in these studies, participants were not pre-selected for their likelihood of meeting the inclusion criteria prior to recruitment. As a result, 3 of the 7 recruited classrooms (44% of recruited participants) were excluded following the screening assessment for not meeting the inclusion criteria. The academic performance of most of the participants in the excluded classrooms could not be assessed fairly, as they did not possess the access skills necessary to engage with standardized tasks (e.g., they were non-verbal or had significant difficulties understanding questions and instructions). Therefore, our findings expand on the current cognitive training literature by indicating that while a cognitive training study can be successfully implemented with individuals with extremely low IQ, a subset of this population (i.e., individuals functioning at the lower end of the distribution) lacks the skills necessary to participate in such a study.
Feasibility of using NeuroTracker training for adolescents with extremely low IQs
We hypothesized that due to its core characteristics (i.e., accuracy, accessibility, and adaptability), the NeuroTracker would be a suitable intervention to target academic difficulties in this population. Given the strong relationship between attention and academic performance [8], an intervention that accurately targets attention, such as the NeuroTracker, could be particularly beneficial for the population targeted in this study. With regard to accessibility, the complete adherence to the intervention indicates that it is accessible to adolescents with extremely low cognitive functioning, extending the findings of a study conducted with adolescents with cognitive functioning 1 to 2 standard deviations below average [25]. Our results also demonstrate that the program can be implemented as a group intervention and that one-to-one supervision is not necessary for successful implementation, further demonstrating the accessibility of the task.
Another characteristic of the NeuroTracker that we hypothesized would make it suitable for this population is its adaptability. Previous research has highlighted that adapting the task difficulty to a participant’s capacity is a valuable feature of effective cognitive training interventions [32]. The NeuroTracker is highly adaptable, as it modifies task difficulty based on participants’ performance in three different ways (i.e., by adapting the speed, the duration of the task, and the number of targets). It remains unclear whether the task was adaptive enough to meet the needs of all participants: data collected on the difficulty level attained during each session suggest that about half of the participants showed progress on the task throughout the 5 weeks, while the other half trained at the lowest level of difficulty throughout most sessions. A similar variability in training performance was reported by [22], who examined the effects of a computerized cognitive training task targeting non-verbal reasoning and working memory in children with ID. Future studies will be necessary to examine whether these participants were making progress within the lowest level, or whether they were simply not engaging with the task because it was above their baseline capacity. Nonetheless, the high adherence to the NeuroTracker increases the likelihood that participants will benefit from this intervention.
The adaptive nature of the NeuroTracker is based on factors that influence an individual’s performance on the Multiple Object-Tracking (MOT) task, the psychophysical paradigm on which the NeuroTracker is based. Previous research examining MOT capability has demonstrated that manipulating the paradigm’s task demands (i.e., the same demands manipulated here to adapt to the participant’s capability) quantifies the attentional resources required to successfully complete a trial [27, 33]. Thus, any change in the number of target items, the speed of the objects, or the tracking duration is directly associated with the task’s attentional demands, or level of difficulty.
One critical advantage of using the NeuroTracker with students with extremely low IQ is its ability to adapt to the participants’ capability by increasing or decreasing trial speed following correct or incorrect responses, respectively. The importance of adaptability has been demonstrated with other cognitive training paradigms [34,35,36]. In fact, published meta-analyses have concluded that adaptive cognitive training programs produce larger effects than non-adaptive programs (see [37, 38]). Nevertheless, little research has assessed adaptability as a variable for identifying optimal adaptive training protocols. As such, future research examining the benefits of different levels of adaptivity would be a significant contribution toward uncovering the mechanisms of NeuroTracker training, as well as of cognitive training at large.
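To make this adaptation logic concrete, the sketch below implements a simple one-up/one-down staircase over trial speed, following the rule described above (a correct response makes the next trial faster; an error makes it slower). This is an illustrative sketch only: the step size, the speed bounds, and the simulate_trial observer model are our assumptions, not NeuroTracker’s published algorithm.

```python
import random

def simulate_trial(speed, ability=1.2):
    """Hypothetical observer model: success becomes less likely as the
    trial speed exceeds the observer's ability. Illustrative only."""
    p_correct = min(0.95, max(0.05, ability / (ability + speed)))
    return random.random() < p_correct

def run_staircase(n_trials=20, speed=1.0, step=0.05, min_speed=0.5):
    """One-up/one-down staircase: raise the speed after a correct trial,
    lower it (down to min_speed) after an error."""
    history = []
    for _ in range(n_trials):
        correct = simulate_trial(speed)
        history.append((round(speed, 2), correct))
        speed = speed + step if correct else max(min_speed, speed - step)
    return history

if __name__ == "__main__":
    for spd, ok in run_staircase():
        print(f"speed={spd:.2f} correct={ok}")
```

A staircase of this kind converges on the speed at which the participant succeeds on roughly half of the trials, which is why the speed reached can serve as an index of attentional capacity.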
Scientific validity assessment
In addition to assessing the feasibility of using the NeuroTracker with adolescents with extremely low cognitive functioning, an important objective of this study was to assess the suitability of clinically validated academic measures for this population. Exploratory correlations between scores on the cognitive and academic measures were conducted to examine the degree of co-variability between them. The objective of these exploratory analyses was not to infer a relationship between cognitive and academic capability. In fact, significant violations of statistical assumptions preclude this conclusion: (i) the variables are not normally distributed, (ii) the sample size is small, and (iii) the sample recruited represents only one portion of the normal distribution (i.e., a restricted range). Rather, the analyses allowed us to observe the spread of the scores and the covariance between the cognitive and academic measures. Floor effects, which reduce range and variability, have been identified as an issue when using standardized measures to assess abilities in individuals with significant intellectual challenges (e.g., [39, 40]). Therefore, a large cluster of scores at the floor of each measure would have indicated that these measures are not appropriate tools for detecting individual differences in performance in this population. However, the spread of scores (i.e., the absence of large clusters at the floor of each measure) suggests that the measures presented an adequate degree of sensitivity in detecting individual differences across both cognitive and academic domains. Slightly larger floor effects were found for the Reading Comprehension and the Math Fluency Subtraction tasks of the WIAT-III.
A closer examination of raw scores on these tasks also indicates that a higher proportion of participants could not complete them. About a quarter (27%) of the participants meeting inclusion criteria were unable to answer any questions on the Reading Comprehension task. As the task’s easiest item requires reading 8 short sentences, we hypothesize that it is too advanced to assess the reading comprehension skills of emergent readers (i.e., participants who could recognize some individual words but had difficulty reading connected text). The small floor effect observed on this task could also be attributable in part to access skills, as the task places a high demand on verbal skills (e.g., understanding the examiner’s questions, responding verbally). A similar proportion of participants (30%) were unable to answer any of the Math Fluency Subtraction items correctly. In this case, we hypothesize that the distribution of scores likely reflects the skill level of the participants rather than deficits in access skills, as the presentation of this task was similar to that of the Addition task, which had a higher completion rate. Nevertheless, the degree of variability in the scores suggests that, overall, the measures were representative of the participants’ academic capabilities.
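As a concrete illustration of these checks, the sketch below computes the proportion of scores at the floor of each measure together with a rank-based (Spearman) correlation, which makes no normality assumption. The score vectors, sample size, and floor value are hypothetical placeholders, not the study’s data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical raw scores; the values and sample size are placeholders,
# not the study's data.
reading = np.array([0, 0, 3, 5, 2, 0, 7, 4, 1, 6])
math_sub = np.array([0, 1, 4, 6, 0, 0, 8, 3, 2, 5])

def floor_proportion(raw_scores, floor=0):
    """Proportion of participants scoring at the floor of a measure."""
    return np.mean(raw_scores == floor)

print(f"Reading at floor: {floor_proportion(reading):.0%}")
print(f"Math subtraction at floor: {floor_proportion(math_sub):.0%}")

# Spearman's rank correlation avoids the normality assumption violated
# here (small n, restricted range, clustering near the floor).
rho, p_value = spearmanr(reading, math_sub)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```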
An exploratory analysis of changes in academic performance before and after training was also conducted to explore whether the measures could detect changes in performance following training. We note that the objective of these analyses was not to assess the effectiveness of the NeuroTracker in improving academic performance, and, given the small sample size, we do not suggest that the results demonstrate transfer of training to these academic skills. Significant changes in performance from baseline to post-intervention were found for the intervention group on the Reading Comprehension task. We suggest that these results indicate that standardized academic measures can detect individual differences in reading and math skills and can potentially detect changes in performance as a function of the intervention. The use of “in-house” measures that lack reliability and real-world validity has been highlighted as a significant limitation of current cognitive training research [1, 13]. Our results demonstrate that standardized academic measures are an appropriate and feasible means of assessing change in academic performance as a result of cognitive training, even in atypically developing populations. The use of such clinically validated and highly reliable instruments reduces sources of error and improves our ability to accurately assess the efficacy of a cognitive training program.
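For reference, a pre-post comparison of this kind can be run with a paired nonparametric test, which suits the small, non-normal samples described above. The sketch below applies a Wilcoxon signed-rank test to hypothetical baseline and post-intervention scores; both the data and the choice of this particular test are our assumptions for illustration, not the analysis reported here.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical baseline and post-intervention raw scores for a
# treatment group of n = 15; values are illustrative only.
baseline = np.array([2, 0, 5, 3, 4, 1, 6, 2, 3, 0, 4, 5, 1, 2, 3])
post     = np.array([4, 1, 6, 3, 6, 2, 7, 1, 5, 2, 5, 7, 2, 3, 5])

# The Wilcoxon signed-rank test compares paired scores without
# assuming normality; zero differences are dropped by default.
statistic, p_value = wilcoxon(baseline, post)
print(f"W = {statistic:.1f}, p = {p_value:.3f}")
```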
Limitations
A limitation of this study was our inability to examine participants’ performance within levels of difficulty. As a result, it was not possible to assess the adaptability of the task within levels. It therefore remains unclear whether the lowest level of difficulty was representative of the capability of the participants who ended training there, and whether they improved within this level. Nonetheless, the data collected suggest that the task presents a wide range of adaptability, which detected improvements in performance in half of the participants. A further limitation was that the study’s inclusion criteria limited participation to a group of adolescents who presented emergent reading and math skills and who showed some ability to engage with standardized tasks (i.e., could answer questions verbally and showed minimal behaviour problems). While this limits the generalizability of our findings to adolescents with similar characteristics, we believe it was necessary in order to evaluate the feasibility of using standardized measures to assess response to a cognitive training intervention in this underserved subpopulation. Finally, as the study did not include a follow-up assessment, we could not determine whether retention rates would remain high if another assessment were conducted several weeks or months after post-test to investigate the long-term effects of the training program. The post-test results nonetheless provide evidence that a follow-up assessment would be feasible and warranted. The inclusion of follow-up assessments would therefore be of particular importance in future studies, as it has been hypothesized that training benefits may be delayed rather than immediate [1, 13].
Implications for future studies
The information obtained through this feasibility study supports the viability of conducting studies on the use of the NeuroTracker with adolescents with significant intellectual challenges. Specifically, our findings suggest that families are interested in participating in such a study and that this population can comply with the training and engage with the NeuroTracker. Results from this study also support the use of standardized measures to assess academic performance, while identifying limitations that should be addressed in future studies, such as the selection of a reading comprehension task with easier floor items (e.g., tasks that assess reading comprehension by asking the participant to match a word or phrase with a picture). We also found that while participants were all selected from classrooms that group students with similar functioning levels, there was significant variability in their progress throughout training. This emphasizes the importance of investigating, in future studies, the impact of individual factors (e.g., IQ, baseline attentional capacities, co-morbid diagnoses) on the response to intervention.
Objectives centered on identifying individual-difference factors that predict improvement as a function of the intervention are examined in effectiveness studies (as defined by [1]). Although our study collected such baseline and/or non-trained cognitive data, the small sample size of the treatment condition (n = 15) precludes any firm conclusions on the role of individual-difference factors here. Moreover, previous research examining the efficacy of the NeuroTracker found no evidence of an association between improvement on the training paradigm and improvement in the non-trained targeted outcome [25]. As such, the mechanisms underlying the translation of benefits from the training paradigm remain unclear. The findings from the current study encourage future research questions that explore the factors that explain differences in program efficacy (i.e., improvement) and the underlying mechanisms that translate into improvement in targeted outcomes.
Finally, the optimal treatment duration (i.e., the number of training sessions) and/or training dosage (i.e., the amount of time spent training) remains unknown for NeuroTracker training, as is the case throughout the field of cognitive training. While meta-analyses have highlighted that time spent in repeated practice is positively related to the degree of improvement in targeted outcomes [38, 41, 42], the optimal treatment duration and time per training session remain unknown. Thus, research aimed at optimizing treatment duration and dosage is of great importance and would significantly advance the field of cognitive training at large. Such work, however, is best situated within research uncovering the mechanisms of a cognitive training program, rather than within research assessing the feasibility of implementing a cognitive training program and/or the appropriateness of the training paradigm for specific populations, as was done here.