
Review Article

Cognition Assessment Technologies on Deaf People


Coral I. Guerrero-Arenas,

Departamento de Ingeniería en Sistemas Biomédicos, Universidad Nacional Autónoma de México, Ciudad de México, MX

Fernando Uristy Osornio-García

Departamento de Ingeniería en Sistemas Biomédicos, Universidad Nacional Autónoma de México, Ciudad de México, MX


In recent years there has been growing interest in how cognition is processed and consolidated in deaf people. It is known that hearing loss can lead to differences in some executive functions, such as inhibitory control or working memory. This literature review describes executive functions in deaf people and how they could be evaluated through technological devices that complement traditional assessments, such as neuropsychological batteries. From the literature we identified biometric devices, digital and physical interfaces, and software whose goal is to design or adapt technology to assess certain cognitive domains in several ways. The results of the review point to the need to understand a cognitive phenomenon that significantly affects the context of deaf people; moreover, it is relevant as a line of research in the Cognitive Science of Hearing. Using technologies to measure these functions and gain a better understanding of cognition in deaf people may open possibilities for designing or adapting targeted educational or therapeutic strategies.

How to Cite: Guerrero-Arenas, C. I., & Osornio-García, F. U. (2023). Cognition Assessment Technologies on Deaf People. Journal of Cognition, 6(1), 18. DOI:
Submitted on 13 Sep 2022 · Accepted on 17 Jan 2023 · Published on 09 Mar 2023


The executive function (EF) construct encompasses a set of higher-order cognitive abilities such as the organizational, control, and self-regulation skills necessary for goal direction (Cristofori, Cohen-Zimerman & Grafman, 2019; Figueras et al., 2008). It has been reported that people with hearing loss may be at higher risk of difficulties in developing and consolidating some of these executive functions (Mason et al., 2021; Conway, Pisoni & Kronenberger, 2009; Pisoni & Cleary, 2003). There is no agreement yet on whether the consequences stem from hearing loss itself, from the social and personal restrictions underlying the barrier to communication with others (Corina & Singleton, 2009), or from additional factors such as the etiology of hearing loss and sociocultural context.

In particular, some researchers have reported differences in attention skills (Bavelier et al., 2000a; Hauser et al., 2007; Thakur et al., 2019; Tharpe et al., 2008; Zeni et al., 2020), in working memory tasks (Burkholder & Pisoni, 2003; Pisoni & Cleary, 2003), inhibitory control (Mason et al., 2021), and sensorimotor synchronization (Petry et al., 2018; Tranchant et al., 2017). Moreover, there is evidence of some auditory deprivation-dependent brain changes that may be related to functional and adaptive behaviors (Kral et al., 2016). Indeed, it has been hypothesized that hearing loss has effects beyond auditory processing and may affect cognitive functions related to sequentiality and temporality, since hearing is the primary sensory pathway for perceiving high-level sequential patterns that change in time rather than space (Conway et al., 2009). That is, audition leads to the establishment of the temporal ordering that underlies expectation and anticipation: what comes before and what happens next. For example, a subdomain of working memory attends to this temporality, as will be discussed below. It appears that for deaf individuals, especially those who have not acquired language early, it may be more difficult to consolidate functions related to sequential learning.

On the other hand, and in contrast to the above, it seems that once a sign language is consolidated there are gains in subdomains of some functions, for example, in the peripheral attention processes that underlie inhibitory control (Bosworth & Dobkins, 2002). This makes sense given that a visuospatial language such as sign language unfolds spatially, supported more by visual and proprioceptive processes. It is consistent with the Enhanced Hypothesis proposed by Rodger et al. (2021) and Sidera et al. (2017), which states that in the absence of hearing the visual system would take on a more significant function. However, Dye, Hauser & Bavelier (2009) suggest that hearing deprivation is not a causal factor for difficulties in cognitive processes such as attention; these may instead be a consequence of factors unrelated to deafness per se, since it is essential to take into account aspects such as the etiology of deafness, sociocultural factors, the age of acquisition of a communicative system, and even the modality in which it is consolidated, that is, whether it is a sign language or an oral language. Even the results showing differences in some cognitive domains differ between children and adults, because the consolidation of neurodevelopmental milestones seems to depend largely on the acquisition of a communicative system. There is no conclusive evidence on the issue, mainly because of the dynamism that characterizes cognition, which must be considered in technological development.

Technological development has enabled converging lines of research related to the deaf population, with advances over time especially in assistive technologies, particularly those focused on communication. Assistive technologies are defined as developments, software, or products that allow people with disabilities to increase, maintain, or improve some functional capability, for example, prosthetic pieces, hearing aids, screen readers, or wheelchairs, among others. Their main function is to help the person be as independent as possible. Most of the literature linking technology and deafness focuses on these types of devices, especially developments that reduce communication challenges (Landolfi et al., 2022; Imran et al., 2021; Jacob et al., 2021; Albrecht, Jungnicke & von Jan, 2015).

In this paper we focus on technological advances that in recent years have made it possible to adapt devices and methodologies for different populations, allowing a better understanding of the cognitive processes mediated by hearing, rather than on assistive aims or on facilitating communication between deaf and hearing people. Moreover, the design of technological tools with a strong commitment to interaction with digital environments promotes better engagement and motivation in users, making the evaluation process more dynamic compared to traditional approaches.

This review is divided into three parts. The first part shows a brief description of the executive and cognitive functions; in part two we will give an account of the use of technologies such as interfaces and biometric devices, and the development or adaptation of software for the assessment of these functions. Finally, we conclude with a summary and perspectives on future applications and considerations for developing or adapting technological devices that complement the assessment of executive functions in the deaf population.

We present a qualitative literature review to explore whether there are technological devices that evaluate executive functions, or at least one of them, as alternatives or complements to neuroimaging techniques or neuropsychological batteries, especially given the communication barrier explained above. For this review, we excluded assistive or teaching devices, because their goal is not to evaluate these competencies but to provide physical or communicative adaptation or hearing aid. We also excluded studies involving cochlear implants or other hearing aids, because it is documented that cognitive processes usually differ in people who use hearing aids (van Wieringen et al., 2019; Hua et al., 2017).

Search strategy and selection criteria

We searched PubMed and Web of Science with terms such as “deaf”, “hearing loss”, “hard hearing”, “technology devices”, “executive functions”, and “cognition assessment”. Excluded terms were “aid hearing”, “cochlear implant”, “assistance device”, “therapy”, “education”, and “rehabilitation”, for reports published between January 2011 and November 2022. Our main focus was on studies that assessed one or more executive functions or cognitive skills in deaf people through biomedical devices or other technology, excluding neuroimaging.

This was an example of a formula used in research:

deaf people AND cognition AND technology NOT hearing aid NOT cochlear implant NOT education NOT therapy

We did not include articles published in languages other than English, nor reviews, conference papers, or articles not available in full text. We decided this because most of the research or interventions are focused on teaching sign language or other linguistic topics, or on hearing restoration or assistance.
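The boolean formula above can be assembled programmatically before submission to a database search form; the following minimal Python sketch (the helper name and term lists are illustrative, not part of the review's methodology) reproduces the example query:

```python
def build_query(include, exclude):
    """Join inclusion terms with AND and append a NOT clause per exclusion."""
    query = " AND ".join(include)
    for term in exclude:
        query += f" NOT {term}"
    return query

# Illustrative term lists matching the example formula in the text.
include = ["deaf people", "cognition", "technology"]
exclude = ["hearing aid", "cochlear implant", "education", "therapy"]
print(build_query(include, exclude))
```

The same string could then be pasted into the PubMed or Web of Science advanced-search box, or sent through a programmatic search API where one is available.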

1. Cognitive and executive functions

The constructs of executive and cognitive functions have been coined from different perspectives over time. Lezak (1982) distinguished the two based on what was intended to be known about each. To describe cognitive functions, the author emphasized how much knowledge or skill the person possessed; for example, which skills remained intact or were impaired, or how well they performed one task compared to another. Executive functions, on the other hand, are related to whether people do something or not, i.e., they focus on execution and what is related to it. Barkley (2012) contributes to this meaning by indicating that executive functioning underlies how a person achieves a goal efficiently. Lezak (1982) indicates that we should ask, for example, how well the patient maintains a performance rate, how consistently and effectively they self-correct, and how responsive they are to changes in the demands of the task.

Recently, executive functions (EFs) have been described as a series of mental processes necessary to be attentive, solve problems, reason, have self-control, and persist in a task (Blankenship et al., 2019; Kral et al., 2016). According to some authors, EFs are framed in three central components from which other dimensions are derived: working memory, inhibitory control, and cognitive flexibility (Diamond, 2020; Diamond, 2013); Cristofori, Cohen-Zimerman & Grafman (2019) also include planning, reasoning, and problem-solving. For this review, we consider the three central components and the subparts or dimensions associated with each of them, especially attention, which is framed within inhibitory control and seems to be the most studied and controversial function in deaf people.

Inhibitory control is required to regulate behaviors, thoughts, and emotions, and to adapt actions by emitting or withholding a response. It is therefore strongly linked to cognitive control and working memory (Cristofori et al., 2019). This ability supports attention because inhibiting external and internal factors is necessary to sustain attention toward specific goals. Attention is understood as the ability to filter, select, and focus on various elements of the environment (Dye, Hauser & Bavelier, 2008). To this effect, Hauser, Lukomski & Hillman (2008) indicate that inhibition is one of the major cognitive skills because it models motor and behavioral control. It has recently been theorized that inhibitory control is not a single unit but is composed of a series of functions, among which two main ones are differentiated: response inhibition, which refers to the ability to control impulsive behavior to prevent (inhibit) motor and verbal responses, and interference suppression, which involves working memory and refers to the ability to suppress interfering information (Daza González et al., 2021).

Working memory (WM) involves the ability to retain information in the short term, even when it is not perceptually present, as well as the ability to manipulate that information before it passes into long-term memory (Diamond, 2020; Baddeley, 2017). Two types of WM are generally distinguished: visuospatial and verbal. WM is critical for making sense of anything that unfolds over time, for that always requires holding in mind what happened earlier and relating that to what comes later (Diamond, 2013). Reasoning is closely linked to WM because one needs to keep the information online long enough to manipulate it and act accordingly.

Lastly, cognitive flexibility is built on WM and inhibitory control and is defined as the ability to adapt in the face of environmental change and to generate new ideas that drive innovation and promote growth and discovery (Cristofori et al., 2019; Badre and Wagner, 2006). Cognitive flexibility is associated with the ability to adapt to changing conditions in the environment, which is why it requires control and manipulation of information to devise new strategies.

Some researchers have reported differences in EFs in deaf people, in particular in attention skills (Zeni et al., 2020; Thakur et al., 2019; Tharpe et al., 2008; Hauser et al., 2007; Bavelier et al., 2000b), the execution of working memory tasks (Mason et al., 2021; Cardin et al., 2018; López-Crespo et al., 2012; Burkholder & Pisoni, 2003; Pisoni & Cleary, 2003), and inhibitory control (Mason et al., 2021). It seems that some executive skills are so close to the linguistic domain that their consolidation is delayed (Figueras et al., 2008), perhaps because deaf people often acquire sign language late, in contrast to oral language, which develops from birth. Often, deaf children do not have access to a language at birth because the majority are born into a hearing family or because the educational system does not provide effective early intervention in sign language (Krebs et al., 2021). Thus, the sociocultural context also seems to affect the development and consolidation of some EFs, especially when a deaf child is born into a hearing family environment.

On the other hand, EFs seem to be linked to the fact that, when there is sensory deprivation, the remaining senses appear to become more accurate, which is consistent with the Sensory Compensation Hypothesis (Pavani & Bottari, 2012); in the case of deafness, vision and touch become more sensitive to inputs from the environment. From the neurocognitive perspective, this is due to the brain’s ability to reorganize connectivity toward areas not affected by the sensory loss; in this case, there is greater recruitment of visual and tactile areas, although this process of neuroplasticity depends on variables such as the environment, the age of onset of deafness, and even interindividual factors that influence brain adaptation to sensory loss (Kral et al., 2016). The authors emphasize the possibility that, as the brain is a dynamic system that bases its development on the interplay between neural activity and stimulation from the environment, a sensory loss could affect, positively or negatively, domains beyond hearing. This would be concordant with an enhancement in attention skills, for example. The disturbance in inhibitory control and working memory seems to rest on hearing feedback, which is consistent with the Auditory Scaffolding hypothesis proposed by Conway et al. (2009), in which some cognitive domains are grounded in sound perception, especially those based on temporal and sequential processing, like working memory.

2. Use of technology to assess cognitive domains in Deaf people

One of the most common methods to assess EFs is through neuropsychological tools. However, in the case of deaf people, there are two main factors to take into consideration: first, the communication barrier, since most batteries are designed for hearing people, and second, the delay in language acquisition as a risk factor for the consolidation of cognitive functions. A systematic review by Vázquez Mosquera (2021) indicated that there are no rigorous instruments with special adaptations to assess deaf children. This could be one reason some authors combine technology with the assessment of cognitive functions. We found research designed for deaf children, the AWARD Neuropsychological battery developed by González et al. (2011), which explores cognitive areas through tasks executed in a web application where the language can be configured according to user preference; in their research, this could be Lengua de Signos Española (LSE) or oral language. This software tool builds a cognitive profile of deaf children by implementing a battery of neuropsychological tests using adaptive web technology, and its main utility is that neuropsychology professionals can use it to minimize communication barriers. It should be noted that, although technological devices are accurate in detecting and evaluating cognitive functions, the design of tasks/paradigms should be sensitive enough to detect changes in what is to be evaluated, considering that no brain function occurs in isolation but rather as the sum of interactions among various processes.

Considering that EFs in deaf people are less explored than in hearing people, we describe the studies found so far in this review. The sections are divided according to the classification of assessment tools used by the authors of the studies. Each study, however, had a different population, in terms of adults and children, so we tried to limit them to deaf people without cochlear implants, although we emphasize that results cannot be generalized to the entire deaf population, since its heterogeneity must be considered. Within the literature review, we found a wide variety of technological devices that can be used to assess EFs; most were not necessarily designed for the deaf population but rather adapted from traditional use in research on other populations, with paradigms adapted, for example, by interpreting the instructions into a sign language. For this reason, the studies shown in this review include technological developments that can be classified as biometric devices, physical and digital interfaces, and video games. This review attempts to link the use of technological devices to assess these functions and to give a prospective scenario for future developments and applications. The papers reviewed are listed in Table 1.

Table 1

List of reviewed articles from 2011 to 2022.

Year | Cognition evaluated | Device | Reference
2011 | Visual attention | Eye tracker | (Watanabe et al., 2011)
2012 | Working memory | Software/interface | (López-Crespo et al., 2012)
2013 | Visual perception | Software/interface | (Barca et al., 2013)
2014 | Attention and cognitive control | Software/interface | (Dye & Hauser, 2014)
2015 | Visual attention | Eye tracker | (Heimler, van Zoest, Baruffaldi, Rinaldi, et al., 2015a)
2015 | Visual attention | Eye tracker | (Heimler, van Zoest, Baruffaldi, Donk, et al., 2015b)
2018 | Visual attention | Eye tracker | (Worster et al., 2018)
2018 | ANS/inhibition and working memory | Software/interface | (Bull et al., 2018)
2019 | Visual attention | Software/interface | (Thakur et al., 2019)
2020 | Computational thinking | Software/interface/vibrotactile device | (Cano et al., 2020)
2020 | Visual attention | Eye tracker | (Krejtz et al., 2020)
2020 | Visual attention | Eye tracker | (Zeni et al., 2020b)
2021 | Emotional arousal and workload cognition | Eye tracker | (Tsou et al., 2021)
2021 | Visual attention | Eye tracker | (Bonmassar et al., 2021)
2021 | Inhibitory control | Software/interface | (Daza González et al., 2021)
2022 | Visual attention and inhibitory control | Software/interface | (Dye & Terhune-Cotter, 2022)

Note: The table shows the type of cognition evaluated in each study and the technological device used.

2.1 Biometric assessment: Eye tracking

According to the literature consulted for this review, eye tracking is one of the most widely used tools for evaluating processes related to visual function, particularly visual attention. The eye tracker is a biometric sensor technology for recording eye movement and gaze location. Eye tracking is the process of determining the point of gaze, the point where the user is looking at a particular visual stimulus (Lim, Mountstephens & Teo, 2020). The data obtained can reveal how participants interact with stimuli at a global or specific level (Carter & Luke, 2020). Gaze is the externally observable indicator of human visual attention and can be used to assess cognitive function and performance (Zeni et al., 2020; Skaramagkas et al., 2021; Krafka et al., 2016). One advantage of eye tracking is that, depending on the task presented, eye movements can be voluntarily controlled and evaluated for speed and accuracy. Its use within the cognitive area is possible because visual orientation relates the oculomotor system to cognitive and neurological processes (Carter and Luke, 2020), so accurate inferences can be made about observable behavior and mental state during visual tasks through estimates of eye and pupil behavior. There are eye-tracking devices with monocular technology that acquire information from only one eye. Theoretically, this should be sufficient, since the movements of one eye mimic those of the other, i.e., the eyes tend to move together in the same direction, so knowing the position of one eye also predicts the position of the other. However, binocular acquisition is usually preferred because this type of recording reduces errors in precision and accuracy, since the on-screen positions of both eyes are averaged.
On the other hand, there are eye-tracking systems in which infrared sources illuminate the eye while a series of cameras, also infrared, capture its reflection, calculating its position relative to the physical world. This works because infrared light is reflected by two parts of the eye: the retina, which helps to calculate the center of the pupil (pupil reflection), and the cornea (corneal reflection) (Brunyé et al., 2019). The corneal reflection usually does not change when the eyes move, while the pupil reflection does.
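The pupil-reflection/corneal-reflection principle described above is typically turned into on-screen gaze coordinates through a calibration step, in which the participant fixates known targets and a mapping is fitted from the pupil-minus-glint displacement to screen position. The following Python sketch shows the idea with a simple two-point linear calibration per axis; all numbers (camera-pixel displacements, screen targets) are invented for illustration and do not come from any study in this review:

```python
def calibrate_axis(pg_a, screen_a, pg_b, screen_b):
    """Two-point linear calibration for one screen axis: returns (gain, offset)
    mapping a pupil-minus-glint displacement (camera px) to screen position (px)."""
    gain = (screen_b - screen_a) / (pg_b - pg_a)
    return gain, screen_a - gain * pg_a

# Hypothetical calibration fixations: displacement value -> known screen target.
gx, bx = calibrate_axis(-0.8, 160, 0.8, 800)   # horizontal axis
gy, by = calibrate_axis(-0.5, 120, 0.5, 360)   # vertical axis

def gaze_point(pg_x, pg_y):
    """Map a new pupil-glint displacement vector to estimated screen coordinates."""
    return gx * pg_x + bx, gy * pg_y + by

print(gaze_point(0.0, 0.0))  # approximately (480.0, 240.0), the calibrated centre
```

Commercial trackers use richer models (polynomial mappings, per-eye calibration, head-pose compensation), but the calibration-then-mapping structure is the same.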

2.1.1 Eye-tracking and cognitive functions

It appears that hearing loss may shape visual abilities that are reflected in processes such as attention, defined both as the general alertness needed to interact with the environment (Lindsay, 2020) and as the ability to focus perception on one stimulus or group of stimuli while suppressing those that are irrelevant (Diamond, 2013). According to some authors, deaf people seem to use other visual resources or strategies to appropriate and engage with the environment. For example, they tend to fixate their gaze on or near the eyes, whereas hearing people prefer to fixate at or near the mouth (Heimler et al., 2015a). The development of visual attention and gaze tracking has been a subject of study in the neuropsychological field because both are closely related to attentional control and social cognition, as they account for how visual resources are redistributed to interact with and understand the surrounding world. This is clearly seen in deaf infants of hearing parents, where attentional resources are redistributed by having to focus on several stimuli at once, for example, while caregivers point to something and name it. Deaf infants have to focus on the caregiver, then on what is pointed to, and then on the caregiver again to appropriate that external element. Hearing infants, by contrast, only have to look at the object while the auditory input is present. This can be considered a precursor to the consolidation of attentional control, which refers to the control of perception in order to focus mental resources on specific things, so attention works together with inhibitory control. Visual attention is understood as the process required to isolate relevant from non-relevant information within a visual scene (Skaramagkas et al., 2021). Within this visual processing, two main attentional functions are evaluated: overt attention and covert attention.
In the former, there is a selective orientation toward an object or spatial location, and the observable response is an eye movement directed in some direction. Covert attention involves a change in the focus of attention without any change in eye position or movement. Blair and Ristic (2019) indicate that covert attention evidences how mental resources are aligned with the target of the response, whereas overt attention recruits oculomotor resources toward that target, manifesting in an observable response. Within these overt and covert attention processes lies the research of Bonmassar et al. (2021), who observed that the interaction between both types of attention differs in people with deafness, in particular in spontaneous overt eye-movement performance in the presence of social and non-social central cues. The authors observed that deaf participants were slower to respond than hearing participants. Bottari et al. (2012) complement this by indicating there may be a neuroanatomical and functional reorganization underlying eye control during covert attention tasks. They also observed that voluntary and reflexive eye movements were more prominent in deaf than in hearing people.

Zeni et al. (2020) used a context of complex naturalistic scenes and observed that deaf people prioritize objects within the images, which suggests that attentional differences may concern objects and not only the spatial or peripheral context, as previously described. This is consistent with the work of Heimler et al. (2015b), who observed that deaf people have greater control in shifting their gaze when monitoring visual space while immersed in a social situation. This may be because they require a balance between maintaining spatial attention on the faces of their interlocutors, to ensure better communicative and social quality, and effectively monitoring the environment. The authors especially emphasize that this control may depend on the experience of acquiring sign language, which makes sense given that sign language is a visuospatial system that unfolds in a signed space. Given the above, it is important to highlight the design of paradigms for studying attentional processes. The aforementioned studies used complex contexts which, although susceptible to involving other cognitive resources, reflect how attention occurs in everyday situations. Another function of attention concerns the allocation of visual resources, which is measured through visual parameters such as fixation metrics and saccades. Recording eye movements can thus provide information about recognition processes in social situations. For the study of cognition in deaf people this is of prime importance, because from these parameters it is possible to identify differences in social dynamics when recognizing facial expressions. For example, in emotion recognition there is greater prominence of visual allocation to the eyes and mouth. Krejtz et al. (2020) examined visual attention while participants encoded emotions from facial expressions.
They found greater accuracy in detecting expressions related to happiness, although there were no significant differences between deaf and hearing participants. What they did find was a tendency for deaf participants to be faster at recognizing emotional expressions, which they hypothesize may be related to more prominent reactivity to movement within the peripheral visual field. It was also observed that deaf people directed less attention to the mouth area and more to the eyes. A dynamic gaze pattern from ambient to focal was even observed, especially during the detection of happiness-related stimuli. In line with these authors, Watanabe et al. (2011) observed that deaf participants also had longer fixation times on the eye and nose areas in emotion-recognition tasks, confirming that eye contact is a crucial component in processes related to emotions, social skills, and world appropriation.
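Fixation and saccade metrics of the kind mentioned above are commonly derived from raw gaze samples with a velocity-threshold classifier (the standard I-VT algorithm): inter-sample gaze velocities below a threshold are labelled fixations, faster ones saccades. A minimal Python sketch, with an invented gaze trace and an illustrative threshold value:

```python
import math

def classify_ivt(x, y, t, velocity_threshold=30.0):
    """Velocity-threshold (I-VT) classification: inter-sample gaze velocities
    below the threshold (deg/s) are labelled 'fixation', faster ones 'saccade'."""
    labels = []
    for i in range(1, len(t)):
        # Angular distance moved between consecutive samples, over elapsed time.
        velocity = math.hypot(x[i] - x[i - 1], y[i] - y[i - 1]) / (t[i] - t[i - 1])
        labels.append("fixation" if velocity < velocity_threshold else "saccade")
    return labels

# Toy trace sampled at 100 Hz: steady gaze, a rapid 10-degree shift, steady gaze.
t = [i * 0.01 for i in range(10)]
x = [0, 0.05, 0.1, 0.1, 5.0, 10.0, 10.05, 10.1, 10.1, 10.15]
y = [0.0] * 10
print(classify_ivt(x, y, t))
```

From the resulting labels one can compute the metrics reported in the studies above, such as fixation count, mean fixation duration, and saccade amplitude; production pipelines add noise filtering and minimum-duration constraints on top of this basic rule.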

It is interesting to return to the differences in the allocation of attention between adults and children. As mentioned, deaf adults seem to spend more time looking at their interlocutors, while children interpret social situations by allocating their visual resources first, and for a shorter time, to the person’s face and then to the person’s body; i.e., they divide their attention to process emotions more effectively within a social context (Tsou et al., 2021). This is consistent with the arguments of Corina & Singleton (2009), who state that deaf children of hearing parents are at a disadvantage in consolidating their attentional processes because, lacking auditory input, they necessarily must divide their attention between their caregivers and the outside world; i.e., the perceptual and cognitive challenge is greater than for a hearing child, as mentioned above. However, Worster et al. (2018) observed no differences between deaf and hearing children in speech-reading tasks, so they assume that hearing loss does not necessarily affect gaze behavior. In agreement with the studies of Heimler et al. (2015a) and Watanabe et al. (2011), Worster et al. (2018) observed that children’s gaze patterns were like those of adults, alternating between eye-mouth-eye gaze. This reinforces the argument that, for deaf people, the eye region contains more socially relevant information than other facial elements.

As can be observed, biometric tools such as eye trackers for measuring functions like visual attention seem to be the most used technology, perhaps under the assumption that deaf people use other visual mechanisms, which may be mediated by several factors such as the acquisition of a sign language, a neuroplasticity effect due to sensory deprivation, and even the sociocultural context in which they develop. However, one limitation is that when people know their eyes are being monitored they may behave differently; the use of eye trackers may unconsciously alter eye movements, yielding different data (Bonmassar et al., 2021). On the other hand, mapping these visual strategies through eye tracking allows us to elucidate the visual function underlying other abilities, e.g., social skills or others related to reading and writing.

2.2 Digital and Physical Interfaces

For this review we found other types of developments that are usually used in research and have been adapted for other populations. In this section we consider interfaces, which can be defined as any medium, whether physical or virtual, that allows interaction between a subject and a machine (Pérez-Ariza & Santís-Chaves, 2016). The area specializing in the development of interfaces is called Interaction Design. Norman (2013) defines it as the branch of design that focuses on understanding the nature of the interaction between people and technologies. The development of technological tools has provided multiple possibilities to evaluate individuals’ interactions and behaviors with interfaces in order to improve them, enhancing the user experience. For this, software and hardware designs must comply with usability components, both in the functional development of the system itself and in how people interact with it.

Within the classification of interfaces are haptic interfaces, described as devices that allow interaction/communication between person and machine through touch, which may or may not be accompanied by auditory or visual stimuli. Their main characteristic is that they are activated in response to the user’s motor action (Hayward et al., 2004). Haptic interfaces are categorized as mechatronic devices because they take advantage of mechanical signals for this exchange of communication between people and machines (Hayward et al., 2004). Among the simplest haptic interfaces are keyboards, buttons, and others more sophisticated such as gloves and exoskeletons.

The following subsections follow a categorization of the digital and physical interfaces found for this review. We describe the use of software and interfaces adapted for the assessment of the specific domains of attention and working memory, followed by the use of games as an assessment tool.

2.2.1 Digital and Physical Interfaces and EFs

Dye and Hauser (2014) used the Gordon Diagnostic System (GDS), a portable microprocessor-based continuous performance test that assesses inattention and impulsivity-based behaviors (Foster, 2011). Although the GDS was initially developed to assess children and adolescents with ADHD, the literature reports that children with hearing loss have difficulties maintaining attention and cognitive control (Mason et al., 2021). One characteristic of the GDS is that it evaluates through Continuous Performance Tasks (CPT), defined as continuously changing streams of stimuli in which randomized target stimuli appear; participants must respond to this specific stimulus (Roebuck et al., 2016). Dye and Hauser evaluated sustained attention, selective attention, and cognitive control. Earlier findings that deaf children perform worse in attention and cognitive control tasks, perhaps contingent on late acquisition of a sign language, suggested extending the research to deaf children of deaf parents, who can be inferred to be exposed to a natural language from birth: in a Deaf family a sign language is the natural language of communication, just as a hearing family communicates orally. A relevant finding of this study was that the participating children were profoundly deaf and had had access to a sign language from their deaf parents, and no significant differences were observed between hearing and deaf children, suggesting that auditory deprivation alone is not sufficient to produce the reported differences in attention outcomes. Dye and Terhune-Cotter (2022) subsequently conducted a 3-year cohort study of deaf children to determine whether sustained selective attention or inhibitory responding changed over time, and whether any changes were influenced by language and/or deafness.
On this occasion, the authors found a significant correlation: learning English was more closely related to better selective attention processing, while acquisition of American Sign Language (ASL) was more closely related to better inhibitory control. They speculate that this is due to two factors: the timing of exposure and the nature of the acquisition process. ASL is acquired through natural exposure to meaningful communicative interactions, i.e., an acquisition process, whereas English is learned through therapy, generally after critical periods of neurodevelopment. Acquisition occurs spontaneously and unconsciously within a natural context of language use (Krashen, 1982), while learning a language is characterized by conscious knowledge of rules and their use, which is not reflected in fluent production.

Dye and Terhune-Cotter conclude that there is a close relationship between language and cognitive skills, especially selective attention and inhibitory responding. The authors indicate that they used the vigilance form of the Gordon Diagnostic System and recommend complementing it with other experimental approaches, which agrees with Foster (2011), who also recommended that, to validate results obtained from the GDS, it be combined with other assessment resources.
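To make the continuous-performance paradigm concrete, the following is a minimal sketch of how a vigilance-style CPT stream might be generated and scored. This is not the actual GDS software; all function names, stimulus letters, and parameters are hypothetical choices for illustration.

```python
import random

def make_stream(n_trials, target="X", p_target=0.2, seed=0):
    """Generate a pseudo-random stream of letter stimuli containing rare targets."""
    rng = random.Random(seed)
    distractors = [c for c in "ABCDEFGH" if c != target]
    return [target if rng.random() < p_target else rng.choice(distractors)
            for _ in range(n_trials)]

def score_cpt(stream, responded, target="X"):
    """Score a CPT run.

    `responded` is the set of trial indices on which the participant pressed
    the button. Hits = responses to targets; false alarms = responses to
    non-targets (an impulsivity index); misses index inattention.
    """
    hits = sum(1 for i, s in enumerate(stream) if s == target and i in responded)
    misses = sum(1 for i, s in enumerate(stream) if s == target and i not in responded)
    false_alarms = sum(1 for i, s in enumerate(stream) if s != target and i in responded)
    return {"hits": hits, "misses": misses, "false_alarms": false_alarms}
```

In a real administration each stimulus would be displayed for a fixed duration and responses time-stamped; the scoring logic, however, reduces to the hit/miss/false-alarm counts above.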

One of the functions that seems most compromised in deaf children is working memory (WM). As mentioned, it is a type of short-term memory whose goal is to retain and manipulate information for a short time before it disappears and which, according to Baddeley’s Working Memory Model, comprises two subsystems based on sensory modalities: 1) the phonological loop, underlying the auditory system, and 2) the visuospatial sketchpad, modulated by the visual and spatial system. Deaf people might be expected to have better strategies for visuospatial tasks, mainly on the basis of the enhancement (or sensory compensation) hypothesis, which holds that when one sense is lost, other sensory modalities are strengthened to compensate. In the case of deaf people, the visual system would be expected to recruit greater resources for interacting with the environment, perhaps because of the acquisition and use of a visuospatial gestural language that develops within a signing space.

López-Crespo et al. (2012) used a computerized version of a delayed matching-to-sample (DMTS) task for visual WM assessment to estimate whether different modes of communication (sign language, oral, or both) changed the resources used to execute visual WM tasks compared to hearing children. Contrary to expectations, deaf children performed worse than hearing children, even when the tasks were spatially based. As the authors note, this does not imply a delay in WM but perhaps another type of functional processing. The study highlights the importance of including communication mode (sign language, oral, or both) as a methodological variable to obtain reliable results. More recently, Daza González et al. (2021) assessed inhibitory control using a computerized Stroop task and a short version of the Attention Network Test for children. The first aim was to observe whether the frequency of inattention, impulsivity, and hyperactivity behaviors, as well as disruptive, aggressive, and antisocial behaviors, was higher in a deaf group than in a matched hearing group; the second was to determine whether any behavioral differences between deaf and hearing children could be explained by deficits in inhibitory control. The motivation is the idea that deaf children show more behaviors related to inattention and impulsivity, which can represent cognitive and emotional difficulties; however, the research showed no significant differences between deaf and hearing children on inhibitory control tasks. The authors conclude that the behaviors presented by deaf children are adaptive visual coding strategies for obtaining information from their environment.
These visual strategies require deaf children to distribute attentional resources not only to the central visual field but also to the periphery, shifting attention to the environment more frequently than hearing people do, which may be perceived by others as inattention or hyperactivity (Daza González et al., 2021).
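The computerized Stroop paradigm mentioned above boils down to comparing response times on congruent versus incongruent trials. The sketch below is an illustrative scoring routine, not the instrument used by Daza González et al.; the trial representation and field names are assumptions.

```python
from statistics import mean

def stroop_interference(trials):
    """Compute the classic Stroop interference score: mean reaction time on
    incongruent trials minus mean reaction time on congruent trials,
    using correct responses only.

    Each trial is a dict such as:
      {"word": "RED", "ink": "blue", "rt_ms": 612, "correct": True}
    A trial is congruent when the printed word names its own ink colour.
    A larger score indicates weaker inhibitory control over the prepotent
    word-reading response.
    """
    congruent = [t["rt_ms"] for t in trials
                 if t["correct"] and t["word"].lower() == t["ink"].lower()]
    incongruent = [t["rt_ms"] for t in trials
                   if t["correct"] and t["word"].lower() != t["ink"].lower()]
    return mean(incongruent) - mean(congruent)
```

Filtering on `correct` before averaging matters: error trials typically have atypical reaction times and would bias the interference estimate.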

Other functions are also consolidated from the manipulation of information over short periods and from the apprehension of abstract concepts, for example, the approximate number system (ANS), a cognitive process in which magnitude estimates are made from abstract representations. The ANS draws on general domains such as working memory and inhibition. Cai et al. (2018) and Bull et al. (2018) assessed the ANS using Superlab 4.0 software and a button-press interface. They observed that a deficit in general domains leads to lower performance in ANS task execution, more evident in deaf children than in hearing children. One reason may be that ANS tasks require several skills, such as WM (retention of spatial and temporal information), the allocation of selective visual resources, and inhibitory control; the authors indicate that weaker mastery of these skills results in lower ANS performance.
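A typical ANS assessment presents two numerosities and asks the participant to pick the larger; difficulty is governed by the ratio between them (quantities closer in ratio are harder to discriminate). The following sketch, which assumes a dot-comparison design rather than reproducing the Superlab tasks of Cai et al. or Bull et al., generates such trials and scores accuracy per ratio.

```python
import random

def generate_ans_trials(n_trials, ratios=(0.5, 0.67, 0.75, 0.8), seed=0):
    """Generate numerosity-comparison trials at several small/large ratios.

    Smaller ratios (e.g. 8 vs 16) are easier than ratios near 1 (e.g. 8 vs 10),
    following the ratio-dependence characteristic of the ANS.
    """
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        ratio = rng.choice(ratios)
        small = rng.randint(6, 12)
        large = max(small + 1, round(small / ratio))  # guarantee large > small
        left_is_large = rng.random() < 0.5
        trials.append({"left": large if left_is_large else small,
                       "right": small if left_is_large else large,
                       "ratio": ratio})
    return trials

def accuracy_by_ratio(trials, choices):
    """Proportion correct per ratio; `choices` gives 'left'/'right' per trial."""
    stats = {}
    for t, choice in zip(trials, choices):
        correct_side = "left" if t["left"] > t["right"] else "right"
        hits, total = stats.get(t["ratio"], (0, 0))
        stats[t["ratio"]] = (hits + (choice == correct_side), total + 1)
    return {r: hits / total for r, (hits, total) in stats.items()}
```

Breaking accuracy down by ratio is the point of the design: a flatter accuracy curve across ratios indicates a less precise approximate number system.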

As can be seen, devices based on simple physical interfaces, such as a button or a keyboard, are the most common for evaluating tasks in children, perhaps because interaction with the device is so simple. Even so, such designs must meet usability criteria. Usability measures the ease of interaction between the user and the interface (Chanchí, 2019) and is based on a set of attributes described by Nielsen (2013): useful, learnable, memorable, effective, efficient, and desirable for the user. One growing field of research that meets these criteria is games, described in the following section.

2.3 Games as an Assessment Tool (Gamification)

Another way to evaluate is through games, especially when the population is children. A trending term is gamification, the use of games within a ludic context in which participants engage with the cognitive tasks presented through a combination of intrinsic and extrinsic motivation; by integrating challenges and positive feedback, the need for competition is satisfied (Khaleghi et al., 2021). Interacting with the environment requires processing in which multisensory interactions are shaped at neural, behavioral, and perceptual levels (Murray et al., 2016). Technology-based games can be used as a vehicle to assess the cognitive skills or competencies involved in the game, not to assess the game itself (Bellotti et al., 2013). The same authors describe another category, so-called serious games, designed to be educational while remaining fun and attractive; their review indicates that computational games are effective in some therapeutic fields and are related to the development of emotional, cognitive, and perceptual skills. A term that integrates these skills is Computational Thinking, which comprises cognitive skills based on a constructionist approach, i.e., users build their knowledge from an understanding of their context (Cano et al., 2020). Digital strategies provide the opportunity to interact in different contexts, addressing and building new possibilities of mental processes (Ferreira et al., 2021) that go hand in hand with a series of steps that give certainty to problem solving.

One game designed and developed for deaf children was based on the authors’ own methodology, MECONESIS (Spanish acronym for MEtodología para CONcepción de juEgos Serios para nIños con discapacidad auditiva: methodology for the conception of serious games for children with hearing impairment) (Cano et al., 2020). The game simulates a real environment with digital interfaces and physical elements operating simultaneously, including a board that provides vibrotactile and visual feedback at the same time. The digital interface was a mobile application through which children interacted via a QR code and were instructed on the steps to follow. According to the authors, one of the challenges for children’s development of and engagement with the game was the interaction between the physical and digital devices. This work falls into two of the categories we propose for this review: the use of digital interfaces, and the vibrotactile board that gives feedback to users as part of the game.

The combined use of games with other types of devices can be a viable option for designing prototypes to assess cognitive functions, given the advantages this offers. Khaleghi et al. (2021) even offer a framework to guide the gamification design process for the assessment of cognitive tasks. Together with the background presented in this review, this framework can serve as a first approach for future research with the Deaf population.

3. Discussion

Interest in research on cognition in deaf people has increased in recent years, not only from the perspective of assistive, recognition, or educational devices but also for research purposes. The evaluation of cognitive processes in deaf people allows inferences from different perspectives:

From cognition. Evaluating different domains makes it possible to gain deep knowledge about the differences in cognitive processes derived from hearing, framed within the Cognitive Hearing Science proposed by Arlinger et al. (2009), which takes an interdisciplinary approach between cognition and hearing. Although it may seem contradictory, from this field it is possible to rethink and rephrase approaches and research aimed at deaf people.

One limitation present in the reviewed articles is the heterogeneity of the populations studied, even between children and adults. The different etiologies and thresholds of hearing loss do not allow generalization of the results into precise claims about the development of executive functions. Although deafness is a risk factor for the development and consolidation of these functions, there is no consensus on whether deafness per se affects cognitive functioning or whether the effect derives from the sociocultural context, influenced by reduced interaction with the environment due to communication barriers; it has even been reported that deaf people show fewer prosocial behaviors than their hearing peers.

This is also noted by Hauser, Lukomski & Hillman (2008), who argue that most studies with deaf people take insufficient care to select participants who are truly representative of the deaf community, so variables such as the age of sign language acquisition, sociocultural context, and etiology of deafness should be considered.

The evidence shows that there are differences between deaf and hearing people, and that these differences do not imply disadvantages or deficits in the cognition of deaf people; rather, deaf people seem to exhibit other strategies for appropriating the world, such as attentional or visual resources, as Daza González et al. (2021) proposed. In this sense, we agree with the reflection of Marschark & Hauser (2008) that there is a misconception about the cognition of deaf people and about what it is to be Deaf, which leads to a series of barriers in areas such as work, school, social life, and even the family.

One of the EFs that seems hardest to consolidate is working memory, especially in deaf children. Understanding that differences in this function may underlie learning in mathematics, reading, motor coordination, abstract concept formation, and categorization is fundamental to improving academic achievement, for example. Auditory experience provides temporal patterns that help develop these skills, so Deaf children would be expected to show difficulties consolidating areas related to sequential processing, although this does not hold for all people. On the other hand, several authors describe advantages in visual processes, particularly peripheral visual attention, with emphasis on people who have already acquired a sign language, perhaps because visuospatial languages develop in space. However, this visual advantage may create difficulties for deaf children: not having developed inhibitory control, they are commonly labeled as distractible or inattentive, although the cognitive evidence indicates otherwise. It is imperative to note that one variable that stands out as a differentiator in the development of EFs is the early acquisition of a sign language. In many contexts, deaf children are the first generation of deaf people in a hearing family, which hinders the acquisition of a mother tongue in the first years of life, so it is not until a later age that they gain access to sign language. At that point we would no longer speak strictly of language acquisition, which occurs spontaneously in a natural context (Krashen, 1982), but of language learning, where there is an explicit search for the linguistic forms to be used that will give access to communication resources (Muntzel, 1995).

Understanding the basis of the cognitive dimensions of the deaf population allows us to rethink conceptions about their abilities and to design educational strategies that enhance their advantages and support the domains of greater difficulty. Although neither the results nor the inferences are generalizable to the entire deaf population, they can be a starting point for lines of research and action.

From technology. The use of digital devices for evaluating different cognitive processes appears to be increasing. Among biometric devices, eye trackers seem to be the most common, based on the premise that deaf people use different visual strategies from hearing people, owing to the expanded use of space in visuospatial communication. This, however, seems to be mediated by the consolidation of a communicative system, so results appear to differ in people who have not consolidated one. This should be considered in the methodological design to obtain the greatest possible homogeneity despite participant variability. One advantage of using technological devices to assess EFs is that they can provide more precise information about domains that may not be evident in traditional observation. This does not imply that assessment methods such as neuropsychological batteries should be replaced, but rather that they can be complemented with other types of data that provide additional information for a better understanding of cognition. Moreover, several authors note the use of a single sensory modality in paradigm design, so integrating other sensory resources could provide more information about the processing of different cognitive functions. The current use of gamification seems to open possibilities for joining learning and evaluation through playfulness, which brings its own usability challenges. As a future direction, a subsequent review could integrate virtual reality applications.

The development of technologies has grown and diversified toward different needs of the deaf population: teaching tools that remove communication barriers through recognition devices or interfaces; assistive technologies based on electronic devices such as hearing aids or amplifiers; and, the focus of this review, designs for the assessment of cognitive skills.


The diversity of uses and adaptations of these technologies shows the need to implement such devices as a complementary or primary means of evaluating skills and functions related to cognition, moving beyond traditional methodologies. A line of research that could provide accurate approximations of the development of executive functions is the study and evaluation of the different cognitive strategies and tools that may be mediated by the absence of auditory input. In this sense, assessment through technology offers the opportunity to observe and estimate cognitive domains and subdomains in deaf people. Although there is ample evidence for standardized systems, such as the eye tracker, or interfaces such as the Gordon Diagnostic System, we identified other devices that have not been validated in large populations, a limitation mentioned by the authors themselves. There is also evidence that combining devices with other types of technologies can yield more information on how, and with what intensity, the functions being evaluated occur (Pop-Jordanova & Pop-Jordanov, 2020); approaches that integrate these parameters can produce more accurate, sensitive, and specific indices of cognitive level. The development of technologies must therefore consider interindividual differences, etiologies, heterogeneity, and the sociocultural contexts in which Deaf people develop, and, prospectively, involve interdisciplinary and collaborative work with the Deaf community itself to deliver real and functional results that bring together technological innovation and Cognitive Hearing Science.

Ethics and Consent

Ethical approval and/or consent was not required for this review.

Funding Information

This research has been supported by a postdoctoral fellowship from the Consejo Nacional de Ciencia y Tecnología, CONACyT, Mexico, awarded to Dr. Coral I. Guerrero-Arenas.

Competing Interests

The authors have no competing interests to declare.

Author Contributions

All authors contributed to conception of ideas, writing and editing of the manuscript.

i The convention of using Deaf with a capital letter implies the recognition of an individual who belongs to a social group, a culture, and a common language (Erting and Woodward, 1979: 283). For this article, we use lowercase deaf to refer to the audiological condition of not hearing.


  1. Albrecht, U. V., Jungnickel, T., & von Jan, U. (2015, January). iSignIT-Communication App and Concept for the Deaf and Hard of Hearing. In ICIMTH (pp. 283–286). 

  2. Arlinger, S., Lunner, T., Lyxell, B., & Kathleen Pichora-Fuller, M. (2009). The emergence of cognitive hearing science. Scandinavian journal of psychology, 50(5), 371–384. DOI: 

  3. Baddeley, A. (2017). Exploring working memory: Selected works of Alan Baddeley. Routledge. DOI: 

  4. Badre, D., & Wagner, A. D. (2006). Computational and neurobiological mechanisms underlying cognitive flexibility. Proceedings of the National Academy of Sciences, 103(18), 7186–7191. DOI: 

  5. Barca, L., Pezzulo, G., Castrataro, M., Rinaldi, P., & Caselli, M. C. (2013). Visual Word Recognition in Deaf Readers: Lexicality Is Modulated by Communication Mode. PLOS ONE, 8(3), e59080. DOI: 

  6. Barkley, R. A. (2012). Executive functions: What they are, how they work, and why they evolved. Guilford Press. 

  7. Bavelier, D., Tomann, A., Hutton, C., Mitchell, T., Corina, D., Liu, G., & Neville, H. (2000a). Visual Attention to the Periphery Is Enhanced in Congenitally Deaf Individuals. Journal of Neuroscience, 20(17), RC93–RC93. DOI: 

  8. Bavelier, D., Tomann, A., Hutton, C., Mitchell, T., Corina, D., Liu, G., & Neville, H. (2000b). Visual attention to the periphery is enhanced in congenitally deaf individuals. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 20(17), RC93. DOI: 

  9. Bellotti, F., Kapralos, B., Lee, K., Moreno-Ger, P., & Berta, R. (2013). Assessment in and of Serious Games: An Overview. Advances in Human-Computer Interaction, 2013, 1–11. DOI: 

  10. Blair, C. D., & Ristic, J. (2019). Attention Combines Similarly in Covert and Overt Conditions. Vision (Basel, Switzerland), 3(2), E16. DOI: 

  11. Blankenship, T. L., Slough, M. A., Calkins, S. D., Deater-Deckard, K., Kim-Spoon, J., & Bell, M. A. (2019). Attention and executive functioning in infancy: Links to childhood executive function and reading achievement. Developmental Science, 22(6), e12824. DOI: 

  12. Bonmassar, C., Pavani, F., Di Renzo, A., Caselli, M. C., & van Zoest, W. (2021). Eye movement patterns to social and non-social cues in early deaf adults. Quarterly Journal of Experimental Psychology (2006), 74(6), 1021–1036. DOI: 

  13. Bosworth, R. G., & Dobkins, K. R. (2002). The Effects of Spatial Attention on Motion Processing in Deaf Signers, Hearing Signers, and Hearing Nonsigners. Brain and Cognition, 49(1), 152–169. DOI: 

  14. Bottari, D., Valsecchi, M., & Pavani, F. (2012). Prominent reflexive eye-movement orienting associated with deafness. Cognitive Neuroscience, 3(1), 8–13. DOI: 

  15. Brunyé, T. T., Drew, T., Weaver, D. L., & Elmore, J. G. (2019). A review of eye tracking for understanding and improving diagnostic interpretation. Cognitive Research: Principles and Implications, 4(1), 7. DOI: 

  16. Bull, R., Marschark, M., Nordmann, E., Sapere, P., & Skene, W. A. (2018). The approximate number system and domain-general abilities as predictors of math ability in children with normal hearing and hearing loss. The British Journal of Developmental Psychology, 36(2), 236–254. DOI: 

  17. Burkholder, R. A., & Pisoni, D. B. (2003). Speech timing and working memory in profoundly deaf children after cochlear implantation. Journal of Experimental Child Psychology, 85(1), 63–88. DOI: 

  18. Cai, D., Zhang, L., Li, Y., Wei, W., & Georgiou, G. K. (2018). The Role of Approximate Number System in Different Mathematics Skills Across Grades. Frontiers in Psychology, 9, 1733. DOI: 

  19. Cano, S., Naranjo, J., Henao, C., Rusu, C., & Albiol, S. (2020). Serious Game as Support for the Development of Computational Thinking for Children with Hearing Impairment. Applied Sciences, 11. DOI: 

  20. Cardin, V., Rudner, M., De Oliveira, R. F., Andin, J., Su, M. T., Beese, L., Woll, B., & Rönnberg, J. (2018). The Organization of Working Memory Networks is Shaped by Early Sensory Experience. Cerebral Cortex (New York, N.Y.: 1991), 28(10), 3540–3554. DOI: 

  21. Carter, B. T., & Luke, S. G. (2020). Best practices in eye tracking research. International Journal of Psychophysiology: Official Journal of the International Organization of Psychophysiology, 155, 49–62. DOI: 

  22. Chanchí, G. E. G., Vargas, P. A., & Campo, W. Y. M. (2019). Construcción de recursos educativos para la temática de accesibilidad en el curso de interacción humano computador. Revista Ibérica de Sistemas e Tecnologias de Informação, E23, 171–183. 

  23. Conway, C. M., Pisoni, D. B., & Kronenberger, W. G. (2009). The Importance of Sound for Cognitive Sequencing Abilities: The Auditory Scaffolding Hypothesis. Current Directions in Psychological Science, 18(5), 275–279. DOI: 

  24. Corina, D., & Singleton, J. (2009). Developmental social cognitive neuroscience: Insights from deafness. Child Development, 80(4), 952–967. DOI: 

  25. Cristofori, I., Cohen-Zimerman, S., & Grafman, J. (2019). Executive functions. Handbook of clinical neurology, 163, 197–219. DOI: 

  26. Daza González, M. T., Phillips-Silver, J., López Liria, R., Gioiosa Maurno, N., Fernández García, L., & Ruiz-Castañeda, P. (2021). Inattention, impulsivity, and hyperactivity in deaf children are not due to deficits in inhibitory control, but may reflect an adaptive strategy. Frontiers in psychology, 12, 629032. DOI: 

  27. Diamond, A. (2013). Executive functions. Annual Review of Psychology, 64, 135–168. DOI: 

  28. Diamond, A. (2020). Executive functions. In Handbook of clinical neurology, 173, 225–240. Elsevier. DOI: 

  29. Dye, M. W. G., & Hauser, P. C. (2014). Sustained attention, selective attention and cognitive control in deaf and hearing children. Hearing Research, 309, 94–102. DOI: 

  30. Dye, M. W. G., Hauser, P. C., & Bavelier, D. (2008). Visual Attention in Deaf Children and Adults: Implications for Learning Environments. In Marschark, M., & Hauser, P. C. (Eds.), Deaf cognition: Foundations and outcomes (250–263). Oxford University Press. DOI: 

  31. Dye, M. W. G., Hauser, P. C., & Bavelier, D. (2009). Is Visual Selective Attention in Deaf Individuals Enhanced or Deficient? The Case of the Useful Field of View. PLOS ONE, 4(5), e5640. DOI: 

  32. Dye, M. W., & Terhune-Cotter, B. (2022). Development of visual sustained selective attention and response inhibition in deaf children. Memory & Cognition, 1–17. DOI: 

  33. Erting, C., & Woodward, J. (1979). Sign language and the deaf community a sociolinguistic profile. Discourse Processes, 2(4), 283–300. DOI: 

  34. Ferreira, C. P., González-González, C. S., & Adamatti, D. F. (2021). Business simulation games analysis supported by human-computer interfaces: a systematic review. Sensors, 21(14), 4810. DOI: 

  35. Figueras, B., Edwards, L., & Langdon, D. (2008). Executive Function and Language in Deaf Children. The Journal of Deaf Studies and Deaf Education, 13(3), 362–377. DOI: 

  36. Foster, N. (2011). Gordon Diagnostic System. In S. Goldstein & J. A. Naglieri (Eds.), Encyclopedia of Child Behavior and Development (pp. 707–707). Springer US. DOI: 

  37. González, M. T. D., & Re, F. G. G. (2011). Neuropsychological assessment in deaf children: Presentation and preliminary results obtained with the AWARD Neuropsychological battery. Electronic Journal of Research in Educational Psychology, 24, 20. 

  38. Hauser, P. C., Dye, M. W. G., Boutla, M., Green, C. S., & Bavelier, D. (2007). Deafness and visual enumeration: Not all aspects of attention are modified by deafness. Brain Research, 1153, 178–187. DOI: 

  39. Hauser, P. C., Lukomski, J. & Hillman, T. (2008). Development of Deaf and Hard-of-Hearing Students’ Executive Functions. In Marschark, M., & Hauser, P. C. (Eds.), Deaf cognition: Foundations and outcomes (250–263). Oxford University Press. DOI: 

  40. Hayward, V., Ashley, O., Hernandez, C., Grant, D., & Robles-De-La-Torre, G. (2004). Haptic interfaces and devices. Sensor Review, 24, 16–29. DOI: 

  41. Heimler, B., van Zoest, W., Baruffaldi, F., Donk, M., Rinaldi, P., Caselli, M. C., & Pavani, F. (2015b). Finding the balance between capture and control: Oculomotor selection in early deaf adults. Brain and Cognition, 96, 12–27. DOI: 

  42. Heimler, B., van Zoest, W., Baruffaldi, F., Rinaldi, P., Caselli, M. C., & Pavani, F. (2015a). Attentional orienting to social and nonsocial cues in early deaf adults. Journal of Experimental Psychology. Human Perception and Performance, 41(6), 1758–1771. DOI: 

  43. Hua, H., Johansson, B., Magnusson, L., Lyxell, B., & Ellis, R. J. (2017). Speech recognition and cognitive skills in bimodal cochlear implant users. Journal of Speech, Language, and Hearing Research, 60(9), 2752–2763. DOI: 

  44. Imran, A., Razzaq, A., Baig, I. A., Hussain, A., Shahid, S., & Rehman, T. U. (2021). Dataset of Pakistan sign language and automatic recognition of hand configuration of urdu alphabet through machine learning. Data in Brief, 36, 107021. DOI: 

  45. Jacob, S. A., Chong, E. Y. C., Goh, S. L., & Palanisamy, U. D. (2021). Design suggestions for an mHealth app to facilitate communication between pharmacists and the Deaf: perspective of the Deaf community (HEARD Project). Mhealth, 7. DOI: 

  46. Khaleghi, A., Aghaei, Z., & Mahdavi, M. A. (2021). A Gamification Framework for Cognitive Assessment and Cognitive Training: Qualitative Study. JMIR Serious Games, 9(2), e21900. DOI: 

  47. Krafka, K., Khosla, A., Kellnhofer, P., Kannan, H., Bhandarkar, S., Matusik, W., & Torralba, A. (2016). Eye Tracking for Everyone. ArXiv:1606.05814 [Cs]. DOI: 

  48. Kral, A., Kronenberger, W. G., Pisoni, D. B., & O’Donoghue, G. M. (2016). Neurocognitive factors in sensory restoration of early deafness: A connectome model. The Lancet. Neurology, 15(6), 610–621. DOI: 

  49. Krashen, S. D. (1982). Acquiring a second language. World Englishes, 1(3), 97–101. DOI: 

  50. Krebs, J., Roehm, D., Wilbur, R. B., & Malaia, E. A. (2021). Age of sign language acquisition has a lifelong effect on syntactic preferences in sign language users. International Journal of Behavioral Development, 45(5), 397–408. DOI: 

  51. Krejtz, I., Krejtz, K., Wisiecka, K., Abramczyk, M., Olszanowski, M., & Duchowski, A. T. (2020). Attention Dynamics During Emotion Recognition by Deaf and Hearing Individuals. Journal of Deaf Studies and Deaf Education, 25(1), 10–21. DOI: 

  52. Landolfi, E., Continisio, G. I., Del Vecchio, V., Serra, N., Burattini, E., Conson, M., … & Malesci, R. (2022). NeonaTal Assisted TelerehAbilitation (TATA Web App) for Hearing-Impaired Children: A Family-Centered Care Model for Early Intervention in Congenital Hearing Loss. Audiology Research, 12(2), 182–190. DOI: 

  53. Lezak, M. D. (1982). The problem of assessing executive functions. International journal of Psychology, 17(1–4), 281–297. DOI: 

  54. Lim, J. Z., Mountstephens, J., & Teo, J. (2020). Emotion Recognition Using Eye Tracking: Taxonomy, Review and Current Challenges. Sensors (Basel, Switzerland), 20(8), E2384. DOI: 

  55. Lindsay, G. W. (2020). Attention in Psychology, Neuroscience, and Machine Learning. Frontiers in Computational Neuroscience, 14, 29. DOI: 

  56. López-Crespo, G., Daza, M. T., & Méndez-López, M. (2012). Visual working memory in deaf children with diverse communication modes: Improvement by differential outcomes. Research in Developmental Disabilities, 33(2), 362–368. DOI: 

  57. Marschark, M., & Hauser, P. C. (2008). Cognitive Underpinnings of Learning by Deaf and Hard-of-Hearing Students. In Marschark, M., & Hauser, P. C. (Eds.), Deaf cognition: Foundations and outcomes (pp. 250–263). Oxford University Press. DOI: 

  58. Mason, K., Marshall, C. R., & Morgan, G. (2021). Executive Function Training for Deaf Children: Impact of a Music Intervention. Journal of Deaf Studies and Deaf Education, 26(4), 490–500. DOI: 

  59. Muntzel, M. C. (1995). Aprendizaje vs. adquisición de segunda lengua: ¿Un conflicto de intereses? [Second language learning vs. acquisition: A conflict of interests?]. Estudios de Lingüística Aplicada, 21. 

  60. Murray, M. M., Lewkowicz, D. J., Amedi, A., & Wallace, M. T. (2016). Multisensory Processes: A Balancing Act across the Lifespan. Trends in Neurosciences, 39(8), 567–579. DOI: 

  61. Nielsen, J. (2013). Usability 101: Introduction to Usability. Jakob Nielsen’s Alertbox (January 4, 2012). 

  62. Norman, D. (2013). The design of everyday things: Revised and expanded edition. Basic Books. 

  63. Pavani, F., & Bottari, D. (2012). Visual Abilities in Individuals with Profound Deafness: A Critical Review. In M. M. Murray & M. T. Wallace (Eds.), The Neural Bases of Multisensory Processes. CRC Press/Taylor & Francis. DOI: 

  64. Pérez Ariza, V. Z., & Santís-Chaves, M. (2016). Interfaces hápticas: sistemas cinestésicos vs. sistemas táctiles. Revista EIA, (26), 13–29. DOI: 

  65. Petry, B., Illandara, T., Elvitigala, D. S., & Nanayakkara, S. (2018). Supporting Rhythm Activities of Deaf Children using Music-Sensory-Substitution Systems. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–10. DOI: 

  66. Pisoni, D. B., & Cleary, M. (2003). Measures of Working Memory Span and Verbal Rehearsal Speed in Deaf Children after Cochlear Implantation. Ear and Hearing, 24(1 Suppl), 106S–120S. DOI: 

  67. Pop-Jordanova, N., & Pop-Jordanov, J. (2020). Electrodermal Activity and Stress Assessment. Prilozi (Makedonska Akademija Na Naukite I Umetnostite. Oddelenie Za Medicinski Nauki), 41(2), 5–15. DOI: 

  68. Rodger, H., Lao, J., Stoll, C., Richoz, A. R., Pascalis, O., Dye, M., & Caldara, R. (2021). The recognition of facial expressions of emotion in deaf and hearing individuals. Heliyon, 7(5), e07018. DOI: 

  69. Roebuck, H., Freigang, C., & Barry, J. G. (2016). Continuous performance tasks: Not just about sustaining attention. Journal of speech, language, and hearing research, 59(3), 501–510. DOI: 

  70. Sidera, F., Amadó, A., & Martínez, L. (2017). Influences on Facial Emotion Recognition in Deaf Children. The Journal of Deaf Studies and Deaf Education, 22(2), 164–177. DOI: 

  71. Skaramagkas, V., Giannakakis, G., Ktistakis, E., Manousos, D., Karatzanis, I., Tachos, N., Tripoliti, E. E., Marias, K., Fotiadis, D. I., & Tsiknakis, M. (2021). Review of eye tracking metrics involved in emotional and cognitive processes. IEEE Reviews in Biomedical Engineering, PP. DOI: 

  72. Thakur, R., Jayakumar, J., & Pant, S. (2019). A comparative study of visual attention in hearing impaired and normal schoolgoing children. Indian Journal of Otology, 25(4), 192. DOI: 

  73. Tharpe, A. M., Ashmead, D., Sladen, D. P., Ryan, H. A. M., & Rothpletz, A. M. (2008). Visual Attention and Hearing Loss: Past and Current Perspectives. Journal of the American Academy of Audiology, 19(10), 741–747. DOI: 

  74. Tranchant, P., Shiell, M. M., Giordano, M., Nadeau, A., Peretz, I., & Zatorre, R. J. (2017). Feeling the Beat: Bouncing Synchronization to Vibrotactile Music in Hearing and Early Deaf People. Frontiers in Neuroscience, 11. DOI: 

  75. Tsou, Y.-T., Li, B., Kret, M. E., Frijns, J. H. M., & Rieffe, C. (2021). Hearing Status Affects Children’s Emotion Understanding in Dynamic Social Situations: An Eye-Tracking Study. Ear and Hearing, 42(4), 1024–1033. DOI: 

  76. Vázquez Mosquera, M. C. (2021). Revisión sistemática sobre pruebas de evaluación neuropsicológica para niños con discapacidad auditiva. Revista Eugenio Espejo, 15(3), 123–144. DOI: 

  77. van Wieringen, A., Boudewyns, A., Sangen, A., Wouters, J., & Desloovere, C. (2019). Unilateral congenital hearing loss in children: Challenges and potentials. Hearing research, 372, 29–41. DOI: 

  78. Watanabe, K., Matsuda, T., Nishioka, T., & Namatame, M. (2011). Eye gaze during observation of static faces in deaf people. PloS One, 6(2), e16919. DOI: 

  79. Worster, E., Pimperton, H., Ralph-Lewis, A., Monroy, L., Hulme, C., & MacSweeney, M. (2018). Eye Movements During Visual Speech Perception in Deaf and Hearing Children. Language Learning, 68(Suppl 1), 159–179. DOI: 

  80. Zeni, S., Laudanna, I., Baruffaldi, F., Heimler, B., Melcher, D., & Pavani, F. (2020). Increased overt attention to objects in early deaf adults: An eye-tracking study of complex naturalistic scenes. Cognition, 194, 104061. DOI: 
