Visual information and prior knowledge represent two different sources of task predictability, each of which has been reported to have a beneficial effect on dual-task performance. What if the two were combined? Adding multiple sources of predictability might, on the one hand, lead to additive, beneficial effects on dual-tasking. On the other hand, it is conceivable that multiple sources of predictability do not improve dual-task performance further, because having to process information from multiple sources complicates performance. In this study, we combined two sources of predictability, predictive visual information and prior knowledge (implicit and explicit learning), in a dual-task setup. Twenty-two participants performed a continuous tracking task together with an auditory reaction time task over three days. The middle segment of the tracking task repeated to promote motor learning, but only half of the participants were informed about this. After the practice blocks (day 3), we provided participants with predictive visual information about the tracking path to test whether visual information would add to the beneficial effects of prior knowledge (additive effects of predictability). Results show that both predictive visual information and prior knowledge improved dual-task performance, whether presented simultaneously or in the absence of each other. These results show that the processing of information relevant for enhancing task performance is unhindered by dual-task demands.
Humans continually predict events in their environment, updating representations of the external world as they encounter irregularities (Friston, 2010; Wolpert et al., 2003). This constant adaptation likely happens without awareness or the need for attentional resources (Whittlesea, 2004). A reduction, or adaptive use, of attentional resources is of major relevance to researchers interested in optimizing dual-task performance, since the finding that dual-task performance is inferior to single-task performance is often attributed to a limited pool of resources (Wickens, 2008). While making tasks more predictable has generally been beneficial for reducing resource requirements, results have not always been straightforward for dual-task improvement, because more predictability does not necessarily lead to increased benefits or reduced interference (de Oliveira et al., 2014). A possible reason for these findings is that predictability not only reduces resource requirements; it also affects the resource allocation policy and can lead to unequal weighting of tasks (Broeker et al., 2018). This might be complicated further when multiple sources of predictability are available.
In the current paper we provide prior knowledge and predictive visual information as two sources of predictability in a continuous tracking task, with the aim of investigating how multiple sources of predictability interact in improving dual-task performance. In the literature, predictability has been argued to come either from the individual, through prior knowledge, or from information available in the environment (Gentsch et al., 2016; Körding & Wolpert, 2006). We considered two possible outcomes: a) after having learnt a repeating pattern (implicitly or explicitly), participants will improve further as soon as predictive visual information is added to prior knowledge, showing additive effects; b) after having learnt a repeating pattern (implicitly or explicitly), participants will no longer rely on prior knowledge as soon as predictive visual information is added and will adapt to a feedforward online control strategy (see Figure 1).
The question of whether and how different sources of information interact has been the core interest of various experimental manipulations. While some researchers found that planned actions are abandoned when information in the environment is added (e.g. Pfister et al., 2012), others found that planned actions are more influential than information in the environment (i.e. cues) for subsequent behavior (e.g. Gozli et al., 2016; Kemper et al., 2012). Still others demonstrated that previously learnt knowledge about sequences is not abandoned when information in the environment is additionally presented (Gaschler et al., 2018). However, the variety of experimental paradigms used (e.g. discrete response-time tasks, memory tasks, visual search tasks) makes it difficult to derive clear inferences from these studies, and there are no studies examining the interaction of knowledge and information in continuous tasks. This complicates a mechanistic view, as for instance proposed in race models (e.g. Schuck et al., 2012), because no specific time point at which knowledge vs. information is retrieved can be determined. For our design, we assume that the use of knowledge and visual information can be independently manipulated to test the alternative hypotheses presented.
In addition, we will examine the differential effects of implicit vs. explicit knowledge. Implicit knowledge, in contrast to explicit knowledge, is generally assumed to not require attentional resources (Kal et al., 2018). This means that implicit knowledge should be superior to explicit knowledge in dual-task situations. Regarding our manipulations, the expectation is that the addition of implicit knowledge and predictive visual information yields better performance compared to explicit knowledge and predictive visual information, accepting that visual information demands significant processing itself.
The two sources of predictability, and their potential interaction, become clear when considering, for instance, a tennis player predicting the trajectory of an incoming ball by integrating knowledge from previously played rallies with the visual flight path information (Körding & Wolpert, 2004). Both sources of predictability have independently been demonstrated to have beneficial effects on single- and dual-task performance, as we review below.
Beneficial effects of prior knowledge on task performance, mainly in the form of implicit knowledge, have been demonstrated in many serial reaction-time tasks (Nissen & Bullemer, 1987; Röttger et al., 2017) and tracking studies (Ewolds et al., 2017; Pew, 1974; Tsang & Chan, 2015). In these studies, a path-to-be-tracked or a sequence of buttons-to-be-pressed is repeated over numerous trials and thus builds up knowledge about the course of the task. Results of these experiments consistently show that performance on practiced sequences is better than on random sequences. However, participants are often unaware of the existence of sequences despite learning them, and thus the difference in performance between practiced and random trials is often taken as a measure of implicit learning. Implicit knowledge may have an advantage over explicit knowledge in dual-task situations since no conscious processes can interfere with a secondary task. In tracking studies, however, it has been shown that participants receiving explicit instructions about sequences are advantaged over participants learning them implicitly (Ewolds et al., 2017), possibly because explicit instructions draw attention to the regularity in the sequence and ensure that knowledge is established sooner. The resulting knowledge, however, could still become implicit, in line with traditional training approaches arguing that explicit instructions first lead to declarative knowledge about movements but often become more automatic after training. A review by Kal et al. (2018) demonstrated that there is indeed little evidence to suggest that implicit learning leads to a higher degree of automaticity than explicit learning for more complex movements. 
Therefore, the differences between the explicit group and the implicit group in the current study should be regarded as effects of instructions rather than as reflecting an absence of explicit knowledge (in the implicit group) or a clear distinction between implicit and explicit knowledge. In any case, the explicit group was meant to guarantee that knowledge and visual information could be compared.
Beneficial effects of predictive visual information have been demonstrated in a driving simulation study by de Oliveira and Wann (2010). They showed that increasing visual information linearly improved driving performance in healthy controls, yet too much information was suboptimal for people with developmental coordination disorder. They hypothesized that in people with the disorder, too much visual information overloaded their processing system, because it cannot be immediately used for online control and interferes with concurrent performance. For example, seeing the first curve of the road ahead is helpful because it allows for concurrent planning ahead, while presenting the second and third curve may interfere with planning of the first curve (see also Raab et al., 2013). If coordination disorders share characteristics with increased processing load under dual-task requirements in healthy adults, then the results would speak against our hypothesized additive effects because too much information would be present when predictive visual information and prior knowledge were added. Related evidence comes from tracking studies applying predictive visual information. Broeker et al. (under revision) demonstrated that while dual-task tracking performance improved with 200–400 ms predictive visual information, 600–800 ms did not add to tracking improvements, so a higher amount of predictive visual information can be debilitative to tracking performance, too.
In sum, both sources of predictability are beneficial to dual-task performance but no study to date has tested whether adding prior knowledge and (an optimal amount of) predictive visual information would be notably beneficial to continuous dual-task performance. We tested dual-task performance in a tracking task with an auditory reaction time task, where participants had acquired prior knowledge about the tracking path and additionally received predictive visual information about the path. This allowed us to test potential additive effects, but also a general interaction of the two sources. If no additive effects are found, we need to examine which source of information was used more for performance optimization. Do people rely on knowledge they have learnt, disregarding visual input? Or do they rely on the newly added visual information without drawing on prior knowledge?
We recruited 22 naïve participants (10 female; aged between 21 and 29 years, M = 24.5 years, SD = 2.31) on the university campus via a mailing list and a participant database. The implicit group consisted of 10 participants and the explicit group of 12 participants. Initially, both groups consisted of 12 participants, but two participants were removed due to incomplete data sets. The sample size was based on Ewolds et al. (2018). Participants had normal or corrected-to-normal vision and reported no musculoskeletal or neurological disorders. Participants gave written informed consent prior to the experiment and received a small remuneration for taking part. The experiments were approved by the local ethics committee and were performed according to the Declaration of Helsinki 2008.
Participants were seated in a dimly lit room at a viewing distance of 60 cm from a 24” computer screen (144 Hz, 1,920 × 1,080 pixel resolution). The tracking software ran on a Windows 10, 64-bit system with a GTX750 graphics card. A spring-loaded joystick (SpeedLink Dark Tornado, max. sampling rate 60 Hz) was fixed to the table perpendicular to the midpoint of the screen at a distance of 30 cm. A pedal (f-pro USB foot switch, 9 × 5 cm) was fixed to the floor, and participants placed their self-reported dominant foot on it. Participants wore headphones (Sennheiser HD 65TV).
Participants operated a joystick with their self-reported dominant hand to control a white cursor cross that tracked a red target square. The cursor cross fit exactly into the 19 × 22 pixel target square. The cursor's position on the x-axis was coupled to the target position; only the vertical movement of the cursor was user-controlled. This was implemented to prevent participants from moving the cursor straight to the right edge of the screen to cut trials short. The tracking path on each trial was pseudo-randomly composed of three different segments (adapted from Schmidt & Wulf, 1997) according to the formula:
with ai and bi being randomly generated numbers ranging from –5 to 5 and x being a real number in the range [0; 2π]. As different amplitudes and number of extrema (Magill, 1998) have been shown to lead to differences in performance, all randomly generated segments were balanced on length and number of extrema beforehand. This yielded a final set of 41 segments from which the three segments were selected for each trial. To avoid the anticipation of peaks, the red target followed a constant path velocity of 10.5 cm/s, and as a result, trial length varied from 25.6 to 27.9 s depending on the curve’s trajectory. Each participant received his/her own individual repeating segment, ensuring that practice effects were not due to difficulty differences between segments or between segments of the two groups (Künzell et al., 2016). The repeating segment was always placed in the middle during the practice blocks. The two random segments on each trial were chosen so that each occurred an equal number of times.
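The segment-generation procedure above can be sketched in code. The sketch below is illustrative only: the generating formula itself is not reproduced here, so the sum-of-sines form and the number of harmonics (6) are assumptions based on the Schmidt and Wulf (1997) paradigm; only the coefficient range [–5, 5], the domain [0, 2π], and the balancing on the number of extrema are taken from the text.

```python
import math
import random

def generate_segment(n_points=500, n_terms=6, seed=None):
    """Generate one candidate tracking segment.

    Assumption: a sum-of-sines/cosines form as in the Schmidt & Wulf
    (1997) paradigm. Only the coefficient range [-5, 5] and the
    domain [0, 2*pi] are specified in the text.
    """
    rng = random.Random(seed)
    a = [rng.uniform(-5, 5) for _ in range(n_terms + 1)]
    b = [rng.uniform(-5, 5) for _ in range(n_terms + 1)]
    xs = [2 * math.pi * i / (n_points - 1) for i in range(n_points)]
    ys = [a[0] + sum(a[i] * math.sin(i * x) + b[i] * math.cos(i * x)
                     for i in range(1, n_terms + 1))
          for x in xs]
    return xs, ys

def count_extrema(ys):
    """Count local maxima/minima via slope sign changes, used to
    balance candidate segments on their number of extrema."""
    extrema = 0
    for i in range(1, len(ys) - 1):
        if (ys[i] - ys[i - 1]) * (ys[i + 1] - ys[i]) < 0:
            extrema += 1
    return extrema
```

In this scheme, candidate segments would be generated in bulk and filtered on length and extremum count to obtain a balanced set such as the 41 segments described above.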
In the Test Block, we added predictive visual information, so a portion of the tracking path ahead of the target was made visible (Figure 2). We chose 400 ms because Broeker et al. (under revision) have shown 400 ms to be most beneficial for dual-task (DT) cost reduction (similar to de Oliveira et al., 2014).
The second task was an auditory go/no-go reaction time task with high-pitched and low-pitched tones occurring randomly (1,086 Hz and 217 Hz, 75-ms duration). Participants were instructed to respond with a pedal press to high-pitched tones as fast as possible while ignoring the low-pitched tones. The number of target and distractor sounds per trial varied between 9 and 14, but all participants received the same total number of sounds across the whole experiment. No tones occurred during the first 500 ms or the last 500 ms of a trial, to guarantee sufficient response time. Because average RTs for auditory discrimination in earlier DT studies were 500–950 ms (Bherer et al., 2005), the minimum gap between two sounds was 1,001 ms, and responses were considered valid only when they were given within 800 ms.
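These timing constraints can be made concrete with a short sketch. The scheduling routine below is hypothetical, as the original software's algorithm is not described here; only the 500-ms trial margins, the 1,001-ms minimum gap, and the 800-ms response window come from the text.

```python
import random

def schedule_tones(trial_ms, n_tones, min_gap_ms=1001, margin_ms=500, seed=None):
    """Draw random tone onsets (in ms) respecting the minimum gap between
    tones and the silent margins at trial start and end.

    Rejection-sampling approach; illustrative only, not the original
    experiment software.
    """
    rng = random.Random(seed)
    onsets = []
    attempts = 0
    while len(onsets) < n_tones and attempts < 10000:
        t = rng.uniform(margin_ms, trial_ms - margin_ms)
        if all(abs(t - o) >= min_gap_ms for o in onsets):
            onsets.append(t)
        attempts += 1
    return sorted(onsets)

def valid_rt(onset_ms, press_ms, window_ms=800):
    """A pedal press counts as a response only if it falls within
    the 800-ms window after a target tone."""
    rt = press_ms - onset_ms
    return 0 <= rt <= window_ms
```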
Participants were informed that they had to practice a tracking task over several days as the study aimed at examining multitasking efficiency. While participants in the explicit group were instructed about the existence of a repeating middle segment in the tracking task, participants of the implicit group were not. All participants were instructed to follow the target square as closely as possible, to react to target tones as fast and as accurately as possible, and to put equal emphasis on both tasks. A feedback window informing participants about their tracking performance and RTs was shown after every five trials to maintain motivation (McDowd, 1986).
Participants started with a familiarization block of five single-task (ST) tracking trials, then five ST auditory trials and finally two dual-task trials (without repeating segment), see Figure 3 for the experimental schedule. In the practice block the goal was for participants to learn the repeating segment, which was always placed in the middle. The number of trials followed Ewolds et al. (2017), who found implicit learning effects with a similar design. Predictive visual information was not provided during the practice block. In the Test Block, participants performed 80 trials in total: 40 dual-task and 40 single-task tracking trials, consisting of 2 predictive visual information (0 ms vs 400 ms) × 4 segment conditions (repeating in middle vs all random vs repeating left vs repeating right), each condition repeating five times. A Retention Block was performed two days later, which repeated the schedule of the Test Block. Participants were exposed to two sources of predictability for the first time during the Test Block, so a Retention Block 48 hours later was deemed adequate for sleep-dependent consolidation of motor learning to occur and provide a more accurate picture of what participants had learned (Walker & Stickgold, 2004). The usual procedure of so-called catch trials where the repeating segment is replaced by a random segment was extended with trials that positioned the repeated segment on the right or left. To the authors’ knowledge this has not been done before and gives us the opportunity to see whether position of the segment is crucial to the expression of knowledge. The explicit group was informed when the repeated segment occurred right or left, and when it was replaced by a random segment.
After finishing the experiment, participants from the implicit group were further asked to fill in a questionnaire. We used the same questionnaire as in Ewolds et al. (2017) and asked:
We calculated the root mean square error at 100 Hz (RMSE; 1 RMSE ≅ 0.56 cm on screen), reflecting participants' mean deviation from the target tracking path (Schmidt & Wulf, 1997). For the tone task we recorded RTs.
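A minimal sketch of this error measure, assuming cursor and target vertical positions sampled onto a common 100-Hz time base (the variable names are hypothetical):

```python
import math

def tracking_rmse(cursor_y, target_y):
    """Root mean square error between cursor and target positions over
    one trial; 1 RMSE unit corresponds to roughly 0.56 cm on screen."""
    assert len(cursor_y) == len(target_y)
    n = len(cursor_y)
    return math.sqrt(sum((c - t) ** 2 for c, t in zip(cursor_y, target_y)) / n)
```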
Prior to the analyses we checked for outliers in the data. Participants were removed from the data set when their RMSE or RTs deviated from the mean by two or more standard deviations. Two participants from the implicit group were removed from the data set because they had only completed two testing days. To check whether learning took place in the implicit and explicit groups, we subjected RMSEs to a 4 × 2 × 2 mixed-measures ANOVA with the factors Segment (Repeating middle vs. Repeating right vs. Repeating left vs. Random), Condition (Dual task vs. Single task) and Group (Implicit vs. Explicit), using only data without predictive visual information. We then averaged the repeating segments into a single ‘Repeating Segment’ variable to facilitate further analysis. Next, we analyzed the individual contributions of a repeating segment and of predictive visual information to dual-task performance, for both RMSEs and RTs. Lastly, to test the additive contribution of both sources of predictability, we used an ANOVA including the factors Predictive Visual Information (0 ms vs. 400 ms), Segment (Repeating vs. Random), Time (Test Block vs. Retention Block) and Group (Implicit vs. Explicit). An additive effect would be indicated by main effects of Segment and Predictive Visual Information combined with a non-significant interaction between them (APA, n.d.). This means that the difference between random and repeating segments should be unconditional upon visual information, i.e. it should not disappear when this source of predictability is added.
The aim of our study was to examine the differential influence of different sources of predictability on dual-task performance in implicit and explicit learning groups. The results support the hypothesis of additive effects of prior knowledge and predictive visual information on dual-task tracking performance.
A main effect of Segment in the Test Block demonstrated that learning of the repeating segment had taken place, F(3, 18) = 3.25, p = .046, ηp2 = .351. Pairwise comparisons confirmed that performance on a random segment (M = 4.49, SE = 0.12) was worse than performance on the repeating segment placed on the left (M = 4.12, SE = 0.13), in the middle (M = 4.18, SE = 0.18) and on the right (M = 4.25, SE = 0.16; all ps < .05). This effect of Segment was similar for dual-task and single-task conditions, as there was no significant Segment × Condition interaction, F(3, 18) = 1.05, p = .396, ηp2 = .149, and no main effect of Condition, F(1, 20) = 4.20, p = .054, ηp2 = .174. The effect of Segment also did not differ between the implicit and explicit groups, as no Segment × Group interaction was found, F(3, 18) < 1, p = .971, ηp2 = .013. These checks were important since dual-task performance can suppress the expression of implicit knowledge (Cohen et al., 1990). Likewise, there was no Segment × Condition × Group interaction, F(3, 18) < 1, p = .462, ηp2 = .130. To simplify the analysis of the learning effect, we took the repeating middle segments and the average of the random segments to create the two-level factor Segment (Repeating vs. Random) in the analyses below. Given that we were mainly interested in the (additive) effects of predictability on dual-task performance, we will continue to report the individual contributions of each source of predictability to dual-task performance (first prior knowledge, reflected in Segment, and then predictive visual information) in the Test and Retention Blocks. Results on single-task performance can be found in the supplementary material.
We also analyzed the implicit group's answers on the questionnaire. Of 10 participants, 9 indicated that they had not noticed a repeating segment (question 6) and also indicated the wrong segment in question 7. One participant indicated that she had noticed the repeating segment and crossed the middle one in question 7. Given that this participant's values were not considerably lower than those of the other participants in the group, and that there was a 33% chance of the middle item being correct, the participant remained in the data set.
There was an effect of Segment for the Test Block, because tracking of repeating segments was better than tracking of random segments, F(1, 20) = 7.90, p = .011, ηp2 = .283, see Figure 4. There was no significant Segment × Group interaction in the Test Block, F(1, 20) < 1, p = .796, ηp2 = .003, showing that the effect was similar for the implicit and explicit learners.
There was no effect of Segment for the Retention Block, F(1, 20) = 3.05, p = .096, ηp2 = .132, see Figure 5, but there was a significant Segment × Group interaction, F(1, 20) = 6.48, p = .019, ηp2 = .245, showing that only the explicit group benefited from the repeating segments in the Retention Block. In a separate ANOVA we tested whether the improvement in tracking from Test Block to Retention Block (comparing Figures 4 and 5) was significant, which was the case, as indicated by a main effect of Block, F(1, 20) = 28.93, p < .001, ηp2 = .591. The specific improvement on repeated segments from Test to Retention Block was, however, more pronounced for the explicit group, Segment × Group × Block: F(1, 20) = 5.02, p = .037, ηp2 = .201.
Reaction times were not significantly shorter while tracking a repeating segment, neither during the Test Block, F(1, 18) < 1, p = .377, ηp2 = .044, nor during the Retention Block, F(1, 20) < 1, p = .388, ηp2= .037.
Predictive visual information significantly improved tracking during the Test Block, F(1, 20) = 238.69, p < .001, ηp2 = .923, as well as during the Retention Block, F(1, 20) = 705.63, p < .001, ηp2 = .972 (Figures 4 and 5). Predictive visual information did not significantly improve reaction times during the Test Block, F(1, 18) < 1, p = .726, ηp2 = .007. However, in the Retention Block, there was both a main effect of Predictive visual information on RT, F(1, 20) = 6.72, p = .017, ηp2 = .251, and a significant Predictive visual information × Segment interaction, F(1, 20) = 9.97, p = .005, ηp2 = .333 (Figure 5): while predictive visual information had no effect on reaction times over the repeating segments, it increased reaction times over the random segments (predictive information: M = 562 ms, SE = 10 ms vs. no predictive information: M = 517 ms, SE = 12 ms).
Exploratory analyses of the first 5 trials with visual information showed that the benefit of visual information was instant and did not develop over trials, as indicated by a non-significant main effect of Trial, F(1, 851) = 1.60, p = .206, and non-significant interactions between Trial, Visual Information, Segment and/or Block.
If knowledge about the repeating segment and Predictive visual information were unconditional upon each other, there should be separate main effects and also a non-significant interaction. This was the case as shown by the main effects above, and the following non-significant Segment × Visual Information interaction in the Test Block, F(1, 20) < 1, p = .384, ηp2 = .038, and the Retention Block, F(1, 20) < 1, p = .814, ηp2 = .003. This shows that performance improvement in the repeating segment did not differ between trials with and without predictive visual information (see Figure 6).
Bayesian statistics using the Bayesian information criterion, as proposed by Wagenmakers (2007), were further applied to estimate the relative evidence for the absence of the interaction (H0). According to the Bayes factor classification by van Doorn et al. (2019), the interaction term shows moderate evidence for H0, BF10 = 0.278.
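The BIC-based approximation in question can be sketched as follows. It assumes each model is summarized by its BIC; the `bic` helper, which computes the BIC of a linear model from its error sum of squares, is an illustration rather than the exact ANOVA-based comparison performed here.

```python
import math

def bic(sse, n, k):
    """BIC of a linear model with error sum of squares `sse`,
    n observations, and k free parameters (illustrative helper)."""
    return n * math.log(sse / n) + k * math.log(n)

def bf10_from_bic(bic_h0, bic_h1):
    """Wagenmakers' (2007) BIC approximation to the Bayes factor:
    BF01 = exp((BIC_H1 - BIC_H0) / 2), so BF10 is its reciprocal.
    Values below 1 favor H0 (here: absence of the interaction)."""
    return math.exp((bic_h0 - bic_h1) / 2)
```

For example, a model without the interaction term (H0) that achieves a lower BIC than the model including it (H1) yields BF10 < 1, i.e. evidence for H0.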
The goal of the current study was to investigate the effect of prior knowledge and predictive visual information on dual-task performance. First, we needed to establish whether both sources of predictability influence dual-task performance. We found that predictive visual information had a strong impact on tracking performance and lowered tracking errors under dual-task conditions. The effects of prior knowledge were smaller and less consistent in the sense that implicit learning (of the repeated segment) was not demonstrable in all tests. However, learning in general was not redundant after adding visual information as we still found differences between random and repeated segments. We therefore conclude that both lead to increased performance in the presence of the other, confirming additive effects of predictability. This result extends research on the influence of predictability on dual-tasking which has been predominantly shown in experiments using discrete, sequentially-presented stimuli (Gaschler et al., 2018).
Our results showed no implicit learning effects at retention, and two reasons may explain this: a ceiling effect in performance, or participants relying more on predictive visual information at the cost of implicit learning. The latter explanation is unlikely since the learning effect was also absent when predictive visual information was unavailable. However, this explanation cannot be completely disregarded, because it is possible that implicit knowledge faded the more trials participants performed with predictive visual information, which happened during the test and retention blocks. Additionally, trials contained a repeating segment less consistently, or in a different position, during these blocks. While implicit knowledge is often retained better than explicit knowledge, it is also more susceptible to changes in the task environment (Abrahamse et al., 2010; Lee & Vakoch, 1996). The existence of a ceiling effect is also possible because both groups improved significantly from the Practice to the Retention Block, although improvement stagnated on the repeating segment without predictive visual information. It must be noted that implicit learning effects in continuous tasks are more inconsistent than in SRT tasks. This has often been ascribed to methodological issues such as peculiarities of the repeating segment used (Chambaron et al., 2006; Van Ooteghem et al., 2008), but since the repeating segment was unique for every participant, the implicit learning observed in the practice block is unlikely to be an anomaly. In contrast to the implicit group, we found the advantage of a repeating segment for the explicit group to be very consistent; this may be of little surprise since these participants were instructed regularly about the repeating segment. It is therefore difficult to establish how much the performance improvements were due to knowledge of the tracking path or to a general direction of attention to the repeating segments through instructions.
Overall, the data from the Test Block suggest that participants were able to use both prior knowledge and predictive visual information to optimize motor output. The effect of predictive visual information was much larger, though, which may have had the side effect that it was relied upon more with continued testing, as in the Retention Block. As described above, it is possible that this is the reason why implicit learning effects did not show in the Retention Block. Predictive visual information increased reaction times during the Retention Block, although it is unclear why this happened only over the random segments. In previous work (Broeker et al., n.d.), we manipulated predictability in the tracking task and the auditory task separately and found that while visual information had no impact on RT, auditory sequences had no impact on RMSE, suggesting unilateral effects of predictability. If visual information serves visuomotor control, but audiomotor control requires auditory predictability, which was not manipulated in this study, then the fact that RTs increased for visually predicted random segments speaks for this hypothesis. While the visual information was useless or even debilitative to reaction time performance, random segments increased processing load even further as they were never practiced or learnt/automatized. Beyond the assumption that predictability does not unconditionally reduce resource requirements, we assume that it may also add saliency to a task, which may channel more resources to that task despite instructions to pay equal attention to both tasks. A resource-sharing theory of dual-task performance, in which resources can be directed to tasks voluntarily or through task characteristics, can best explain these findings (Tombu & Jolicœur, 2003; Wickens, 2008).
A limitation of the current study is that we could not show that participants reached a plateau in tracking performance by the end of training. Another Retention Block might have clarified whether the absent implicit learning effects were due to a ceiling effect. In the original implicit learning tracking study by Pew (1974), learning of invariant features was not demonstrable until the sixth day of practice. However, the current training protocol was largely adopted from Ewolds et al. (2017), where learning was demonstrated after the same number of trials, and some evidence pointed towards the bulk of the learning taking place during the first 20 trials.
It is important to investigate which factors may improve multitasking performance. A potential problem with adding information or informative cues to dual tasks is that they need to be processed under high sensory load. However, this study showed that different sources of predictability made available during dual-tasking do not hinder performance but additively aid dual-task performance, at least for one dependent variable of interest. Future studies could take these results further by providing sources of predictability useful to both tasks, i.e. visuomotor and audiomotor control, to find further evidence that relevant cues and knowledge can help to circumvent the processing limitations of the human system when performing multiple tasks.
Data of this project is available at https://osf.io/65hfk/.
The research was approved by the local ethics committee of the German Sport University Cologne (approval no. 012/2018). All participants provided written informed consent.
This work was supported by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG), Priority Program SPP 1772 [grant numbers RA 940/17-1; KU 1557/3-1].
The authors have no competing interests to declare.
M.R., S.K. and R.O. developed the study design and idea. L.B. carried out the experiment, sorted and cleaned the data, wrote the manuscript, and designed the figures. H.E. analyzed the data and drafted the manuscript. All authors discussed and interpreted the results and revised the manuscript.
Laura Broeker and Harald Ewolds shared first authorship.
Abrahamse, E. L., Jiménez, L., Verwey, W. B., & Clegg, B. A. (2010). Representing serial action and perception. Psychonomic Bulletin and Review, 17(5), 603–623. DOI: https://doi.org/10.3758/PBR.17.5.603
American Psychological Association. (n.d.). Additive effect. APA Dictionary of Psychology. https://dictionary.apa.org/additive-effect
Bherer, L., Kramer, A. F., Peterson, M. S., Colcombe, S., Erickson, K., & Becic, E. (2005). Training effects on dual-task performance: are there age-related differences in plasticity of attentional control? Psychology and Aging, 20(4), 695–709. DOI: https://doi.org/10.1037/0882-7974.20.4.695
Broeker, L., Liepelt, R., Poljac, E., Künzell, S., Ewolds, H., de Oliveira, R. F., & Raab, M. (2018). Multitasking as a choice: a perspective. Psychological Research, 82(1). DOI: https://doi.org/10.1007/s00426-017-0938-7
Chambaron, S., Ginhac, D., & Perruchet, P. (2006). Is Learning in SRT Tasks Robust Across Procedural Variations? Proceedings of the 28th Annual Conference of the Cognitive Science Society (pp. 148–153).
Cohen, D. A., Ivry, R., & Keele, S. W. (1990). Attention and structure in sequence learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16(1), 17–30. DOI: https://doi.org/10.1037/0278-7393.16.1.17
de Oliveira, R. F., Billington, J., & Wann, J. P. (2014). Optimal use of visual information in adolescents and young adults with developmental coordination disorder. Experimental Brain Research, 232(9), 2989–2995. DOI: https://doi.org/10.1007/s00221-014-3983-0
de Oliveira, R. F., & Wann, J. P. (2010). Integration of dynamic information for visuomotor control in young adults with developmental coordination disorder. Experimental Brain Research, 205(3), 387–394. DOI: https://doi.org/10.1007/s00221-010-2373-5
Ewolds, H. E., Bröker, L., de Oliveira, R. F., Raab, M., & Künzell, S. (2017). Implicit and Explicit Knowledge Both Improve Dual Task Performance in a Continuous Pursuit Tracking Task. Frontiers in Psychology, 8, 2241. DOI: https://doi.org/10.3389/fpsyg.2017.02241
Friston, K. (2010). The free-energy principle: a unified brain theory? Nature Reviews. Neuroscience, 11(2), 127–138. DOI: https://doi.org/10.1038/nrn2787
Gaschler, R., Kemper, M., Zhao, F., Pumpe, I., Ruderisch, C.-B., Röttger, E., & Haider, H. (2018). Differential effects of cue-based and sequence knowledge-based predictability on multitasking performance. Acta Psychologica, 191, 76–86. DOI: https://doi.org/10.1016/j.actpsy.2018.09.004
Gentsch, A., Weber, A., Synofzik, M., Vosgerau, G., & Schütz-Bosbach, S. (2016). Towards a common framework of grounded action cognition: Relating motor control, perception and cognition. Cognition, 146, 81–89. DOI: https://doi.org/10.1016/j.cognition.2015.09.010
Gozli, D. G., Aslam, H., & Pratt, J. (2016). Visuospatial cueing by self-caused features: Orienting of attention and action–outcome associative learning. Psychonomic Bulletin & Review, 23(2), 459–467. DOI: https://doi.org/10.3758/s13423-015-0906-4
Kal, E., Prosee, R., Winters, M., & van der Kamp, J. (2018). Does implicit motor learning lead to greater automatization of motor skills compared to explicit motor learning? A systematic review. PloS One, 13(9), e0203591. DOI: https://doi.org/10.1371/journal.pone.0203591
Kemper, M., Umbach, V., Schwager, S., Gaschler, R., Frensch, P., & Stürmer, B. (2012). What I Say is What I Get: Stronger Effects of Self-Generated vs. Cue-Induced Expectations in Event-Related Potentials. Frontiers in Psychology, 3, 562. DOI: https://doi.org/10.3389/fpsyg.2012.00562
Körding, K. P., & Wolpert, D. M. (2004). Bayesian integration in sensorimotor learning. Nature, 427(6971), 244–247. DOI: https://doi.org/10.1038/nature02169
Körding, K. P., & Wolpert, D. M. (2006). Bayesian decision theory in sensorimotor control. Trends in Cognitive Sciences, 10(7), 319–326. DOI: https://doi.org/10.1016/j.tics.2006.05.003
Künzell, S., Sießmeir, D., & Ewolds, H. (2016). Validation of the Continuous Tracking Paradigm for Studying Implicit Motor Learning. Experimental Psychology, 63(6), 318–325. DOI: https://doi.org/10.1027/1618-3169/a000343
Lee, Y., & Vakoch, D. A. (1996). Transfer and retention of implicit and explicit learning. British Journal of Psychology, 87(4), 637–651. DOI: https://doi.org/10.1111/j.2044-8295.1996.tb02613.x
Magill, R. A. (1998). Knowledge is More than We Can Talk about: Implicit Learning in Motor Skill Acquisition. Research Quarterly for Exercise and Sport, 69(2), 104–110. DOI: https://doi.org/10.1080/02701367.1998.10607676
McDowd, J. M. (1986). The effects of age and extended practice on divided attention performance. Journal of Gerontology, 41(6), 764–769. DOI: https://doi.org/10.1093/geronj/41.6.764
Nissen, M. J., & Bullemer, P. (1987). Attentional requirements of learning: Evidence from performance measures. Cognitive Psychology, 19(1), 1–32. DOI: https://doi.org/10.1016/0010-0285(87)90002-8
Pew, R. W. (1974). Levels of analysis in motor control. Brain Research, 71(2–3), 393–400. DOI: https://doi.org/10.1016/0006-8993(74)90983-4
Pfister, R., Heinemann, A., Kiesel, A., Thomaschke, R., & Janczyk, M. (2012). Do endogenous and exogenous action control compete for perception? Journal of Experimental Psychology: Human Perception and Performance, 38(2), 279–284. DOI: https://doi.org/10.1037/a0026658
Raab, M., de Oliveira, R. F., Schorer, J., & Hegele, M. (2013). Adaptation of motor control strategies to environmental cues in a pursuit-tracking task. Experimental Brain Research, 228(2), 155–160. DOI: https://doi.org/10.1007/s00221-013-3546-9
Röttger, E., Haider, H., Zhao, F., & Gaschler, R. (2017). Implicit sequence learning despite multitasking: the role of across-task predictability. Psychological Research, 1–18. DOI: https://doi.org/10.1007/s00426-017-0920-4
Schmidt, R. A., & Wulf, G. (1997). Continuous Concurrent Feedback Degrades Skill Learning: Implications for Training and Simulation. Human Factors: The Journal of the Human Factors and Ergonomics Society, 39(4), 509–525. DOI: https://doi.org/10.1518/001872097778667979
Schuck, N. W., Gaschler, R., & Frensch, P. A. (2012). Implicit learning of what comes when and where within a sequence: The time-course of acquiring serial position-item and item-item associations to represent serial order. Advances in Cognitive Psychology, 8(2), 83–97. DOI: https://doi.org/10.5709/acp-0106-0
Tombu, M., & Jolicœur, P. (2003). A central capacity sharing model of dual-task performance. Journal of Experimental Psychology: Human Perception and Performance, 29(1), 3–18. DOI: https://doi.org/10.1037/0096-1523.29.1.3
Tsang, S. N. H., & Chan, A. H. S. (2015). Tracking and discrete dual task performance with different spatial stimulus-response mappings. Ergonomics, 58(3), 368–382. DOI: https://doi.org/10.1080/00140139.2014.978901
van Doorn, J., van den Bergh, D., Bohm, U., Dablander, F., Derks, K., Draws, T., Evans, N. J., Gronau, Q. F., Hinne, M., Kucharský, Š., Ly, A., Marsman, M., Matzke, D., Raj, A., Sarafoglou, A., Stefan, A., Voelkel, J. G., & Wagenmakers, E.-J. (2019). The JASP Guidelines for Conducting and Reporting a Bayesian Analysis. PsyArXiv preprint. DOI: https://doi.org/10.31234/osf.io/yqxfr
Van Ooteghem, K., Frank, J. S., Allard, F., Buchanan, J. J., Oates, A. R., & Horak, F. B. (2008). Compensatory postural adaptations during continuous, variable amplitude perturbations reveal generalized rather than sequence-specific learning. Experimental Brain Research, 187(4), 603–611. DOI: https://doi.org/10.1007/s00221-008-1329-5
Wagenmakers, E.-J. (2007). A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review, 14(5), 779–804. DOI: https://doi.org/10.3758/BF03194105
Walker, M. P., & Stickgold, R. (2004). Sleep-dependent learning and memory consolidation. Neuron, 44(1), 121–133. DOI: https://doi.org/10.1016/j.neuron.2004.08.031
Whittlesea, B. W. A. (2004). The perception of integrality: remembering through the validation of expectation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(4), 891–908. DOI: https://doi.org/10.1037/0278-7393.30.4.891
Wickens, C. D. (2008). Multiple resources and mental workload. Human Factors, 50(3), 449–455. DOI: https://doi.org/10.1518/001872008X288394
Wolpert, D. M., Doya, K., & Kawato, M. (2003). A unifying computational framework for motor control and social interaction. Philosophical Transactions of the Royal Society B: Biological Sciences, 358(1431), 593–602. DOI: https://doi.org/10.1098/rstb.2002.1238