
The effects of attention on ear advantages in dichotic listening to words and affects

This study examined the effects of attention on ear advantages using dichotic listening to words and affects, a focused-attention paradigm. We compared the mixed condition, in which attention is switched between the ears in each trial, to the blocked condition, in which attention is directed to one ear for an entire block of trials. Results showed a decreased right ear advantage for word processing only in the mixed condition and an increased left ear advantage for emotion processing in both attention conditions for hits index. The mixed condition showed smaller laterality effects than the blocked condition for words with respect to hits index, while increasing right ear predominance for intrusions. The greater percentage of intrusions in the right ear for the word task and in the mixed condition suggests that the right ear (left hemisphere) is most vulnerable to attention switching. We posit that the attention manipulation has a greater effect on word processing than on emotion processing and propose that ear advantages reflect a combination of the effects of attentional and structural constraints on lateralisation. Keywords: Attention; Emotion; Hemispheric specialisation; Language; Top-down/bottom-up processing.

Journal of Cognitive Psychology, 2013
http://dx.doi.org/10.1080/20445911.2013.834905

Rotem Leshem 1,2
1 Department of Psychology, University of California, Los Angeles, CA 90095-1563, USA
2 Department of Criminology, Bar-Ilan University, Ramat Gan 52100, Israel

Published online: 17 September 2013 by Routledge (Taylor & Francis). To cite this article: Rotem Leshem (2013): The effects of attention on ear advantages in dichotic listening to words and affects, Journal of Cognitive Psychology, DOI: 10.1080/20445911.2013.834905.

Dichotic listening (DL) is a method used to estimate hemispheric specialisation for auditory stimuli, in which two acoustically similar stimuli are simultaneously presented to the two ears and the participant is asked to identify, detect or discriminate between them. Although both ears are represented in both hemispheres, contralateral projections are stronger than ipsilateral projections (Hugdahl et al., 2009). Thus, a right ear advantage (REA), reflecting left hemisphere (LH) specialisation, is usually observed for linguistic material, whereas a left ear advantage (LEA), reflecting right hemisphere (RH) specialisation, often occurs for non-verbal material (Bryden & MacRae, 1988). According to the structural model (Kimura, 1967), when two similar acoustic stimuli are presented to the two ears simultaneously, the ipsilateral ear-to-hemisphere projections are suppressed, so that each ear projects more or less exclusively to the opposite hemisphere. This model has received a great deal of empirical support (Hugdahl et al., 2000; Westerhausen et al., 2009). However, some studies have questioned the simple wiring account of ear advantage, in terms of ear-to-hemisphere connections. Is ipsilateral suppression for a given stimulus in either hemisphere partial or complete? To date, there are no definitive data to answer this question, as evidence from neuroanatomical studies (Jäncke, Buchanan, Lutz, & Shah, 2001; Pollmann, 2010) is incomplete or conflicting.
Correspondence should be addressed to Rotem Leshem, Department of Criminology, Bar-Ilan University, Ramat Gan 52100, Israel. Email: rotemlm@yahoo.com. This work was supported by an EU Marie-Curie International Fellowship [PIOF-GA-2009-236183] to Rotem Leshem. This study was conducted while the author was a postdoctoral fellow in Eran Zaidel's Cognitive Neuroscience lab, in the Psychology Department at UCLA. I wish to thank him deeply for his mentorship and support, especially in the conceptualisation of this study.

In 1988, Bryden and MacRae introduced a DL test that simultaneously exhibited LH specialisation for phonological processing of words and RH specialisation for processing emotional prosodies/intonations. This test, known as ‘Dichotic Listening to Words and Affects’ (DLWA), consists of four dichotically paired words (bower, dower, power, tower) spoken in four different emotional tones (sad, happy, angry, neutral). The task requires the participant to identify the target words (word task) or the target affects/emotions (emotion task) in each ear. In normal participants, a significant REA is exhibited on the word task and a significant LEA on the emotion task (Bryden & MacRae, 1988). Grimshaw, Kwasny, Covell, and Johnson (2003) used Bryden and MacRae’s DLWA task to examine whether the words and emotions were processed exclusively in a single hemisphere or bilaterally. They found that when participants processed words with a negative emotional tone (sad), the typical REA was significantly attenuated due to a decrease in LH and an increase in RH performance. They concluded that the RH is normally capable of processing words under some circumstances, such that both hemispheres have this ability. RH involvement in processing words in the DLWA task is supported by behavioural (Hale, Zaidel, McGough, Phillips, & McCracken, 2006) and fMRI (Buchanan et al., 2000) data.
Other studies of both non-clinical participants (Knecht et al., 2000) and individuals with brain damage (Gazzaniga & Sperry, 1967) similarly suggest that the RH can process certain kinds of words. However, few studies to date have examined the effects of attention on ear advantages in the DLWA task (Hale et al., 2006; Jäncke, Buchanan, Lutz, & Shah, 2001). To the best of our knowledge, ours is the first study to compare the effect of mixed and blocked attention-to-ear conditions in the DLWA task. The goal of the present experiment was therefore to compare the ear advantages revealed by the task when attention is directed to one ear at a time for a whole block of trials (blocked condition) and when attention is switched pseudo-randomly between the two ears from trial to trial (mixed condition). In particular, we focused on the effects of attention switching on additional measures of cognitive control, namely, conflict resolution and response inhibition. In the ‘focused-attention’ condition, the participant must focus attention on one ear at a time. When a participant is required to detect verbal stimuli in one ear, responses are believed to be affected by endogenous top-down/attention-driven verbal processes, which are associated with the LH and therefore the right ear. Thus, attending to the left ear during a verbal task creates interference between the top-down/instruction-driven and the bottom-up/stimulus-driven right ear dominance (Bryden, Munhall, & Allard, 1983; Hugdahl & Andersson, 1986; Hugdahl et al., 2000, 2009). Conversely, attending to the right ear in a verbal task facilitates the stimulus-driven REA (Hugdahl et al., 2009; Westerhausen et al., 2009). Similarly, when participants are asked to detect emotional stimuli presented to the left ear, bottom-up processing and top-down processes are believed to work synergistically, both facilitating an LEA. 
By contrast, when instructed to detect emotional stimuli presented to the right ear, participants must overcome the bottom-up/stimulus-driven left ear dominance. This condition creates a conflict between bottom-up and top-down processes, and its resolution is said to require cognitive control (Miller & Cohen, 2001; Westerhausen et al., 2009). Failure to resolve these conflicts in the focused-attention condition will likely result in increased ‘intrusion errors’, i.e. positive responses when the target occurs in the unattended ear. These intrusions reflect failure of top-down attentional processing to inhibit bottom-up processing. This failure may be more likely to occur when attention is switched between the two ears unpredictably in the mixed condition, which seems to involve a greater load on cognitive control (Hugdahl et al., 2009). The degree to which either hemisphere is actually involved in processing the stimuli for which it is not specialised depends on the complexity of the task, attention instructions and the available resources within the hemispheres (Hiscock & Kinsbourne, 2011; Hugdahl et al., 2000). Hemispheric asymmetry is perhaps best understood as a dynamic interaction between bottom-up/stimuli-driven processes and top-down/instruction-driven processes (Hugdahl et al., 2009; Westerhausen et al., 2009). Previous imaging studies have shown that performance on focused-attention DL tasks involves inferior parietal and prefrontal cortex, indicating the existence of a cortical attentional network (Hugdahl et al., 2000; Jäncke et al., 2001; Jäncke & Shah, 2002). Thus, DL is ideal for the study of cognitive factors, such as attention, that may affect the basic stimulus-driven laterality effect. Here, we employed a focused-attention paradigm in which participants had to detect the target ‘bower’ (word task) or the emotion ‘sad’ (emotion task).
We chose these targets because they demonstrated the strongest REA and LEA, respectively, in previous studies (Grimshaw et al., 2003; Grimshaw, Séguin, & Godfrey, 2009). We wanted to compare the effect of switching attention between the ears to blocking it in one ear, on both word and emotion detection. If words are processed bilaterally and negative emotions processed unilaterally (i.e. RH), then both attention conditions should produce smaller ear differences for correct responses (hits) as well as increased intrusions in the word compared to the emotion task. Specifically, if each hemisphere simultaneously processes task-relevant and yet different stimuli, some form of interhemispheric conflict during stimulus processing might occur. This may limit the ability to control responses and increase intrusions for word processing (Hale et al., 2006). Also, if attention switching and conflict resolution involve separate but interdependent mechanisms of cognitive control, there should be a difference between the percentage of correct responses (hits) as well as intrusions in the two attention conditions. Thus, we predicted that, compared to the blocked condition, the mixed condition would be associated with greater difficulty in modulating ear advantages and in overcoming bottom-up stimulus-driven processes, resulting in greater ear differences, especially for emotion processing. This study makes two important contributions. First, the use of both words and affects can add a great deal to our current understanding of the effects of attention on ear advantages, given that almost all research to date has used consonant–vowel syllables as stimuli. Second, it examines the effects of cognitive control on ear advantage, as intrusions from the unattended ear reflect failures of cognitive control. This has implications for the use of DL when studying attentional control in both normal and clinical subgroups.
METHODS

Participants

Twenty-six undergraduate students, all native English speakers, from the University of California, Los Angeles (20 students) and from the International BA programme for US foreign students, Bar-Ilan University, Israel (16 females; mean age 21.1, range: 19–28) participated in the study. All participants completed a shortened version of the Edinburgh Handedness Inventory (Oldfield, 1971). This version involves a seven-item scale with potential scores ranging from –14, indicating maximum left-handedness, to 14, indicating maximum right-handedness (0 indicates ambidextrous status). Only participants scoring between 12 and 14 were included. None of the participants reported a history of neurological illness or hearing deficits.

Dichotic listening to words and affects

Stimuli included the words ‘bower’, ‘dower’, ‘power’ and ‘tower’, spoken in sad, happy, angry and neutral voices (Bryden & MacRae, 1988). Stimuli were presented through Müller headphones, on a 3.00 GHz Intel Pentium D personal computer, running Windows XP, using E-Prime 2.0 software (Schneider, Eschman, & Zuccolotto, 2002). We used a 17-inch LCD monitor with a refresh rate of 75 Hz and a resolution of 1280 × 1024 pixels. Stimuli were digitised in 16 bits at a sampling rate of 44.1 kHz, and edited to a common duration of 500 ms. The original stimulus list included 144 distinct pairings. To highlight the contrast between trials that included the targets in the unattended ear and those that included no targets at all, we used a balanced stimulus set consisting of 72 targets, 72 potential intrusions and 48 simple non-targets, yielding a total of 192 trials.

Procedure

First, all participants were presented with a 1 kHz sine-wave audio tone through headphones to ensure equal hearing in both ears, and to allow modification or calibration on one or both of the two channels if necessary. All participants reported equal hearing at the standardised balance level.
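The stimulus-set arithmetic above can be made concrete. The sketch below assumes, as one plausible reading of "144 distinct pairings" (consistent with Bryden and MacRae's design), that the two ears always receive different words spoken in different emotional tones; only the word and tone lists are taken from the text.

```python
from itertools import product

WORDS = ["bower", "dower", "power", "tower"]
TONES = ["sad", "happy", "angry", "neutral"]

# Ordered (left-ear, right-ear) pairs of *different* words: 4 * 3 = 12.
word_pairs = [(a, b) for a, b in product(WORDS, WORDS) if a != b]
# Likewise 12 ordered pairs of different emotional tones.
tone_pairs = [(a, b) for a, b in product(TONES, TONES) if a != b]

# Crossing them yields every dichotic pairing in which the two ears
# disagree on both word and tone: 12 * 12 = 144 distinct pairings.
pairings = [((wl, tl), (wr, tr))
            for (wl, wr) in word_pairs
            for (tl, tr) in tone_pairs]

assert len(pairings) == 144
```

The balanced 192-trial set (72 targets, 72 potential intrusions, 48 simple non-targets) would then be drawn from these pairings; the exact sampling scheme is not specified in the text.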
Participants were then introduced to each of the 16 types of stimuli presented binaurally, with error feedback provided after each trial. If a mistake was made, the binaural set was presented again. Next, there was a practice block of 10 dichotic pairs with error feedback provided after each trial. The task required participants to detect the target word ‘bower’ (word task) or the target emotion ‘sad’ (emotion task) by pressing ‘Yes’ on the keyboard with the index finger when the target occurred in the attended ear and ‘No’ on the keyboard with the middle finger otherwise. Responses were made using the right (dominant) hand. Each participant received four blocks, corresponding to four combinations of target and ear: (1) target word in the left ear, (2) target word in the right ear, (3) target emotion in the left ear and (4) target emotion in the right ear. These combinations were presented in two experimental conditions. In the blocked condition, participants attended to one ear for an entire block of 192 trials, and then to the other ear for the following block of 192 trials. In the mixed condition, attention was directed alternately to one or the other ear within the same block of trials. The mixed condition included 192 + 192 = 384 trials. The attended ear in both the blocked and mixed conditions was signalled by an arrow (endogenous) presented briefly to the left or right of fixation and by a simultaneous tone (exogenous). The order of attention conditions was counterbalanced across participants, as was the order of attended ear in the blocked condition. Each trial consisted of four events. First, the fixation mark was presented in the middle of the screen for 500 ms, followed by a visual arrow cue and an auditory beep directing attention to the left or to the right ear, presented for 100 ms. After a 250 ms delay, the participant had 2500 ms to respond to the stimulus.
The stimulus onset asynchrony (SOA) was set at 350 ms, to unify the effects of the exogenous and endogenous cues and to allow enough time to orient attention (Müller & Rabbitt, 1989). Thus, the average trial lasted 3300 ms. Overall, the experiment took approximately 80 minutes to complete.

Statistical analysis

The data were analysed with two separate univariate repeated measures analyses of variance (ANOVAs). The first ANOVA consisted of task (emotions, words) × attention condition (AC) (mixed, blocked) × ear (left, right). The dependent variables were proportion hits, proportion intrusions (positive responses when the target occurs in the unattended ear), and median latency of hits. The second ANOVA contained task (emotions, words) × AC (mixed, blocked) with the laterality index (LI) for correct responses (hits) as the dependent variable. Separate univariate repeated measures ANOVAs were carried out for each dependent variable. Degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε < .75). Significant interactions were followed up with paired t-tests, with a Bonferroni correction at .05.

We addressed how often participants correctly detected targets and how quickly they responded on such trials (hits). Next, we addressed the number of times participants failed to inhibit their responses when the ‘correct’ target was presented in the unattended ear (intrusions) during focused-attention conditions. Accuracy LI for each task × AC was calculated as the percentage difference between right (R) and left (L) ear scores, as follows:

LI = (R − L)/(R + L) if (R + L) ≤ 1, and LI = (R − L)/[(1 − R) + (1 − L)] if (R + L) > 1.

Responses with latencies shorter than 150 ms were regarded as premature anticipatory responses and were excluded from analysis. Responses with latencies more than three standard deviations above the sample mean were regarded as distractions and were excluded from analysis as well. Gender was initially included as a between-subjects factor, but showed no significant main effects or interactions and was subsequently removed from the analyses. Table 1 presents the significant effects of the basic ANOVAs for each of the dependent variables, and reports means (SDs), effect sizes and 95% confidence intervals.

RESULTS

Proportion hits

There were significant two-way task × ear and AC × ear interactions, which were qualified by a significant three-way task × AC × ear interaction (Table 1), showing a significant LEA for emotions in both the blocked condition, t(25) = −2.9, p = .007, d = −.58, CI [−.13, −.02], and the mixed condition, t(25) = −4.3, p < .001, d = −.84, CI [−.13, −.05], and a significant REA for words in the blocked condition, t(25) = 2.8, p = .009, d = .56, CI [.03, .16], but no significant difference between ears in the mixed condition, t(25) = .06, p = .95 (Figure 1A). Furthermore, a two-way ANOVA analysing the LI revealed significant main effects of task, F(1, 25) = 10.7, p = .003, η2 = .30, and AC, F(1, 25) = 10.7, p = .002, η2 = .33, showing a greater laterality difference for emotions than for words, and for the mixed than the blocked condition (Figure 1B).

TABLE 1
Results of 2 (task) × 2 (AC) × 2 (ear) analysis of variance (ANOVA)

                                      Mean (SD)      95% CI          F       η2     p
Proportion hits
  Task × ear                                                         11.74   .32    .002
    Emotion          Left             .823 (.154)    [.761, .885]
    Emotion          Right            .742 (.173)    [.672, .812]
    Word             Left             .599 (.157)    [.536, .663]
    Word             Right            .646 (.174)    [.576, .716]
  AC × ear                                                           11.25   .31    .003
    Mixed            Left             .702 (.144)    [.643, .760]
    Mixed            Right            .659 (.149)    [.599, .719]
    Blocked          Left             .720 (.131)    [.667, .773]
    Blocked          Right            .729 (.159)    [.665, .794]
  Task × AC × ear                                                    4.74    .16    .039
    Emotion Mixed    Left             .798 (.169)    [.729, .866]
    Emotion Mixed    Right            .710 (.164)    [.644, .777]
    Emotion Blocked  Left             .848 (.152)    [.787, .910]
    Emotion Blocked  Right            .774 (.194)    [.696, .853]
    Word    Mixed    Left             .606 (.167)    [.538, .673]
    Word    Mixed    Right            .608 (.181)    [.534, .681]
    Word    Blocked  Left             .593 (.171)    [.523, .662]
    Word    Blocked  Right            .684 (.200)    [.603, .765]
Latency for hits (ms)
  Task × ear                                                         11.05   .31    .003
    Emotion          Left             795 (99.09)    [755, 835]
    Emotion          Right            847 (110.1)    [803, 892]
    Word             Left             847 (161.5)    [782, 913]
    Word             Right            818 (146.2)    [759, 877]
Proportion intrusions
  Task × ear                                                         19.37   .44    <.001
    Emotion          Left             .122 (.115)    [.075, .170]
    Emotion          Right            .131 (.117)    [.085, .178]
    Word             Left             .239 (.105)    [.198, .280]
    Word             Right            .319 (.102)    [.277, .362]
  AC × ear                                                           6.83    .22    .015
    Mixed            Left             .183 (.093)    [.145, .220]
    Mixed            Right            .250 (.122)    [.200, .299]
    Blocked          Left             .179 (.095)    [.140, .217]
    Blocked          Right            .201 (.089)    [.165, .237]

CI = confidence interval; all df = 1. Proportion intrusions: right ear intrusions while focusing attention to the left ear indicate interference by a target in the unattended right ear, whereas left ear intrusions while attending to the right ear indicate interference by a target in the unattended left ear.

Latency for hits

There was a significant two-way task × ear interaction, showing a significantly faster reaction time in the left ear than in the right ear for emotion detection, t(25) = −4.3, p < .001, d = −.85, CI [−78, −28], whereas there was no significant difference between the right ear and the left ear for word detection, t(25) = 1.4, p = .20.
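The laterality index and the response-latency trimming rules described in the statistical analysis can be sketched as follows. This is a minimal illustration, assuming hit rates are expressed as proportions in [0, 1] and that the trimming mean and SD are computed over the untrimmed sample (the text does not state the latter explicitly).

```python
import statistics

def laterality_index(r, l):
    """LI from right- and left-ear hit proportions, per the Methods:
    (R - L)/(R + L) when R + L <= 1, else (R - L)/[(1 - R) + (1 - L)].
    Positive values indicate a right ear advantage."""
    if r + l <= 1:
        return (r - l) / (r + l)
    return (r - l) / ((1 - r) + (1 - l))

def trim_latencies(rts_ms):
    """Drop anticipations (< 150 ms) and distractions (> 3 SD above the mean)."""
    mean = statistics.mean(rts_ms)
    sd = statistics.stdev(rts_ms)
    return [rt for rt in rts_ms if 150 <= rt <= mean + 3 * sd]

# Example with the blocked-condition word hit rates from Table 1:
# R = .684, L = .593, so R + L > 1 and the second branch applies.
li = laterality_index(0.684, 0.593)  # positive, i.e. an REA
```

The two-branch denominator keeps the index bounded when combined accuracy is high, which is why it switches to error rates, (1 − R) + (1 − L), once R + L exceeds 1.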
Proportion intrusions

There was a significant two-way task × ear interaction, showing significantly more intrusions coming from the right than from the left ear in the word task, t(25) = 5.5, p < .001, d = 1.1, CI [.05, .11], but no significant difference between ears for the emotion task, t(25) = .64, p = .53. There was also a significant two-way AC × ear interaction, showing significantly more intrusions coming from the right than the left ear in the mixed condition, t(25) = 4.9, p < .001, d = .98, CI [.04, .09], whereas there was no significant difference between the ears in the blocked condition, t(25) = −1.5, p = .15 (Figure 2).

Figure 1. (A) Mean proportion of correct responses (hits) in each task, attention condition, and ear combination, showing a significant LEA in both the mixed and blocked conditions in the emotion task, and a significant REA in the word task in the blocked but not the mixed condition. (B) Laterality index (LI) for the two tasks and the two attention conditions. One-sample t-tests revealed a significant LEA for the emotion task, t(25) = −4.1, p < .001, d = −.80, CI [−.31, −.10], but a non-significant REA for the word task, t(25) = .13, p = .13. Also, there was a significant LEA for the mixed condition, t(25) = −3.1, p = .005, d = −.61, CI [−.20, −.04], but no significant difference between the ears for the blocked condition, t(25) = .11, p = .91. Small bars, standard error of the mean; LI > 0 indicates that the right ear score is greater than the left ear score; **p < .001, *p < .005.

Figure 2. Mean proportion of intrusions index for task × ear and attention condition × ear interactions. (A) Significantly more intrusions in the right than the left ear for the word task, alongside a non-significant difference between the ears for the emotion task. (B) Significantly more intrusions in the right than the left ear in the mixed condition, alongside a non-significant difference between the ears for the blocked condition. Small bars, standard error of the mean; **p < .001.

DISCUSSION

The goal of the present study was to examine how laterality effects in the DLWA task change as a function of switching attention from one ear to the other. We found a significant task × AC × ear interaction for hits index, reflecting an LEA for the emotion task in both attention conditions, and an REA for words in the blocked condition alongside a non-significant REA in the mixed condition. In accordance with our first prediction, the difference between the ears in percentage of hits was smaller in the word task than in the emotion task. This is in keeping with findings showing that both hemispheres are capable of processing words (e.g. Buchanan et al., 2000; Gazzaniga & Sperry, 1967). In particular, this may be the case for emotionally charged words, such as those presented in our word task. The intrusions index also revealed a task × ear interaction reflecting significantly more intrusions coming from the right than the left ear for the word task, whereas the ear on which participants focused attention had no effect in the emotion task. This finding also seems to support the possibility that during this paradigm, words are processed bilaterally, whereas emotions are processed unilaterally. Specifically, because the word task involves both hemispheres in processing task-relevant stimuli, there are fewer controlled responses, which may increase the occurrence of intrusions. Because the emotion task primarily involves an RH mechanism, the direction of focus (left or right ear) has no effect on the occurrence of intrusions. The LEA in both attention conditions, together with the faster reaction time and greater accuracy (LI) in the left than the right ear, strengthens the possibility that the left ear/RH has superiority in processing auditory emotional stimuli.
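As a small consistency note on the statistics reported above: the paired effect sizes track the standard within-subject conversion d_z = t/√n with n = 26 participants. This formula is our inference from the reported values, not one stated by the author.

```python
import math

N = 26  # participants

def cohens_dz(t):
    """Within-subject (paired) effect size implied by a paired t statistic."""
    return t / math.sqrt(N)

# Reported (t, d) pairs, e.g. word-task intrusions t(25) = 5.5, d = 1.1;
# each matches to within rounding of the reported t value.
for t, d in [(5.5, 1.1), (-4.3, -0.84), (4.9, 0.98)]:
    assert abs(cohens_dz(t) - d) < 0.03
```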
This supports findings that emotional prosody can be processed automatically, independently of attention (Gädeke, Föcker, & Röder, 2013; Mitchell, Elliott, Barry, Cruttenden, & Woodruff, 2003; Sander et al., 2005). The fact that there was no significant ear difference in intrusions on the emotion task may suggest that there are combined effects of voluntary attention/top-down processes and automatic attention/bottom-up processes. To suppress targets in the unattended ear, it is necessary to engage cognitive control processes, especially when the target is in the ear contralateral to the dominant hemisphere (Hugdahl et al., 2009). Thus, it seems that emotion and attention can both exert separate modulatory influences on auditory processing (Gädeke et al., 2013; Sander et al., 2005). When a target had to be detected, bottom-up/stimuli-driven processes took precedence over top-down/instruction-driven processes. Conversely, when the target had to be ignored, top-down/instruction-driven processes modulated bottom-up/stimuli-driven processes, as reflected in fewer intrusions in the unattended ear, regardless of attention condition. This ability of the RH to overcome stimuli-driven responses in the unattended ear elucidates how attentional control may interact with emotion processing. With respect to words, the blocked condition accentuated the REA, whereas the mixed condition eliminated the REA. This can also be explained by a combination of the attentional and structural constraints. It is widely accepted that the blocked condition strengthens selective attention to the attended ear (Hugdahl et al., 2000, 2009).
If emotionally charged words involve both RH and LH, it may be that directing attention to the right ear for an entire block of trials enhances LH (specialised for verbal processing) activity, whereas focusing attention to the left ear does not change RH involvement, resulting in an REA. Conversely, the mixed condition emphasises attentional switching between the ears such that attention and expectation differ systematically from trial to trial, requiring additional cognitive resources and creating a more demanding task than in the blocked condition. The absence of the REA in the mixed condition but not in the blocked condition suggests that attentional factors can override the basic REA asymmetry, providing evidence for top-down instruction modulation of a bottom-up/stimulus-driven effect. This is in line with DL studies using forced-attention conditions (but not necessarily the same task and paradigm) in normal participants, which showed that attention can dramatically affect ear advantage for words (Bryden et al., 1983; Hugdahl et al., 2000, 2009). In relation to our second prediction, the two attention conditions differentially modulated performance for emotions and words with respect to both the hits and intrusions indices. With respect to correct responses, we found a greater difference between the ears in the mixed than in the blocked condition for emotions. For words, there was a greater difference between the ears in the blocked than in the mixed condition. The intrusions index yielded a significant AC × ear interaction, reflecting more intrusions in the right than in the left ear only in the mixed condition. The intrusions in the mixed condition suggest failure of top-down/attention-driven processing to inhibit bottom-up/stimulus-driven processing (Bryden et al., 1983; Hugdahl et al., 2000).
It therefore seems that attention switching and conflict resolution are interdependent aspects of cognitive control (Wager, Jonides, & Reading, 2004), and that ‘instruction-driven’ or top-down processing strategies modulate ‘stimulus-driven’ or bottom-up processing strategies less effectively in the mixed than in the blocked condition. Also, the fact that there were more intrusions in the right than in the left ear for the word task and in the mixed condition suggests that the right ear/LH is most vulnerable to attention switching. Indeed, the attenuation or absence of the REA is used as a measure of cognitive dysfunction in brain-damaged patients and also to reveal cognitive impairments in psychiatric patients (Hale et al., 2006; Hugdahl et al., 2000). Overall, in this study it appears that emotion processing was controlled by bottom-up/stimuli-driven processes and was less sensitive to attentional manipulation. Conversely, word processing was most affected by attentional manipulation, as reflected in the difference in ear advantages between the two attention conditions. Specifically, in the blocked condition the ear advantage is related more to perception of the stimulus, whereas in the mixed condition the ear advantage is related more to response selection affected by top-down attention processes. More specifically, this suggests that top-down/instruction-driven attentional processes can modulate the bottom-up/stimuli-driven ear advantage for words. The fact that the right ear/LH is more vulnerable to attention switching and conflict resolution may suggest that the mixed condition relies on top-down cognitive effects of the LH, and that the two attention conditions are associated with different forms of cognitive functioning that are attributed to bottom-up and top-down cognitive processes, especially for word processing.
Thus, ear advantages are not simply a reflection of hemispheric function or of an attentional bias for processing type (words/emotions), but rather a combination of the effects of attentional and structural constraints on lateralisation.

These conclusions need to be confirmed and extended. The paradigm used in this study allows us to examine emotional words and cognitive processes that can clarify attention effects on ear advantages, potentially providing answers to questions that have yet to be addressed. First, we recommend the inclusion of a divided-attention condition, against which the effects of the focused-attention paradigm performed in this study can be compared statistically. Second, we used both exogenous and endogenous cues to maximise the effect of the cue, and thus of attention. However, one cannot be sure that our focused-attention data are entirely free of attentional bias. It could be argued that the use of an exogenous cue, a brief tone presented to the attended side 150 ms before each trial, would more effectively control the influence of attention (Mondor & Bryden, 1992). In many DL experiments, attention to one ear is specified by endogenous cues (e.g., instructions; Hugdahl & Andersson, 1986) or by exogenous cues (Mondor & Bryden, 1992). Thus, future studies of the DLWA task using focused-attention paradigms should include both exogenous and endogenous cueing at different SOAs, to determine the optimal cueing conditions for controlling attention. It will be particularly interesting to examine differences between the mixed and blocked conditions in the timing of cue-to-target intervals, to control the influence of attention most effectively. This will help validate our results regarding the effects of the mixed and blocked conditions on ear advantages. Finally, the conditions and neural architectures that support shifts of hemispheric control to the unspecialised hemisphere remain unknown.
Thus, combined behavioural and neuroimaging studies hold the most promise for extending our understanding of the effects of attention on the interaction between top-down and bottom-up attentional systems in DL tasks, in both normal and clinical populations.

Original manuscript received February 2013
Revised manuscript received June 2013
Revised manuscript accepted August 2013
First published online September 2013

REFERENCES

Bryden, M. P., & MacRae, L. (1988). Dichotic laterality effects obtained with emotional words. Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 1, 171–176.

Bryden, M. P., Munhall, K., & Allard, F. (1983). Attentional biases and the right-ear effect in dichotic listening. Brain and Language, 18, 236–248. doi:10.1016/0093-934X(83)90018-4

Buchanan, T. W., Lutz, K., Mirzazade, S., Specht, K., Shah, N. J., Zilles, K., & Jäncke, L. (2000). Recognition of emotional prosody and verbal components of spoken language: An fMRI study. Cognitive Brain Research, 9, 227–238. doi:10.1016/S0926-6410(99)00060-9

Gädeke, J. C., Föcker, J., & Röder, B. (2013). Is the processing of affective prosody influenced by spatial attention? An ERP study. BMC Neuroscience, 14(1), 1–15. doi:10.1186/1471-2202-14-1

Gazzaniga, M. S., & Sperry, R. W. (1967). Language after section of the cerebral commissures. Brain, 90(1), 131–148. doi:10.1093/brain/90.1.131

Grimshaw, G. M., Kwasny, K. M., Covell, E. D., & Johnson, R. A. (2003). The dynamic nature of language lateralization: Effects of lexical and prosodic factors. Neuropsychologia, 41, 1008–1019. doi:10.1016/S0028-3932(02)00315-9

Grimshaw, G. M., Séguin, J. A., & Godfrey, H. K. (2009). Once more with feeling: The effects of emotional prosody on hemispheric specialisation for linguistic processing. Journal of Neurolinguistics, 22, 313–326. doi:10.1016/j.jneuroling.2008.10.005

Hale, T. S., Zaidel, E., McGough, J. J., Phillips, J. M., & McCracken, J. T. (2006).
Atypical brain laterality in adults with ADHD during dichotic listening for emotional intonation and words. Neuropsychologia, 44, 896–904. doi:10.1016/j.neuropsychologia.2005.08.014

Hiscock, M., & Kinsbourne, M. (2011). Attention and the right-ear advantage: What is the connection? Brain and Cognition, 76, 263–275. doi:10.1016/j.bandc.2011.03.016

Hugdahl, K., & Andersson, L. (1986). The “forced-attention paradigm” in dichotic listening to CV-syllables: A comparison between adults and children. Cortex, 22, 417–432. doi:10.1016/S0010-9452(86)80005-3

Hugdahl, K., Law, I., Kyllingsbaek, S., Bronnick, K., Gade, A., & Paulson, O. B. (2000). Effects of attention on dichotic listening: An 15O-PET study. Human Brain Mapping, 10(2), 87–97. doi:10.1002/(SICI)1097-0193(200006)10:2<87::AID-HBM50>3.0.CO;2-V

Hugdahl, K., Westerhausen, R., Alho, K., Medvedev, S., Laine, M., & Hamalainen, H. (2009). Attention and cognitive control: Unfolding the dichotic listening story. Scandinavian Journal of Psychology, 50(1), 11–22. doi:10.1111/j.1467-9450.2008.00676.x

Jäncke, L., Buchanan, T. W., Lutz, K., & Shah, N. J. (2001). Focused and nonfocused attention in verbal and emotional dichotic listening: An fMRI study. Brain and Language, 78, 349–363. doi:10.1006/brln.2000.2476

Jäncke, L., & Shah, N. J. (2002). Does dichotic listening probe temporal lobe functions? Neurology, 58(5), 736–743. doi:10.1212/WNL.58.5.736

Kimura, D. (1967). Functional asymmetry of the brain in dichotic listening. Cortex, 3(2), 163–178. doi:10.1016/S0010-9452(67)80010-8

Knecht, S., Deppe, M., Drager, B., Bobe, L., Lohmann, H., Ringelstein, E., & Henningsen, H. (2000). Language lateralization in healthy right-handers. Brain, 123(1), 74–81. doi:10.1093/brain/123.1.74

Miller, E. K., & Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24(1), 167–202. doi:10.1146/annurev.neuro.24.1.167

Mitchell, R. L., Elliott, R., Barry, M., Cruttenden, A., & Woodruff, P. W. (2003). The neural response to emotional prosody, as revealed by functional magnetic resonance imaging. Neuropsychologia, 41(10), 1410–1421. doi:10.1016/S0028-3932(03)00017-4

Mondor, T. A., & Bryden, M. P. (1992). On the relation between auditory spatial attention and auditory perceptual asymmetries. Perception & Psychophysics, 52(4), 393–402. doi:10.3758/BF03206699

Müller, H. J., & Rabbitt, P. M. (1989). Reflexive and voluntary orienting of visual attention: Time course of activation and resistance to interruption. Journal of Experimental Psychology: Human Perception and Performance, 15, 315–330. doi:10.1037/0096-1523.15.2.315

Oldfield, R. C. (1971). Assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9(1), 97–113. doi:10.1016/0028-3932(71)90067-4

Pollmann, S. (2010). A unified structural-attentional framework for dichotic listening. In K. Hugdahl & R. Westerhausen (Eds.), The two halves of the brain: Information processing in the cerebral hemispheres (pp. 441–468). Cambridge, MA: MIT Press.

Sander, D., Grandjean, D., Pourtois, G., Schwartz, S., Seghier, M. L., Scherer, K. R., & Vuilleumier, P. (2005). Emotion and attention interactions in social cognition: Brain regions involved in processing anger prosody. NeuroImage, 28, 848–858. doi:10.1016/j.neuroimage.2005.06.023

Schneider, W., Eschman, A., & Zuccolotto, A. (2002). E-Prime computer software and manual. Pittsburgh, PA: Psychology Software Tools.

Wager, T. D., Jonides, J., & Reading, S. (2004). Neuroimaging studies of shifting attention: A meta-analysis. NeuroImage, 22(4), 1679–1693. doi:10.1016/j.neuroimage.2004.03.052

Westerhausen, R., Moosmann, M., Alho, K., Medvedev, S., Hamalainen, H., & Hugdahl, K. (2009). Top-down and bottom-up interaction: Manipulating the dichotic listening ear advantage. Brain Research, 1250, 183–189. doi:10.1016/j.brainres.2008.10.070