The Influence of Attention on Audiovisual Integration Processing
Published: 2018-07-20 15:40
【Abstract】: Multisensory integration refers to the phenomenon whereby information from different sensory channels (vision, audition, touch, etc.), presented at the same time and place, is effectively combined by the individual into a unified, coherent percept. By merging information across sensory channels, multisensory integration reduces noise in the perceptual system and helps individuals perceive information more effectively, which manifests behaviorally as faster and more accurate judgments of simultaneously presented multichannel information. Previous research has termed this processing advantage for bimodal information the redundant signal effect. A review of the literature shows that early studies focused mainly on the characteristics and processing mechanisms of multisensory integration itself; as the dominant sensory channels, the characteristics of audiovisual integration have been the focus of researchers' attention. In recent years, interest has gradually shifted to the relationship between attention and audiovisual integration. However, most existing studies have asked only whether attended and unattended conditions differ in their effect on audiovisual integration, overlooking the fact that attention can be flexibly directed not only to locations and objects but also to sensory modalities. This study therefore examined how attention directed to different sensory modalities affects audiovisual integration, and how the amount of attentional resources and the temporal fluctuation of attention modulate this effect. The dissertation comprises three studies with six experiments in total.

Study 1 used cue stimuli to direct participants' attention to different sensory modalities (attend vision only, attend audition only, attend both vision and audition) and examined whether attention directed to different modalities affects audiovisual integration differently. Experiment 1 used shapes and brief tones as materials; participants made keypress responses to the target stimuli indicated by the cue. Results showed that only when attending to both vision and audition (divided attention) did participants respond fastest to audiovisual bimodal targets, i.e., a redundant signal effect emerged; no redundant signal effect occurred when attending only to vision or only to audition (selective attention). To test whether the redundant signal effect arose from integration of the visual and auditory components of the bimodal target, a race (competition) model analysis was applied to the cumulative distribution functions of participants' response times. The analysis showed that the redundant signal effect in Experiment 1 did stem from integration of the visual and auditory components of the audiovisual targets; that is, audiovisual integration occurred only under divided attention. Experiment 2 modified the materials of Experiment 1: Chinese monosyllabic words spoken by a human voice served as auditory stimuli, forming semantically congruent and incongruent conditions with the visual shapes, to examine whether attention directed to different modalities affects audiovisual speech integration differently. Under selective attention, neither congruent nor incongruent audiovisual targets produced a redundant signal effect. Under divided attention, participants responded fastest to semantically congruent audiovisual targets, i.e., a redundant signal effect emerged for congruent targets, whereas incongruent targets showed no processing advantage. Race model analysis showed that the redundant signal effect arose from integration of the visual and auditory components of the semantically congruent targets. Thus, Experiment 2 showed that only semantically congruent audiovisual targets were integrated, and only under divided attention; incongruent targets were not integrated, and under selective attention no targets were integrated regardless of semantic congruency.

Study 2 built on Study 1 to examine the effect of attentional load on audiovisual speech integration under divided attention, with two experiments addressing visual load (Experiment 3) and auditory load (Experiment 4). Experiment 3 found a redundant signal effect only in the no-visual-load condition; under visual load, none occurred. Race model analysis showed the effect came from integration of the visual and auditory components of semantically congruent audiovisual targets. Experiment 3 thus shows that, even under divided attention, audiovisual integration is modulated by visual attentional load: integration occurred only without load, and not under visual load. Experiment 4 found that regardless of whether an auditory load was presented, participants responded fastest to audiovisual bimodal targets, i.e., a redundant signal effect emerged both with and without auditory load, and the subsequent race model analysis confirmed audiovisual integration in both conditions. Experiment 4 thus shows that audiovisual integration under divided attention is not modulated by auditory load. Taken together, Study 2 indicates that visual and auditory attentional load affect audiovisual speech integration asymmetrically.

Study 3 used rhythmic audiovisual cues and, from the perspective of dynamic attending theory, examined how the fluctuation of attention affects audiovisual speech integration under divided attention, and its neural mechanisms. Experiment 5 set three conditions: targets aligned with the audiovisual rhythm (in-phase), misaligned (out-of-phase), and no audiovisual rhythm (silence). Only in the in-phase condition did participants respond fastest to audiovisual bimodal targets, i.e., a redundant signal effect emerged; the other two conditions showed none. Race model analysis showed that the effect arose from integration of the visual and auditory components of the audiovisual targets. That is, audiovisual integration occurred only when the bimodal target fell on a peak of the attentional rhythm; in the other two conditions, even simultaneously presented visual and auditory components were not integrated. Experiment 6, building on Experiment 5, set in-phase and silent conditions and used event-related potentials (ERPs), with their high temporal resolution, to examine the time course and neural mechanisms by which attentional fluctuation affects audiovisual speech integration. For unimodal auditory targets, N1 amplitudes at frontal and central sites were significantly larger in the in-phase than in the silent condition, as were N1 amplitudes at midline and right-hemisphere electrodes; at Pz and P3, P2 amplitudes were significantly larger in-phase than in silence. For unimodal visual targets, N1 amplitudes at anterior and occipital scalp sites, and P2 amplitudes at frontal electrodes, were significantly larger in-phase than in silence. In sum, for both visual and auditory targets an N1 attention effect emerged, with larger amplitudes in-phase than in silence, indicating that participants were better able to direct attention to the target stimuli in the in-phase condition. Following established EEG analysis methods for audiovisual integration, ERP amplitudes were analyzed in 20-ms windows across the 0-500 ms epoch. ERPs evoked by unimodal auditory and visual targets were summed (A+V) and compared, window by window, with ERPs evoked by audiovisual bimodal targets (AV). Only in the in-phase condition did a superadditive effect (AV > A+V) appear, over right anterior scalp at 121-140 ms and over anterior central scalp at 141-160 ms. This indicates that, under the present experimental conditions, audiovisual integration occurred only when the target fell at an attentional peak, and that the influence of the attentional peak was not sustained but arose 121-160 ms after target onset.

In summary, attention influences audiovisual integration: integration occurred only when attention was directed to both the visual and auditory channels, an effect present for both simple stimuli and audiovisual speech. Second, under divided attention, visual and auditory attentional load affect audiovisual speech integration asymmetrically: no integration occurred under visual load, whereas integration persisted under auditory load. Finally, attentional fluctuation under divided attention influences audiovisual speech integration: integration occurred only when the target fell at an attentional peak, and this influence was not sustained, emerging 121-160 ms after target onset.
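The race (competition) model analysis used throughout the studies above tests whether fast bimodal responses exceed what parallel, non-integrative processing could produce: integration is inferred where the cumulative distribution of redundant-target RTs exceeds the sum of the two unimodal CDFs (Miller's inequality). A minimal sketch with hypothetical RT data (all variable names and distributions are illustrative, not the dissertation's actual data):

```python
import numpy as np

def race_model_violation(rt_v, rt_a, rt_av, quantiles=np.arange(0.05, 1.0, 0.05)):
    """Test Miller's race-model inequality:
    integration is inferred wherever P(RT_AV <= t) > P(RT_V <= t) + P(RT_A <= t)."""
    # Evaluate all CDFs at the quantile points of the redundant-target RTs
    t = np.quantile(rt_av, quantiles)
    ecdf = lambda rts, ts: np.searchsorted(np.sort(rts), ts, side="right") / len(rts)
    g_av = ecdf(rt_av, t)                 # bimodal (redundant-target) CDF
    bound = ecdf(rt_v, t) + ecdf(rt_a, t) # race-model upper bound
    return t, g_av - bound                # positive values = violation (integration)

# Hypothetical RT samples in ms: redundant targets are answered faster
rng = np.random.default_rng(0)
rt_v = rng.normal(450, 50, 200)   # visual-only targets
rt_a = rng.normal(460, 50, 200)   # auditory-only targets
rt_av = rng.normal(380, 40, 200)  # audiovisual targets
t, viol = race_model_violation(rt_v, rt_a, rt_av)
print((viol > 0).any())  # any positive value violates the race model
```

In practice the violation is assessed per quantile across participants with appropriate statistics; this sketch only shows the shape of the computation.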
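The windowed superadditivity analysis described above (AV compared against A+V in consecutive 20-ms bins of the 0-500 ms epoch) can be sketched as follows. The sampling rate, single-channel signals, and effect latency are hypothetical placeholders; a real analysis would operate on epoched multi-channel EEG with statistics across subjects:

```python
import numpy as np

FS = 500                                 # assumed sampling rate (Hz): 2 ms/sample
WIN_MS = 20
SAMPLES_PER_WIN = WIN_MS * FS // 1000    # 10 samples per 20-ms window

def windowed_superadditivity(erp_a, erp_v, erp_av):
    """Mean (AV - (A + V)) amplitude per 20-ms window; positive window means
    are candidate superadditive integration effects."""
    diff = erp_av - (erp_a + erp_v)
    n_win = diff.shape[-1] // SAMPLES_PER_WIN
    trimmed = diff[..., : n_win * SAMPLES_PER_WIN]
    return trimmed.reshape(*diff.shape[:-1], n_win, SAMPLES_PER_WIN).mean(-1)

# Hypothetical single-channel ERPs over a 0-500 ms epoch (250 samples)
n = 250
tms = np.arange(n) * 1000 / FS
erp_a = np.sin(tms / 40)
erp_v = np.cos(tms / 60)
# Simulate extra bimodal activity around 120-160 ms (cf. the 121-160 ms effect)
erp_av = erp_a + erp_v + (np.abs(tms - 140) < 20) * 0.5

win_means = windowed_superadditivity(erp_a, erp_v, erp_av)
peak_win = int(np.argmax(win_means))
print(peak_win * WIN_MS, (peak_win + 1) * WIN_MS)  # window bounds (ms) of the effect
```

The window-wise means would then be tested against zero (e.g., per electrode cluster) to localize when and where AV exceeds A+V.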
【Degree-granting institution】: Tianjin Normal University
【Degree level】: Doctoral
【Year conferred】: 2016
【CLC number】: B842.3
Document No.: 2134002
Related doctoral dissertation:
1. Gu Jiyou. The Influence of Attention on Audiovisual Integration Processing [D]. Tianjin Normal University, 2016.
Link: http://sikaile.net/shekelunwen/xinlixingwei/2134002.html