by Britney Afram
My supervisor (Heather Payne) works within UCL's Institute of Cognitive Neuroscience in the Visual Communication Group, which conducts research on how the brain processes language in people who are born profoundly deaf. The languages they use vary widely, including British Sign Language (BSL) and spoken language. Studying language processing in people born deaf offers a unique perspective: researchers can compare BSL and spoken language and, by contrasting the networks the two share, identify which brain areas are involved in language regardless of whether it is visual or auditory.
So what is Cognitive Neuroscience?
Cognitive Neuroscience is the study of the neural basis of behavior. It aims to explain cognitive processes in terms of brain-based mechanisms: 'what part of the brain does what'!
What are some of the methods used by the Visual Communication Group?
Since brain processes cannot be observed directly, the VCG use near-infrared spectroscopy (NIRS) to examine the neural basis of signed and spoken language processing. Eye tracking allows them to study how infants attend to visual language input early in life. The Visual Communication Group also use functional transcranial Doppler sonography (fTCD) and functional magnetic resonance imaging (fMRI).
Why Functional Transcranial Doppler sonography (fTCD)?
Some deaf children have a cochlear implant, which makes an MRI scan unsuitable. Functional transcranial Doppler sonography (fTCD) assesses relative blood flow to the left and right sides of the brain. A benefit of this method is its portability, allowing it to be used in different environments.
Setting up the equipment requires attention to the various wires and ports. fTCD uses two laptops: one to record the results from the Doppler box and another to present the stimulus to the participant. The two laptops are connected via a parallel port replicator, which allows signals to be sent much faster, and several cables link the laptops to the Doppler box.
What do we do in a testing session with children born deaf?
During the procedure, ultrasound probes are attached with a conductive gel to the left and right sides of the head, just in front of the ears, approximately perpendicular to the direction of blood flow. This lets us monitor the rate of blood flow in the middle cerebral artery of each brain hemisphere while the participant describes a 12-second silent animation of a moving penguin. Software called QL lets us visualize the signal in real time, confirming that the probe is correctly placed over the artery and showing the blood-flow velocity.
How were the results analyzed?
The results are then loaded into a toolbox for Matlab (a programming language), which extracts the average Doppler signal from the left and right hemispheres over the period of interest during which the task was performed. The resulting graph shows the difference between left and right activations, from which a laterality index is extracted. Positive values indicate left lateralization and negative values indicate right lateralization. In most people, language is processed predominantly by the left hemisphere of the brain.
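To give a feel for what the toolbox computes, here is a minimal sketch in Python (not the actual Matlab toolbox) of one common way to derive a laterality index: express each channel as percent change from its own baseline, then average the left-minus-right difference over the period of interest. The function name, the baseline-correction step, and the toy signals are all illustrative assumptions, not the group's exact analysis.

```python
import numpy as np

def laterality_index(left, right, baseline, period):
    """Average left-minus-right Doppler signal difference over the
    period of interest, after baseline correction.
    left/right: 1-D arrays of blood-flow velocity samples.
    baseline/period: (start, stop) sample indices.
    (Illustrative sketch, not the real toolbox's algorithm.)"""
    # Normalise each channel to percent change from its baseline mean
    left_pc = 100 * (left / left[baseline[0]:baseline[1]].mean() - 1)
    right_pc = 100 * (right / right[baseline[0]:baseline[1]].mean() - 1)
    # Positive index -> left hemisphere dominant; negative -> right
    diff = left_pc - right_pc
    return diff[period[0]:period[1]].mean()

# Toy example: the left channel rises more than the right once the
# task starts at sample 100, so the index comes out positive.
t = np.arange(200)
left = 60 + np.where(t >= 100, 3.0, 0.0)   # velocity in cm/s
right = 60 + np.where(t >= 100, 1.0, 0.0)
li = laterality_index(left, right, baseline=(0, 100), period=(100, 200))
print(li > 0)  # positive -> left-lateralized
```

In a real recording the signals are noisy and heart-rate-locked, so the toolbox also does filtering and epoch averaging that this sketch leaves out.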
My favorite moment of my placement was getting to try the technique out: https://youtu.be/l1S3nIekaPU
Britney spent a week in UCL’s Institute of Cognitive Neuroscience, in the group of Dr Mairead McSweeney, and under the supervision of Heather Payne.