Electromagnetic Articulometry (EMA)
We have acquired two 3D Electromagnetic Articulographs (EMA; Carstens AG501). Each system's excellent temporal resolution (1250 samples per second) allows us to track articulator movements precisely. With two systems, the articulatory movements of two speakers can be tracked simultaneously, making it possible to examine, for example, speech accommodation and speech production in dialogue.
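As a rough illustration of what that temporal resolution affords, the sketch below differentiates 3D sensor positions sampled at 1250 Hz to recover articulator speed. The sensor trajectory here is a synthetic stand-in, not real AG501 data:

```python
import numpy as np

# Hypothetical example: estimate sensor speed from EMA position data.
# Positions are assumed to be an (N, 3) array of x/y/z coordinates in
# millimetres, sampled at the AG501's 1250 Hz rate.
FS = 1250  # samples per second

# Synthetic stand-in for real sensor data: a 5 mm, 10 Hz oscillation in y.
t = np.arange(0, 1, 1 / FS)
positions = np.stack([np.zeros_like(t),
                      5 * np.sin(2 * np.pi * 10 * t),
                      np.zeros_like(t)], axis=1)

# Central-difference velocity (mm/s), then tangential speed.
velocity = np.gradient(positions, 1 / FS, axis=0)
speed = np.linalg.norm(velocity, axis=1)

print(f"peak speed: {speed.max():.0f} mm/s")
```

At this sampling rate the finite-difference estimate is essentially exact for movements at speech-typical frequencies.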
Articulatory Research: Ultrasound Imaging
We have a Zonare Z.One ultrasound system that we use for imaging the surface of the tongue during continuous speech. Although the unit is highly portable (for use in the field), in the lab the ultrasound data are typically collected in conjunction with lip position data (gathered via video cameras). For example, we’re currently using this system to measure how much participants change their tongue and lip positions over the course of an experiment when imitating novel speech patterns.
Oral/Nasal Airflow
Our lab features a Glottal Enterprises Dual View oral/nasal airflow measurement system, which allows us to measure the amount of, and changes in, airflow during speech.
This flow signal can then be saved and analyzed as a standard WAV file and combined with acoustic measurements to study phenomena such as vowel nasalization and other forms of nasal coarticulation.
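Because the flow signal is stored as an ordinary WAV file, it can be loaded with standard audio tools. A minimal sketch, using an in-memory synthetic file so it runs end to end; the mapping from raw sample values to physical flow units is an assumption that depends on system calibration:

```python
import io
import wave

import numpy as np

def read_wav(source):
    """Return (sample_rate, samples) from a 16-bit mono WAV file or buffer."""
    with wave.open(source, "rb") as w:
        fs = w.getframerate()
        raw = w.readframes(w.getnframes())
    return fs, np.frombuffer(raw, dtype=np.int16).astype(float)

# Build a small synthetic "airflow" recording in memory.
fs_out = 11025
n = int(0.5 * fs_out)                       # half a second of data
t = np.arange(n) / fs_out
flow = (1000 * np.abs(np.sin(2 * np.pi * 4 * t))).astype(np.int16)
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)                       # 16-bit samples
    w.setframerate(fs_out)
    w.writeframes(flow.tobytes())
buf.seek(0)

fs, samples = read_wav(buf)
# Mean rectified flow over a 100 ms window starting at 200 ms.
start = int(0.2 * fs)
window = samples[start:start + int(0.1 * fs)]
mean_flow = np.abs(window).mean()
print(f"sample rate: {fs} Hz, mean |flow| in window: {mean_flow:.1f} (raw units)")
```

The same windowed summary can be aligned with acoustic landmarks (e.g., vowel onsets) when studying nasal coarticulation.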
Electroglottography (EGG)
The Glottal Enterprises EG2-PCX electroglottograph is a non-invasive tool for measuring vocal fold vibration. A speaker wears a neck strap holding a pair of electrodes in place at the level of the glottis (LED indicators on the EGG allow for precise placement of the electrodes). The system then sends a low-voltage, high-frequency current between the electrodes (two separate currents between the upper and lower halves of each electrode) and reads changes in electrical impedance across the vocal folds as opening (high impedance) and closing (low impedance). The result is a WAV file that can be read by any standard audio software.
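Since the EGG trace repeats at the rate of vocal fold vibration, a common downstream analysis is fundamental frequency estimation. A minimal autocorrelation-based sketch; the synthetic 120 Hz waveform, sample rate, and pitch search band are all assumptions standing in for a real recording:

```python
import numpy as np

# Hypothetical sketch: estimate f0 from an EGG-like signal by autocorrelation.
fs = 16000
f0_true = 120.0
t = np.arange(0, 0.5, 1 / fs)
# A periodic shape with a second harmonic, roughly mimicking contact cycles.
egg = (np.sin(2 * np.pi * f0_true * t)
       + 0.3 * np.sin(2 * np.pi * 2 * f0_true * t))

# Autocorrelation; keep non-negative lags only.
ac = np.correlate(egg, egg, mode="full")[len(egg) - 1:]

# Search for the strongest lag within a plausible 60-400 Hz pitch range.
lo, hi = int(fs / 400), int(fs / 60)
lag = lo + np.argmax(ac[lo:hi])
f0_est = fs / lag
print(f"estimated f0: {f0_est:.1f} Hz")
```

In practice the same estimate would be computed over short sliding windows to track pitch through an utterance.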
Perceptual and Acoustic Research:
IAC Sound Booth
Our 10′ x 10′ IAC sound-attenuated booth, located in 400 Lorch Hall, is large enough to support four concurrent participants in a perception experiment and features a custom pass-thru port that allows simultaneous ultrasound imaging and high-quality audio recording. The booth is generally configured with measurement-quality microphones for quick audio recording, so a researcher can display prompts or a word list on the participant's digital display while isolating the recording/controlling computer from the audio being recorded.
Eye Tracking
The newest addition to our speech perception lab is a state-of-the-art EyeLink 1000+ Remote eye-tracking system. Via a desk-mounted camera, this system measures participants' eye movements to images on a computer screen as the acoustic speech signal unfolds. The excellent temporal resolution of this system (up to 1000 samples per second) allows us to study the time course of speech perception. Faculty and students are currently testing hypotheses about the use of coarticulatory and social information in participants' moment-by-moment processing of speech.
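Time-course analyses of this kind often reduce the 1000 Hz gaze record to the proportion of looks to a target picture in successive time bins. A minimal sketch with synthetic gaze data; the screen coordinates, target boundary, and 50 ms bin size are illustrative assumptions:

```python
import numpy as np

# Hypothetical sketch: proportion of gaze samples on a target region,
# computed in 50 ms bins relative to word onset, for one 1 s trial
# sampled at 1000 Hz.
FS = 1000
BIN_MS = 50

rng = np.random.default_rng(0)
# Synthetic gaze x-coordinates: looks drift rightward (toward a target
# region at x > 512) as the spoken word unfolds.
t_ms = np.arange(1000)
gaze_x = 400 + 300 * (t_ms / 1000) + rng.normal(0, 40, size=1000)
on_target = gaze_x > 512

# Proportion of target looks per 50 ms bin.
props = on_target.reshape(-1, BIN_MS).mean(axis=1)
print(f"first bin: {props[0]:.2f}, last bin: {props[-1]:.2f}")
```

Averaging these per-bin proportions across trials and participants yields the familiar fixation-proportion curves used to test hypotheses about moment-by-moment processing.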
Other facilities for perceptual testing include several MacBook Pro computers; four Cedrus response boxes; Experiment Builder, PsychoPy, SuperLab, E-Prime, and Linger software; and an array of small but necessary devices: AKG circumaural headphones, Edirol USB audio interfaces, mixing boards, pre-amps, etc.