
Joe Toscano, Ph.D.

Assistant Professor, Villanova University
    Office: Tolentine 344

Hi, my name is Joe Toscano, and I'm an Assistant Professor in the Psychology Department at Villanova University, where I direct the Word Recognition and Auditory Perception (WRAP) Lab. My research focuses on questions about speech recognition and language processing.

Curriculum Vitae, updated Oct 2016.

Academia.edu page

Google Scholar page

Contact Info

Department of Psychology
Villanova University
800 E Lancaster Ave
Villanova, PA 19085
    Email:  joseph.toscano@villanova.edu
    Phone:  +1-610-519-4755
    Fax:  +1-610-519-4269
    Office:  Tolentine 344
    Lab:  Tolentine 231


I'm originally from snowy Rochester, NY, and received my B.S. in Brain & Cognitive Sciences from the University of Rochester, where I worked with Mike Tanenhaus. Afterwards, I moved to the Midwest and received my Ph.D. in Cognition and Perception from the Department of Psychology at the University of Iowa, working with Bob McMurray. After grad school, I spent three years in Champaign-Urbana as a Beckman Postdoctoral Fellow at the University of Illinois, where I had the opportunity to work with researchers in a number of fields, from Psychology to Electrical Engineering, including Susan Garnsey, Duane Watson, Sarah Brown-Schmidt, Jont Allen, Charissa Lansing, Monica Fabiani, and Gabriele Gratton. Now I'm back in the Northeast: I joined the faculty at Villanova in Fall 2014.


My research focuses on auditory perception and spoken language comprehension. How are we able to recognize speech so accurately, when it remains difficult to build computer systems that can do so equally well? How does the ability to understand speech emerge over development, and how malleable is it in adulthood? How do listeners adapt to accents and understand talkers they have never encountered before? How can we improve speech recognition for listeners who use assistive devices like hearing aids and cochlear implants?

To answer these questions, I use techniques that allow us to study spoken word recognition as it happens. These include cognitive neuroscience methods (ERP and optical neuroimaging techniques) that tell us what information listeners have access to at early stages of perception. I also use eye-tracking approaches to study lexical activation as the speech signal unfolds. These data inform models of speech perception that allow us to ask questions about the limits of unsupervised statistical learning and understand how listeners weight acoustic cues in speech.

For more on my research, check out the WRAP Lab website.


A full list of publications is available on the lab website.

  1. Buxo-Lugo, A., Toscano, J.C., & Watson, D.G. (in press). Effects of participant engagement on prosodic prominence. To appear in Discourse Processes.
  2. Toscano, J.C., & McMurray, B. (2015). The time-course of speaking rate compensation: Effects of sentential rate and vowel length on voicing judgments. Language, Cognition, and Neuroscience, 30, 529-543. [PubMed Central]
  3. Toscano, J.C., Buxo-Lugo, A., & Watson, D.G. (2015). Using game-based approaches to increase level of engagement in research and education. In S. Dikkers (Ed.), TeacherCraft: How Teachers Learn to Use Minecraft in Their Classrooms. Pittsburgh: ETC Press.
  4. Toscano, J.C., & Allen, J.B. (2014). Across and within consonant errors for isolated syllables in noise. Journal of Speech, Language, and Hearing Research, 57, 2293-2307. [PubMed Central]
  5. Toscano, J.C., Anderson, N.D., & McMurray, B. (2013). Reconsidering the role of temporal order in spoken word recognition. Psychonomic Bulletin and Review, 20, 981-987. [PubMed Central]
  6. Toscano, J.C., & McMurray, B. (2012). Cue-integration and context effects in speech: Evidence against speaking-rate normalization. Attention, Perception, and Psychophysics, 74, 1284-1301. [PubMed Central]
  7. Toscano, J.C. (2011). Perceiving speech in context: Compensation for contextual variability at the level of acoustic cue encoding and categorization. Doctoral dissertation, University of Iowa.
  8. Toscano, J.C., McMurray, B., Dennhardt, J., & Luck, S.J. (2010). Continuous perception and graded categorization: Electrophysiological evidence for a linear relationship between the acoustic signal and perceptual encoding of speech. Psychological Science, 21, 1532-1540. [PubMed Central]
  9. Toscano, J.C., & McMurray, B. (2010). Cue integration with categories: Weighting acoustic cues in speech using unsupervised learning and distributional statistics. Cognitive Science, 34, 434-464. [PubMed Central]
  10. Toscano, J.C., Mueller, K.L., McMurray, B., & Tomblin, J.B. (2010). Simulating individual differences in language ability and genetic differences in FOXP2 using a neural network model of the SRT task. In S. Ohlsson & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 2230-2235). Austin, TX: Cognitive Science Society.
  11. McMurray, B., Aslin, R.N., & Toscano, J.C. (2009). Statistical learning of phonetic categories: Insights from a computational approach. Developmental Science, 12, 369-378. [PubMed Central]
  12. McMurray, B., Horst, J., Toscano, J.C., & Samuelson, L.K. (2009). Towards an integration of connectionist learning and dynamical systems processing: Case studies in speech and lexical development. In J.P. Spencer, M. Thomas, & J. McClelland (Eds.), Toward a Unified Theory of Development: Connectionism and Dynamic Systems Theory Re-Considered. New York: Oxford University Press.
  13. Toscano, J.C., & McMurray, B. (2008). Using the distributional statistics of speech sounds for weighting and integrating acoustic cues. In B.C. Love, K. McRae, & V.M. Sloutsky (Eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society (pp. 433-438). Austin, TX: Cognitive Science Society.
  14. Toscano, J.C., Perry, L.K., Mueller, K.L., Bean, A.F., Galle, M.E., & Samuelson, L.K. (2008). Language as shaped by the brain; the brain as shaped by development. Commentary on Christiansen & Chater, "Language as shaped by the brain." Behavioral and Brain Sciences, 31, 535-536.
  15. Toscano, J.C. (2005). Effect of object salience on eye movements in a visual-linguistic task. Honors thesis in Brain and Cognitive Sciences, University of Rochester.
  16. Toscano, J.C., & Beutter, B.R. (2005). Performance during visual search: Designing spacecraft with humans in mind. University of Rochester Journal of Undergraduate Research, 3, 8-12.
  17. Toscano, J.C. (2004). Effect of salience on saccadic performance during visual search. Technical report for NASA Undergraduate Student Research Program.


I regularly teach a Mendel Science Experience (MSE) course, The Sounds of Human Language. The course is designed to provide non-science majors with a background in the techniques and approaches used in the natural sciences. In the class, we explore the acoustic characteristics of speech sounds and the mechanisms underlying the recognition of speech. The course is particularly well-suited for students who are interested in language and speech communication. Here is the description from the syllabus:

This course explores the sounds used in human language and provides an overview of speech science and phonetics, the study of how humans communicate using spoken language. Language comprehension is one of the most complex human behaviors, and the use of language distinguishes humans from other animals. How does spoken language vary across languages, or across different speakers of the same language? Human speech uses a wide variety of sounds, ranging from the consonants and vowels used in English, to differences in pitch used by tone languages like Chinese, to click consonants used in languages like Xhosa. Understanding the sounds used in spoken language is also of practical significance: it can help us improve computer systems designed to recognize speech, it can help us develop better treatments for language disorders, and it can help us improve assistive devices like hearing aids and cochlear implants.

Throughout the course, we will explore basic concepts used in the scientific study of human speech, including how speech sounds are produced, learned, and perceived. We will examine questions about the origin of human language, language acquisition, and the neural basis of language processing. You will gain first-hand experience with the techniques used by scientists to study speech through laboratory exercises designed to illustrate principles of acoustic analysis, speech synthesis, and methods used to study human speech perception. This provides both an introduction to basic concepts in the natural sciences (such as experimental design and hypothesis testing), as well as experience with the specific tools used by scientists who study hearing, speech perception, and language comprehension. We will also examine how this work intersects with research in the humanities and social sciences, and how our understanding of the sounds used in human language influences the way we think about and use language in general.

Prospective Students

Grad Students

