Most of the research I do is on Speech Perception, and in that area I am most interested in the perception and production of suprasegmentals in general, and lexical prominence in particular. I’ve studied perception in both first and second languages, but I find the latter case to be more interesting. You can see (much) more on my academic résumé.

I completed my PhD in 2016, studying the bidirectional case of Spanish and Japanese, each as the other's L2. Even though the sounds of these two languages are rather similar, their grammar and syntax are very different, as are their ways of marking prominence. I was interested in how (and why) those differences made it easier (or harder) for non-native speakers to hear when a syllable was prominent, and what that says about our perceptual system in general.

Since then, I’ve been working on Automatic Speech Recognition, and in particular on its applications for clinical populations. Along the way, I’ve been moving step by step away from academic research and towards the development of software tools.

I am currently very interested in bringing those two worlds closer together and improving the quality of Research Software. If you agree, you might want to take a look at the Research Software Engineer community in the UK.