Hi, I'm Edem 👋🏽

I'm a Speech & Neuro-Behavioral ML Scientist applying machine learning to the complexity of human communication. My PhD at the University of Eastern Finland anchors this work: designing auditory-inspired acoustic frontends that make children's speech recognition robust under high variability and limited data. That research feeds directly into my role at aTOOR, where I build and analyze multimodal systems spanning 360°/180° video, VR, ultrasound tongue imaging, eye-tracking, and fNIRS to support remote speech therapy for children.

Experience

From most recent to earliest, my work includes:

  • Collecting and analyzing multimodal data (360°/180° video, VR, ultrasound tongue imaging, eye-tracking, fNIRS) to support remote speech therapy interventions at aTOOR
  • Doctoral research on advancing children's speech recognition through acoustic frontends inspired by the human auditory system at UEF Computational Speech Group
  • Research-led enhancement of Whisper ASR performance for Nordic languages at Spoken OY
  • Microservices for high-volume data processing, UI framework improvements, and automated testing at Morgan Stanley