MediVox, a Launch Hackathon 2016 project

The product was designed and developed over a 48-hour period. By the end we had a working MVP (Minimum Viable Product).

medivox-logo-design-hackathon.png
 
medivox-wireframing-machine-learning-voice-recognition-biometrics.png

The Problem: healthcare providers disconnected from their patients

Healthcare providers miss key facial expressions, body language, and conversational cues from their patients while they are heads-down entering data into Electronic Health Records (EHR/EMR).

The Solution: voice recognition with machine learning

My team of three at Launch Hackathon consisted of a front-end engineer, a back-end database expert, and a product designer. We used voice recognition and machine learning to let doctors and patients talk conversationally while the system captured the conversation into the appropriate areas (biometrics, symptoms, prescriptions, labs, next appointment, and notes), displayed the information in real time, allowed edits via voice (no keyboard needed), and saved everything to the patient's EHR (Electronic Health Record). By the end of the 48-hour event we had an MVP that captured biometrics, symptoms, and notes. Future iterations were planned to let prescriptions be created and sent to the patient's pharmacy, labs be ordered, and follow-up appointments be made. Because the interface was largely voice-driven, we kept the visual interface simple and intuitive.
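The capture step described above can be sketched in miniature. The MVP's actual routing logic was not preserved, so the section names, keywords, and function below are illustrative assumptions: a minimal keyword-based router that sorts transcribed phrases into EHR sections, with notes as the fallback.

```python
# Hypothetical sketch of the capture step: route transcribed phrases into
# EHR sections by keyword. The keywords and section names are assumptions,
# not the MVP's real implementation.
EHR_SECTIONS = {
    "biometrics": {"blood pressure", "heart rate", "weight", "temperature"},
    "symptoms": {"pain", "cough", "fatigue", "nausea"},
    "notes": set(),  # fallback: anything unmatched lands here
}

def route_phrase(phrase: str) -> str:
    """Return the EHR section a transcribed phrase belongs to."""
    lowered = phrase.lower()
    for section, keywords in EHR_SECTIONS.items():
        if any(kw in lowered for kw in keywords):
            return section
    return "notes"

# Build a simple per-visit record from a transcript.
record = {section: [] for section in EHR_SECTIONS}
transcript = [
    "Blood pressure is 120 over 80",
    "Patient reports mild fatigue",
    "Follow up in two weeks",
]
for phrase in transcript:
    record[route_phrase(phrase)].append(phrase)
```

A production version would replace the keyword sets with a trained classifier, but the routing shape (phrase in, section out, notes as catch-all) stays the same.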

The plan was to integrate the platform into the patient's permanent EHR, simplifying the user interface in a way that would keep healthcare providers and their patients happy.


TOOLS USED:

Bohemian Coding Sketch App
Illustrator (logo and icon development)
GitHub
Devpost