MediVox, a Launch Hackathon 2016 project
The product was designed and developed over a 48-hour period. By the end we had a working MVP.
The Problem: a disconnect with patients
Healthcare providers miss key facial and body-language cues from their patients while they are busy entering data into Electronic Health Records (EHR/EMR). Patients want a better relationship with their doctors and to feel truly heard. We approached the project knowing that voice recognition and machine learning could greatly improve the average doctor appointment, if we could build a product that lets healthcare providers (doctors and nurses) enter data without typing, looking up codes, or otherwise interacting with the computer instead of the patient.
The Solution: voice recognition and machine-learning
My team at Launch Hackathon (a front-end engineer, a back-end database expert, and me, the UX product designer) used voice recognition and machine learning to let doctors talk conversationally with their patients. Symptoms and other data are captured from the conversation and saved to the patient's electronic health record, prescriptions are sent to their pharmacy, labs are ordered, and follow-up appointments are made.

Because the new interface is largely voice-driven, we discussed how to keep the visual interface simple and intuitive. Past visit data can be reviewed easily, and current biometrics, appointments, notes, and so on are visible at a glance and can be corrected by voice. No typing necessary. Everything at the end of an appointment that could be automated, was: prescriptions sent electronically to the pharmacy, and follow-up appointments scheduled and then texted, emailed, or printed for the patient (depending on the preferences set up).

The plan for this platform is to integrate with the EHR, simplifying a complicated system into a user interface that keeps healthcare providers and their patients happy.
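The core data flow described above can be sketched in a few lines. This is only an illustration, not the hackathon prototype: the real product used voice recognition and machine learning, whereas this toy matcher scans an already-transcribed conversation for keywords. Every name here (`SYMPTOM_KEYWORDS`, `parse_visit`, the action labels) is invented for the example.

```python
# Hypothetical sketch of the MediVox flow: a transcribed doctor-patient
# conversation is scanned for symptoms and follow-up actions, which are
# collected into a structured visit record ready to save to an EHR.
# A real system would use ML-based entity extraction, not keyword matching.

SYMPTOM_KEYWORDS = {"headache", "fever", "cough", "nausea"}

# Map trigger words in the conversation to automated end-of-visit actions.
ACTION_KEYWORDS = {
    "prescribe": "send_prescription_to_pharmacy",
    "lab": "order_labs",
    "follow-up": "schedule_followup",
}

def parse_visit(transcript: str) -> dict:
    """Extract symptoms and automated actions from a visit transcript."""
    text = transcript.lower()
    return {
        "symptoms": sorted(s for s in SYMPTOM_KEYWORDS if s in text),
        "actions": sorted(a for k, a in ACTION_KEYWORDS.items() if k in text),
    }

if __name__ == "__main__":
    transcript = (
        "Patient reports a fever and a persistent cough. "
        "I'll prescribe an antibiotic and order a lab panel; "
        "let's schedule a follow-up in two weeks."
    )
    print(parse_visit(transcript))
```

The point of the structured record is that each extracted action can then be dispatched automatically (e-prescription, lab order, scheduling) while the doctor stays face-to-face with the patient.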