
Medical Devices / Manufacturing
Singapore & Global · 10 months

AI 3D Ear Modelling for a Global Hearing Aid Manufacturer
Developed AI-powered 3D ear modelling technology for a global hearing aid manufacturer, enabling the generation of precise ear canal models from smartphone photographs rather than traditional silicone ear impressions. The system reduces the hearing aid fitting process from a multi-visit clinical procedure to a single remote interaction, opening a direct-to-consumer channel previously impossible in the hearing aid industry.
Sub-mm: 3D reconstruction accuracy
Same day: model generation (was weeks)
Smartphone: no specialised hardware required
Challenge
Hearing aid manufacturing has a critical bottleneck: the ear impression. Custom hearing aids require a precise model of the user's ear canal, traditionally obtained by a trained audiologist injecting silicone into the ear, waiting for it to cure, extracting the impression, and shipping it to the manufacturer for 3D scanning and production. This process requires a clinic visit, trained personnel, and physical logistics — constraints that make hearing aids inaccessible to millions of people in regions without nearby audiology clinics and prohibitively slow for the modern consumer who expects digital-first experiences.

The manufacturer had explored digital ear scanning technologies, but existing solutions required expensive dedicated hardware (intraoral-style scanners adapted for ears) that cost tens of thousands of dollars per unit — affordable for large audiology chains but not for independent audiologists or direct-to-consumer models. What the manufacturer needed was a way to generate an accurate 3D ear model from images captured on a standard smartphone — no specialised hardware required.

The technical challenge was formidable. Ear canal geometry is complex, highly variable between individuals, and must be modelled with sub-millimetre accuracy for a hearing aid to fit comfortably and perform acoustically. Smartphone images provide limited depth information, the ear canal's interior is partially occluded from external viewpoints, and lighting conditions during image capture are uncontrolled. Existing photogrammetry approaches couldn't achieve the required accuracy for medical device manufacturing.
Approach
Reluvate developed a deep learning pipeline that reconstructs 3D ear canal geometry from a series of smartphone photographs taken from multiple angles. The system guides the user (or audiologist) through a structured image capture process — a sequence of angles and positions optimised for geometric reconstruction. Computer vision algorithms assess each captured image for quality (focus, lighting, coverage) in real time, prompting recapture if needed.

The 3D reconstruction model was trained on a paired dataset of thousands of ear impressions alongside corresponding photograph sets, learning the mapping between 2D visual features and 3D geometric properties. The model architecture combines multi-view stereo reconstruction with a learned shape prior — a statistical model of ear canal geometry that constrains the reconstruction to anatomically plausible shapes, even for regions with limited visual information. The output is a watertight 3D mesh suitable for direct input into the manufacturer's CAM (computer-aided manufacturing) pipeline.

Validation was conducted against the manufacturer's existing quality standards. 3D models generated from photographs were compared against models generated from traditional silicone impressions for the same ears, measuring geometric deviation across critical fit surfaces. The system was iteratively refined until accuracy fell within the manufacturer's acceptance criteria for custom hearing aid production.
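The real-time quality gate described above can be sketched with two classic heuristics: variance of the Laplacian as a sharpness proxy (low variance suggests a blurry frame) and the fraction of under-exposed pixels as a lighting check. This is a minimal illustration, not the production system; the thresholds and function names are hypothetical, and a real pipeline would also check coverage against the guided capture sequence.

```python
import numpy as np

# Hypothetical thresholds -- illustrative only, not the production values.
BLUR_THRESHOLD = 100.0       # minimum Laplacian variance to count as "sharp"
DARK_FRACTION_LIMIT = 0.5    # max fraction of near-black pixels allowed

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the 3x3 Laplacian response; low values indicate blur."""
    kernel = np.array([[0, 1, 0],
                       [1, -4, 1],
                       [0, 1, 0]], dtype=np.float64)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):                      # direct 3x3 correlation
        for j in range(3):
            out += kernel[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())

def assess_frame(gray: np.ndarray):
    """Return (accepted, per-check results) for one grayscale capture."""
    checks = {
        "sharp": laplacian_variance(gray) >= BLUR_THRESHOLD,
        "exposed": float(np.mean(gray < 30)) <= DARK_FRACTION_LIMIT,
    }
    return all(checks.values()), checks
```

A failing check would trigger the recapture prompt described above, with `checks` indicating whether to ask for steadier framing or better lighting.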
Design Notes
The capture guidance system was a critical design element. The accuracy of the 3D reconstruction depends heavily on the quality and coverage of the input images, but users and audiologists are not photographers. Reluvate built an augmented reality overlay that guides the user through the capture sequence — showing exactly where to position the camera, indicating when the current angle is captured, and providing real-time feedback on image quality. This guidance reduced the skill barrier to zero while ensuring consistent, high-quality input data.

Change management addressed two audiences: audiologists and end consumers. For audiologists, the system was positioned as a tool that eliminates the most time-consuming and least enjoyable part of the fitting process (ear impressions) while preserving their role in hearing assessment, device programming, and patient counselling. For the direct-to-consumer channel, the focus was on making the capture process simple enough that a consumer could complete it at home with a smartphone and a mirror, guided by the app.

Exception handling accounts for the medical device context. If the reconstruction system's confidence in any region of the 3D model falls below the manufacturing tolerance threshold, the system requests additional images from specific angles or recommends a traditional impression for that patient. Conservative error handling is essential — a poorly fitting hearing aid is not merely a quality issue but a patient safety and comfort concern. The system errs on the side of requesting additional input or recommending traditional methods rather than producing a model with uncertain accuracy.
Result
The manufacturer gained the ability to produce custom hearing aids from smartphone photographs, eliminating the silicone impression process for suitable candidates. Fitting time was reduced from a multi-visit process spanning weeks (impression, shipping, manufacturing, fitting) to a streamlined workflow where the 3D model is generated on the same day as capture. The technology enables a direct-to-consumer channel that was previously impossible, expanding the addressable market to consumers in regions without proximate audiology clinics. Accuracy validation confirmed that AI-generated 3D models meet the manufacturer's production quality standards.