Reluvate
Award-Winning AI-Powered CAD Assessment for National Education

Education

Singapore · 14 months


Developed an AI-powered automated assessment system for Computer-Aided Design (CAD) student submissions, replacing hours of manual instructor evaluation with instant, educational feedback. The system won the Ministry of Education (MOE) Innergy Award (Gold) — national recognition for innovation in education — and has been adopted internationally.

Gold

MOE Innergy Award

Seconds

Assessment time (was 15-20 min each)

International

Adoption beyond Singapore

Challenge

CAD assessment in technical education is one of the most time-consuming evaluation tasks an instructor faces. Each student submission is a three-dimensional model that must be evaluated against a detailed rubric covering dimensional accuracy, design intent, feature relationships, constraint usage, and manufacturing feasibility. An experienced instructor might spend 15-20 minutes per submission, and with class sizes of 30-40 students submitting multiple assignments per semester, the assessment burden consumed thousands of instructor hours across the institution annually.

The manual process had quality and consistency problems beyond speed. Different instructors weighted rubric criteria differently. Assessment quality degraded when instructors were fatigued from evaluating dozens of models in sequence. Students received scores but rarely received the detailed, constructive feedback that would help them understand what they did wrong and how to improve. The feedback loop was too slow — by the time graded work was returned, students had already moved on to the next assignment, making it difficult to correct fundamental misunderstandings.

There was also an institutional scaling problem. The education body wanted to expand CAD instruction to more students and more institutions, but the instructor-to-student ratio required for meaningful assessment was a binding constraint. Without automated assessment, growth meant either hiring more instructors (expensive and slow) or reducing assessment quality (unacceptable).

Approach

Reluvate built a computer vision and geometric analysis system that parses CAD file formats (STEP, IGES, and native format files), reconstructs the model's feature tree, and evaluates it against a structured rubric. The system analyses dimensional accuracy by comparing student models against reference geometry, evaluates design intent by examining how features are constrained and related to each other, and assesses manufacturing feasibility by checking for common design-for-manufacturing violations.

The feedback engine was designed to be educational, not just evaluative. For each rubric criterion, the system generates specific, constructive feedback explaining what the student did, what the expected approach was, and why the expected approach is preferred. Feedback references specific features in the student's model — "Your fillet on Edge 7 uses a fixed radius of 3mm; consider using a variable radius driven by the adjacent face curvature to maintain tangent continuity" — rather than generic comments. This level of specificity was essential for the system to be accepted by instructors as a genuine improvement over manual grading.

Reluvate worked closely with CAD instructors throughout development, conducting iterative calibration sessions where the AI's assessments were compared against expert instructor evaluations. The system's rubric weights and feedback templates were refined until instructor panels agreed that the AI's assessments were at least as accurate as, and more detailed than, their own manual evaluations. The final system was deployed with an instructor dashboard showing class-wide performance patterns, common errors, and learning progression over time.
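To make the dimensional-accuracy and feedback-generation steps concrete, here is a minimal sketch of what a rubric-driven dimension check might look like. All names and the rubric schema (`DimensionCheck`, `assess_dimensions`, the specific tolerances) are hypothetical illustrations, not the production system's API:

```python
from dataclasses import dataclass

@dataclass
class DimensionCheck:
    """One measurable dimension from the reference model (hypothetical schema)."""
    feature: str       # e.g. "Hole 3 diameter"
    expected: float    # reference value in mm
    tolerance: float   # allowed deviation in mm
    weight: float      # contribution to the rubric score

def assess_dimensions(measured: dict, rubric: list):
    """Score each dimension against the rubric and emit feature-specific feedback."""
    score, max_score, feedback = 0.0, 0.0, []
    for check in rubric:
        max_score += check.weight
        actual = measured.get(check.feature)
        if actual is None:
            feedback.append(f"{check.feature}: feature missing from your model.")
            continue
        if abs(actual - check.expected) <= check.tolerance:
            score += check.weight
        else:
            feedback.append(
                f"{check.feature}: measured {actual:.2f} mm, expected "
                f"{check.expected:.2f} mm (tolerance {check.tolerance:.2f} mm)."
            )
    return score / max_score, feedback

rubric = [
    DimensionCheck("Hole 3 diameter", 10.0, 0.1, 2.0),
    DimensionCheck("Base plate thickness", 5.0, 0.05, 1.0),
]
pct, notes = assess_dimensions(
    {"Hole 3 diameter": 10.02, "Base plate thickness": 5.3}, rubric
)
# The hole passes its tolerance; the thickness fails and gets a specific note.
```

The key design point illustrated here is that every failed criterion produces feedback naming the specific feature and the expected value, rather than a bare deduction.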

Design Notes

The design had to work within the institution's existing IT infrastructure, which meant running on standard institutional hardware without requiring GPU clusters or cloud-based processing (due to data sovereignty requirements for student records). Reluvate optimised the geometric analysis pipeline to run efficiently on CPU, reserving the heavier ML-based evaluation components for batch processing during off-peak hours.

Change management was handled through a co-design process with instructors. Rather than presenting a finished system, Reluvate involved instructors in defining the rubric encoding, reviewing AI-generated feedback, and calibrating scoring thresholds. This co-design approach meant that instructors felt ownership of the system rather than viewing it as a replacement. Several instructors reported that the process of formalising their tacit assessment criteria into structured rubrics actually improved the consistency of their own manual grading.

Exception handling accounts for the creative nature of design work. Not all valid solutions look the same — a student might achieve a functional design through an unconventional approach that the rubric doesn't explicitly anticipate. The system includes a divergence detection module that flags submissions where the student's approach differs significantly from expected solutions without necessarily being wrong. These submissions are routed to instructors for manual review with the AI's analysis as a starting point, ensuring that creative solutions are evaluated fairly rather than penalised for non-conformity.
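One way to picture the divergence detection and routing step: compare the distribution of feature types in the student's feature tree against known reference solutions, and send low-similarity submissions to an instructor. This is a simplified sketch under assumed mechanics (feature-type histograms and cosine similarity); the actual module's signals and thresholds are not described in detail by the source:

```python
import math

def feature_histogram(feature_tree):
    """Normalised counts of feature types (e.g. 'extrude', 'fillet') in a model."""
    counts = {}
    for f in feature_tree:
        counts[f] = counts.get(f, 0.0) + 1.0
    total = sum(counts.values()) or 1.0
    return {k: v / total for k, v in counts.items()}

def cosine_similarity(a, b):
    """Cosine similarity between two sparse histograms."""
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def route(student_tree, reference_trees, threshold=0.75):
    """Auto-grade if the approach resembles any expected solution; else flag for review."""
    hist = feature_histogram(student_tree)
    best = max(cosine_similarity(hist, feature_histogram(r)) for r in reference_trees)
    return "auto" if best >= threshold else "manual review"

reference = [["extrude", "extrude", "fillet", "hole"]]
conventional = route(["extrude", "extrude", "fillet", "hole"], reference)   # "auto"
unconventional = route(["revolve", "sweep", "loft"], reference)             # "manual review"
```

The point of the routing rule is that an unconventional submission is never auto-penalised; it simply loses the fast path and gains a human reviewer, with the automated analysis attached as a starting point.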

Result

The system won the MOE Innergy Award (Gold) for innovation in education. Assessment turnaround dropped from days to seconds. Student feedback quality improved significantly, with detailed, specific commentary on every submission rather than brief margin notes. The system has been adopted by institutions internationally, enabling the education body to scale CAD instruction without proportional increases in instructor headcount.

education · computer-vision · CAD · assessment · MOE · award-winning

Facing a similar challenge?