Pushing the Boundaries of 3D Imaging
Inside Aralia’s BridgeAI Breakthrough
At Aralia Systems, we're redefining what's possible with mobile 3D imaging. As part of the UKRI-funded BridgeAI programme, we recently completed a project to develop AI-driven 3D reconstruction technology that works with just a smartphone. The goal? To combine the power of photogrammetry and photometric stereo into a single, scalable solution: no expensive LiDAR required.
The Challenge: Quality vs. Portability
Most 3D scanning solutions fall into one of two camps: they’re either highly accurate and prohibitively expensive, or affordable and too limited for professional use. Our mission was to break this trade-off by combining two well-known techniques, Multi-View Stereo (MVS) and Photometric Stereo (PS), with the latest in AI research.
Individually, MVS and PS have complementary strengths and weaknesses. MVS captures overall geometry well but struggles with shiny or textureless surfaces. PS excels at fine surface detail under controlled lighting but isn't suited to uncontrolled environments. Merging the two, on mobile hardware, was the technical mountain we set out to climb.
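To make the contrast concrete: classic Lambertian photometric stereo recovers per-pixel surface normals from a handful of images taken under known, calibrated lights, which is exactly the controlled regime where PS shines. A minimal sketch of the textbook method (illustrating the general technique, not our pipeline):

    import numpy as np

    def photometric_stereo(images, light_dirs):
        """Textbook Lambertian photometric stereo.

        images:     (k, h, w) grayscale intensities, one image per light
        light_dirs: (k, 3) unit light directions
        Returns per-pixel unit normals (h, w, 3) and albedo (h, w).
        """
        k, h, w = images.shape
        I = images.reshape(k, -1)                 # stack pixels: (k, h*w)
        # Lambertian model: I = L @ (albedo * n); solve per pixel by least squares
        G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, h*w)
        albedo = np.linalg.norm(G, axis=0)
        normals = (G / np.maximum(albedo, 1e-8)).T.reshape(h, w, 3)
        return normals, albedo.reshape(h, w)

Once the lighting is no longer calibrated, this linear model breaks down, which is exactly why fusing PS with MVS matters for capture in the wild.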
Our Approach: Neural Fusion for Real-World Results
With BridgeAI support, our team developed a neural pipeline that integrates the strengths of both MVS and PS using deep learning. We used uncertainty-aware methods, ray tracing techniques, and implicit neural representations (INRs) to create a system that produces high-fidelity 3D models even from limited or noisy input data.
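For readers curious what that looks like in practice, below is a minimal sketch of an implicit neural representation with uncertainty-weighted fusion: a coordinate MLP maps a 3D point to a signed distance and a log-variance, MVS-derived distance samples are down-weighted where the network is uncertain, and PS normals supervise the gradient of the distance field. The architecture, names, and loss terms are illustrative assumptions, not a description of our production network.

    import torch
    import torch.nn as nn

    class ImplicitSurface(nn.Module):
        """Coordinate MLP: 3D point -> (signed distance, log-variance)."""
        def __init__(self, hidden=256, depth=4):
            super().__init__()
            layers, dim = [], 3
            for _ in range(depth):
                layers += [nn.Linear(dim, hidden), nn.Softplus(beta=100)]
                dim = hidden
            layers.append(nn.Linear(dim, 2))  # [sdf, log_var]
            self.net = nn.Sequential(*layers)

        def forward(self, xyz):
            out = self.net(xyz)
            return out[..., 0], out[..., 1]

    def fused_loss(model, pts, sdf_mvs, normals_ps):
        """Uncertainty-weighted fusion of MVS and PS cues (illustrative)."""
        pts = pts.requires_grad_(True)
        sdf, log_var = model(pts)
        # MVS term: match distance samples derived from multi-view depth,
        # down-weighted where the model predicts high uncertainty.
        mvs_term = ((sdf - sdf_mvs) ** 2 * torch.exp(-log_var) + log_var).mean()
        # PS term: the surface normal is the normalised gradient of the SDF,
        # so PS normals can supervise the field directly.
        grad = torch.autograd.grad(sdf.sum(), pts, create_graph=True)[0]
        n = grad / grad.norm(dim=-1, keepdim=True).clamp_min(1e-8)
        ps_term = (1.0 - (n * normals_ps).sum(dim=-1)).mean()
        return mvs_term + ps_term

Predicting a log-variance alongside the distance is one standard way to let the network itself decide where MVS evidence is trustworthy, rather than hand-tuning a fixed blend between the two cues.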
Here’s what we built:
A deep neural architecture capable of fusing MVS and PS data on the fly
A mobile app to control capture directly from a smartphone
A GPU-powered server backend to perform reconstruction within minutes (a capture-upload sketch follows this list)
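To give a feel for how the app and backend fit together, here is a heavily simplified capture-upload sketch. The endpoint URL, field names, and response fields are hypothetical placeholders; the real app/server protocol is internal.

    import time
    import requests

    SERVER = "https://reconstruction.example.invalid/api"  # placeholder URL

    def upload_capture(image_paths):
        """Send a captured image burst to the GPU backend, then poll for the result."""
        files = [("frames", open(p, "rb")) for p in image_paths]
        job = requests.post(f"{SERVER}/reconstruct", files=files).json()
        while True:  # reconstruction runs server-side, typically within minutes
            status = requests.get(f"{SERVER}/jobs/{job['id']}").json()
            if status["state"] == "done":
                return status["mesh_url"]  # hypothetical response field
            time.sleep(5)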
Testing the System: From Concept to Reality
To validate the technology, we combined academic datasets with our own image capture setups, including a turntable rig synced with our patented smartphone PS device. We ran trials across a variety of lighting conditions and surface types, then tested mobile capture workflows from start to finish.
Key outcomes included:
Superior 3D detail on smooth or low-texture surfaces
Dramatically improved occlusion handling using ray tracing (see the visibility-test sketch after this list)
Real-time capture and server-side reconstruction via a smartphone app
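On the second point, one way ray tracing helps with occlusion is as a visibility test: before fusing an observation of a surface point, march a ray from the camera toward it and discard the observation if geometry blocks the path. A minimal sphere-tracing sketch, assuming an SDF callable like the model sketched earlier; the thresholds are illustrative:

    import numpy as np

    def is_visible(sdf, camera, point, eps=1e-3, max_steps=128):
        """Sphere-trace from the camera toward a point; True if unoccluded.

        sdf: callable mapping a (3,) position to its signed distance.
        """
        direction = point - camera
        total = np.linalg.norm(direction)
        direction = direction / total
        t = 0.0
        for _ in range(max_steps):
            d = sdf(camera + t * direction)
            if d < eps:                        # hit a surface...
                return total - t < 10 * eps    # ...fine only if it is the target
            t += d                             # safe step size: sphere tracing
            if t >= total - eps:               # reached the target unobstructed
                return True
        return False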
What This Means for Industry
This isn't just academic progress; it's a real shift in what's possible for 3D capture on mobile devices. Our solution opens up new opportunities in:
Cultural heritage: Portable scanning of artefacts and sites
Healthcare: 3D modelling for dermatology or orthotic fittings
Construction & surveying: Fast condition assessment in the field
Retail & e-commerce: High-quality product scanning at scale
What’s Next?
We’re continuing to refine the system through field trials with industry partners. Future developments will focus on:
Improving pose estimation and reconstruction speed
Reducing reliance on external hardware
Bringing all processing directly onto mobile devices
Final Thoughts
This project shows how AI can close the gap between consumer hardware and professional-grade 3D imaging. By combining recent advances in neural networks with clever engineering, we’re making it possible to generate detailed 3D models from a device that fits in your pocket.
We’re excited about where this leads, and we’re just getting started.