3D models as efficient basis for AI training

Just as the first industrial revolution rendered much human labor superfluous by replacing it with machines, a comparable development is underway today: human thinking power is increasingly being supplemented by more efficient computational processes. According to an Oxford study, almost half of all jobs in the U.S. could be automated away within the next 20 years. A similar development can be expected in Germany.

Although we are still far from being able to digitally replicate all the functions of the human brain – and may never be able to do so completely – highly specialized, so-called “weak” AI is already being used in many areas to solve specific tasks. Examples include the voice assistants of smartphones and smart home systems, stock market analysis, the navigation and assistance systems of vehicles, and image recognition. Machine learning uses training data to recognize patterns and regularities in words or images, so that new data can be evaluated correctly on that basis. The model here is the human brain, and the process is akin to transfer learning. The essential difference from humans, however, is that the AI does not understand what it sees; it merely assigns the “seen” object to a category with a certain probability, based on features such as color and shape. Such systems often reach their limits when objects no longer conform to the norm and deviate from the data the system was trained on – at such moments, correct recognition fails.
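This "assignment with a certain probability" can be illustrated with a minimal sketch: a trained network outputs raw scores per category, and a softmax turns them into probabilities. The labels and scores below are invented for illustration, not taken from any real system.

```python
import math

def softmax(scores):
    """Convert raw class scores into probabilities that sum to 1."""
    peak = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores a classifier might emit for one image
labels = ["sneaker", "boot", "sandal"]
scores = [2.0, 0.5, -1.0]

probs = softmax(scores)
prediction = labels[probs.index(max(probs))]
```

The classifier never "understands" the image; it only reports that, say, "sneaker" is the most probable of the categories it was trained on – which is exactly why an object unlike anything in the training data defeats it.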

Creation of photorealistic renderings based on high-precision 3D models

With the Virtual Try-On app for footwear from our partners at Vyking, a wide range of shoes can be tried on via AR technology on a smartphone. This innovative digital retail solution lets customers examine products almost as closely as they would in a brick-and-mortar store – only now they can do so from the comfort of their own homes, without having to worry about opening hours or pandemic restrictions. Given these advantages, it should only be a matter of time before this approach becomes standard in online retail.

To further train its AI reliably, Vyking has recently begun to take advantage of the highly scalable processes that 3D digitization enables: instead of training the AI with photos of feet, the team decided to create renderings of 3D models of those feet and feed that data into the AI. The great advantage is that numerous renderings from a wide variety of perspectives can be generated from a single scan. The highly accurate, true-color 3D models created in this way are in no way inferior to 2D photographs of feet, and in their diversity they are perfectly suited as an extremely reliable data basis for optimizing the AI.
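How many perspectives one scan yields can be made concrete with a small sketch. The snippet below – our own illustration, not Vyking's pipeline – only computes camera positions on a sphere around the scanned object; the actual image synthesis would be done by a renderer such as Blender at each of these poses.

```python
import math

def camera_positions(radius, n_azimuth, elevations_deg):
    """Generate camera positions on a sphere around a scanned object
    placed at the origin: one position per (azimuth, elevation) pair."""
    positions = []
    for elev in elevations_deg:
        phi = math.radians(elev)
        for i in range(n_azimuth):
            theta = 2 * math.pi * i / n_azimuth
            x = radius * math.cos(phi) * math.cos(theta)
            y = radius * math.cos(phi) * math.sin(theta)
            z = radius * math.sin(phi)
            positions.append((x, y, z))
    return positions

# 12 azimuth steps at 3 elevation rings = 36 viewpoints from a single scan
views = camera_positions(radius=0.5, n_azimuth=12, elevations_deg=[0, 30, 60])
```

Varying lighting, backgrounds, and lens parameters on top of the viewpoints multiplies the training set further – all from the same 3D model.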

Training assistant robots in virtual environments

This is far from the only way to train AI with 3D models: with computing power on the rise and virtual environments becoming ever more realistic, many endeavors are first designed and tested in the digital world. This includes the continuous improvement of assistance robots. More concretely, we at botpot are currently working on a robotics research project in which a gripper arm with artificial intelligence is being trained in a virtual environment. Numerous diverse objects are digitized and then used in a training simulation to optimize the device.
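One common way such simulations use a library of digitized objects is to randomize each training episode: a subset of the assets is dropped into the workspace at random positions and orientations, so the gripper never sees the same scene twice. The sketch below is a simplified illustration of that idea under our own assumptions, not the actual project code; the asset names are hypothetical.

```python
import random

def sample_scene(objects, n_objects, workspace=((-0.3, 0.3), (-0.3, 0.3))):
    """Place a random subset of digitized objects at randomized
    positions and orientations inside the gripper's workspace (meters)."""
    (xmin, xmax), (ymin, ymax) = workspace
    scene = []
    for name in random.sample(objects, n_objects):
        scene.append({
            "object": name,
            "x": random.uniform(xmin, xmax),     # position on the table
            "y": random.uniform(ymin, ymax),
            "yaw_deg": random.uniform(0, 360),   # orientation about the vertical axis
        })
    return scene

random.seed(7)  # fixed seed so the sketch is reproducible
assets = ["mug", "bottle", "box", "shoe", "toy_car"]
episode = sample_scene(assets, n_objects=3)
```

Each sampled scene would then be loaded into a physics simulator, where the gripper attempts its grasps and the results feed back into training.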

Our photogrammetric 3D scanners are the ideal solution for easily creating 3D assets with photorealistic texture, a pixel resolution down to 0.05 mm, and file sizes small enough for VR/AR integration. At the same time, with a measurement accuracy of up to 1 mm, the 3D models provide a solid basis for extracting measurement data for numerous applications.