It wasn't long ago that dental crowns were produced on assembly lines, with rows of workers engaged in the physical effort of building and shaping them.

To make that process faster, more precise and, ultimately, less expensive, dental product maker Glidewell Laboratories has been building a deep learning environment for designing and manufacturing crowns, also known as caps.

Over the past decade, Glidewell has added significant automation to crown creation in the form of robots and ramped-up use of computer-aided design and manufacturing software. But much variability has remained because of the need for humans to manage refinement of a product that demands a high level of precision.

With a production load of 10,000 units a day, Glidewell has plenty of reasons to want to bring more systematic consistency to that refinement process.

To that end, the company is training GPU-powered generative adversarial networks, which are adept at reconstructing detailed 3D models from images. Glidewell will soon be ready to start live production of AI-designed crowns, according to Sergei Azernikov, machine learning team lead at Glidewell, who spoke last month at the GPU Technology Conference in Silicon Valley.

'In the near future, we will have fully automated clients with everything handled by intelligent systems,' said Azernikov.

Glidewell has faced some special challenges in training its networks because it doesn't actually work from images. Its data is in the form of 3D meshes, which aren't as well suited for being run through a neural network.

Azernikov said he and his team initially tried converting the meshes into images, but found that each time they changed the rendering, they had to change their model as well. They then tried representing the data as voxels, but that still didn't provide the desired results.

They ultimately decided to convert the meshes into depth maps, which enabled better recreation of the detailed contours and subtleties of a tooth (and the large majority of crowns are for molars).
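The talk didn't go into implementation details, but the general idea can be sketched: project each mesh onto a viewing plane and record, for every grid cell, the height of the surface at that point. The snippet below is a minimal, vertex-only approximation in Python/NumPy; the function name, the 128x128 resolution and the top-down viewing direction are illustrative assumptions, and a production pipeline would rasterize triangles or cast rays rather than simply binning vertices.

```python
import numpy as np

def mesh_to_depth_map(vertices, resolution=128):
    """Project mesh vertices onto a 2D grid, keeping the highest z per cell.

    vertices: (N, 3) array of x, y, z coordinates (assumed input format).
    Returns a resolution x resolution depth map viewed along the z axis.
    This is a crude vertex-only approximation for illustration only.
    """
    xy = vertices[:, :2]
    z = vertices[:, 2]

    # Normalize x, y into [0, resolution) grid coordinates.
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    grid = ((xy - mins) / (maxs - mins + 1e-9) * (resolution - 1)).astype(int)

    # Depth buffer: keep the maximum height seen in each cell.
    depth = np.full((resolution, resolution), np.nan)
    for (col, row), height in zip(grid, z):
        if np.isnan(depth[row, col]) or height > depth[row, col]:
            depth[row, col] = height

    # Fill cells no vertex landed in with the minimum height so the map is dense.
    depth = np.where(np.isnan(depth), np.nanmin(depth), depth)
    return depth

# Example: a random point cloud standing in for a scanned tooth surface.
demo_vertices = np.random.rand(5000, 3)
print(mesh_to_depth_map(demo_vertices).shape)  # (128, 128)
```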

That level of detail is necessary to ensure that a crown meets three requirements: its shape lets it fit optimally between adjacent teeth; it occludes properly with the opposing teeth; and its geometry supports effective biting and chewing.

Combining depth maps with GANs, in which one network generates candidate designs and a second inspects them, results in crowns that have even more anatomical detail than the original teeth they're replacing. The generative network's job is to produce output that fools the inspection network into mistaking it for a real scan; as the two push against each other, the generated crowns grow increasingly precise.
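Glidewell hasn't published its network architecture, so the PyTorch sketch below only illustrates the adversarial setup described above: a generator that maps random latent vectors to single-channel 128x128 depth maps, and a discriminator (the "inspection" network) that tries to tell generated maps from scanned ones. All layer sizes, class names and hyperparameters are assumptions, not the company's actual design.

```python
import torch
import torch.nn as nn

# Hypothetical minimal GAN for 128x128 single-channel crown depth maps.
# Glidewell's actual architecture isn't public; this only illustrates the
# generator-vs-inspector training loop described above.

class Generator(nn.Module):
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (128, 16, 16)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),    # 128x128
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),     # 64x64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),    # 32x32
            nn.Flatten(), nn.Linear(64 * 32 * 32, 1),                        # real/fake logit
        )

    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, real_depth_maps, opt_g, opt_d, latent_dim=100):
    """One adversarial update: the discriminator learns to separate scanned
    crowns from generated ones, and the generator learns to fool it."""
    loss_fn = nn.BCEWithLogitsLoss()
    batch = real_depth_maps.size(0)
    z = torch.randn(batch, latent_dim)

    # Discriminator step: real maps labeled 1, generated maps labeled 0.
    fake = gen(z).detach()
    d_loss = loss_fn(disc(real_depth_maps), torch.ones(batch, 1)) + \
             loss_fn(disc(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: reward output the discriminator mistakes for real scans.
    g_loss = loss_fn(disc(gen(z)), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Training both networks at once is what makes the process delicate, as Azernikov notes below: each update to the generator changes the distribution the discriminator is judging, and vice versa.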

It's an approach that, while highly effective, puts more demand on the deep learning process underneath it.

'To train one network is difficult,' said Azernikov. 'To train two networks simultaneously is even more difficult.'

That said, Glidewell has seen impressive results. The company first started experimenting with AI three years ago, and initial training of networks on CPUs took six weeks. Moving to the first-generation NVIDIA TITAN GPU shortened that to six days. Upping the ante to an NVIDIA TITAN X paired with NVIDIA's cuDNN deep learning library cut that down to just two-and-a-half days.

Azernikov said that training is still done locally on the TITAN X, but that inference happens in the company's custom-built Amazon Web Services environment, which runs a variety of NVIDIA GPUs. His team is also working with TensorRT (in combination with the CUDA runtime) to accelerate inference.
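The article doesn't describe how trained models move from the training machine to the AWS inference environment, but one common path into TensorRT is via ONNX: export the trained network from the training framework, then have TensorRT's ONNX parser build an optimized engine on the serving GPU. A minimal sketch, reusing the hypothetical Generator from the earlier example and an assumed checkpoint filename:

```python
import torch

# Hypothetical hand-off from training to deployment: export the trained
# generator to ONNX, which TensorRT's ONNX tooling can then optimize into
# an inference engine on the serving GPUs.
gen = Generator(latent_dim=100)
gen.load_state_dict(torch.load("crown_generator.pt"))  # assumed checkpoint path
gen.eval()

dummy_latent = torch.randn(1, 100)  # example input that defines the graph shape
torch.onnx.export(
    gen,
    dummy_latent,
    "crown_generator.onnx",
    input_names=["latent"],
    output_names=["depth_map"],
    dynamic_axes={"latent": {0: "batch"}, "depth_map": {0: "batch"}},
)
```

From there, something like `trtexec --onnx=crown_generator.onnx --saveEngine=crown_generator.plan` could build a serialized engine for serving, though the exact deployment flow at Glidewell isn't described in the talk.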

'Inference is critical for us,' he said. 'Training happens once and inference can go on for months.'

Azernikov intends for Glidewell patients to be getting AI-designed crowns sometime this year, and looks forward to the dependability it will bring to a product category that historically has had to account for a lot of variability.

'The biggest advantage of AI is that once you train it,' said Azernikov, 'it will be consistent no matter what.'
