Context

Is additive manufacturing going to change the way we fabricate?

Certainly it is. Additive manufacturing has immense growth potential, with roughly 22% annual growth, as it offers mass customisation and reduced waste. One of the challenges in additive manufacturing is improving the precision and reliability of the manufacturing process. This can be done through sensing and optimisation using enhanced software, a market projected to reach $43.5 billion by 2028.

What are the problems prevailing in the industry?

Synchronisation of the digital and physical worlds is a major problem in robotic fabrication. It is also time-consuming to fix multiple problems at once, so it would be better if we could predict some of them in advance. Prototyping takes the most time in additive manufacturing, which makes the process more expensive and less efficient.

How can we fix these problems?

Introducing a feedback loop into the manufacturing workflow can enhance quality and save time, but this demands adaptable workflows. Neural networks are well suited to learning the complex parameter interactions behind problems like shrinkage.

Deformation + Shrinkage

Despite clear economic and environmental benefits, manufacturers have difficulty implementing additive manufacturing at scale, due to concerns about its dimensional accuracy and structural integrity. One of the shortcomings of additive manufacturing across all materials is deformation due to gravity, and shrinkage after the curing process, both of which are very difficult to control and overcome. I am focusing on clay as the material for 3D printing to evaluate my research, as it shows significant deformation after printing and is easy to prototype with. Clay is currently being explored extensively for 3D printing due to its sustainability and adaptability for mass customisation.

Research Goal

Given the growing demand for highly customised, mass-produced 3D-printed elements, an adaptive workflow that predicts and corrects deformations can provide a cost-effective means of digital fabrication with enhanced dimensional accuracy. This research aims to bridge the gap between the digital model and the physical result of 3D printing by using data acquired during the prototyping phase to train an artificial neural network capable of simulating deformations and correcting them through G-code optimisation.

Technology Roadmap

The entire workflow is divided into six steps; here are the tools and apparatus required for each.

Material Preparation

Shrinkage is closely related to the moisture content of the clay. To keep it constant, we perform a simple test to find the absolute moisture content: bake a sample and measure the difference in weight. Once the moisture content is known, water can be added so that the mixture maintains a constant moisture content of 26%.
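As a minimal sketch of this bookkeeping (assuming moisture is measured on a dry basis; the function names are illustrative, not part of the actual workflow):

```python
def absolute_moisture(wet_weight_g, dry_weight_g):
    """Dry-basis moisture content: mass of water relative to the dry clay mass."""
    return (wet_weight_g - dry_weight_g) / dry_weight_g

def water_to_add(batch_weight_g, current_moisture, target_moisture=0.26):
    """Grams of water needed to bring a batch up to the target moisture content."""
    dry_mass = batch_weight_g / (1 + current_moisture)
    return dry_mass * (target_moisture - current_moisture)

# Example: a 110 g sample bakes down to 88 g -> 25% moisture content,
# so a 1 kg batch of that clay needs 8 g of extra water to reach 26%.
m = absolute_moisture(110, 88)
extra = water_to_add(1000, m)
```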

3D Printing parameters

I am keeping print width, print speed, layer height and air pressure constant, so the only variables that change are ambient humidity, temperature and print time.

Scanning

My scanning setup consists of a well-lit environment, a rotating base with a pattern on its surface to improve camera localisation, and ArUco markers used to scale the scan and import it into Rhino later. I’m using my phone camera on a tripod with preset angles to automate and speed up the process. The pattern on the platform significantly increased accuracy and reduced the number of raw images required for a good result, saving time.

Data collection workflow

First, I scan the object using the setup described earlier. The point cloud is then scaled and localised using ArUco detection. After scaling, I clean the point cloud with plane segmentation in OpenCV. Since the ArUco markers’ digital coordinates on the table are known in Rhino, overlaying the scan with the digital design geometry is simple. After that, both geometries are voxelised and voxel occupancy is computed on a predetermined 32×32×32 voxel grid. I’m using 3 mm voxels because they match my print width. If a voxel is occupied, it has a value of 1; otherwise, 0. The complete array of voxels is then serialised into a single column in .csv format for both geometries.
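The occupancy step can be sketched in a few lines of NumPy (the `voxelize` and `serialize` helpers are illustrative stand-ins, not the actual pipeline code):

```python
import numpy as np

def voxelize(points, voxel_size=3.0, grid_shape=(32, 32, 32), origin=(0.0, 0.0, 0.0)):
    """Mark every voxel that contains at least one point as occupied (1), else 0."""
    grid = np.zeros(grid_shape, dtype=np.uint8)
    idx = np.floor((np.asarray(points) - np.asarray(origin)) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.asarray(grid_shape)), axis=1)
    grid[tuple(idx[inside].T)] = 1
    return grid

def serialize(grid, path):
    """Flatten the grid into a single column of 32768 binary values (.csv)."""
    np.savetxt(path, grid.reshape(-1, 1), fmt="%d")

# two points in millimetres: one near the origin, one mid-grid
pts = [[1.0, 1.0, 1.0], [50.0, 50.0, 50.0]]
grid = voxelize(pts)   # occupies voxels (0, 0, 0) and (16, 16, 16)
```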

Synthetic dataset

As I cannot physically print many samples, I generated a synthetic dataset to test my machine learning algorithm. I generate 600 different geometries with varying numbers of sides, sizes and edge variations. All geometries have the same height and fit the same 32×32×32 voxel grid. Each geometry is then simulated for deformation in Kangaroo with predefined loads, and both the source and deformed geometries are converted into .csv files as described earlier.
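The actual geometries are generated in Grasshopper and deformed in Kangaroo, but the undeformed source grids can be approximated in NumPy. A sketch generating extruded regular polygons with varying side counts and radii (parameter ranges are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def prism_grid(n_sides, radius, size=32, voxel=3.0):
    """Voxel occupancy of an extruded regular n-gon centred in the grid (constant height)."""
    xs = (np.arange(size) + 0.5) * voxel - size * voxel / 2   # voxel-centre coordinates
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    ang = np.arctan2(Y, X)
    # point-in-regular-polygon test against the nearest edge's supporting line
    apothem = radius * np.cos(np.pi / n_sides)
    nearest = (ang % (2 * np.pi / n_sides)) - np.pi / n_sides
    inside = np.hypot(X, Y) * np.cos(nearest) <= apothem
    return np.repeat(inside[:, :, None], size, axis=2).astype(np.uint8)

# 600 source geometries with varying side counts and sizes (3 shown here)
dataset = [prism_grid(rng.integers(3, 9), rng.uniform(20, 40)) for _ in range(3)]
```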

Machine learning workflow

Here is a flowchart showing the entire machine learning workflow; we will discuss each step. I am using a GAN (generative adversarial network) as my machine learning algorithm.

GANs are made up of two neural networks, a generator and a discriminator, trained at the same time. The generator produces synthetic data, while the discriminator attempts to separate real training samples from the generated ones. The discriminator’s feedback drives the generator to adjust its parameters and improve the quality of the generated data; the discriminator in turn adapts to the generator’s changes, improving its ability to distinguish real from synthetic data. This cycle repeats until both networks improve their performance. GANs excel at generating synthetic data that closely resembles real data, which is useful in scenarios where collecting large amounts of real data is challenging or expensive.

The generator network architecture is based on U-Net, which is composed of two “paths”. The first is the contraction path, also called the encoder, which captures the context of the input. The second is the symmetric expansion path, also known as the decoder, which enables accurate localisation through transposed convolutions. As we can see, our 32×32×32 voxel array is convolved in three steps down to 4×4×4×128 and then expanded back to the original size.
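The shape flow through those two paths can be traced with a toy sketch. Here, pooling and nearest-neighbour upsampling are stand-ins for the learned strided and transposed convolutions, and the feature channels (up to 128 at the 4×4×4 bottleneck) are omitted; only the spatial dimensions are shown.

```python
import numpy as np

def contract(x):
    """Stand-in for a stride-2 convolution: halves every spatial dimension."""
    d = x.shape[0] // 2
    return x.reshape(d, 2, d, 2, d, 2, -1).max(axis=(1, 3, 5))

def expand(x):
    """Stand-in for a transposed convolution: doubles every spatial dimension."""
    return x.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)

x = np.zeros((32, 32, 32, 1))      # input voxel grid, one channel
shapes = []
for _ in range(3):                 # contraction path (encoder)
    x = contract(x)
    shapes.append(x.shape[:3])
for _ in range(3):                 # expansion path (decoder)
    x = expand(x)
# shapes: [(16, 16, 16), (8, 8, 8), (4, 4, 4)]; x is back to 32x32x32
```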

The discriminator is based on the PatchGAN architecture. A PatchGAN discriminator processes overlapping patches of a predefined size instead of the whole image, and tries to classify each patch as real or fake.
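Patch extraction is the key difference from a whole-image discriminator. A minimal sketch (the 8-pixel patch size and 4-pixel stride are assumptions for illustration):

```python
import numpy as np

def extract_patches(img, patch=8, stride=4):
    """Overlapping patches that the discriminator would score individually."""
    h, w = img.shape
    return np.array([img[i:i + patch, j:j + patch]
                     for i in range(0, h - patch + 1, stride)
                     for j in range(0, w - patch + 1, stride)])

layer = np.zeros((32, 32))          # one horizontal bitmap of the voxel grid
patches = extract_patches(layer)    # 49 overlapping 8x8 patches
# a PatchGAN discriminator outputs one real/fake score per patch
```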

The 3D binary voxel occupancy grid is split into horizontal layers, generating 2D bitmaps of the source and target geometries. These images are later randomised and convolved for training.
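The slicing itself is a one-liner over the z-axis; paired source/target layers are kept together so the mapping survives shuffling. A sketch with random stand-in grids:

```python
import numpy as np

rng = np.random.default_rng(42)
# stand-ins for a (source, target) pair of 32x32x32 occupancy grids
source = rng.integers(0, 2, (32, 32, 32), dtype=np.uint8)
target = rng.integers(0, 2, (32, 32, 32), dtype=np.uint8)

# split along z into 32 paired 2D bitmaps, then randomise their order
pairs = [(source[:, :, z], target[:, :, z]) for z in range(32)]
rng.shuffle(pairs)
```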

The entire dataset is divided into two parts: one for training the model and one for testing its accuracy once trained. The 600 geometries are split in an 80/20 ratio for training and testing. I ran the model through several iterations to ensure accuracy; the final model was evaluated after 9,600 iterations.
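The split itself, sketched with a shuffled index permutation (seed chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
indices = rng.permutation(600)                 # 600 synthetic geometries
split = int(600 * 0.8)
train_idx, test_idx = indices[:split], indices[split:]
# 480 geometries for training, 120 held out for testing
```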

The machine learning model is now ready to be trained. During training, after every hundredth iteration, a model file is saved along with a visual graph displaying the source geometry, generated geometry and target geometry. The training loop also records the test loss and updates the model’s weights accordingly.

Once the model has been trained, we can inspect and validate its prediction accuracy. As we can see here, predictions improve with each iteration.

Data conversion

Observations

  1. The GAN model trains efficiently on binary voxel data.
  2. Once trained, the model predicts deformation well for any geometry falling within the genome of its dataset.
  3. This approach enables real-time deformation simulation once the model is trained.
  4. The workflow can be adapted to predict deformation in any 3D-printing material.
  5. The size of the training set is directly proportional to the versatility of the model.

Future steps

  1. Experiment with machine learning parameters to improve accuracy.
  2. Compare the GAN model with a point-voxel CNN model to check for better performance.
  3. Convert data from deformation prediction to inverse scaling, then to mesh and print file.
  4. Test with real data from the geometry genome of the final case-study prototype.