The process begins with human fabrication of woven modules using flexible rods or sticks. Due to tension, compression, and human variability, the woven pattern often deforms. Our system integrates computer vision to scan these deformations and a robotic arm to inject earth between structural members at optimized locations. This bridges physical craft and adaptive digital automation.

What we built: a system that automatically detects stick positions and finds the best location for material injection. Why: woven modules shift due to material behavior, and manually updating robot instructions for each variation is time-consuming. How: by training a YOLOv8 model to detect sticks and using Python to build triangle meshes between detections and select an optimal centroid. This replaces manual point selection in Grasshopper and enables geometry-aware robotic printing.

We designed a closed-loop system: after the module is designed and woven, we simulate toolpaths digitally, then use vision to check the real structure. The ML layer detects the structure's geometry. We compute triangle connections between detected stick centers, extract the centroid of the smallest-area triangle, and have the robot inject material at that most structurally relevant position.
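As a minimal sketch of that geometric step (helper names here are illustrative, not taken from our codebase), the smallest-area triangle can be selected with the shoelace formula:

```python
def triangle_area(p1, p2, p3):
    """Area of a 2D triangle via the shoelace formula."""
    return 0.5 * abs(
        (p2[0] - p1[0]) * (p3[1] - p1[1])
        - (p3[0] - p1[0]) * (p2[1] - p1[1])
    )

def triangle_centroid(p1, p2, p3):
    """Centroid of a triangle is the mean of its vertices."""
    return ((p1[0] + p2[0] + p3[0]) / 3,
            (p1[1] + p2[1] + p3[1]) / 3)

# Pick the injection point: centroid of the smallest-area triangle.
triangles = [((0, 0), (4, 0), (0, 3)), ((1, 1), (2, 1), (1, 2))]
best = min(triangles, key=lambda t: triangle_area(*t))
print(triangle_centroid(*best))  # -> (1.33..., 1.33...)
```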

This diagram shows how the physical and digital systems interact. We start with either a photo or a live camera feed. The image is processed by a Python pipeline that loads a YOLO model trained to detect sticks. After detection, we run geometric logic to build triangle meshes and compute the centroid of the tightest triangle. The result is sent to a robot controller (via serial, JSON, or ROS), closing the loop by physically injecting material at the correct spot on the module.
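A minimal sketch of that hand-off, assuming a JSON-over-serial protocol with pyserial; the port, baud rate, and message schema are hypothetical:

```python
import json
import serial  # pyserial

# Port and baud rate are hypothetical; match them to the controller.
ser = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)

def send_injection_point(x, y):
    """Send the chosen centroid to the robot as one JSON line."""
    payload = {"cmd": "inject", "x": round(x, 2), "y": round(y, 2)}
    ser.write((json.dumps(payload) + "\n").encode("utf-8"))

send_injection_point(412.7, 318.4)
```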

Training data is annotated in Roboflow and exported in YOLOv8 format, and we use Ultralytics YOLO for inference. Once sticks are detected in an image, we calculate the Euclidean distance between their centers and form edges between sticks that are close enough. From combinations of three sticks whose edges are all valid, we generate triangle meshes. We then compute each triangle's centroid and select the centroid of the smallest-area triangle as the injection location. Outputs include annotated images and a CSV of triangle/centroid data.
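The sketch below condenses this pipeline; the weights path, image name, and pixel threshold are illustrative, and the inference call follows the standard Ultralytics API:

```python
import csv
from itertools import combinations
from math import dist

from ultralytics import YOLO

MAX_EDGE = 250.0  # illustrative pixel threshold for a valid edge

model = YOLO("runs/detect/train/weights/best.pt")  # illustrative path
result = model("module.jpg")[0]

# Stick centers from bounding boxes (xywh = center x, center y, w, h).
centers = [(float(x), float(y)) for x, y, _, _ in result.boxes.xywh]

# Keep triangles whose three edges all fall under the threshold.
triangles = [
    (a, b, c)
    for a, b, c in combinations(centers, 3)
    if dist(a, b) < MAX_EDGE and dist(b, c) < MAX_EDGE and dist(a, c) < MAX_EDGE
]

def area(a, b, c):
    """Shoelace formula for triangle area."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

rows = [
    {
        "cx": (a[0] + b[0] + c[0]) / 3,
        "cy": (a[1] + b[1] + c[1]) / 3,
        "area": area(a, b, c),
    }
    for a, b, c in triangles
]

with open("triangles.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["cx", "cy", "area"])
    writer.writeheader()
    writer.writerows(rows)

best = min(rows, key=lambda r: r["area"])  # smallest-area triangle wins
print("injection point:", best["cx"], best["cy"])
```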

We collected around 50–100 images of different woven module layouts under varied lighting and distortion. These were labeled in Roboflow with bounding boxes around each ‘stick’. The data was then used to train a YOLOv8 model. We exported the trained weights and used them locally in our Python pipeline to perform real-time inference.
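Training itself is a short Ultralytics call; the dataset path is illustrative, and hyperparameters such as epochs and image size would need tuning for our actual data:

```python
from ultralytics import YOLO

# Fine-tune pretrained YOLOv8 nano weights on the Roboflow export
# (dataset path and hyperparameters are illustrative).
model = YOLO("yolov8n.pt")
model.train(data="stick-detection/data.yaml", epochs=100, imgsz=640)
# Best weights land in runs/detect/train/weights/best.pt by default.
```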

To improve stick detection with OpenCV, we ran a series of color-based tests. We first used red markers, but they were too close to the background color and detection failed. Switching to blue stickers gave much better results, and we then measured and labeled the distances between sticks. Finally, we ran the same code on another model, where lighting shifted the blue color and detection failed again. These tests clarified what works and what doesn't in color-based detection.
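For reference, the blue-sticker tests followed the usual HSV-masking pattern in OpenCV; the HSV bounds below are approximate, which is precisely why lighting changes broke detection:

```python
import cv2
import numpy as np

img = cv2.imread("module.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Approximate HSV range for blue stickers; lighting shifts these bounds,
# which is the failure mode we hit on the second model.
lower, upper = np.array([100, 120, 70]), np.array([130, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
centers = []
for c in contours:
    if cv2.contourArea(c) < 30:  # skip specks
        continue
    m = cv2.moments(c)
    centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
print(centers)
```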

After detecting the sticks, we extracted features like position, distance, and spacing. We saved the data in a CSV file, then used pandas in Google Colab to create visualizations. The top histograms show the range of spacing values. The bottom scatter plots show how features like position and stick count relate to spacing. Since our dataset is still small, each value appears only once, but this gives us a clear idea of their ranges.
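A minimal sketch of that Colab plotting step, assuming hypothetical file and column names (features.csv, spacing, stick_count):

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("features.csv")  # file and column names are illustrative

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
df["spacing"].hist(ax=axes[0], bins=15)
axes[0].set_title("Spacing distribution")
df.plot.scatter(x="stick_count", y="spacing", ax=axes[1])
axes[1].set_title("Stick count vs. spacing")
plt.tight_layout()
plt.show()
```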

When sticks were evenly distributed and moderately spaced, the triangle mesh formed consistently and the optimal centroid was stable. In images with sparse or irregular spacing, triangles sometimes failed to form. To solve this, we introduced an adaptive distance threshold — calculated based on the average distance between all detected stick centers — which improved triangle generation even in sparse conditions.
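A sketch of that adaptive threshold, with an illustrative scale factor:

```python
from itertools import combinations
from math import dist

def adaptive_threshold(centers, scale=1.5):
    """Edge cutoff scaled from the mean pairwise distance.

    The scale factor is illustrative and would need empirical tuning.
    """
    pairs = list(combinations(centers, 2))
    mean_d = sum(dist(a, b) for a, b in pairs) / len(pairs)
    return scale * mean_d

# Sparse layouts raise the mean pairwise distance, which raises the
# cutoff, so triangles can still form instead of failing outright.
```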

We want to add real-time ROS or serial communication with a robotic arm so the system can operate in live fabrication. We'd also like to incorporate depth cameras (e.g., Intel RealSense) to add a third dimension and compute true (x, y, z) centroids. Longer term, clustering triangle regions and outputting multiple injection points per image would support multi-point detection and printing, enabling batch fabrication. Packaging the whole pipeline as a Python tool would also make it reusable and shareable with other fabrication researchers.