Real-World ML Algorithm Trained Using Computer Graphics

Wipro Tech Blogs
8 min read · Sep 30, 2022

By Raghu Nandan D, Kiran P V N N

The European spruce bark beetle, Ips typographus, measures only about 4 millimetres, yet in 2018 it damaged more wood in Sweden than the record wildfires that ravaged the country. According to the Swedish Forest Agency, in 2018 alone this species destroyed about 3–4 million cubic metres of lumber, denting the national economy by around 625 million dollars [1][2][3]. Monitoring and stopping this menace requires accurate detection methods. Currently, in most locations, infested trees are detected by trained field inspectors; because of the manual effort involved, it is not feasible to monitor large areas in a timely manner. Satellite imagery can cover large landscapes, but such experiments have produced ambiguous results due to the structural complexity of forest canopies. Drone-based monitoring is a viable alternative, as it can provide high-definition imagery to computer vision models for better prediction. While the definition of bark beetle infestation stages is not always consistent, classifying infestation by colour provides a good basis: healthy trees are green, infected or weakened trees are reddish, and trees at the terminal stage of infection are white in comparison.

The advent of massively more powerful ML models has equipped us to automate some of these visual tasks with state-of-the-art performance. Previously, we built a video-analytics pipeline to detect bark beetle infected and at-risk spruce trees for proactive action [4]. However, our pipeline fell short of providing high accuracy due to a lack of training images for specific conditions. Obtaining data manually via drones for multiple conditions and annotating the images for all possible environmental variations is not feasible, as it is highly effort-intensive and expensive. Hence, we decided to address this lack of image data by generating photo-realistic synthetic images and scenes in Blender and Unreal Engine.

This article briefly describes the ML pipeline and then explains our approach to synthetic data generation for training the model using computer graphics and rendering techniques. This improved the model's accuracy and reduced the effort and cost involved in manual collection and annotation of data.

We designed a simple two-stage pipeline with YOLOv3 for object detection and FC-DenseNet for image classification. We achieved an accuracy of over 83% (see Fig-01) with the initial dataset of 3,000 images. However, the pipeline fell short of providing high accuracy across varied environmental conditions and of generalizing over large forest covers, due to the lack of training images for those specific conditions.

Fig-01: Comparison of manually annotated images with the system's predictions for the same image
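For illustration, here is a minimal Python sketch of such a two-stage detect-then-classify flow. It loads YOLOv3 via torch.hub for stage one; the FC-DenseNet classifier is represented by a stub, since our trained model and weights are not part of this article. Names and labels are illustrative, not our production code.

```python
# Minimal sketch of a two-stage detect-then-classify pipeline.
# Stage 1: YOLOv3 (via torch.hub) proposes tree bounding boxes.
# Stage 2: a classifier assigns an infestation stage to each crop;
# here it is a stub standing in for the trained FC-DenseNet.
import torch
from PIL import Image

detector = torch.hub.load("ultralytics/yolov3", "yolov3", pretrained=True)

def classify_crop(crop: Image.Image) -> str:
    """Stub for the FC-DenseNet infestation-stage classifier."""
    return "healthy"  # would return e.g. "healthy" / "infected" / "dead"

def run_pipeline(image_path: str):
    image = Image.open(image_path)
    results = detector(image)  # stage 1: detect candidate trees
    predictions = []
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        crop = image.crop((int(x1), int(y1), int(x2), int(y2)))
        predictions.append(((x1, y1, x2, y2), classify_crop(crop)))
    return predictions  # stage 2: one label per detected tree
```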

Need for rendering / synthetic data generation

It was clear that we needed a large volume of annotated data to train a highly accurate model. Given the effort and cost involved in flying drones and manually annotating images to build such large datasets, we decided to generate photo-realistic synthetic images and scenes instead. The objectives of this project were:

· Render photo-realistic forests in Blender.

· Create a simulated forest environment to generate and randomize data (spatial transformations) and domain conditions (weather variations) in Unreal Engine.

· Deploy the simulator in the cloud in headless mode to solve other similar problems.

Workflow

Meshroom was used to construct 3D models from 2D images: photographs taken from various perspectives were fed into its photogrammetry pipeline.
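This step can also be scripted. The sketch below assumes Meshroom's `meshroom_batch` command-line tool (shipped with the binary release) is on the PATH; the paths and the flags shown are illustrative assumptions, not our exact setup.

```python
# Sketch: driving Meshroom's default photogrammetry pipeline from Python.
# meshroom_batch runs feature extraction, matching, structure-from-motion,
# meshing, and texturing end to end on a folder of photos.
import subprocess

def photos_to_mesh(photo_dir: str, out_dir: str) -> None:
    subprocess.run(
        ["meshroom_batch", "--input", photo_dir, "--output", out_dir],
        check=True,  # raise if the pipeline fails
    )

photos_to_mesh("photos/spruce_01", "models/spruce_01")
```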

Blender was used to enhance the 3D models: shading, animation, and any other modifications needed.

Unreal Engine was used to render the photo-realistic forest environment from all the 3D models built with Meshroom and Blender. It let users move through the environment, take screenshots, and send the data to the ML model for training.

Segmentation was then performed on the new synthetic data and evaluated with the ML model that had been trained on the old data.

Blender

Using Blender to make the 3D models for our trees was an obvious choice. We expected satisfactory results if we could model spruce trees effectively using the Sapling Tree Gen add-on in Blender. Another approach we had in mind was creating low-poly trees using simple modelling techniques in Blender. A minimal scripting sketch of the first approach follows.
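The Sapling Tree Gen add-on can be driven from Blender's Python API. The sketch below uses the operator's default parameters and an illustrative export path, not the exact settings we used.

```python
# Blender (bpy) sketch: generate a tree with the bundled Sapling Tree Gen
# add-on and export it for use in Unreal Engine.
# Run inside Blender, e.g.: blender --background --python make_tree.py
import bpy

# Enable the bundled add-on (module name in stock Blender builds).
bpy.ops.preferences.addon_enable(module="add_curve_sapling")

# Add a tree; the operator exposes many parameters (levels, branch
# splitting, leaves, ...) that can be tuned toward a spruce silhouette.
bpy.ops.curve.tree_add()

# Export the scene as FBX for Unreal Engine (path illustrative).
bpy.ops.export_scene.fbx(filepath="/tmp/spruce_tree.fbx")
```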

Using a combination of these techniques, we were able to create the terrain shown below.

Though this scene looked good enough for a far-off drone shot, it had its limitations: no fly-through, long render wait times, and much less flexibility.

Unreal Engine

Unreal Engine (UE) was an easy choice for the rendering tasks. The support from the UE community and the wealth of free assets make it one of the best platforms to work with. UE also offers a wide range of customization options and simple shortcuts that make a developer's work easier. Since UE is widely used by game developers to build large-scale games, we were confident it could render even the most graphically intensive scenes in our project.

Using the Spruce Forest asset from the Unreal Engine Marketplace, we obtained various spruce tree meshes to work with.

The essential elements of data generation include:

1. Foliage Spawning: Foliage spawning was done through code we wrote from scratch rather than with UE's Foliage tool, because we needed more control over the spawns and the ability to re-simulate as often as needed. Foliage can be generated with UE's built-in Foliage tool, but that requires the developer to paint it by hand, and removing all the foliage and re-painting it randomly is tedious. The dome in the figure indicates the paint area where the foliage is generated; it can be controlled with simple mouse clicks and drags. A sketch of the spawning logic follows.
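The actual spawner is Unreal Engine code; the Python sketch below only illustrates the core randomization idea: sampling spawn points uniformly inside a circular paint area, with per-instance rotation and scale, and a seed so a whole forest can be re-simulated on demand. All ranges are illustrative.

```python
# Sketch: randomized foliage spawning inside a circular "paint area".
import math
import random

def spawn_points(center, radius, count, seed=None):
    rng = random.Random(seed)  # fixed seed allows exact re-simulation
    points = []
    for _ in range(count):
        # Uniform sampling inside a disc: the sqrt on the radius term
        # avoids clustering points near the center.
        r = radius * math.sqrt(rng.random())
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x = center[0] + r * math.cos(theta)
        y = center[1] + r * math.sin(theta)
        yaw = rng.uniform(0.0, 360.0)   # random rotation per instance
        scale = rng.uniform(0.8, 1.2)   # slight size variation
        points.append((x, y, yaw, scale))
    return points

# Re-running with a new seed re-randomizes the whole forest in one call.
trees = spawn_points(center=(0.0, 0.0), radius=5000.0, count=500, seed=42)
```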

2. Default Pawn for Fly-through: In UE, free movement and rotation are provided by a "Default Pawn". A fully functional drone-like fly-through camera is attached to the player controller, letting users fly over the whole terrain and observe it from a bird's-eye perspective. A UI was developed on top of this to create the simulator.

3. Environment Control: Environment and lighting played an important role in generating usable synthetic image data for this project, since how the trees look depends heavily on the environment they are in. Below are some snapshots of different conditions, including sunny, snowy, and rainy days; each environment can be switched on with the press of a key (see the sketch after the snapshots). Rain and snow were both created using Unreal's native particle system; the textures, materials, velocity vectors, spawn, and spread for both systems were authored manually following a set of standard procedures. Dynamic brightness controls were added to simulate night and dusk and to give the user additional control over the environment.
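Conceptually, each weather condition is a preset that a key press applies to the scene. The Python sketch below mirrors that idea with a stand-in scene object; the preset values, key bindings, and names are illustrative, and the real implementation lives in UE.

```python
# Sketch: keypress-driven environment presets, mirroring the UE logic.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SceneState:
    """Stand-in for the UE scene's environment settings."""
    particles: Optional[str] = None  # active particle system, if any
    brightness: float = 1.0          # 1.0 = full day, lower = dusk/night

WEATHER_PRESETS = {
    "1": SceneState(particles=None,   brightness=1.0),  # sunny
    "2": SceneState(particles="rain", brightness=0.6),  # rainy
    "3": SceneState(particles="snow", brightness=0.8),  # snowy
    "4": SceneState(particles=None,   brightness=0.3),  # dusk
}

def on_key_press(key: str, scene: SceneState) -> SceneState:
    # Unknown keys leave the environment untouched.
    return WEATHER_PRESETS.get(key, scene)

scene = on_key_press("2", SceneState())  # switch to rain
```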

4. Multiple Screenshots: We implemented a multiple-screenshots feature to mimic drone shots, including their angle and height, using Timelines in UE. The code consists of a camera together with its timeline and direction; when the user is above a certain height, the feature can be triggered to capture drone shots of the trees, as shown in the picture below. We use the Sequencer, normally used to build cut-scenes in games, in which the developer adjusts camera transforms to create smooth transitions. The transitions are driven by key-frames: the Sequencer records the camera transforms and interpolates smoothly from key-frame to key-frame.

The screenshots are stored in a specified output folder; a simplified sketch of the capture pass follows.
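The sketch below approximates the capture pass with straight linear interpolation between camera key-frames and a periodic capture into an output folder. UE's Sequencer additionally eases the transitions; the waypoints, file names, and paths here are illustrative.

```python
# Sketch: a keyframed "drone pass" that records a frame every few steps.
import os

KEYFRAMES = [            # (x, y, z, yaw) waypoints above the canopy
    (0.0, 0.0, 300.0, 0.0),
    (1000.0, 0.0, 300.0, 45.0),
    (1000.0, 1000.0, 350.0, 90.0),
]

def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

def camera_path(steps: int = 30):
    # Linear interpolation between consecutive key-frames; UE's
    # Sequencer would apply easing curves here instead.
    for p0, p1 in zip(KEYFRAMES, KEYFRAMES[1:]):
        for i in range(steps):
            t = i / steps
            yield tuple(lerp(a, b, t) for a, b in zip(p0, p1))

def capture_pass(out_dir: str = "screenshots/run_01", every_n: int = 5):
    os.makedirs(out_dir, exist_ok=True)
    for frame, pose in enumerate(camera_path()):
        if frame % every_n == 0:
            # In UE a screenshot is captured here; this sketch just
            # records the file name and the camera pose.
            print(os.path.join(out_dir, f"shot_{frame:04d}.png"), pose)

capture_pass()
```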

5. Infected/Dead Trees (Colour Variation): The look of infected trees was achieved by adjusting saturation levels and the texture's RGB/tint values. Dead trees were made from the same static tree meshes, later stripped down to just the trunk and branches with no leaves. Colours, materials, and textures were adjusted to match the average look of an infected or dead tree. Separate timelines, components, and segmentation logic had to be created for the environmental simulation of these infected and dead trees.
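As an offline illustration of the colour adjustment (which we actually performed on UE materials), the sketch below desaturates a tree texture and pushes it toward red using Pillow; the factor values and file names are illustrative.

```python
# Sketch: turning a healthy-tree texture into an "infected" variant by
# lowering saturation and boosting the red channel.
from PIL import Image, ImageEnhance

def make_infected_texture(src: str, dst: str,
                          saturation: float = 0.5,
                          red_gain: float = 1.3) -> None:
    img = Image.open(src).convert("RGB")
    img = ImageEnhance.Color(img).enhance(saturation)   # desaturate
    r, g, b = img.split()
    r = r.point(lambda v: min(255, int(v * red_gain)))  # reddish tint
    Image.merge("RGB", (r, g, b)).save(dst)

make_infected_texture("textures/spruce.png", "textures/spruce_infected.png")
```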

6. Pixel-wise Segmented Data: Finally, our primary goal in creating a forest simulator was to generate labelled training data with minimal extra work. To achieve this, we created pixel-wise segmented data for training computer vision models on segmentation tasks such as semantic segmentation. For pixel-wise segmentation, we traverse the hierarchical mesh components based on the type of each instance (landscape, grass, infected tree, etc.) and dynamically replace the component's texture with a flat RGB colour. To remove ray tracing, reflections, and other realism created by the scene lighting, we switch the viewport to unlit mode, which yields a pixel-wise segmented scene that can be used directly for training.
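Once the unlit, colour-coded scene is rendered, converting it into integer label masks for training is straightforward. The sketch below shows that conversion with an illustrative colour table; the actual class colours were assigned in UE.

```python
# Sketch: map a flat-colour segmentation render to an integer label mask.
import numpy as np
from PIL import Image

CLASS_COLOURS = {           # illustrative class -> RGB assignments
    (0, 0, 255): 0,         # landscape
    (0, 255, 0): 1,         # healthy tree
    (255, 0, 0): 2,         # infected tree
    (255, 255, 255): 3,     # dead tree
}

def render_to_mask(path: str) -> np.ndarray:
    rgb = np.array(Image.open(path).convert("RGB"))
    mask = np.zeros(rgb.shape[:2], dtype=np.uint8)
    for colour, label in CLASS_COLOURS.items():
        mask[np.all(rgb == colour, axis=-1)] = label
    return mask
```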

Conclusion

High-quality datasets are essential for training any machine learning algorithm. A lack of sufficient good-quality data results in sub-optimal models with poor accuracy; models trained on such limited datasets cannot be deployed for real-life use, and their application remains confined to research publications or proofs of concept.

Rendering and synthetic data generation appear to be a good alternative for addressing the lack of adequate training data across various computer vision problems. Through this method, we improved our model's accuracy by 83% when tested on real-world test data, and the rendering did not introduce any additional noise into the model.

References:

[1] https://www.bloomberg.com/news/articles/2020-09-15/spruce-bark-beetles-are-ruining-swedish-forests-at-a-record-pace

[2] https://www.bloomberg.com/news/articles/2019-05-07/this-tiny-bug-could-put-a-625-million-hole-in-sweden-s-forests

[3] Swedish Forest Agency (2021): Forest Damages — Spruce Bark Beetle 1.0. National Forest Data Lab. Dataset.

[4] Utilizing Video Analytics to Solve the Bark Beetle Infestation in Sweden. Wipro Tech Blogs, Medium.
