Aerial Reforestation Using Autonomous Drones

Desertification threatens 74% of Spain’s land, worsened by deforestation and wildfires. To aid reforestation, this project develops an open-source drone for autonomous terrain analysis and seed dispersal. Using the Holybro X500 V2, it integrates SLAM for mapping and navigation. Initial work focused on hardware setup and autonomous flight. Future steps include refining SLAM and optimizing 3D modeling. This research aims to enhance scalable, efficient reforestation efforts.

Context

Desertification in Spain is a significant environmental issue driven by a combination of natural and human factors. Over 74% of Spain’s land is at risk of desertification, particularly in regions like Andalusia, Murcia, Valencia, and Castilla-La Mancha. The phenomenon is exacerbated by deforestation and wildfires, which leave the soil exposed to erosion by wind and water. For Spain this is a massive problem: it leads to biodiversity loss, the degradation of fertile soils, pressure on water resources, and economic consequences, all of which ultimately affect people’s quality of life.

This graph shows the hectares burned in Spain from 1960 to 2019, with a peak between 1975 and 1995, followed by a decline. However, in 2024, over 5,800 fires still occurred, affecting 48,550 hectares – an alarming figure. This underscores the ongoing need for prevention and restoration efforts, driving reforestation initiatives at the European level, supported by multiple funding programs.

So… what can we do?

There are two main approaches to reforestation: manual reforestation, carried out by organizations like Reforesta, and drone-assisted reforestation, which I will introduce with the example below.

Morfo is a Brazilian company primarily dedicated to reforesting areas of the Amazon. The company follows four defined steps in its reforestation process: diagnosis, planning, planting, and monitoring. In three of these stages, drones are used – for analysis, seed distribution, and monitoring growth afterward.

According to Morfo, their system enables a single drone to treat up to 50 hectares per day and plant 180 seed pods per minute.

Proposal

My proposal, though not as complete as Morfo’s example, shares some similarities. I aim to develop an open-source autonomous drone capable of analyzing terrain and dispensing seeds, with the potential for future collaboration with additional drones. I believe scalability is essential, especially in situations like this, where large areas need to be covered.

This is the proposed workflow for my system (a minimal code sketch follows the list). After the user chooses the site:

  • The drone performs SLAM and maps the entire area
  • The point cloud and images are post-processed to identify target planting points
  • Navigation and dispensing points are planned
  • The route is flown and the seeds are dispensed
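
To make the workflow concrete, here is a minimal Python skeleton of the four steps. Every name in it is a placeholder I made up for illustration, not part of any existing implementation:

```python
from dataclasses import dataclass


@dataclass
class PlantingPoint:
    """A candidate seed-drop location in map coordinates (placeholder type)."""
    x: float
    y: float
    z: float


def map_site(site_bounds):
    """Step 1: fly the site with SLAM and return a point cloud + images (stub)."""
    raise NotImplementedError


def find_planting_points(point_cloud, images):
    """Step 2: post-process the map to pick target planting points (stub)."""
    raise NotImplementedError


def plan_route(points):
    """Step 3: order the points into a flight route with dispensing stops (stub)."""
    raise NotImplementedError


def fly_and_dispense(route):
    """Step 4: execute the route and trigger the seed dispenser (stub)."""
    raise NotImplementedError


def reforest(site_bounds):
    """End-to-end pipeline: each stage feeds the next."""
    point_cloud, images = map_site(site_bounds)
    points = find_planting_points(point_cloud, images)
    route = plan_route(points)
    fly_and_dispense(route)
```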

However, I will primarily focus on developing and implementing the first step: autonomous SLAM. This aligns with my interest in understanding its functionality and serves as a foundational step with potential applications beyond this project.

The ultimate goal is for the user to define a scan area through a simple interface, after which the drone will autonomously navigate, map the area, and generate a 3D model – completely without human intervention.

The drone I chose is the Holybro X500 V2, equipped with a Pixhawk flight controller running PX4. I selected it for its availability (it was accessible at IAAC) and its payload capacity (around 1 kg). It is an open-source drone with excellent documentation, which made assembly straightforward. Repairing it, however, hasn’t been that easy.

The hardware of the Holybro is as follows: four motors and their ESCs are connected to the flight controller, along with the GPS, the radio receiver, and a telemetry module, all powered by a LiPo battery. On the ground, the drone can be controlled either with a remote controller or from a ground station (such as a laptop) connected through a second telemetry module and running software like QGroundControl or ROS 2. Once all of this works properly, the plan is to add a small companion computer such as the Jetson Nano, together with a depth camera, to perform SLAM.

To repair and improve the drone, I 3D-printed replacements for broken parts, added a telemetry module, and debugged the controller. A major issue was switching to mission mode, which I eventually resolved by resetting the PX4 – a fix that took significant time to reach, because a full reset was my last resort. I also 3D-printed a secure PX4 mount that allows stacking a mini-computer like the Jetson on top.

Testing

My first test was piloting the drone manually using a controller. As you can see, the initial attempt didn’t go well. I later realized that the PX4 was mounted in the wrong direction, which caused the drone to flip upon takeoff. Once I corrected this, I was able to fly it. However, as you can see, I’m not exactly an expert at flying drones.

For the second test, I tried different mission modes, including waypoint navigation and altitude hold, to assess GPS accuracy, with good results. In the following video, you’ll see the survey-mode flight from start to finish.
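
For reference, this kind of check can also be scripted with MAVSDK-Python instead of clicking through QGroundControl. This is a sketch, not the exact procedure I used: the connection string assumes PX4 SITL or a UDP-forwarded telemetry link, and the timings are arbitrary:

```python
import asyncio

from mavsdk import System


async def altitude_hold_check():
    # Connection string is an assumption: udp://:14540 matches PX4 SITL;
    # a real telemetry radio would use a serial:// address instead.
    drone = System()
    await drone.connect(system_address="udp://:14540")

    # Wait until the autopilot reports a usable GPS fix and home position
    async for health in drone.telemetry.health():
        if health.is_global_position_ok and health.is_home_position_ok:
            break

    await drone.action.arm()
    await drone.action.takeoff()
    await asyncio.sleep(15)  # let the drone settle at its takeoff altitude

    # Log the reported relative altitude to see how much it drifts in hold
    samples = []
    async for position in drone.telemetry.position():
        samples.append(position.relative_altitude_m)
        if len(samples) >= 50:
            break
    print(f"altitude spread over {len(samples)} samples: "
          f"{max(samples) - min(samples):.2f} m")

    await drone.action.land()


if __name__ == "__main__":
    asyncio.run(altitude_hold_check())
```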

My next step was testing the Jetson, as I had never worked with it before. The first thing I researched was how to connect it to the PX4. I discovered that it can be connected via a telemetry port, which also provides power – conveniently, the Jetson runs on 5 V, which this port supplies.

Here you can see the setup.
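
Before doing anything else on that link, it is worth verifying that the Jetson actually sees MAVLink traffic. Below is a minimal sketch with pymavlink; the device path and baud rate are assumptions (they depend on which Jetson UART you wire up and how the PX4 telemetry port is configured):

```python
from pymavlink import mavutil

# Device path and baud rate are assumptions: adjust them to your
# Jetson UART and the PX4 telemetry port configuration.
link = mavutil.mavlink_connection("/dev/ttyTHS1", baud=57600)

# Block until PX4 sends a heartbeat, which proves the wiring works
link.wait_heartbeat()
print(f"Heartbeat from system {link.target_system}, "
      f"component {link.target_component}")
```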

I was aware of the Jetson Nano’s limitations. However, before considering alternatives or upgrading to a more powerful version (which would require an investment), I wanted to test its capabilities myself. So I tested it with real-time object tracking, and it struggled: there was roughly a one-second lag between movement and processed output, which suggests SLAM will be challenging given the Nano’s GPU and CPU limitations.
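
For anyone wanting to reproduce that kind of measurement, here is a minimal OpenCV timing sketch. The Gaussian blur is only a stand-in workload; swap in whatever tracker or network inference you actually run:

```python
import time

import cv2


def measure_latency(num_frames: int = 100) -> None:
    # Camera index 0 and the blur are placeholders for illustration
    cap = cv2.VideoCapture(0)
    times = []
    for _ in range(num_frames):
        ok, frame = cap.read()
        if not ok:
            break
        start = time.perf_counter()
        cv2.GaussianBlur(frame, (31, 31), 0)  # placeholder workload
        times.append(time.perf_counter() - start)
    cap.release()
    if times:
        print(f"mean per-frame latency: {1000 * sum(times) / len(times):.1f} ms")


if __name__ == "__main__":
    measure_latency()
```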

I considered two possible solutions:

  • Option A: Offloading computation to an external system. However, this would require a highly stable, low-latency communication link – especially if obstacle avoidance is involved.
  • Option B: Upgrading to more powerful hardware, such as the Orin Nano or Xavier, which would allow seamless onboard SLAM. However, I need to determine how to power it, as I believe it requires more than the 5 V the telemetry port provides.

After weighing both options, I have decided to purchase an NVIDIA Orin Nano or a similar alternative (depending on stock and delivery times). While I am considering funding this myself, I am open to external funding and suggestions.

Term III

For Term III, my priority is finalizing the hardware selection and integrating it into the system as shown here. I am also open to suggestions regarding which depth camera or LiDAR to use.

I won’t go into much detail here, but I want to mention that there will be a simple user interface to set missions and monitor the drone. As shown in the diagram, RGB-D, IMU, and GPS data will be fed into the Jetson to perform SLAM using a package like ORB-SLAM3, paired with an exploration algorithm such as the Rapidly-exploring Random Tree (RRT).
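
As a reference for the exploration part, here is a bare-bones 2D RRT in Python. It ignores obstacles and the SLAM map entirely; a real exploration planner would sample against the occupancy grid and bias growth toward unmapped frontiers:

```python
import math
import random


def rrt(start, goal, bounds, step=0.5, max_iters=5000, goal_tol=0.5):
    """Grow a tree from start by repeatedly steering toward random samples."""
    nodes = [start]
    parents = {0: None}
    for _ in range(max_iters):
        sample = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        # Nearest existing tree node to the random sample
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        near = nodes[i_near]
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Steer one fixed step from the nearest node toward the sample
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        parents[len(nodes)] = i_near
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Walk the parent links back to the root to recover the path
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None


path = rrt(start=(0.0, 0.0), goal=(8.0, 8.0),
           bounds=((0.0, 10.0), (0.0, 10.0)))
print(f"path with {len(path)} waypoints" if path else "no path found")
```

Part of RRT’s appeal here is that it needs no global discretization of the environment and extends naturally from 2D to 3D.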

I have also planned an experimentation phase once everything is fully operational. The goal is to achieve the highest-quality 3D site model. After each test, I will evaluate several factors and parameters (I might even generate a script for automatic evaluation). I have calculated that 54 experiments are needed for a thorough analysis of these variables. However, I am currently exploring ways to reduce this to a more manageable number within my available time.
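
As an illustration of where a number like 54 can come from (the factors and levels below are hypothetical, since mine are still being defined), a full-factorial design multiplies the levels of each variable:

```python
from itertools import product

# Hypothetical factors and levels, for illustration only:
# 3 x 3 x 3 x 2 = 54 runs in a full-factorial design.
altitudes_m = [5, 10, 15]
speeds_m_s = [1.0, 2.0, 3.0]
overlaps_pct = [60, 70, 80]
sensors = ["depth_camera", "lidar"]

experiments = list(product(altitudes_m, speeds_m_s, overlaps_pct, sensors))
print(len(experiments))  # 54
```

Fixing the least influential factor at one level, or varying one factor at a time from a baseline, are the usual ways to shrink such a grid, at the cost of missing interaction effects.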

Here’s my planned timeline. As mentioned earlier, repairing the drone took longer than expected, leaving me with limited time for SLAM development. My plan is to dedicate April to coding and experimentation, May to final testing, and June to documentation. Let’s see how it goes!