Software

Payload SDK & Raspberry Pi

DJI provides thorough instructions for setting up the Raspberry Pi with the Payload SDK. Follow the setup guide.

Run the sample code to confirm the installation before proceeding to the next steps.

If available, connect the Raspberry Pi to the local Wi-Fi network, and enable SSH access for ease of use.

In the Raspberry Pi Configuration menu (accessible from the top left), switch on the "I2C" interface.

If not already installed, install I2C tools by issuing the following command in the terminal window:

sudo apt-get install i2c-tools

Once installed, run the following command to verify that the sensors are correctly detected:

i2cdetect -y 1

A table displaying the I2C addresses will be produced. Confirm that each LiDAR sensor is recognized by the system.

     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00: -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- 66 67 68 69 -- -- -- -- -- --
70: -- -- -- -- -- -- -- --

You are now ready to dive into the code.

Project code

All the code is available in a GitHub repository here.

The code is structured around several classes that abstract the sensor functions, manage the array of sensors, and handle the vehicle interaction.

For the vehicle, and to stay consistent with DJI's sample content, the same Application class from the DJI sample code is reused.

Connecting to the Aircraft

The Payload SDK is used to simplify the connection to the aircraft. Once the hardware setup is complete, the code handles the interaction as follows:

Application *vehicle = NULL;

int main(int argc, char **argv) {
// Create the vehicle instance to connect
// We are reusing the Application object from DJI's sample for clarity only.
vehicle = new Application(argc, argv);

From this point on, the vehicle object can be used to abstract vehicle functions.

Note

The Payload SDK provides functions that are accessible in the global space. However, structuring these functions using objects is a cleaner and more organized approach.

Connection to the LW20/C

Two objects will be created to facilitate the connection to the LightWare LiDAR sensors: LW20/C and SensorArray.

LW20 is used to abstract a single sensor. This includes all controls and reads.

SensorArray is specific to this project and abstracts the four sensors in their physical configuration.

Note

These abstractions can be further expanded if needed. These classes are the ideal place to encapsulate additional functionality as the project evolves.

Start by including all the necessary headers, and then proceed with the class declarations:

//
// Created by LightWare.
// LW20/c Interface
//

#pragma once

#include <iostream>

#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>

#include <atomic>
#include <thread>
#include <sys/ioctl.h>
#include <asm/ioctl.h>

#include <fcntl.h>
#include <string.h>
#include <errno.h> // for errno, used with strerror below

class LW20 {

Next, an ioctl configuration macro is defined. This is necessary because the project runs on a Raspberry Pi 5, and wiringPi, which would normally handle these variations, is not yet available for it.

#define I2C_SLAVE	0x0703

The public interface, which manages the life cycle of the abstraction and provides controlled access to private data, is implemented as follows:


public:
LW20() {

}

~LW20() {
this->disconnect();
}

void connect(int i2cAddress) {
_deviceAddress = i2cAddress;

const char *device = "/dev/i2c-1"; // This may change, wiringPi has a good discovery for this, but it's not available yet on pi5.
_fd = open (device, O_RDWR);

if (_fd == -1) {
std::cerr << "I2C Bus file could not be opened" << std::endl;
} else {
std::cerr << "I2C Bus opened on FD: " << _fd << std::endl;

if (ioctl (_fd, I2C_SLAVE, i2cAddress) < 0) {
std::cerr << "Unable to select I2C device: " << strerror(errno) << std::endl;
} else {
_threadRunning.store(true, std::memory_order_relaxed);
_runningThread = std::thread(&LW20::loop, this);
}
}
}

void disconnect() {
_threadRunning.store(false, std::memory_order_relaxed);
close(_fd);
}

int latestDistance() {
return this->_latestDistance.load(std::memory_order_relaxed);
}

The private implementation, which covers the internal loop running in a background thread to read data from the sensor, along with the private variables, is implemented as follows:


private:
std::atomic<bool> _threadRunning{false};
std::thread _runningThread;

int _deviceAddress = 0x66; // default factory value
int _fd;

std::atomic<int> _latestDistance{0}; // in cm

// Loop running in background thread.
void loop() {
while (_threadRunning.load(std::memory_order_relaxed)) {
unsigned char byte[2];
int res = read(_fd, byte, 2);

if (res == -1) {
std::cout << "I2C Device with address " << _deviceAddress << " was not available" << std::endl;
} else {
int distanceRead = (byte[0] << 8) | byte[1];
std::cout << "[" << _deviceAddress << "] Distance: " << distanceRead << "cm" << std::endl;

_latestDistance.store(distanceRead, std::memory_order_relaxed);
}

usleep(250); // pause 250 microseconds between reads
}
}
};
Note

As this class is written, the address is supplied to the connect function. Alternatively, the address could be assigned at construction time, allowing for a parameter-less connect function.
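For reference, the alternative shape might look like this sketch. The connection details are elided; the open()/ioctl() sequence would stay exactly as in the LW20 class above:

```cpp
// Alternative design: fix the I2C address at construction time,
// so connect() needs no arguments.
class LW20Alt {
public:
    explicit LW20Alt(int i2cAddress) : _deviceAddress(i2cAddress) {}

    int address() const { return _deviceAddress; }

    void connect() {
        // Same open()/ioctl() sequence as LW20::connect(),
        // but using the address captured in the constructor.
    }

private:
    int _deviceAddress;
};
```

This variant makes the address an invariant of the object, which can prevent accidentally connecting the same instance to two different addresses.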

With the single sensor implementation in place, the next step is to create the SensorArray object.

This object is a convenience abstraction that streamlines the main business logic, keeping it clean and organized behind well-defined classes. This approach simplifies interaction with the multiple sensors by handling their configuration in one central object.

Start by including the necessary header files and proceed with the class declaration:

//
// Created by LightWare.
// Software representation of the LW20/c quad sensor array for platform detection
//

#pragma once

#include "LW20.h"
#include <limits>

class SensorArray {

The public interface handles the life cycle of the sensors:

public:
SensorArray() {
this->_sensor1 = new LW20();
this->_sensor2 = new LW20();
this->_sensor3 = new LW20();
this->_sensor4 = new LW20();
}

~SensorArray() {
delete this->_sensor1;
delete this->_sensor2;
delete this->_sensor3;
delete this->_sensor4;
}

// Connects the entire array and starts receiving distances
void connect() {
this->_sensor1->connect(0x66);
this->_sensor2->connect(0x67);
this->_sensor3->connect(0x68);
this->_sensor4->connect(0x69);
}

// Average distance in cm
int averageDistance() {
int sensor1LastRead = this->_sensor1->latestDistance();
int sensor2LastRead = this->_sensor2->latestDistance();
int sensor3LastRead = this->_sensor3->latestDistance();
int sensor4LastRead = this->_sensor4->latestDistance();

return (sensor1LastRead + sensor2LastRead + sensor3LastRead + sensor4LastRead)/4;
}

// Returns the maximum difference between the closest and the furthest hit
int maxDelta() {
int sensor1LastRead = this->_sensor1->latestDistance();
int sensor2LastRead = this->_sensor2->latestDistance();
int sensor3LastRead = this->_sensor3->latestDistance();
int sensor4LastRead = this->_sensor4->latestDistance();

int distances[4] = {sensor1LastRead, sensor2LastRead, sensor3LastRead, sensor4LastRead};
int lowest = std::numeric_limits<int>::max();
int highest = std::numeric_limits<int>::min();

for(int i = 0; i < 4; ++i) {
if(distances[i] < lowest) {
lowest = distances[i];
}
if(distances[i] > highest) {
highest = distances[i];
}
}

return highest - lowest;
}

The private section simply holds the sensor pointers:

private:
LW20 *_sensor1;
LW20 *_sensor2;
LW20 *_sensor3;
LW20 *_sensor4;

};

Now that we have abstractions for the vehicle and the sensor array, it's time to tie everything together.

Plane detection logic

The project follows a straightforward approach to utilizing the sensors for plane detection. This basic implementation serves as a starting point, leaving plenty of room for further enhancements to build a more resilient and production-ready solution.

After covering this simple logic, we'll discuss potential next steps to refine and expand upon the solution.

Within the main function of main.cpp, the abstraction for the vehicle is created first, followed by the sensor array and its connection.

Once both abstractions are set up, the landing logic can be applied.

The code continuously loops, checking the sensor data until the landing condition is satisfied.

When the condition is met, the loop exits, and the landing sequence is initiated.

int main(int argc, char **argv) {
// Create the vehicle instance to connect
// We are reusing the Application object from DJI's sample for clarity only.
vehicle = new Application(argc, argv);

sensorArray = new SensorArray();
sensorArray->connect();

// Check loop
bool checkingToLand = true;
int flatEnough = 10; // in cm; 10cm -> ~15° plane

while (checkingToLand) {
int maxDelta = sensorArray->maxDelta();

// Condition to determine a landing opportunity.
if (maxDelta <= flatEnough && maxDelta > 0) {
checkingToLand = false;
break;
}
}

Landing Routine

Landing is a single function call behind an abstraction:

if (!checkingToLand) {
// We exited the loop flagging we aren't checking to land, so we are landing.
std::cout << "Landing..." << std::endl;

vehicle->land();
}
}

Additional code

Keeping the application active until triggered from the ground

The code enters the "checking to land" logic immediately upon launch. This could be impractical in real-world applications. Ideally, the application should activate with the vehicle's lifecycle and only begin its landing logic when triggered by a pilot, operator, or an automated system.

To achieve this, consider using MOP (MobileSDK to PayloadSDK communication), also referred to as SDK Interconnection or Pipeline in DJI's MobileSDK. MOP allows a MobileSDK application to send custom commands to a PayloadSDK application. This makes it possible to build a user interface in a MobileSDK app with a button that triggers the "checking to land" logic remotely, making the system more interactive and user-controlled.

Following the landing target

Another enhancement could involve implementing logic for following a landing target. This feature isn't covered in detail here due to the variety of approaches available: a dedicated camera sensor could be used, or the existing video stream from the PayloadSDK could be reused.

The "follow the target" logic could be triggered using the same mechanism discussed earlier, with a command from the ground application.

This step would refine the approach and lead naturally into the "check to land" logic when the conditions are right.

Landing risk validation

The current implementation assumes that it is safe to commit to a landing when the sensors detect a plane tilted at approximately 15° or less. This is a broad assumption and may not be ideal in many practical scenarios. A more nuanced plane validation process should be implemented depending on the specific characteristics of the platform being landed on.

Landing on a stable ground vehicle on a road requires different conditions than landing on a moving sea vessel, where stability can vary significantly with weather and sea conditions. Implementing specific parameters for different environments (e.g., smooth surfaces for road vehicles versus dynamic conditions for ships) can make the landing sequence more reliable and tailored to specific use cases.