Hardware
Control Unit
- Flight Controller
- ROM - Firmware - Code, Kalman Filters
- Processor
- Ports
- SD Card
- Companion Computers
- Cores - VIO, SLAM
- Software - Ease of modification, API connections, Autopilot
- Communication to FC - ROS/MAVROS, Dronekit (see the sketch after this outline)
- Communication to Server - Telemetry Unit
Action Unit
- PDB
- ESCs
- Motors
- Rotors
Telemetry Unit
- LTE on Raspberry Pi
Perception Unit
- IMU - Gyroscope, Accelerometer, Magnetometer
- GPS
- Vision System - Camera
Power Unit
- Battery
Skeleton Unit
- Chassis
- Outer body
Additional Functional Units
- Cargo box
- Camera wiper
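To make the companion computer's link to the flight controller concrete, here is a minimal, hypothetical Dronekit session. The connection string is a placeholder assumption, not a value from this build; a real setup would use whatever serial device or UDP endpoint the FC actually exposes.

```python
from dronekit import connect

# Placeholder endpoint - substitute the serial device or UDP address the
# flight controller actually exposes (e.g. '/dev/ttyAMA0' on a Raspberry Pi,
# or a MAVProxy/SITL UDP forward like the one below).
vehicle = connect('127.0.0.1:14550', wait_ready=True)

print("Mode:    ", vehicle.mode.name)   # current flight mode from the FC
print("Attitude:", vehicle.attitude)    # roll/pitch/yaw fused from the IMU
print("Battery: ", vehicle.battery)     # voltage/current from the power unit
vehicle.close()
```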
Software
- Mission Control Software and API
- Autopilot System
- Vision Control System
- Weather Detection
- No-Fly-Zone Detection
- Any ATC-Compliant Systems
- RTL/Safe Emergency Landing
- Logging System
Algorithms
Visual Inertial Odometry
In robotics and computer vision, visual odometry (VO) is the process of determining the position and orientation of a device by analyzing the associated camera images.
There are various types of VO:
- Based on the type of camera:
- Mono - Monocular Cameras
- Stereo - Stereo Cameras
- Feature-based and direct methods:
- Feature-based - tracking by extracting feature points from an image
- Direct method - estimation based on pixel intensity as the visual input
- There are also hybrid methods
- Visual Inertial Odometry (VIO) - if an Inertial Measurement Unit (IMU) is used along with the VO system, the combination is commonly called VIO (a toy fusion sketch follows this list)
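As a rough illustration of why the IMU helps, here is a toy one-dimensional fusion sketch: the IMU dead-reckons position at a high rate (and drifts, because of sensor bias), while slower VO position fixes pull the estimate back. All rates, gains, bias, and noise levels are made-up illustrative values, and the simple alpha-beta correction stands in for the Kalman-style filters real VIO systems use.

```python
import numpy as np

DT = 0.005                 # 200 Hz IMU sample period, assumed
ALPHA, BETA = 0.1, 0.05    # correction gains per VO fix, assumed

def imu_predict(pos, vel, accel, dt=DT):
    """Dead-reckon one IMU sample: integrate acceleration twice."""
    vel += accel * dt
    pos += vel * dt
    return pos, vel

def vo_correct(pos, vel, pos_vo, alpha=ALPHA, beta=BETA):
    """Nudge position and velocity toward a VO position fix."""
    innov = pos_vo - pos              # VO vs. dead-reckoned disagreement
    return pos + alpha * innov, vel + beta * innov

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_pos = true_vel = 0.0
    pos = vel = 0.0                   # fused estimate
    pos_dr = vel_dr = 0.0             # IMU-only dead reckoning, for comparison
    for k in range(2000):             # 10 s of synthetic flight
        accel = np.sin(2 * np.pi * 0.5 * k * DT)      # true acceleration
        true_vel += accel * DT
        true_pos += true_vel * DT
        meas = accel + 0.05 + rng.normal(0.0, 0.02)   # biased, noisy IMU
        pos, vel = imu_predict(pos, vel, meas)
        pos_dr, vel_dr = imu_predict(pos_dr, vel_dr, meas)
        if k % 40 == 0:               # 5 Hz VO position fix, near truth
            pos, vel = vo_correct(pos, vel, true_pos + rng.normal(0.0, 0.01))
    print(f"IMU-only drift: {abs(pos_dr - true_pos):.2f} m, "
          f"fused error: {abs(pos - true_pos):.2f} m")
```

Running it shows the dead-reckoned estimate drifting quadratically under the accelerometer bias while the fused estimate stays bounded, which is the core motivation for adding the camera to the IMU (and vice versa).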
Algorithm
- Acquire images - single camera, stereo cameras, or omnidirectional cameras
- Image correction - apply image processing techniques for lens distortion removal, etc.
- Feature detection
- Use correlation to establish correspondence between two images, rather than long-term feature tracking. (Feature tracking ~ motion estimation / motion vectors ~ optical flow)
- Feature extraction and correlation
- Construct optical flow field (Lucas-Kanade method)
- Check flow field vectors for potential tracking errors and remove outliers.
- Estimation of the camera motion from the optical flow (see the sketch after this list):
  1. Choice 1: Kalman filter for state-estimate distribution maintenance.
  2. Choice 2: find the geometric and 3D properties of the features that minimize a cost function based on the re-projection error between two adjacent images. This can be done by mathematical minimization or random sampling.
- Periodic repopulation of trackpoints to maintain coverage across the image.
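The steps above map fairly directly onto OpenCV primitives. The sketch below is one hedged reading of the pipeline, not a complete VO system: goodFeaturesToTrack for detection/repopulation, pyramidal Lucas-Kanade for the flow field, the tracker's status mask for outlier removal, and essential-matrix estimation with RANSAC (the random-sampling variant of choice 2) for motion recovery. The camera matrix K is a placeholder assumption.

```python
import cv2
import numpy as np

K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])  # placeholder camera intrinsics

def vo_step(prev_gray, curr_gray):
    """Estimate relative rotation R and unit translation t between frames."""
    # Steps 3/7: (re)populate trackable corners on the previous frame
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    # Step 4b: pyramidal Lucas-Kanade optical flow
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    # Step 5: keep only successfully tracked points
    good0 = p0[status.ravel() == 1]
    good1 = p1[status.ravel() == 1]
    # Step 6, choice 2: essential matrix with RANSAC, then pose recovery
    E, inliers = cv2.findEssentialMat(good1, good0, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good1, good0, K, mask=inliers)
    return R, t  # t is known only up to scale for a single camera
```

Note the scale ambiguity in the final comment: this is exactly the gap the IMU (or a stereo baseline) fills in a full VIO system.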
An alternative to feature-based methods is the direct or appearance-based visual odometry technique, which minimizes an error directly in sensor space and thereby avoids feature extraction and matching.
Another method, coined ‘visiodometry’, estimates the planar roto-translations between images using phase correlation instead of extracting features.
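For the visiodometry idea, OpenCV's phaseCorrelate gives the translational part directly; recovering the rotation as well would typically need an extra step such as log-polar resampling, which this sketch omits. It assumes single-channel grayscale input frames.

```python
import cv2
import numpy as np

def translation_between(prev_gray, curr_gray):
    """Estimate the (dx, dy) shift between two frames via phase correlation."""
    a = np.float32(prev_gray)              # phaseCorrelate needs float input
    b = np.float32(curr_gray)
    # Hanning window suppresses edge effects in the Fourier domain
    window = cv2.createHanningWindow(a.shape[::-1], cv2.CV_32F)
    (dx, dy), response = cv2.phaseCorrelate(a, b, window)
    return dx, dy, response               # response ~ peak sharpness / confidence
```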
Egomotion
Egomotion is defined as the 3D motion of a camera within an environment. In computer vision, egomotion refers to estimating a camera’s motion relative to a rigid scene.
The estimation of egomotion is important in autonomous robot navigation applications.