Custom built ROV/AUV

ROS2, Electrical Design, Mechanical Design, Python, MAVLink

Overview

Halo AUV is a robot I created to serve as a platform for students in my master’s program at Northwestern University to learn about underwater robotics and build upon. Personally, I wanted to gain experience in underwater robotics while also building a robot from the ground up. The robot can run as a remotely operated vehicle (ROV) and has been integrated with ROS2 to further explore autonomous capabilities. Below is a block diagram describing the architecture of the robot.

Halo AUV Block Diagram

On Halo itself, the Pixhawk handles low-level control and sends the commands that move the motors. It relays depth information from a barometer and heading information from its internal sensors to the Raspberry Pi. The Raspberry Pi sends these sensor measurements alongside the camera stream over Ethernet to a laptop. The measurements and camera stream are then routed either to QGroundControl (ROV mode) or to ROS2 nodes that publish the images to a ROS topic and perform different control functions. In ROV mode, control input comes from a PlayStation controller operated by a human pilot. High-level control commands are sent back over the Ethernet tether to the Raspberry Pi and eventually to the Pixhawk to move the robot.
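
To make this link concrete, here is a minimal pymavlink sketch of the topside end of the telemetry path. The UDP port (14550 is the conventional ArduSub companion forwarding port) and the specific message types are assumptions for illustration, not necessarily Halo’s exact configuration.

```python
# Listen to the vehicle's MAVLink telemetry from the topside laptop.
# Assumes the companion software forwards MAVLink from the Pixhawk
# over the Ethernet tether to UDP port 14550 (the ArduSub default).
from pymavlink import mavutil

conn = mavutil.mavlink_connection("udpin:0.0.0.0:14550")
conn.wait_heartbeat()  # block until the Pixhawk is heard
print(f"Heartbeat from system {conn.target_system}")

while True:
    # VFR_HUD carries heading; SCALED_PRESSURE2 carries the external
    # barometer reading that the depth estimate is derived from.
    msg = conn.recv_match(type=["VFR_HUD", "SCALED_PRESSURE2"], blocking=True)
    if msg.get_type() == "VFR_HUD":
        print(f"heading: {msg.heading} deg")
    else:
        print(f"pressure: {msg.press_abs} hPa")
```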

Mechanical Design

The key aspects of the mechanical design are that it is easy to manufacture, highly adjustable, and able to accept new payloads and sensors with minimal effort. The design is also centered around the BlueRobotics 3” and 4” watertight enclosures.

Halo AUV

The frame itself is constructed of two parallel laser-cut pieces of acrylic that are mounted to the watertight enclosure and reinforced with multiple standoffs. Once the top and bottom frames are cut, the only tools needed to assemble the frame are common hex keys, and the assembly process takes about 10 minutes.

The vehicle also has six slotted mounts along the bottom of the frame for adding and moving weights. These are used to adjust the vehicle’s overall weight and to align its center of buoyancy with its center of gravity, which ensures that the vehicle is neutrally buoyant and level in the water. The current design allows weights to be added and shifted so that adjustments can be made quickly. This is especially useful when adding new payloads, as they change the weight and buoyancy balance of the robot. Additionally, floats may be added to increase buoyancy, as in the sizing sketch after the figure below.

Weight Sliders, Corner Weights, and Buoyancy Floats
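
To make the ballast trade-off concrete, here is a back-of-the-envelope neutral-buoyancy check. The volume and mass values are placeholders for illustration, not Halo’s measured figures.

```python
# Illustrative ballast check: a vehicle is neutrally buoyant when its
# mass equals the mass of the water it displaces. All numbers below
# are placeholders, not Halo's measured values.
RHO_WATER = 1000.0  # kg/m^3 for fresh water (~1025 for seawater)

displaced_volume = 0.012  # m^3, total volume of enclosures, frame, etc.
vehicle_mass = 11.2       # kg, dry mass of the vehicle

buoyant_mass = RHO_WATER * displaced_volume   # mass of displaced water
ballast = buoyant_mass - vehicle_mass         # +: add weights, -: add floats
print(f"Add {ballast:.2f} kg of ballast to reach neutral buoyancy")
```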

Payloads and sensors can also be added to Halo with ease. There are many mounting holes lining all sides of the frame and running down the center. The weight mounts are a good example of how new payloads may be added.

Electrical Design

The following image displays Halo’s electrical layout. Everything is powered from a 14.8 V 9000 mAh LiPo battery. The motors receive 14.8 V power through motor controllers that receive commands from the Pixhawk. All other components are powered from the output of a 5 V step-down converter. Communication between the topside computer and the robot is established via Cat5e cable. The camera and Pixhawk connect to the Raspberry Pi via USB. The depth sensor communicates with the Pixhawk over I2C. Motor positions for the thrusters and camera gimbal are commanded via PWM from the Pixhawk to the respective motors.

Wiring Diagram

Software

The onboard Raspberry Pi interfaces with the Pixhawk using the ArduSub companion software, and the Pixhawk itself runs the ArduSub firmware. The robot can be used as an ROV by downloading the QGroundControl (QGC) software. In this mode, a user simply plugs the robot’s tether into their laptop and runs QGC, which lets them control the robot with a standard video game controller while receiving depth information, heading information, and a first-person view from onboard the robot.

QGroundControl View in ROV Mode

Halo is also integrated with ROS2 through two nodes: one that handles control of the robot and one that publishes the stream from the robot’s camera to a ROS topic.

The control node initializes contact with the robot via MAVLink to send commands and receive sensor data. When the node first starts, it reads the robot’s current heading and depth and holds the AUV at those values. There are then two services to move the robot’s heading and depth, either to an absolute value or by a relative amount. The controllers for heading and depth were implemented using PI control. Similarly, there is a controller to move the robot forward; however, there is no internal sensor to measure its forward displacement, so this controller could instead be closed using an external measurement like an AprilTag.
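
To illustrate the depth-hold behavior, here is a minimal PI depth-control sketch using pymavlink. The gains, the use of VFR_HUD altitude as the depth source, and ArduSub’s MANUAL_CONTROL throttle axis for output are illustrative assumptions, not Halo’s exact implementation.

```python
# Sketch of a PI depth hold in the style described above. Assumes
# ArduSub's MANUAL_CONTROL throttle axis (0-1000, 500 = neutral);
# gains are placeholders, not Halo's tuned values.
import time
from pymavlink import mavutil

class PIController:
    """Simple proportional-integral controller."""
    def __init__(self, kp, ki, setpoint):
        self.kp, self.ki = kp, ki
        self.setpoint = setpoint
        self.integral = 0.0

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral

def get_depth(conn):
    # ArduSub reports depth as a negative altitude in VFR_HUD.
    return -conn.recv_match(type="VFR_HUD", blocking=True).alt

conn = mavutil.mavlink_connection("udpin:0.0.0.0:14550")
conn.wait_heartbeat()

# Hold whatever depth the vehicle reports at startup, as the node does.
pi = PIController(kp=150.0, ki=10.0, setpoint=get_depth(conn))

last = time.time()
while True:
    now = time.time()
    effort = pi.update(get_depth(conn), now - last)
    last = now
    # Positive error = too shallow (depth is positive-down), so drive
    # the throttle below neutral to descend; clamp to the valid range.
    z = int(max(0, min(1000, 500 - effort)))
    conn.mav.manual_control_send(conn.target_system, 0, 0, z, 0, 0)
```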

The camera node reads in the camera stream from the robot via UDP. It then converts the GStreamer video feed to an OpenCV stream and then to a ROS2 sensor_msgs Image message, which is published to a ROS topic for use by other nodes.
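
A minimal sketch of such a node is shown below. It assumes the ArduSub default of H.264 video on UDP port 5600, an OpenCV build with GStreamer support, and an illustrative topic name.

```python
# Camera node sketch: UDP/GStreamer -> OpenCV -> sensor_msgs/Image.
import cv2
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

# GStreamer pipeline for an H.264 stream on UDP port 5600 (ArduSub's
# default video port); adjust to match the actual stream settings.
PIPELINE = (
    "udpsrc port=5600 ! application/x-rtp,payload=96 ! "
    "rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! appsink"
)

class CameraNode(Node):
    def __init__(self):
        super().__init__("camera")
        self.pub = self.create_publisher(Image, "halo/image_raw", 10)
        self.bridge = CvBridge()
        self.cap = cv2.VideoCapture(PIPELINE, cv2.CAP_GSTREAMER)
        self.timer = self.create_timer(1.0 / 30.0, self.publish_frame)

    def publish_frame(self):
        ok, frame = self.cap.read()  # grab an OpenCV BGR frame
        if ok:
            # Convert to a sensor_msgs/Image and publish to the topic.
            self.pub.publish(self.bridge.cv2_to_imgmsg(frame, "bgr8"))

def main():
    rclpy.init()
    rclpy.spin(CameraNode())

if __name__ == "__main__":
    main()
```

Any other node can then subscribe to the image topic for perception tasks, such as the AprilTag measurement mentioned above for the forward controller.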