3PV Drone

A person tracking drone that gives users the feeling of controlling their own body in a video game from a third-person view

Introduction

Most people in our generation grew up playing open-world video games such as Fallout, Skyrim, and Grand Theft Auto that give players a third-person view of their character as they play.

Our Vision

We want to give people a chance to experience their own bodies from a third-person view (3PV). We will achieve this by using a small quadcopter drone outfitted with an inexpensive camera that wirelessly transmits video to a pair of FPV goggles worn by the user. A person would simply launch the Crazyflie drone, put on the FPV goggles, and enter the 3PV experience. In parallel with the user’s goggles, the drone’s video feed will also be received by our system’s ground station, a laptop designated to perform person tracking, relative localization, and flight control of the drone, sending flight commands back to the drone via radio.

Practical Implications

Our system could be applicable to the field of action sports, giving athletes a unique perspective of themselves as they perform. More generally, our system will give people a unique and amusing perspective on their own presence in the world, how they interact with environments, and how their senses inform their understanding of the world.

Demo

We imagine setting up a simple outdoor obstacle course for people to test the experience on, which will be tons of fun! We can also test users’ sports skills once they get acquainted with 3PV, with challenges potentially including archery, playing catch, scoring a soccer goal, or riding a scooter or bicycle.

Ethics and Responsible Use

How are you planning to engage with the Ethics and Responsible Use Statement part of the project?
Our main ethical concern is the safety hazard of a drone hovering close to the user. We will prioritize keeping the drone at a safe distance from people, minimizing the risk of collision while still giving the user an optimal third-person perspective.

Topics We Will Explore Through the Project

What topics will you explore and what will you generate? What frameworks / algorithms are you planning to explore (do your best to answer this even if things are still fuzzy)? What is your MVP? What are your stretch goals?
System schematic

Image/Video Processing and Computer Vision

We will use the computer vision library OpenCV for manipulating and processing images received at the ground station. We will likely need to apply some light image corrections to compensate for the low-resolution, noisy video feed from the analog FPV camera. We will first try to track users by dressing them in a brightly colored t-shirt and implementing a simple color-blob detection algorithm. If this approach does not satisfy our needs, we will try OpenCV’s person detection or attach an AprilTag to the user.
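
As a rough starting point, the color-blob tracker could look something like the sketch below. The HSV bounds are placeholders that we would tune for the actual shirt color and lighting, and the frame is assumed to arrive as a BGR image from the ground station’s video capture.

```python
import cv2
import numpy as np

# Sketch of the color-blob tracker: threshold the frame in HSV space around
# the shirt color, then take the largest contour as the user.
# These HSV bounds are placeholders (roughly a bright blue shirt).
LOWER_HSV = np.array([100, 120, 80])
UPPER_HSV = np.array([130, 255, 255])

def find_user_blob(frame_bgr):
    """Return ((cx, cy), area) of the largest matching blob, or None."""
    blurred = cv2.GaussianBlur(frame_bgr, (5, 5), 0)      # suppress analog video noise
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    return centroid, cv2.contourArea(blob)
```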

Relative Localization

Because the drone needs to maintain an appropriate, fixed distance and angle relative to the user throughout the experience, we will need to implement a robust visual localization algorithm that can measure the relative positioning between the user and the drone. We will use the transform data generated by this algorithm to inform our system’s motion planning.
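
One simple option, assuming the blob detector gives us the user’s centroid and apparent height in pixels, is a pinhole-camera estimate of range and bearing. The focal length, image size, and assumed user height below are placeholder calibration values.

```python
import math

# Placeholder calibration values; the real numbers come from camera calibration.
FOCAL_PX = 400.0        # focal length in pixels
USER_HEIGHT_M = 1.7     # assumed real-world height of the tracked user
IMG_W, IMG_H = 640, 480

def relative_pose(cx, cy, blob_height_px):
    """Estimate range (m) and bearing angles (rad) of the user relative to the camera."""
    distance = FOCAL_PX * USER_HEIGHT_M / blob_height_px   # similar triangles
    yaw = math.atan2(cx - IMG_W / 2, FOCAL_PX)             # horizontal offset -> bearing
    pitch = math.atan2(cy - IMG_H / 2, FOCAL_PX)           # vertical offset -> elevation
    return distance, yaw, pitch
```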

Motion planning

Given that our system can determine the relative pose of the drone with respect to the user, we will implement a motion planning algorithm that keeps an optimal distance and angle between the drone and user.
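
A minimal version of this could be a proportional controller on the range and bearing estimates from the localization step. The standoff distance and gains below are illustrative values we would tune in testing.

```python
# Proportional-control sketch for the follow behavior: drive the range and
# bearing errors toward a desired standoff distance and a centered view of the user.
TARGET_DISTANCE_M = 3.0    # desired drone-to-user distance (placeholder)
K_DIST, K_YAW = 0.5, 1.0   # proportional gains (to be tuned)

def follow_command(distance, yaw):
    """Map the relative pose to forward-velocity and yaw-rate setpoints."""
    forward_vel = K_DIST * (distance - TARGET_DISTANCE_M)   # close or open the gap
    yaw_rate = K_YAW * yaw                                   # rotate to keep the user centered
    return forward_vel, yaw_rate
```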

Radio control

We will implement low-latency radio communication to send real-time motion commands from the ground station to the drone.
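
For the Crazyflie specifically, the ground station can talk to the drone over a Crazyradio dongle using Bitcraze’s cflib. The sketch below is only a link sanity check with an example radio URI; the real setpoints will come from our motion-planning node.

```python
import time
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M/E7E7E7E7E7'  # example URI; depends on the drone's radio config

def radio_link_check():
    cflib.crtp.init_drivers()
    with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
        cf = scf.cf
        # Hover in place at 0.5 m for two seconds as a simple link test.
        for _ in range(20):
            cf.commander.send_hover_setpoint(0.0, 0.0, 0.0, 0.5)
            time.sleep(0.1)
        cf.commander.send_stop_setpoint()
```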

Timeline for milestones

Outline a rough timeline for the major milestones (following the sprint review timeline) of your project. This will mainly be useful to refer back to as we move through the project.

Sprint 1

While we wait for the video hardware of our project to arrive in the mail, we plan to work on structuring our code base and building a neatly organized ROS architecture for the ground station, including nodes for image processing, relative localization, motion planning, and radio control. We will develop and test our radio control during this sprint. We will also develop our person-detection algorithm using a webcam, since much of that work can be tested without an actual aerial platform.
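
To give a sense of how the ground-station nodes might fit together, here is a skeleton of the motion-planning node in rospy. The topic names, message layout, and gains are placeholders we have not finalized.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Point, Twist

# Skeleton of one ground-station node: the motion planner subscribes to the
# relative pose published by the localization node and publishes velocity
# commands for the radio-control node. Topic names are placeholders.

class MotionPlannerNode:
    def __init__(self):
        rospy.init_node('motion_planner')
        self.cmd_pub = rospy.Publisher('/drone/cmd_vel', Twist, queue_size=1)
        # Point fields used here as (range, bearing, _) purely for illustration.
        rospy.Subscriber('/user/relative_pose', Point, self.pose_callback)

    def pose_callback(self, msg):
        cmd = Twist()
        cmd.linear.x = 0.5 * (msg.x - 3.0)   # placeholder gain and standoff distance
        cmd.angular.z = 1.0 * msg.y          # placeholder yaw correction
        self.cmd_pub.publish(cmd)

if __name__ == '__main__':
    MotionPlannerNode()
    rospy.spin()
```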

Sprint 2

By this time we hope to have our radio control mostly set up. Once our hardware arrives, Ben will focus on setting up the camera and video stream between the camera TX and ground station, including camera calibration. Lulu will focus on motion control of the Crazyflie. We plan to split up our work, while keeping close communication and helping one another throughout the process. By the end of sprint 2 we will begin integrating our system’s sub-components.

Sprint 3

By this time we hope to have a solid video pipeline from the Crazyflie’s camera into ROS. During sprint 3 we aim to realize accurate, user-following drone movement with robust tracking.

Implementation Risks

What do you view as the biggest risks to you being successful on this project?
We are a bit worried about the resolution/noise of the analog video signal, as it may not be adequate for the type of visual localization that we aim to implement.
We expect our goal of robust user-tracking to pose our greatest challenge. We expect that it will take a lot of work and clever coding to make sure our system doesn’t lose sight of the user and wander off, and we are already thinking of solutions to this problem, including fusing multiple visual person-tracking approaches and implementing rudimentary global localization.