Projects
JOHNNY 5, THE AUTONOMOUS RACE CAR - FEBRUARY 2017 TO MAY 2017
Information
I worked on a team to create an autonomous race car. The car uses a particle filter to localize itself in the CSAIL basement, a variety of computer vision algorithms to recognize unexpected cones and avoid them, path planning to map a long-term path around the basement, and a pure pursuit controller to drive at high speeds (pure pursuit is a path-tracking algorithm that takes its name from aircraft pursuit guidance and is widely used in mobile robotics).
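The localization step follows the standard particle filter loop: move every particle by the odometry, weight it by how well it explains the sensor reading, then resample. Here is a minimal illustrative sketch of one update step, not the team's actual implementation; the motion and sensor models (a single range measurement to a landmark at the origin) are stand-ins:

```python
import math
import random

def particle_filter_step(particles, control, measurement, motion_noise=0.1):
    """One update step. particles: list of (x, y, heading) hypotheses.

    control: (dx, dy) odometry; measurement: range to a landmark at (0, 0).
    Returns a resampled particle set of the same size.
    """
    # 1. Motion update: shift each particle by the odometry, plus noise.
    dx, dy = control
    moved = [(x + dx + random.gauss(0, motion_noise),
              y + dy + random.gauss(0, motion_noise),
              th) for x, y, th in particles]
    # 2. Weight each particle by how well it explains the range measurement.
    weights = []
    for x, y, _ in moved:
        expected = math.hypot(x, y)          # predicted range from this pose
        weights.append(math.exp(-(expected - measurement) ** 2 / 0.5))
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 3. Resample particles in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))
```

After a few iterations with real motion and sensor models, the particle cloud collapses around the car's true pose.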
Go to https://github.mit.edu/pages/rss2017-team5/ to learn more (click on titles to find more specific information).
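The pure pursuit controller mentioned above picks a "lookahead" point on the planned path and steers along the circular arc that passes through it. A minimal sketch of the idea (hypothetical, not the team's code; `wheelbase` and the bicycle steering model are assumptions):

```python
import math

def pure_pursuit_steering(pose, path, lookahead, wheelbase):
    """Compute a steering angle toward a lookahead point on the path.

    pose: (x, y, heading) of the car; path: list of (x, y) waypoints.
    """
    x, y, theta = pose
    # Pick the first waypoint at least `lookahead` away from the car.
    goal = path[-1]
    for px, py in path:
        if math.hypot(px - x, py - y) >= lookahead:
            goal = (px, py)
            break
    # Transform the goal point into the car's body frame.
    dx, dy = goal[0] - x, goal[1] - y
    local_x = math.cos(-theta) * dx - math.sin(-theta) * dy
    local_y = math.sin(-theta) * dx + math.cos(-theta) * dy
    # Curvature of the arc through the goal: 2 * lateral offset / distance^2.
    dist = math.hypot(local_x, local_y)
    curvature = 2.0 * local_y / (dist ** 2)
    return math.atan(wheelbase * curvature)  # bicycle-model steering angle
```

A goal straight ahead yields zero steering; goals to the left or right yield positive or negative angles. Tuning the lookahead distance trades path-hugging accuracy at low speed against stability at high speed.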
Results
Here is a video of the car navigating around unexpected cones, with a higher level plan of completing a loop around the CSAIL basement.
Here is a video of the car completing the loop as fast as possible.
Source
You can find the source for all of our experiments with the car on GitHub: github.mit.edu/rss2017-team5
AMBAGES, THE MAZE SOLVER - JUNE 2016 TO AUGUST 2016
Information
Ambages is an Android app. You use it by taking photos of a maze (it can be hand-drawn) with your phone's camera; the app then solves the maze.
Results
Here is a set of images that show the app working.
First, the app tells you to take a picture of a maze that you drew, and designate start and end points (in red).
Then the app uses some simple CV techniques to recognize the borders of the maze and a basic search algorithm to solve it. The app shows a loading screen while trying to solve the maze.
Finally, the app presents you with its solution (shown with a thin red line).
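The border recognition is project-specific, but the "basic search algorithm" step can be sketched as a breadth-first search over the binary grid that the CV stage produces. This is a hypothetical minimal version, not the app's actual code:

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Breadth-first search over a binary maze grid.

    grid: list of strings, '#' = wall, anything else = free cell.
    Returns the shortest list of (row, col) cells from start to goal,
    or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the path by walking parents back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

BFS guarantees the shortest path in moves, which is why the solution line the app draws hugs the direct route through the maze.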
ARM MARK 1, THE 3D-PRINTED SMART ARM - JUNE 2016 TO AUGUST 2016
Information
I created a Java API to give autonomy to a 3D-printed robot arm. The API includes search algorithms that tell the arm how to reach a given location in space, QR code recognition, face and other body-feature recognition, ball recognition, and more. It sets up communication via a server to a Raspberry Pi (with a variety of sensors, such as cameras) that is mounted on the arm. All of the computations that the arm must perform are offloaded over a Wi-Fi network to the owner's computer in real time; the advantage is that the size of the robot's 'brain' is not constrained by the size of the arm. With a reasonable Wi-Fi speed, the arm also has fast reflexes.
On the hardware side, I extensively modified the design of an open-source 3D-printed robot arm to give it 6 DOF and a gripper; you can find the original arm here: https://www.thingiverse.com/thing:480446.
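The offloading pattern described above can be sketched as a small server on the owner's computer that answers each sensor reading from the Pi with a joint command. This is a hypothetical Python sketch of the pattern only, not the actual Java API; the JSON-lines protocol, field names, and the `plan` callback are all assumptions:

```python
import json
import socket

def serve_commands(host="0.0.0.0", port=5005, plan=None):
    """Accept one Pi connection and answer each sensor frame with a command.

    plan: callable mapping one sensor reading (dict) to a command (dict);
    this is where the heavy vision/search computation would live.
    """
    plan = plan or (lambda reading: {"joint_angles": [0.0] * 6, "grip": 0.0})
    with socket.create_server((host, port)) as srv:
        conn, _addr = srv.accept()
        with conn:
            rfile = conn.makefile("r")   # incoming sensor readings
            wfile = conn.makefile("w")   # outgoing joint commands
            for line in rfile:           # one JSON reading per line
                reading = json.loads(line)
                command = plan(reading)  # offloaded computation happens here
                wfile.write(json.dumps(command) + "\n")
                wfile.flush()
```

The Pi side would mirror this loop: serialize each camera frame and encoder reading, send it, and apply the command that comes back. Keeping the protocol line-oriented keeps the round trip short, which is what makes the "fast reflexes" over Wi-Fi plausible.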
Results
Here is a video of the arm fetching and throwing a block, using the API.