The idea is simple:
I am on this side of the room, but my canned beverages are on the other. What if, instead of getting up to grab one, I could just open my hand in the air and have a robot launch a can right into my palm!
A lot of the software is beyond my current understanding, but I have made valiant efforts at it nonetheless.
On the hand-identification side, I learned to work with Microsoft's SDK for the Xbox Kinect and was able to get open- and closed-hand recognition from a live video feed! What has stalled me so far is my inability to output the 3D coordinates of the hand itself, which would let me know where the hand is in 3D space relative to where the robot would be.
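For what it's worth, the Kinect SDK's skeleton tracking does report joint positions in 3D camera space (in meters), and the conversion it performs from a depth pixel to those coordinates is ordinary pinhole back-projection. Here is a minimal sketch of that math in Python; the intrinsics (fx, fy, cx, cy) and the example pixel/depth values are placeholders, not the Kinect's actual calibration:

```python
# Back-project a depth pixel (u, v) with depth z (meters) into 3D camera
# coordinates using the pinhole model. These intrinsics are assumed
# placeholder values, not a real Kinect calibration.
fx, fy = 365.0, 365.0   # focal lengths in pixels (assumed)
cx, cy = 256.0, 212.0   # principal point in pixels (assumed)

def pixel_to_camera(u, v, z):
    """Depth pixel -> (X, Y, Z) in meters, camera-relative."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# e.g. a hand detected at pixel (300, 180) with a 1.5 m depth reading
print(pixel_to_camera(300, 180, 1.5))
```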
In my notebooks there are many versions sketched out, with the primary mechanisms ranging from springs to flywheels and motors. My roommate and I have done the launch math, so really all that stands in my way is the hand positioning, and then I can bring this project much closer to reality. I go off and on with this project, but the hand positioning has continued to cause issues. I am slowly learning OpenCV, with the hope of understanding stereo cameras well enough to write the depth code entirely myself. Rough sketches of both the launch math and the stereo idea follow below.
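For context, the core of that launch math is plain projectile motion: given a target dx meters away and dy meters above the launcher, a fixed launch angle θ needs speed v = sqrt(g·dx² / (2·cos²θ·(dx·tanθ − dy))). A minimal sketch of that calculation; the fixed angle and the example numbers are assumptions for illustration, not our actual design values:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def launch_speed(dx, dy, angle_deg):
    """Speed (m/s) needed to hit a target dx meters away and dy meters
    up, launching at a fixed angle. Returns None if unreachable."""
    theta = math.radians(angle_deg)
    denom = 2 * math.cos(theta) ** 2 * (dx * math.tan(theta) - dy)
    if denom <= 0:
        return None  # no trajectory at this angle passes through the target
    return math.sqrt(G * dx ** 2 / denom)

# e.g. a hand 3 m away and 0.2 m above the launcher, 45 degree launch
print(launch_speed(3.0, 0.2, 45))  # ~5.6 m/s
```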
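As for the stereo route: the standard OpenCV recipe is to rectify a calibrated camera pair, run block matching to get a disparity map, and then convert disparity to depth via depth = focal_length × baseline / disparity. A minimal sketch of that pipeline, where the file names, focal length, and baseline are all placeholders:

```python
import cv2
import numpy as np

# Rectified frames from a calibrated stereo pair (placeholder file names)
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher; numDisparities must be a multiple of 16
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
# compute() returns fixed-point disparities scaled by 16
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

fx = 700.0        # focal length in pixels (assumed, from calibration)
baseline = 0.06   # camera separation in meters (assumed)

# depth (m) = fx * baseline / disparity, valid only where disparity > 0
depth = np.zeros_like(disparity)
valid = disparity > 0
depth[valid] = fx * baseline / disparity[valid]

# e.g. read off the depth at the pixel where the hand was detected
u, v = 320, 240   # hypothetical hand pixel
print(depth[v, u], "meters")
```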
[Screenshots: open hand detected and closed hand detected]