As I mentioned in my post detailing my implementation of xkcd/941, I had a pair of Kinects from an earlier project.
Back when the Kinect first came out, I grabbed a pair - like many other people, I had ideas for it. Then, as usual with new tech, the limitations came to light, some of the possibilities were explored, and hype quieted down.
So, why did I need two?
Well: I've always wanted to be able to see a top-down view of my car's surroundings. One Kinect, obviously, would not be enough. I figured: if I could get it working with two, I could slap a bunch around my car and get them all working together.
So I got started. Here's what I wanted to make:
The black box is your car; gray is the safe driving area.
Green boxes are other cars, light green is curb level,
and blue dots are trees, people, bikes, or other small, moving obstacles.
I wanted to replace the back-up camera with that. If you can see exactly how far your car is from the curb or another car, you won't need to guesstimate through the fish-eye lens of your back-up camera or the "closer than they appear" margin of error in your mirrors.
Step 1 was to get a visual display of the depth sensor:
|A view of my kitchen from my desk, with a camera tripod in the middle.|
Then I buckled down and got to work. The mathy stuff: converting distance as seen from the camera into a top-down plot:
|A view of my living room from my desk. Top down (left) and camera's eye (right).|
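The conversion can be sketched roughly like this. This is a minimal Python sketch, not the actual CarBoundary code: it assumes depth readings already converted to meters and the commonly cited ~57 degree horizontal field of view for the Kinect's depth camera.

```python
import math

# Rough sketch: project one row of Kinect depth readings onto a
# top-down (x, z) plane. Assumes depths are already in meters and a
# ~57 degree horizontal FOV (the commonly cited Kinect depth spec).
H_FOV = math.radians(57.0)
WIDTH = 640  # Kinect depth image width in pixels

def depth_row_to_topdown(depths):
    """Convert one row of per-pixel depths into (x, z) ground-plane points."""
    points = []
    for col, d in enumerate(depths):
        if d is None:  # no reading (too close, too far, or IR washed out)
            continue
        # Angle of this pixel column relative to the optical axis.
        angle = (col / (WIDTH - 1) - 0.5) * H_FOV
        x = d * math.sin(angle)   # sideways offset
        z = d * math.cos(angle)   # forward distance
        points.append((x, z))
    return points
```

Do that for every row of the depth image and you have a cloud of points to flatten into the top-down view.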
The next step was to take two Kinects and use 'em to see around each other:
|Annotated display. Darker blue = taller object|
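The "darker blue = taller" shading boils down to binning the projected points into a grid and keeping the tallest height per cell. A hypothetical sketch (the 10 cm cell size is my assumption, not the project's):

```python
# Bin (x, z, height) points into a top-down grid, keeping the tallest
# height seen in each cell - that max height drives the shading.
CELL = 0.1  # grid resolution in meters (illustrative, not from the project)

def build_height_map(points):
    """points: iterable of (x, z, height) in meters -> {(col, row): max_height}."""
    grid = {}
    for x, z, h in points:
        key = (int(x // CELL), int(z // CELL))
        if h > grid.get(key, float("-inf")):
            grid[key] = h
    return grid
```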
When I moved the cameras (and updated their location values appropriately), the display could render the table as a sort of fuzzy circle: the two Kinects were successfully working together to see around obstacles.
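Merging two sensors comes down to rotating and translating each camera's points by its mounting pose into one shared frame. A sketch, with made-up parameter names (this is not the CarBoundary source):

```python
import math

def to_world(points, cam_x, cam_z, yaw):
    """Transform one sensor's local (x, z) points into the shared frame.

    cam_x, cam_z: where the sensor sits in the shared frame (meters).
    yaw: which way it faces, in radians (0 = shared frame's +z axis).
    All names here are illustrative assumptions, not the project's API.
    """
    out = []
    for x, z in points:
        wx = cam_x + x * math.cos(yaw) + z * math.sin(yaw)
        wz = cam_z - x * math.sin(yaw) + z * math.cos(yaw)
        out.append((wx, wz))
    return out
```

Run each Kinect's point cloud through its own pose, dump everything into one height map, and the blind spot of one camera gets filled in by the other.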
At this point, I figured it was ready for a road test. I took it out, set it on my car, plugged everything in, and ...
The maximum range fizzled to 2 meters: the sun, even on an overcast day, overpowered anything beyond that. Other than that, it worked surprisingly well. At nighttime, it showed the nearby cars in the parking lot fairly clearly - they just looked like weirdly distorted rectangles. (Note to self: go out and take a video in the parking lot, just for the blog.)
So, back to the drawing board. And this time, starting from scratch with the specifications and giving a lot more thought to the capabilities of the Kinect.
* Generating the pictures above chewed up a huge percentage of my CPU. Sure, there was a lot of optimization I could do, but enough to make it feasible on a _low power_ chip designed for use in a car? Probably not. And not cheaply.
* On top of the raw measurements, I'd need a geometry engine to fit basic shapes to the different objects, so I could render something that isn't confusing for the driver.
* To get any sort of real coverage, I'd have to cover the car in Kinects. I was originally thinking 8: two at each corner, assuming the technology could handle a 90 degree horizontal angle, or failing that, 80 degrees. But the Kinect isn't time-of-flight, like I originally assumed - it's structured light: it projects an infrared pattern and estimates depth from how that pattern is distorted. I'm guessing the widest angle the technology behind the Kinect could handle would be 45 degrees, and that's pushing it.
* The sun kills almost all hope for the IR tech in the daytime.
* libfreenect requires that each Kinect be on a _separate_ USB _controller_ - one Kinect can saturate the bus on its own. My laptop has two. Putting multiple Kinects behind a hub would only confuse libfreenect.
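The coverage bullet is easy to sanity-check with back-of-envelope arithmetic (my numbers: the ~57 degree figure is the commonly cited Kinect depth FOV, not something measured here):

```python
import math

KINECT_HFOV = 57  # degrees, commonly cited depth-camera spec

# Even with the full nominal FOV, a blind-spot-free ring around the car
# needs this many sensors:
sensors_needed = math.ceil(360 / KINECT_HFOV)

# With only ~45 usable degrees (the pessimistic estimate above), it's worse:
sensors_pessimistic = math.ceil(360 / 45)
```

Either way, it lands right around the 8 Kinects I was originally picturing - before accounting for the sun, the CPU, or the USB controllers.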
Put simply, it'd take a lot of funding, research, hardware, and a different mass distance-measuring technology to get something I could turn into a viable product. And somebody with business sense to turn it into something other than a cool lab project or blog post :D.
If you want to give my CarBoundary a shot, help yourself! As usual, the project is up on GitHub: https://github.com/captdeaf/CarBoundary