Jetpac CTO Pete Warden blogged this week that his team has successfully ported the Deep Belief image recognition SDK to the Raspberry Pi.
This is exciting, Warden says, because it demonstrates that even tiny, cheap devices are capable of performing sophisticated computer vision tasks.
The Deep Belief Teacher App helps users teach their phone (or other small handheld device) to recognize an object by taking a short video of it. The user then teaches the application what is not the object by taking a second short video around the target, capturing everything except the object itself.
After that, the user can scan their surroundings with the camera and the app will detect when it is pointed at the target object.
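To make that teach-then-detect loop concrete, here is a rough sketch of how it might look against the SDK's C API (libjpcnn). The function names follow the SDK's published interface, but the network file name, frame counts, file paths, the JPCNN_RANDOM_SAMPLE flag, the -2 layer offset, and the 0.5 threshold are illustrative assumptions rather than details from Warden's post.

```c
#include <stdio.h>

#include "libjpcnn.h"

/* Feed one image file through the network, pull out its feature vector, and
 * hand it to the trainer with the given label (1.0 = target object, 0.0 = not).
 * The -2 layer offset (penultimate layer as features) and JPCNN_RANDOM_SAMPLE
 * follow the SDK's custom-training docs as I understand them; treat both as
 * assumptions. */
static void train_on_file(void* network, void* trainer, const char* path, float label) {
  float* predictions;
  int predictionsLength;
  char** labels;
  int labelsLength;
  void* image = jpcnn_create_image_buffer_from_file(path);
  jpcnn_classify_image(network, image, JPCNN_RANDOM_SAMPLE, -2,
                       &predictions, &predictionsLength, &labels, &labelsLength);
  jpcnn_train(trainer, label, predictions, predictionsLength);
  jpcnn_destroy_image_buffer(image);
}

int main(void) {
  /* File names and frame counts below are placeholders, not values from the article. */
  void* network = jpcnn_create_network("jetpac.ntwk");
  void* trainer = jpcnn_create_trainer();
  char path[64];
  int i;

  /* "Positive" pass: frames from the short video of the target object. */
  for (i = 0; i < 100; i++) {
    snprintf(path, sizeof(path), "positive_frame_%03d.png", i);
    train_on_file(network, trainer, path, 1.0f);
  }
  /* "Negative" pass: frames of the surroundings, everything except the object. */
  for (i = 0; i < 100; i++) {
    snprintf(path, sizeof(path), "negative_frame_%03d.png", i);
    train_on_file(network, trainer, path, 0.0f);
  }

  /* Turn the two passes into a predictor, then score a live camera frame. */
  void* predictor = jpcnn_create_predictor_from_trainer(trainer);
  float* predictions;
  int predictionsLength;
  char** labels;
  int labelsLength;
  void* frame = jpcnn_create_image_buffer_from_file("camera_frame.png");
  jpcnn_classify_image(network, frame, JPCNN_RANDOM_SAMPLE, -2,
                       &predictions, &predictionsLength, &labels, &labelsLength);
  float score = jpcnn_predict(predictor, predictions, predictionsLength);
  printf("score %.2f -> %s\n", score, (score > 0.5f) ? "target object" : "something else");

  jpcnn_destroy_image_buffer(frame);
  jpcnn_destroy_predictor(predictor);
  jpcnn_destroy_trainer(trainer);
  jpcnn_destroy_network(network);
  return 0;
}
```

In the real app the "frames" come straight from the camera feed rather than image files, but the shape of the flow is the same: extract features for positives and negatives, train a lightweight predictor on top, then score each incoming frame.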
With the Deep Belief SDK, developers can build object recognition into iOS and Android apps, and now into Raspberry Pi projects as well, effectively giving these devices the ability to see.
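For straightforward classification against the network's built-in categories, the entry point is a single call. The minimal sketch below is based on the C API as documented in the SDK's README; the network and image file names and the 0.1 confidence cut-off are placeholder assumptions.

```c
#include <stdio.h>

#include "libjpcnn.h"

int main(void) {
  /* Load the pre-trained network file that ships with the SDK (name assumed)
   * and run a single image through it. */
  void* network = jpcnn_create_network("jetpac.ntwk");
  void* image = jpcnn_create_image_buffer_from_file("dog.png");

  float* predictions;
  int predictionsLength;
  char** labels;
  int labelsLength;
  jpcnn_classify_image(network, image, 0, 0,
                       &predictions, &predictionsLength, &labels, &labelsLength);

  /* Print only the categories the network is reasonably confident about. */
  for (int i = 0; i < predictionsLength; i++) {
    if (predictions[i] > 0.1f) {
      printf("%s: %.3f\n", labels[i % labelsLength], predictions[i]);
    }
  }

  jpcnn_destroy_image_buffer(image);
  jpcnn_destroy_network(network);
  return 0;
}
```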
Warden says he has talked a lot about how "object detection is going to be commoditized and ubiquitous."
"I can process a frame in around three seconds, largely thanks to heavy use of the embedded GPU for heavy lifting on the math side. I had to spend quite a lot of time writing custom assembler programs for the Pi's 12 parallel 'QPU' processors, but I'm grateful I could get access at that low a level," he said.
The demo is a lot of fun to try out.
The company has been using deep belief networks extensively for image recognition across hundreds of millions of Instagram photos. Now it is excited to bring some of that functionality to the masses and see how developers take advantage of phones' and the Raspberry Pi's newfound ability to "see."