On 2013/10/22, I attended an informal (albeit packed) IUMakes conference. It featured a talk and demonstration about Intel's up-and-coming Perceptual Computing devices and software stack.
The demo started with a talk about makerspaces, Arduino, hacker ethic, the Leap Motion, and similar devices. From there, the speaker introduced Intel's new piece of hardware, the Creative Senz3D (for now, the Creative Interactive Gesture Camera). Details are somewhat sketchy.
The talk touched on Minority Report, gorilla arms, gestures, hand tracking, and the various problems one would have in using gesture-based systems like this. The speaker then showed off how Intel's system produces depth maps, overlays webcam color onto that depth map, and handles microphone input.
Then the downsides: it's ONLY for Windows. I believe I found out why. The demonstrator warned against opening the device, calling it a no-no due to lack of laser safety. That comment leads me to believe this is yet another laser-dot projector in the IR domain. The Kinect uses a similar technique, and it does work well. However, a working range of about 0.5-4 ft would also indicate that the dot pitch is much tighter than the Kinect's. The BIG reason it's Windows-only, though, is the API. Taken from here: http://software.intel.com/en-us/vcsource/tools/perceptual-computing-sdk
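For the curious, the depth math behind a laser-dot system like this is plain triangulation: each projected dot shifts sideways between where it was expected and where the IR camera sees it, and that shift (disparity) maps to depth. The sketch below is my own illustration of the principle, not Intel's pipeline; the focal length and baseline values are made-up placeholders.

```python
# A hedged sketch of structured-light (IR dot-pattern) depth sensing.
# The numbers below are illustrative placeholders, not the Senz3D's specs.

FOCAL_LENGTH_PX = 580.0   # assumed camera focal length, in pixels
BASELINE_M = 0.075        # assumed projector-to-camera baseline, in meters

def depth_from_disparity(disparity_px):
    """Triangulate depth (meters) from a dot's disparity (pixels)."""
    if disparity_px <= 0:
        return None  # dot not matched; no depth sample here
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# A tighter dot pitch means more dots per degree, hence more disparity
# samples and a denser depth map over a short working range.
print(depth_from_disparity(29.0))  # 1.5 (meters)
```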
~Speech recognition: Just like Google, Intel will keep its speech corpus close to its chest. There's no reason this library couldn't be used with any arbitrary microphone.
~Hand and Finger Tracking: There’s still no real reason why this requires anything past a webcam. Hand and joint detection is a software problem via OpenCV. Although, having a library abstract it away is rather nice.
~Facial Analysis: This is cool, but it also falls under the 'OpenCV software problem' heading. Even my Android phone can be set up to allow a face login… and it also requires me to blink! There's no structural scanner in that phone: it's a software problem, solved by software.
~Augmented Reality: OK, this is where the project gets cool. As demoed, it can strip the background out of a video chat and put in whatever you want. Better yet, it allows real-time stripping and recreation of an environment.
~Background Subtraction: This really follows from the above point: once we have depth data, we can do cool stuff. But it all hinges on an appropriate depth sensor, and the first-gen Kinect seems much more reasonable for hackery than this device. Depth sensors are only getting better. Don't break the bank here, guys and gals.
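To make the depth-hinges-everything point concrete, here's a toy sketch of depth-keyed background subtraction in plain Python. Everything here is mine, not Intel's SDK: the function name, the toy frame, and the 0.15 m / 1.2 m limits (roughly the 0.5-4 ft range mentioned above).

```python
# A minimal sketch of depth-based background subtraction, assuming we
# already have a per-pixel depth map (in meters) aligned with the color
# frame. Names and constants are illustrative, not part of Intel's SDK.

NEAR_LIMIT_M = 0.15   # ~0.5 ft, the near edge of the working range
FAR_LIMIT_M = 1.2     # ~4 ft, the far edge of the working range

def subtract_background(color, depth, far_cutoff=1.0):
    """Keep color pixels whose depth lies in [NEAR_LIMIT_M, far_cutoff];
    replace everything else (the 'background') with None."""
    out = []
    for color_row, depth_row in zip(color, depth):
        out.append([
            pixel if NEAR_LIMIT_M <= d <= far_cutoff else None
            for pixel, d in zip(color_row, depth_row)
        ])
    return out

# Toy 2x3 frame: 'P' = person-colored pixel, 'W' = wall-colored pixel.
color = [['P', 'P', 'W'],
         ['P', 'W', 'W']]
depth = [[0.60, 0.70, 3.0],
         [0.65, 2.90, 3.1]]

# The near 'P' pixels survive; the far wall is masked out.
print(subtract_background(color, depth))
```

With a color-only webcam, the same effect takes fragile chroma-keying or statistical background models; the depth map makes it a one-line threshold, which is why the sensor matters so much.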
There is absolutely no Mac port planned, and Linux is… "questionable". Oh, and this device is $150, with only an API.
So what I see here seems to be the newest trend in courting the hackerspaces and makerspaces: releasing peripherals that are 80% done and getting the community to finish the rest of the hard work, for free. Even the Leap Motion was $90, shipping included. Admittedly, Leap has SDKs for Mac and Windows, with an alpha SDK for Linux. Even they made a point of releasing what they had.
I then asked: "Why not put a LiPoly battery and BT 4.0 in it, treat it as a portable device, and use it with Android?" I was hushed, as that is apparently Intel's next plan for this device. Any enterprising hacker could easily see that; hell, we could do it ourselves. And if they're looking at Android, real Linux support isn't that far away.
The Intel spokesman talked about their Arduino-clone Pentium board, Intel's lack of any staying power in the Maker community, and their dwindling numbers in the mobile space (4%… ouch). And this, yet another Wintel project, shows precisely why.