This is what my program looks like now! The first 3 input modes work well, with the 4th (IPCams) being worked on as I type. I'm going to aim for processing 10 streams at most, to start with. I also need some sort of pretty feedback for Open Folder, as the only feedback currently comes through qDebug(), which is very much against good interactive-UI practice.
I've got my work cut out for me!
If you click on Open Video File, you get a Qt file dialog. Select the video you want to process. For example…
Or if you want Webcam data processed, click on the webcam. Make sure to have a webcam plugged in. Else, crash. Yeah, I know.
I’ll be adding in file management for figuring out who people are, and more bells and whistles.
I just got done with my initial test, using the experimental FaceRecognizer library in OpenCV. This program is uWho (Github Link).
The program works pretty well. It's simple: it opens an OpenCV window and displays video content immediately. The Qt side is sparse and does nothing… yet. I'm looking at possibly adding a way to control the recognition threshold as well as the person count.
This person count isn't just counting faces; it tries to identify who someone is from previous frames. If you are unknown, your likeness is added to the collective (Ω>0 is futile) database of faces, and the model is trained on you. If you are already known, your face is added as more training data alongside your previous face data.
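That enroll-or-retrain flow can be sketched roughly like this. This is a hypothetical illustration, not uWho's actual source: the `FaceDb` class, its method names, and the confidence convention (lower is a better match, as in OpenCV's LBPH recognizer) are all my assumptions here; the real program would call OpenCV's FaceRecognizer for the prediction itself.

```cpp
#include <cassert>
#include <map>
#include <random>
#include <vector>

// Hypothetical sketch of an enroll-or-retrain flow (not uWho's actual code).
// A "face sample" is reduced to an opaque feature vector; a real build would
// get predictedId/confidence from an OpenCV FaceRecognizer's predict().
class FaceDb {
public:
    // Files the sample and returns the ID it was filed under. If the match
    // is too uncertain (confidence above threshold -- lower is better, LBPH
    // style) the person is treated as unknown: a fresh random ID is minted
    // and the sample starts a new entry. Otherwise the sample is appended
    // as extra training data for the matched ID.
    int addSample(const std::vector<float>& sample, int predictedId,
                  double confidence, double threshold) {
        if (predictedId < 0 || confidence > threshold ||
            samples_.find(predictedId) == samples_.end()) {
            int id = mintId();
            samples_[id].push_back(sample);
            return id;
        }
        samples_[predictedId].push_back(sample);
        return predictedId;
    }
    size_t knownPeople() const { return samples_.size(); }

private:
    // New faces get a random ID, retrying on collision.
    int mintId() {
        std::uniform_int_distribution<int> dist(0, 999999);
        int id;
        do { id = dist(rng_); } while (samples_.count(id));
        return id;
    }
    std::map<int, std::vector<std::vector<float>>> samples_;
    std::mt19937 rng_{42};
};
```

The random-ID-until-trained behavior matches what the screenshots show: a new face gets an arbitrary number first, and only stabilizes as more frames are folded in.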
As an example, here is uWho classifying the faces in a Google Image search for “faces”.
Key: each face gets 2 numbers. The one in the upper left is the index of the face found by faceClassifier. The number in the upper right, if displayed, is the ID the machine learning algorithm assigns when it detects a unique face; each new face is initially given a random number.
I made this for a convention we are hosting in Bloomington, Indiana called Makevention(Link to upcoming convention). We needed a way to count how many unique visitors show up, and this seems to be the best way. However, we still need to discuss it at our next meetings to see whether this solution is appropriate, and if so, how we publicly disclose it and handle all the other privacy issues. We're not Facebook, we're a hackerspace! We get it!
Regarding badness: my program only profiles and saves the unique data from a face, locally. It's saved in the project directory as face.xml. This program does nothing online. I'm going for the 'do no evil' idea here. Facebook and its like already have this kind of thing, and they use it against users. My goal is for this to be usable in a multitude of areas: if you have a building and a club, you can watch when people enter and leave. It's also good for conventions, provided you tell people!
Opto-isolator logic. How peculiar and awesome at the same time.
http://ift.tt/1AMKAlM This is my project that competes (freely) with Webcam Zone Trigger. Webcam Zone Trigger failed me, so I built this in Processing.
This is our Tech showcase, and my Trigger Zone shows off some of the features we currently have. We have a few Oculus Rifts (DK1 and 2). We also have a Liquid Galaxy setup that I built.
Get the software on my Github at:
I recently tried to use Webcam Zone Trigger for a work project.
This software is for making interactive demo walls: you walk somewhere around a room, and the monitor updates to the appropriate website, or sends an email, or saves the picture… or something. Essentially, it's IFTTT for video data. We were using it for an interactive demo where someone can walk up to new devices and technology, and a big monitor updates with the content we wrote about it, or autoplays a YouTube video.
We already paid for a license ($99 or something like that). And guess what? It stunk.
Well, why did it fail my expectations? When I hooked it up to a Kinect, it displayed only the upper-left 320×240 quadrant of the 640×480 depth image, stretched back up to 640×480. Below is what it SHOULD have looked like (or what Freenect does seamlessly):
Ok. That looks sane. (And BTW, picture shamelessly ripped off of http://www.mindtreatstudios.com/how-its-made/kinect-real-life-occlusion-rendered-content .) What Webcam Zone Trigger did was the following:
Checked the Kinect SDK and drivers? Using 1.8, and it works well with other Kinect apps.
Checked OS integrity? Reinstalled Win7 x64 from scratch. Still the same problem.
Checked the Kinect on Linux? Perfect. No hiccups, no problems.
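For the curious, the symptom above can be sketched in a few lines. This is purely illustrative (it is obviously not Webcam Zone Trigger's actual code): a nearest-neighbor mapping where every output pixel reads from the source pixel at half its coordinates reproduces exactly that "upper-left quadrant blown up 2x" look.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative reproduction of the bug's visual symptom: instead of showing
// the full w-by-h depth frame, the output is the upper-left quadrant scaled
// 2x. Each output pixel (x, y) reads source pixel (x/2, y/2).
std::vector<uint16_t> quadrantStretch(const std::vector<uint16_t>& src,
                                      int w, int h) {
    std::vector<uint16_t> dst(src.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            dst[y * w + x] = src[(y / 2) * w + (x / 2)];
    return dst;
}
```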
So the problem is Webcam Zone Trigger. So, what can I do to re-make this program so it opens a webpage when someone walks through a trigger? Well, I should be able to do it in Processing; I've done plenty of demos and proofs of concept in that language before. So I download it and “install” (meaning drag the unzipped directory to the desktop). Then I run Processing, go to Sketch > Import Library > Add Library, and search for SimpleOpenNI. Remember, I'm doing this with the MS SDK on a Windows machine; otherwise, I'd be using Open Kinect for Processing (libfreenect).
Aaaaand… Voila! Trigger Zone.
After preliminary tests, I made 4 zones around our tech showcase. When you walk up to one, the main big screen pops up the details of that device. I also have the browser set up to use only a single tab and a single window.
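The core of a zone check like this is simple. Here is a minimal sketch of the idea (names, structure, and thresholds are my own for illustration, not the Trigger Zone source, which is in Processing): a zone is a rectangle in the depth image, and it "fires" when enough pixels inside it fall within a near/far depth band.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical zone-trigger check (illustrative names and thresholds).
// A zone is a rectangle in depth-image coordinates plus a depth band;
// it fires when at least minHitPixels of its pixels land in that band.
struct Zone {
    int x, y, w, h;          // rectangle in the depth image
    uint16_t nearMm, farMm;  // depth band that counts as a hit, in mm
};

bool zoneTriggered(const std::vector<uint16_t>& depth, int frameW,
                   const Zone& z, int minHitPixels) {
    int hits = 0;
    for (int row = z.y; row < z.y + z.h; ++row)
        for (int col = z.x; col < z.x + z.w; ++col) {
            uint16_t d = depth[row * frameW + col];
            if (d >= z.nearMm && d <= z.farMm) ++hits;
        }
    return hits >= minHitPixels;
}
```

Requiring a minimum pixel count (rather than a single pixel) keeps sensor noise from firing the action; the "open a webpage in the single tab" part is then just whatever you launch when this returns true.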
My next update will be to add a sound file to play for each zone. That should be very simple. The goal is to find sounds that are delicate and informative; long-winded diatribes about tech can suck if you're not interested. Perhaps a 'ding' would suffice.
But so far, SUCCESS!
Like most people around this time of year, I've been very busy with family obligations. But aside from the holiday season…
I GOT COOL STUFF!
To start, I bought 2 hot ends for my Prusa i2 RepRap printer, $20 total for the pair. One is a 0.4 mm end and the other is a 0.2 mm end.
Next up is a 9DoF IMU. The price was $6.28, and it's a breakout board for the InvenSense MPU-9150. The board is super-small; 8 of the chips could fit on my pinky fingernail. The company also included 2 headers: one straight, the other 90 degrees.
The only bad thing is that the company (InvenSense, NOT the Chinese dropshipper) misrepresented its ability to output Euler angles and other processed data. If you pay a few thousand dollars, you can get permission to write your own firmware for the 9150 that might be able to do that. But for now, you do the math on the CPU you hook it up to.
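For two of the three Euler angles, "the math in the CPU" is the standard tilt approximation from the raw accelerometer alone. This is the textbook formula, not anything from InvenSense's firmware, and it only works while the board is roughly stationary (so gravity dominates); yaw can't be recovered this way at all — that needs the gyro and/or magnetometer.

```cpp
#include <cassert>
#include <cmath>

// Standard tilt-angle approximation from raw accelerometer readings (in g).
// Valid only when the board is near-stationary, so the measured vector is
// approximately gravity. Yaw is not observable from the accelerometer.
struct Tilt { double pitchDeg, rollDeg; };

Tilt tiltFromAccel(double ax, double ay, double az) {
    const double rad2deg = 180.0 / std::acos(-1.0);
    Tilt t;
    t.rollDeg  = std::atan2(ay, az) * rad2deg;
    t.pitchDeg = std::atan2(-ax, std::sqrt(ay * ay + az * az)) * rad2deg;
    return t;
}
```

Flat on a table (0, 0, 1 g) this gives 0° pitch and 0° roll; standing on its x-axis edge it reads ±90° pitch, which is a quick sanity check when first wiring the board up.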
Third, I put in an order for mini Arduino clones as well. I don't know what to say other than they work, and well! They're breadboardable, which makes prototyping easy; no having to bolt down a standard Arduino somewhere. It uses an Atmel ATmega328P-AU (marked MEGA328P AU 1437). The reset switch seems a little loose, but it's one of those SMT switches; a dab of superglue will fix that. On the underside you see the USB transceiver chip, the same as on my other Arduino clones: another HL-340.
Lastly, I received my GPS chip. And holy smokes, it's AWESOME!! It's also tiny and lightweight. It came in a box 30x its size, and _WELL_ packed. It has a wiring harness that plugs into the on-board port, as well as through-holes for easy soldering.
The seller is having some supply issues: even though I bought it for $10, he has to sell a 'lot' of 70 in order to get more. So, bummer on that front.