
Planet Arduino

Archive for the ‘computer vision’ Category

Ever since he was a young boy, [Tyler] has played the silver ball. And like us, he’s had a lifelong fascination with the intricate electromechanical beasts that surround them. In his recently-completed senior year of college, [Tyler] assembled a mechatronics dream team of [Kevin, Cody, and Omar] to help turn those visions into self-playing pinball reality.

You can indeed play the machine manually, and the Arduino Mega will keep track of your score just like a regular cabinet. If you need to scratch an itch, ignore a phone call, or just plain want to watch a pinball machine play itself, it can switch back and forth on the fly. The USB camera mounted over the playfield tracks the ball as it speeds around. Whenever it enters the flipper vectors, the appropriate flipper will engage automatically to bat the ball away.
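The auto-flipper logic described above can be sketched in a few lines. This is purely illustrative and not [Tyler]'s actual code; the zone coordinates are made-up values that the real build would tune to the camera's view of the playfield.

```python
# Hypothetical auto-flipper logic: fire a flipper whenever the tracked
# ball enters that flipper's activation zone. Each zone is
# (x_min, y_min, x_max, y_max) in camera pixels (assumed values).
FLIPPER_ZONES = {
    "left":  (100, 400, 220, 470),
    "right": (260, 400, 380, 470),
}

def flippers_to_fire(ball_x, ball_y):
    """Return the list of flippers whose zone contains the ball."""
    return [
        name
        for name, (x0, y0, x1, y1) in FLIPPER_ZONES.items()
        if x0 <= ball_x <= x1 and y0 <= ball_y <= y1
    ]

print(flippers_to_fire(150, 430))  # ball over the left flipper
print(flippers_to_fire(240, 200))  # ball mid-playfield: no flip
```

In the real machine this check would run every camera frame, with the result driving the solenoids through the Arduino Mega.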

Our favorite part of this build (aside from the fact that it can play itself) is the pachinko multi-ball feature that manages to squeeze in a second game and a second level. This project is wide open, and even if you’re not interested in replicating it, [Tyler] sprinkled a ton of good info and links to more throughout the build logs. Take a tour after the break while we have it set on free play.

[Tyler]’s machine uses actual pinball machine parts, which could quickly ramp up the cost. If you roll your own targets and get creative with solenoid sourcing, building a pinball machine doesn’t have to be a drain on your wallet.

Ever wish you could augment your sense of sight?

[Nick Bild]’s latest hack helps you find objects (or people) by locating their position and tracking them with a laser. The device, dubbed Artemis, latches onto your eyeglasses and can be configured to locate a specific object.

Images collected from the device are streamed to an NVIDIA Jetson AGX Xavier board, which uses an SSD300 (Single Shot MultiBox Detection) model to locate objects. The model was pre-trained on the COCO dataset to recognize and localize 80 different object types given input from images thresholded in OpenCV. Once the desired object is identified and located, a laser diode activates.

Probably due to the current thresholds, the demos mostly work on objects placed farther apart against a neutral background. It's an interesting look at applications combining computer vision with physical devices to augment experiences, rather than simply processing and analyzing data.

The device uses two servos for controlling the laser: one for X-axis control and the other for Y-axis control. The controls are executed from an Adafruit Itsy Bitsy M4 Express microcontroller.
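The core of aiming the laser is mapping a detected bounding box to angles for those two servos. Here is a minimal sketch of that mapping, not [Nick Bild]'s actual code; the frame size and servo range are assumed values.

```python
# Illustrative detection-to-servo mapping: point a pan/tilt laser at the
# center of a detected bounding box. Resolution and servo travel are
# assumptions, not values from the Artemis build.
FRAME_W, FRAME_H = 640, 480    # assumed camera resolution
SERVO_MIN, SERVO_MAX = 0, 180  # typical hobby-servo range in degrees

def box_to_servo_angles(x0, y0, x1, y1):
    """Map the center of a bounding box to (pan, tilt) servo angles."""
    cx = (x0 + x1) / 2
    cy = (y0 + y1) / 2
    pan = SERVO_MIN + (cx / FRAME_W) * (SERVO_MAX - SERVO_MIN)
    tilt = SERVO_MIN + (cy / FRAME_H) * (SERVO_MAX - SERVO_MIN)
    return round(pan), round(tilt)

# An object centered in the frame maps to both servos at mid-travel.
print(box_to_servo_angles(280, 200, 360, 280))  # → (90, 90)
```

The angles would then be sent to the Itsy Bitsy M4, which drives the servos and switches the laser diode.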

Perhaps with a bit more training, we might not have so much trouble with “Where’s Waldo” puzzles anymore.

Check out some of our other sunglasses hacks, from home automation to using LCDs to lessening the glare from headlights.

A Raspberry Pi with a camera is nothing new. But the Pixy2 camera can interface with a variety of microcontrollers and has enough smarts to detect objects, follow lines, or even read barcodes without help from the host computer. [DroneBot Workshop] has a review of the device and he’s very enthused about the camera. You can see the video below.

When you watch the video, you might wonder how much this camera will cost. Turns out it is about $60, which isn't cheap, but for the capabilities it offers it isn't that much, either. The camera can detect lines, intersections, and barcodes, plus any objects you want to train it to recognize. The camera also sports its own light source and a dual servo motor drive meant for a pan-and-tilt mounting arrangement.

You can connect via USB, serial, SPI, or I2C. Internally, the camera processes at 60 frames per second and it can remember seven signatures internally. There’s a PC-based configuration program that will run on Windows, Mac, or Linux. You can even use the program to spy on the camera while it is talking to another microcontroller like an Arduino.

The camera isn’t made to take sharp photos or video; it is optimized for finding things, not for picture quality. High-quality frames take more processing power, so this is a reasonable trade. The camera does need training to find objects by color and shape. You can do the training with the PC-based software, or with a self-contained procedure that relies on a button on the camera. The video shows both methods.
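Color-signature matching of the kind the Pixy2 performs internally can be illustrated with a toy example (the real firmware's algorithm differs and is more robust): classify a pixel against trained color signatures by nearest distance.

```python
# Toy illustration of color-signature matching: compare a pixel's RGB
# value against trained signatures and return the closest match within
# a threshold. The signatures and threshold here are invented.
TRAINED_SIGNATURES = {
    1: (220, 40, 40),   # hypothetical red game piece
    2: (40, 200, 60),   # hypothetical green line marker
}

def match_signature(r, g, b, max_dist_sq=8000):
    """Return the best-matching signature id, or None if nothing is close."""
    best_id, best_d = None, max_dist_sq
    for sig_id, (sr, sg, sb) in TRAINED_SIGNATURES.items():
        d = (r - sr) ** 2 + (g - sg) ** 2 + (b - sb) ** 2
        if d < best_d:
            best_id, best_d = sig_id, d
    return best_id

print(match_signature(210, 50, 45))    # near the red signature → 1
print(match_signature(128, 128, 128))  # gray: no signature → None
```

The camera groups adjacent matching pixels into "blocks" and reports those, which is why training against distinctive colors matters so much.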

Once trained, you can even have an Arduino find objects. There’s a library that reports how many items the camera currently sees, along with each block’s identity and location. The identification depends heavily on color, so you’ll probably need to experiment with objects that are different colors on different sides or have multiple colors.

Sure, you could use a sufficiently powerful computer with OpenCV to get some of these results, but having this all in one package, usable from just about any processor, could be a real game-changer for the right kind of project. If you wanted to make a fancy line-following robot that could handle 5-way intersections and barcode commands, this would be a no-brainer.

We’ve seen other smart cameras like OpenMV before. Google has a vision processor for the Pi, too. It has a lot of capability but assumes you are connecting to a Pi.

Computer vision has traditionally relied on an assortment of rather involved components. On the other hand, everything you need to do this complicated task is readily available on an Android phone. The clever setup seen in the video here uses a smartphone to capture and process images, then send out a signal over Bluetooth to tell which way the device needs to be adjusted in order to focus on a nearby face.

An HC-05 Bluetooth module receives this signal and passes it to two servo motors via an Arduino Nano, moving the phone left/right and up/down.
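The phone-side decision logic reduces to comparing the detected face's center against the frame center and emitting a movement command. Here is a hedged sketch of that idea; the frame size, dead-zone width, and command names are assumptions, not the CircuitDigest project's actual values.

```python
# Illustrative face-tracking direction logic: decide which way the servos
# should move to re-center a detected face. All constants are assumed.
FRAME_W, FRAME_H = 480, 640  # assumed portrait phone frame
DEAD_ZONE = 40               # pixels of tolerance before moving

def track_command(face_cx, face_cy):
    """Return (horizontal, vertical) commands; '' means 'hold still'."""
    horiz = vert = ""
    if face_cx < FRAME_W / 2 - DEAD_ZONE:
        horiz = "LEFT"
    elif face_cx > FRAME_W / 2 + DEAD_ZONE:
        horiz = "RIGHT"
    if face_cy < FRAME_H / 2 - DEAD_ZONE:
        vert = "UP"
    elif face_cy > FRAME_H / 2 + DEAD_ZONE:
        vert = "DOWN"
    return horiz, vert

print(track_command(240, 320))  # face centered → ('', '')
print(track_command(100, 500))  # face low-left → ('LEFT', 'DOWN')
```

In the actual project these commands travel over Bluetooth to the HC-05, and the Nano translates them into servo movements.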

You can find the Arduino code for this project on CircuitDigest, and the Android Processing code can be downloaded there as a compressed folder.

Video resolution is always on the rise. The days of 640×480 video have given way to 720, 1080, and even 4K resolutions. There’s no end in sight. However, you need a lot of horsepower to process that many pixels. What if you have a small robot powered by a microcontroller (perhaps an Arduino) and you want it to have vision? You can’t realistically process HD video, or even low-grade video with a small processor. CORTEX systems has an open source solution: a 7 pixel camera with an I2C interface.

The files for SNAIL Vision include a bill of materials and the PCB layout. There’s software for the Vishay sensors used and provisions for mounting a lens holder to the PCB using glue. The design is fairly simple. In addition to the array of sensors, there’s an I2C multiplexer which also acts as a level shifter and a handful of resistors and connectors.

Is seven pixels enough to be useful? We don’t know, but we’d love to see some examples of using the SNAIL Vision board, or other low-resolution optical sensors with low-end microcontrollers. This seems like a cheaper mechanism than Pixy. If seven pixels are too much, you could always try one.
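As a speculative answer to that question: with the seven sensors in a row, a weighted centroid of their readings gives a line-following robot its steering error. The sensor values below are invented for illustration and have nothing to do with the actual SNAIL Vision firmware.

```python
# Speculative seven-pixel line follower: compute the line's position from
# a row of 7 reflectance readings (higher = darker) as a weighted centroid.
def line_position(readings):
    """Return line position in [-1, 1], or None if no line is seen."""
    # Sensor offsets from center: indices 0..6 map to -1 .. +1.
    offsets = [(i - 3) / 3 for i in range(7)]
    total = sum(readings)
    if total == 0:
        return None  # no line under any sensor
    return sum(w * r for w, r in zip(offsets, readings)) / total

print(line_position([0, 0, 10, 80, 10, 0, 0]))  # line near center → ~0.0
print(line_position([60, 30, 0, 0, 0, 0, 0]))   # line far left → negative
```

Feed that error into a simple proportional steering loop and seven pixels start to look surprisingly useful.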

Thanks [Paul] for the tip.


Filed under: Arduino Hacks, video hacks

Dec 06

Making Fun: Color-Hunting, Christmas Tree-Controlling CheerBot

arduino, BeagleBone Black, cheerlights, christmas, computer vision, Electronics, Robot, Robotics · Comments off

I built a robot that controls my Christmas lights, and Christmas lights around the world, by roaming my house looking for colors and tweeting them to the Cheerlights service.

Read more on MAKE

Oct 04

Making Fun: Computer Vision Hair Trimmer

arduino, computer vision, Electronics, Jeff Highsmith, Making Fun · Comments off

Part of what makes me a maker is that I prefer to do things myself when I can, including cutting my own hair. The tricky part, though, is cutting a good line across the back of the neck. I set out to build a trimmer that I could blindly run up and down the back of my neck, and have a computer vision system automatically turn the trimmer on or off in accordance with its position.

Read more on MAKE


