
Planet Arduino

Archive for the ‘camera’ Category

[JBumstead] didn’t want an ordinary microscope. He wanted one that would show the big picture, and not just in a euphemistic sense, either. The problem, though, is one of resolution. The higher the resolution in an image — typically — the narrower the field of view given the same optics, which makes sense, right? The more you zoom in, the less area you can see. His solution was to build a microscope from a conventional camera and a motion stage that captures multiple high-resolution photographs, which are then stitched together into a single image. This allows his microscope to take a picture of a 90×60 mm area with a resolution of about 15 μm. In theory, the resolution might be as good as 2 μm, but it is hard to measure accurately at that scale.

As an Arduino project, this isn’t that difficult. It’s akin to a plotter or an XY table for a 3D printer — just some stepper motors and linear motion hardware. However, the base needs to be very stable. We learned a lot about the optics side, though.
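The scan itself is the familiar move-settle-shoot loop of any XY plotter. Here’s a minimal sketch of the idea, assuming step/dir stepper drivers and a camera shutter fired from a digital pin; all pin numbers, step counts, and delays are made-up values you would tune to your own stage:

```cpp
// Hypothetical raster-scan: photograph a grid of overlapping tiles.
const int X_STEP = 2, X_DIR = 3, Y_STEP = 4, Y_DIR = 5, SHUTTER = 6;
const int STEPS_PER_CELL = 400;   // stage travel between adjacent photos
const int COLS = 10, ROWS = 8;    // size of the photo grid

void stepAxis(int stepPin, int dirPin, int steps, bool forward) {
  digitalWrite(dirPin, forward);
  for (int i = 0; i < steps; i++) {
    digitalWrite(stepPin, HIGH);
    delayMicroseconds(500);
    digitalWrite(stepPin, LOW);
    delayMicroseconds(500);
  }
}

void setup() {
  for (int p = 2; p <= 6; p++) pinMode(p, OUTPUT);
  for (int row = 0; row < ROWS; row++) {
    for (int col = 0; col < COLS; col++) {
      delay(500);                   // let vibrations settle
      digitalWrite(SHUTTER, HIGH);  // trigger the camera
      delay(100);
      digitalWrite(SHUTTER, LOW);
      if (col < COLS - 1)           // snake across the row (boustrophedon)
        stepAxis(X_STEP, X_DIR, STEPS_PER_CELL, row % 2 == 0);
    }
    if (row < ROWS - 1)
      stepAxis(Y_STEP, Y_DIR, STEPS_PER_CELL, true);
  }
}

void loop() {}  // scan runs once in setup()
```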

Two Nikon lenses and an aperture stop made from black posterboard formed a credible 3X magnification element. We also learned about numerical aperture and its relationship to depth of field.

One place the project could improve is in the software department. Once you’ve taken a slew of images, they need to be stitched together. It can be done manually, of course, but that’s no fun. There’s also a MATLAB script that attempts to stitch the images automatically, blending the edges as it goes. According to the author, the code needs some work to be totally reliable. There are also off-the-shelf stitching solutions, which might work better.
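For the off-the-shelf route, OpenCV’s high-level stitcher is one reasonable starting point. A minimal C++ sketch follows; the file naming is our own invention, and SCANS mode is our pick because it models flat scenes related by translation, a better match for a camera moving over a specimen than the default panorama mode:

```cpp
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main() {
  // Load the tiles captured by the stage (hypothetical file names).
  std::vector<cv::Mat> tiles;
  for (int i = 0; i < 80; i++) {
    cv::Mat img = cv::imread("tile_" + std::to_string(i) + ".jpg");
    if (!img.empty()) tiles.push_back(img);
  }

  // SCANS mode: affine alignment for flat, translated scenes.
  cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::SCANS);
  cv::Mat mosaic;

  if (stitcher->stitch(tiles, mosaic) == cv::Stitcher::OK) {
    cv::imwrite("mosaic.png", mosaic);
    return 0;
  }
  return 1;  // stitching failed, e.g. too little overlap between tiles
}
```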

We’ve seen similar setups for imaging different things. We’ve even seen it applied to a vintage microscope.

When filming your projects—or day-to-day life—static shots can be fun, but having a moving perspective often looks even better. The challenge is keeping the camera pointed at your subject, which maker Saral Tayal addresses with his automated slider.

This Arduino Uno-controlled slider is powered by a pair of brushed DC motors with encoders attached for feedback. One pulls the camera along a pair of rails on a set of linear bearings, while the other adjusts the camera’s horizontal angle using trigonometry to keep a particular object in-frame. 
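The trigonometry is presumably nothing more exotic than an arctangent: given the carriage position along the rail and the subject’s fixed location, compute the pan angle that keeps the subject centered. A hypothetical version of that math, with all distances being assumptions for illustration:

```cpp
#include <math.h>

// Geometry of the shot; both values are assumed, not from Tayal's build.
const float SUBJECT_DIST_MM = 500.0;  // subject's perpendicular distance from the rail
const float SUBJECT_POS_MM  = 400.0;  // subject's position along the rail

// Given the carriage position (derived from the drive motor's encoder),
// return the pan angle, in degrees, that points the camera at the subject.
float panAngleDeg(float carriagePosMm) {
  float alongRail = SUBJECT_POS_MM - carriagePosMm;  // offset along the rail
  return atan2(alongRail, SUBJECT_DIST_MM) * 180.0 / M_PI;
}
```

The pan motor’s encoder feedback loop then only has to chase that angle as the carriage travels.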

Code and print files are available in Tayal’s write-up, and some beautiful resulting shots with an explanation of the project can be seen in the video below. 

A Raspberry Pi with a camera is nothing new. But the Pixy2 camera can interface with a variety of microcontrollers and has enough smarts to detect objects, follow lines, or even read barcodes without help from the host computer. [DroneBot Workshop] has a review of the device and he’s very enthused about the camera. You can see the video below.

When you watch the video, you might wonder how much this camera will cost. Turns out it is about $60, which isn’t cheap, but for the capabilities it offers it isn’t that much, either. The camera can detect lines, intersections, and barcodes, plus any objects you want to train it to recognize. The camera also sports its own light source and a dual servo motor drive meant for a pan-and-tilt mounting arrangement.

You can connect via USB, serial, SPI, or I2C. Internally, the camera processes at 60 frames per second and it can remember seven signatures internally. There’s a PC-based configuration program that will run on Windows, Mac, or Linux. You can even use the program to spy on the camera while it is talking to another microcontroller like an Arduino.

The camera isn’t made to take sharp photos or video; it is optimized for finding things, not for picture quality. High-quality frames take more processing power, so this is a reasonable trade. The camera does need training to find objects by color and shape. You can do the training with the PC-based software, but you can also use a self-contained procedure that relies on a button on the camera. The video shows both methods.

Once trained, you can even have an Arduino find objects. There’s a library that lets you ask how many blocks the camera currently sees, along with each block’s signature and location. Identification clearly depends heavily on color, so you’ll probably need to experiment if you have things that are different colors on different sides or have multiple colors.
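A minimal block-reading loop with the Pixy2 Arduino library looks something like this, based on the library’s documented color connected components interface; treat it as a sketch rather than gospel:

```cpp
#include <Pixy2.h>

Pixy2 pixy;

void setup() {
  Serial.begin(115200);
  pixy.init();            // talks to the camera over SPI by default
}

void loop() {
  pixy.ccc.getBlocks();   // refresh the list of detected color blocks
  for (int i = 0; i < pixy.ccc.numBlocks; i++) {
    Serial.print("signature ");
    Serial.print(pixy.ccc.blocks[i].m_signature);  // which trained object
    Serial.print(" at ");
    Serial.print(pixy.ccc.blocks[i].m_x);          // block center, in pixels
    Serial.print(",");
    Serial.println(pixy.ccc.blocks[i].m_y);
  }
}
```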

Sure, you could use a sufficiently powerful computer with OpenCV to get some of these results, but having this all in one package, usable from just about any processor, could be a real game-changer for the right kind of project. If you wanted to make a fancy line-following robot that could handle five-way intersections and barcode commands, this would be a no-brainer.

We’ve seen other smart cameras like OpenMV before. Google has a vision processor, too; it has a lot of capability but assumes you are connecting to a Pi.

When you think of image processing, you probably don’t think of the Arduino. [Jan Gromes] did, though. Using a camera and an Arduino Mega, [Jan] was able to decode input from an Arduino-connected camera into raw image data. We aren’t sure about [Jan’s] use case, but we can think of lots of reasons you might want to know what is hiding inside a compressed JPEG from the camera.

The Mega is key because, as you might expect, you need plenty of memory to deal with photos. There is also an SD card for auxiliary storage. The camera code is straightforward and saves the image to the SD card. The interesting part is the decoding.

The use case mentioned in the post is sending image data across a potentially lossy communication channel. Nearly every byte of a compressed JPEG matters to the decoder, so losing even a small part will likely render the file useless. But with raw image data, lost or corrupted bytes just cause visual artifacts (think snow on an old TV screen), and your brain is pretty good at interpreting noisy images like that.

Just to test that theory, we took one of [Joe Kim’s] illustrations, saved it as a JPEG, and corrupted just a few bytes in a single spot. You can see the before (left) and after (right) picture below. You can still make it out, but the damage from those few bytes is far-reaching.

The code uses a library that returns 16-bit RGB images. The library was meant for displaying images on a screen, but then again, it doesn’t really know what you are doing with the results. It isn’t hard to imagine using the data to detect a specific color, find edges, detect motion, or handle other simple tasks.
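For example, once you have decoded 16-bit RGB565 pixels in a buffer, a crude color detector is just bit-shifting. This is a library-agnostic sketch, and the thresholds are arbitrary values you would tune:

```cpp
#include <stdint.h>
#include <stddef.h>

// Count how many RGB565 pixels in a decoded buffer look "red".
uint16_t countRedPixels(const uint16_t *pixels, size_t count) {
  uint16_t hits = 0;
  for (size_t i = 0; i < count; i++) {
    uint8_t r = (pixels[i] >> 11) & 0x1F;  // top 5 bits: red
    uint8_t g = (pixels[i] >> 5)  & 0x3F;  // middle 6 bits: green
    uint8_t b =  pixels[i]        & 0x1F;  // bottom 5 bits: blue
    if (r > 20 && g < 24 && b < 12)        // strong red, weak green/blue
      hits++;
  }
  return hits;
}
```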

Sending the uncompressed image data might be good for error resilience, but it isn’t good for impatient people. At 115,200 baud, [Jan] says it takes about a minute to move a raw picture from the Arduino to a PC. That’s plausible: 115,200 baud works out to roughly 11.5 kB per second, so a VGA-sized 16-bit image of about 600 kB would take close to a minute.

We’ve seen the Arduino handle a single pixel at a time. Even in color. The Arduino might not be your first choice for an image processing platform, but clearly, you can do some things with it.



Who doesn’t love a good robot? If you don’t — how dare you! — then this charming little scamp might just bring the hint of a smile to your face.

SDDSbot — built out of an old Sony Dynamic Digital Sound system’s reel cover — can’t do much other than turn left, turn right, or walk forward on four DC motor-controlled legs, but it does so using the power of a Pixy camera and an Arduino. The Pixy reads colour combinations that denote stop and go commands from sheets of paper, attempting to keep the code in the center of its field of view as it toddles along. Once the robot gets close enough to the ‘go’ colour code, the paper’s orientation directs the robot to steer itself left or right — the goal being the capacity to navigate a maze. While not quite there yet, it’s certainly a handful as it is.

With a maker’s care, [Arno Munukka] takes us under the hood of his robot to show how he’s made clever use of the small space. He designed a duo of custom PCBs for the motors and stuck them near the robot’s top — you can see the resistors used to time the steps poking through the robot’s case, adding a functional cosmetic effect. The Arduino brain is stuck to the rear, the Pixy to the front, and the power boards sit snug near the base. Three USB ports pepper the robot’s posterior — a charging port, one for programming the Arduino, and a third to access the Pixy camera.

What do you think — had a change of heart regarding our future overl– uh, silicon-based friends? Yes? Well, here’s a beginner bot that will get you started.



A Python module for the AuroraWatch UK API

The first goal in the journey to create an automated all-sky camera system for AuroraWatch UK was to interface to the camera using Python. If you missed update #1 you can read it here.

One of the requirements of the camera software is to be able to change between different recording settings, for instance, in response to solar elevation and the AuroraWatch UK status level. Computing solar elevation is easily achieved with the astral module. For the AuroraWatch UK status level I began with fetching the status XML document but soon realised a much better approach was to write a Python module. The module automatically fetches the various XML documents, parses them and caches the results, both to memory and to disk.

I won't repeat the instructions for installing and using the module - that information is already given in an IPython notebook that you can view at https://github.com/stevemarple/python-aurorawatchuk/blob/master/aurorawatchuk/examples/aurorawatchuk_api.ipynb

It’s 2017 and even GoPro cameras now come with voice activation. Budding videographers, rest assured, nothing will look more professional than repeatedly yelling at your camera on a big shoot. Hackaday alumnus [Jeremy Cook] heard about this and, instead of seeing an annoying gimmick, saw possibilities. Could he automate his GoPro using Arduino-spoken voice commands?

It’s an original way to do automation, for sure. In many ways, it makes sense – rather than mucking around with trying to make your own version of the GoPro mobile app (software written by surfers; horribly buggy) or official WiFi remote, stick with what you know. [Jeremy] decided to pair an Arduino Nano with the ISD1820 voice playback module. This was then combined with a servo-based panning fixture – [Jeremy] wants the GoPro to pan, take a photo, and repeat. The Arduino sets the servo position, then commands the ISD1820 to playback the voice command to take a picture, before rotating again.
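The whole pan-speak-repeat loop fits in a few lines. Here’s a hypothetical version, assuming the ISD1820’s edge-triggered play input (P-E) is wired to a digital output and the module holds a recorded “GoPro, take a photo”; the pins and timings are guesses:

```cpp
#include <Servo.h>

const int PLAY_E_PIN = 7;  // to the ISD1820's edge-triggered play input (assumed)
Servo pan;                 // servo on the panning fixture

void setup() {
  pinMode(PLAY_E_PIN, OUTPUT);
  pan.attach(9);           // servo signal pin, also an assumption
}

void loop() {
  // Sweep in 15-degree steps, speaking the recorded command at each stop.
  for (int angle = 30; angle <= 150; angle += 15) {
    pan.write(angle);
    delay(1000);                     // let the camera stop wobbling
    digitalWrite(PLAY_E_PIN, HIGH);  // a brief pulse starts playback
    delay(50);
    digitalWrite(PLAY_E_PIN, LOW);
    delay(4000);                     // time for the speech and the GoPro to shoot
  }
}
```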

[Jeremy] reports that it’s just a prototype at this stage and works only inconsistently. This could be an issue with the intelligibility of the recorded speech, or perhaps with volume. It’s hard to argue that a voice control system will ever be as robust as remote controlling a camera over WiFi, but it just goes to show — there’s never just one way to get the job done. We’ve seen people go deeper into GoPro hacking, though — check out this comprehensive guide on how to pwn your GoPro.



Video resolution is always on the rise. The days of 640×480 video have given way to 720, 1080, and even 4K resolutions. There’s no end in sight. However, you need a lot of horsepower to process that many pixels. What if you have a small robot powered by a microcontroller (perhaps an Arduino) and you want it to have vision? You can’t realistically process HD video, or even low-grade video, with a small processor. CORTEX systems has an open source solution: a 7-pixel camera with an I2C interface.

The files for SNAIL Vision include a bill of materials and the PCB layout. There’s software for the Vishay sensors used and provisions for mounting a lens holder to the PCB using glue. The design is fairly simple. In addition to the array of sensors, there’s an I2C multiplexer which also acts as a level shifter and a handful of resistors and connectors.
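We haven’t dug into the SNAIL firmware, but reading seven identical sensors behind an I2C multiplexer generally looks like the sketch below. It assumes a TCA9548A-style mux; the sensor address and read sequence are placeholders, since the actual Vishay parts will have their own register protocol:

```cpp
#include <Wire.h>

const uint8_t MUX_ADDR    = 0x70;  // common default for TCA9548A-style muxes
const uint8_t SENSOR_ADDR = 0x10;  // placeholder sensor address

// Each sensor sits on its own mux channel because they all share one
// fixed I2C address; select a channel by writing its bit to the mux.
void selectChannel(uint8_t ch) {
  Wire.beginTransmission(MUX_ADDR);
  Wire.write(1 << ch);
  Wire.endTransmission();
}

// Placeholder read: request two bytes and combine them into one reading.
uint16_t readSensor() {
  Wire.requestFrom(SENSOR_ADDR, (uint8_t)2);
  uint16_t value = Wire.read();
  return (value << 8) | Wire.read();
}

void setup() {
  Wire.begin();
  Serial.begin(9600);
}

void loop() {
  uint16_t pixels[7];
  for (uint8_t i = 0; i < 7; i++) {  // one "pixel" per sensor
    selectChannel(i);
    pixels[i] = readSensor();
  }
  // A robot could now steer toward the brightest pixel, for example.
  for (uint8_t i = 0; i < 7; i++) {
    Serial.print(pixels[i]);
    Serial.print(i < 6 ? ' ' : '\n');
  }
  delay(100);
}
```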

Is seven pixels enough to be useful? We don’t know, but we’d love to see some examples of using the SNAIL Vision board, or other low-resolution optical sensors with low-end microcontrollers. This seems like a cheaper mechanism than Pixy. If seven pixels are too much, you could always try one.

Thanks [Paul] for the tip.




