
Planet Arduino

Archive for the ‘machine learning’ Category

Machine learning is starting to come online in all kinds of arenas lately, and the trend is likely to continue for the foreseeable future. What was once available only to operators of supercomputers is now within reach of anyone with a reasonably powerful desktop computer. The downsizing isn’t stopping there, though, as Microsoft is now pushing machine learning development for embedded systems.

The Embedded Learning Library (ELL) is a set of tools that lets Arduinos, Raspberry Pis, and similar boards take advantage of machine learning algorithms despite their small size and reduced capability. Microsoft intends the library to be useful to anyone and provides examples for computer vision, audio keyword recognition, and a handful of other applications. It should be expandable to any task where machine learning would benefit a small embedded system, though, so it isn’t limited to those examples.

There is one small speed bump to running a machine learning algorithm on your Raspberry Pi, though: the high processor load tends to make small SoCs overheat. But adding a heatsink and fan is something we’ve certainly seen before. Don’t let the lack of a supercomputer keep you from exploring machine learning if you see a benefit to it, and if you need more power than a single Raspberry Pi can offer, you can always build a cluster to get the task done a little faster.

Thanks to [Baldpower] for the tip!

When was the last time you poured water onto your radio to turn it on?

Designed collaboratively by [Tore Knudsen], [Simone Okholm Hansen], and [Victor Permild], Pour Reception seeks to challenge what constitutes an interface and to explore how elements of play can create a new experience for a relatively everyday object.

Lacking buttons or knobs of any kind, Pour Reception appears to be nothing more than an inert acrylic box with two glasses resting on top. A detachable instruction card cues the need for water, and pouring some into the glasses wakes the radio.

Inside, two aluminium plates acting as capacitive touch sensors are connected to an Arduino using the Tact library from Studio NAND. Wekinator, a machine learning tool, enabled [Knudsen] to program various actions to control the radio. Pouring water between the glasses changes stations, rotating and tweaking the glasses’ positions adjusts audio quality, and placing a finger in a glass mutes it temporarily.
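Wekinator talks to other programs over OSC, by default listening for input feature vectors on port 6448 at the /wek/inputs address. As a rough sketch of how a setup like this can be wired together (this is not [Knudsen]’s actual code, and the serial port and message format here are assumptions), a few lines of Python could forward the Arduino’s two capacitive readings to Wekinator:

```python
# Hypothetical bridge: read capacitive values from an Arduino over serial
# and forward them to Wekinator as OSC messages.
import serial
from pythonosc.udp_client import SimpleUDPClient

PORT = "/dev/ttyACM0"  # assumed serial port; adjust for your machine

# Wekinator listens for inputs on port 6448 at /wek/inputs by default.
client = SimpleUDPClient("127.0.0.1", 6448)

with serial.Serial(PORT, 9600, timeout=1) as board:
    while True:
        line = board.readline().decode(errors="ignore").strip()
        try:
            # Assumed format: two comma-separated readings per line.
            left, right = (float(v) for v in line.split(","))
        except ValueError:
            continue  # skip empty or malformed lines
        client.send_message("/wek/inputs", [left, right])
```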

It’s a great concept for a more engaging piece of tech, if perhaps a little unnerving to be pouring water around household electronics. Best take preventative measures before applying this idea elsewhere.

If there’s one thing that Hollywood knows about hackers, it’s that they absolutely love data visualizations. Sometimes it’s projected on a big wall (Hackers, WarGames), other times it’s gibberish until the plot says otherwise (Sneakers, The Matrix). But no matter what, it has to look cool. No hacker worth their salt can possibly work unless they’ve got an evolving Venn diagram or spectral waterfall running somewhere in the background.

Inspired by Hollywood portrayals, specifically one featured in Avengers: Age of Ultron, [Zack Akil] decided it was time to secure his place in the pantheon of hacker wall visualizations. But not content to just show meaningless nonsense on his wall, he set out to create something that was at least showing actual data.

[Zack] created a neural network to work through multi-label classification data in Python using the scikit-learn machine learning suite. The code takes the values from the neural network training algorithm and converts them to RGB colors by way of an Arduino. Each “node” in the neural network is 3D printed in translucent filament and fitted with an RGB LED module. These modules are connected to each other via side-glow fiber optic tubes, so the colors within the tubes mix depending on the colors of the nodes they are attached to. This allows for a very organic “growing” effect, as colors move through the network node-by-node.
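[Zack]’s own code isn’t reproduced here, but the training side of the idea is easy to sketch. The following is a minimal, hedged take in Python: train a small multi-label network one epoch at a time with scikit-learn, rescale its first-layer weights into the 0–255 range, and stream them to the Arduino as RGB bytes. The serial port, baud rate, and weight-to-color mapping are all assumptions:

```python
import warnings

import numpy as np
import serial
from sklearn.datasets import make_multilabel_classification
from sklearn.neural_network import MLPClassifier

warnings.filterwarnings("ignore")  # max_iter=1 triggers ConvergenceWarnings

# Toy multi-label data standing in for whatever the network is learning.
X, y = make_multilabel_classification(n_samples=200, n_features=6, n_classes=3)

# warm_start=True keeps the weights between fit() calls, so each call
# advances training by a single epoch (max_iter=1).
net = MLPClassifier(hidden_layer_sizes=(9,), max_iter=1, warm_start=True)

board = serial.Serial("/dev/ttyACM0", 115200)  # assumed port and baud rate

for epoch in range(100):
    net.fit(X, y)
    w = net.coefs_[0].ravel()  # first-layer weights, 6 * 9 = 54 values
    # Rescale into 0-255 and treat each group of three values as one
    # node's R, G, B channels (18 LED nodes in this toy configuration).
    rgb = np.interp(w, (w.min(), w.max()), (0, 255)).astype(np.uint8)
    board.write(rgb.tobytes())
```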

In the end this particular visualization doesn’t really mean anything; the data it’s working on only exists for the purposes of the visualization itself. But [Zack] succeeded in creating a practical visualization of machine learning, and if you’re the kind of person who needs to keep tabs on learning algorithms, some variation of this design may be just what you’re looking for.

If AI isn’t your thing but you still want a wall of RGB LEDs, maybe you can use this phased array antenna visualizer instead. If you’re really hip, maybe you’ll go the analog route and put a big gauge on the wall.


Filed under: Arduino Hacks, led hacks

The Jetson TX1 Cat Spotter uses advanced neural networking to recognize when there’s a cat in the room — and then starts teasing it with a laser.


The post Nvidia Jetson TX1 Cat Spotter and Laser Teaser appeared first on Make: DIY Projects and Ideas for Makers.

When it comes to farming veggies like cucumbers, the sorting process can often be just as tricky and labor-intensive as actually growing them. That’s why Makoto Koike is using Google’s TensorFlow machine learning technology to categorize the cucumbers on his family’s farm by size, shape, and color, freeing the family to focus on more important and less tedious work.

A camera-equipped Raspberry Pi 3 takes images of the cucumbers and sends them to a small-scale TensorFlow neural network. The pictures are then forwarded to a larger network running on a Linux server for more detailed classification. From there, commands are fed to an Arduino Micro that drives a conveyor belt system to handle the actual sorting, dropping each cucumber into its respective container.
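The exact code isn’t shown here, but the Pi-side loop could look something like the sketch below: run a photo through a small Keras model and send the predicted grade to the Arduino as a single byte. The model file, grade labels, image size, and serial port are all hypothetical stand-ins:

```python
import numpy as np
import serial
import tensorflow as tf
from PIL import Image

GRADES = ["2L", "L", "M", "S", "B", "C"]  # hypothetical grade labels

model = tf.keras.models.load_model("cucumber_cnn.h5")  # assumed model file
board = serial.Serial("/dev/ttyACM0", 9600)            # assumed serial port

def classify_and_sort(path):
    # Downscale the photo and normalize pixel values to the 0-1 range.
    img = Image.open(path).convert("RGB").resize((32, 32))
    x = np.asarray(img, dtype=np.float32)[None] / 255.0  # add a batch axis
    grade = int(np.argmax(model.predict(x, verbose=0)))
    board.write(bytes([grade]))  # the Arduino routes the conveyor from this
    return GRADES[grade]

print(classify_and_sort("cucumber.jpg"))
```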

You can read all about the Google AI project here, as well as see it in action below!


At Arduino Day, I talked about a project my collaborators and I have been working on to bring machine learning to the maker community. Machine learning is a technique for teaching software to recognize patterns in data, e.g. for recognizing spam emails or recommending related products. Our ESP (Example-based Sensor Predictions) software recognizes patterns in real-time sensor data, like gestures made with an accelerometer or sounds recorded by a microphone. The machine learning algorithms that power this pattern recognition are specified in Arduino-like code, while the recording and tuning of example sensor data is done in an interactive graphical interface. We’re working on building up a library of code examples for different applications so that Arduino users can easily apply machine learning to a broad range of problems.

The project is part of my research at the University of California, Berkeley and is being done in collaboration with Ben Zhang, Audrey Leung, and my advisor Björn Hartmann. We’re building on the Gesture Recognition Toolkit (GRT) and openFrameworks. The software is still rough (and Mac-only for now), but we’d welcome your feedback. Installation instructions are on our GitHub project page. Please report issues on GitHub.

Our project is part of a broader wave of projects aimed at helping electronics hobbyists make more sophisticated use of sensors in their interactive projects. Also building on the GRT is ml-lib, a machine learning toolkit for Max and Pure Data. Another project in a similar vein is the Wekinator, which is featured in a free online course on machine learning for musicians and artists. Rebecca Fiebrink, the creator of Wekinator, recently participated in a panel on machine learning in the arts and taught a workshop (with Phoenix Perry) at Resonate ’16. For non-real time applications, many people use scikit-learn, a set of Python tools. There’s also a wide range of related research from the academic community, which we survey on our project wiki.

For a high-level overview, check out this visual introduction to machine learning. For a thorough introduction, there are courses on machine learning from Coursera and Udacity, among others. If you’re interested in a more arts- and design-focused approach, check out alt-AI, happening in NYC next month.

If you’d like to start experimenting with machine learning and sensors, an excellent place to get started is the built-in accelerometer and gyroscope on the Arduino or Genuino 101. With our ESP system, you can use these sensors to detect gestures and incorporate them into your interactive projects!
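ESP packages that whole workflow up for you, but the underlying idea can be illustrated in a few lines of Python. The sketch below is not how ESP works internally; it just shows the same example-based pattern matching under stated assumptions: the board prints “ax,ay,az” accelerometer lines over serial, each gesture is a fixed window of samples, and a nearest-neighbor classifier compares new windows to recorded examples.

```python
import numpy as np
import serial
from sklearn.neighbors import KNeighborsClassifier

WINDOW = 50  # accelerometer samples per gesture window

def read_window(port):
    """Collect one window of samples and reduce it to simple features."""
    samples = []
    while len(samples) < WINDOW:
        vals = port.readline().decode(errors="ignore").strip().split(",")
        if len(vals) != 3:
            continue  # skip empty or malformed lines
        try:
            samples.append([float(v) for v in vals])
        except ValueError:
            continue
    w = np.array(samples)
    # Per-axis mean and standard deviation as a crude gesture signature.
    return np.concatenate([w.mean(axis=0), w.std(axis=0)])

with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
    # Record five examples each of two gestures, then classify live.
    X = [read_window(port) for _ in range(10)]
    y = [0] * 5 + [1] * 5  # e.g. label 0 = "shake", label 1 = "circle"
    clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
    while True:
        print("gesture:", clf.predict([read_window(port)])[0])
```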


