
Planet Arduino

Archive for the ‘TinyML’ Category

One of the main difficulties that people encounter when trying to build edge ML models is gathering a large, yet simultaneously diverse, dataset. Audio models normally require setting up a microphone, capturing long sequences of sounds, and then manually removing bad data from the resulting files. Shakhizat Nurgaliyev’s project, however, eliminates the need for this arduous process by using generative models to produce the dataset artificially.

To go from three audio classes (speech, music, and background noise) to a complete dataset, Nurgaliyev wrote a simple prompt for ChatGPT that gave directions for creating a total of 300 detailed audio descriptions. After this, he grabbed an NVIDIA Jetson AGX Orin Developer Kit and loaded Meta’s generative AudioCraft model, which allowed him to pass in the previously made audio prompts and receive sound snippets in return.

The final steps involved creating an Edge Impulse audio classification project, uploading the generated samples, and designing an Impulse that leveraged the MFE audio block and a Keras classifier model. Once an Arduino library had been built, Nurgaliyev loaded it, along with a simple sketch, onto an Arduino GIGA R1 WiFi board that continually listened for new audio data, performed classification, and displayed the label on the GIGA R1’s Display Shield screen.
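
For a sense of what the on-device side looks like, here is a minimal sketch built around an Edge Impulse Arduino library of the kind exported from such a project. The header name, the fill_audio_buffer() helper standing in for microphone capture, and the Serial output in place of the Display Shield screen are all placeholders rather than Nurgaliyev’s exact code.

```cpp
// Classify one window of audio with an exported Edge Impulse Arduino library.
// The header name below is hypothetical; use the library generated for your project.
#include <synthetic_audio_inferencing.h>

static int16_t audio_buf[EI_CLASSIFIER_RAW_SAMPLE_COUNT];

// Placeholder capture: fills the buffer with silence. A real sketch would
// read microphone samples (e.g. via the PDM library) into audio_buf instead.
static bool fill_audio_buffer(int16_t *buf, size_t len) {
  memset(buf, 0, len * sizeof(int16_t));
  return true;
}

// Callback the classifier uses to read raw samples as floats
static int get_audio_data(size_t offset, size_t length, float *out_ptr) {
  numpy::int16_to_float(&audio_buf[offset], out_ptr, length);
  return 0;
}

void setup() {
  Serial.begin(115200);
}

void loop() {
  if (!fill_audio_buffer(audio_buf, EI_CLASSIFIER_RAW_SAMPLE_COUNT)) return;

  signal_t signal;
  signal.total_length = EI_CLASSIFIER_RAW_SAMPLE_COUNT;
  signal.get_data = &get_audio_data;

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return;

  // Report the highest-scoring label (speech, music, or background noise)
  size_t best = 0;
  for (size_t i = 1; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
    if (result.classification[i].value > result.classification[best].value) best = i;
  }
  Serial.println(result.classification[best].label);
  delay(250);
}
```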

To read more about this project, you can visit its write-up here on Hackster.io.

The post Classifying audio on the GIGA R1 WiFi from purely synthetic data appeared first on Arduino Blog.

As Jallson Suryo discusses in his project, adding voice controls to our appliances typically involves an internet connection and a smart assistant device such as Amazon Alexa or Google Assistant. This means extra latency, security concerns, and increased expenses due to the additional hardware and bandwidth requirements. This is why he created a prototype based on an Arduino Nicla Voice that can switch power to up to four outlets using just a voice command.

Suryo gathered a dataset by repeating the words “one,” “two,” “three,” “four,” “on,” and “off” into his phone and then uploaded the recordings to an Edge Impulse project. From here, he split the files into individual words before rebalancing his dataset to ensure each label was equally represented. The classifier model was trained for keyword spotting using settings optimized for the Syntiant NDP120, yielding an accuracy of around 80%.

Apart from the Nicla Voice, Suryo incorporated a Pro Micro board to handle switching the bank of relays on or off. When the Nicla Voice detects the relay number, such as “one” or “three”, it then waits until the follow-up “on” or “off” keyword is detected. With both the number and state now known, it sends an I2C transmission to the accompanying Pro Micro which decodes the command and switches the correct relay.
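
The receiving side of that I2C link is simple enough to sketch out. The following is a rough illustration of a Pro Micro acting as an I2C peripheral that expects two bytes, a relay number (1 to 4) and a state, and toggles the matching output; the address, pin assignments, and two-byte format are assumptions rather than Suryo’s exact protocol.

```cpp
// Pro Micro side: receive two bytes over I2C -- a relay number (1-4) and a
// state (0 = off, 1 = on) -- and drive the matching relay pin.
#include <Wire.h>

const uint8_t RELAY_PINS[4] = {4, 5, 6, 7};  // assumed wiring

void receiveCommand(int count) {
  if (count < 2) return;
  uint8_t relay = Wire.read();  // 1..4
  uint8_t state = Wire.read();  // 0 or 1
  if (relay >= 1 && relay <= 4) {
    digitalWrite(RELAY_PINS[relay - 1], state ? HIGH : LOW);
  }
}

void setup() {
  for (uint8_t i = 0; i < 4; i++) {
    pinMode(RELAY_PINS[i], OUTPUT);
    digitalWrite(RELAY_PINS[i], LOW);
  }
  Wire.begin(0x08);              // join the bus as a peripheral at address 0x08
  Wire.onReceive(receiveCommand);
}

void loop() {}
```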

To see more about this voice-controlled power strip, be sure to check out Suryo’s Edge Impulse tutorial.

The post Controlling a power strip with a keyword spotting model and the Nicla Voice appeared first on Arduino Blog.

In July 2023, Samuel Alexander set out to reduce the amount of trash that gets thrown out due to poor sorting practices at the recycling bin. His original design relied on an Arduino Nano 33 BLE Sense to capture audio through its onboard microphone and then perform edge audio classification with an embedded ML model to automatically separate materials based on the sound they make when tossed inside. But in this latest iteration, Alexander added several large improvements to help the concept scale much further.

In perhaps the most substantial modification, the bin now uses an Arduino Pro Portenta C33 in combination with an external Nicla Voice or Nano 33 BLE Sense to not only perform inferences to sort trash, but also send real-time data to a cloud endpoint. By utilizing the Arduino Cloud through the Portenta C33, each AI-enabled recycling bin can now report its current capacity for each type of waste and send an alert when collection is required.
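
As a rough picture of the cloud side, a sketch along these lines could publish fill levels from the Portenta C33 using the ArduinoIoTCloud library; the variable names, thresholds, and credentials are illustrative, and the real property setup is normally generated by the Arduino Cloud as thingProperties.h.

```cpp
// Illustrative sketch: report bin fill levels and a collection alert to the
// Arduino Cloud from a Portenta C33. Credentials and IDs are placeholders.
#include <ArduinoIoTCloud.h>
#include <Arduino_ConnectionHandler.h>

const char SSID[] = "your-network";      // placeholder credentials
const char PASS[] = "your-password";

float glassLevel;     // % full, updated elsewhere by the sorting mechanism
float plasticLevel;
bool  needsCollection;

WiFiConnectionHandler ArduinoIoTPreferredConnection(SSID, PASS);

void setup() {
  Serial.begin(115200);
  ArduinoCloud.setThingId("YOUR-THING-ID");            // placeholder Thing ID
  ArduinoCloud.addProperty(glassLevel, READ, ON_CHANGE, NULL);
  ArduinoCloud.addProperty(plasticLevel, READ, ON_CHANGE, NULL);
  ArduinoCloud.addProperty(needsCollection, READ, ON_CHANGE, NULL);
  ArduinoCloud.begin(ArduinoIoTPreferredConnection);
}

void loop() {
  ArduinoCloud.update();
  // Flag the bin for collection once either compartment is nearly full
  needsCollection = (glassLevel > 90.0) || (plasticLevel > 90.0);
}
```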

Thanks to these upgrades, Alexander was able to submit his prototype for consideration in the 2023 Hackaday Prize competition where he was awarded the Protolabs manufacturing grant. To see more about this innovative project, you can check out its write-up here and watch Alexander’s detailed explanation video below.

The post Improve recycling with the Arduino Pro Portenta C33 and AI audio classification appeared first on Arduino Blog.

We recently showed you Becky Stern’s recreation of the “computer book” carried by Penny in the Inspector Gadget cartoon, but Stern didn’t stop there. She also built a replica of Penny’s most iconic gadget: her watch. Penny was a trendsetter who rocked a smartwatch decades before the Apple Watch hit the market. Stern’s replica looks just like the cartoon version and even has some of the same features.

The centerpiece of this project is an Arduino Nicla Voice board. The Arduino team designed that board specifically for speech recognition on the edge, which made it perfect for recognizing Penny’s signature “come in, Brain!” voice command. Stern used Edge Impulse to train an AI model to recognize that phrase as a wake word. When the Nicla Voice board hears it, it changes the image on the smartwatch screen to a new picture of Brain the dog.
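
While Stern’s exact firmware isn’t reproduced here, the flow on the Nicla Voice looks roughly like the sketch below, loosely modeled on the board’s NDP library examples: load the Syntiant firmware and the Edge Impulse model, then react in a match callback when the wake word is heard. The package file names, the callback details, and the placeholder show_brain_image() routine are all assumptions.

```cpp
// Loosely modeled on Nicla Voice NDP examples: run the keyword model and
// react when the "come in, Brain!" wake word is matched.
#include "NDP.h"

// Placeholder: in the real build this would push a new bitmap of Brain
// to the Adafruit 1.69" IPS TFT screen.
void show_brain_image() {
  Serial.println("Wake word heard -- updating watch face");
}

void onMatch(char *label) {
  show_brain_image();
}

void setup() {
  Serial.begin(115200);
  NDP.onMatch(onMatch);
  NDP.begin("mcu_fw_120_v91.synpkg");   // assumed firmware package names
  NDP.load("dsp_firmware_v91.synpkg");
  NDP.load("ei_model.synpkg");          // the Edge Impulse wake word model
  NDP.turnOnMicrophone();
  NDP.interrupts();
}

void loop() {
  delay(100);  // matches arrive via the NDP callback
}
```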

The Nicla Voice board and an Adafruit 1.69″ color IPS TFT screen fit inside a 3D-printed enclosure modeled on Penny’s watch from the cartoon. The enclosure even has a clever 3D-printed watch band with links connected by lengths of fresh filament. Power comes from a small lithium battery that also fits inside the enclosure.

This watch and Stern’s computer book will both be part of an Inspector Gadget display put on by Digi-Key at Maker Faire Rome, so you can see it in person if you attend.

The post Building the OG smartwatch from Inspector Gadget appeared first on Arduino Blog.

When dealing with indoor climate controls, there are several variables to consider, such as the outside weather, people’s tolerance to hot or cold temperatures, and the desired level of energy savings. Windows can make this extra challenging, as they let in large amounts of light/heat and can create poorly insulated regions, which is why Jallson Suryo developed a prototype that aims to balance these needs automatically through edge AI techniques.

Suryo’s smart building ventilation system utilizes two separate boards, with an Arduino Nano 33 BLE Sense handling environmental sensor fusion and a Nicla Voice listening for certain ambient sounds. Rain and thunder noises were uploaded from an existing dataset, split and labeled accordingly, and then used to train a Syntiant audio classification model for the Nicla Voice’s NDP120 processor. Meanwhile, weather and ambient light data was gathered using the Nano’s onboard sensors and combined into time-series samples with labels for sunny/cloudy, humid, comfortable, and dry conditions.
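
On the Nano 33 BLE Sense side, gathering those multi-sensor samples looks roughly like the sketch below, which reads temperature, humidity, pressure, and ambient light with the board’s standard sensor libraries and prints one row of the time series per interval. The sampling rate and output format are illustrative rather than taken from Suryo’s project.

```cpp
// One row of the environmental time series per interval: temperature,
// humidity, pressure, and ambient light from the Nano 33 BLE Sense.
#include <Arduino_HTS221.h>    // temperature + humidity
#include <Arduino_LPS22HB.h>   // barometric pressure
#include <Arduino_APDS9960.h>  // ambient light / color

void setup() {
  Serial.begin(115200);
  while (!Serial);
  if (!HTS.begin() || !BARO.begin() || !APDS.begin()) {
    Serial.println("Sensor init failed");
    while (1);
  }
}

void loop() {
  int r = 0, g = 0, b = 0, ambient = 0;
  if (APDS.colorAvailable()) APDS.readColor(r, g, b, ambient);

  Serial.print(HTS.readTemperature()); Serial.print(',');
  Serial.print(HTS.readHumidity());    Serial.print(',');
  Serial.print(BARO.readPressure());   Serial.print(',');
  Serial.println(ambient);

  delay(100);  // 10 Hz, an illustrative sampling interval
}
```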

After deploying the boards’ respective classification models, Suryo added some additional code that has the Nicla Voice write I2C data to the Nano indicating whether rain or thunderstorm sounds are present. If they are, the Nano can automatically close the window via servo motors, while other environmental factors set the position of the blinds. With this multi-sensor technique, a higher level of accuracy can be achieved for more precise control over a building’s windows and blinds, which in turn helps lower HVAC costs.
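
A bare-bones version of that actuation logic might look like the following, with the Nano joining the I2C bus as a peripheral, reading a single rain/thunder flag byte from the Nicla Voice, and driving a window servo. The address, servo pin, and open/closed angles are assumptions for illustration.

```cpp
// Assumed protocol: the Nicla Voice writes one byte over I2C -- 1 when
// rain/thunder is detected, 0 otherwise -- and a servo closes the window.
#include <Wire.h>
#include <Servo.h>

Servo windowServo;
volatile uint8_t rainDetected = 0;

void onRainFlag(int count) {
  if (count >= 1) rainDetected = Wire.read();
}

void setup() {
  windowServo.attach(9);       // assumed servo pin
  Wire.begin(0x10);            // assumed peripheral address for the Nano
  Wire.onReceive(onRainFlag);
}

void loop() {
  // Close the window when rain or thunder is heard, otherwise leave it open;
  // blind position would be set separately from the light/temperature model.
  windowServo.write(rainDetected ? 0 : 90);  // assumed closed/open angles
  delay(200);
}
```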

More information about Suryo’s project can be found here on its Edge Impulse docs page.

The post Improving comfort and energy efficiency in buildings with automated windows and blinds appeared first on Arduino Blog.

When playing a short game of basketball, few people enjoy having to consciously track their number of successful throws. Yet when it comes to automation, nearly all systems rely on infrared or visual proximity detection to determine when a shot has gone through the basket versus missed. This is what inspired one team from the University of Ljubljana to create a small edge ML-powered device that can be suspended from the net with a pair of zip ties for real-time scorekeeping.

After collecting a total of 137 accelerometer samples via an Arduino Nano 33 BLE Sense and labeling them as either a miss, a score, or nothing within the Edge Impulse Studio, the team trained a classification model and reached an accuracy of 84.6% on real-world test data. Getting the classification results from the device to somewhere readable is handled by the Nano’s onboard BLE server. It provides two services, with the first for reporting the current battery level and the second for sending score data.
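
Those two services map nicely onto the ArduinoBLE library, and a stripped-down version of the BLE setup might look like this; the custom UUID and characteristic layout are placeholders rather than the team’s actual definitions.

```cpp
// Minimal BLE setup: a standard battery service (0x180F) plus a custom
// score service that the phone app can subscribe to.
#include <ArduinoBLE.h>

BLEService batteryService("180F");
BLEUnsignedCharCharacteristic batteryLevel("2A19", BLERead | BLENotify);

BLEService scoreService("19B10000-E8F2-537E-4F6C-D104768A1214");   // made-up UUID
BLEUnsignedCharCharacteristic scoreChar("19B10001-E8F2-537E-4F6C-D104768A1214",
                                        BLERead | BLENotify);

void setup() {
  BLE.begin();
  BLE.setLocalName("BasketballScorer");

  batteryService.addCharacteristic(batteryLevel);
  scoreService.addCharacteristic(scoreChar);
  BLE.addService(batteryService);
  BLE.addService(scoreService);

  batteryLevel.writeValue(100);
  scoreChar.writeValue(0);
  BLE.advertise();
}

void loop() {
  BLE.poll();
  // After each inference window, push the running score so the phone app
  // can pick it up (the accelerometer classification code is omitted here).
  // scoreChar.writeValue(currentScore);
}
```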

Once the firmware had been deployed, the last step involved building a mobile application to view the relevant information. The app allows users to connect to the basketball scoring device, check if any new data has been received, and then parse/display the new values onscreen.

To read more about this project, you can head over to its write-up on Hackster.io.

The post Nothin’ but (neural) net: Track your basketball score with a Nano 33 BLE Sense appeared first on Arduino Blog.

Your dog has nerve endings covering its entire body, giving it a sense of touch. It can feel the ground through its paws and use that information to gain better traction or detect harmful terrain. For robots to perform as well as their biological counterparts, they need a similar level of sensory input. In pursuit of that goal, the Autonomous Robots Lab designed TRACEPaw for legged robots.

TRACEPaw (Terrain Recognition And Contact force Estimation Paw) is a sensorized foot for robot dogs that includes all of the hardware necessary to calculate force and classify terrain. Most systems like this use direct sensor readings, such as those from force sensors. But TRACEPaw is unique in that it uses indirect data to infer this information. The actual foot is a deformable silicone hemisphere. A camera looks at that and calculates the force based on the deformation it sees. In a similar way, a microphone listens to the sound of contact and uses that to judge the type of terrain, like gravel or dirt.

To keep TRACEPaw self-contained, Autonomous Robots Lab chose to utilize an Arduino Nicla Vision board. That has an integrated camera, microphone, six-axis motion sensor, and enough processing power for onboard machine learning. Using OpenMV and TensorFlow Lite, TRACEPaw can estimate the force on the silicone pad based on how much it deforms during a step. It can also analyze the audio signal from the microphone to guess the terrain, as the silicone pad sounds different when touching asphalt than it does when touching loose soil.

More details on the project are available on GitHub.

The post Helping robot dogs feel through their paws appeared first on Arduino Blog.

The traditional method for changing a diaper starts when someone smells or feels that the diaper has been soiled, and while it isn’t the most pleasant process, removing the soiled diaper as soon as possible is important for avoiding rashes and infections. Justin Lutz has created an intelligent solution to this situation by designing a small device that alerts people over Bluetooth® when the diaper is ready to be changed.

Because a dirty diaper gives off volatile organic compounds (VOCs) and small particulates, Lutz realized he could use the Arduino Nicla Sense ME’s built-in BME688 sensor, which can measure VOCs, temperature/humidity, and air quality. After gathering 29 minutes of gas and air quality measurements in the Edge Impulse Studio for both clean and soiled diapers, he trained a classification model for 300 epochs, resulting in a model with 95% accuracy.
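
To give a flavor of the data-collection side, a sketch along these lines reads the BME688-derived air quality and gas values through the Arduino_BHY2 library on the Nicla Sense ME; the sample rate and printed fields are illustrative, and the forwarding used for actual Edge Impulse ingestion is omitted.

```cpp
// Sample the BME688-derived features used to tell a clean diaper from a
// soiled one: BSEC air quality (IAQ, VOC equivalent) and raw gas resistance.
#include "Arduino_BHY2.h"

SensorBSEC airQuality(SENSOR_ID_BSEC);  // IAQ, bVOCeq, CO2 equivalent
Sensor     gas(SENSOR_ID_GAS);          // raw gas resistance

void setup() {
  Serial.begin(115200);
  BHY2.begin();
  airQuality.begin();
  gas.begin();
}

void loop() {
  BHY2.update();
  Serial.print("IAQ: ");      Serial.print(airQuality.iaq());
  Serial.print("  bVOCeq: "); Serial.print(airQuality.b_voc_eq());
  Serial.print("  gas: ");    Serial.println(gas.value());
  delay(1000);  // illustrative 1 Hz sample rate
}
```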

Based on his prior experience with the Nicla Sense ME’s BLE capabilities and MIT App Inventor, Lutz used the two to devise a small gadget that wirelessly connects to a phone app so it can send notifications when it’s time for a new diaper.

To read more about this project, you can check out Lutz’s write-up here on the Edge Impulse docs page.

The post This smart diaper knows when it is ready to be changed appeared first on Arduino Blog.

With the planet ever warming due to climate change and prolonged droughts greatly increasing the chance of wildfires, being able to quickly detect when a fire has broken out is vital for responding while it’s still at a containable stage. But one major hurdle to collecting machine learning datasets on these types of events is that they can be quite sporadic. In his proof-of-concept system, engineer Shakhizat Nurgaliyev shows how he leveraged NVIDIA Omniverse Replicator to create an entirely synthetic dataset and then deploy a model trained on that data to an Arduino Nicla Vision board.

The project started out as a simple fire animation inside of Omniverse, which was soon followed by a Python script that produces a pair of virtual cameras and randomizes the ground plane before capturing images. Once enough images had been created, Nurgaliyev utilized the zero-shot object detection application Grounding DINO to automatically draw bounding boxes around the virtual flames. Lastly, each image was brought into an Edge Impulse project and used to develop a FOMO-based object detection model.

By taking this approach, the model achieved an F1 score of nearly 87% while also only needing a max of 239KB of RAM and a mere 56KB of flash storage. Once deployed as an OpenMV library, Nurgaliyev shows in his video below how the MicroPython sketch running on a Nicla Vision within the OpenMV IDE detects and bounds flames. More information about this system can be found here on Hackster.io.

The post This Nicla Vision-based fire detector was trained entirely on synthetic data appeared first on Arduino Blog.
