
Planet Arduino

Archive for the ‘machine learning’ Category

The entire tech industry is desperate for a practical wearable HMI (Human Machine Interface) right now. The most newsworthy devices at CES this year were the Rabbit R1 and the Humane AI Pin, both of which are attempts to streamline wearable interfaces with and for AI. Both have numerous drawbacks, as do most other approaches. What the world really needs is an affordable, practical, and unobtrusive solution, and North Carolina State University researchers may have found the answer in machine learning-optimized fabric buttons.

It is, of course, possible to adhere a conventional button to fabric. But by making the button itself from fabric, these researchers have improved comfort, lowered costs, and introduced a lot more flexibility — both literally and metaphorically. These are triboelectric touch sensors, which detect the amount of force exerted on them by measuring the electrical charge generated as two dissimilar material layers press together.

But there is a problem with this approach: the measured values vary dramatically based on usage, environmental conditions, manufacturing tolerances, and physical wear. The fabric button on one shirt sleeve may produce completely different readings than the one on another. If this were a simple binary button, that wouldn't be much of an issue. But the whole point of this sensor type is to provide a one-dimensional scale corresponding to the pressure exerted, so consistency is important.

Because achieving physical consistency isn’t practical, the team turned to machine learning. A TensorFlow Lite for Microcontrollers machine learning model, running on an Arduino Nano ESP32 board, interprets the readings from the sensors. It is then able to differentiate between several interactions: single clicks, double clicks, triple clicks, single slides, double slides, and long presses.
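The post doesn't include source code, but the usual TensorFlow Lite for Microcontrollers flow on an Arduino looks roughly like the sketch below. The model array (g_model_data), arena size, input layout, and the readPressureSample() helper are all assumptions for illustration, not details from the paper.

```cpp
#include <TensorFlowLite.h>
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "model_data.h"  // hypothetical: trained model exported as a C byte array

// The six interaction classes described in the post
const char* kLabels[] = {"single_click", "double_click", "triple_click",
                         "single_slide", "double_slide", "long_press"};

constexpr int kTensorArenaSize = 8 * 1024;  // size is a guess; tune to the model
alignas(16) static uint8_t tensor_arena[kTensorArenaSize];

tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;

// Hypothetical helper: one normalized sample from the triboelectric button
float readPressureSample() {
  return analogRead(A0) / 4095.0f;  // 12-bit ADC reading on the Nano ESP32
}

void setup() {
  Serial.begin(115200);

  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the ops the (assumed) model architecture needs
  static tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddFullyConnected();
  resolver.AddRelu();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;
  interpreter->AllocateTensors();
  input = interpreter->input(0);
}

void loop() {
  // Fill the input tensor with a window of pressure readings
  for (size_t i = 0; i < input->bytes / sizeof(float); i++) {
    input->data.f[i] = readPressureSample();
  }

  if (interpreter->Invoke() == kTfLiteOk) {
    TfLiteTensor* output = interpreter->output(0);
    int best = 0;
    for (int i = 1; i < 6; i++) {
      if (output->data.f[i] > output->data.f[best]) best = i;
    }
    Serial.println(kLabels[best]);
  }
  delay(10);
}
```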

Even if the exact readings change between sensors (or the same sensor over time), the patterns are still recognizable to the machine learning model. This makes it practical to integrate fabric buttons into inexpensive garments, letting users interact with their devices through those interfaces.

The researchers demonstrated the concept with mobile apps and even a game. More details can be found in their paper here.

Image credit: Y. Chen et al.

The post Machine learning makes fabric buttons practical appeared first on Arduino Blog.

Soon after a police station opened near his house, Christopher Cooper noticed a substantial increase in emergency vehicle traffic and its associated noise, even though local officials had promised that the station would not be disruptive. But rather than write down every occurrence by hand to track the volume of disturbances, he came up with a connected audio-classifying device that automatically notes the time and type of each sound for later analysis.

Cooper categorized each sound by leveraging Edge Impulse and an Arduino Nano 33 BLE Sense. After training a model and deploying it within a sketch, the Nano continually listens for new noises through its onboard microphone, runs an inference, and then outputs the label and confidence over UART serial. Reading this stream of data is an ESP32 Dev Kit, which displays every entry in a list on a simple GUI. The screen allows users to select rows, view more detailed information, and even modify a category if needed.
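The write-up doesn't spell out the serial protocol, but a plausible arrangement has the Nano print one "label,confidence" line per inference, which the ESP32 parses before updating its list. The pin assignments, message format, and confidence threshold below are assumptions:

```cpp
// ESP32 side: read "label,confidence" lines arriving from the Nano over UART.
void setup() {
  Serial.begin(115200);                       // USB console
  Serial2.begin(115200, SERIAL_8N1, 16, 17);  // RX=16, TX=17: hypothetical wiring
}

void loop() {
  if (Serial2.available()) {
    String line = Serial2.readStringUntil('\n');
    int comma = line.indexOf(',');
    if (comma > 0) {
      String label = line.substring(0, comma);
      float confidence = line.substring(comma + 1).toFloat();
      if (confidence > 0.6f) {  // arbitrary cutoff before logging an event
        Serial.printf("Logged: %s (%.2f)\n", label.c_str(), confidence);
        // ...append to the on-screen list with a timestamp here...
      }
    }
  }
}
```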

Going beyond the hardware aspect, Cooper’s project also includes a web server running on the ESP32 that can show the logs within a browser, and users can even connect an SD card to have automated file entries created. For more information about this project, you can read Cooper’s write-up here on Hackster.io.
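As a sketch of the serving side: the ESP32 core's stock WebServer library is enough to publish the log at the root URL. The Wi-Fi credentials and log contents here are placeholders, not details from Cooper's code.

```cpp
#include <WiFi.h>
#include <WebServer.h>

WebServer server(80);
String logHtml = "<h1>Sound log</h1><ul></ul>";  // appended to as events arrive

void setup() {
  WiFi.begin("your-ssid", "your-password");  // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) {
    delay(250);
  }
  // Serve the accumulated log at the root URL
  server.on("/", []() { server.send(200, "text/html", logHtml); });
  server.begin();
}

void loop() {
  server.handleClient();
}
```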

The post Classify nearby annoyances with this sound monitoring device appeared first on Arduino Blog.

SOPHGO SG2000 SG2002 block diagram

SOPHGO SG2000 and SG2002 are new SoCs featuring a bunch of RISC-V and Arm cores capable of running Linux, Android, and FreeRTOS simultaneously, and to maximize the fun an 8051 MCU core is also in the mix along with a 0.5 TOPS (SG2000) or 1 TOPS (SG2002) AI accelerator. More specifically, we have one 1GHz C906 64-bit core capable of running Linux, one 1GHz Arm Cortex-A53 for Linux or Android, another 700 MHz C906 RISC-V core for FreeRTOS, and a 300 MHz 8051 core for real-time I/Os, as well as 256MB or 512MB SiP DRAM. The chip is designed for AIoT applications such as smart IP cameras, facial recognition, and smart home devices.

SOPHGO SG2000/SG2002 specifications:

  • CPU cores:
      • 1x C906 64-bit RISC-V core @ 1GHz
      • 1x C906 64-bit RISC-V core @ 700MHz
      • 1x Arm Cortex-A53 core @ 1GHz
  • MCU – 8051 8-bit microcontroller core @ 25 to 300 MHz with 6KB [...]

The post SOPHGO SG2000/SG2002 AI SoC features RISC-V, Arm, and 8051 cores, supports Android, Linux, and FreeRTOS appeared first on CNX Software - Embedded Systems News.

One of the main difficulties that people encounter when trying to build their edge ML models is gathering a large, yet simultaneously diverse, dataset. Audio models normally require setting up a microphone, capturing long sequences of sounds, and then manually removing bad data from the resulting files. Shakhizat Nurgaliyev's project, however, eliminates the need for this arduous process by taking advantage of generative models to produce the dataset artificially.

To go from three audio classes (speech, music, and background noise) to a complete dataset, Nurgaliyev wrote a simple prompt for ChatGPT that gave directions for creating a total of 300 detailed audio descriptions. After this, he grabbed an NVIDIA Jetson AGX Orin Developer Kit and loaded Meta's generative AudioCraft model, which allowed him to pass in the previously made audio prompts and receive sound snippets in return.

The final steps involved creating an Edge Impulse audio classification project, uploading the generated samples, and designing an Impulse that leveraged the MFE audio block and a Keras classifier model. Once an Arduino library had been built, Nurgaliyev loaded it, along with a simple sketch, onto an Arduino GIGA R1 WiFi board that continually listened for new audio data, performed classification, and displayed the label on the GIGA R1’s Display Shield screen.
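Exported Edge Impulse Arduino libraries share a common calling pattern, so the GIGA R1 sketch presumably looks something like the snippet below. The library name is hypothetical, and filling the sample buffer from the microphone is omitted:

```cpp
#include <audio_classifier_inferencing.h>  // hypothetical name of the exported library

// One window of audio samples, filled from the GIGA's microphone elsewhere
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

void classifyWindow() {
  // Wrap the raw buffer in a signal_t that the classifier streams from
  signal_t signal;
  numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
      Serial.print(result.classification[i].label);
      Serial.print(": ");
      Serial.println(result.classification[i].value, 3);
    }
  }
}
```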

To read more about this project, you can visit its write-up here on Hackster.io.

The post Classifying audio on the GIGA R1 WiFi from purely synthetic data appeared first on Arduino Blog.

When dealing with indoor climate controls, there are several variables to consider, such as the outside weather, people’s tolerance to hot or cold temperatures, and the desired level of energy savings. Windows can make this extra challenging, as they let in large amounts of light/heat and can create poorly insulated regions, which is why Jallson Suryo developed a prototype that aims to balance these needs automatically through edge AI techniques.

Suryo’s smart building ventilation system utilizes two separate boards, with an Arduino Nano 33 BLE Sense handling environmental sensor fusion and a Nicla Voice listening for certain ambient sounds. Rain and thunder noises were uploaded from an existing dataset, split and labeled accordingly, and then used to train a Syntiant audio classification model for the Nicla Voice’s NDP120 processor. Meanwhile, weather and ambient light data was gathered using the Nano’s onboard sensors and combined into time-series samples with labels for sunny/cloudy, humid, comfortable, and dry conditions.

After deploying the boards' respective classification models, Suryo added some additional code that sends I2C data from the Nicla Voice to the Nano indicating whether rain or thunderstorm sounds are present. If they are, the Nano can automatically close the window via servo motors, while other environmental factors set the position of the blinds. With this multi-sensor technique, the system can control a building's windows and blinds with greater accuracy and precision, and thus help lower HVAC costs.
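The exact I2C protocol isn't documented, but one plausible sketch for the Nano side joins the bus as a peripheral, receives a one-byte rain flag from the Nicla Voice, and drives the window servo accordingly. The address, byte format, pin, and servo angles are all assumptions:

```cpp
#include <Wire.h>
#include <Servo.h>

// Hypothetical protocol: the Nicla Voice writes a single byte,
// 1 = rain/thunder heard, 0 = quiet.
const uint8_t kNanoAddress = 0x08;  // assumed I2C address for the Nano
volatile byte rainDetected = 0;

Servo windowServo;

void receiveEvent(int numBytes) {
  while (Wire.available()) {
    rainDetected = Wire.read();  // keep only the most recent flag
  }
}

void setup() {
  Wire.begin(kNanoAddress);  // join the bus as a peripheral
  Wire.onReceive(receiveEvent);
  windowServo.attach(9);     // servo pin is a placeholder
}

void loop() {
  // Close the window when rain or thunder is detected; angles are placeholders.
  windowServo.write(rainDetected ? 0 : 90);
  delay(100);
}
```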

More information about Suryo's project can be found here on its Edge Impulse docs page.

The post Improving comfort and energy efficiency in buildings with automated windows and blinds appeared first on Arduino Blog.

The rapid rise of edge AI capabilities on embedded targets has proven that relatively low-resource microcontrollers are capable of some incredible things. And with the recent release of the Arduino UNO R4 with its Renesas RA4M1 processor, the ceiling has gotten even higher as YouTuber and maker Nikodem Bartnik has demonstrated with his lidar-equipped mobile robot.

Bartnik’s project started with a simple question: is it possible to teach a basic robot how to navigate around obstacles using only lidar, instead of the more resource-intensive computer vision techniques employed by most other platforms? The chassis and hardware, including two DC motors, an UNO R4 Minima, a Bluetooth® module, and an SD card, were constructed according to Open Robotic Platform (ORP) rules so that others can easily replicate and extend the design. After driving through a series of courses to collect a point cloud from the spinning lidar sensor, Bartnik imported the data and performed a few transformations to greatly shrink the classification model.

Once trained, the model was exported with help from the micromlgen Python package and loaded onto the UNO R4. The setup enables the incoming lidar data to be classified into the direction in which the robot should travel, and according to Bartnik’s experiments, this approach worked surprisingly well. Initially, there were a few issues when navigating corners and traveling through a figure-eight track, but additional training data solved them and allowed the vehicle to complete a completely novel course at maximum speed.
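micromlgen converts a trained scikit-learn classifier into a dependency-free C++ class, so using it on the UNO R4 typically reduces to a header include and a predict() call. The header name, class type, feature layout, and helper functions below are placeholders:

```cpp
#include "model.h"  // C++ classifier generated by micromlgen; name is hypothetical

// The generated class lives in the Eloquent::ML::Port namespace; its name
// depends on the scikit-learn model that was exported (a random forest here).
Eloquent::ML::Port::RandomForest classifier;

// Hypothetical: bin one lidar sweep into sector distances
void readLidarFeatures(float* out) {
  for (int i = 0; i < 8; i++) out[i] = 0.0f;  // replace with real sweep data
}

// Hypothetical motor helper: 0 = left, 1 = straight, 2 = right
void drive(int direction) { /* set motor speeds here */ }

void setup() {}

void loop() {
  float features[8];  // assumed number of lidar sectors per window
  readLidarFeatures(features);
  int direction = classifier.predict(features);  // index of the winning class
  drive(direction);
}
```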

The post Teaching an Arduino UNO R4-powered robot to navigate obstacles autonomously appeared first on Arduino Blog.

When playing a short game of basketball, few people enjoy having to consciously track their number of successful throws. Yet when it comes to automation, nearly all systems rely on infrared or visual proximity detection as a way to determine when a shot has gone through the basket versus missed. This is what inspired one team from the University of Ljubljana to create a small edge ML-powered device that can be suspended from the net with a pair of zip ties for real-time scorekeeping.

After collecting a total of 137 accelerometer samples via an Arduino Nano 33 BLE Sense and labeling them as either a miss, a score, or nothing within the Edge Impulse Studio, the team trained a classification model that reached an accuracy of 84.6% on real-world test data. Getting the classification results from the device to somewhere readable is handled by the Nano’s onboard BLE server, which provides two services: the first reports the current battery level and the second sends score data.
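With the ArduinoBLE library, exposing those two services is a few declarations. A minimal sketch might look like this, using the standard battery service UUID (0x180F) and made-up UUIDs for the score service:

```cpp
#include <ArduinoBLE.h>

// Standard Bluetooth battery service plus a custom score service.
// The custom 128-bit UUIDs are made up for this sketch.
BLEService batteryService("180F");
BLEUnsignedCharCharacteristic batteryLevel("2A19", BLERead | BLENotify);

BLEService scoreService("19B10000-E8F2-537E-4F6C-D104768A1214");
BLEUnsignedCharCharacteristic scoreChar("19B10001-E8F2-537E-4F6C-D104768A1214",
                                        BLERead | BLENotify);

void setup() {
  BLE.begin();
  BLE.setLocalName("HoopCounter");  // placeholder device name

  batteryService.addCharacteristic(batteryLevel);
  scoreService.addCharacteristic(scoreChar);
  BLE.addService(batteryService);
  BLE.addService(scoreService);

  batteryLevel.writeValue(100);  // percent
  scoreChar.writeValue(0);
  BLE.advertise();
}

void loop() {
  BLE.poll();
  // After each shot is classified, notify subscribers, e.g.:
  // scoreChar.writeValue(++score);
}
```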

Once the firmware had been deployed, the last step involved building a mobile application to view the relevant information. The app allows users to connect to the basketball scoring device, check if any new data has been received, and then parse/display the new values onscreen.

To read more about this project, you can head over to its write-up on Hackster.io.

The post Nothin’ but (neural) net: Track your basketball score with a Nano 33 BLE Sense appeared first on Arduino Blog.

Your dog has nerve endings covering its entire body, giving it a sense of touch. It can feel the ground through its paws and use that information to gain better traction or detect harmful terrain. For robots to perform as well as their biological counterparts, they need a similar level of sensory input. In pursuit of that goal, the Autonomous Robots Lab designed TRACEPaw for legged robots.

TRACEPaw (Terrain Recognition And Contact force Estimation Paw) is a sensorized foot for robot dogs that includes all of the hardware necessary to calculate force and classify terrain. Most systems like this use direct sensor readings, such as those from force sensors. But TRACEPaw is unique in that it uses indirect data to infer this information. The actual foot is a deformable silicone hemisphere. A camera looks at that and calculates the force based on the deformation it sees. In a similar way, a microphone listens to the sound of contact and uses that to judge the type of terrain, like gravel or dirt.

To keep TRACEPaw self-contained, Autonomous Robots Lab chose to utilize an Arduino Nicla Vision board. That has an integrated camera, microphone, six-axis motion sensor, and enough processing power for onboard machine learning. Using OpenMV and TensorFlow Lite, TRACEPaw can estimate the force on the silicone pad based on how much it deforms during a step. It can also analyze the audio signal from the microphone to guess the terrain, as the silicone pad sounds different when touching asphalt than it does when touching loose soil.

More details on the project are available on GitHub.

The post Helping robot dogs feel through their paws appeared first on Arduino Blog.

The traditional method for changing a diaper starts when someone smells or feels that the diaper has been soiled, and while it isn’t the greatest process, removing the soiled diaper as soon as possible is important for avoiding rashes and infections. Justin Lutz has created an intelligent solution by designing a small device that alerts people over Bluetooth® when the diaper is ready to be changed.

Because a dirty diaper gives off volatile organic compounds (VOCs) and small particulates, Lutz realized he could use the Arduino Nicla Sense ME’s built-in BME688 sensor, which can measure VOCs, temperature/humidity, and air quality. After gathering 29 minutes of gas and air quality measurements in the Edge Impulse Studio for both clean and soiled diapers, he trained a classification model for 300 epochs, resulting in a model with 95% accuracy.
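On the Nicla Sense ME, the BME688 readings come through the Arduino_BHY2 library's BSEC sensor object. A minimal sketch for streaming the gas-related values used as model features might look like this (the sampling interval and choice of outputs are assumptions):

```cpp
#include "Arduino_BHY2.h"

// BSEC virtual sensor: fuses raw BME688 data into IAQ and VOC estimates
SensorBSEC bsec(SENSOR_ID_BSEC);

void setup() {
  Serial.begin(115200);
  BHY2.begin();
  bsec.begin();
}

void loop() {
  BHY2.update();  // pump the sensor hub
  Serial.print("IAQ: ");
  Serial.print(bsec.iaq());
  Serial.print("  VOC eq: ");
  Serial.println(bsec.b_voc_eq());
  delay(1000);  // sampling interval is an assumption
}
```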

Based on his prior experience with the Nicla Sense ME’s BLE capabilities and MIT App Inventor, Lutz used the two to devise a small gadget that wirelessly connects to a phone app so it can send notifications when it’s time for a new diaper.

To read more about this project, you can check out Lutz’s write-up here on the Edge Impulse docs page.

The post This smart diaper knows when it is ready to be changed appeared first on Arduino Blog.

With the planet ever warmer due to climate change and the chances of wildfire greatly increased by prolonged droughts, being able to quickly detect when a fire has broken out is vital for responding while it’s still in a containable stage. But one major hurdle to collecting machine learning datasets on these types of events is that they can be quite sporadic. In his proof-of-concept system, engineer Shakhizat Nurgaliyev shows how he leveraged NVIDIA Omniverse Replicator to create an entirely synthetic dataset and then deploy a model trained on that data to an Arduino Nicla Vision board.

The project started out as a simple fire animation inside of Omniverse, which was soon followed by a Python script that produces a pair of virtual cameras and randomizes the ground plane before capturing images. Once enough images had been created, Nurgaliyev utilized the zero-shot object detection application Grounding DINO to automatically draw bounding boxes around the virtual flames. Lastly, each image was brought into an Edge Impulse project and used to develop a FOMO-based object detection model.

By taking this approach, the model achieved an F1 score of nearly 87% while needing at most 239KB of RAM and a mere 56KB of flash storage. Once the model had been deployed as an OpenMV library, Nurgaliyev's demo video shows how a MicroPython sketch running on a Nicla Vision within the OpenMV IDE detects and draws bounding boxes around flames. More information about this system can be found here on Hackster.io.

The post This Nicla Vision-based fire detector was trained entirely on synthetic data appeared first on Arduino Blog.


