
Planet Arduino

Archive for the ‘object detection’ Category

With the planet warming due to climate change and prolonged droughts greatly increasing the chances of wildfires, being able to quickly detect when a fire has broken out is vital to responding while it’s still containable. But one major hurdle in collecting machine learning datasets for these events is that they can be quite sporadic. In his proof-of-concept system, engineer Shakhizat Nurgaliyev shows how he leveraged NVIDIA Omniverse Replicator to create an entirely synthetic dataset and then deployed a model trained on that data to an Arduino Nicla Vision board.

The project started out as a simple fire animation inside Omniverse, soon followed by a Python script that creates a pair of virtual cameras and randomizes the ground plane before capturing images. Once enough images had been generated, Nurgaliyev used the zero-shot object detection model Grounding DINO to automatically draw bounding boxes around the virtual flames. Lastly, each image was imported into an Edge Impulse project and used to develop a FOMO-based object detection model.
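Grounding DINO reports boxes as normalized center coordinates, while a labeling pipeline generally needs pixel-space corner coordinates before import into a training project. A minimal sketch of that conversion (the function name and the 320×320 render size are illustrative assumptions, not details from Nurgaliyev’s project):

```python
def dino_to_pixel_box(box, img_w, img_h):
    """Convert a normalized (cx, cy, w, h) box, as Grounding DINO emits,
    to a pixel-space (x, y, w, h) box with a top-left origin."""
    cx, cy, w, h = box
    x = (cx - w / 2) * img_w
    y = (cy - h / 2) * img_h
    return (round(x), round(y), round(w * img_w), round(h * img_h))

# Example: a flame detected dead-center in a 320x320 render.
print(dino_to_pixel_box((0.5, 0.5, 0.25, 0.25), 320, 320))  # (120, 120, 80, 80)
```

A small batch script looping this over every generated image would produce labels ready for upload.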

By taking this approach, the model achieved an F1 score of nearly 87% while needing a peak of only 239KB of RAM and a mere 56KB of flash storage. Once the model was deployed as an OpenMV library, Nurgaliyev shows in his video below how a MicroPython sketch running on a Nicla Vision within the OpenMV IDE detects and draws bounding boxes around flames. More information about this system can be found here on Hackster.io.
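Part of why FOMO fits in a couple hundred kilobytes is that it doesn’t regress full bounding boxes: it scores cells of a coarse grid, downsampled 8× from the input, and reports the activated cells as object centroids. A rough sketch of how such grid detections map back to image coordinates (the grid size, input resolution, and threshold here are illustrative, not the project’s actual values):

```python
def fomo_cells_to_boxes(heatmap, input_size=96, threshold=0.5):
    """Map FOMO's coarse per-cell scores back to pixel-space boxes.
    Assumes a square grid downsampled from the input, so each cell
    covers an (input_size / grid_size) pixel patch."""
    cell = input_size // len(heatmap)  # pixels per grid cell
    boxes = []
    for row, scores in enumerate(heatmap):
        for col, score in enumerate(scores):
            if score >= threshold:
                boxes.append((col * cell, row * cell, cell, cell, score))
    return boxes

# A 12x12 grid (96px input / 8) with one confident "flame" cell.
grid = [[0.0] * 12 for _ in range(12)]
grid[3][5] = 0.9
print(fomo_cells_to_boxes(grid))  # [(40, 24, 8, 8, 0.9)]
```

The Edge Impulse/OpenMV runtime does this mapping for you; the sketch only shows why FOMO’s “boxes” come out as fixed-size cells around centroids.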

The post This Nicla Vision-based fire detector was trained entirely on synthetic data appeared first on Arduino Blog.


As you work on a project, lighting needs change dynamically. This can mean manual adjustment after manual adjustment, making do with generalized lighting, or having a helper hold a flashlight. Harry Gao, however, has a different solution in the form of a novel robotic task lamp.

Gao’s 3D-printed device uses a USB camera to capture images of the work area and a Python image-processing routine running on a PC to detect hand positions. The PC sends instructions to an Arduino Nano, which commands a pair of small stepper motors, via corresponding driver boards, to extend and rotate the light fixture.
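The core of the PC side is mapping a detected hand position to stepper targets and shipping them to the Nano. A minimal sketch of that mapping, assuming a 640×480 frame, arbitrary step ranges, and a made-up `R<n> E<n>` wire format (none of these are Gao’s actual values):

```python
def hand_to_command(hand_x, hand_y, frame_w=640, frame_h=480,
                    rot_steps=2048, ext_steps=1024):
    """Map a hand centroid (in pixels) to absolute step targets for the
    rotation and extension steppers, encoded as a one-line command."""
    rot = round(hand_x / frame_w * rot_steps)  # pan toward the hand
    ext = round(hand_y / frame_h * ext_steps)  # extend based on position in frame
    return f"R{rot} E{ext}\n"

# A hand at the center of the frame:
print(hand_to_command(320, 240))  # R1024 E512
```

On the real device a string like this would go out over a serial link, e.g. `pyserial`’s `Serial("/dev/ttyUSB0", 115200).write(...)`, with the Nano parsing the two numbers and stepping the motors toward them.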

The solution means that he’ll always have proper illumination, as long as he stays within the light-bot’s range!

A Raspberry Pi with a camera is nothing new. But the Pixy2 camera can interface with a variety of microcontrollers and has enough smarts to detect objects, follow lines, or even read barcodes without help from the host computer. [DroneBot Workshop] has a review of the device and he’s very enthused about the camera. You can see the video below.

When you watch the video, you might wonder how much this camera costs. It turns out to be about $60, which isn’t cheap, but isn’t that much for the capabilities it offers, either. The camera can detect lines, intersections, and barcodes, plus any objects you train it to recognize. It also sports its own light source and a dual servo motor drive meant for a pan-and-tilt mounting arrangement.

You can connect via USB, serial, SPI, or I2C. Internally, the camera processes at 60 frames per second and it can remember seven signatures internally. There’s a PC-based configuration program that will run on Windows, Mac, or Linux. You can even use the program to spy on the camera while it is talking to another microcontroller like an Arduino.

The camera isn’t made to take sharp photos or video; it is optimized for finding things, not picture quality. High-quality frames take more processing power, so this is a reasonable trade-off. The camera does need training to find objects by color and shape. You can do the training with the PC-based software, or with a self-contained procedure that relies on a button on the camera. The video shows both methods.

Once trained, the camera can even report objects to an Arduino. There’s a library that lets you find out how many items the camera currently sees, along with each block’s signature and location. Identification depends heavily on color, so you’ll probably need to experiment if your objects are different colors on different sides or have multiple colors.
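The library hands back an array of “blocks,” each carrying the matched signature plus a position and size. A sketch of the selection logic you’d typically write around that, shown here in Python with a dataclass standing in for the library’s block structure (field names are illustrative and only mirror the Arduino library’s):

```python
from dataclasses import dataclass

@dataclass
class Block:
    """Stand-in for the per-detection record a Pixy2-style camera
    reports: trained signature, center position, and size."""
    signature: int
    x: int
    y: int
    width: int
    height: int

def largest_block(blocks, signature):
    """Pick the biggest detection of a given trained signature --
    a common way to track one object among several candidates."""
    matches = [b for b in blocks if b.signature == signature]
    return max(matches, key=lambda b: b.width * b.height, default=None)

blocks = [Block(1, 100, 80, 20, 30), Block(1, 200, 90, 40, 50), Block(2, 50, 60, 10, 10)]
print(largest_block(blocks, 1))  # Block(signature=1, x=200, y=90, width=40, height=50)
```

On an Arduino the same loop-and-compare shape applies; only the field and function names change to match the actual library.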

Sure, you could use a sufficiently powerful computer with OpenCV to get some of these results, but having it all in one package, usable from just about any processor, could be a real game-changer for the right kind of project. If you wanted to make a fancy line-following robot that could handle five-way intersections and barcode commands, this would be a no-brainer.

We’ve seen other smart cameras, like OpenMV, before. Google also has a vision processor for the Pi; it has a lot of capability but assumes you are connecting to a Pi.


