
Planet Arduino

Archive for the ‘artificial intelligence’ Category

Boards Guide 2024: Boards Are Back

From Make: Vol. 87: New evolutions in dev boards make this a metamorphic period for Makers.

The post Boards Guide 2024: Boards Are Back appeared first on Make: DIY Projects and Ideas for Makers.

Discovering new music is difficult, making it a frustrating experience for both listeners and services. Identifying what one person liked about a specific song is a challenge when music is so subjective. Two different people may love the same song, but for different reasons that affect their wider tastes. In an attempt to improve the situation, Danning Liang and Artem Laptiev from MIT’s School of Architecture and Planning built a kind of AI-powered boombox called VBox that helps listeners discover music in a new way.

Most existing services use some combination of listener data and qualitative categorization of songs to aid in music discovery. But those connections are obvious and tend not to identify the factors that actually predict a listener’s enjoyment of a song. Artificial intelligence models, on the other hand, excel at finding connections and patterns that we might not see ourselves. In this case, VBox uses OpenAI’s natural language models to categorize music and find similar songs. As a song plays, VBox will list keywords related to the music. If a specific keyword resonates with the listener, they can select it to influence the next song choice.
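As a loose illustration of that keyword step, here is a Python sketch built on OpenAI’s API; the model choice, prompt wording, and openai library usage are assumptions for illustration, not details from the VBox project.

```python
# Hypothetical sketch: ask an OpenAI model for descriptive keywords
# about the currently playing track (not VBox's actual code).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def keywords_for_track(title, artist):
    prompt = (f"List five short keywords describing the mood, texture, "
              f"and style of '{title}' by {artist}, comma-separated.")
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
        max_tokens=32,
    )
    return [k.strip() for k in
            reply["choices"][0]["message"]["content"].split(",")]

print(keywords_for_track("So What", "Miles Davis"))
```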

There aren’t a lot of technical details available, but we do know that an Arduino board is somewhere in the mix. It uses RFID to identify genre cards that start the music discovery process. The keywords scroll across an LED matrix display, and a crank handle attached to a rotary encoder lets listeners move through the keyword list. The enclosure is made of gorgeous stamped sheet metal with a leather strap and some 3D-printed internal framework for the electronic components. Music pumps out through a pair of small speakers.

This is more of an art project and an AI experiment than a true attempt at creating an appealing music discovery system, but the idea is novel and it very well could prove useful for some music lovers.

The post VBox is like an AI-powered Pandora boombox appeared first on Arduino Blog.

Artificial intelligence (AI) and natural language processing (NLP) are changing the way we interact with technology. With advancements in machine learning and data processing, we now have AI-powered virtual assistants, chatbots, and voice recognition systems that can understand and respond to our queries in a natural, human-like way. One such technology is ChatGPT, a large language model developed by OpenAI based on the GPT-3.5 architecture. ChatGPT has the ability to generate coherent, context-aware responses to a wide range of questions, making it an ideal tool for communication.

Integrating ChatGPT and Arduino Cloud for IoT projects

Integrating ChatGPT and the Arduino Cloud, a platform that makes it easy to develop, deploy, and manage IoT devices, opens up a brand new world of possibilities for IoT applications. By combining ChatGPT’s natural language processing capabilities with the Arduino Cloud’s IoT platform, we can create intelligent devices that understand and respond to natural language queries, making the user experience more seamless and intuitive. For example, imagine a smart home system that can be controlled using voice commands, or a chatbot that provides instant technical support for IoT devices.

Chat with ChatGPT through Arduino IoT Cloud dashboards

This project is a simple demonstration of an Arduino IoT Cloud-compatible device, such as an Arduino Nano RP2040 Connect or any ESP32/ESP8266 device, acting as a middleware between the IoT Cloud and OpenAI’s GPT-3.5 language model. The device acts as a bridge by receiving prompts (questions) from the IoT Cloud and forwarding them to the OpenAI API. Once the model processes the prompts, the device receives and parses the replies and sends them back to the IoT Cloud, which displays the response to the user.

To embark on this project, you will need an OpenAI account, an API key, and sufficient API credits. Then, you can create your device on the IoT Cloud, program it, and set up a dashboard. The dashboard serves as the user interface, allowing you to write questions (prompts) and receive ChatGPT’s replies.

Check out the project on Arduino’s Project Hub and get more information about how to build the system yourself.

As you get into the project, you can experiment with the tunable variables, such as the maximum number of tokens ChatGPT will use when generating a response, while keeping OpenAI’s API usage limits in mind. Overall, this project presents a unique opportunity to integrate the cutting-edge capabilities of OpenAI’s language model with the versatile Arduino IoT Cloud, enabling you to create more intelligent and intuitive IoT applications.

Connect to ChatGPT using MicroPython

If you are interested in an alternative approach to connecting to ChatGPT, you can do so with a MicroPython script. If you are familiar with making HTTP requests in Python, this is a great option.

To authenticate and successfully make requests to ChatGPT, you will first need to get your API key from OpenAI and construct a POST request. Using the urequests and ujson modules, we simply ask ChatGPT a question and read back the response.

The response is printed on a 128×64 OLED display, and that’s pretty much it. It is a minimal example, but a fun one, and easy to get started with.
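A minimal MicroPython sketch along these lines shows the shape of the request; Wi-Fi setup is omitted, and the ssd1306 driver import, I2C pins, and API key are placeholders to adapt to your board.

```python
# Minimal MicroPython sketch (assumes the board is already on Wi-Fi and
# an SSD1306 OLED driver is installed); placeholders, not exact code.
import urequests
import ujson
import ssd1306
from machine import Pin, I2C

API_KEY = "YOUR_OPENAI_API_KEY"  # placeholder
URL = "https://api.openai.com/v1/chat/completions"

def ask_chatgpt(question):
    payload = ujson.dumps({
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": question}],
        "max_tokens": 100,  # cap the length of the reply
    })
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + API_KEY,
    }
    response = urequests.post(URL, data=payload, headers=headers)
    reply = response.json()["choices"][0]["message"]["content"]
    response.close()
    return reply

# Show the reply on a 128x64 OLED (I2C pins are board-specific).
i2c = I2C(0, scl=Pin(22), sda=Pin(21))
oled = ssd1306.SSD1306_I2C(128, 64, i2c)
answer = ask_chatgpt("What is an Arduino?")
oled.fill(0)
oled.text(answer[:16], 0, 0)  # text() draws one 16-character line
oled.show()
```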

To get started with MicroPython and ChatGPT, visit this repository, which contains the code and instructions.

This type of integration paves the way for many cool projects. For example, you could ask ChatGPT to evaluate recently recorded sensor data, or build a companion bot that knows everything the Internet knows.

Introducing the Arduino Cloud

The Arduino Cloud is a platform that simplifies the process of developing, deploying, and managing IoT devices. It supports various hardware, including Arduino boards and ESP32- and ESP8266-based boards, and makes it easy for makers, IoT enthusiasts, and professionals to build connected projects without coding expertise. What makes Arduino Cloud stand out is its intuitive interface, which abstracts away complex tasks and makes the platform accessible to all users. With its low-code approach and extensive collection of examples and templates, Arduino Cloud offers a simple way for users to get started.

The platform’s IoT Cloud tool allows for easy management and monitoring of connected devices through customizable dashboards, which provide real-time visualisations of the device’s data. Furthermore, the IoT Cloud can be accessed remotely through the mobile app Arduino IoT Cloud Remote, which is available for both Android and iOS devices, enabling users to manage their devices from anywhere.

Build your own

The integration of ChatGPT and Arduino Cloud has opened up a new world of opportunities for IoT applications. These projects are just some examples of how these technologies can be used to create intelligent devices that can understand and respond to natural language queries. 

If you have been inspired by these projects and want to share your own creation with the community, we encourage you to publish your project on Arduino Project Hub. By doing so, you can showcase your project and share your knowledge with others. Arduino Project Hub is a platform where users can share their Arduino-based projects and find inspiration for new ones. With a global community of makers and enthusiasts, the hub is the perfect place to collaborate, learn and explore the endless possibilities of IoT. So, whether you are a seasoned maker or just starting, we invite you to join our community and share your project with the world!

Ready to start?

Ready to unleash the potential of IoT devices and ChatGPT integration? Visit the Arduino IoT Cloud website to access official documentation and resources for the Arduino IoT Cloud. Create an account and start building your own projects today!

The post Creating intelligent IoT devices with ChatGPT and Arduino Cloud: A journey into natural language interaction appeared first on Arduino Blog.

People with visual impairments also enjoy going out to a restaurant for a nice meal, which is why it is common for wait staff to place the salt and pepper shakers in a consistent fashion: salt on the right and pepper on the left. That helps visually impaired diners quickly find the spice they’re looking for, and a similar arrangement works for utensils. But what about after the diner sets down a utensil in the middle of a meal? The ForkLocator is an AI system that can help them locate the utensil again.

This is a wearable device meant for people with visual impairments. It uses object recognition and haptic cues to help the user locate their fork. The current prototype, built by Revoxdyna, only works with forks. But it would be possible to expand the system to work with the full range of utensils. Haptic cues come from four servo motors, which prod the user’s arm to indicate the direction in which they should move their hand to find the fork.

The user’s smartphone performs the object recognition and should be worn or positioned in such a way that its camera faces the table. The smartphone app looks for the plate, the fork, and the user’s hand. It then calculates a vector from the hand to the fork and tells an Arduino board to actuate the servo motors corresponding to that direction. Those servos and the Arduino attach to a 3D-printed frame that straps to the user’s upper arm.
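Revoxdyna hasn’t shared the app’s source, but the direction-picking step is easy to picture. Here is a hypothetical Python sketch of it; the coordinates, serial protocol, and four-direction mapping are all illustrative assumptions.

```python
# Hypothetical phone/PC-side logic: quantize the hand-to-fork vector to
# one of four haptic cues and send it to the Arduino over serial.
import math
import serial  # pyserial

arduino = serial.Serial("/dev/ttyUSB0", 9600)  # assumed port and baud

def cue_for(hand, fork):
    """Map the hand-to-fork vector to the nearest of four directions."""
    dx, dy = fork[0] - hand[0], fork[1] - hand[1]
    angle = math.degrees(math.atan2(dy, dx)) % 360
    # One servo per 90-degree sector; note that in image coordinates the
    # y axis points down, so UP and DOWN may need swapping in practice.
    directions = ["RIGHT", "UP", "LEFT", "DOWN"]
    return directions[int((angle + 45) % 360 // 90)]

hand_position = (120, 340)  # example pixel coordinates from the detector
fork_position = (400, 180)
arduino.write((cue_for(hand_position, fork_position) + "\n").encode())
```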

A lot more development is necessary before a system like the ForkLocator would be ready for the consumer market, but the accessibility benefits are something to applaud.

The post This AI system helps visually impaired people locate dining utensils appeared first on Arduino Blog.

Building The Ghostwriter AI Typewriter

This crazy AI-powered interactive typewriter has been bouncing around the internet since CES ended recently. It’s a pretty great example of putting a professional finish on an interesting piece. Luckily Arvind Sanjeev, the creator, has shared a build breakdown on Twitter. Be sure to go read the entire thread; it is fascinating. Arvind started […]

The post Building The Ghostwriter AI Typewriter appeared first on Make: DIY Projects and Ideas for Makers.

Like most of us, [Peter] had a bit of extra time on his hands during quarantine and decided to take a look back at speech recognition technology in the 1970s. Quickly, he started thinking to himself, “Hmm…I wonder if I could do this with an Arduino Nano?” We’ve all probably had similar thoughts, but [Peter] really put his theory to the test.

The hardware itself is pretty straightforward. There is an Arduino Nano to run the speech recognition algorithm and a MAX9814 microphone amplifier to capture the voice commands. However, the beauty of [Peter’s] approach lies in his software implementation. [Peter] splits the work between a custom PC program he wrote and the Arduino Nano: the learning is done on the PC, while recognition runs in real time on the Nano, a typical division of labor for machine learning on a microcontroller. To capture sample audio commands, or utterances, [Peter] first had to optimize the Nano’s ADC to get sample rates sufficient for speech processing. With a bit of low-level programming, he achieved a sample rate of 9 ksps, which is plenty fast for audio processing.

To analyze the utterances, he first divided each sample utterance into 50 ms segments. Think of dividing a single spoken word into its syllables, like analyzing the “se-” in “seven” separately from the “-ven.” 50 ms might be too long or too short to capture each syllable cleanly, but hopefully that gives you a good mental picture of what [Peter’s] program is doing. He then calculated the energy of five different frequency bands for every segment of every utterance. Normally that’s done using a Fourier transform, but the Nano doesn’t have enough processing power to compute one in real time, so [Peter] took a different approach: he implemented a bank of five digital bandpass filters, which let him compute the energy of the signal in each frequency band far more cheaply.
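To make that concrete, here is an offline Python sketch of the same feature extraction, with SciPy filters standing in for [Peter’s] hand-rolled code on the Nano; the band edges are guesses, not his actual values.

```python
# Offline illustration of the feature extraction: split an utterance
# into 50 ms segments and compute the energy in five frequency bands.
import numpy as np
from scipy.signal import butter, lfilter

FS = 9000              # ~9 ksps, matching the Nano's ADC rate
SEG = int(0.050 * FS)  # 50 ms -> 450 samples per segment
BANDS = [(100, 400), (400, 800), (800, 1500), (1500, 2500), (2500, 4000)]

def band_energies(utterance):
    """Return a (segments x bands) array of per-band energies."""
    n_seg = len(utterance) // SEG
    feats = np.zeros((n_seg, len(BANDS)))
    for j, (lo, hi) in enumerate(BANDS):
        b, a = butter(2, [lo / (FS / 2), hi / (FS / 2)], btype="band")
        filtered = lfilter(b, a, utterance)
        for i in range(n_seg):
            seg = filtered[i * SEG:(i + 1) * SEG]
            feats[i, j] = np.sum(seg ** 2)  # energy = sum of squares
    return feats

# Example: one second of stand-in audio yields 20 segments x 5 bands.
print(band_energies(np.random.randn(FS)).shape)  # -> (20, 5)
```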

The energy of each frequency band for every segment is then sent to a PC, where a custom-written program creates “templates” from the sample utterances. The crux of his algorithm is comparing how closely the band energies of a new utterance, segment by segment, match each template. The PC program produces a .h file that can be compiled directly into the Nano’s firmware. He uses the example of recognizing the numbers 0-9, but you could change those commands to “start” or “stop,” for example.
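The matching step can then be as simple as a nearest-template search. In this sketch, Euclidean distance is an illustrative stand-in for whatever score [Peter’s] program actually computes.

```python
# Sketch of the matching step: score an utterance's band-energy matrix
# against each stored template and pick the closest label. Assumes all
# utterances are trimmed or padded to the same number of segments.
import numpy as np

def classify(features, templates):
    """features: (segments x bands); templates: label -> same shape."""
    best_label, best_dist = None, float("inf")
    for label, template in templates.items():
        dist = np.sum((features - template) ** 2)  # total squared error
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```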

[Peter] admits that you can’t implement the type of speech recognition on an Arduino Nano that we’ve come to expect from those covert listening devices, but he mentions that small, hands-free devices like a head-mounted multimeter could benefit from single-word or single-phrase voice commands. And maybe it could put your mind at ease knowing everything you say isn’t immediately getting beamed into the cloud and handed to our AI overlords. Or maybe we’re all starting to get used to this. Whatever your position is on the current state of AI, hopefully you’ve gained some inspiration for your next project.

Baby monitors are cool, but [Ish Ot Jr.] wanted his to only transmit sounds that required immediate attention and filter any non-emergency background noise. Posed with this problem, he made a baby monitor that would only send alerts when his baby was crying.

For his project, [Ish] used an Arduino Nano 33 BLE Sense due to its built-in microphone, sizeable RAM for storing large chunks of data, and its BLE capabilities for later connecting with an app. He began his project by collecting background noise using Edge Impulse Studio’s data acquisition functionality. [Ish] emphasized that Edge Impulse did most of the work for him: he just needed to collect some test data, and the training and testing of the neural network were handled by the platform. Sounds handy, if you don’t mind offloading your data to the cloud.

[Ish] ended up with a classifier that is 86.3% accurate, which he thought was good enough for a first pass. To make his prototype a bit more “finished,” he added some status LEDs that provide immediate visual feedback from the classifier and notify the caregiver. Eventually, he wants to add BLE support and push notifications, alerting him whenever his baby needs attention.

We’ve seen a couple of baby monitor projects on Hackaday over the years. [Ish’s] project will most certainly be a nice addition to the list.

As you work on a project, lighting needs change dynamically. This can mean manual adjustment after manual adjustment, making do with generalized lighting, or having a helper hold a flashlight. Harry Gao, however, has a different solution in the form of a novel robotic task lamp.

Gao’s 3D-printed device uses a USB camera to take images of the work area and a Python image processing routine running on a PC to detect hand positions. The PC then sends instructions to an Arduino Nano, which commands a pair of small stepper motors, via corresponding driver boards, to extend and rotate the light fixture.
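Gao hasn’t published the PC-side code, but the pipeline could be sketched roughly like this in Python; the HSV skin mask below stands in for his actual hand detector, and the one-line serial message format is an invented assumption.

```python
# Rough sketch of the PC side: find a hand in each webcam frame and
# stream its centroid to the Nano over serial.
import cv2
import serial  # pyserial

nano = serial.Serial("/dev/ttyACM0", 115200)  # assumed port and baud
cam = cv2.VideoCapture(0)

while True:
    ok, frame = cam.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))  # crude skin mask
    moments = cv2.moments(mask)
    if moments["m00"] > 0:
        cx = int(moments["m10"] / moments["m00"])  # hand centroid x
        cy = int(moments["m01"] / moments["m00"])  # hand centroid y
        nano.write(f"{cx},{cy}\n".encode())  # Nano steers the lamp
```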

The solution means that he’ll always have proper illumination, as long as he stays within the light-bot’s range!

The Jetson TX1 Cat Spotter uses advanced neural networking to recognize when there's a cat in the room — and then starts teasing it with a laser.

Read more on MAKE

The post Nvidia Jetson TX1 Cat Spotter and Laser Teaser appeared first on Make: DIY Projects and Ideas for Makers.


