
Planet Arduino

Archive for the ‘Featured’ Category

The Arduino CLI is an open source command line application written in Golang that can be used from a terminal to compile, verify and upload sketches to Arduino boards, and that's capable of managing all the software and tools needed in the process. But don't be fooled by its name: the Arduino CLI can do much more than the average console application, as shown by the Pro IDE and Arduino Create, which rely on it for similar purposes but each in a completely different way.

In this article, we introduce the three pillars of the Arduino CLI, explaining how we designed the software so that it can be effectively leveraged under different scenarios.

The first pillar: command line interface

Console applications for humans

As you might expect, the first way to use the Arduino CLI is from a terminal and by a human, and user experience plays a key role here. The UX is under a continuous improvement process as we want the tool to be powerful without being too complicated. We heavily rely on sub-commands to provide a rich set of different operations logically grouped together, so that users can easily explore the interface while getting very specific contextual help.

Console applications for robots

Humans are not the only customers we want to support, and the Arduino CLI was also designed to be used programmatically — think automation pipelines or a CI/CD system.

There are some niceties to observe when writing software that's supposed to run unattended, and one in particular is the ability to run without a configuration file. This is possible because every configuration option found in the arduino-cli.yaml configuration file can be provided either through a command line flag or by setting an environment variable. To give an example, the following commands are all equivalent and will fetch the unstable package index that can be used to work with experimental versions of cores:
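The invocations below assume the board_manager.additional_urls setting and its flag and environment variable equivalents, with a placeholder URL standing in for the unstable index:

$ arduino-cli core update-index --additional-urls https://example.com/package_unstable_index.json

$ ARDUINO_BOARD_MANAGER_ADDITIONAL_URLS=https://example.com/package_unstable_index.json arduino-cli core update-index

$ arduino-cli core update-index   # with the URL listed under board_manager.additional_urls in arduino-cli.yaml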

See the documentation for details about Arduino CLI’s configuration system.

Consistent with the previous paragraph, when it comes to providing output the Arduino CLI aims to be user friendly but also slightly verbose, something that doesn't play well with robots. This is why we added an option to provide output that's easy to parse. For example, this is what getting the software version in JSON format looks like.
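An illustrative run (the exact fields vary between releases):

$ arduino-cli version --format json
{"Application": "arduino-cli", "VersionString": "0.11.0", "Commit": "..."}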

One last feature worth mentioning, even if not strictly related to software design, is the availability of a one-line installation script that can be used to make the latest version of the Arduino CLI available on most systems with an HTTP client like curl or wget and a shell like bash.
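The documented one-liner is:

$ curl -fsSL https://raw.githubusercontent.com/arduino/arduino-cli/master/install.sh | sh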

The second pillar: gRPC interface

gRPC is a high-performance RPC framework that can efficiently connect client and server applications. The Arduino CLI can act as a gRPC server (we call it daemon mode), exposing a set of procedures that implement the very same set of features of the command line interface and waiting for clients to connect and use them. To give an idea, the following is some Golang code capable of retrieving the version number of a remote running Arduino CLI server instance:
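A minimal client sketch along those lines, assuming the rpc/commands protobuf package of the 0.x releases (import paths and message names have changed across versions):

package main

import (
    "context"
    "fmt"
    "log"

    rpc "github.com/arduino/arduino-cli/rpc/commands"
    "google.golang.org/grpc"
)

func main() {
    // Connect to a CLI instance started with `arduino-cli daemon` (default port 50051)
    conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    client := rpc.NewArduinoCoreClient(conn)

    // Ask the remote server for its version string
    resp, err := client.Version(context.Background(), &rpc.VersionReq{})
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println("arduino-cli version:", resp.GetVersion())
}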

gRPC is language-agnostic: even though the example is written in Golang, the client can be written in Python, JavaScript or any of the many supported languages, leading to a variety of possible scenarios. The new Arduino Pro IDE is a good example of how to leverage the daemon mode of the Arduino CLI with a clean separation of concerns: the Pro IDE knows nothing about how to download a core, compile a sketch or talk to an Arduino board; it delegates all of these features to a running Arduino CLI instance. Conversely, the Arduino CLI doesn't even know that the connected client is the Pro IDE, nor does it care.

The third pillar: embedding

The Arduino CLI is written in Golang and the code is organized in a way that makes it easy to use it as a library by including the modules you need in another Golang application at compile time. Both the first and second pillars rely on a common Golang API, a set of functions that abstract all the functionalities offered by the Arduino CLI, so that when we provide a fix or a new feature, they are automatically available to both the command line and gRPC interfaces. 

The source modules implementing this API can be imported in other Golang programs to embed a full-fledged Arduino CLI. For example, this is how some backend services powering Arduino Create can compile sketches and manage libraries. Just to give you a taste of what it means to embed the Arduino CLI, here is how to search for a core using the API:
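A rough sketch of what that looks like; the package layout and the PlatformSearch signature below reflect the 0.x sources and should be treated as assumptions, since they have been reworked in later releases:

package main

import (
    "fmt"
    "log"

    "github.com/arduino/arduino-cli/cli/instance"
    "github.com/arduino/arduino-cli/commands/core"
)

func main() {
    // Create a CLI instance: this loads the package index and platform data
    inst, err := instance.CreateInstance()
    if err != nil {
        log.Fatal(err)
    }

    // Search the platform index for cores matching "samd"
    resp, err := core.PlatformSearch(inst.GetId(), "samd", false)
    if err != nil {
        log.Fatal(err)
    }

    for _, platform := range resp.GetSearchOutput() {
        fmt.Println(platform.GetId(), platform.GetLatest(), platform.GetName())
    }
}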

Embedding the Arduino CLI is limited to Golang applications and requires a deep knowledge of its internals. For the average use case, the gRPC interface might be a better alternative; nevertheless this remains a valid option that we use and provide support for.

Conclusion

You can start playing with the Arduino CLI right away. The code is open source and we provide extensive documentation. The repo contains example code showing how to implement a gRPC client, and if you're curious about how we designed the low-level API, have a look at the commands package. Don't hesitate to leave feedback on the issue tracker if you've got a use case that doesn't fit one of the three pillars.

SSL/TLS stack and HW secure element

At Arduino, we are hard at work to keep improving the security of our hardware and software products, and we would like to run you through how our IoT Cloud service works.

The Arduino IoT Cloud‘s security is based on three key elements:

  • The open-source library ArduinoBearSSL for implementing the TLS protocol on Arduino boards;
  • A hardware secure element (Microchip ATECCX08A) to guarantee authenticity and confidentiality during communication;
  • A device certificate provisioning process to allow client authentication during MQTT sessions.

ArduinoBearSSL

In the past, it has been challenging to create a complete SSL/TLS library implementation on embedded (constrained) devices with very limited resources. 

An Arduino MKR WiFi 1010, for instance, only has 32KB of RAM while the standard SSL/TLS protocol implementations were designed for more powerful devices with ~256MB of RAM.

As of today, a lot of embedded devices still do not properly implement the full SSL/TLS stack and fail to provide good security because they misuse the library or strip functionality from it. For example, we found that a lot of off-brand boards use code that does not actually validate the server's certificate, making them an easy target for server impersonation and man-in-the-middle attacks.

Security is paramount to us, and we do not want to make compromises in this regard when it comes to our offering in both hardware and software. We are therefore always looking at “safe by default” settings and implementations. 

Particularly in the IoT era, operating without specific security measures in place puts customers and their data at risk.

This is why we wanted to make sure the security standards adopted nowadays in high-performance settings are ported to microcontrollers (MCUs) and embedded devices.

Back in 2017, while looking at different SSL/TLS libraries supporting TLS 1.2 and modern cryptography (something that could work with a very small RAM/ROM footprint, have no OS dependency, and be compatible with the embedded C world), we decided to give BearSSL a try.

BearSSL: What is it?

BearSSL provides an implementation of the SSL/TLS protocol (RFC 5246) written in C and developed by Thomas Pornin.

Optimized for constrained devices, BearSSL aims at small code footprint and low RAM usage. As per its guiding rules, it tries to find a reasonable trade-off between several partly conflicting goals:

  • Security: defaults should be robust, and using patently insecure algorithms or protocols should be made difficult in the API, or simply not possible;
  • Interoperability with existing SSL/TLS servers;
  • Support for lightweight algorithms on CPU-challenged platforms;
  • Extensibility with strong and efficient implementations on big systems where code footprint is less important.

BearSSL and Arduino

Our development team picked it as an excellent starting point and set out to make BearSSL fit on our Arduino boards, focusing on both security and performance.

Our firmware developers worked hard on porting BearSSL to Arduino, bundling it up as a neat open-source library: ArduinoBearSSL.
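To give a feel for the library, here is a minimal sketch that opens a TLS connection on a MKR WiFi 1010; the WiFi connection steps are elided, and the hostname is a placeholder:

#include <ArduinoBearSSL.h>
#include <WiFiNINA.h>

WiFiClient wifiClient;               // plain TCP transport
BearSSLClient sslClient(wifiClient); // TLS layer on top of it

unsigned long getTime() {
  // BearSSL needs the current time to validate the server's certificate
  return WiFi.getTime();
}

void setup() {
  Serial.begin(9600);
  while (!Serial);

  // ... join your WiFi network with WiFi.begin(...) here ...

  ArduinoBearSSL.onGetTime(getTime);

  if (sslClient.connect("example.com", 443)) {
    Serial.println("TLS connection established!");
  }
}

void loop() {}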

Because the computational effort of performing a crypto algorithm is high, we decided to offload part of this task to hardware, using a secure element (we often call it a "crypto chip"). Its advantages are:

  • It makes the computation of cryptography operations faster;
  • It doesn't force you to use all the available RAM of your device for these demanding tasks;
  • It stores private keys securely (more on this later);
  • It provides a true random number generator (TRNG).

How does the TLS protocol work?

TLS uses both asymmetric and symmetric encryption. Asymmetric encryption is used during the TLS handshake between the client and the server to exchange the shared session key for communication encryption. The algorithms commonly used in this phase are based on Rivest-Shamir-Adleman (RSA) or Diffie-Hellman algorithms. 

TLS 1.2 Handshake flow

After the TLS handshake, the client and the server both have a session key for symmetric encryption (e.g. algorithms AES 128 or AES 256).

The TLS protocol is an important part of our IoT Cloud security model because it guarantees encrypted communication between the IoT devices and our servers.

The secure element

In order to save memory and improve security, our development team has chosen to introduce a hardware secure element to offload part of the computational load of the cryptography algorithms, as well as to generate, store, and manage certificates. For this reason, on the Arduino MKR family, Arduino Nano 33 IoT and Arduino Uno WiFi Rev2, you will find the secure element ATECC508A or ATECC608A manufactured by Microchip.

How do we use the secure element?

A secure element is an advanced hardware component able to perform cryptographic functions. We have decided to implement it on our boards to guarantee two fundamental security properties in IoT communication:

  • Authenticity: You can trust who you are communicating with;
  • Confidentiality: You can be sure the communication is private.

Moreover, the secure element is used during the provisioning process to configure the Arduino board for the Arduino IoT Cloud. In order to connect to the Arduino IoT Cloud MQTT broker, our boards don't use standard credential authentication (a username/password pair). We instead opted for a higher-level authentication mechanism known as client certificate authentication.

How does the Arduino provisioning work?

The whole process is possible thanks to an API, which exposes an endpoint a client can interact with.

As you can see in the diagram below, first the Client requests to register a new device on Arduino IoT Cloud via the API, to which the server (API) returns a UUID (Universally Unique IDentifier). At this point, the user can upload the sketch Provisioning.ino to the target board. This code is responsible for multiple tasks:

  • Generating a private key using the ATECCX08A and storing it in a secure slot that can only be read by the secure element;
  • Generating a CSR (Certificate Signing Request) using the device UUID as Common Name (CN) and the generated private key to sign it;
  • Storing the certificate signed by Arduino acting as the authority.

After the CSR generation, the user sends it via the API to the server and the server returns a certificate signed by Arduino. This certificate is stored, in a compressed format, in a slot of the secure element (usually in slot 10) and it is used to authenticate the device to the Arduino IoT Cloud.
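A condensed sketch of the key and CSR steps, based on the ArduinoECCX08 library (the slot number and Common Name are illustrative; the real Provisioning.ino does more):

#include <ArduinoECCX08.h>
#include <utility/ECCX08CSR.h>

void setup() {
  Serial.begin(9600);
  while (!Serial);

  if (!ECCX08.begin()) {
    Serial.println("No ECCX08 present!");
    while (1);
  }

  // Generate a new private key in slot 0 and start a CSR with it.
  // The key never leaves the secure element; only the CSR is exported.
  if (!ECCX08CSR.begin(0, true)) {
    Serial.println("Error starting CSR generation!");
    while (1);
  }

  ECCX08CSR.setCommonName("your-device-uuid"); // the UUID returned by the API
  String csr = ECCX08CSR.end();

  Serial.println(csr); // send this to the API to get back a signed certificate
}

void loop() {}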

The arduino-cli tool just got some new exciting features with the release of 0.11.0:

  • Command-line completion
  • External programmer support
  • Internationalization and localization support (i18n)

Command-line completion

Finally, the autocompletion feature has landed!

With this functionality, the program automatically fills in partially typed commands by pressing the tab key. For example, with this update, you can type “arduino-cli bo”:

And, after pressing the <TAB> key, the command will auto-magically become "arduino-cli board".

There are a few steps to follow in order to make it work seamlessly. We have to generate the required file — to do so, we have added a new command named “completion”. 

To generate the completion file, you can use:
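For Bash, for example:

$ arduino-cli completion bash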

By default, this command will print on the standard output (the shell window) the content of the completion file. To save to an actual file, use the “>” redirect symbol. Now you can move it to the required location (it depends on the shell you are using). Remember to open a new shell! Finally, you can press <TAB><TAB> to get the suggested command and flags.

In a future release, we will also be adding completion for core names, libraries, and boards.

Example with Bash (from the documentation)

To generate the completion file, use:
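$ arduino-cli completion bash > arduino-cli.sh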

At this point, you can move that file into "/etc/bash_completion.d/" (root access is required) with:
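$ sudo mv arduino-cli.sh /etc/bash_completion.d/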

An alternative, though not recommended, is to source the completion file in `.bashrc`.

Remember to open a new shell to test the functionality.

External programmer

Another brand new feature is support for external programmers!

Now you can specify the external programmer to use when uploading code to a board. For example, you can use `arduino-cli upload ... --programmer programmer-id` for that. You can list the supported programmers with `arduino-cli upload --fqbn arduino:avr:uno --programmer list`.

And if you're using the external programmer to burn a bootloader, you can do that from arduino-cli as well: `arduino-cli burn-bootloader --fqbn ...`

Internationalization and localization support

Now the Arduino CLI messages can be translated to your native language thanks to i18n support! We are currently setting up the infrastructure; however, if you would like to help us with the translation, we will provide you more details in another blog post soon!

That’s all folks!

That’s it, we’ve worked hard to add these new features. Check them out by downloading 0.11.0 here. Do you like them? What are your thoughts on the arduino-cli? Are you using it for your projects? Let us know in the comments!

If you’re interested in embedded machine learning (TinyML) on the Arduino Nano 33 BLE Sense, you’ll have found a ton of on-board sensors — digital microphone, accelerometer, gyro, magnetometer, light, proximity, temperature, humidity and color — but realized that for vision you need to attach an external camera.

In this article, we will show you how to get image data from a low-cost VGA camera module. We'll be using the Arduino_OV767X library to make the software side of things simpler.

Hardware setup

To get started, you will need:

  • An Arduino Nano 33 BLE Sense (with headers)
  • An OV7670 camera module
  • Jumper wires
  • A micro USB cable

You can of course get a board without headers and solder instead, if that's your preference.

The one downside to this setup is that (in module form) there are a lot of jumpers to connect. It's not hard, but you need to take care to connect the right cables at either end. You can use tape to secure the wires once things are done, lest one come loose.

You need to connect the wires as follows:

Software setup

First, install the Arduino IDE or register for Arduino Create tools. Once you install and open your environment, the camera library is available in the library manager.

  • Install the Arduino IDE or register for Arduino Create
  • Tools > Manage Libraries and search for the Arduino_OV767X library
  • Press the Install button

Now, we will use the example sketch to test the cables are connected correctly:

  • Examples > Arduino_OV767X > CameraCaptureRawBytes
  • Uncomment line 48 to display a test pattern – Camera.testPattern();
  • Compile and upload to your board

Your Arduino is now outputting raw image binary over serial. You cannot view the image using the Arduino Serial Monitor; instead, we’ve included a special application to view the image output from the camera using Processing.

Processing is a simple programming environment that was created by graduate students at MIT Media Lab to make it easier to develop visually oriented applications with an emphasis on animation and providing users with instant feedback through interaction.

To run the Arduino_OV767X camera viewer:

  • Install Processing 
  • Open Examples > Arduino_OV767X > extras > CameraVisualizerRawBytes
  • Copy the CameraVisualizerRawBytes code 
  • Paste the code into the empty sketch in Processing 
  • Edit lines 35-37 to match the machine and serial port your Arduino is connected to
  • Hit the play button in Processing and you should see a test pattern (image update takes a couple of seconds):

If all goes well, you should see the striped test pattern above! To see a live image from the camera in the Processing viewer: 

  • Comment out line 48 of the Arduino sketch
  • Compile and upload to the board
  • Once the sketch is uploaded, hit the play button in Processing again
  • After a few seconds, you should now have a live image:

Considerations for TinyML

The full VGA (640×480) output from our little camera is way too big for current TinyML applications. uTensor runs handwriting detection with MNIST, which uses 28×28 images. The person detection example in TensorFlow Lite for Microcontrollers uses 96×96, which is more than enough. Even state-of-the-art 'Big ML' applications often use only 320×320 images (see the TinyML book). Also consider that an 8-bit grayscale VGA image occupies 300KB uncompressed, while the Nano 33 BLE Sense has only 256KB of RAM. We have to do something to reduce the image size!

Camera format options

The OV7670 module supports lower resolutions through configuration options, which modify the image data before it reaches the Arduino. The configurations currently available via the library are:

  • VGA – 640 x 480
  • CIF – 352 x 240
  • QVGA – 320 x 240
  • QCIF – 176 x 144

This is a good start, as it reduces the amount of time it takes to send an image from the camera to the Arduino. It also reduces the size of the image data array required in your Arduino sketch. You select the resolution by changing the value in Camera.begin. Don't forget to change the size of your array too.

Camera.begin(QVGA, RGB565, 1)
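For example, for QVGA at 2 bytes per pixel, the receiving buffer in the sketch could be sized like this (the array name is illustrative):

byte data[320 * 240 * 2]; // QVGA in RGB565: 153,600 bytes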

The camera library also offers different color formats: YUV422, RGB444 and RGB565. These define how the color values are encoded and all occupy 2 bytes per pixel in our image data. We’re using the RGB565 format which has 5 bits for red, 6 bits for green, and 5 bits for blue:

Converting the 2-byte RGB565 pixel to individual red, green, and blue values in your sketch can be accomplished as follows:

    // Convert a 2-byte RGB565 pixel to 24-bit RGB.
    // 'high' and 'low' are the first and second bytes of the pixel
    // as read from the image data buffer.
    uint16_t pixel = (high << 8) | low;

    int red   = ((pixel >> 11) & 0x1f) << 3; // top 5 bits
    int green = ((pixel >> 5)  & 0x3f) << 2; // middle 6 bits
    int blue  = ((pixel >> 0)  & 0x1f) << 3; // bottom 5 bits

Resizing the image on the Arduino

Once we get our image data onto the Arduino, we can then reduce the size of the image further. Just removing pixels will give us a jagged (aliased) image. To do this more smoothly, we need a downsampling algorithm that can interpolate pixel values and use them to create a smaller image.

The techniques used to resample images are an interesting topic in themselves. We found that the simple downsampling example from Eloquent Arduino works fine with the Arduino_OV767X camera library output (see animated GIF above).
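To illustrate the idea (this is a simplified version, not the Eloquent Arduino code), a 2×2 block average halves the image in each dimension while smoothing the result; the buffer names and dimensions below are assumptions:

const int srcW = 320, srcH = 240;  // e.g. a QVGA grayscale frame
byte src[srcH][srcW];              // input image
byte dst[srcH / 2][srcW / 2];      // output image, half size in each dimension

void downsample2x2() {
  for (int y = 0; y < srcH; y += 2) {
    for (int x = 0; x < srcW; x += 2) {
      // Average the four pixels of each 2x2 block
      int sum = src[y][x] + src[y][x + 1] + src[y + 1][x] + src[y + 1][x + 1];
      dst[y / 2][x / 2] = sum / 4;
    }
  }
}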

Applications like the TensorFlow Lite Micro person detection example, which use CNN-based models on Arduino for machine vision, may not need any further preprocessing of the image — other than averaging the RGB values of each pixel to obtain 8-bit grayscale data.

However, if you do want to perform normalization, iterating across pixels using the Arduino max and min functions is a convenient way to obtain the upper and lower bounds of input pixel values. You can then use map to scale the output pixel values to a 0-255 range.

byte pixelOut = map(input[y][x][c], lower, upper, 0, 255); 
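Putting those together, a normalization pass over an 8-bit grayscale buffer might look like this (the gray buffer and its dimensions are illustrative):

const int w = 96, h = 96;
byte gray[h][w]; // 8-bit grayscale image, e.g. from averaged RGB values

void normalize() {
  byte lower = 255, upper = 0;

  // Find the darkest and brightest pixels in the image
  for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
      lower = min(lower, gray[y][x]);
      upper = max(upper, gray[y][x]);
    }
  }

  // Stretch the pixel values to cover the full 0-255 range
  if (upper > lower) {
    for (int y = 0; y < h; y++) {
      for (int x = 0; x < w; x++) {
        gray[y][x] = map(gray[y][x], lower, upper, 0, 255);
      }
    }
  }
}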

Conclusion

This was an introduction to how to connect an OV7670 camera module to the Arduino Nano 33 BLE Sense and some considerations for obtaining data from the camera for TinyML applications. There’s a lot more to explore on the topic of machine vision on Arduino — this is just a start!

Today, we're announcing a new security feature for our community: two-factor authentication (2FA) on Arduino web services. We have implemented two-step verification for logging in to arduino.cc, so our users can be sure of their online safety.

If enabled, two-factor authentication adds a security layer to the user's account, giving better protection to their IoT devices connected to the Arduino IoT Cloud. We encourage our users to enable 2FA to improve their online safety.

How to enable two-factor authentication

Arduino supports two-factor authentication via authenticator software such as Authy or Google Authenticator. To enable 2FA on your account:

1. Go to id.arduino.cc and click on Activate in the Security frame of your account:

2. Scan the QR code using your own authenticator app (e.g. Authy, Google Authenticator, Microsoft Authenticator, etc.)

3. Your authenticator app will now display a six-digit code that changes every 30 seconds: copy it into the text field and click Verify.

4. Important: Save your Recovery code in a safe place and do not lose it. If you lose your 2FA codes (e.g. you misplace or break your phone), you can still restore your account using the recovery code. If you lose both 2FA and recovery codes, you will no longer be able to access your account.

5. Great! You now have two-factor authentication enabled on your Arduino account.

Machine learning (ML) algorithms come in all shapes and sizes, each with their own trade-offs. We continue our exploration of TinyML on Arduino with a look at the Arduino KNN library.

In addition to powerful deep learning frameworks like TensorFlow for Arduino, there are also classical ML approaches suitable for smaller data sets on embedded devices that are useful and easy to understand — one of the simplest is KNN.

One advantage of KNN is that once the Arduino has some example data, it is instantly ready to classify! We've released a new Arduino library so you can include KNN in your sketches quickly and easily, with no off-device training or additional tools required.

In this article, we'll take a look at KNN using the color classifier example. We've shown the same application with deep learning before — KNN is a faster and lighter-weight approach by comparison, but it won't scale as well to larger, more complex datasets.

Color classification example sketch

In this tutorial, we’ll run through how to classify objects by color using the Arduino_KNN library on the Arduino Nano 33 BLE Sense.

To set up, you will need the following:

  • Arduino Nano 33 BLE Sense board
  • Micro USB cable
  • Open the Arduino IDE or Arduino Create
  • Install the Arduino_KNN library 
  • Select ColorClassifier from File > Examples > Arduino_KNN 
  • Compile this sketch and upload to your Arduino board

The Arduino_KNN library

The example sketch makes use of the Arduino_KNN library. The library provides a simple interface to make use of KNN in your own sketches:

#include <Arduino_KNN.h>

// Create a new KNNClassifier
KNNClassifier myKNN(INPUTS);

In our example INPUTS=3 – for the red, green and blue values from the color sensor.

Sampling object colors

When you open the Serial Monitor you should see the following message:

Arduino KNN color classifier
Show me an example Apple

The Arduino board is ready to sample an object's color. If you don't have an Apple, Pear and Orange to hand, you might want to edit the sketch to put in different labels. Keep in mind that the color sensor works best in a well-lit room on matte, non-shiny objects, and that each class needs to have distinct colors! (The color sensor isn't ideal for distinguishing between an orange and a tangerine — but it could detect how ripe an orange is. If you want to classify objects by shape, you can always use a camera.)

When you put the Arduino board close to the object, it samples the color and adds it to the KNN examples along with a number labelling the class the object belongs to (i.e. 0, 1 or 2, representing Apple, Orange or Pear). ML techniques where you provide labelled example data are also called supervised learning.

The code in the sketch to add the example data to the KNN function is as follows:

readColor(color);

// Add example color to the KNN model
myKNN.addExample(color, currentClass);

The red, green and blue levels of the color sample are also output over serial:

The sketch takes 30 color samples for each object class. You can show it one object and it will sample the color 30 times — you don’t need 30 apples for this tutorial! (Although a broader dataset would make the model more generalized.)

Classification

With the example samples acquired, the sketch will now ask to guess your object! The example reads the color sensor using the same function it used when acquiring training data — only this time it calls the classify function, which guesses an object class when you show it a color:

 readColor(color);

 // Classify the object
 classification = myKNN.classify(color, K);

You can try showing it an object and see how it does:

Let me guess your object
0.44,0.28,0.28
You showed me an Apple

Note: It will not be 100% accurate especially if the surface of the object varies or the lighting conditions change. You can experiment with different numbers of examples, values for k and different objects and environments to see how this affects results.

How does KNN work?

Although the Arduino_KNN library does the math for you, it's useful to understand how ML algorithms work when choosing one for your application. In a nutshell, the KNN algorithm classifies objects by comparing how close they are to previously seen examples. Here's an example chart with average daily temperature and humidity data points. Each example is labelled with a season:

To classify a new object (the "?" on the chart), the KNN classifier looks for the most similar previous example(s) it has seen. As there are two inputs in our example, the algorithm does this by calculating the distance between the new object and each of the previous examples. You can see the closest example above is labelled "Winter".

The k in KNN is just the number of closest examples the algorithm considers. With k=3 it counts the three closest examples. In the chart above the algorithm would give two votes for Spring and one for Winter — so the result would change to Spring. 

One disadvantage of KNN is that the more training example data there is, the longer the algorithm needs to spend checking distances each time it classifies an object. This makes KNN less feasible for large datasets, and it is a major difference between KNN and a deep learning based approach.

Classifying objects by color

In our color classifier example there are three inputs from the color sensor. The example colors from each object can be thought of as points in three dimensional space positioned on red, green and blue axes. As usual the KNN algorithm guesses objects by checking how close the inputs are to previously seen examples, but because there are three inputs this time it has to calculate the distances in three dimensional space. The more dimensions the data has the more work it is to compute the classification result.
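For instance, the distance between a new color reading and a stored example is just the straight-line (Euclidean) distance in RGB space; here is a sketch of that calculation (not the library's internal code):

#include <math.h>

// Euclidean distance between two RGB readings (each value in 0.0-1.0, as in the sketch)
float distance3(const float a[3], const float b[3]) {
  float dr = a[0] - b[0];
  float dg = a[1] - b[1];
  float db = a[2] - b[2];
  return sqrtf(dr * dr + dg * dg + db * db);
}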

Further thoughts

This is just a quick taste of what’s possible with KNN. You’ll find an example for board orientation in the library examples, as well as a simple example for you to build on. You can use any sensor on the BLE Sense board as an input, and even combine KNN with other ML techniques.

Of course, there are other machine learning resources available for Arduino, including TensorFlow Lite tutorials as well as support from professional tools such as Edge Impulse and Qeexo. We'll be inviting more experts to explore machine learning on Arduino in the coming weeks.

Today, we are excited to announce the arrival of the Arduino IDE 1.8.13.

Significant improvements include fixing the crash on Mac OS X with multiple monitor setups and resolving the recent package_index.json issue without further user intervention.

You will also notice that the boards listed in the “Tools” menu are now grouped by platform, making it easier to navigate when you have multiple boards loaded.

To see the full list of features, be sure to check out the changelog here. And as always, a big thank you to our community for their incredible support and contributions!

This article was written by César Garcia, researcher at La Hora Maker.

This week, we will be exploring the Apollo Ventilator in detail! This project emerged at Makespace Madrid two months ago, as a response to the first news about the expected shortage of ventilators in Spain caused by COVID-19.

Several members of the space decided to explore this problem. They joined Telegram groups and started participating in the coronavirus maker forum. In this group, they stumbled upon an initial design shared by a doctor that would serve as a starting point for the ventilator project.

Credits: Apollo Ventilator (Photo by Apollo Ventilator Team)

To advance the project, a small but active group met daily at "Makespace Virtual," a virtual space running the open-source video conferencing software Jitsi. Each of the eight core members contributed their expertise in design, engineering, coding, etc. Due to the confinement measures in place, access to the space was quite limited, so everyone decided to work from home and a single person merged all advances at the makespace physically. A few weeks later, doctors from La Paz Hospital in Madrid got in touch with the Apollo team, looking for ways to work together on the ventilator.

One of the hardest challenges to overcome was the lack of medical materials; global demand has disrupted supply chains everywhere! The team had to improvise with the means at their disposal. To regulate the flow of gases, they created a 3D-printed pinch valve that collapses a medical-grade silicone tube at the input. This mechanism is controlled using the same electronics found in 3D printers: an Arduino Mega 2560 board with a RAMPS shield!

Credits: 3D-printed valve pinch (Photo by Apollo Ventilator Team)

As for sensors, they decided to go for certified versions that could be sterilized in an autoclave. They looked everywhere without success; a few days later, they got support from a large electronics supplier, which provided them with an equivalent model suited for children or adults up to 80 kg.

They decided to work on a shared repository to coordinate all the distributed efforts. This attracted new members and talents, doubling the team in size and sparking new lines of development. The Apollo Ventilator is an open-source project, meaning that new people can learn and create new features together.

Based on their experience sourcing components, they wanted Apollo to be flexible; most other certified ventilators are too specific. They want to become "the Marlin of ventilators!" Marlin is one of the most widely used firmware projects in the world for controlling 3D printers, and it can manage all kinds of boards and adapt to different configurations easily.

In the case of the Apollo Ventilator, the initial setup runs on a single Arduino Mega board, using an attached computer as the display. The current code can also be configured to use a secondary Arduino board, connected over a serial port, as a display. As for the interface, there are several alternatives using GTK and Qt. It's also possible to send this data over MQTT, so data from many ventilators can be centralized. Other alternative builds even used regular snorkeling parts! The Apollo Ventilator aspires to serve as the basis for new projects and initiatives where off-the-shelf solutions are not available. Another potential outcome would be low-cost ventilators for veterinary practice or education.

Credits: Apollo Ventilator made out of snorkeling equipment (Photo by Apollo Ventilator Team)

The Apollo Ventilator is currently under development. The team is now expanding tests on lung simulators. Next steps involve working with hospitals and veterinary schools; they will tackle these phases once the medical services are less overwhelmed.

The Apollo Ventilator takes its name from the famous Apollo missions to the moon, which overcame all obstacles to take us where humanity had never been before. This project shares the same spirit with regard to open-source ventilators: it is trying to overcome one of the biggest contemporary challenges, the COVID-19 pandemic.

To learn more about the Apollo Ventilator, you can check out its repository. At this link you can also find an interview (in Spanish) with Javi, the Apollo Ventilator's project leader.

If you’d like to know more about Makespace Madrid, visit their website.

Arduino staff and Arduino community are strongly committed to support projects aimed at fighting and lessening the impact of COVID-19. Arduino products are essential for both R&D and manufacturing purposes related to the global response to Covid-19, in building digital medical devices and manufacturing processes for medical equipment and PPE. However, all prototypes and projects aimed to fight COVID-19 using Arduino open-source electronics and digital fabrication do not create any liability to Arduino (company, community and Arduino staff members). Neither Arduino nor Arduino board, staff members and community will be responsible in any form and to any extent for losses or damages of whatever nature (direct, indirect, consequential, or other) which may arise related to Arduino prototypes, Arduino electronic equipment for critical medical devices, research operations, forum and blog discussions and in general Covid-19 Arduino-based pilot and non pilot projects, independently of the Arduino control on progress or involvement in the research, development, manufacturing and in general implementation phases.

This post is written by Jan Jongboom and Dominic Pajak.

Running machine learning (ML) on microcontrollers is one of the most exciting developments of recent years, allowing small battery-powered devices to detect complex motions, recognize sounds, or find anomalies in sensor data. To make building and deploying these models accessible to every embedded developer, we're launching first-class support for the Arduino Nano 33 BLE Sense and other 32-bit Arduino boards in Edge Impulse.

The trend of running ML on microcontrollers is called embedded ML or TinyML. It means devices can make smart decisions without needing to send data to the cloud – great from an efficiency and privacy perspective. Even powerful deep learning models (based on artificial neural networks) are now reaching microcontrollers. This past year, great strides were made in making deep learning models smaller, faster and runnable on embedded hardware through projects like TensorFlow Lite Micro, uTensor and Arm's CMSIS-NN; but building a quality dataset, extracting the right features, and training and deploying these models is still complicated.

Using Edge Impulse you can now quickly collect real-world sensor data, train ML models on this data in the cloud, and then deploy the model back to your Arduino device. From there you can integrate the model into your Arduino sketches with a single function call. Your sensors are then a whole lot smarter, being able to make sense of complex events in the real world. The built-in examples allow you to collect data from the accelerometer and the microphone, but it’s easy to integrate other sensors with a few lines of code. 

Excited? This is how you build your first deep learning model with the Arduino Nano 33 BLE Sense (there’s also a video tutorial here: setting up the Arduino Nano 33 BLE Sense with Edge Impulse):

  • Download the Arduino Nano 33 BLE Sense firmware — this is a special firmware package (source code) that contains all code to quickly gather data from its sensors. Launch the flash script for your platform to flash the firmware.
  • Launch the Edge Impulse daemon to connect your board to Edge Impulse. Open a terminal or command prompt and run:
$ npm install edge-impulse-cli -g
$ edge-impulse-daemon
  • Your device now shows in the Edge Impulse studio on the Devices tab, ready for you to collect some data and build a model.
  • Once you're done, you can deploy your model back to the Arduino Nano 33 BLE Sense, either as a binary which includes your full ML model, or as an Arduino library which you can integrate in any sketch.
Deploying to Arduino from Edge Impulse
  • Your machine learning model is now running on the Arduino board. Open the serial monitor and run `AT+RUNIMPULSE` to start classifying real world data!
Keyword spotting on the Arduino Nano 33 BLE Sense

Integrates with your favorite Arduino platform

We've launched with the Arduino Nano 33 BLE Sense, but you can also integrate Edge Impulse with your favorite Arduino platform. You can easily collect data from any sensor and development board using the Data forwarder. This is a small application that reads data over serial and sends it to Edge Impulse. All you need is a few lines of code in your sketch (here's an example).
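The sketch side can be as simple as printing comma-separated sensor values at a fixed rate. A hypothetical version for the Nano 33 BLE Sense's accelerometer, using the Arduino_LSM9DS1 library, might be:

#include <Arduino_LSM9DS1.h> // accelerometer on the Nano 33 BLE Sense

void setup() {
  Serial.begin(115200);
  if (!IMU.begin()) {
    while (1); // halt if the IMU is not available
  }
}

void loop() {
  float x, y, z;
  if (IMU.accelerationAvailable()) {
    IMU.readAcceleration(x, y, z);
    // One line per sample, values separated by commas
    Serial.print(x); Serial.print(',');
    Serial.print(y); Serial.print(',');
    Serial.println(z);
  }
}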

After you've built a model you can easily export it as an Arduino library. This library will run on any Arm-based Arduino platform, including the Arduino MKR family or Arduino Nano 33 IoT, provided it has enough RAM to run your model. You can now include your ML model in any Arduino sketch with just a few lines of code. After you've added the library to the Arduino IDE, you can find an example of integrating the model under Files > Examples > Your project – Edge Impulse > static_buffer.
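The generated static_buffer example has roughly this shape, assuming the Edge Impulse inferencing SDK; the header name is project-specific, and the feature values are a placeholder you paste from the studio:

#include <your_project_inferencing.h> // generated library; name depends on your project

// One window of raw sensor data, copied from the studio (placeholder values)
static const float features[] = { 0 };

int get_feature_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, features + offset, length * sizeof(float));
  return 0;
}

void setup() {
  Serial.begin(115200);
}

void loop() {
  // Wrap the feature buffer in a signal and run the impulse on it
  signal_t signal;
  signal.total_length = sizeof(features) / sizeof(features[0]);
  signal.get_data = &get_feature_data;

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
      Serial.print(result.classification[i].label);
      Serial.print(": ");
      Serial.println(result.classification[i].value);
    }
  }

  delay(1000);
}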

To run your models as fast and energy-efficiently as possible, we automatically leverage the hardware capabilities of your Arduino board – for example, the signal processing extensions available on the Arm Cortex-M4 based Arduino Nano 33 BLE Sense or the more powerful Arm Cortex-M7 based Arduino Portenta H7. We also leverage the optimized neural network kernels that Arm provides in CMSIS-NN.

A path to production

This release is the first step in a really exciting collaboration. We believe that many embedded applications can benefit from ML today, whether it’s for predictive maintenance (‘this machine is starting to behave abnormally’), to help with worker safety (‘fall detected’), or in health care (‘detected early signs of a potential infection’). Using Edge Impulse with the Arduino MKR family you can already quickly deploy simple ML based applications combined with LoRa, NB-IoT cellular, or WiFi connectivity. Over the next months we’ll also add integrations for the Arduino Portenta H7 on Edge Impulse, making higher performance industrial applications possible.

On a related note: if you have ideas on how TinyML can help to slow down or detect the COVID-19 virus, then join the UNDP COVID-19 Detect and Protect Challenge. For inspiration, see Kartik Thakore’s blog post on cough detection with the Arduino Nano 33 BLE Sense and Edge Impulse.

We can’t wait to see what you’ll build!

Jan Jongboom is the CTO and co-founder of Edge Impulse. He built his first IoT projects using the Arduino Starter Kit.

Dominic Pajak is VP Business Development at Arduino.

If you've been following the development of the Arduino IoT Cloud closely, you have probably noticed that over the months the Dashboard features have been progressing by leaps and bounds. Sure, behind the scenes there's work being done every day, but our users need and want features that better help them manage their connected devices.

As Arduino moves towards a more cohesive UX and UI, we have recently released a set of new widgets for our enhanced, aggregated Dashboard, which allows users to pick from multiple IoT things and build beautiful control panels with lots of flexibility.

Here’s a quick summary video highlighting what these new features and widgets are.

We look forward to showing you more in the next few weeks. 


