When I saw the announcement, the SparkFun Edge board struck me as having the potential to demonstrate low-level machine learning applications. It has all the essentials: the ability to gather raw data as sound, vibration, gesture, and possibly video; very low power consumption; Bluetooth and serial interfaces; and an LED array and a button as a very simple user interface. Simple means to try something out and see whether it actually works.

In hardware machine learning there is a community vacuum.

Although machine learning and deep learning have their community, embedded machine learning has not been able to enjoy one so far. Data scientists have been able to demo their applications with relative ease for a while now; this has only recently become possible for embedded ML applications.

I believe that the hacker culture of the IoT community will reach towards elementary ML applications. That is now possible with their tools; we are observing its infancy, and it presents great potential and a wide variety of applications.

After a long day of walking around Eindhoven (NL), I had to take some pictures of the Evoluon.
The workshop happened in Eindhoven, but unfortunately not in this beauty, the Evoluon. I just thought you should see it. Photo by AAÇ

So, since this is being ignited all around the world, we thought: why wouldn't we do it here ourselves? We therefore agreed to host a crash course/workshop with the SparkFun boards at TU/e, courtesy of FruitpunchAI.

In this blog post, I would like to share my observations on such an event.

The audience is highly heterogeneous, in all ways.

I expected that many people would have Windows 10 laptops without a VirtualBox installation, no knowledge of the terminal, and only an elementary knowledge of C programming.

It was quite a turnout. We had to share the boards and, even worse, the serial adapters.

Therefore I tried to prepare as much as possible for them, which meant the OS, the drivers, the sources, the prerequisites, a coding mixtape (see below), and a lightweight editor to crack on with.

So I prepared a VirtualBox image with all of it, informed people accordingly, urged them to install VirtualBox beforehand, and shared the image at the beginning of the workshop.

Here are some things to consider if you want to give it a spin yourself.

  • Sharing large files fast is still pretty hard. We tried sending the image as a torrent, in order to make use of peer-to-peer distribution, but it proved very impractical, even for the tech-savvy. We then distributed it on USB sticks, which was pretty easy.
  • The number of problems associated with using a VM is still high.
    • At least 30% of the laptops in the workshop had virtualization technology disabled in the BIOS settings. Their owners had to enable it and restart.
    • Some 30% were already using Linux, and a couple of them ran non-Ubuntu distros and therefore had some difficulty installing the extension pack.
    • The hardware requirements of an Ubuntu 18.04 LTS VM are still high enough to impede a smooth user experience.
    • Some people already use virtualization in their daily tasks, so matching versions is really hard.
  • Having Google Colab notebooks was immensely helpful. In practice, a workshop does not proceed in lock-step, and Colab notebooks give attendees the autonomy they need.
  • Notebooks over virtual machines.
    • Next time, I will definitely find a way to get people on board a remote notebook: for setting up the environment, editing, training, solving challenges, and even cross-compiling (a sketch of such a cross-compile cell follows this list).
    • For the final functionality that requires a hardware interface, I might opt for MinGW, or let them use their own Linux distros.
  • In short, in the value chain of the workshop, I would go for a solution sourced from one particular location, and diversify only if necessity arises.
  • The fact that I bought one cable for every two boards impeded progress a lot. No need to be frugal about it; just buy enough boards and cables. Managing shared hardware takes too much energy (the port-detection sketch after this list is one small mitigation).
  • Getting to the phase where everybody had flashed these things at least once took more than 1.5 hours.
  • Many people already had an idea of what ML is, so an introduction was rather unnecessary.
  • Synchrony is impossible; just draw the map and prepare the road, but let them tread it.
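
To make the remote-notebook idea concrete, here is a minimal sketch of what such a cross-compile cell could look like. The repository layout, Makefile path, and target name reflect the TensorFlow tree as I know it at the time of writing and may well move; treat them as assumptions rather than gospel.

```python
# Minimal sketch of a Colab cell that cross-compiles the micro_speech
# example for the Edge. The Makefile path and target are assumptions
# based on the 2019 TensorFlow tree; adjust if they have moved.
import subprocess

# A shallow clone keeps the notebook start-up short.
subprocess.run(
    ["git", "clone", "--depth", "1",
     "https://github.com/tensorflow/tensorflow.git"],
    check=True,
)

# Build the binary for the SparkFun Edge target; the Makefile fetches
# its own GCC ARM toolchain on first run, so this takes a while.
subprocess.run(
    ["make", "-f", "tensorflow/lite/experimental/micro/tools/make/Makefile",
     "TARGET=sparkfun_edge", "micro_speech_bin"],
    cwd="tensorflow",
    check=True,
)
```

The resulting binary still has to be signed and flashed over the serial adapter on the attendee's own machine, which is exactly where the hardware-interface step comes in.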
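
As for the serial-adapter shuffle itself: with adapters moving between machines, device names kept changing, so a small pre-flash check could have saved some fumbling. This is a helper of my own, not part of the SparkFun or TensorFlow tooling; it assumes pyserial is installed and a CH340-based adapter, whose usual vendor ID I hard-code below.

```python
# Minimal sketch: locate the USB-serial adapter before flashing.
# Assumes pyserial (pip install pyserial) and a CH340-based adapter.
import serial.tools.list_ports

CH340_VID = 0x1A86  # QinHeng Electronics, the usual CH340 vendor ID

def find_adapter_port():
    """Return the device path (e.g. /dev/ttyUSB0) of the first CH340 adapter, or None."""
    for port in serial.tools.list_ports.comports():
        if port.vid == CH340_VID:
            return port.device
    return None

if __name__ == "__main__":
    port = find_adapter_port()
    if port:
        print(f"Adapter found on {port}")
    else:
        print("No adapter found - check the cable and the VM's USB passthrough.")
```

In a VM setup, the same check doubles as a quick test of whether USB passthrough is working at all.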

Most importantly, people are genuinely curious about it.

I must admit, I did not have much of an idea of how appealing this would be. Naturally, I am interested in embedded systems running machine learning, but I really didn't know what people in many other domains think about it. The attendance showed how easily people from so many backgrounds relate to machine learning hardware.

The feedback we got reminded us that there is a group of people out there who feel sheer joy working on a physical piece of hardware. It was enjoyable for many to go beyond what Arduino boards can accomplish and add some machine learning inference to the game. It is liberating.

We will definitely act on our takeaways and come back with a greater sequel; it might even be a hybrid challenge then, with one data-science-flavored and one embedded-flavored session.

The content

For the record, this is the mixtape in question.

Notes 15/09/2019

This post has received a lot of attention, and a lot of insight has come with it. I will put it together here for those who are searching.

So, evidently, the chip is built around an ARM Cortex-M core, and most of the content here is also relevant to any chip that has one.

But ARM has also created a really nice step-by-step guide for STM32F7 boards, also in the form of Google Colab notebooks. If you have an STM32F7 board (or, I guess, anything with the necessary memory), check it out. I don't own an F7, but I will give it a go.

https://developer.arm.com/solutions/machine-learning-on-arm/developer-material/how-to-guides/build-arm-cortex-m-voice-assistant-with-google-tensorflow-lite