We have already made some comments on Google Coral, and now it's time to introduce its rival in the super-lightweight class: Nvidia Jetson Nano.

Nvidia has its own philosophy: it was founded as a hardware company. It also has a special place in the development community, since it was already in a favorable spot when machine learning started to make use of application-specific hardware. Nvidia was within arm's reach with its GPUs when cryptoheads and machine learners needed something that could multiply and add, fast.

Time went by, and the application domains of Nvidia GPUs flourished with ever-increasing demand for machine learning computation. Nvidia furnished the cloud, data scientists' workstations, and lately, embedded developers' workbenches.

The device gives a decent impression in the hand. It is not built for an end application, of course, but it is sturdy enough to be handled directly on the bench (still, better not to). The box it ships in folds into a nice dev-kit stand. I appreciate this a lot, since the cables that go into it turn out to be pretty hefty.

And here we have the device: we flashed the image supplied by the vendor, connected our dedicated screen to the Jetson Nano, and booted it. We were greeted by the signature electrifying, gaming-level-performance-seeking uranium-green aesthetics. Setup was easier than with Coral.

Our testbench. We have Intel Movidius, Google Coral and Nvidia Jetson Nano side by side here.

Unlike Coral, the board boots into a functional graphical environment, which is good for onboarding but won't matter much after a while. For those who work in Jupyter notebooks, Jetson Nano runs the server without a fuss.

I used a cell phone charger as the USB power source, and when I ran the device under load with DisplayPort connected, I lost power several times. The guide specifically asks for a high-quality power source, so I will use my bench power supply from now on. Just a heads up: expect to be bugged by power drops if you stick with a standard phone charger.

Given their current presence in the domain, educational materials and the community are where Nvidia outshines Google. The introductory walkthrough uses a pretty neat trick that I noticed only recently: it is a walk through .md Markdown files within the code repository itself on GitHub.

Notes from developers' webinar

In order to gauge interest and gather feedback en masse, Nvidia set up a webinar for the 2nd of May. Developer support is their path toward market adoption, so they took good care of questions, answered a lot of them, and scheduled a follow-up meeting specifically on robotics applications. After some time, and after going through Nvidia's developer webinar, some things are definite:

1) Video processing is their bread and butter

Nvidia wants to focus on its video processing capabilities. They showed an application in which data from eight 1080p30 cameras is processed for object detection, and it seems they demonstrated it as a network camera storage node. PoE is emphasized repeatedly, so the device is apparently meant to be powered over the network in surveillance setups.
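For scale, that claim of eight simultaneous 1080p30 detection streams works out to roughly half a gigapixel per second of input. A quick back-of-the-envelope sketch (figures taken from the demo claim, not measured by me):

```python
# Back-of-the-envelope input throughput for the claimed demo:
# eight 1080p cameras at 30 frames per second.
streams, width, height, fps = 8, 1920, 1080, 30

pixels_per_second = streams * width * height * fps
print(f"{pixels_per_second / 1e6:.1f} Mpixels/s")  # ~497.7 Mpixels/s
```

That is the raw pixel rate the detector has to keep up with, before any decoding or network overhead is counted.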

2) There is a serious investment in robotics

Robotics is another area they clearly intend to keep. They have development kits for robotics applications and support for ROS, the Robot Operating System. Nvidia also has its own robotics simulation suite, ISAAC.

3) Development environment is their unfair advantage

The current development environment is their advantage, and they are emphasizing the comfort associated with it. Most things can be executed directly on the board, which is new for this class of device, and it simplifies many things.

4) Parallelism is emphasized for their already parallel chips

They exhibited scalability with OpenMPI support, which means that clusters of Jetson Nanos can exchange messages to divide up parallelizable tasks. It is also interesting to note that the module plugs into a SODIMM connector, which might make a case for building clusters of them.
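On a real cluster this messaging would go through OpenMPI (typically via mpi4py in Python). As a stdlib-only stand-in that runs on a single machine, the same scatter/gather pattern can be sketched with Python's multiprocessing, with each process playing the role of an MPI rank:

```python
# Stdlib-only sketch of the scatter/gather message-passing pattern that
# OpenMPI provides across a cluster. multiprocessing processes stand in
# for MPI ranks here, so the example runs on one machine with no MPI.
from multiprocessing import Pipe, Process

def worker(conn):
    # Each "rank" receives its chunk, computes on it, and sends back a result.
    chunk = conn.recv()
    conn.send(sum(x * x for x in chunk))
    conn.close()

def scatter_gather(data, n_workers=4):
    size = (len(data) + n_workers - 1) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
    conns, procs = [], []
    for chunk in chunks:
        parent, child = Pipe()
        p = Process(target=worker, args=(child,))
        p.start()
        parent.send(chunk)          # scatter: hand each rank its slice
        conns.append(parent)
        procs.append(p)
    results = [c.recv() for c in conns]  # gather: collect partial results
    for p in procs:
        p.join()
    return sum(results)

if __name__ == "__main__":
    print(scatter_gather(list(range(100))))  # sum of squares of 0..99
```

The structure maps one-to-one onto MPI's scatter, compute, and gather phases; swapping the pipes for MPI calls is what turns this into a multi-board job.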

5) They lean on CUDA's memory-sharing schemes

They emphasized ZeroCopy, a shared-memory scheme that "enables GPU threads to directly access host memory". It seems to eliminate memory copies in one direction, from the network model (GPU thread) to the application (host). Whether this opens security issues, and whether it is implemented in a foolproof way that avoids loopholes, is a future task for me.
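The underlying CUDA mechanism is mapped pinned memory (cudaHostAlloc with the cudaHostAllocMapped flag), which needs a GPU to demonstrate. As a GPU-free analogy, Python's multiprocessing.shared_memory shows the same single-buffer, no-copy idea: two parties operate on one region of memory and see each other's writes directly.

```python
# GPU-free analogy for CUDA ZeroCopy: instead of a GPU thread and the host
# sharing one mapped buffer, two OS processes share one memory block and
# mutate it in place -- no data is ever copied between them.
from multiprocessing import Process, shared_memory

def consumer(name):
    # Attach to the existing block by name; this maps it, copying nothing.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[0] = shm.buf[0] * 2   # in-place write, visible to the producer
    shm.close()

def zero_copy_demo():
    shm = shared_memory.SharedMemory(create=True, size=16)
    shm.buf[0] = 21
    p = Process(target=consumer, args=(shm.name,))
    p.start()
    p.join()
    value = shm.buf[0]            # consumer's write is visible here: 42
    shm.close()
    shm.unlink()
    return value

if __name__ == "__main__":
    print(zero_copy_demo())  # → 42
```

The security question above applies to exactly this property: any party with a handle to the buffer can read and write it, so the mapping, not a copy, becomes the trust boundary.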

Promotional image of the Jetson Nano module in all its glory.


Nvidia is still the market leader in high-throughput machine learning applications, powering most autonomous vehicles and robotic systems. With Jetson Nano, it is evident that Nvidia intends to dominate low-cost, high-volume machine learning applications as well. The focus is on offering an extensive ecosystem at minimal cost. The toolchain and ecosystem of the bigger Jetsons give Jetson Nano a great advantage over its competition. There are already many robotics applications readily available to tinker with.