In this tutorial, you will learn how to configure your Google Coral TPU USB Accelerator on Raspberry Pi and Ubuntu. You’ll then learn how to perform classification and object detection using Google Coral’s USB Accelerator.
A few weeks ago, Google released “Coral”, a super fast, “no internet required” development board and USB accelerator that enables deep learning practitioners to deploy their models “on the edge” and “closer to the data”.
Using Coral, deep learning developers are no longer required to have an internet connection, meaning that the Coral TPU is fast enough to perform inference directly on the device rather than sending the image/frame to the cloud for inference and prediction.
The Google Coral comes in two flavors:
- A single-board computer with an onboard Edge TPU. The dev board could be thought of as an “advanced Raspberry Pi for AI” or a competitor to NVIDIA’s Jetson Nano.
- A USB accelerator that plugs into a device (such as a Raspberry Pi). The USB stick includes an Edge TPU built into it. Think of Google’s Coral USB Accelerator as a competitor to Intel’s Movidius NCS.
Today we’ll be focusing on the Coral USB Accelerator as it’s easier to get started with (and it fits nicely with our theme of Raspberry Pi-related posts the past few weeks).
To learn how to configure your Google Coral USB Accelerator (and perform classification + object detection), just keep reading!
Getting started with Google Coral’s TPU USB Accelerator
In this post I’ll be assuming that you have:
- Your Google Coral USB Accelerator stick
- A fresh install of a Debian-based Linux distribution (i.e., Raspbian, Ubuntu, etc.)
- An understanding of basic Linux commands and file paths
If you don’t already own a Google Coral Accelerator, you can purchase one via Google’s official website.
I’ll be configuring the Coral USB Accelerator on Raspbian, but again, provided that you have a Debian-based OS, these commands will still work.
Let’s get started!
Update 2019-12-30: Installation steps 1-6 have been completely refactored and updated to align with Google’s recommended instructions for installing Coral’s EdgeTPU runtime library. My main contribution is the addition of Python virtual environments. I’ve also updated the section on how to run the example scripts.
Step #1: Installing the Coral EdgeTPU Runtime and Python API
In this step, we will use the apt package manager to install Google Coral’s Debian/Raspbian-compatible packages.
First, let’s add the package repository:
$ echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list $ curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - $ sudo apt-get update
Note: Be careful with line-wrapping and ensure that you copy and run each full command in your terminal exactly as shown.
Now we’re ready to install the EdgeTPU runtime library:
$ sudo apt-get install libedgetpu1-std
Followed by installing the EdgeTPU Python API:
$ sudo apt-get install python3-edgetpu
Step #2: Reboot your device
Rebooting your Raspberry Pi or computer is critical for the installation to complete. You can use the following command:
$ sudo reboot now
Step #3: Setting up your Google Coral virtual environment
We’ll be using Python virtual environments, a best practice when working with Python.
A Python virtual environment is an isolated development/testing/production environment on your system — it is fully sequestered from other environments. Best of all, you can manage the Python packages inside your virtual environment with pip (Python’s package manager).
Of course, there are alternatives for managing virtual environments and packages (namely Anaconda/conda and venv). I’ve used/tried them all, but have settled on pip, virtualenv, and virtualenvwrapper as the preferred tools that I install on all of my systems. If you use the same tools as me, you’ll receive the best support from me.
You can install pip using the following commands:
$ wget https://bootstrap.pypa.io/get-pip.py
$ sudo python get-pip.py
$ sudo python3 get-pip.py
$ sudo rm -rf ~/.cache/pip
Let’s install virtualenv and virtualenvwrapper now:
$ sudo pip install virtualenv virtualenvwrapper
Once both virtualenv and virtualenvwrapper have been installed, open up your ~/.bashrc file:
$ nano ~/.bashrc
…and append the following lines to the bottom of the file:
# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh
Save and exit via ctrl + x, y, enter.
From there, reload your ~/.bashrc file to apply the changes to your current bash session:
$ source ~/.bashrc
Next, create your Python 3 virtual environment:
$ mkvirtualenv coral -p python3
Here we are creating a Python virtual environment named coral using Python 3. Going forward, I recommend Python 3.
Note: Python 2.7 will reach its end of life on January 1st, 2020, so I do not recommend using Python 2.7.
Step #4: Sym-link the EdgeTPU runtime into your coral virtual environment
A symbolic link is a special file that points to another file or folder on your system. You can learn more in Wikipedia’s article on symbolic links.
We will create a symbolic link from the system packages folder containing the EdgeTPU runtime library to our virtual environment.
First, let’s find the path where the Python EdgeTPU package is installed:
$ dpkg -L python3-edgetpu
/.
/usr
/usr/lib
/usr/lib/python3
/usr/lib/python3/dist-packages
/usr/lib/python3/dist-packages/edgetpu
/usr/lib/python3/dist-packages/edgetpu/__init__.py
/usr/lib/python3/dist-packages/edgetpu/basic
/usr/lib/python3/dist-packages/edgetpu/basic/__init__.py
/usr/lib/python3/dist-packages/edgetpu/basic/basic_engine.py
/usr/lib/python3/dist-packages/edgetpu/basic/edgetpu_utils.py
/usr/lib/python3/dist-packages/edgetpu/classification
/usr/lib/python3/dist-packages/edgetpu/classification/__init__.py
/usr/lib/python3/dist-packages/edgetpu/classification/engine.py
/usr/lib/python3/dist-packages/edgetpu/detection
/usr/lib/python3/dist-packages/edgetpu/detection/__init__.py
/usr/lib/python3/dist-packages/edgetpu/detection/engine.py
/usr/lib/python3/dist-packages/edgetpu/learn
/usr/lib/python3/dist-packages/edgetpu/learn/__init__.py
/usr/lib/python3/dist-packages/edgetpu/learn/backprop
/usr/lib/python3/dist-packages/edgetpu/learn/backprop/__init__.py
/usr/lib/python3/dist-packages/edgetpu/learn/backprop/ops.py
/usr/lib/python3/dist-packages/edgetpu/learn/backprop/softmax_regression.py
/usr/lib/python3/dist-packages/edgetpu/learn/imprinting
/usr/lib/python3/dist-packages/edgetpu/learn/imprinting/__init__.py
/usr/lib/python3/dist-packages/edgetpu/learn/imprinting/engine.py
/usr/lib/python3/dist-packages/edgetpu/learn/utils.py
/usr/lib/python3/dist-packages/edgetpu/swig
/usr/lib/python3/dist-packages/edgetpu/swig/__init__.py
/usr/lib/python3/dist-packages/edgetpu/swig/_edgetpu_cpp_wrapper.cpython-35m-arm-linux-gnueabihf.so
/usr/lib/python3/dist-packages/edgetpu/swig/_edgetpu_cpp_wrapper.cpython-36m-arm-linux-gnueabihf.so
/usr/lib/python3/dist-packages/edgetpu/swig/_edgetpu_cpp_wrapper.cpython-37m-arm-linux-gnueabihf.so
/usr/lib/python3/dist-packages/edgetpu/swig/edgetpu_cpp_wrapper.py
/usr/lib/python3/dist-packages/edgetpu/utils
/usr/lib/python3/dist-packages/edgetpu/utils/__init__.py
/usr/lib/python3/dist-packages/edgetpu/utils/dataset_utils.py
/usr/lib/python3/dist-packages/edgetpu/utils/image_processing.py
/usr/lib/python3/dist-packages/edgetpu/utils/warning.py
/usr/lib/python3/dist-packages/edgetpu-2.12.2.egg-info
/usr/lib/python3/dist-packages/edgetpu-2.12.2.egg-info/PKG-INFO
/usr/lib/python3/dist-packages/edgetpu-2.12.2.egg-info/dependency_links.txt
/usr/lib/python3/dist-packages/edgetpu-2.12.2.egg-info/requires.txt
/usr/lib/python3/dist-packages/edgetpu-2.12.2.egg-info/top_level.txt
/usr/share
/usr/share/doc
/usr/share/doc/python3-edgetpu
/usr/share/doc/python3-edgetpu/changelog.Debian.gz
/usr/share/doc/python3-edgetpu/copyright
Notice in the command’s output on Line 7 that we have found the root directory of the edgetpu library to be: /usr/lib/python3/dist-packages/edgetpu. We will create a sym-link to that path from our virtual environment site-packages.
Let’s create our sym-link now:
$ cd ~/.virtualenvs/coral/lib/python3.7/site-packages
$ ln -s /usr/lib/python3/dist-packages/edgetpu/ edgetpu
$ cd ~
Step #5: Test your Coral EdgeTPU installation
Let’s fire up a Python shell to test our Google Coral installation:
$ workon coral
$ python
>>> import edgetpu
>>> edgetpu.__version__
'2.12.2'
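If you also want to confirm that the accelerator hardware itself is visible (not just that the library imports), the edgetpu.basic.edgetpu_utils module can enumerate Edge TPU devices. The following is a minimal sketch that assumes the ListEdgeTpuPaths helper and EDGE_TPU_STATE_* constants from the 2.x API:

# detect_tpu.py -- quick sanity check that the Coral USB Accelerator is visible
# (a minimal sketch; assumes the edgetpu 2.x API installed above)
from edgetpu.basic import edgetpu_utils

# enumerate Edge TPU devices that are plugged in but not yet claimed by an engine
paths = edgetpu_utils.ListEdgeTpuPaths(edgetpu_utils.EDGE_TPU_STATE_UNASSIGNED)

if len(paths) == 0:
    print("No Edge TPU detected -- check the USB connection and try again")
else:
    for p in paths:
        print("Found Edge TPU at: {}".format(p))

If that helper isn’t present in your version of the library, checking the output of lsusb for a Google device is a quick alternative.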
Step #5b: Optional Python packages you may wish to install for the Google Coral
As you go down the path of working with your Google Coral, you’ll find that you need a handful of other packages installed in your virtual environment.
Let’s install packages for working with the PiCamera (Raspberry Pi only) and image processing:
$ workon coral
$ pip install "picamera[array]" # Raspberry Pi only
$ pip install numpy
$ pip install opencv-contrib-python==4.1.0.25
$ pip install imutils
$ pip install scikit-image
$ pip install pillow
Step #6: Install EdgeTPU examples
Now that we’ve installed the TPU runtime library, let’s put the Coral USB Accelerator to the test!
First let’s install the EdgeTPU Examples package:
$ sudo apt-get install edgetpu-examples
From there, we’ll need to add write permissions to the examples directory:
$ sudo chmod a+w /usr/share/edgetpu/examples
Project Structure
The examples for today’s tutorial are self-contained and do not require an additional download.
Go ahead and activate your environment and change into the examples directory:
$ workon coral $ cd /usr/share/edgetpu/examples
The examples directory contains directories for images and models along with a selection of Python scripts. Let’s inspect our project structure with the tree command:
$ tree --dirsfirst
.
├── images
│   ├── bird.bmp
│   ├── cat.bmp
│   ├── COPYRIGHT
│   ├── grace_hopper.bmp
│   ├── parrot.jpg
│   └── sunflower.bmp
├── models
│   ├── coco_labels.txt
│   ├── deeplabv3_mnv2_pascal_quant_edgetpu.tflite
│   ├── inat_bird_labels.txt
│   ├── mobilenet_ssd_v1_coco_quant_postprocess_edgetpu.tflite
│   ├── mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite
│   ├── mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite
│   └── mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite
├── backprop_last_layer.py
├── classify_capture.py
├── classify_image.py
├── imprinting_learning.py
├── object_detection.py
├── semantic_segmetation.py
└── two_models_inference.py

2 directories, 20 files
We will be using the following MobileNet-based TensorFlow Lite models in the next section:
- mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite: Classification model trained on the iNaturalist (iNat) Birds dataset.
- mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite: Face detection model.
- mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite: Object detection model trained on the COCO dataset.
The first model will be used with the classify_image.py classification Python script.
Models 2 and 3 will both be used with the object_detection.py Python script. Keep in mind that face detection is a form of object detection.
Classification, object detection, and face detection using the Google Coral USB Accelerator
At this point we are ready to put our Google Coral coprocessor to the test!
Let’s start by performing a simple image classification example:
$ python classify_image.py \
    --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
    --label models/inat_bird_labels.txt \
    --image images/parrot.jpg
---------------------------
Ara macao (Scarlet Macaw)
Score : 0.61328125
---------------------------
Platycercus elegans (Crimson Rosella)
Score : 0.15234375
As you can see, MobileNet (trained on iNat Birds) has correctly labeled the image as “Macaw”, a type of parrot.
Let’s try a second classification example:
$ python classify_image.py \
    --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
    --label models/inat_bird_labels.txt \
    --image images/bird.bmp
---------------------------
Poecile carolinensis (Carolina Chickadee)
Score : 0.37109375
---------------------------
Poecile atricapillus (Black-capped Chickadee)
Score : 0.29296875
Notice that the image of the Chickadee has been correctly classified. In fact, the top two results are both forms of Chickadees: (1) Carolina, and (2) Black-capped.
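For context, here is roughly what a classify_image.py-style script does under the hood. This is a minimal sketch written against the edgetpu 2.x ClassificationEngine API; it assumes the bundled label files use the usual "id label" one-per-line format, and Google’s actual demo script differs in its flags and output formatting.

# classify_sketch.py -- a minimal sketch of Edge TPU image classification
# (mirrors the idea behind the bundled classify_image.py demo; not the demo itself)
from edgetpu.classification.engine import ClassificationEngine
from PIL import Image

MODEL = "models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite"
LABELS = "models/inat_bird_labels.txt"
IMAGE = "images/parrot.jpg"

# load the label map -- each line is assumed to be "<id> <label>"
labels = {}
with open(LABELS, "r") as f:
    for line in f:
        (label_id, label) = line.strip().split(maxsplit=1)
        labels[int(label_id)] = label

# the engine loads the compiled .tflite model onto the Edge TPU
engine = ClassificationEngine(MODEL)

# classify a PIL image, keeping the top-2 predictions above a 10% score
image = Image.open(IMAGE)
for (label_id, score) in engine.classify_with_image(image, threshold=0.1, top_k=2):
    print("{}: {:.4f}".format(labels[label_id], score))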
Now let’s try performing face detection using the Google Coral USB Accelerator:
$ python object_detection.py \
    --model models/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite \
    --input images/grace_hopper.bmp
-----------------------------------------
score = 0.99609375
box = [143.88912090659142, 40.834905445575714, 381.8060402870178, 365.49142384529114]
Please check object_detection_result.jpg
Here the MobileNet + SSD face detector was able to detect Grace Hopper’s face in the image. There is a very faint red box around Grace’s face (I recommend clicking the image to enlarge it so that you can see the face detection box). In a future tutorial on custom object detection we’ll draw a thicker, more visible detection box.
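If you’d like a thicker box right now, you can re-draw the detection yourself with Pillow. This is a hypothetical snippet using the box coordinates from the output above; note that Pillow’s width parameter for rectangles requires Pillow 5.3 or newer.

# draw_box.py -- re-draw a detection box with a thicker outline (illustrative only)
from PIL import Image, ImageDraw

# coordinates copied from the face detection output above (x1, y1, x2, y2)
box = (143.89, 40.83, 381.81, 365.49)

image = Image.open("images/grace_hopper.bmp").convert("RGB")
draw = ImageDraw.Draw(image)

# width=5 gives a much more visible outline than a 1px box
draw.rectangle(box, outline=(255, 0, 0), width=5)
image.save("grace_hopper_thick_box.jpg")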
The next example shows how to perform object detection using a MobileNet + SSD trained on the COCO dataset:
$ python object_detection.py \
    --model models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite \
    --input images/cat.bmp
-----------------------------------------
score = 0.96484375
box = [52.70467400550842, 37.87856101989746, 448.4963893890381, 391.3172245025635]
-----------------------------------------
score = 0.2109375
box = [0.0, 0.5118846893310547, 191.08786582946777, 194.69362497329712]
-----------------------------------------
score = 0.2109375
box = [300.4741072654724, 38.08128833770752, 382.5985550880432, 169.52738761901855]
-----------------------------------------
score = 0.16015625
box = [359.85671281814575, 46.61980867385864, 588.858425617218, 357.5845241546631]
-----------------------------------------
score = 0.16015625
box = [0.0, 10.966479778289795, 191.53071641921997, 378.33733558654785]
-----------------------------------------
score = 0.12109375
box = [126.62454843521118, 4.192984104156494, 591.4307713508606, 262.3262882232666]
-----------------------------------------
score = 0.12109375
box = [427.05928087234497, 84.77717638015747, 600.0, 332.24596977233887]
-----------------------------------------
score = 0.08984375
box = [258.74093770980835, 3.4015893936157227, 600.0, 215.32137393951416]
-----------------------------------------
score = 0.08984375
box = [234.9416971206665, 33.762264251708984, 594.8572397232056, 383.5402488708496]
-----------------------------------------
score = 0.08984375
box = [236.90505623817444, 51.90783739089966, 407.265830039978, 130.80371618270874]
Please check object_detection_result.jpg
Notice there are ten detections in Figure 5 (faint red boxes; click to enlarge), but only one cat in the image — why is that?
The reason is that the object_detection.py script is not filtering on a minimum probability. You could easily modify the script to ignore detections with < 50% probability (we’ll work on custom object detection with the Google Coral next month).
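To make that filtering idea concrete, here is a minimal sketch using the edgetpu 2.x DetectionEngine, which accepts a score threshold directly. The model and image paths follow the examples directory above; treat this as a sketch of the approach rather than a drop-in replacement for object_detection.py.

# filter_detections.py -- only keep detections above a minimum probability
# (a sketch of the idea; not Google's object_detection.py demo)
from edgetpu.detection.engine import DetectionEngine
from PIL import Image

MODEL = "models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite"
MIN_SCORE = 0.5  # ignore anything below 50% probability

engine = DetectionEngine(MODEL)
image = Image.open("images/cat.bmp").convert("RGB")

# the threshold is applied inside the engine, so weak detections never come back
results = engine.detect_with_image(image, threshold=MIN_SCORE,
    keep_aspect_ratio=True, relative_coord=False, top_k=10)

for r in results:
    print("score = {:.4f}, box = {}".format(r.score, r.bounding_box.flatten().tolist()))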
For fun, I decided to try an image that was not included in the example TPU runtime library demos.
Here’s an example of applying the face detector to a custom image:
$ python object_detection.py \
    --model models/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite \
    --input ~/IMG_7687.jpg
-----------------------------------------
score = 0.98046875
box = [190.66683948040009, 0.0, 307.4474334716797, 125.00646710395813]
Sure enough, my face is detected!
Finally, here’s an example of running the MobileNet + SSD on the same image:
$ python object_detection.py \
    --model models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite \
    --label models/coco_labels.txt \
    --input ~/IMG_7687.jpg
-----------------------------------------
person
score = 0.87890625
box = [58.70787799358368, 10.639026761054993, 371.2196350097656, 494.61638927459717]
-----------------------------------------
dog
score = 0.58203125
box = [50.500258803367615, 358.102411031723, 162.57299482822418, 500.0]
-----------------------------------------
dog
score = 0.33984375
box = [13.502731919288635, 287.04309463500977, 152.83603966236115, 497.8201985359192]
-----------------------------------------
couch
score = 0.26953125
box = [0.0, 88.88640999794006, 375.0, 423.55993390083313]
-----------------------------------------
couch
score = 0.16015625
box = [3.753773868083954, 64.79595601558685, 201.68977975845337, 490.678071975708]
-----------------------------------------
dog
score = 0.12109375
box = [65.94736874103546, 335.2701663970947, 155.95845878124237, 462.4992609024048]
-----------------------------------------
dog
score = 0.12109375
box = [3.5936199128627777, 335.3758156299591, 118.05401742458344, 497.33099341392517]
-----------------------------------------
couch
score = 0.12109375
box = [49.873560667037964, 97.65596687793732, 375.0, 247.15487658977509]
-----------------------------------------
dog
score = 0.12109375
box = [92.47469902038574, 338.89272809028625, 350.16247630119324, 497.23270535469055]
-----------------------------------------
couch
score = 0.12109375
box = [20.54794132709503, 99.93192553520203, 375.0, 369.604617357254]
Again, we can improve results by filtering on a minimum probability to remove the extraneous detections. Doing so would leave only two detections: person (87.89%) and dog (58.20%).
What about training custom models for the Google Coral?
You’ll notice that I’m only using pre-trained deep learning models on the Google Coral in this post — what about custom models that you train yourself?
Google does provide some documentation on that topic, but it’s far too advanced to include in this blog post.
If you’re interested in learning how to train your own custom models for Google’s Coral I would recommend you take a look at my upcoming book, Raspberry Pi for Computer Vision (Complete Bundle) where I’ll be covering the Google Coral in detail.
How do I use Google Coral’s Python runtime library in my own custom scripts?
Using the edgetpu library in conjunction with OpenCV and your own custom Python scripts is outside the scope of this post.
I’ll cover custom Python scripts for Google Coral classification and object detection next month, as well as in my Raspberry Pi for Computer Vision book.
Thoughts, tips, and suggestions when using Google’s TPU USB Accelerator
Overall, I really liked the Coral USB Accelerator. I thought it was super easy to configure and install, and while not all the demos ran out of the box, with some basic knowledge of file paths, I was able to get them running in a few minutes.
In the future, I would like to see the Google TPU runtime library more compatible with Python virtual environments. Requiring the sym-link isn’t ideal.
I’ll also add that inference on the Raspberry Pi is a bit slower than what’s advertised by the Google Coral TPU Accelerator — that’s actually not a problem with the TPU Accelerator, but rather the Raspberry Pi.
What do I mean by that?
Keep in mind that the Raspberry Pi 3B+ uses USB 2.0, but for optimal inference speeds Google recommends connecting the Coral USB Accelerator to a USB 3.0 port.
Since the RPi 3B+ doesn’t have USB 3.0, there’s not much we can do about that until the RPi 4 comes out — once it does, we’ll have even faster inference on the Pi using the Coral USB Accelerator.
Update 2019-12-30: The Raspberry Pi 4B includes USB 3.0 capability. The total time it takes to transfer an image, perform inference, and obtain results is much faster. Be sure to refer to Chapter 23.2 “Benchmarking and Profiling your Scripts” inside Raspberry Pi for Computer Vision to learn how to benchmark your deep learning scripts on the Raspberry Pi.
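If you’d like a rough sense of the USB 2.0 vs. USB 3.0 difference on your own hardware, you can time the per-image call yourself. The sketch below reuses the classification model from the examples directory; the timing loop is my own addition, not the book’s benchmarking methodology.

# benchmark_sketch.py -- rough per-image latency for Edge TPU classification
# (an informal timing sketch, not a rigorous benchmark)
import time
from edgetpu.classification.engine import ClassificationEngine
from PIL import Image

engine = ClassificationEngine(
    "models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite")
image = Image.open("images/parrot.jpg")

# warm up once so model loading/USB setup doesn't skew the numbers
engine.classify_with_image(image, top_k=1)

timings = []
for _ in range(50):
    start = time.perf_counter()
    engine.classify_with_image(image, top_k=1)
    timings.append(time.perf_counter() - start)

print("avg transfer + inference time: {:.2f} ms".format(
    1000.0 * sum(timings) / len(timings)))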
Finally, I’ll note that once or twice during the object detection examples it appeared that the Coral USB Accelerator “locked up” and wouldn’t perform inference (I think it got “stuck” trying to load the model), forcing me to ctrl + c out of the script.
Killing the script must have prevented a critical “shut down” routine from running on the Coral — any subsequent executions of the demo Python scripts would result in an error.
To fix the problem I had to unplug the Coral USB accelerator and then plug it back in. Again, I’m not sure why that happened and I couldn’t find any documentation on the Google Coral site that referenced the issue.
What's next? I recommend PyImageSearch University.
I strongly believe that if you had the right teacher you could master computer vision and deep learning.
Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science?
That’s not the case.
All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.
If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery.
Inside PyImageSearch University you'll find:
- ✓ 30+ courses on essential computer vision, deep learning, and OpenCV topics
- ✓ 30+ Certificates of Completion
- ✓ 39h 44m on-demand video
- ✓ Brand new courses released every month, ensuring you can keep up with state-of-the-art techniques
- ✓ Pre-configured Jupyter Notebooks in Google Colab
- ✓ Run all code examples in your web browser (works on Windows, macOS, and Linux; no dev environment configuration required!)
- ✓ Access to centralized code repos for all 500+ tutorials on PyImageSearch
- ✓ Easy one-click downloads for code, datasets, pre-trained models, etc.
- ✓ Access on mobile, laptop, desktop, etc.
Summary
In this tutorial, you learned how to get started with the Google Coral USB Accelerator.
We started by installing the Edge TPU runtime library on your Debian-based operating system (we specifically used Raspbian for the Raspberry Pi).
After that, we learned how to run the example demo scripts included with the edgetpu-examples package.
We also learned how to install the edgetpu
library into a Python virtual environment (that way we can keep our packages/projects nice and tidy).
We wrapped up the tutorial by discussing some of my thoughts, feedback, and suggestions when using the Coral USB Accelerator (be sure to refer to them first if you have any questions).
I hope you enjoyed this tutorial!
To be notified when future tutorials are published here on PyImageSearch, just enter your email address in the form below!
Thank you for the great tutorial. Can you share how much time it took for the edgetpu to perform inference for mobilenet and SSD models? How does that compare with NCS1 and NCS2?
I’ll be doing more tutorials on the Google Coral, including comparisons with the NCS1 and NCS2, in future tutorials. Stay tuned!
Very timely post! I just got mine installed following Google’s instructions, yours are clearer!
If you hack on the install.sh script to make 32-bit Ubuntu-Mate on the Odroid XU-4 think it’s a Raspberry Pi 3B+, it works. Although for Mate18 (the current default if you buy it pre-loaded) you’ll need to “sideload” Python 3.5.6, since it defaults to Python 3.6 which I couldn’t get to work, and install to a virtual environment.
If one wants a headstart using Coral USB TPU with OpenCV take a look at this:
https://github.com/PINTO0309/TPU-MobilenetSSD
Awesome, thanks for sharing Wally!
How does it compare to the Intel Movidius?
Thanks.
I’ll be covering a comparison of the Google Coral to Intel Movidius NCS in a future post, stay tuned!
I’m curious if you have also taken a look at Gyrfalcon’s PLAI plug or the Orange Pi AI stick both of which are iterations of the Lightspeeur 2801s.
I have not but thanks for mentioning it.
See https://qengineering.eu/deep-learning-with-raspberry-pi-and-alternatives.html
At the end you find the comparison.
Tinus
Adrian,
Thanks for another great post!
I agree that edge computing is going to be the Next Big Thing for deep learning applications. Maybe we should call it AIoT?
I was surprised at how inexpensive the USB Coral module is. Even at that low price I suspect that for large-scale deployment at low cost you’ll be designing and building a custom single-board computer. At some point I expect that there will be a system-on-a-chip that will include TPUs (or their NVIDIA equivalent).
When you talk about having to unplug things from the Pi to get them working again — that seems to be a recurring theme with any kind of fairly exotic hardware and the Pi. I suspect a lot of the problem is that the drivers (or whatever you’d call them) on the Pi are rather primitive and don’t handle all error cases very well.
Hi Adrian,
What’s the inference time per frame when using SSD Mobilenet COCO?
Thanks,
Tom
I’ll be sharing a full benchmark in a separate post. I’m still gathering my results. This is just a getting started guide.
I just followed these instructions and installed the Coral edgetpu support onto a Pi3 system after I’d made a test install of OpenVINO 2019R1
The coral test code runs successfully while the openvino real-time tutorial is also running. Nice that they don’t interfere!
A typo crept in on the parrot example command:
--model ~/edgetpu_models/ mobilenet_v2_1.0_224_quant_edgetpu.tflite \
Had to remove the space to make it:
--model ~/edgetpu_models/mobilenet_v2_1.0_224_quant_edgetpu.tflite \
Awesome, thanks for sharing Wally!
Adrian,
Thank you very much for all your hard works and awesome posts.
I just have a question regarding the processing speed. You mentioned in your post that it is not as fast as what Google claims on its website because of USB 2.0 speed limitations. Have you done any testing of your own to get an approximation of how much slower it is?
I have done my own testing. I’ll be sharing my results in a separate tutorial.
Any comment on how well we can follow your tutorials, here and in the forthcoming book, if we use the Dev board (SOM) instead of the USB device? e.g. to overcome the USB 2.0 performance hit.
Once I get my hands on the Dev Board I’ll be doing tutorials on those as well.
Is there a significant advantage to using a virtual environment on a RPi dedicated to these exercises?
(I usually just swap out SD cards to change config/etc)
The advantage is that you don’t have to swap SD cards. Manually swapping is tedious and unnecessary. Virtual environments help you overcome that limitation.
Can the USB Coral run on an x86 host instead of a Raspberry Pi?
As long as it’s Debian-based, yes.
Hi Adrian,
Thanks a lot for the awesome post again,
Could you explain how to convert the models to tflite?
I’ll be covering that exact topic in my Raspberry Pi for Computer Vision book.
I’m very interested in this stick, especially compared to the NCS2. Thanks for your work!
Thanks Victor, I’m glad you enjoyed it!
What was the average frame rate using the coral usb accelerator on a usb2.0 port of the pi?
Thanks for the great post! I’m fairly new to deep learning on the pi and your content has been extremely valuable.
This is just a getting started guide. I’ll be sharing benchmarks in a future tutorial.
What is the average time it takes to run inference on a single frame for the various models that you have evaluated?
I’ll be providing a more thorough evaluation in a separate tutorial (this is just a getting started guide).
Any idea of where this error comes from, and how to fix?
7044 package_registry.cc:65] Minimum runtime version required by package (5) is lower than expected (10)
Everything seems to work OK.
I saw that as well. I’m not sure what caused it but as you noted, everything seems to work fine.
I am running the classify_image.py example on my RPi 3B board with Python 3.5 and I continue to get an error stating “ImportError: No module named ‘edgetpu.swig.edgetpu_cpp_wrapper'”
I have a feeling this is related to the _edgetpu_cpp_wrapper.cpython-35m-arm-linux-gnueabihf.so file not being renamed to something like _edgetpu_cpp_wrapper.so but I have also tried this.
Any ideas on what the issue could be?
That sounds like it could be the issue. Did you try using virtual environments? Or using the standard install instructions without virtual environments?
Hi Adrian, thanks for the response. So it turns out that after following the Google instructions I had to run the demos as root (feedback provided by Google support).
However, I have since then setup a symlink into my VirtualEnvironment for the library and this has solved the problem.
As an aside, do you know (post install) how I would go about enabling maximum frequency on the unit?
During the install of the “edgetpu” library it will ask you if you want to enable maximum frequency. The easiest way would be to re-install it. I personally haven’t tried to re-enable it post-install.
Is there something negative? I think it’s almost perfect
I am more interested in handheld/portable applications and you can’t get more portable than a smartphone.
I just want to put it out there that I tested this edge TPU with a rooted samsung s7, with Linux installed via the ‘Linux Deploy’ app and it works.
It could probably also work with a non-rooted Android phone, with Linux installed via UserLand. However to get it to work it would require writing a libusb wrapper library that forwards the few libusb function calls the api makes to libusb on Android.
Just putting that out there in case other people are interested. I was curious and I tried it.
Hi Matt,
Thank you so much for sharing this idea. Would it be possible for you to write a brief tutorial on how you were able to use Coral USB computation power from the Android phone through Linux or potentially through Android Apps? Thank you.
Thank you for the great tutorial; it made me think to try a spare FireFly RK3328-CC running Ubuntu 19.04 on an aarch64 architecture. The nice thing is that the board has a USB 3.0 port. After installing the newest Edge TPU API version 1.9.2 and patching it, everything runs fine on Python 3.6.
$ lsusb
Bus 005 Device 002: ID 18d1:9302 Google Inc.
Bus 005 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
$ python3 classify_image.py \
--model ~/edgetpu_models/mobilenet_v2_1.0_224_quant_edgetpu.tflite \
--label ~/edgetpu_models/imagenet_labels.txt \
--image parrot.jpg
—————————
macaw
Score : 0.99609375
Putting the above in case anyone is trying to do the same.
Adrian, something must have changed recently, because the first demo program no longer works due to a typo.
There is a space in this address:
--model ~/edgetpu_models/ mobilenet_v2_1.0_224_quant_edgetpu.tflite \
Can you please fix this?
Tks,
Paul
Thanks Paul, I’ve updated the post to remove the space. Thanks for bringing it to my attention!
Hi Adrian, thank you very much for the tutorial and your content, which is great!
We just got the new Raspberry Pi 4 and the installation of the edge TPU runtime library doesn’t work for us as you described. Running the install.sh script leads to the error: “Platform not supported”. Do you know a workaround or what causes the issue? We already updated the file to python3.7, which is the standard python version on the newest raspbian OS version.
Thank you in advance!
Paul
I was able to get the install.sh script working with the help of Alasdair’s blog post ( https://blog.hackster.io/benchmarking-machine-learning-on-the-new-raspberry-pi-4-model-b-88db9304ce4 ). However, running the classify_image.py demo leads to the error ModuleNotFoundError: No module named ‘_edgetpu_cpp_wrapper’
Sorry, I don’t have a RPi 4 yet. I’ve ordered mine but it hasn’t arrived so I can’t really provide any suggestion there.
Hi Paul,
Were you able to fix the ‘_edgetpu_cpp_wrapper’ error? We are facing the same issue.
Best Regards!!
“…..Since the RPi 3B+ doesn’t have USB 3.0, there’s not much we can do about that until the RPi 4 comes out — once it does, we’ll have even faster inference on the Pi using the Coral USB Accelerator….”
I guess that you will have to revisit some of these blogs now….. or should I say the new book as well ?
I’ll be doing an updated blog post with the RPi v4 (now that it has USB 3). Results reported in the new Raspberry Pi for Computer Vision book will also use the RPi 4.
Hi, quick question – now that we have the RPi 4, do you know how to get the Edge TPU software to install on the board?
Currently it’s complaining (./install.sh) “Your Platform Is Not Supported”
Thanks
Mark
I don’t have a RPi 4 yet, but once I do I’ll be doing an updated tutorial for the Google Coral USB Accelerator (and sharing some benchmark information).
Hi Mark,
You just have to add the following lines in the install.sh:
elif [[ "${MODEL}" == "Raspberry Pi 4 Model B Rev"* ]]; then
  info "Recognized as Raspberry Pi 4 B."
  LIBEDGETPU_SUFFIX=arm32
  HOST_GNU_TYPE=arm-linux-gnueabihf
Hi Adrian
Any chance you faced this issue?
ImportError: No module named ‘edgetpu’
https://github.com/f0cal/google-coral/issues/42
I followed every step of this tutorial but I can’t run the classification sample due to that error
What version of Python are you trying to install ‘edgetpu’ for? Additionally, are you using a Python virtual environment?
To debug I would start by checking the output of ‘bash ./install.sh’. You should see output similar to the following:
/home/pi/.local/lib/python3.5/site-packages/edgetpu.egg-link
Find that line and it will show you where the ‘edgetpu’ library has been installed to.
Oh didn’t see you already replied to my comment (not refreshed this page) Thanks for the quick reply
It works now after doing the changes in my previous comment and I see the desired output
I’ll try rest of the post now that I have tested this accelerator
Congrats on resolving the error, Zubair!
Alright, this is the classic case of __init__.py not doing its job. I'm not sure how these get fixed, but as a workaround I removed ‘edgetpu’ from all the required import statements in all the scripts.
Does anyone know if it’s possible to attach two or more google coral USBs to a Pi to boost the performance even more?
good evening Dr Adrian
I am a beginner and I am currently writing my thesis on face recognition and object classification with the Coral USB and Raspberry Pi. Which of your books would you recommend to help me reach my goal?
I would recommend reading Raspberry Pi for Computer Vision. That book covers face recognition on the RPi.
Any way to use this process to retrain a model using the TPU without using Docker?
Hi, my project is based on ROS 1.0 and Python 2.7 on a Raspberry Pi 4. How can I use the EdgeTPU for my project? The issue is that ROS 1.0 depends on Python 2.7 while the EdgeTPU library is for Python 3. Could you give me some suggestions?