Today marks the 100th blog post on PyImageSearch.
100 posts. It’s hard to believe it, but it’s true.
When I started PyImageSearch back in January of 2014, I had no idea what the blog would turn into. I didn’t know how it would evolve and mature. And I most certainly did not know how popular it would become. After 100 blog posts, I think the answer is obvious now, although I struggled to put it into words (ironic, since I’m a writer) until I saw this tweet from @si2w:
I couldn’t agree more. And I hope the rest of the PyImageSearch readers do as well.
It’s been an incredible ride and I really have you, the PyImageSearch readers, to thank. Without you, this blog really wouldn’t have been possible.
That said, to make the 100th blog post special, I thought I would do something fun — ball tracking with OpenCV:
The goal here is fairly self-explanatory:
- Step #1: Detect the presence of a colored ball using computer vision techniques.
- Step #2: Track the ball as it moves around in the video frames, drawing its previous positions as it moves.
The end product should look similar to the GIF and video above.
After reading this blog post, you’ll have a good idea on how to track balls (and other objects) in video streams using Python and OpenCV.
Ball tracking with OpenCV
Let’s get this example started. Open up a new file, name it ball_tracking.py, and we’ll get coding:

# import the necessary packages
from collections import deque
from imutils.video import VideoStream
import numpy as np
import argparse
import cv2
import imutils
import time

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video",
    help="path to the (optional) video file")
ap.add_argument("-b", "--buffer", type=int, default=64,
    help="max buffer size")
args = vars(ap.parse_args())
Lines 2-8 handle importing our necessary packages. We’ll be using deque, a list-like data structure with super fast appends and pops, to maintain a list of the past N (x, y)-locations of the ball in our video stream. Maintaining such a queue allows us to draw the “contrail” of the ball as it’s being tracked.
We’ll also be using imutils, my collection of OpenCV convenience functions that make a few basic tasks (like resizing) much easier. If you don’t already have imutils installed on your system, you can grab the source from GitHub or just use pip to install it:
$ pip install --upgrade imutils
From there, Lines 11-16 handle parsing our command line arguments. The first switch, --video, is the (optional) path to our example video file. If this switch is supplied, then OpenCV will grab a pointer to the video file and read frames from it. Otherwise, if this switch is not supplied, then OpenCV will try to access our webcam.
If this is your first time running this script, I suggest using the --video switch to start: this will demonstrate the functionality of the Python script to you. Then you can modify the script, video file, and webcam access to your liking.
A second optional argument, --buffer, is the maximum size of our deque, which maintains a list of the previous (x, y)-coordinates of the ball we are tracking. This deque allows us to draw the “contrail” of the ball, detailing its past locations. A smaller queue will lead to a shorter tail whereas a larger queue will create a longer tail (since more points are being tracked):
Now that our command line arguments are parsed, let’s look at some more code:
# define the lower and upper boundaries of the "green"
# ball in the HSV color space, then initialize the
# list of tracked points
greenLower = (29, 86, 6)
greenUpper = (64, 255, 255)
pts = deque(maxlen=args["buffer"])

# if a video path was not supplied, grab the reference
# to the webcam
if not args.get("video", False):
    vs = VideoStream(src=0).start()

# otherwise, grab a reference to the video file
else:
    vs = cv2.VideoCapture(args["video"])

# allow the camera or video file to warm up
time.sleep(2.0)
Lines 21 and 22 define the lower and upper boundaries of the color green in the HSV color space (which I determined using the range-detector script in the imutils library). These color boundaries will allow us to detect the green ball in our video file. Line 23 then initializes our deque of pts using the supplied maximum buffer size (which defaults to 64).
From there, we need to grab access to our vs pointer. If a --video switch was not supplied, then we grab a reference to our webcam (Lines 27 and 28) — we use the imutils.video VideoStream threaded class for efficiency. Otherwise, if a video file path was supplied, then we open it for reading and grab a reference pointer on Lines 31 and 32 (using the built-in cv2.VideoCapture).
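If you want to sanity-check whether a given RGB color falls inside these HSV boundaries without firing up the range-detector script, you can convert a single color by hand. This sketch uses Python’s colorsys module rescaled to OpenCV’s convention (H in [0, 179], S and V in [0, 255]); it approximates what cv2.cvtColor computes for one pixel, and is for illustration only:

```python
# convert an RGB color to OpenCV-style HSV and check it against
# the greenLower/greenUpper boundaries from the script
import colorsys

greenLower = (29, 86, 6)
greenUpper = (64, 255, 255)

def rgb_to_opencv_hsv(r, g, b):
    # colorsys works in [0, 1]; OpenCV uses H in [0, 179], S/V in [0, 255]
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return (int(h * 179), int(s * 255), int(v * 255))

def in_range(hsv, lower, upper):
    # a color "passes" only if every channel is inside its boundary
    return all(lo <= c <= hi for c, lo, hi in zip(hsv, lower, upper))

# pure green lands at H = 59, comfortably inside the boundaries
print(rgb_to_opencv_hsv(0, 255, 0))  # (59, 255, 255)
print(in_range(rgb_to_opencv_hsv(0, 255, 0), greenLower, greenUpper))  # True
print(in_range(rgb_to_opencv_hsv(255, 0, 0), greenLower, greenUpper))  # False
```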
# keep looping
while True:
    # grab the current frame
    frame = vs.read()

    # handle the frame from VideoCapture or VideoStream
    frame = frame[1] if args.get("video", False) else frame

    # if we are viewing a video and we did not grab a frame,
    # then we have reached the end of the video
    if frame is None:
        break

    # resize the frame, blur it, and convert it to the HSV
    # color space
    frame = imutils.resize(frame, width=600)
    blurred = cv2.GaussianBlur(frame, (11, 11), 0)
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)

    # construct a mask for the color "green", then perform
    # a series of dilations and erosions to remove any small
    # blobs left in the mask
    mask = cv2.inRange(hsv, greenLower, greenUpper)
    mask = cv2.erode(mask, None, iterations=2)
    mask = cv2.dilate(mask, None, iterations=2)
Line 38 starts a loop that will continue until (1) we press the q key, indicating that we want to terminate the script, or (2) our video file reaches its end and runs out of frames.
Line 40 makes a call to the read method of our vs pointer. When reading from a video file, cv2.VideoCapture returns a 2-tuple whose first entry is a boolean indicating whether the frame was successfully read and whose second entry is the video frame itself; the threaded VideoStream returns the frame directly. Line 43 handles the difference between the VideoStream and VideoCapture implementations.
In the case we are reading from a video file and the frame is not successfully read, then we know we are at the end of the video and can break from the while loop (Lines 47 and 48).
Lines 52-54 preprocess our frame a bit. First, we resize the frame to have a width of 600px. Downsizing the frame allows us to process it faster, leading to an increase in FPS (since we have less image data to process). We’ll then blur the frame to reduce high frequency noise and allow us to focus on the structural objects inside the frame, such as the ball. Finally, we’ll convert the frame to the HSV color space.
Line 59 handles the actual localization of the green ball in the frame by making a call to cv2.inRange. We first supply the lower HSV color boundaries for the color green, followed by the upper HSV boundaries. The output of cv2.inRange is a binary mask, like this one:
As we can see, we have successfully detected the green ball in the image. A series of erosions and dilations (Lines 60 and 61) remove any small blobs that may be left on the mask.
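Under the hood, cv2.inRange is just a per-channel range test: a pixel becomes white (255) only if every channel lies between the lower and upper boundary. A NumPy-only sketch on a fake 2x2 HSV “image” (pixel values invented for illustration):

```python
# mimic cv2.inRange with plain numpy: a pixel is kept (255) only if
# every channel falls within [lower, upper]
import numpy as np

greenLower = np.array((29, 86, 6))
greenUpper = np.array((64, 255, 255))

# a fake 2x2 HSV "image": one green pixel, three non-green pixels
hsv = np.array([
    [[45, 200, 180], [10, 200, 180]],   # in range / hue too low
    [[45,  50, 180], [90, 200, 180]],   # saturation too low / hue too high
], dtype=np.uint8)

in_lo = (hsv >= greenLower).all(axis=2)
in_hi = (hsv <= greenUpper).all(axis=2)
mask = np.where(in_lo & in_hi, 255, 0).astype(np.uint8)

print(mask)
# [[255   0]
#  [  0   0]]
```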
Alright, time to compute the contour (i.e. outline) of the green ball and draw it on our frame:
    # find contours in the mask and initialize the current
    # (x, y) center of the ball
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
        cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)
    center = None

    # only proceed if at least one contour was found
    if len(cnts) > 0:
        # find the largest contour in the mask, then use
        # it to compute the minimum enclosing circle and
        # centroid
        c = max(cnts, key=cv2.contourArea)
        ((x, y), radius) = cv2.minEnclosingCircle(c)
        M = cv2.moments(c)
        center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

        # only proceed if the radius meets a minimum size
        if radius > 10:
            # draw the circle and centroid on the frame,
            # then update the list of tracked points
            cv2.circle(frame, (int(x), int(y)), int(radius),
                (0, 255, 255), 2)
            cv2.circle(frame, center, 5, (0, 0, 255), -1)

    # update the points queue
    pts.appendleft(center)
We start by computing the contours of the object(s) in the image on Lines 65 and 66. The subsequent line makes the function compatible with all versions of OpenCV — you can read more about why this change to cv2.findContours is necessary in this blog post. We’ll also initialize the center (x, y)-coordinates of the ball to None on Line 68.
Line 71 makes a check to ensure at least one contour was found in the mask. Provided that at least one contour was found, we find the largest contour in the cnts list on Line 75, compute the minimum enclosing circle of the blob, and then compute the center (x, y)-coordinates (i.e. the “centroid”) on Lines 77 and 78.
Line 81 makes a quick check to ensure that the radius of the minimum enclosing circle is sufficiently large. Provided that the radius passes the test, we then draw two circles: one surrounding the ball itself and another to indicate the centroid of the ball.
Finally, Line 89 appends the centroid to the pts list.
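The centroid computation on Lines 77 and 78 is less mysterious than it looks: m00 is the blob’s area, m10 the sum of its x coordinates, and m01 the sum of its y coordinates, so m10/m00 and m01/m00 are just the mean x and mean y of the white pixels. A NumPy-only sketch on a synthetic mask:

```python
# compute image moments by hand on a tiny binary mask: the centroid
# (m10/m00, m01/m00) is the mean x and mean y of the white pixels
import numpy as np

mask = np.zeros((9, 9), dtype=np.uint8)
mask[2:7, 3:8] = 1  # a 5x5 white square

ys, xs = np.nonzero(mask)
m00 = len(xs)    # area (number of white pixels)
m10 = xs.sum()   # sum of x coordinates
m01 = ys.sum()   # sum of y coordinates

center = (int(m10 / m00), int(m01 / m00))
print(center)  # (5, 4): the middle of the square
```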
The last step is to draw the contrail of the ball, or simply the past N (x, y)-coordinates the ball has been detected at. This is also a straightforward process:
    # loop over the set of tracked points
    for i in range(1, len(pts)):
        # if either of the tracked points are None, ignore
        # them
        if pts[i - 1] is None or pts[i] is None:
            continue

        # otherwise, compute the thickness of the line and
        # draw the connecting lines
        thickness = int(np.sqrt(args["buffer"] / float(i + 1)) * 2.5)
        cv2.line(frame, pts[i - 1], pts[i], (0, 0, 255), thickness)

    # show the frame to our screen
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the 'q' key is pressed, stop the loop
    if key == ord("q"):
        break

# if we are not using a video file, stop the camera video stream
if not args.get("video", False):
    vs.stop()

# otherwise, release the camera
else:
    vs.release()

# close all windows
cv2.destroyAllWindows()
We start looping over each of the pts on Line 92. If either the current point or the previous point is None (indicating that the ball was not successfully detected in that given frame), then we ignore the current index and continue looping over the pts (Lines 95 and 96).
Provided that both points are valid, we compute the thickness of the contrail and then draw it on the frame (Lines 100 and 101).
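The sqrt-based thickness formula is what produces the tapered look: the newest segments (small i) are drawn thick and older segments get progressively thinner. A quick sketch of the values it yields with the default buffer of 64:

```python
# the contrail thickness shrinks with the point's age in the queue
import numpy as np

buffer_size = 64
thicknesses = [int(np.sqrt(buffer_size / float(i + 1)) * 2.5)
               for i in range(1, 11)]

print(thicknesses)  # [14, 11, 10, 8, 8, 7, 7, 6, 6, 6]
```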
The remainder of our ball_tracking.py script simply performs some basic housekeeping by displaying the frame to our screen, detecting any key presses, and then releasing the vs pointer.
Ball tracking in action
Now that our script has been coded up, let’s give it a try. Open up a terminal and execute the following command:
$ python ball_tracking.py --video ball_tracking_example.mp4
This command will kick off our script using the supplied ball_tracking_example.mp4 demo video. Below you can find a few animated GIFs of the successful ball detection and tracking using OpenCV:
For the full demo, please see the video below:
Finally, if you want to execute the script using your webcam rather than the supplied video file, simply omit the --video switch:
$ python ball_tracking.py
However, to see any results, you will need a green object with the same HSV color range as the one I used in this demo.
What's next? I recommend PyImageSearch University.
30+ total classes • 39h 44m video • Last updated: 12/2021
★★★★★ 4.84 (128 Ratings) • 3,000+ Students Enrolled
I strongly believe that if you had the right teacher you could master computer vision and deep learning.
Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science?
That’s not the case.
All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.
If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery.
Inside PyImageSearch University you'll find:
- ✓ 30+ courses on essential computer vision, deep learning, and OpenCV topics
- ✓ 30+ Certificates of Completion
- ✓ 39h 44m on-demand video
- ✓ Brand new courses released every month, ensuring you can keep up with state-of-the-art techniques
- ✓ Pre-configured Jupyter Notebooks in Google Colab
- ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!)
- ✓ Access to centralized code repos for all 500+ tutorials on PyImageSearch
- ✓ Easy one-click downloads for code, datasets, pre-trained models, etc.
- ✓ Access on mobile, laptop, desktop, etc.
Summary
In this blog post we learned how to perform ball tracking with OpenCV. The Python script we developed was able to (1) detect the presence of the colored ball and then (2) track and draw the position of the ball as it moved around the screen.
As the results showed, our system was quite robust and able to track the ball even when it was partially occluded from view by my hand.
Our script was also able to operate at an extremely high frame rate (> 32 FPS), indicating that color-based tracking methods are very much suitable for real-time detection and tracking.
If you enjoyed this blog post, please consider subscribing to the PyImageSearch Newsletter by entering your email address in the form below — this blog (and the 99 posts preceding it) wouldn’t be possible without readers like yourself.
Download the Source Code and FREE 17-page Resource Guide
Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL!
Andrew
Hello Adrian!
As always, a very nice tutorial, very well explained 🙂
How would you handle the situation where we have, let’s say 10 green balls in the video?
Best regards!
Adrian Rosebrock
Great question Andrew, thanks for asking. If you had more than 1 ball in the image, you would simply loop over each of the contours individually, make sure they are of sufficient size, and draw their enclosing circles individually. And if you wanted to track multiple balls of different colors, you would need to define a list of lower and upper boundaries, loop over them, and then create a mask for each set of boundaries.
Anderson
Hi Adrian, can you explain in detail how to loop over each of the contours individually so that I can handle more than 1 ball? Thank you!
Adrian Rosebrock
Remove the call to max on Line 66. Then just loop over the detected contours. The biggest problem is that you need to maintain a deque for each ball, which involves object tracking. The simplest way to accomplish this is via centroid tracking. I will try to do a blog post on this technique in the future.
ANKIT SAINI
Hey Adrian! I need help a bit more specifically in tracking motion for multiple objects.
asyraf
Hey Adrian, I’m trying to do this method but it’s not working. Can you help me please?
Suraj
How do I run this code in PyCharm?
Adrian Rosebrock
Hi Suraj — the easiest way is from the command line. You should read this blog post on setting up an environment with PyCharm.
David
Great post Adrian. This would be useful for tracking tennis balls! And the time to process a frame is fast! I wonder if Hawk-eye uses OpenCV https://en.wikipedia.org/wiki/Hawk-Eye
Adrian Rosebrock
Tracking fast moving objects in sports such as tennis balls and hockey pucks is a deceptively challenging problem. The issue arises from the objects moving so fast that the standard computer vision algorithms can’t really process them — all they see is a bunch of motion blur. I’m not sure about tennis, but I know in the case of hockey they ended up putting a chip in the puck that interfaces with other systems, allowing it to be more easily tracked (and thus watched on TV).
honesty
His posts are dope!
Tyrone
Awesome.
If you don’t have his book I suggest you get it.
Keep up the good work Adrian.
Adrian Rosebrock
Thanks Tyrone! 🙂
Nathanael Anderson
Any chance you could put a tutorial together to track a green laser pointer dot over multiple surfaces, including moving video? I’ve been enjoying reading all the info you post. Thanks for all the work you put into it. I started working with OpenCV because of your work.
Adrian Rosebrock
Hey Nathanael — welcome to the world of computer vision, I’m happy that I could be an inspiration. I hope you’re enjoying the blog!
Unfortunately, I personally don’t own any laser pointers. There might be a red laser pointer buried somewhere in the boxes from the last time I moved, but I’m not sure. If I can get my hands on a laser pointer I’ll try to do a tutorial on tracking it.
Neeraj
Hi Adrian — thanks for such a detailed explanation of OpenCV concepts. Your site is the best site for learning OpenCV, and these days I eagerly wait for your email about what new material you have published. Thanks always! I am right now not able to capture video from my web camera. I am using VirtualBox and installed Ubuntu on VB; my host operating system is OSX. My frames returned are None and grabbed is always false. I tried changing the camera = cv2.VideoCapture(0) argument from 0 to 1 and -1. Do I need to do anything special to access the web camera from VirtualBox? Under VB->Device->USB-> Apple HD FaceTime camera is selected.
Adrian Rosebrock
Unfortunately, using VirtualBox you will not be able to access the raw webcam stream from your OSX machine. This is considered to be a security concern. Imagine if a VM could access your webcam anytime it wanted! So, because of this, accessing the raw webcam is disabled. In this case, you have two options:
1. Try VMWare, which does allow for the webcam to be accessed from the VM. I personally have not tried this out, but I have heard this from others.
2. Install OpenCV on your native OSX machine.
I hope that helps!
gabrigam
Hi Adrian and all friends,
I’m using VirtualBox 6.2 with Ubuntu 19.04, and to fix the cam problem try this:
1) install Virtualbox extension pack
2) enable usb 2/3 from virtualbox property machine and add cam device
3) start vm and enable cam
I hope this Helps
Gab
ciao
Adam Gibson
Just FYI, adding the following code allows operation on Python 3 & OpenCV 3 (at the top, near line 7):
Adrian Rosebrock
Thanks for the comment Adam! The code will work with OpenCV 3 without a problem, but the change you suggested is required for Python 3. I’ll update this post to use NumPy’s arange instead to make the code compatible with both Python versions.
Luis
Hi Adrian,
Thanks for another great tutorial on OpenCV.
I am working with a freshly compiled Python3 + OpenCV3 on a Raspberry Pi 2, installed from your tutorial on the subject and running this code I am getting the following error:
I even added the lines suggested by Adam Gibson for compatibility with Python3 and OpenCV3, but the error persists.
Do you have ay idea of what am I missing?
Thanks,
Adrian Rosebrock
Any time you see an error related to an image being None and not having a shape attribute, it’s because the image/frame was either (1) not loaded from disk, indicating that the path to the image is incorrect, or (2) a frame is not being read properly from your video stream. Based on your provided command, it looks like you are trying to access the webcam of your system. Try using the supplied video (in the code download of this post) and see if it works. If it does, then the issue is with your webcam.
Nick B
Hi Adrian,
I have the same attribute error, however I tested my webcam with your video test script, so it should be working?
Thank you
Adrian Rosebrock
Which webcam video test script did you use?
Namal
Hi Adrian,
I also have this problem. With video tracking it’s working properly, but not with the webcam (error as mentioned above), even though the camera itself is working fine. What should I do?
Adrian Rosebrock
I’m sorry to hear that your OpenCV setup is having issues accessing your webcam. Which webcam are you using?
Santo
Hello Adrian,
I want to modify this code to detect a digit using picamera.
Could you suggest a way to do it?
Thanks,
Santo
Adrian Rosebrock
That really depends on the types of digits you’re trying to detect as well as the environment they are in. Typical methods for object detection in general include Histogram of Oriented Gradients. I cover object detection in detail inside the PyImageSearch Gurus course, but without seeing an example image of what you’re trying to detect, I can’t point you in the right direction.
avtar
i m getting the same error!! i need help! i also installed ur open cv on youtube.. i am using pi camera. i am getting a video feed of the video i supplied but i am not getting live stream.
Luis
Hi,
Just figured it out.
To use this code on a Raspberry Pi with Python3 OpenCV3 and the RaspiCAM I needed to load the v4l2 driver:
sudo modprobe bcm2835-v4l2
To load the driver every time the RPi boots up, just add the following line to /etc/modules
bcm2835-v4l2
Thanks for the tutorial
Adrian Rosebrock
Thanks for sharing Luis! Another alternative is just to modify the frame reading loop to use the picamera module as detailed in this post.
John
Hello Adrian,
I got the same as error as Luis. Would you explain detail how to modify frame reading loop?
Adrian Rosebrock
Hey John — take a look at Luis’ other comment on this post, he mentioned how he resolved the error.
ancientkittens
I love this article – nice work. I even noted that it got picked up in python weekly!!
Adrian Rosebrock
Thanks! 😀 I’m glad you enjoyed it!
Yuke
Hi Adrian,
Thanks for sharing this project.
I have a question regarding recovering the ball when it appears in the scene again: do you use a detector to do it, or only the HSV color boundaries?
Another thing: I find your application is robust to illumination changes. Do you use other features for tracking? Because I think HSV alone could not handle it…
Adrian Rosebrock
When the ball drops out of the frame, the HSV boundaries are simply used to pick it back up when it re-enters the scene. To answer your second question: since this is a basic demonstration of how to perform object detection, I’m only using color-based methods. In future posts I’ll show more robust approaches using features.
Vlad
Great!
Luis Jose
Hi Adrian!
Amazing work, as always! I wonder, how difficult do you think is to extend this code and follow the position of more balls of different colors?
Thanks for sharing all this knowledge with the world!
Luis
Adrian Rosebrock
Not too challenging at all. Just define a list of lower and upper color boundaries you want to track, loop over them for each frame, and generate a mask for each color. I actually detail exactly how to do this in the PyImageSearch Gurus course.
HienTran
Thanks for the amazing project. I have a question: if I want to calculate or estimate the speed of the ball, what do I have to do? This is the first time I’ve learned about tracking.
Adrian Rosebrock
Take a look at Raspberry Pi for Computer Vision where we do speed calculation of vehicles. The same method can be applied to ball tracking as well.
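As a back-of-the-envelope sketch of the idea (the calibration constant and frame rate below are invented; a real setup needs the kind of calibration covered in that book):

```python
# estimate ball speed from two consecutive tracked centroids:
# pixel displacement per frame, converted to meters per second
import math

PIXELS_PER_METER = 400.0  # hypothetical calibration value
FPS = 32.0                # hypothetical frame rate of the stream

def speed_mps(prev_pt, cur_pt):
    dx = cur_pt[0] - prev_pt[0]
    dy = cur_pt[1] - prev_pt[1]
    pixels_per_frame = math.hypot(dx, dy)
    return (pixels_per_frame / PIXELS_PER_METER) * FPS

# ball moved 30 px right and 40 px down between two frames (50 px total)
print(speed_mps((100, 100), (130, 140)))  # 4.0 m/s
```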
Ali
Hi Adrian,
Beautiful tutorial. I am motivated to try this sort of tracking on a squash ball. Do you think it might work? Some of the challenges that come to mind:
1) ball is black
2) ball absolute diameter is small, and the perceived ball size becomes even smaller as the distance between the camera sensor and the ball increases
3) very high speed of ball
Adrian Rosebrock
Hey Ali — great questions, thanks for asking. If the ball is black, that could cause some issues when using color based detection, but that’s actually not too much of an issue provided that there is enough contrast between the black color and the rest of the image scene. What’s actually really concerning is the very high speed of the ball. Motion blur can really, really hurt the performance of computer vision algorithms. I think you would need a very high FPS camera, incorporate color tracking (if at all possible), and might want to use a bit of machine learning to build a custom squash ball tracker.
Willem Jongman
Hi Adrian,
When there is initially no contour in the mask, and then the green object is moved into view, it will generate a “deque index out of range” exception on line 94.
I Modified line 94 to:
if counter >= 10 and i == 1 and len(pts) >= 10 and pts[-10] is not None:
That seems to have solved it.
Thank you very much for sharing your image-processing knowledge, I learned some neat tricks from it and I hope you will be keeping up this good work.
Cheers,
Willem.
Adrian Rosebrock
Thanks for sharing Willem! 🙂
Pedro
Hi Adrian,
Awesome tutorial, as always.
Best website to source OpenCv and computer vision 🙂
Adrian Rosebrock
Thanks for the kind words Pedro! 😀
Prasanna K Routray
Hello,
I tried to run this but it’s giving me this error:
Adrian Rosebrock
You need to install the imutils package:
$ pip install imutils
deshario
Can we check which direction the ball is coming from?
For example: if our ball is on the right and we move it to the left,
the output that I need is: print(“Ball is coming from right to left”)
How can I do it?
Thanks
Adrian Rosebrock
Absolutely. Please see this post.
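The gist of the linked post can be sketched in a few lines: compare the newest centroid’s x coordinate against an older one in the points queue and look at the sign of the difference. The helper name and threshold here are illustrative, not taken from that post:

```python
# infer horizontal direction of motion from the tracked points queue:
# pts[0] is the newest centroid, pts[-1] the oldest
from collections import deque

def horizontal_direction(pts, min_shift=20):
    dx = pts[0][0] - pts[-1][0]
    if dx <= -min_shift:
        return "right to left"
    if dx >= min_shift:
        return "left to right"
    return "not moving (much)"

# made-up track: x coordinates shrinking over time, i.e. moving left
pts = deque([(40, 100), (80, 100), (120, 100), (160, 100)])
print(horizontal_direction(pts))  # right to left
```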
Sharad Patel
Adrian,
I am planning to do a deep dive into your tutorial for a project of mine. I am new to motion tracking and I have a question (it may be answered in the code – if so please can you point it out). Is it possible to set regions on the image such that when the ball enters it, the code can do something (e.g. output a message)? Thanks.
Adrian Rosebrock
All you really need are some if statements and the bounding box of the contour. For example, if I wanted to see if the ball entered the top-left corner of my frame, I would check whether the center (x, y)-coordinates fall within the upper-left corner of the frame (within 50 pixels) and only fire the code inside the if statement in that case. You can of course modify the code to suit your needs.
Sharad Patel
Great! Thank you. When I came across this post I wasn’t aware of your Quickstart package. Just downloaded it and I am working my way through the tutorials – enjoying it all so far!
Adrian Rosebrock
Thanks for picking up a copy of the Quickstart Bundle Sharad, enjoy! 🙂
Sharad Patel
Sorry – one more question. I have a video similar to yours but I have a red ball. Do you have any tips / tools that you can recommend in establishing the colour bounds for my object (I have tried guesstimating with a web based color-picker). Thanks.
Adrian Rosebrock
Take a look at the range-detector script I link to in the body of the blog post. You can use this to help determine the appropriate color threshold values.
Amirul Izwan
Hello Adrian,
Good job on your tutorials, a significant help to my school project. I tried experimenting with the ‘if’ statement as you suggest here; the problem is it only works the first time. The second (and later) times I run the code it throws me the error: name ‘x’ is not defined. Is there any way to fix this? Thanks!
Adrian Rosebrock
If you’re getting an error that the variable x is undefined, then you’ll want to double check your code and ensure that x is being properly calculated during each iteration of the while loop. It sounds like a logic error in the code that has been introduced after modifying it.
Jessie
Thanks for sharing!
I’m wondering what the longest distance between the ball and the camera can be to guarantee accuracy?
Adrian Rosebrock
As long as the ball is in the field of view in the camera and the radius doesn’t fall below the minimum radius of 10 pixels (which is a tunable parameter), this will work. You might also be interested in measuring the distance from the camera to an object.
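The distance-from-camera post Adrian links to uses triangle similarity: an object of known real width that appears W pixels wide sits at distance = (known width x focal length) / W. A sketch with invented calibration numbers:

```python
# triangle-similarity distance estimate: calibrate the focal length once
# from a reference shot, then invert the relationship for new frames
KNOWN_WIDTH_M = 0.10     # hypothetical: a 10 cm ball
CALIB_DISTANCE_M = 1.0   # hypothetical: calibration shot taken at 1 m
CALIB_WIDTH_PX = 120.0   # hypothetical: ball was 120 px wide at 1 m

# perceived width * distance / real width gives the focal length in pixels
focal_px = (CALIB_WIDTH_PX * CALIB_DISTANCE_M) / KNOWN_WIDTH_M

def distance_m(perceived_width_px):
    return (KNOWN_WIDTH_M * focal_px) / perceived_width_px

print(distance_m(120.0))  # 1.0  (sanity check: the calibration shot)
print(distance_m(60.0))   # 2.0  (half the apparent size, twice as far)
```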
Hilman
Hey Adrian, I have a question.
I can’t help but notice that you didn’t convert the greenLower and greenUpper boundaries in the line
mask = cv2.inRange(hsv, greenLower, greenUpper)
into NumPy arrays, when in your OpenCV and Python Color Detection post you said that OpenCV will expect the colour limits to be in the form of NumPy arrays. Why is that?
Adrian Rosebrock
That’s a good point! I thought it did need to be a NumPy array, but it seems a tuple of integers will work as well. Thanks for pointing this out Hilman.
mathivanan
Someone help me: how can I print the coordinates of the ball on the terminal?
Adrian Rosebrock
After Line 72, simply do:
print((x, y))
Bart
This is a nice tutorial, well explained, I was wondering how to add a pan/tilt servo to the project so that an external camera (USB) can move like the contrails
Adrian Rosebrock
I honestly haven’t worked with a pan/tilt servo before, although that is something I will try to cover in a future blog post — be sure to keep an eye out!
Guru
Extremely Great Post Man. I would like to request you to demonstrate shape based tracking instead of color based tracking in this context. It would help me greatly to be frank.
Thanks.
Adrian Rosebrock
Have you tried looking into HOG + Linear SVM (commonly called object detectors)? It’s a great way to perform shape based detection followed by tracking.
david
Hi, great post. Just curious about lines 44-46.
frame = imutils.resize(frame, width=600)
blurred = cv2.GaussianBlur(frame, (11, 11), 0)
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
Should: hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
Be: hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
?
Otherwise, the “blurred” frame isn’t used that I can see.
Adrian Rosebrock
Hey David, thanks for pointing this out. I didn’t mean for the blurring to be included in the code, I have commented it out. Sorry for any confusion!
Tain
Evening Adrian
i am absolutely new to Python and openCV, however have some programming experience.
But i am struggling to get the demo running.
My guess is that for some reason frames aren’t being grabbed from the camera or video, and so I end up with an error that “NoneType” doesn’t have an attribute called “shape” when the code calls for a resize of an object.
Any thoughts on where I am going wrong?
Windows 10, Python 2.7
Thanks for your help
Tain
Adrian Rosebrock
Any time you see an image or frame being “NoneType” it’s almost 100% due to the fact that (1) the image is not correctly read from disk or (2) the frame could not be read from the video stream. I would double check that you can properly access your webcam via OpenCV since that is likely were the issue lies.
Bob
Hi, great tuto, got it to work well. The thing is that I am hopeless with color spaces and don’t understand anything other than RGB. I’ve tried using the rgb2hsv() conversion function to try and track a red ball or a blue ball, but I didn’t get it to work. I searched the Python documentation but the functions they propose (like colorsys.rgb_to_hsv()) don’t give results in the same ranges. I also tried different Wikipedia functions and online functions, but I don’t seem to get it to work with anything other than the green.
Any help welcome.
Cheers
Adrian Rosebrock
Take a look at the range-detector script that I link to in this post. You can use this script to help you determine appropriate color threshold values.
Hilman
Hey Adrian. I got one question.
On line 96, the code is ‘key = cv2.waitKey(1) & 0xFF’. Why are the ampersand sign and the ‘0xFF’ needed? I’ve googled it; the best explanation I have found is something about whether the computer is 64-bit (if I remember correctly).
Adrian Rosebrock
This is used to take the bitwise AND of the return value of cv2.waitKey with 0xFF, which gives the least significant byte of the key code. This byte is then compared against the output of the ord function so we can check the actual value of the key press.
David Kadouch
Hey Adrian
Quick question: I want to track a different color than the green/yellow ball. What’s the formula you used to transform the RGB values to HSV values? (in your code sample the values for greenLower = (29, 86, 6) and greenUpper = (64, 255, 255).
I’m struggling with that and I can’t make it work. I want to track a blue object.
thanks
David
Adrian Rosebrock
To transform the RGB values to HSV values, it’s best to use the
cv2.cvtColor
function. You can find the formula for the conversion on this page. However, if you’re trying to detect a different color object, I suggest using the
range-detector
script I mention in this post.
ghanendra
Hey Adrian! I am not able to run it on my Raspberry Pi 2. It's showing a NoneType error, and for ball_tracking_example.mp4 the FPS is very low.
please Help me out.
Adrian Rosebrock
Anytime you see a
NoneType
error, it’s 99% of the time due to an image not being read properly from disk or a frame not being read from the video stream. The issue here is that you’re using
cv2.VideoCapture
when you should instead be using the
picamera
Python package to access the Raspberry Pi camera module. You can read more on how to access the Raspberry Pi camera module here. You could also swap out the
cv2.VideoCapture
for the
VideoStream
that works with both the Raspberry Pi camera module and USB webcams. Find out more here.
Ghanendra
Thanks a lot Adrian.
I was able to do live stream ball tracking with pi.
I want to detect front head light of a vehicle during night time. Still I am just a beginner. Can you help me out on this?
Adrian Rosebrock
That’s definitely a bit more of a challenge. To start, you’ll want to find the brightest spots in an image. Then, you’ll need to filter these regions and apply a heuristic of some sort. A first try would be finding two spots in an image that lie approximately on the same horizontal line. You might also want to try training an object detector to detect the front of the car prior to processing the ROI for headlights.
Ghanendra
Hey Adrian!!
Can you help me out with the code for detecting two spots on the same horizontal line in an image?
I need to detect multiple bright objects in a live video stream,
just like finding multiple balls of the same color.
Thanks in advance.
Adrian Rosebrock
The same code can be applied. Just define the color ranges for each object you want to detect, then create a mask for each of the color ranges. From there, you can find and track the objects. If you don’t want to use color ranges, then I suggest reading this post on finding bright spots in images.
ghanendra
hey Adrian thanks for help.
I tried blue by creating a different mask and setting its color range, and it was tracked simultaneously.
I was able to track green and blue.
1. How do I track two balls on the same horizontal line?
2. In your tutorial we find the largest contour in the mask; instead of that, how do I find all the contours and track them separately?
3. How do I track multiple objects of the same color? For example, if I have 5-10 green balls, how do I track them?
Adrian Rosebrock
The most important aspect of tracking multiple colors is to use multiple masks, one for each color you want to track. You then apply each step of color thresholding, finding the largest contour(s), and tracking them. But again, you need to create a mask for each color range that you want to track.
ghanendra
Haha… Adrian, you misunderstood me. I was asking about tracking the same color: tracking multiple objects of the “SAME COLOR”.
Adrian Rosebrock
Got it, I understand now. See my reply to “Maikal” in this comments section. I detail a procedure that can be used to handle objects that are the same color.
ghanendra
Hey Adrian, really, thanks a lot for answering my questions. I just love these tutorials, and every day I come with a new question; I hope you won’t mind answering them. Haha!!
One more
I need to indicate the detected green ball using an LED, so how can I use RPi.GPIO with this code? I tried importing it but got an error.
How do I use the GPIO pins with this code?
Adrian Rosebrock
I’ll be covering this soon on the PyImageSearch blog, keep an eye out 🙂
amin
Hi Adrian,
thanks for your GREAT tutorials,
I want to merge this ball tracking code with “unifying-picamera-and-cv2…” to get the best results in tracking a green ball.
First I installed the latest Jessie updates and installed OpenCV 3.1.0 with Python 3, the same as in your post “how-to-install-opencv-3-on-raspbian-jessie”.
For a simple imshow (no tracking, max width = 400) I can reach 39 FPS with the picamera and about 27 FPS with a webcam,
but when I add the ball-tracking code the FPS decreases to 7.8 with the picamera and 7 with the webcam 😐
Why does the webcam get so close to the picamera’s speed once the tracking code is added?
Is it possible to reach a better FPS (without changing the size)?
I tried several ways of increasing the FPS but they are not good enough,
e.g. increasing the priority by changing the nice value of the Python process (“renice -n -20 PID of process”),
but that was not so good; it increased the FPS by maybe 0.1.
thanks a lot
Adrian Rosebrock
So keep in mind that the
FPS
is not measuring the physical FPS of your camera sensor. Instead, it’s measuring the total number of frames you can process in a single second. The more steps you add to your video processing pipeline, the slower it will run. Your results reflect this as well. When using just
cv2.imshow
you were able to process 39 frames per second. However, once you included smoothing, color thresholding, and contour detection, your processing rate dropped to 7 frames per second. Again, this makes sense: you are adding more steps to your processing pipeline, therefore you cannot process as many frames per second. Think of your video processing pipeline as a flight of stairs. The fewer functions you have to call inside your pipeline (i.e., the “while” loop), the faster you can go down the stairs. The more functions you have, the longer your staircase becomes, and therefore the longer it takes you to descend the stairs.
kazem
Hi Adrian, great tutorial. You mentioned you used range-detector to determine the boundaries. Would you mind telling me how you did that? I ran it and I can see I can use the sliders to make sure that my object stands out as black against the white background. But nowhere can I see any values.
Adrian Rosebrock
Indeed, the sliders control the values. The easiest way to get the actual RGB or HSV color thresholds is to insert a
print
statement right after you press the
q
key to exit the script. I’ll be doing a more detailed tutorial on how to use the
range-detector
script in the future.
Hojo
I have just started learning python about a week ago and I am still trying to wrap my head around the language.
So while this question sounds dumb, how do you run range-detector in Python? Is it already in imutils?
I am trying to detect and track multiple moving black balls in the same frame, print out the respective positions and calculate the distance traveled, velocity, etc.
I have written code before to do this, but in MATLAB (I split an image into R-G-B, performed a background subtraction on each channel, inverted the resulting images, took the similar regions, and binarized). However, when reading up on object tracking, I noticed that many use HSV instead of RGB. After reading more I can see why HSV is preferred over RGB, but because of this I need to be able to define the color ranges. The range-detector looked perfect to use, but… (back to my question above).
Adrian Rosebrock
There are many ways to execute the
range-detector
script, but most are based on how your Python PATH is defined. Where do you have the
imutils
package installed on your system? The script itself is already in
imutils
. The easiest method would be to change directory to it and execute it using your input image/video stream as the source.
Selim M.
Hello Adrian,
Thanks for the tutorials, I learned a lot from them. I have a problem with the camera though: it does not capture the frames. I didn't have a problem when taking photos, but it seems that the video is a bit problematic. I run the code and it doesn't capture the frames. Do you have an idea why this happens?
Have a nice day!
Adrian Rosebrock
What type of camera are you using? Additionally, you might want to try another piece of software (such as OSX’s PhotoBooth or the like) to ensure that your camera can be accessed by your OS.
giulio mignemi
hello, I need to set the color to identify the ball covered with aluminum foil, could you help me?
Adrian Rosebrock
I would recommend against this. Trying to detect and recognize objects that are reflective is very challenging due to the fact that reflective materials (by definition) reflect light back into the camera. Thus, it becomes very hard to define a color range for reflective materials. Instead, if at all possible, change the color of the object you are tracking.
maikal
Anyone can tell me how to detect two green balls simultaneously???
Adrian Rosebrock
Change Line 66 to be a
for
loop and loop over the contours individually (rather than picking out the largest one). You can get rid of the
max
call and then process each of the contours individually. I would insert a bit of logic to help prune false-positive contours based on their size, but that should get you started!
maikal
Yeah Adrian, thanks a lot. I changed it to a for loop and multiple contours are detected, but they are overlapping each other. I tried changing the radius size but still don't get a proper result.
Waiting for your logic.
Adrian Rosebrock
If the contours are overlapping, then that will cause an issue with the tracking — this is also why you might want to consider using different color objects for tracking. In the case of overlapping objects, you should consider applying the watershed algorithm for segmentation.
Wallace Bartholomeu
Can you please share this part of your code?
I'm trying to do it, but unsuccessfully.
Alan
Hi Adrian,
If we are tracking multiple balls, you said to loop over the contours in the earlier post. However, how do you identify the contours so that when you draw the lines, they belong to the correct ball?
Adrian Rosebrock
There are many ways to accomplish this, some easy, some complicated. The quickest solution is to compute the centroid of each object in the frame. Then, find the objects in the next frame. Compute the centroids again. Take the Euclidean distance between the centroids. The pairs of objects that have the smallest distances are thus the “same” object. This would make for a great blog post in the future, so I’ll make sure I cover that.
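A rough sketch of that nearest-centroid association, using a simple greedy pairing. The coordinates are made up, and a real tracker would also need to handle objects appearing or disappearing between frames:

```python
import numpy as np

def match_centroids(prev, curr):
    """Greedily pair centroids from consecutive frames by smallest
    Euclidean distance. Returns a list of (prev_index, curr_index)."""
    pairs = []
    used = set()
    for i, p in enumerate(prev):
        # distance from this previous centroid to every unclaimed
        # centroid in the current frame
        dists = [(np.linalg.norm(np.array(p) - np.array(c)), j)
                 for j, c in enumerate(curr) if j not in used]
        if dists:
            _, j = min(dists)
            used.add(j)
            pairs.append((i, j))
    return pairs

# two balls moved slightly between frames; note the current-frame
# list is in a different order than the previous-frame list
prev = [(50, 50), (200, 80)]
curr = [(205, 85), (52, 48)]
print(match_centroids(prev, curr))  # → [(0, 1), (1, 0)]
```

Each pair says "previous object i is current object j", so the contrail for each ball can be appended to the correct deque.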
yaswanth kumar
Hey Adrian, don’t we have any python library or any algorithm to do the same? if yes, can you please suggest some! Thanks
Adrian Rosebrock
No, there isn’t a library that you can pull off the shelf for this. It’s not too hard to code though. I’ll try to do a blog post on it in the future, but my queue/idea list is quite large at the moment.
Matt
Hi Adrian,
Thanks for your great tutorial. Does your script run on a BeagleBone Black board?
Thanks in advance
Adrian Rosebrock
I don’t own a BeagleBone Black, so I honestly can’t say.
Matt
Ok, thanks. But in your project, what kind of board did you use?
Adrian Rosebrock
I either use a laptop or desktop running Ubuntu or OSX, or I use a Raspberry Pi.
Matt
Ok, but where do you run your script from? The Raspberry Pi or a laptop? Thanks.
Adrian Rosebrock
I run it on both. This particular script I executed on my laptop, but it can also be run on the Raspberry Pi by modifying the code to access the Raspberry Pi camera.
Matt
Thanks. According to your video, you seem to track the ball in the (x, y) plane. What happens if you move the ball along the z-axis?
Adrian Rosebrock
This code doesn’t take the z-axis into account. But you can certainly combine the code in this blog post with the code from measuring the distance from your camera to an object to obtain measurements along the z-axis as well.
Matt
Yes, I read it. But in your code, I do not understand how you compute the coordinates of the ball in the world frame. Did you compute the coordinates by changing frames (e.g., world frame -> camera frame)?
Adrian Rosebrock
The (x, y)-coordinates of the ball are obtained from the image itself. They are found by thresholding the image, finding the contour corresponding to the ball, and then computing its center. These coordinates are then stored in a queue (i.e., the actual “tracking” part). If you would like to add in tracking along the z-axis, you’ll need to see the blog post I linked you to above. The trick is to apply an initial calibration step that measures the perceived distance in pixels and converts the pixels to actual measurable units.
Matt
Thanks for your answer. But in that tutorial, you do not measure the distance from the camera to an object, only the distance between different objects. Moreover, in your algorithm, I do not see the use of intrinsic parameters (e.g., the focal length of the camera). Could you help me, please? Thanks
Adrian Rosebrock
Hey Matt, as I mentioned in a previous reply to one of your comments, you need to see this blog post for measuring the distance from an object to camera. This requires you to combine the source code to both blog posts to achieve your goal. I’ll see about doing such a blog post in the future, but if you would like to build a system that measures distance + direction, you’ll need to study the posts and combine them together.
Matt
Yes, I see, but I asked myself whether a simple webcam works for 3D tracking… I did a review of the state of the art, and I read that a special 3D camera sensor is required. That's why I asked you 🙂
Adrian Rosebrock
For 3D tracking, you’ll likely want to explore other avenues. If you want to use a 2D camera (which would be a bit challenging), then camera calibration via intrinsic parameters would be required. Otherwise, you might want to look into stereo/depth cameras for more advanced tracking methods. Hopefully I’ll be able to cover both of these techniques in future blog posts 🙂
Matt
I hope for you 🙂
Otherwise, I thought of a simple way of computing the z-coordinate, which consists of using the size of the object to estimate it roughly. When the object appears larger on the camera, it must be closer; inversely, if it's smaller, it's farther away. But I don't know if this method is robust, because if I use a small object, it would be difficult.
Jon
This uses a USB camera. I have your code for the picamera working from another module and would like to use the picamera. What is the correct way to do this?
Can I replace line 26:
camera = cv2.VideoCapture(0)
with:
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
Thanks.
Adrian Rosebrock
Instead of replacing the code using
picamera
directly, I would instead use the “unified” approach detailed in this post.
Dan
Just to get this particular tutorial working with picamera or the unified approach, would you detail (or post) the specific changes to get ball tracking working with the picamera?
Adrian Rosebrock
To be totally honest, I'm likely not going to write a separate blog post detailing each and every code change required. If you go through the accessing Raspberry Pi camera post and the unifying access post, I'm more than confident that you can update the code to work with the PiCamera module.
Start with the template I detail inside the “accessing the picamera module” tutorial. Then, start to insert the code inside the
while
loop into the video frame processing pipeline. It’s better to learn by doing.
WouterH
Why are you calculating the center? The function minEnclosingCircle already returns the center + radius, or am I missing something?
Regards and thanks for the nice example.
Adrian Rosebrock
The
cv2.minEnclosingCircle
function does indeed return the center (x, y)-coordinates of the circle. However, it presumes that the shape is a perfect circle (which is not always the case during segmentation). So instead, you can compute the moments of the object and obtain a “weighted” center. This is a more accurate representation of the center coordinates.
michael
hi Adrian!
Yes, we have two centers here:
1) the center of the object, which is not a perfect circle
2) the center of the estimated ball
I think it is more correct to draw the center of the estimated ball.
Additionally, we can smooth the path to reduce some of the noise in the curve.
Om
Help me, I am not able to install imutils on my RPi. How can I do it?
Adrian Rosebrock
You should be able to install it via pip:
$ pip install imutils
anirban
Hi – Excellent blog, but when running I get the error below. Can someone help?
File “ball_tracking.py”, line 39, in
image, contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
ValueError: need more than 2 values to unpack
Adrian Rosebrock
It sounds like you’re using OpenCV 3; however, this blog post utilizes OpenCV 2.4. To fix this error, you simply need to change the
cv2.findContours
line to:(_, cnts, _) = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
I detail the difference in
cv2.findContours
between OpenCV 2.4 and OpenCV 3 in this blog post.
anirban
Hi Adrian – So kind of you to reply in such a short time; I appreciate your help to starters like me. I am not running OpenCV 3 but am on 2.4.9, which I confirmed by running cv2.__version__ in the Python terminal.
Can you suggest anything else?
Adrian Rosebrock
My mistake: I read your original comment too fast. I should have been able to tell that you were using OpenCV 2.4. In that case, you just need to modify the code to be:
(cnts, _) = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
I discuss the changes in the
cv2.findContours
function between OpenCV 2.4 and OpenCV 3 in this blog post.
Shubham Batra
@Adrian Hey!, I am tracking a table tennis ball using the color segmentation and hough circle method, but this only works fine when the ball is moving slowly.
When the ball is moving very fast then tracking is lost.
I am using the Kinect for Windows V2 Sensor which gives at most 30fps.
Do I need a better high speed camera or any other algorithms can do the trick with the same 30fps camera ?
Adrian Rosebrock
I wouldn’t recommend using Hough Circles for this. Not only are the parameters a bit tricky to get just right, but the other issue is at high speeds, the ball will become “motion blurred” and not really resemble a circle. Instead, I would suggest using a simple contour method like I detailed in this blog post. Otherwise, if you really want to use Hough Circles, you’ll want to get a much faster camera and have the hardware that can process > 60 FPS.
Shubham Batra
@Adrian I’ll try out the simpler contour method and see if that works just fine,
else I’ll have to get a better camera.
Anyways thanks for the help!
reza aulia
If I want to change the colour, where can I find the type of color?
Adrian Rosebrock
Please see the
range-detector
script that I mention in this blog post.
Suraj
Hello Adrian,
I want to blur the rest of the video while the specified colour region stays normal.
Any leads on how I can get to it?
Adrian Rosebrock
I would suggest using transparent overlays. You can blur the entire image using a smoothing method of your choice. This becomes your “background”. And then you can overlay the original, non-blurred object. This will require you to extract the ROI/mask of the object.
Matt
Hi adrian,
I have another question, please. I do not understand how you computed the coordinates of the ball without considering the focal length of your camera in your algorithm.
Could you explain the difference between your approach and the case where you would use the focal length?
Thanks
Matt
Oscar
Hi Adrian.
great tutorial.
One question: is it possible to create an executable of this script and a shortcut? That way, the program runs by double-clicking the shortcut. I have Ubuntu.
Adrian Rosebrock
Thanks Oscar. And regarding your question, I don’t know about that. I’ve never tried to create a Python script that runs via shortcut.
Ihtasham
Hi, I want to know how we can track people and get the track direction. In your tutorial, the direction just starts from where the body left the camera view; also, how can we return the window to its neutral state again?
Adrian Rosebrock
In this tutorial I demonstrate how to compute direction and track direction. You can apply the same methodology to other objects (such as people) as well.
Yasaman
Dear Adrian,
Could we apply this method to detect a bird that flies across different backgrounds?
Adrian Rosebrock
Birds can be various shapes and colors so I don’t think I would recommend this method. You might want to consider background subtraction if your camera is fixed and not moving. Another option may be to train your own custom object detector as well.
Arkya
Hey, thanks for the awesome tutorial.
What if I need to track a ball of any other color (say, black)? How would I get the HSV range of that color?
Adrian Rosebrock
Please see the
range-detector
script that I linked to in the blog post. This script will help you define the HSV color range for a particular object.
Arkya
thanks, got it
yaswanth kumar
Hi Adrian,
Can’t we use RGB color space and RGB colour boundaries to detect a colour?
Adrian Rosebrock
Absolutely. Please check this blog post as an example.
Amin
Hi Adrian,
i make a robot (see that-> https://www.dropbox.com/s/c7ctgyzjhepxqc7/Raspberry_Robot.jpg?dl=0 )
it works based on your code to find a green ball.
Now I'm trying to optimize the code and have some questions:
1. You use the erode & dilate functions. Why don't you use something like this?
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, None, iterations=2)
2. You initialize center to None, but I deleted it and nothing happened :/
3. I'm searching for object detection methods to detect the ball for my robot and found these algorithms:
* transfer the color space to HSV & find contours, then find the circle in them, as you explain in this post
** Hough Circle Transform -> as you said, it needs a camera with a high FPS
*** HOG
**** Cascade Classification
***** CAMShift
Now I want to know which algorithm is fastest and has the best performance.
What other algorithms can I use to find the ball in video frames?
thanks
Adrian Rosebrock
You could certainly use a closing operation as well. In this case, I simply used a series of erosions and dilations. As for your second question, I’m not sure what you mean.
The fastest tracking algorithm will undoubtedly be CAMShift. CAMShift is extremely fast since it only relies on the color histogram of an image. The Hough circle transform can be a bit slow, and worse, it’s a pain to tune the parameters. Haar cascades are very fast, but prone to false-positive detections. HOG is slower than Haar, but tends to be more accurate. If all you want to track is a green ball, then I would suggest using either the
cv2.inRange
function or CAMShift.
Amin
thanks a lot
Just one thing I had forgotten to ask:
sometimes when I move the ball out of the camera view
I get this error:
ZeroDivisionError: float division by zero
It's related to this line:
center = (int(M[“m10”] / M[“m00”]), int(M[“m01”] / M[“m00”]))
How do I fix it?
Adrian Rosebrock
If the ball goes out of view and you are trying to compute the center, then you can run into a divide-by-zero bug. Changing the code to check that M[“m00”] is non-zero before computing the center should resolve the issue.
Ewerton Lopes
Hey Adrian!
First of all, thanks for the blog! It is amazing! 😀
Right now I am doing my PhD research, and I need to track a person using a robot base. Not all people in the scene, but just one given person of interest, let's say! Well, I am thinking of tracking her based on a given color the person is wearing (let's say green!). I am wondering, however, whether it is possible, for instance, to use a kind of AR or QR code tag on the person instead of the color, just to avoid getting noise from other colors around. Do you have any idea on this matter? I would love to hear your feedback!
Thanks, man!
Adrian Rosebrock
Sure, this is absolutely possible. It all comes down to how well you can detect the “object/marker”. If it’s easier to detect the person via color, do that. If the QR code gives you better detection accuracy, then go with that. I would suggest running a few tests and seeing what works best in your situation.
Jarno Virta
Hi! Thanks for the tutorial! I have been learning OpenCV for a while now and I must say it is fascinating! I have an Arduino robot that I can control from my phone via bluetooth and it can also move around randomly while avoiding obstacles using a sonar range finder. I’m in the process of adding a Raspberry Pi to the robot, which will detect a ball and instruct the Arduino to move toward it. Your tutorial has been very useful!
I was thinking of using HoughCircles to check that the object is in fact a ball, but this proved a bit too difficult because of other stuff being picked up; and if I set the color range for the mask and the diameter for the circle too restrictively, the ball is not found because of, among other things, variations in the tone of the ball's color. The robot should be able to detect the ball at some distance, so that brings certain requirements as well. I must say, I don't fully understand Hough circle detection either; I often get a huge number of circles… Maybe just detecting contours is enough for now.
Is it possible to detect the ball without resorting to color filtering?
Adrian Rosebrock
The parameters to Hough Circles can be a real pain to tune, so in general, I don’t recommend using it. I would suggest instead filtering on your contour properties.
As for detecting a ball in an image without color filtering, that’s absolutely possible. In general, you would need to train your own custom object detector.
Ed
Hi Adrian,
I've been following a few of your tutorials and have OpenCV set up on my Pi, but I cannot get this tutorial to work (even running your source exactly).
Whenever I type the command to run it, I simply end up back at the prompt. Here's my output:
(cv) pi@pi:~/Documents $ python ball_tracking.py –video ball_tracking_example.mp4
(cv) pi@pi:~/Documents $
Any idea why it's doing this?
Adrian Rosebrock
If you end up back at the prompt, then OpenCV cannot open your .mp4 file. Make sure you compiled OpenCV on your Pi with video support.
Ed
Hi,
I'm not sure that I have. It's installed on a Raspberry Pi as per your tutorial to install OpenCV 3.0 and Python 2.7.
Is video support required for accessing the video feed from the picamera?
Adrian Rosebrock
Video support is not required for accessing the Raspberry Pi camera module provided that you are using the Python
picamera
package. However, if you are reading frames from a video file, then yes, video support would be required.
Yao Lingjie
Hi Adrian,
Can I know how do I find out the object’s lower and upper boundaries by using the imutils range_detector?
Adrian Rosebrock
My favorite way would be to add a “print” statement at the bottom of the range_detector script that prints out the values when you press a key on your keyboard or exit the script. I’m currently looking at overhauling the script to make it a little more user friendly.
Olivier Supplien
Hi,
I am looking for a way to track several objects at the same time, like coloured sticky labels.
Do you have any ideas?
By the way, your code was very helpful and very well commented, thank you.
Adrian Rosebrock
You can certainly track multiple objects at the same time. You just need to define the lower and upper color boundaries for each object you want to track. Then, generate a mask for each colored object and use the
cv2.findContours
function to find each of the objects.
Aris
Hey Adrian
On lines 19 and 20, is it the HSV or RGB color code?
Adrian Rosebrock
That is in the HSV color space.
Marcel
Hello Adrian,
Excellent tutorial on tracking ball with OpenCV.
I am starting studies in computer vision.
I have some questions..
The first is: how can I change the trail color of the ball and make it permanent in the image?
And the second question: how would I write the code to find another color and apply a square mask?
Thanks,
Adrian Rosebrock
To track the movement of the ball, you can use this post.
To track a different color object, be sure to use the
range-detector
script that I mention in the blog post. You can apply a square mask using
cv2.rectangle
Marcel
Many thanks for the reply,
I managed to create a square mask for the color red and would like to create a condition to check whether the center of the green ball went over the red region. How could I write this condition?
Adrian Rosebrock
Basically, all you need to do is create two masks — one for the red and one for the green ball. Then, once you have these masks take the bitwise AND between them. This will give you any regions where the two colors overlap. You can find these regions using
cv2.findContours
or, more simply,
cv2.countNonZero
Mark
Hi Adrian,
thanks for your great job with sharing all this knowledge!
But I have a question. Have you ever tried to use higher FPS camera with raspberry? For instance action camera like GoPro. I’m wondering is it even possible. It should be connected via USB so I think it could be a bottleneck. Considering that CSI is the best option here => raspberry camera is the only way to capture HD video at ~60FPS in real time, right? I’ve read about some tricky HDMI input to CSI adapter so said GoPro could action like raspberry cam but its like 2 times the price of RPi3 and the availability leaves much to desire… What do you think?
Have a nice day!
Adrian Rosebrock
I personally haven’t tried using a GoPro before, regardless of processing the frames on a Pi or standard hardware. In general, I think the Pi will be strained to process 60 FPS unless you are only doing extremely basic operations on each frame.
Nilesh
Hello,
Wonderful post. I have a question along similar lines: how about tracking two or more same-color objects in the video? Let's say, for instance, we have 2 red, 2 green, and 1 blue ball in the scene. How would you recommend tracking them with unique identifiers?
I am expecting 5 different trajectories (similar to one in ball tracking example), one for each ball. Thank you for your help.
Adrian Rosebrock
For multiple objects, I would suggest constructing a mask for each color. Once you have the masked regions for each color range, you can apply centroid tracking. Compute the Euclidean distance between your centroid objects between subsequent frames. Objects that have minimum distance should be “associated” and “uniquely tracked” together.
John
Hello ! I actually have a few questions to ask
What I'm trying to do is run this program on a Raspberry Pi 3 using the PiCamera, but I keep getting this error: 'NoneType' object has no attribute 'shape'.
I tried to modify your code a little by adding these lines:
from picamera import PiCamera (at the very top)
camera = PiCamera() (line 26)
But then I get the error 'PiCamera' object has no attribute 'read'.
I looked at the tutorial here https://www.pyimagesearch.com/2015/03/30/accessing-the-raspberry-pi-camera-with-opencv-and-python/ but still couldn't quite understand it.
Not sure what to do about this. Would really appreciate some help, thanks!
Adrian Rosebrock
Anytime you see an error related to “NoneType”, it’s because an image was not read properly from disk or (in this case) a frame was not read from a video stream. I would suggest going back to the Accessing the Raspberry Pi camera post and ensure that you can get the video stream working without any frame processing. From there, start to add in the code from this post in small bits.
Finally, if you need a jumpstart, I would suggest going through Practical Python and OpenCV to help you learn the basics of computer vision and OpenCV. All code examples in the book are compatible with the Raspberry Pi.
John
I managed to bring up a live video feed but still can't get ball_tracking.py to work. I tried inserting with picamera.PiCamera() as camera: before the image processing part but still received the same error.
Adrian Rosebrock
It’s really hard to say what the exact issue is without being in front of your physical setup. I’m not sure how much it would help, but I would suggest going through my post on common camera errors on the Raspberry Pi and seeing if any of them relate to your particular situation.
Mostafa Sabry
Hi Adrian,
I really appreciate your effort in these blogs, which we benefit a lot from.
I am trying to run the code on Python 2.7 with OpenCV 2 and I keep getting the same error: 'NoneType' object has no attribute 'shape'.
I am working on a computer, NOT a Raspberry Pi, as I checked in the comments above.
I would be grateful if you could help me handle this issue.
I am using a webcam built into my laptop, and I checked that it is working using the command
cv2.VideoCapture(0)
in a separate Python file.
I am new to Python. I traced the code to try to run it on the video file instead, but I failed to understand the “argparse” library.
Adrian Rosebrock
Hi Mostafa, I think you might have missed my reply to Nick above. My reply to him is true for you as well. Anytime you see a frame as “NoneType”, it’s because the frame was not read properly from the video file or video stream. Given your inexperience with argparse, I think this is likely the issue. Be sure to download the source code to this post using the “Downloads” form and then use my example command found at the top of the file to help you execute it.
Mostafa Sabry
Thank you Adrian for your reply.
After searching a little online, I found a suggestion that DID WORK, which is to put a timed delay (using the imported “time” library) just after the command “cv2.VideoCapture(0)” to give the webcam time to load.
The code did work. Thank you very much for your coordination and the incredible stuff you are providing.
I might need your help soon 🙂 as I want to adjust the code a little bit to fit my problem.
THANKS
Adrian Rosebrock
Great job resolving the issue Mostafa!
Ranjani Raja
I installed the imutils package but I still get the error “no module named imutils”.
Adrian Rosebrock
If you are using a Python virtual environment, make sure you have installed imutils into the Python virtual environment. I would also read the first half of this blog post to learn more about how to use Python virtual environments.
Andre Brown
Hi Adrian
I would like to know if it is possible for the contrail to be drawn based on the size of the detected contour or circle, so that as you move the ball closer the thickness of the contrail increases, and as it moves further away it decreases.
Also, is it possible to not have the contrails disappear over time? I have tried setting the buffer to 1280 but they still eventually disappear. It seems they start thick, then thin to nothing with time. I would like to keep all contrails in the buffer; I am currently writing these to an image file on exit.
thanks
Andre
Adrian Rosebrock
It’s certainly possible to make the contrail larger or smaller based on the size of the ball. The larger the ball is, the closer we can assume it is to the camera. Similarly, the smaller the ball is, the farther away it is from the camera. Using this relationship you can adjust the size of contrail. The radius of the minimum enclosing circle at any given time will give you this information.
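As a rough illustration of that relationship, a small helper could map the enclosing-circle radius to a line thickness. This is just a sketch; the scale factor and clamping values below are arbitrary choices, not from the post:

```python
def contrail_thickness(radius, scale=0.25, min_t=1, max_t=12):
    # map the radius of the minimum enclosing circle to a drawing thickness:
    # a bigger (closer) ball yields a thicker contrail, clamped to a sane range
    return int(max(min_t, min(max_t, radius * scale)))

print(contrail_thickness(8))    # small/far ball -> thin line (2)
print(contrail_thickness(60))   # large/close ball -> clamped to max (12)
```

The returned value would then be passed as the thickness argument of the cv2.line call that draws the contrail.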
As for keeping all points of the contrail, simply swap out the deque for a standard Python list.
Mohamed
I’d like to thank you for your efforts. The code is well explained.
I have a question that might be naive as I am not a vision guy. Does the same/similar code work on non-circular objects? For example, rectangular ones?
Thanks again.
Adrian Rosebrock
Yes, the code certainly works for non-circular objects — this code assumes the largest region in the mask is the object you want to track. This could be circular or non-circular. If your object is non-circular you may want to compute the minimum bounding (rotated) rectangle versus the minimum enclosing circle. Other than that, nothing has to be changed.
Daniel
Hi Adrian!
When I run your code it works pretty well with my green ball, but when there is no ball in the screen the red contrail turns crazy and doesn’t disappear as in your video. What could be the problem?
PS: Awesome website! Thanks for sharing your work.
Adrian Rosebrock
The red contrail tracks the last position(s) of the ball. If the red contrail is doing “crazy” things then check the mask. There is likely another green object in your video stream.
huang
How to display the data in the form of text in the deque
Adrian Rosebrock
Can you elaborate? I’m not sure what you are trying to accomplish.
lokesh p
Can we track the ball in the outfield? Is that possible?
Adrian Rosebrock
You can, but it’s not easy. You would require a high FPS camera since baseballs move extremely fast. If the camera is not a high FPS then you’ll have a lot of motion blur to deal with. In fact, even with a high FPS camera there will still be a lot of motion blur. Tracking motion of fast moving objects normally is a combination of image processing techniques + machine learning to actually predict where the ball is traveling.
I would suggest starting with this paper that reviews a ball tracking technique for pitches.
Ejjelthi
Hi Adrian,
I want to track a player and a ball (as in a football setting).
Could you tell me what I should change in the code?
Thanks in advance.
Adrian Rosebrock
Tracking a player and a ball is a much more advanced project. I wouldn’t recommend simple color thresholding for that. Instead, you should investigate correlation-based filters. These are much more advanced, and even then being able to track a player across the pitch for an entire game is unrealistic. We can do it for short clips, but not for entire matches.
Ranim
Thank you so much for your efforts. I am really enjoying and benefiting from this blog. I have questions regarding the number of frames.
Is it possible to know how many frames per second we are processing for the video?
Can we customize it to process a specific number of frames per second?
Adrian Rosebrock
You can use this blog post to help you determine how many frames per second you are processing. Calls to time.sleep will allow you to properly set the number of frames per second that you want to process.
Ranim
Thanks a lot.
Adrian Rosebrock
No problem, happy I could help 🙂
mandy
I am actually doing a similar project. A small question: after obtaining the centroid x, y, I
(1) store it in the buffer (buffer size = 128 is better for my project)
(2) draw the line using OpenCV
How do I convert your code to Java?
Thanks if you can help
Adrian Rosebrock
Hey Mandy, while I’m happy to provide free tutorials to help you learn about computer vision and image processing, I only support the Python code that I write. I do not provide support in converting code to other languages. I hope you understand.
Marlin
I would like to transform this example to track multiple objects of different colors. However, how can I define a long list of colors and then define the upper and lower boundaries of each color given its RGB (or HSV) value?
For example: I want to detect a silver ball the RGB for silver is 192, 192, 192. The HSV for silver is 0 0 75.
How can I get the upper and lower limits of the color silver without actually using the script and detecting an object?
Adrian Rosebrock
Hey Marlin — I would suggest using the HSV or L*a*b* color space as they are easier to define color ranges in. The problem will be lighting conditions. Consider a “white” object for instance. Under blue light the object will have a blue-ish tinge to it. Under direct light the white object will actually reflect the light. This makes it challenging to use color-based detection in varying lighting conditions.
In short, you should play around with varying HSV and L*a*b* values in your lighting conditions to determine what appropriate values are.
Alex Johansson
HI,
What would be the simplest ready to use (free or cheap) software to use for just tracking a tennis player’s movement on the court in order to create visual tracing or heat map of that movement?
Thank you for any help/direction.
Adrian Rosebrock
That really depends on the type of camera feed you are using. If the camera is fixed then simple background subtraction would suffice. If you are trying to work with moving cameras with lots of different lighting conditions then the problem becomes much harder. In general you will not find an out-of-the-box solution for this — you’ll likely need to code the solution yourself.
Alex
Thank you so much Adrian. It would be a fixed camera.
Shervin Aslani
Hi Adrian, awesome work. I’m new to Python but I was able to learn quite a bit by going over your tutorial and code. I’m currently working on a school project where we are trying to track the path of a barbell while someone is performing weighted squats so we can assess and correct the technique being performed. We have painted the end of the barbell a bright yellow, which allows us to track the bar path using the contrails you designed. In order for us to properly assess squatting technique we need to measure velocity and position, and be able to track these kinematic relationships. Is it possible to save the trail or path positions with time to an Excel file or something similar?
Also, we were wondering if it would be possible to record the video stream so we can review it in the future?
Thanks for all your help and support.
Adrian Rosebrock
Very cool, as a fellow lifter I would certainly be interested in such a barbell tracking project. Regarding measuring position (and therefore velocity), you can derive both by extending the code from this post.
I then demonstrate how to record the video stream to video here.
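As a starting point for the logging side of that extension, here is a minimal sketch (the TrailLogger name and the CSV layout are my own invention, not from the post) that appends one (timestamp, x, y) row per tracked frame; the resulting CSV opens directly in Excel:

```python
import csv
import time

class TrailLogger:
    # append (timestamp, x, y) rows so position/velocity can be analyzed later
    def __init__(self, path):
        self.f = open(path, "w", newline="")
        self.writer = csv.writer(self.f)
        self.writer.writerow(["timestamp", "x", "y"])

    def log(self, center):
        # center is the (x, y) tuple computed in the main tracking loop
        if center is not None:
            self.writer.writerow([time.time(), center[0], center[1]])

    def close(self):
        self.f.close()

# in the main loop you would call logger.log(center) once per frame
logger = TrailLogger("bar_path.csv")
logger.log((120, 340))
logger.log((118, 300))
logger.close()
```

Velocity can then be estimated offline by differencing consecutive rows and dividing by the timestamp delta.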
Carlos
Hi Adrian
I was wondering how I can implement this to identify several balls at the same time; I don’t really need to draw the connecting lines.
Thanks for your help
Adrian Rosebrock
You need to define color thresholds for each of the balls. Loop over each of the boundaries, color threshold the image, and compute contours. Instead of keeping only the largest contour, keep the contours that are sufficiently large.
ANIL
Hi Adrian,
Thanks for the well explained tutorial. I want to use your code to detect eye pupils by using 2 cameras simultaneously. Then, I want to use serial communication between Python and Arduino (possibly by using pyserial) to drive servo motors according to the location of the eye pupils in real time. I’m fairly new to both Python and OpenCV. How should I proceed to run the code for 2 cameras simultaneously?
Thanks in advance for any support.
Adrian Rosebrock
If you’re looking to access 2 cameras at the same time, please take a look at this blog post.
kanta
How do I see the tracking video for this code? Please help me. The code executed successfully but I don’t know how to see the output.
Adrian Rosebrock
Hey Kanta — are you using my example video included in the “Downloads” section of this post? If so (and you’re not seeing an output video) then it sounds like your OpenCV installation was not compiled with video support. I would suggest following one of my tutorials on installing OpenCV on your system or using my pre-configured Ubuntu VirtualBox virtual machine included in Practical Python and OpenCV.
Juekun
Thanks for the awesome and well-explained tutorial!
Adrian Rosebrock
Thanks Juekun, I’m happy it helped you 🙂
ivandrew
How do I eliminate the red line on the detection of the ball? Which parts must be replaced or removed? Thank you in advance.
Adrian Rosebrock
Comment out the call to cv2.line to remove the red line.
syukron
How do I track the color of clothes, so that the robot can follow the color, using OpenCV 3.0.0 on a Raspberry Pi with the raspicam?
Adrian Rosebrock
There are many ways to track an object in a video stream. For color-based methods you should consider CamShift.
Sam
You may have answered this already, but what if you want to track a red ball or a blue ball?
The crux of the matter is knowing what to pass for color filter in the inrange function.
Where did you get the values you passed in?
Adrian Rosebrock
I’ll write a blog post to demonstrate exactly how to do this since many readers are asking, but the gist is that you need to use the range-detector script to manually tune the color threshold values.
saransh
eagerly waiting for your tutorial on range-detector.
Apiz
Hi, Adrian. Why is the video from my Raspberry Pi camera flipped?
Adrian Rosebrock
It’s hard to say. Perhaps you installed your Raspberry Pi camera module upside down?
Himani
Hi Adrian,
Happy New Year Adrian!
I want to identify the white line in an image and the (x, y)-coordinates of the line. Can you help me?
Thank you.
Adrian Rosebrock
The technique to do this really depends on how complex your image is. For basic images, thresholding and contour extraction is all you need. For more noisy images, you may need to apply a Hough Lines transform. For complex images, it’s entirely domain dependent.
Gabriel Rech
Hi Adrian,
Thanks for the tutorial!
I’m having some difficulty detecting balls robustly.
Basically I’m using your range-detector script to identify the mask, and it works, but when I change the position of the ball, such as moving it farther back, the parameters I used to detect the ball in the first position don’t detect the entire ball; sometimes they don’t detect the ball at all. In other words, my code isn’t robust enough. How can I make it more robust?
Another question: I didn’t quite understand the function of the HSV transform. I know what it does, but I don’t understand why you are using it.
Thanks for your attention
Adrian Rosebrock
It sounds like you’re in an environment where there are dramatic changes in lighting conditions. For example, the area of the room may be “more bright” close to your monitor and then as you pull the ball back the ball shifts into a darker region of the room.
Ideally, you should have uniform (or as close to uniform as possible) lighting conditions to ensure your color thresholds work regardless of where the ball is. It’s easier to write code that works for well-controlled environments than for unconstrained ones.
If you can’t change your lighting conditions, consider trying the L*a*b* color space or using multiple color thresholds. We use HSV/L*a*b* over RGB since it tends to be more intuitive to define colors in these color spaces and more representative of how humans perceive color.
Luca Mastrostefano
Hello Adrian,
First of all, I would like to congratulate with you for this amazing blog!
I’ve just installed OpenCV on my laptop (https://www.pyimagesearch.com/2016/11/28/macos-install-opencv-3-and-python-2-7/) and copied-pasted your code.
It works perfectly as I run it!
But it goes at 12 FPS instead of reaching the 32 FPS you referred to.
How can I speed up this algorithm? Is the 32 FPS version of your code different from the one published in this blog post?
Currently, I’m working on a Macbook Pro (2,4 GHz Intel Core i5, 8GB Ram) with OpenCV 3.2.0 and Python 2.7.
Thank you again for your help!
Adrian Rosebrock
My suggestion would be to apply video stream threading to help improve your frame rate throughput.
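The idea behind stream threading is to move the blocking read() call into a background thread so the main loop always has the most recent frame available. A rough, library-free sketch of the pattern (imutils.video.VideoStream implements a polished version of this; the class names below are my own, and the DummyCamera stands in for a real capture device):

```python
import threading
import time
import numpy as np

class ThreadedReader:
    # continuously poll a source's read() in a background thread so the
    # main loop never blocks waiting on frame capture
    def __init__(self, source):
        self.source = source
        self.frame = None
        self.stopped = False
        self.thread = threading.Thread(target=self._update, daemon=True)

    def start(self):
        self.thread.start()
        return self

    def _update(self):
        while not self.stopped:
            self.frame = self.source.read()

    def read(self):
        # non-blocking: returns the most recently grabbed frame
        return self.frame

    def stop(self):
        self.stopped = True
        self.thread.join()

class DummyCamera:
    # stand-in for a real camera: always returns a black 480x640 frame
    def read(self):
        return np.zeros((480, 640, 3), dtype="uint8")

stream = ThreadedReader(DummyCamera()).start()
time.sleep(0.05)       # give the background thread a moment to grab a frame
frame = stream.read()
stream.stop()
print(frame.shape)  # (480, 640, 3)
```

In practice you would wrap cv2.VideoCapture (or the PiCamera) instead of DummyCamera.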
Luca Mastrostefano
Thank you for your fast response!
I have to correct my first post:
12 FPS was the speed with the stream from the camera.
If I switch to a pre-saved video the same algorithm goes to 63 FPS!
So, it is really fast as it is.
But as you suggest I’m testing the video stream threading and now it is really super fast! The sampling from the camera goes up to hundreds of FPS.
I’m eager to test the tracker with this technique as well!
Thank you again!
James
Very useful information Adrian,
I’m curious is it a simple line of code added that would show the speed/velocity of the ball?
Adrian Rosebrock
You can certainly derive the speed and velocity of the ball using the tracked coordinates; however, the numbers may be slightly off if your camera isn’t calibrated. It would serve as a simple estimation though.
Chris
Hi
I’m looking to run this code with a live feed from a Pi camera. Can that be done?
Adrian Rosebrock
Yes, you just need to update the code to access the Raspberry Pi camera module rather than cv2.VideoCapture.
Jaspreeth
hii great work !!!
I’m planning to design a quadcopter that can track objects and move along with the object.
How can I use this code for trajectory planning? Can you help me please?
thanks in advance 🙂
Adrian Rosebrock
Hey Jaspreeth — this certainly sounds like a neat project! However, I don’t have any tutorials on trajectory planning. I will certainly consider it for a future blog post.
Rhitik
in which directory should I install imutils?
Adrian Rosebrock
You should install “imutils” via the “pip” command:
$ pip install imutils
This will automatically install imutils for you.
Dan Price
Adrian,
I thoroughly enjoy your book and tutorials; they are really helping a newbie like me understand the concepts. Is this the best method to track an IR LED using a Pi NoIR camera? I have the code thresholding for the “white” light, but it is highly dependent on a compatible background. Would you help please?
Adrian Rosebrock
I admittedly have only used the NoIR camera once so I regrettably don’t have much insight here. The problem here is that trying to detect and track light itself should be avoided. Think of how a camera captures an image — it’s actually capturing the light that reflects off the surfaces around it. This makes detecting light itself not advisable.
Wanderson
Hi Adrian,
Have you worked with the kalman filter? Do you have a link to indicate?
Thank you.
Wilbur Bacalso
Hi Adrian,
Thanks for all your posts. I’m super new at all this and have been learning a lot from your blog. I downloaded the code and video file for the ball tracker and I’m getting an error of: NameError: name ‘xrange’ is not defined. I’m obviously missing something and I can’t seem to figure out what. Any help would be appreciated. Thanks in advance!
Adrian Rosebrock
It sounds like you’re using Python 3, where the xrange function is simply named range (no “x” at the beginning). Update the function call and it will work.
Wilbur Bacalso
That did it! Thanks for your quick response Adrian. Look forward to buying your lessons and learning more when I get some cash together.
Adrian Rosebrock
Awesome, I’m happy we could clear up the error 🙂
Glenn Holland
Hi Adrian.
Great Tutorial.
You are getting upwards of 32fps with colour detection tracking, do you think you could get a similar rate using brightness detection like you demoed in your tutorial of finding the bright spot of the optic nerve in a retinal scan?
Adrian Rosebrock
Since the tutorial you are referring to relies on only a Gaussian blur followed by a max operation, yes, you should be able to obtain a comparable FPS.
aslan
Hi Adrian,
Your code works perfectly once the exact HSV range for an object is set, thanks for this, but under varying lighting conditions the HSV values of an object may change significantly, especially the S and V values, so it may detect other colors in different lighting conditions. For example, I set the HSV values for my blue object, but in some conditions it detected gray or black things. You mentioned this problem above and said one needs to find a solution oneself.
Can you suggest any technique, algorithm, or document for this problem?
Adrian Rosebrock
If your lighting conditions are changing dramatically you may want to try the L*a*b* color space. It might also help if you can color calibrate your camera ahead of time. If that’s not possible, you might want to consider machine learning based object detection. Depending on your objects, HOG + Linear SVM would be a good start.
peter
Hi Adrian, Thank you for you amazing posts first!
I’m new to OpenCV. Following this post, I now can detect a moving object within a certain HSV range via my webcam. Nonetheless, I have encountered some problems when I tried to only detect multiple round tennis balls.
Here are my concerns:
(i) I can’t detect multiple balls. I tried a for-loop and I also tried to follow one of your posts (https://www.pyimagesearch.com/2016/10/31/detecting-multiple-bright-spots-in-an-image-with-python-and-opencv/). I also tried the watershed algorithm, but my program’s result is extremely unstable (the circle jumps around and there are lots of unnecessary tiny circles).
(ii) I can’t detect only round objects. I tried the HoughCircles function; however, it seems to detect perfect circles only. Then I tried the circularity parameter via the SimpleBlobDetector using the HSV picture after some thresholding. I’m sure that only the contour of the tennis ball is left in the HSV image, but the SimpleBlobDetector always ignores my tennis ball contour.
(iii) When there is another object with a similar HSV range, my program will output a false result. http://imgur.com/q8xverE http://imgur.com/DAuUz1g
Any help would be appreciated.Thanks in advance!
Adrian Rosebrock
If you are getting a lot of tiny circles you might be detecting “noise”. Try applying a series of erosions and dilations to clean up your mask.
You are also correct — if there are multiple objects with similar HSV ranges, they will be detected (this is a limitation of color-based object detection). In that case, you should try more structural object detection such as HOG + Linear SVM. I discuss this method in detail inside the PyImageSearch Gurus course.
Louay
Hi Adrian,
Thanks a lot for the tutorial! I managed to replicate it in C# to integrate in a project I’m working on.
Now I want to change the color of the tracked object. I read all the comments and your answer was to use the range-detector, which I really can’t use because I’m a Python noob.
It would be great if you could guide me toward another way to find the upper and lower bounds of a color.
I’m particularly confused because in your green upper you have (64, 255, 255) which seems like an RGB value! As far as I know in hsv s and v only go up to 100. But also, the lower bound (29, 86, 6) actually corresponds to green in RGB.
If you could please explain a little more how you have those values, it would help a lot to find the ones I’m looking for (Orange, for a ping pong ball)
Thanks again and keep up the good work!
Adrian Rosebrock
Thank you for the suggestion Louay. I’m actually in the process of overhauling the range-detector script to make it easier to use. Once it’s finished I’ll post a tutorial on how to use it.
Louay
great news! thanks.
In the meantime, could you explain how you have your values and how do they correspond to hsv?
I’m just trying to mimic that so I get my values for other colors (using a simple color picker tool)
Adrian Rosebrock
The values I determined using the range-detector script which used the HSV color space when processing the video/frame. I’m not sure what you mean by how they correspond to HSV? Are you asking how to convert RGB to HSV?
Greg
I’m still trying to figure out the answer to Louay’s question:
“I’m particularly confused because in your green upper you have (64, 255, 255) which seems like an RGB value! As far as I know in hsv s and v only go up to 100. But also, the lower bound (29, 86, 6) actually corresponds to green in RGB.”
It seems like you have set greenLower and greenUpper using RGB values but then you use them to mask an image in HSV format. In HSV, (29, 86, 6) is black, not green. In RGB format, (29, 86, 6) is a nice shade of green.
The question is, are you setting lowerGreen and upperGreen in RGB or HSV?
Adrian Rosebrock
In OpenCV HSV values are in the range:
– H: [0, 180]
– S: [0, 255]
– V: [0, 255]
In this tutorial we are using HSV for the color threshold.
Vijay
Hi Adrian,
I need to extract key frames from the given video to do certain machine learning algorithms.
If you have any idea about it, can you share some details. I need to use opencv and PIL for this purpose.
Converting videos into frames and extraction key-frames from frames (Using Python – OpenCV and PIL)
Videos –> Frame extraction from videos (using Python) –> Frames (DB) –> keyframe extractor (Using Python) –> Keyframes (DB)
Thanks in advance!
Adrian Rosebrock
I think this all depends on what you call a “key frame”. I discuss how to detect, extract, and create short video clips from longer clips inside this post. If you’re instead interested in how to efficiently extract and store features from a dataset and store in a persistent (efficient) storage system, take a look at the PyImageSearch Gurus course.
Annie Dobbyn
Right this might be a dumb question but how did you find your FPS?
Adrian Rosebrock
The FPS of your physical camera sensor? Or the FPS processing rate of your pipeline? Typically we are concerned with how many frames we can process in a single second.
sinjon
Hi Adrian,
Is there a way to check all modules have been downloaded?
My OpenCV wouldn’t bind with Python in the virtual environment, so I’m currently working outside it.
I’m getting error messages that the mask from mask.copy() is not defined, making me think something is missing.
Thanks in advance
Adrian Rosebrock
I’m not sure what you mean by “all modules have been downloaded”. You can run pip freeze to see which Python packages have been installed on your system, but this won’t include the cv2.so file in the output. You can also ls the contents of your Python’s site-packages directory.
sinjon
Hello Adrian,
I’m getting an error that the ‘mask’ from mask.copy() is not defined.
I was unable to bind my OpenCV to Python in my virtual environment so I’m building outside it. I feel like this could be causing problems.
Thanks in advance
Arati
Sir, can I use a Kinect sensor for accessing video? Is it possible? Please explain how.
Umang
hello Adrian
I am doing a similar kind of project, but I want to track vehicles from a CCTV camera to detect the speed of a vehicle. Can you suggest a method?
Adrian Rosebrock
Determine the frames per second rate of your camera/video processing pipeline and use this as a rough estimate to the speed.
Jim
Adrian,
This is great. I’m essentially wanting to make an extension of this application, but have the ball (or tracking marker) fixed to a person, and measure how quickly (in real-world speed) they can shuffle from side to side.
Assuming they are moving in a straight line perpendicular to the camera, could this application be extended to calibrate pixels in the frame to a real world distance, and somewhat accurately measure the subject’s side to side motion (velocity, acceleration)?
Adrian Rosebrock
Yes, provided that you know the approximate frames per second rate of the camera you can use this information to approximate the velocity.
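As a rough sketch of such an estimation (the calibration constant pixels_per_meter is a hypothetical value you would have to measure for your own setup, e.g. from a known-length marker in the frame):

```python
import math

def estimate_speed(p1, p2, pixels_per_meter, fps):
    # displacement between two consecutive centroids, converted to meters,
    # divided by the time between frames (1 / fps seconds)
    dist_px = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return (dist_px / pixels_per_meter) * fps   # meters per second

# e.g. the subject moved 30 px between frames at 30 FPS, with 300 px per meter
print(estimate_speed((100, 200), (130, 200),
                     pixels_per_meter=300.0, fps=30.0))  # roughly 3.0 m/s
```

Acceleration follows by differencing consecutive speed estimates; note the calibration only holds while the subject stays in the plane where it was measured.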
Mehdi
Just Great
Arun
Arun
its over
Adrian Rosebrock
If you are getting a NoneType error, it’s likely because your system is not properly decoding the frame. See this blog post for more information on NoneType errors.
dharu
I want the report of this project
Adrian Rosebrock
I don’t know what you mean by “report”.
Ghani Putra
I installed imutils in virtual environment but still i had error said “No modules named imutils” even when i checked in the console it showed me the directory of the folder (so it has already installed). What should i do?
Adrian Rosebrock
Double-check that imutils was correctly installed into your virtual environment by accessing the environment via the workon command and then running pip freeze.
Shraddha
Hi Adrian,
This code is amazing! It works perfectly with a tennis ball, but when I try to implement it with a white table tennis ball it doesn’t track it. I used the range-detector script to get the min/max threshold values as follows: whiteLower=[158,136,155] and whiteUpper=[255,255,255], and just replaced greenLower and greenUpper with those values, which are in BGR. I’m using mp4 video files, one with the table tennis ball against a brown background (which it tracks) and the same background with the white ball (no luck here). The issue seems to be that cnts=0, so maybe it’s not finding the contour?
Shraddha
I meant “green tennis ball with a brown background(which it tracks) and same background with the white ball(no luck here). “
Adrian Rosebrock
It sounds like your mask does not contain the object you are looking for. Try displaying the mask to your screen to help debug the script. It might be that your color thresholds are incorrect as well.
Tony Du
Hi Adrian:
When I run your code from https://www.pyimagesearch.com/2016/12/26/opencv-resolving-nonetype-errors, it can return an image on screen, but when I use my webcam it still returns a NoneType error. I’ve already used VLC to test my webcam and it succeeded! Could you help me with this question?
Adrian Rosebrock
Hi Tony — I cover NoneType errors and why they happen when working with images/video streams in this blog post. I would suggest starting there.
Yusron
Hi Adrian, I have some questions for you. I have a project with motion detection based on a colored object.
1. The objects I use do not have to be circular; they may be square or formless, because I just want to focus on color.
2. How can I detect whether the object is moving or not?
Adrian Rosebrock
If you want to use color to detect an object, then you would use the color thresholding technique mentioned in this blog post. Compute the centroid of object after color thresholding, then monitor the (x, y)-coordinates. If they change, you’ll know the object is moving.
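A minimal, NumPy-only sketch of that monitoring idea (the function names and the 2-pixel jitter threshold are my own choices, not from the post):

```python
import numpy as np

def centroid(mask):
    # centroid of a binary mask as the mean of its "on" pixel coordinates
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return (xs.mean(), ys.mean())

def is_moving(c_prev, c_curr, min_shift=2.0):
    # declare "moving" only if the centroid shifted more than min_shift pixels,
    # which filters out small jitter from thresholding noise
    if c_prev is None or c_curr is None:
        return False
    return bool(np.hypot(c_curr[0] - c_prev[0], c_curr[1] - c_prev[1]) > min_shift)

a = np.zeros((20, 20)); a[5:8, 5:8] = 1
b = np.zeros((20, 20)); b[5:8, 12:15] = 1   # same blob shifted 7 px right
print(is_moving(centroid(a), centroid(b)))  # True
```

In the real pipeline the masks would come from cv2.inRange on consecutive frames.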
sinjon
Hello Adrian,
Is there a way to set the video / image as an array, so that when the buffer reaches the highest of its journey before returning, it’ll stop tracking?
Many thanks
Sinjon
Adrian Rosebrock
Hi Sinjon — the
dequeue
data structure can store an object that you want (including a NumPy array). If would like to maintain a queue of the past N frames until some stopping criteria is met, just update thedequeue
with the latest frames read from the camera.Marcos Idaho
Hi Adrian, great job. I am very new to OpenCV and Python. When I give the path of the default video, the video is not loading. The switch fails and my webcam turns on, and green objects can be tracked. Can you tell me how to add the path of the video file in the arguments?
Adrian Rosebrock
Hey Marcos — you supply the video file path via command line argument when you start the script:
$ python ball_tracking.py --video ball_tracking_example.mp4
Notice how the --video switch points to a video file residing on disk.
Marcos Idaho
Hi Adrian, thank you! What if I am using the PyCharm interface?
Adrian Rosebrock
If you are using PyCharm you would want to set the script parameters before executing. Alternatively you could comment out the command line argument parsing code and just hardcode paths to your video file.
Hari
Great work Adrian. I am getting one error while running the code: at pts.appendleft(center) it says pts is not defined. Can you help me with this?
Adrian Rosebrock
Hi Hari — make sure you use the “Downloads” section of this post to download the code. The pts variable is instantiated on Line 21.
Marcos Idaho
Thank you Adrian! I was trying to track two ants moving in a video file. I wrote a sample code inspired by your code, but I am not able to track the ants. Can you help me in this regard?
A YouTube link to the video is attached:
https://www.youtube.com/watch?v=bc_OdLgGrPQ&feature=youtu.be
Adrian Rosebrock
If color thresholding isn’t working to track the individual ants, have you tried background subtraction/motion detection? Of course, this implies that the ants are moving.
Marcos Idaho
@Adrian. Thank you very much for the reply. I was able to track them through background subtraction. I have one more question: I tried to save the processed video, but it’s not getting saved. It’s giving an error:
…
frame = imutils.resize(frame,width=600)
(h, w) = image.shape[:2]
AttributeError: ‘NoneType’ object has no attribute ‘shape’
Adrian Rosebrock
Hi Marcos — please see this blog post where I discuss common causes of
NoneType
errors and how to resolve them.kiran
@Adrian. Thank you for such a nice tutorial. What should I do when there are multiple balls in the video and they are all green? I tried doing this, but the tracks get messed up when the balls cross each other or one ball goes near another.
Adrian Rosebrock
This will become extremely challenging if the balls are all the same color. I would suggest looking into correlation trackers and particle filters.
terance
Hello. Is there a way to just output the red line that is following the movement?
Adrian Rosebrock
Hi Terance — what do you mean by “output”? Do you mean just draw the red line? Print the coordinates to your terminal? Save them to disk?
Marcos Idaho
Hi Adrian, if there are multiple objects, then how to track them? If the objects are moving, is it better to use image subtraction methods?
Adrian Rosebrock
Hey Marcos — please see my reply to “Ghanendra” above related to tracking multiple objects. The gist is that you define color ranges for each type of object you want to detect and construct masks for each of them. If the objects are moving and there is a fixed, non-moving background, background subtraction would be a better bet.
Ouma
Is there a C++ version? Can this algorithm deal with industrial object motion?
Adrian Rosebrock
Sorry, I only provide Python + OpenCV implementations on the PyImageSearch blog.
Wallace Bartholomeu
Hi Adrian..
I’m trying to change line 66 into a for loop to track multiple balls, like you explained a while ago, but I can’t do it. Can you please give an example?
Thanks a lot !
Adrian Rosebrock
Hi Wallace — basically, you need to loop over each of the individual contours detected instead of taking only the max-area contour.
Alex Ronnebaum
Hi,
I am working on designing a drone summer camp where the students in the camp build and program drones to perform search and rescue missions. They will be tracking different colored targets and I am having trouble changing the color that the camera is tracking. Could you please give me some tips.
Adrian Rosebrock
Hey Alex — this sounds like a great project. I would suggest using the range-detector script in imutils to help you define the color ranges. I’ll also be releasing an updated, easier-to-use range-detector in the future.
Boris
Hi,
I have followed your tutorial to install OpenCV and python on macOS Sierra, however when I run this .py file on my mac, the camera LED lights up, but no camera window opens. Could you help me?
Thank you.
Adrian Rosebrock
Hi Boris — that is indeed very strange. Can you confirm that frames are being read from the stream by inserting a print(frame.shape) statement? That will at least tell you if frames are being read from the camera sensor.
Bryce
Hello,
When I run the script with either the test video or trying to use my raspberry pi camera, no window pops up. The terminal just seems to run the code and move on to the next line. How do I get a window to pop up to show the object tracking?
Thanks!
Adrian Rosebrock
When you say the terminal just runs the code, do you mean that the script keeps running with no window displaying? Or the terminal just exits. If it’s the latter you need to update Line 28 to be:
vs = VideoStream(usePiCamera=True).start()
Fahim
Hi Adrian,
I see that you referred to the imutils documentation for the range-detector to automatically determine the upper and lower range for the object to detect. But would you please show me an example of exactly how you used it?
I just can’t get it.
Adrian Rosebrock
Hi Fahim — I will add writing a tutorial on how to use the range-detector script to my to-do list.
Murali Vikas Reddy
Your tutorial was nice.
Can you explain in detail how you are tracking those x and y points, so that I can track the radius of the ball and print whether it is moving forward or backward?
Adrian Rosebrock
The (x, y)-coordinates are stored in a deque object, as the code explains. To determine if a ball is moving forward, I would actually suggest monitoring the radius. As the radius increases, the ball is getting closer to the camera. As the radius decreases, the ball is moving farther away.
Dibakar Saha
Hey Adrian, nice blog and a great post. I was wondering: if I trained a Haar cascade for a ball and used it to track its motion instead of using a mask as you did, how would I do it?
Adrian Rosebrock
You certainly could train your own Haar cascade, although I would recommend the more accurate HOG + Linear SVM detector. I demonstrate how to build and implement the HOG + Linear SVM detector inside the PyImageSearch Gurus course.
Daniel
Hello Adrian,
I have the same problem changing the HSV colour. I am trying to find a pink colour and it is not working… I checked every tutorial available on the web; nothing works. If you could create a kind of colour picker which gives you the range straight away, that would be cool. The one you specified is giving me a range, but not the right one, and I have no idea which of the three to use.
Thanks!
Adrian Rosebrock
Hi Daniel — I will certainly do a color picker tutorial in the future.
Daniel
Do you have any email address I can chat with you? I will put mine here and if you can reply will be cool. I have an interesting project and I need some help on the image processing side.
Thank you !
Adrian Rosebrock
Hi Daniel, if you need to contact me, simply use the contact form here on PyImageSearch.
Daniel
Hello Adrian,
I need to find a ball on a roulette table and tell exactly where it is placed on the table (like number and colour) using image processing.
Any suggestions?
Thank you very much
Adrian Rosebrock
That sounds quite tricky as there are a number of variables here. I would start by using a fixed camera in controlled lighting conditions. Color thresholding can be used to reveal the red vs. green color. As for ball tracking, that really depends on the type of ball. If the ball is colored, color thresholding could work. If the ball is silver metallic, then color thresholding will not work due to reflections. You might need to train a custom object detector in that case.
vishnu
Hi,
I’m getting: name ‘xrange’ is not defined
Adrian Rosebrock
This blog post was designed for OpenCV 2.4 and Python 2.7 (as there were no Python 3 bindings back then). I’m assuming you are using Python 3, in which case you can change xrange to range and the code will work.
Rouhollah
Dear Adrian,
Thank you for your awesome tutorials; I have tried your code with my Raspberry Pi 2 (Jessie, Python 3 + OpenCV 3.2) but the result is far slower than what we can see in your clips; is there anything I can do about it?
For now I have set the GPU memory to 128.
Also, I was wondering: is it better to use a tracking algorithm instead of detection, even for simple tasks like this post, or not? My goal is to detect a color-specific circle in real time; I’ve also implemented the MIL algorithm but the result was not satisfactory at all (very slow). Should I change my hardware?
Thank you again! 🙂
Adrian Rosebrock
To start, I would suggest using threading to increase your FPS processing rate.
As for tracking, it depends on which algorithm you choose. Some tracking algorithms are fast, some very slow. If you’re just getting started with computer vision you might want to use more standard laptop/desktop hardware to get a feel for various algorithms and compare performance on the Pi.
seth
Hi Adrian, great work on the OpenCV code; it’s been useful while trying to learn computer and machine vision.
One question though: I’m trying to devise a way of using OpenCV to track 1 of 2 objects and their coordinates in space, i.e. whether an object is on the left-hand side of a central divider or the right-hand side, to then translate this information to my RPi 3 and make a robot arm move appropriately to either left or right before kinematically actuating the arm motors to grab the objects.
Is it possible to use the method you’ve posted to achieve this?
thanks
Adrian Rosebrock
Hi Seth — yes, that is certainly possible. This method finds the largest object and tracks it; however, you could modify it to loop over all contours and simply discard those that are too small. Once you have detected the contours of the objects you can monitor their (x, y)-coordinates and check to see if they pass over a central line. From there, you would need to write code to communicate with your robot arm which is obviously device specific.
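As a sketch of the central-line check described above (the 600-pixel frame width is a hypothetical value matching the resized frame in the post):

```python
# the divider is a vertical line at the midpoint of the (resized) frame
FRAME_WIDTH = 600
DIVIDER_X = FRAME_WIDTH // 2

def side_of_divider(center_x):
    # report which side of the central divider an object's centroid is on
    return "left" if center_x < DIVIDER_X else "right"

# example centroid x-coordinates from two detected objects
print(side_of_divider(120))  # left
print(side_of_divider(450))  # right
```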
seth
Hi Adrian, the method I was planning on using is to draw a centralised point on the video feed, much like your coordinate-tracking sequel to this post, and then have code dictate how far the object is from that line. However, I can’t seem to get OpenCV to actually 1) draw a line, and 2) use it as a reference point in the capture.
To measure the distance, I was going to try to get it to form a box around the colour-filtered object with the dimensions of the object within a cuboid container, the object always being the same distance from the camera.
I just purchased your ebook but have not had a chance to read it; is there a technique covered in it that would aid me?
thanks again.
Adrian Rosebrock
Hi Seth — thank you for picking up a copy of Practical Python and OpenCV, I’m sure you’ll enjoy it.
As for your question, I think it would be helpful if you could share a screenshot of what you are working with (ideally over email). From there I can give you better suggestions on what to try.
David
Hi Adrian,
First off, sweet website. Second, there must be at least two of you because you are just so on top of responding to all of the comments on this website. Anyway, I just wanted to make a comment regarding the range detector.
At first it was not working out for me. More specifically, I was not able to move the sliders that were controlling the threshold values, or they would reset themselves, and it was very difficult to terminate the script by pressing ‘q’. It seemed like Python kept going through the while loop, which in turn was resetting my trackbars. Only pressing ‘q’ at the right moment would allow me to stop the script.
I just added two more lines of code and now it works wonderfully. First, as you had suggested, I inserted print(v1_min, v2_min, v3_min, v1_max, v2_max, v3_max) on line 103 within the ‘if’ statement so the script would spit out some values. Secondly, I inserted cv2.waitKey(0) as the final line of the while loop.
I’m a total novice, so I don’t know why it didn’t work initially. Maybe it had something to do with my system? My operating system is macOS Sierra (version 10.12.5) and I ran the script with Python 3.6.1 and the latest version of OpenCV.
Adrian Rosebrock
Hi David — thanks for the comment, it’s much appreciated. I’m actually planning on re-coding the entire range-detector script (making it much easier to use) and doing a blog post on it. The new script will help resolve these types of frustrating issues.
Jazz
Hi,
Your work is really helpful.
Can you please guide, I want to plot the x and y axis of the ball in MATLAB.
Which parameters shall I save in .txt file, in order to get the plot of Green Ball x and y axis
Thanks,
Jazz
Adrian Rosebrock
Hi Jazz — at each iteration of the while loop you would want to save the center tuple computed on Line 69 to disk. From there you can ingest the .txt file into MATLAB and plot (or better yet, just plot using matplotlib).
John K
Hey Adrian, I want to ask something. I want to change colorLower and colorUpper to white. What numbers should I change? By the way, I’m new to image processing.
Adrian Rosebrock
Hi John — I would suggest you use the range_detector script I’ve mentioned in previous comments to help you tune the color threshold values. The exact values will vary depending on your environment and lighting conditions.
Also, if you’re new to image processing, I would highly recommend that you work through Practical Python and OpenCV — this will help you learn the fundamentals of computer vision and image processing.
John K
Thank you for your advice. I have another question: how do I connect this code to an RTSP stream (IP camera)? Which part should I change?
Adrian Rosebrock
I do not have any tutorials on IP cameras, but I will try to cover this topic in the future.
Abhranil
When I try to run this Python code on videos that I downloaded, it is not accurate enough.
What should I do now?
Adrian Rosebrock
You will need to use the range-detector script mentioned in the blog post/comments to manually tune your thresholds.
Jyoti
Adrian, how can I detect a hand movement instead of a ball ?
Adrian Rosebrock
You could try skin detection using color thresholding. Otherwise, I discuss hand detection and gesture recognition inside the PyImageSearch Gurus course.
Manggala
Hai Adrian,
Thank you for your tutorial — it’s great! I will build my project using it. But I have a problem: I want to access video via the RTSP protocol (I’m using a Yi Ants camera). I have put the RTSP link in VideoCapture(“my_rtsp_link”) but it doesn’t work. Do you have any idea why? Thank you.
Madhu Oruganti
Dear Adrian,
How can I save this file after running the code?
Adrian Rosebrock
Save the video to file? Or a specific frame?
Madhu
As a video, to the same location.
Adrian Rosebrock
Please refer to this blog post and this one as well for information on how to write frames to disk in a video file.
satinder
Sir, your tutorial is by far the best I have found on the internet, but my problem is with lighting: whenever the lighting conditions change I am unable to detect my ball. Moreover, can you tell me whether there is any technique by which I can click on the object to track? (It is very difficult to set the HSV values for a specific color; I had to tune them for hours before I got the best ones.) Thank you.
It would be very nice of you if you could tell me what to do, because I can’t afford the course.
Adrian Rosebrock
Correct, when your lighting conditions change you cannot apply color thresholding-based techniques (as the color threshold values will change). Instead you should consider applying a different object detection technique such as HOG + Linear SVM.
Vignesh
Adrian, You are the Boss!! Keep up the good Work
Adrian Rosebrock
Thank you Vignesh, I appreciate that 🙂 Have a great day.
umbnich
Hi Adrian!
I want to integrate this script with the “Unifying picamera …”, so I write:
——-
if not args.get("video", False):
    camera = VideoStream(usePiCamera=1).start()
else:
    camera = cv2.VideoCapture(args["video"])
while True:
    frame = camera.read()
———
but when I execute it, a NoneType error appears
Adrian Rosebrock
To start, your code is incorrect. You should be using VideoStream for both your USB camera and your Raspberry Pi camera module. For video files, use FileVideoStream.
Secondly, it sounds like Python cannot access your Raspberry Pi camera module. You can read up on common reasons for this in this post.
Pawan
Hey Adrian, thanks for the tutorial.
I have one question,
are the coordinates that you are getting live coordinates, and are you storing them anywhere?
If not, where exactly in the code can I store them?
Thanks in advance
Adrian Rosebrock
The coordinates are stored in the deque data structure.
Manuel Alejandro Diaz Zapata
Hello Adrian. Loved this tutorial.
As a final project for my Image Processing class (engineering undergrad) we chose to implement a program that tracks people given a video feed. So researching online I’ve found that two of your tutorials could help us greatly: Pedestrian Detection OpenCV and this one.
What we want to make is something that fuses these two together, but thinking about this it comes to my mind something rather sketchy to make it work.
This is an oversimplified step by step :
https://imgur.com/a/jBQfw
Since the pedestrian detector draws a hollow rectangle on the person, we could make it solid, then do the BGR-to-HSV color space conversion to apply this code. But a problem comes to mind when two people come near each other, resulting in a bigger square and perhaps losing track of one of the subjects. Maybe this can be avoided if the rectangles drawn on the image per person are small.
If you can give me your take on this approach, it would be much appreciated.
Adrian Rosebrock
Hi Manuel — I’m a little bit confused by the project here. Your goal is to detect a pedestrian in a video stream and then track the (x, y)-coordinates? Instead of bothering with color filling, why not just track the (x, y)-coordinates directly? Is there a particular reason you do not want to do this?
Manuel Alejandro Díaz Zapata
Well, thinking about it, it does make a lot more sense, because the pedestrian detection already gives the centroid of the person. I’ll try it and report back with results.
Some friends are working on this problem using blobs. Which do you think would be more effective for tracking multiple pedestrians simultaneously?
Thanks again.
Adrian Rosebrock
There are various algorithms you can use for multi-object tracking. Centroid-based tracking would be the easiest. Correlation filters are more advanced but would provide better accuracy.
sanup s babu
Hi adrian,
I am doing a project similar to a human-following robot using Python 2.7 and OpenCV 3.1 with a Raspberry Pi 3 Model B.
In my project, a human wears a jacket printed with 3 circles of the same color (let’s say red) on the back. The camera detects these circles and measures the width of each one.
When the human moves left, the left circle’s width increases more than the other two, and likewise when the human moves right. My problem: in a frame there will be 3 circles, and I have to find which of the 3 circles has the maximum area and indicate whether it is LEFT or RIGHT.
Please help.
Adrian Rosebrock
I would suggest you create a mask using cv2.inRange for each color you want to detect. You can then sort your contours to determine the left, right, etc.
sanup s babu
In this project, how do I calculate the sum of the horizontal widths of the 3 circles discussed above, after computing a bounding box around each circle?
(After creating a bounding box we get x, y, w, h; from it I choose w as the horizontal distance.)
Since the 3 circles are in the same frame, it is difficult to calculate each contour’s width.
Gaudon Florian
Hi Adrian,
First of all, big thank’s to you for all your tutorials.
I’m running this project on my Raspberry Pi, but I have a little question for you: why isn’t my Raspberry Pi using 100% of the processor when processing the frames, and why don’t I get my 30 FPS (I get about 5-6 FPS, I guess)?
Adrian Rosebrock
Hi Gaudon — sometimes algorithms simply take time to execute but do not use all of the processor’s capabilities.
Jesus De Jesus
Hello Adrian,
I’m wondering if there is a way to do this kind of tracking using RGB instead of HSV?
Adrian Rosebrock
Hi Jesus — It’s of course possible. You’ll find that the Hue Saturation Value color space is easier to work with in this case. For a detailed read up on color spaces, be sure to check out PyImageSearch Gurus Lesson 1.8.
sanup s babu
Hi adrian,
How are the contours numbered in a frame?
If 5 is the length of the contour list, does it start from left to right, right to left, or random?
C0,C1,C2,C3,C4 -left to right or
C4,C3,C2,C1,C0 – Right to left or
C3,C2,C4,C0,C1 – Random
Adrian Rosebrock
Hi Sanup. In PyImageSearch Gurus Lesson 1.11.5: Sorting contours, I detail how to sort contours and provide code you can use in your projects. You can also find the code on GitHub.
Kiel
Hi Adrian,
First off, you are doing some incredible things. In this ball tracking code I would like to do a few things to accommodate my goal.
1. Track multiple circles simultaneously (even better if different colored dots could be used)
2. Not erase the lines drawn (and make them not so thick)
3. Write a new video file with the lines.
The goal here is to look at objects and their movement from a time lapse video to create a spaghetti diagram of where the objects traveled.
Adrian Rosebrock
You can track multiple objects by modifying my code to find all “reasonably sized” contours instead of simply taking the largest one. You can also create masks for different colors as well.
To accomplish the second goal use a list data structure to store all points.
To save the resulting output as a video, follow this tutorial.
Goh Zhi Wen
Hello,
I am a beginner on this topic, but I need help with a project I am working on for my university report. If I need to track the trajectory of a billiard ball on the table surface, is it possible to use your code? The most important things I need are, firstly, the coordinates of the initial position of the ball and, secondly, the coordinates of the ball at a certain time during the video. Is it possible to do this with this code?
Adrian Rosebrock
For tracking the actual movement and location of the ball I would recommend this tutorial instead.
Maram Qurban
Can I contact you? I have some questions about the tutorial!
Thanks
Dhara
Click the Contact Tab at the top of the page
Dhara
Hey Adrian,
I was wondering why you calculated the center of the object in Lines 68-69 when cv2.minEnclosingCircle() gives you the (x,y) for the center. Great tutorial btw. It helped me so much.
Adrian Rosebrock
It’s a (slightly) redundant calculation, but the center of the minimum enclosing circle may not be exact, whereas the centroid of the mask is more exact.
Alex Sev
Is it easy to make a robot using a Raspberry Pi go toward a desired color? I mean, when the ball with this color is on the left, then go left, and likewise for the center and right parts. I don’t know how to determine the coordinates in the frames :'(
Adrian Rosebrock
If your goal is to determine the direction, and then have the direction inform the robot on where to go, you can track the object movement. And from there control the servos of your robot.
amber
Can we use this (after some modifications) for tracking eye pupil movement?
Adrian Rosebrock
Not really, no. This method relies on color thresholding. Pupil detection and tracking is much more challenging. I haven’t personally tried it, but I know other PyImageSearch readers have had good luck with this tutorial.
TJ
Hi Jonathan,
I am trying to use the colour picker Python script that you created, but when I run the file I get this error:
usage: color_picker.py [-h] -i IMAGE [-l LOWER] [-u UPPER]
color_picker.py: error: argument -i/--image is required
How do I define an image in this script? Thanks for helping.
Adrian Rosebrock
Hi TJ — you need to supply the command line arguments to the script.
Suhas
Hey Adrian,
What if the ball changed color in its trajectory? Like, suppose the ball was a color-changing LED bulb. Is there any way to track objects not just based on color? In fact, this happens in real scenarios too, like differential lighting.
Adrian Rosebrock
Yes, if you are looking for structural descriptors take a look at HOG + Linear SVM.
anzar
With the Raspberry Pi camera, the following lines show the error IndentationError: unexpected indent:
if not args.get("video", False):
    camera = cv2.VideoCapture(0)
else:
    camera = cv2.VideoCapture(args["video"])
and
if len(cnts) > 0:
Adrian Rosebrock
Make sure you use the “Downloads” section of this blog post to download the code + example video. It looks like an indentation error occurred when you tried to copy and paste the code. Using the “Downloads” section will ensure this does not happen.
Lantao
Hi Adrian,
Thank you for your incredible tutorial; it helps me a lot with my project! May I ask a question about the object tracking? First, I want to achieve hand gesture recognition with a webcam. The method in your tutorial seems difficult to use for tracking hand gestures, because our faces will also be detected if we set a threshold for our skin. Secondly, if I want to detect the movement of each finger, what methods would you recommend? Your previous tutorial talked about monitoring the dx and dy values to detect tiny movements of objects; is it possible to detect the movement of fingers that way? If you could help me answer these questions it would help me a lot with my project.
Have a good day!
Kind Regards,
Lantao
Adrian Rosebrock
1. There are a variety of ways to perform hand gesture recognition. For controlled environments simple thresholding/background subtraction and contour properties will work. For more advanced hand gesture recognition you may need a stereo vision camera to compute the depth map and then recognize the gesture. If this is for a school project, I recommend the former.
2. I’m not sure I understand the question here. The code in this post demonstrates how to track movement. Monitoring the dX and dY values to determine direction can be found here.
Lantao
Hi Adrian,
Thank you for your reply! The second question is actually about how to track the movement of fingers. In other words, I want to detect the movement of each finger to zoom in and out of an image. Is there any way you would recommend? Thank you.
Have a good day!
Kind Regards,
Lantao
Adrian Rosebrock
Once you’ve found the bounding box of the object you want to track, you would apply a dedicated tracking algorithm to it, such as centroid tracking or correlation tracking. As for “zooming in and out”, that sounds like an additional post-processing effect. You can achieve it by cropping out the ROI and resizing it.
Owais
c = max(cnts, key=cv2.contourArea) — Adrian, sorry for my English; will you please explain what is happening in this code?
Adrian Rosebrock
The variable cnts is a list of contours. We are looking for the largest contour, so we call max, which will find the contour with the largest cv2.contourArea. Basically, we are testing each individual contour using cv2.contourArea and returning the contour that maximizes this value.
owais
Thank you for the reply, Adrian. Could you explain the difference between a contour and its contour area? I know a contour is the boundary of an object; I am stuck on contour area. I’ll be thankful to you.
Fexyler
Hi Adrian, how can I track white objects with HSV or something? Like an egg? Please answer this, thank you!!
Adrian Rosebrock
Detecting white objects is pretty challenging as white will reflect and appear lighter or darker (or varying shades of a color, depending on proximity and lighting conditions). If you’re specifically working with eggs it might be better to take a look at structural descriptors and object detection such as HOG + Linear SVM.
Fexyler
And how can I count contoured eggs? How can I count detected eggs?
Ayesha
Hey Adrian, great tutorial as always. Can you please tell me how we can modify this code for human tracking instead of ball tracking? I need to implement that in my project. I would be grateful for your help.
Adrian Rosebrock
I would suggest using a dedicated human detector such as this Haar cascade or modifying my deep learning object detection code. From there you would want to pass the ROI into a dedicated tracking algorithm, such as correlation tracking. I hope that helps get you started!
hardik singh shekhawat
Hello Adrian,
I really like this tutorial. I have a question… suppose I have a green, a red, and a yellow ball; on detection of the green ball, GPIO 18 should be set as output, else remain off. Basically, I want to assign three different GPIO pins to the detection of three different colours. Can you please help me with it?
Adrian Rosebrock
I cover how to use GPIO pins + OpenCV in this post. Use a Python dictionary to map a color ID (such as an integer or string) to a GPIO pin. Use color thresholding as we do in this post to detect each color. Then lookup the color in the dictionary and access the GPIO pin. I hope that helps.
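A sketch of the dictionary lookup described above (the color names and BCM pin numbers are hypothetical; the commented GPIO calls assume the RPi.GPIO library and are not executed here):

```python
# hypothetical mapping from detected color name to a BCM GPIO pin number
COLOR_TO_PIN = {"green": 18, "red": 23, "yellow": 24}

def pin_for_color(color):
    # look up the pin; returns None when the color has no pin assigned
    return COLOR_TO_PIN.get(color)

# on the Pi you would then drive the pin (requires RPi.GPIO):
#   GPIO.setmode(GPIO.BCM)
#   GPIO.setup(pin_for_color("green"), GPIO.OUT)
#   GPIO.output(pin_for_color("green"), GPIO.HIGH)
print(pin_for_color("green"))
```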
Radhika Kamal Agrawal
I could find the direction of the ball, whether it is up/down or left/right, but I am interested in finding out if the ball is in circular motion. Please help me track that; kindly help me with the Python code.
Adrian Rosebrock
So you’re trying to detect if the ball is moving in a circle as it travels across the frame?
Arpit Shukla
Hey Adrian, I have a problem when running the code in Ubuntu: the tracker moves here and there even when the ball is not in the frame. Why is the tracker moving when there is no ball? I am using my webcam for this. Please help me out!
Adrian Rosebrock
Use “cv2.imshow” to visualize the “mask” generated by color thresholding. It sounds like there is a lot of “green” in your background which is causing the detector to falsely fire. You may need to tune the color threshold parameters or choose a different color altogether.
Arpit Shukla
Thank you Adrian!
It worked 🙂
Adrian Rosebrock
Awesome, congrats on resolving the issue, Arpit!
muratcan
Hello Adrian
First of all, your project is gorgeous, congratulations! I’m also working on a project like yours. The thing I want to do, in addition to your code, is an object-tracking robot driven by motors. If the object is on the right side of the center of the frame, the robot moves forward with the servo motor on the right. If the object is on the left, it moves forward with the left servo motor. In short, I want the robot to follow the object wirelessly.
But I couldn’t create this algorithm. Could you help me with that? If you help me, I would really appreciate it, because it’s my final project and I need to finish in 10 days.
Adrian Rosebrock
Congrats on working on your final year project, that’s awesome. I don’t have any tutorials on controlling a servo based on object movement. And even if I did, the code wouldn’t likely work out of the box. You would need to modify it to work with your own hardware. I wish you the best of luck with your graduation project.
mo1878
Hello Adrian,
Firstly, I’d like to thank you for this tutorial. Secondly, I am playing around with the code right now, but I am wondering if it is possible to just output the (x, y) coordinates of the centroid, rather than the change in x and y (dX, dY)?
Adrian Rosebrock
I think you might be confusing this blog post with this one. We don’t compute dX and dY in this post.
mo1878
Apologies, Yes I got my tabs mixed up. In that case, should I post the answer on the other page rather than here?
Lisa
I am trying to follow this tutorial. I tried to install imutils using pip. It says it’s installed, but whenever I try to run the code there is an error saying there is no module named imutils. Please help.
Adrian Rosebrock
Hey Lisa — are you using Python virtual environments? If so, I get the impression you are either (1) not installing “imutils” into the Python virtual environment or (2) you’re using “sudo” when trying to pip-install the package (you can’t use sudo when pip-installing a package into a Python virtual environment). Your commands should look something like workon your_env followed by pip install imutils.
vishal
Hello, hope you are fine.
I want to know how to show the mask in the frame and only detect the tennis ball, so that the red color does not pass through it.
Thanks
Adrian Rosebrock
If you want to detect the color “red” you’ll need to tune the color threshold values used to generate the mask. Once you have the mask, you can use a bitwise operation to apply it to the frame.
Ferishta
Hi Adrian, for some reason it is not identifying the word “xrange” as a keyword; it outputs the following error: name ‘xrange’ is not defined.
HELP please:(
thank you so much Adrian, you’re awesome.
Ferishta
Ok, my previous error was fixed — I changed xrange to range. Now when I run the code it says picamera.exc.PiCameraValueError: Incorrect buffer length for resolution 640x480. I am using the PiCamera.
Adrian Rosebrock
Make sure you are clearing the buffer at the end of every loop. See this blog post for a template on using the Raspberry Pi camera module.
sara
I am using this tutorial to track a yellow line, and I want to use a rectangle instead of a circle. Any suggestions on how to do that? I’m building a line-following robot.
Adrian Rosebrock
This blog post will help you detect rectangles.
park
Hello Adrian!! This was a great help for me.
Actually, I have one question…
I want to keep the red line in my frame (without it disappearing) and, after a certain period of time, have it cleared all at once. How can I write that code?
Adrian Rosebrock
Whenever the “certain period of time” criterion is met you can simply re-initialize the deque data structure as an empty queue.
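A minimal sketch of that reset (the maxlen of 64 matches the default --buffer size in the post):

```python
from collections import deque

# trail of tracked (x, y) points, newest first
pts = deque(maxlen=64)
pts.appendleft((10, 20))
pts.appendleft((12, 22))

# when the elapsed-time criterion is met, drop the trail by
# re-initializing the deque with the same maxlen
pts = deque(maxlen=64)
print(len(pts))
```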
chrisw
Hi Adrian,
Your tutorials are fantastic — loved this one. Just wondering, how would one go about dumping the cvCircle shape as x, y, z coordinates in a simple CSV or XML file? I’ve followed some of your other tutorials on 3D camera reconstruction, but I guess I need a little more of a pointer on what the matrix values are that I need to reference for the text file output.
awesome. Thanks.
Adrian Rosebrock
It sounds like you may be new to writing Python code (which is perfectly okay of course!) but I would suggest reading up on file I/O. Python makes it dead simple. Python also includes a
csv
module for reading/writing CSV files, but I would suggest using simple Python file I/O operations.
Colton
Great tutorial. One question though… I want to do ball tracking where color may not be reliable for a variety of reasons (no guarantee of ball color or lighting conditions). Any suggestions on other filters I can implement prior to finding the ball that may be more robust to lighting conditions. In brainstorming this, optical flow and background subtraction emerged as options but they could be made difficult by a non-stationary background. Any other options I should be considering?
Adrian Rosebrock
If your lighting conditions are that uncontrolled and there is no guarantee on ball color you should consider training your own custom object detector. HOG + Linear SVM would be a good starting point but you may need to train your own custom deep learning object detector.
Sweets
Hi Adrian,
I want to detect a silver-colored tool, which is continuously moving and changing its position and orientation. The area around the tool is white and grey. I tried using color-based detection but I am unable to get an exact threshold. Grey, white, and silver colors all look the same (pinkish). Which algorithm should I use for this application?
regards,
Pooja
Adrian Rosebrock
Do you have an example of the image/video you are working with? If you cannot create a color range for the tool then you’ll want to look into more advanced object detection methods. Since the tool is changing in orientation I would not recommend HOG + Linear SVM. If you have enough training data a deep learning object detector may be your best bet.
Sweets
I have few sample videos. If I have enough sample videos and I know 80% position and orientation of tools, can I use HOG+SVM?
Adrian Rosebrock
You can use HOG + Linear SVM but keep in mind that HOG feature vectors are not rotation invariant. You would need to train a HOG + Linear SVM model for each orientation, normally in 10-25 degree increments.
jonathan_g
Hi Adrian. I have a question about finding the center of the ball. I am most interested in getting the center of the circle enclosing the ball. I am getting an error on line 78: “ZeroDivisionError: float division in python”. Could I change the code to run line 78 only if m00 > 0?
Do you have any idea why I am getting the error?
Thanks.
Adrian Rosebrock
To prevent the error you can either:
1. Check if m00 is indeed greater than zero
2. Add a small epsilon value (such as 1e-7) to the denominator to prevent any division by zero errors
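Both options can be sketched with a plain dict standing in for the one returned by cv2.moments(c); the helper names here are just for illustration:

```python
def safe_center(M):
    """Option 1: only compute the centroid when the contour area is non-zero."""
    if M["m00"] == 0:
        return None
    return (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

def eps_center(M, eps=1e-7):
    """Option 2: pad the denominator so division by zero can never occur.
    Note the center is meaningless when the area really is zero."""
    return (int(M["m10"] / (M["m00"] + eps)),
            int(M["m01"] / (M["m00"] + eps)))

# M stands in for the dict returned by cv2.moments(c)
print(safe_center({"m00": 0.0, "m10": 5.0, "m01": 5.0}))   # None
print(safe_center({"m00": 4.0, "m10": 12.0, "m01": 8.0}))  # (3, 2)
```

Option 1 is the safer choice here, since a zero-area contour has no meaningful center anyway.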
jonathang
Thanks Adrian. That makes sense. I was wondering, however, as an alternative to using cv2.moments(), could I rely on the x,y of line 76 to get the center of the circle:
((x, y), radius) = cv2.minEnclosingCircle(c)
Is the X,Y returned by cv2.minEnclosingCircle(), similar to the center derived from cv2.moments?
Thanks!
Adrian Rosebrock
It is similar, yes. The moments would give you the “true” center of the mask though as the mask would never be perfectly circular. You could swap in the (x, y)-coordinates of the bounding circle though and the code would still work.
Manbashuo
How can I apply the desired video directly to the program? I use Spyder (OpenCV + Python) because my program always runs out of webcam. Is there a way to change programs?
Adrian Rosebrock
I’m not sure what you mean by “runs out of webcam” — could you clarify?
kishore.cherala
hi Adrian.
I am new to Python and OpenCV. I found your blog recently and I am going through your tutorials, which are very nice.
I have a question here.
The red line trajectory should be stable; it should not be deleted. Only when I press a key (for example, “c”) should it be deleted. What changes to the code are needed to accomplish this?
thanks…
Adrian Rosebrock
To start, change Line 23, which currently reads pts = deque(maxlen=args["buffer"]), to an infinitely long deque by dropping the maxlen argument:
pts = deque()
From there you’ll want to check if the “c” key is pressed and if so, re-initialize the deque.
Manbashuo
Thanks for your reply, I am a novice. I don’t want to use the webcam; how can I change the program to use a video from my folder? For example, VideoCapture(‘C:\Users\User\Jspyder\ball.mp4’). Can I do this, and how?
Adrian Rosebrock
That should work; however, if you are using the Windows OS you’ll want to escape the “\” character by typing “\\” (I’m not a Windows user so this behavior may have changed over time, you’ll want to double-check that yourself).
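A quick illustration of the escaping issue, using the path from the comment above purely as an example:

```python
# three equivalent ways to write the same Windows path in Python;
# the unescaped form "C:\Users\..." is invalid in Python 3 because
# \U starts a unicode escape sequence
p1 = "C:\\Users\\User\\Jspyder\\ball.mp4"   # escaped backslashes
p2 = r"C:\Users\User\Jspyder\ball.mp4"      # raw string
p3 = "C:/Users/User/Jspyder/ball.mp4"       # forward slashes also work on Windows

print(p1 == p2)  # True
# cv2.VideoCapture(p1) would then open the file
```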
kishore.cherala
Hi Adrian,
I want the output of this program to show only the ball detection and the path of the ball (i.e., the contrail of the ball). Is it possible? And how do I record the output of this program on a Raspberry Pi 3? How did you do it? What tools are available?
thanks…
Adrian Rosebrock
1. Take a look at computing the bitwise AND between the input image and the mask. That will give you just the ball. Then you can draw the contrail on it.
2. This blog post will allow you to write the output frames to a video file.
Brian
Thanks for the tutorial. It is great.
I have a question regarding the range_detector script.
I can get it working with my webcam, but I have a hard time getting it to load a video
I know the command is
python range_detector.py --filter RGB --image /path/to/image.png
however, if I enter:
python range_detector.py --filter RGB --pollen /users/korisnik/mystuff/pollen.mp4
it does not work. I have tried a number of other variations and I can’t get the video to “load”. I am sure I am doing something simple wrong, but can’t figure it out.
Any help?
Adrian Rosebrock
Can you clarify what “load” means in this context? Are you supplying the path to the video and then the script automatically exits? Is there an error of some kind?
Brian
sorry, I figured out my issue. sorry for wasting your time on this one.
i do have another question though (of course!):
is it possible to “export” the x,y coordinates of the “contrail” created in this exercise into a .csv file? and can this be done with a “live” image (i.e. from a webcam).
thanks in advance here.
Adrian Rosebrock
Yes, absolutely. You should take a look at the basics of the Python programming language such as file I/O operations. The Python language even includes a library for reading and writing CSV files but for this project it’s probably overkill. Again, read up on the Python programming language and you’ll be all set.
I’ll also add that if you’re trying to learn both Python and OpenCV at the same time I would encourage you to read through Practical Python and OpenCV. Many PyImageSearch readers have used this book to help them learn the fundamentals of image processing and computer vision using Python + OpenCV. Be sure to take a look!
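As a minimal sketch of the file I/O approach, assuming pts holds the (x, y) contrail points (the filename contrail.csv is arbitrary):

```python
import csv
from collections import deque

# pts stands in for the deque of (x, y) contrail points from the post
pts = deque([(120, 80), (118, 82), (115, 85)])

# write one row per point; for a live stream you would call
# writer.writerow inside the frame loop instead of after it
with open("contrail.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["x", "y"])
    for (x, y) in pts:
        writer.writerow([x, y])
```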
Patrick
Hi, thanks so much for this tutorial I really enjoyed how simple it was overall.
I had a question about the HSV color space boundaries at the beginning. I’m a bit confused as to how this is HSV space, aren’t the max values of Saturation and Value 100 and not 255? (In greenUpper). So shouldn’t greenUpper be like (64, 100, 100)? I know Hue can go up to 360
Adrian Rosebrock
Be sure to refer to the OpenCV docs. The S and V are scaled to fit in the range [0, 255]. The H is divided by two so the range is [0, 180].
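To illustrate that scaling, here is a sketch using Python’s built-in colorsys module (not used in the post) to convert pure green into OpenCV’s 8-bit HSV convention:

```python
import colorsys

# pure green in RGB, scaled to [0, 1] for colorsys
h, s, v = colorsys.rgb_to_hsv(0.0, 1.0, 0.0)

# colorsys returns H, S, V in [0, 1]; OpenCV stores 8-bit HSV as
# H in [0, 180] (degrees divided by two) and S, V in [0, 255]
h_cv = round(h * 180)   # 120 degrees / 2 = 60
s_cv = round(s * 255)
v_cv = round(v * 255)

print(h_cv, s_cv, v_cv)  # 60 255 255
```

Note that 60 falls inside the H range of the post’s greenLower/greenUpper bounds (29 to 64), which is why a green ball is picked up.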
Peter Garay
Hi Adrian. Would this work for tracking a ping pong ball? The ping pong ball could be even sprayed with stuff to make it better distinguishable, or even color it more specifically. Would this be fast enough? There is no need to draw the track of the ball, just to basically know the coordinates of the ball.
Adrian Rosebrock
Provided you can segment the ping pong ball this method should work. Color-based tracking algorithms are extremely fast but you’ll want to make sure your camera has a high FPS capture rate.
Peter Garay
Thank you for the very fast response. Would the raspberry pi camera be sufficient for this? I’ve seen your other tutorials on how to increase the fps of the raspberry pi camera.
Adrian Rosebrock
My guess is most likely not. While the Raspberry Pi is a nice little piece of hardware I don’t think it’s going to give you the speed you need. Give it a try and see but you’ll likely need a laptop or desktop.
Jose
Hi !
I would like to know: how is it possible to compute the velocity of the object from the centroid?
Thanks and keep with your amazing work!
Peter Garay
Hi Adrian,
I have followed your other tutorial: https://www.pyimagesearch.com/2017/09/04/raspbian-stretch-install-opencv-3-python-on-your-raspberry-pi/ and compiled with Python 3; the Python 3 section looked just like it was supposed to in the tutorial.
However when I try to run the code in this tutorial I keep getting the error: no module named cv2
Do I need to do a clean install of opencv3 with python 2.7 to make this code work?
Peter Garay
trying to run python ball_tracking.py --video ball_tracking_example.mp4
it seems (to me) that it is trying to use Python 2.7 instead of Python 3. It shows: File “/usr/local/lib/python2.7/dist-packages/imutils/convenience.py” ...
I compiled with python 3 before so naturally (I think) there is nothing in python 2.7.
How can I fix this?
Adrian Rosebrock
This code will work with both Python 2.7 and Python 3. That said, I think you may have two different Python virtual environments on your system. Make sure you access your Python virtual environment and then execute the script:
Alec
Hi Adrian,
Thank you for all your hard work. I am working on having a robot with an arm find and grab a ball and bring it to me. (way over my head, but setting my goals high) I have used your code here to get it to recognize a red ball, have removed the trails, but have some questions:
1) It recognizes anything the same color red – any way to limit it to a circle, for example? Or better yet a specific red ball and no other?
2) An idea for a future blog post could be to have code that determines where in the camera frame the ball is (for location) AND how big it is (for distance)
3) then maybe I can figure out how to get the arm to pick it up after all that! 🙂
4) Finally, I notice all your newer code is geared toward OpenCV 3. I’m nervous to upgrade! I’m hoping to stay with v2 for a while until I get the courage. I believe you have a blog post about this, so I’ll have to find that when the time comes…
Probably #1 is really the only question that fits, but just throwing that out there so you have the context of what I’m trying to do.
Adrian Rosebrock
1. You should consider training your own custom object detector.
2. See this blog post and this one.
3. I’ll consider it but I cannot guarantee if and when I would cover it.
4. I would suggest upgrading to OpenCV 3 when you get a chance. OpenCV 3 has a bunch of powerful features. I do my best to keep code backwards compatible though.
Usama
hy adrian,
thanks for the demonstration.
So when there are multiple objects on the screen, the algorithm detects the object with the largest contour.
Is there a way we can keep tracking only the first object and ignore any other object that enters the frame later?
thanks
Usama
Adrian Rosebrock
There are a few different ways to approach this. I would suggest looking into tracking algorithms, in particular basic centroid tracking and correlation filters. I will be covering them in a future blog post.
Hritik
Thanks Adrian for such a nice tutorial, but I have installed the OpenCV package on my system and cv2 is not available on Python 3.6.5. Please help!
Adrian Rosebrock
Did you follow one of the tutorials on my blog? Or something else?
sriram
Hi sir, please help: how do I detect the motion of an eyeball?
Sagar
Wonderful tutorial!!
How do we generate a data of the path and export the data??
Adrian Rosebrock
The deque data structure stores the (x, y)-coordinates. You could also maintain a simple list of the coordinates as well. Once you have the list you can write them to disk via simple Python I/O operations.
vijay
hi Adrian. I tested this code. It is working well, but it has a lot of false detections.
I am working on ATM security automation, which includes detection of human faces and objects like swords, guns, helmets, masks, etc., so I can’t afford false detections.
Please suggest something that will work for me,
and also suggest how I should train my own model for optimal detection.
Adrian Rosebrock
Hm, I’m not sure why you would be using color-based tracking for an ATM security automation. Maybe you could elaborate on that. I would recommend you train a deep learning-based object detector. I demonstrate how to train your own custom deep learning object detectors inside Deep Learning for Computer Vision with Python.
shashi
hi Adrian, nice blog as always. Can we get the HSV values for a given color captured by the camera?
Adrian Rosebrock
Are you asking how to define your own color ranges via the HSV color space? If so, take a look at the range-detector script in the imutils library.
Jochen
Hi Adrian
That looks awesome. I would like to track a golf ball and calculate its speed and direction. Do you believe this will work based on your example plus some calculation? Perhaps speeds of up to 150 to 200 mph and the viewpoint do not deliver enough data.
Thanks for a reply
Jochen
Adrian Rosebrock
You would need a high-speed camera for golf ball tracking, and furthermore, color thresholding/segmentation is likely not enough. Successful golf ball trackers measure velocity and angle off the tee, then fit a function to the ball’s flight, enabling it to be tracked.
Shivakumar Nair
Hi Adrian, thanks for the tutorial first of all. Your simple approach and simple language make it more interesting, and the crystal-clear steps make it awesome. That said, I am in the process of making a robotic table tennis player, where the ball will be watched by a camera and that video will feed the G-code to the robot. Can you give me some advice on choosing the camera, and can you tell me what possibilities exist for tracking the TT ball using OpenCV?
Adrian Rosebrock
I would try to use a higher FPS camera if possible which of course means your pipeline will in turn need to be very fast. Table tennis balls can move very quickly and will likely have a decent amount of motion blur. You may even want to consider training a custom object detector rather than simple color thresholding as well.
Asyraf
hi adrian
nice project and very helpful, but I have one problem:
how do I “print” the coordinate data?
and how do I add one more ball with its own coordinates?
Adrian Rosebrock
1. Line 78 gives you the centroid (i.e., center) of the object. You can print that to your screen, save it to disk, etc.
2. To work with multiple objects see this comment.
sonal
How do I record the motion? I mean, I want to write the lowercase letter ‘a’ and then compare it with an SVM model to check whether it is ‘a’ or not. So my question is: how do I draw ‘a’ and save it?
Adrian Rosebrock
You want to use OpenCV to actually draw a character on your screen? And once the character is drawn then recognize what you drew? Is my understanding correct?
Widhera Yoza
Hi Adrian!! Your post is amazing. I’m so happy I found ball detection on this site. But can we know the distance between the ball and the camera?
Adrian Rosebrock
Yes, see this tutorial.
jayshree
If there is a bike in place of the ball, will the same code work? Can you please explain what changes need to be made if there is a bike.
Adrian Rosebrock
Keep in mind we are tracking the ball based on color. Tracking a bike with this method will not work since a bike could be multiple colors. Instead, try a dedicated object tracker.
Asif
Thanks Adrian, great tutorial and explanation.
Just one minor question: is it possible to detect the ball using Hough circles? If yes, which one is more efficient?
Adrian Rosebrock
The Hough circles function tends to only work under ideal situations. Here we are using color-based thresholding, which can detect arbitrary objects provided they fall into the defined color range.
samar
thanks alot for the code…
Alexandre Giraldi
Hello,
Congratulations for it! It’s amazing!
I’m new to Python and I’m having trouble when I try to use my own video.
The code just goes to my webcam and ignores my video, and yours as well.
Could anyone help me?
Lady
hello adrian really thank you very much for sharing your work and provide us with your codes, thank you very much for explaining step by step every line of the code , since in this way we can also start to make programs in python. Thank you very much for sharing your knowledge. greetings.
Adrian Rosebrock
Thank you Lady, I really appreciate your kind words 🙂
Malik Fasih
Hi Adrian, great work.
I want to ask you: how can we detect a white baseball with this?
Adrian Rosebrock
You would need to define the color threshold range for whatever color you wanted to detect, in this case, white.
Louie
Hi sir Adrian! I’m new to object detection and deep learning. I tried this tutorial of yours and it’s awesome and works perfectly using a Raspberry Pi 3. But I have a question: what if I want to track different colors on different objects, with multiple boundaries, like detecting the colors blue and red in the same detection? How is that possible?
I hope you will respond to my question. Thank you also for this tutorial, it was very helpful!
Adrian Rosebrock
Hey Louie — you’ll want to create color threshold ranges for each of the colors you want to track. Then, for each frame, loop over each of the colors and construct a mask using cv2.inRange. From there you’ll be able to track multiple colors 🙂
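The loop above can be sketched as follows; cv2.inRange is emulated here with a small NumPy helper so the example is self-contained, and the blue bounds are rough guesses rather than tuned thresholds:

```python
import numpy as np

def in_range(hsv, lower, upper):
    """NumPy stand-in for cv2.inRange: 255 where every channel
    falls inside [lower, upper], 0 elsewhere."""
    lower = np.array(lower, dtype=np.uint8)
    upper = np.array(upper, dtype=np.uint8)
    mask = np.all((hsv >= lower) & (hsv <= upper), axis=2)
    return (mask * 255).astype(np.uint8)

# one (lower, upper) HSV pair per color to track; note that red
# would need two ranges because its hue wraps around 0/180
ranges = {
    "green": ((29, 86, 6), (64, 255, 255)),
    "blue": ((100, 100, 50), (130, 255, 255)),  # assumed values
}

# a tiny fake 1x2 HSV "frame": one green pixel, one blue pixel
hsv = np.array([[[45, 200, 200], [115, 200, 200]]], dtype=np.uint8)

for name, (lo, hi) in ranges.items():
    mask = in_range(hsv, lo, hi)
    print(name, mask.tolist())
```

In the real script you would call cv2.inRange per color inside the frame loop and track each resulting mask separately.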
Astor
Hello, how can I select another color range? The changes in greenLower and greenUpper must be in RGB, I’m trying to detect a pink / magenta ball.
Greetings.
Adrian Rosebrock
Take a look at the “range-detector” script in the imutils library to define your own custom color ranges.
Alistair
I have just tried this with OpenCV v4.0.0-beta (the only change from the alpha instructions was to use “beta”) and the downloaded code threw the “None” error that you blogged about, but it isn’t the file that is the problem in this case.
Adding a print statement shows where the data is: the contour array appears in cnts[0], as in cv version 2. I added a test for cv version 4 to handle this case:
print("cnts[0] {} cnts[1] {}".format(cnts[0], cnts[1]))
cnts = cnts[0] if imutils.is_cv2() or imutils.is_cv4() else cnts[1]
center = None
# only proceed if at least one contour was found
if len(cnts) > 0:
Adrian Rosebrock
Thanks for sharing Alistair; however, if you update your version of imutils the code will not need to be changed:
$ pip install --upgrade imutils
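For reference, the compatibility shim that imutils.grab_contours provides boils down to checking the tuple length, sketched here with fake return values since cv2.findContours returns 2 values in OpenCV 2/4 and 3 values in OpenCV 3:

```python
def grab_contours(res):
    """Mimics imutils.grab_contours: cv2.findContours returns
    (contours, hierarchy) in OpenCV 2 and 4, but
    (image, contours, hierarchy) in OpenCV 3."""
    if len(res) == 2:
        return res[0]
    elif len(res) == 3:
        return res[1]
    raise ValueError("unexpected cv2.findContours return value")

# fake return tuples standing in for the two OpenCV behaviors
cnts_v4 = (["contourA"], "hierarchy")
cnts_v3 = ("image", ["contourA"], "hierarchy")
print(grab_contours(cnts_v4) == grab_contours(cnts_v3))  # True
```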
Lenelle
Hi, i am facing with this error.
Traceback (most recent call last):
File “ball_tracking.py”, line 75, in
if len(cnts) >= 0:
TypeError: object of type ‘NoneType’ has no len()
i am using the codes you provided and downloaded everything according your tutorial. However, this demo is unable to work for me and gave me the error above. Could you kindly explain what is this error about and how do i resolve it please? Thank you.
Adrian Rosebrock
What version of imutils are you using? Make sure you’re calling imutils.grab_contours as we do in the post. Secondly, make sure you upgrade your imutils version:
$ pip install --upgrade imutils
Jundi
Hello Adrian! Thank you so much for this tutorial. I am currently trying to use the tracking to detect whether a launched ping pong ball was hit and returned to the other side of the table (in short, a score), but more challenges arise:
1. The differing size of the ball across frames
2. It is hard to find the ideal range to segment the ball
Do you have any advice on this? Thanks in advance!
Adrian Rosebrock
I would suggest training a dedicated object detector such as HOG + Linear SVM or Faster R-CNN, SSD, or YOLO. If you’re interested in learning how to train your own custom object detectors you’ll want to refer to my book, Deep Learning for Computer Vision with Python.
Stonez
Hi Adrian,
Thanks for this great tutorial! Is it possible to take a photo and circle an object in the photo with a mouse, then ask Python to track the particular object I just circled? How to find out the circled HSV color range?
Thank you!
Stonez
Adrian Rosebrock
I would recommend using the range-detector script from the imutils library to help you tune the HSV color range.
Arzoo Yadav
Hi Adrian, thank you for such a great article. I am very new to Python and OpenCV. I want to know why the green lines disappear after a certain amount of time. I want to use this to draw something and then classify the image, so I don’t want the green lines to disappear. I read your earlier comments on this, but I am not able to understand and code that.
Could you please provide the code snippet for the same? Thanks in advance!
Arzoo Yadav
I got it. I took an infinite deque to achieve it.
Adrian Rosebrock
If you want to keep the points indefinitely then you could use a simple Python list as well. Either will work.
Peter
Hi Adrian
Nice post
I’m new to Computer vision and I don’t really understand why you used HSV color scheme in the ball tracking ?
Thanks
Adrian Rosebrock
The HSV color space tends to be easier to define color ranges in. The HSV color range is also a bit more robust for object segmentation than standard RGB.
Joris Wouters
Hi, nice tutorial
I’m using a Raspberry Pi camera and this is not working. I’m trying to find the issue but I can’t find it.
Adrian Rosebrock
Can you be a bit more descriptive regarding what you mean by “not working”? What specifically is not working? Are you receiving an error message?
Paul Zikopoulos
Since someone mentioned tennis, your color works just fine on my Wilson US Open ball I have … and the way I hit it, I bet you the demo would work 🙂
August
Hi Adrian,
Thank you, for the tutorial it was a big help in getting my feet wet with open cv.
I do have a question and I may have just overlooked it, but is there a simple way to get the X and Y location data of the center of the object(s) that I’m tracking. I was wondering if that is possible because I would like to use that data in a separate function. Thanks again.
Adrian Rosebrock
Line 76 gives you the center (x, y)-coordinates of the object.
Chris
HI Adrian, you have a very impressive blog. Thank you for sharing your knowledge.
I’ve got the code listed here working well, but I’ve been stumped for several days on an enhancement I’ve been trying to make. I found that when there are two touching green balls in the image, the circle gets drawn around both balls rather than just the single largest one.
Do you have a recommendation for how I could target the single largest ball?
Adrian Rosebrock
You’ll want to use the watershed algorithm to segment the touching balls. From there you can use the cv2.contourArea function to find the biggest one.
Sanjay Swami
Hello Mr. Adrian,
I have one small doubt. I was trying this program to track one ball. I am facing a problem because of high light intensity. If the light intensity is very high then it is not finding the contour correctly (because it is difficult to find correct HSV limits under high light intensity). What can I do to solve this problem?
I was thinking of using cv2.adaptiveThreshold, but I don’t understand where I would use it. Should I use it in the range-detector program or in the ball-tracking program?
Please tell me: can I use cv2.adaptiveThreshold to solve this light intensity issue? If yes, in which program should I use it?
If this solution is wrong then can you please tell me which solution is possible for this problem?
Charles D'Silva
Hello Adrian
Thank you for taking the time to share your knowledge on video capture and manipulation. I have successfully loaded your ball tracking program and was surprised and pleased at how easy it was to implement and follow your blog. However, as a novice to Python I have had a problem executing the command line ‘python ball_tracking.py --video ball_tracking_example.mp4’. Any suggestions?
Adrian Rosebrock
What is the exact error you are getting? Without knowing the error I cannot provide any suggestions.
Charles D'Silva
I have placed both ‘ball_tracking.py’ and ‘ball_tracking_example.mp4’ on my Mac desktop.
When I open a shell in Python and execute “python ball_tracking.py --video ball_tracking_example.mp4”, I receive this feedback:
SyntaxError: invalid syntax
If I go in terminal, I receive the same response when I execute the same command line. Since the ball tracking program is working fine, I feel it must be something to do with the way I am entering the command line. I would be most grateful for any suggestions, as I have spent many evening entering various combinations of the command line without any success.
Thanks
Adrian Rosebrock
It sounds like you may have copied and pasted my code rather than using the “Downloads” section of this post. The syntax error is due to a problem with the code file itself, not the command line arguments. You need to either fix the syntax error or download my original code (don’t copy and paste).
Jose
Hello, I was wondering if it is possible to lock the framerate of the camera or the whole process in the sample code you showed? I know it’s possible in OpenCV using CAP_PROP_FPS but I’m not sure about imutils. Thanks!
Adrian Rosebrock
Not with the current implementation of “imutils”. You would need to hack the “VideoStream” implementation to manually set that parameter.
Jose
Would you be able to explain how I would go about that? I assume I would need to modify both the VideoStream and the WebcamVideoStream?
Adrian Rosebrock
Just the WebcamVideoStream. You would take that class and modify the “cv2.VideoCapture” object to accept any parameters you want before actually starting the stream.
Blaine
Hi Adrian,
Finally got it to work!!!
Was wondering what would be the lower and upper boundaries of a black object. Trying to track a puck. Thanks
Adrian Rosebrock
The exact color boundaries of an object are going to be dependent on your lighting conditions. Objects can look different under varying lighting conditions. Secondly, you may want to look into more advanced object tracking algorithms.
Han Ooi
I’m interested in tracking multiple objects of the same color. What modification would I need to make to the code to enable this?
Love your tutorial!
Adrian Rosebrock
This tutorial is what you need.
Milan pandey
Is it possible to store the result (the lines indicating the total movement of the ball from start to end) somewhere, to see the path of the ball later?
Adrian Rosebrock
You mean like writing the data to file? If so, yes, a CSV file would work well here. Take some time to read up on the basics of Python programming, namely file I/O.
Terri
Hi Adrian, the ball is only being tracked near the borders of the frame, more specifically the left, bottom and the right edges. What could be causing this?
Adrian Rosebrock
Are you using my code and example video? Or are you using your own video?
Luluh
Hello,
Can we use the same code for detecting two colors? I also have an issue stopping the PiCamera; it remains on.
Thankyou
Adrian Rosebrock
Yes, but you would need to define the color threshold ranges for each color.
Carol
Hi Adrian,
Loving the series of daily tutorials – thank you for all of your hard work! This is very useful.
I want to track a Rubik’s Cube, not by color, just by the overall shape and the individual cubies’ shapes, but I’m having a hard time finding the right HSV settings. Since we’re not using color, what do you suggest?
Adrian Rosebrock
Since a Rubik’s cube has many colors it’s not a good idea to use just color. Have you tried using a bit of object detection? HOG + Linear SVM would be a good first start.
Carol
Thanks, will check it out. To clarify, I don’t want to track the Rubik’s Cube by color, just by shape.
Nihar
Hi Adrian,
Can you please help me set a different color if I use a different object, e.g., a red object or a blue object?
I executed the GitHub imutils/range-detector script, but I am still unable to understand which color I should choose for the object to be tracked and which color should be kept for the background, or how to implement it. Also, what do you mean by
greenLower = (29, 86, 6)
greenUpper = (64, 255, 255)
and why three values, (29, 86, 6) and (64, 255, 255), when we just want to track a single color?
Shravan Sriram
Hello Adrian,
Thank you for your tutorials. They have been of great help to me for understanding the basics of machine learning and OpenCV in general. I am stuck on an error while trying to execute the above code.
line 75
cnts = imutils.grab_contours(cnts)
AttributeError: module ‘imutils’ has no attribute ‘grab_contours’
The platform is Spyder with Python version 3.6.3 and the imutils version is 0.4.6
Awaiting your solution
Thank you 🙂
Adrian Rosebrock
You need to upgrade your imutils version:
$ pip install --upgrade imutils
Shravan
Thank you Adrian, the code is working perfectly!
As per my project statement, I have made changes to this code to detect multiple red bands worn by a human on each elbow and each knee by applying the for loop “for c in cnts” and have labelled them accordingly.
I wish to check for the symmetry in his posture and use the points that I have detected. Is there any logic or changes in the code that I could implement to compare the co-ordinates?
Thank you!
Adrian Rosebrock
Have you tried using human pose estimation algorithms? That would give you approximated (x, y)-coordinates of key joints in the body. That would make it easier to compare. If your band detection is already working well though I would suggest continuing with that.
Justin
Hey, this is a great site, with tons of information! I stumbled on this while I was looking for a way to create my own app or program to trace the path of a disc golf disc. The disc travels away from the camera and it’s not shaped like a ball while in flight. Also, each player typically drives with a different colored disc. Is this open opencv project adaptable to fit my needs or is there something you could suggest for me. I’ve read most of the replies here and I understand that color tracking may not be my best option so if you could point me in the right direction I would appreciate it a lot. Thank you.
Adrian Rosebrock
I don’t believe color tracking would work well here. A deep learning-based object detector would likely work though. Do you have training data available to train your own detector?
Anshul
Hey, will this work on a raspberry pi 3B+? I have opencv installed on it.
Adrian Rosebrock
Yes. Give it a try.
Dave
Hi Adrian。
I am a Chinese fan of yours. I have completed many interesting projects with your blog. I admire you very much.
These days I have been trying to use the code in this article to detect a tennis ball. The code runs smoothly. I am trying to improve it by adding circle detection to avoid detecting green books or the ground. I haven’t found any article on your blog about this; can you help me?
Adrian Rosebrock
Are you trying to detect/track the tennis players? Or the tennis ball?
Stuthi
Hi Adrian!
First of all let me thank you for the great work you’re doing.. Your blog and easy to grasp teaching style are just the best and is one of the major reasons I’ve put my inhibitions aside and dipped my toes into the previously scary world of computer vision. Great content too!
If its not too much trouble I would like to get your advice on a small project I’ve taken up. So the task at hand is to get an idea of the number of occupied vs free tables and seats in a room.
Since the video is captured from a corner in the room, the perspective problem is huge, and also not easy to detect people when they are facing away from the camera.
While I look for other options to tackle this, I would love to hear your ideas on how to approach this seemingly mammoth of a problem.
Looking forward to your response,
Thanks !!!
Adrian Rosebrock
Thanks for the kind words, Stuthi. I’m really happy to hear that you’re enjoying the free tutorials.
As for your question, have you taken a look at the PyImageSearch Gurus course? The course will help you build your project. It also includes private community forums which I participate in daily. It’s a great way to get advice from me and other students. Plus, it will enable me to provide more detailed suggestions for your project.
Jean
Hello Adrian.
I am working on a project where I am tracking a bowling ball, and I would like to draw its trajectory. When I press a button on the keyboard, that trajectory would be deleted, and when a new ball is tracked it would start drawing a new one, and so on. There is no problem with multiple balls on the lane because I masked out the surroundings in the program so that only my lane is visible.
In your program everything works fine, but the trajectory is lost over time. My only question is how to change the code so that the trajectory (the drawn points) wouldn’t be lost over time, but would stay drawn on the screen. The buffer won’t be a problem, because the bowling ball is only on the lane for around 3-4 seconds, so there wouldn’t be a lot of lines even if the buffer filled up.
Thank you very much
Adrian Rosebrock
Instead of using a “deque” object you could just use a standard Python list, that way older coordinates are not removed from the queue.
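For readers hitting the same issue, here is a minimal sketch of the difference (the coordinates below are made up for illustration):

```python
from collections import deque

# A deque with maxlen discards its oldest entry once full, which is
# what makes the drawn contrail fade; a plain list keeps every point.
trail = deque(maxlen=3)
history = []

for point in [(10, 10), (20, 15), (30, 20), (40, 25)]:
    trail.append(point)
    history.append(point)

print(list(trail))   # the oldest point (10, 10) has been dropped
print(history)       # all four points are still present
```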
Jean
Thank you, it works! So simple, yet I couldn’t figure it out 🙂
Adrian Rosebrock
Great! 🙂
azad rashid
Hi, hope you are well. Please tell me: if I want all the coordinates of the line you are drawing, how can I get them?
Adrian Rosebrock
The “deque” data structure has the (x, y)-coordinates of the line.
Adithya Raj
Hi Adrian,
First of all, thanks for the wonderful blog. I am working on a project in which I have to track multiple people walking through an area. For that I have done background subtraction and contour detection, and found the centroids using your code. But while drawing the line, it gets connected to all the objects in the frame. I know that this is happening because we are appending all the centers to the pts list, so everything gets connected. Could you please help me create a separate list of centers for each individual object, or, if there is another way to rectify this problem, share it with me?
Adrian Rosebrock
Start with this tutorial. That tutorial maintains a list of centroids for each object. You can then loop over the centroids and draw the line.
Victor Rivera
Hello Adrian, would you be able to track a soccer ball during a match?
Adrian Rosebrock
I would recommend using a dedicated object detector. Try using this one.
Alan
On advice of the PPaO “after chapter 6” reading, I enjoyed putting this ball_tracking.py on my Raspberry Pi 3B robot with picamera v1.3.
I printed out a tennis_ball, and used range-detector on frames showing the printed tennis ball to get the best detection / masking possible under various lighting. My room has a lot of false detection contours but holding the ball print close to the camera with good lighting works well.
Interestingly, performance without waitKey(1) and any imshow() in the loop is around 940ms per frame, and 1 fps with the waitKey(1) and imshow(image) and imshow(mask).
Off to take the chap 6 quiz…
Adrian Rosebrock
The reason the “cv2.imshow” call results in slower code is due to the I/O and rendering of pushing the frame to your screen to be displayed. Without having to do that operation the code can run much faster.
david
Hi, Adrian. Thank you for a nice project, but I have one problem. I installed imutils and OpenCV, but when I run the program, the Raspberry Pi says: ImportError: No module named cv2 (or imutils). How can I fix it? Thanks a lot.
Adrian Rosebrock
Are you using Python virtual environments? If so, make sure you access them before running the script.
Deba
Hi Adrian,
I am new to this computer vision world and am reading your blog on computer vision. I have a question here. What is the use of converting the BGR color to HSV in the code
hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV) ?
And how do you calculate the range used to blur the region:
blurred = cv2.GaussianBlur(frame, (11, 11), 0)
Adrian Rosebrock
1. We’ve defined color thresholds for the green ball. It’s easier to work with color thresholds in the HSV color space.
2. We blur the frame with an 11×11 kernel.
If you are new to the world of computer vision and image processing I would recommend reading Practical Python and OpenCV so you can learn the fundamentals.
mateoo
what is the algorithm that you used in ball tracking
Adrian Rosebrock
It’s just simple color thresholding.
Rishi
Hi Adrian, thanks for the informative article. I’m new to video object detection. I need to know the software tools required for detecting objects in a video (indoor household objects) using a smartphone camera (an Azure-based solution).
Can you please advise what tools/technologies are required for this? I can think of Azure Machine Learning Studio, OpenCV, Python, and your library PyImageSearch.
A) Is there any other tool I need to install? (DLVM?)
B) Will Microsoft Cognitive Services and SDKs also be needed once OpenCV is installed?
Thanks for your immediate attention,
-R V
Adrian Rosebrock
Hey Rishi — I would suggest you use Microsoft’s DSVM.
Niranjan Adhyapak
Hello Dr. Adrian,
You always raise the bars and never cease to amaze me with your tutorials.
I implemented this for a specific use-case at my workplace and it works like a charm.
I wanted to store the trajectory followed by the ball (the red line created) as a sequence of coordinates. Is that possible? If so, could you please help me with it?
Thanks,
Niranjan
Adrian Rosebrock
The “deque” object stores your (x, y)-coordinates. You can write them out to disk if you want to save them.
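As a sketch of that suggestion (the coordinates and filename below are hypothetical):

```python
import csv
from collections import deque

# a hypothetical contrail of (x, y) centroids as the tracker records it
pts = deque([(120, 80), (125, 78), (131, 75)], maxlen=64)

# write one row per point so the trajectory can be analyzed later
with open("trajectory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["x", "y"])
    writer.writerows(pts)
```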
Rishabh Dutt
Hi Adrian,
I am a big fan of your work, and believe me when I say that nobody could have put these tutorials together as well as you have. I am also following your 17-day tutorials. Keep up the good work!
I wish to track the speed of the ball as it moves. Can you help with how I should implement it?
Adrian Rosebrock
Hey there — tracking an object’s speed is covered inside Raspberry Pi for Computer Vision. I suggest you start there.
Mohammed Ahmed
Hello Adrian,
I am a big fan of yours from India. I am working on a project in which I am detecting balls of different colors. The issue I am facing is that as the light intensity in the background changes, the balls aren’t detected. Right now I am converting the RGB image into HSV format, then eroding and dilating it, and after that checking for circles using Hough circle detection. Can you suggest some solutions? Thanking you in anticipation for your kind cooperation.
-Mohammed Ahmed
Olli
Hello Adrian,
In this post you have determined the green color HSV upper and lower values beforehand by using the range-detector script. I managed to do the same for a video including a white colored ball.
My aim would be to detect a white colored ball from a live stream or a set of different videos. I was wondering how feasible this solution is for different kinds of videos (different lighting, different colors). Does one always have to determine the HSV upper and lower values beforehand using the range-detector script?
Ehud Horowitz
Hi Adrian,
Thanks for the nice example!
I did some work testing it in outdoor scenarios and the results were not so good.
The light changes the actual color of the ball a lot, and there were many other elements that seem to have the same color…
Do you know an AI method for this that is already trained to detect balls?
Thanks!
Ehud
George
Hi Adrian:
Big fan here.
I am trying to find a method to detect a basketball, which is orange, but most wooden basketball courts also look yellow or orange-ish. Would your method of HSV thresholding work in this application? And what are some other methods for object tracking?
Thanks, George
Adrian Rosebrock
I wouldn’t recommend either as color thresholding won’t be robust enough. Instead, try using an object detector such as Faster R-CNN, SSDs, YOLO, or RetinaNet. If you need a pixel-wise segmentation Mask R-CNN would be a good option too.
Eugene
Hello Adrian, I am in the process of studying OpenCV and ML. Your blog is awesome! In the ball tracking task, I want to change the ball color from green to red. I have the RGB color of the ball (90, 40, 47), but I can’t calculate the right HSV color thresholds (lower and upper). Can you tell me how I can do it? Thanks for helping and for your blog!
HienTran
Hello Adrian
I want to add some code to your ball_tracking.py to calculate or estimate the speed of the ball. I want to ask: does OpenCV have any functions for that? If not, how can I do it?
Adrian Rosebrock
Take a look at Raspberry Pi for Computer Vision which covers object speed estimation.
Christopher Astbury
Hi Adrian
I followed your OpenCv4 installation on Ubuntu 18.04, and the installation was perfect.
I then proceeded to your ball tracking example, and it works very well. If I were to change the object to an orange, for example, what would I need to do to ensure tracking the orange worked?
Thanks for an excellent website and tutorial.
Adrian Rosebrock
Thanks Christopher — and congrats on getting OpenCV installed! If you need a next step I would suggest working through Practical Python and OpenCV — that book will teach you the basics of computer vision and image processing through the OpenCV library.
Jam
What if the ball contains many colors, like red, blue, green…?
Is there another way to find balls?
Adrian Rosebrock
If that’s the case I would recommend you train your own custom ball detector, one that isn’t dependent on colors. HOG + Linear SVM would be a good starting point.
Khoi
Hi Adrian,
Thanks a lot for the post. I’m so surprised that your program can operate at 32 FPS. I’m working on an object tracking quadcopter. I want my quadcopter to detect and track the ball.
I found a lot of documents about object tracking, most of them use deep-learning algorithms to detect the objects, but can only perform at <10 FPS.
Could you give me advice on whether I should use your program or a deep-learning algorithm?
Adrian Rosebrock
The ball tracking method used here does simple color thresholding — the algorithm itself has no semantic understanding of the image. Deep learning-based object detectors are far more powerful but they are also far slower.
Alvabra
Hello Adrian,
Thanks for your nice tutorial!
I’m still confused about the use of the GaussianBlur function. In what color space should we apply GaussianBlur? Is there any difference between applying GaussianBlur before the BGR to HSV conversion and after it?
Thanks in advance!
Adrian Rosebrock
You can apply Gaussian blurring in any color space. Normally you see it applied to RGB/BGR color channels before any other conversion though.
Nonal
Hi Adrian, I have a question. How do I use it with Tkinter?
Adrian Rosebrock
See this tutorial.
Oscar
Hello Adrian,
Thanks for this awesome tutorial. You’ve made me a hero at my job.
I just need to track multiple objects period, regardless of color. How can I achieve that?
Thanks again.
Adrian Rosebrock
I would suggest you follow my tutorials on multi-object tracking.
Levi
Hey Adrian,
Thanks for this article. It has been helpful for teaching me to use OpenCV to detect balls in the FRC game this year. There are a few things I am having a bit of trouble with. The first is that each ball has a black FIRST logo that runs across the side of each ball, sometimes splitting the ball up into two contours. The second is that if two balls are pressed against each other, one contour will surround both of them, perceiving one ball. Finally, similarly to the first one, the ball may be dissected by a beam on the robot depending on the location of the camera, which would cause two contours to be drawn. What do you think would be the easiest way to solve these problems?
Thanks!
Char
Hello, I’m trying to detect a bowling ball without relying on color in a video file. Got any tips?
Adrian Rosebrock
I would suggest you train a dedicated object detector. HOG + Linear SVM would be a good start which is covered in the PyImageSearch Gurus course. Otherwise, deep learning-based detectors are covered in Deep Learning for Computer Vision with Python.
Kimhak
Are you using 2D webcam or 3d camera?
Adrian Rosebrock
A 2D camera.
Vincenza
Hi Adrian, thanks for your blog. I get an error when I try to capture video from my webcam. I tried with cv2.VideoCapture and with VideoStream; the camera is activated (the camera light is on), but the returned frame is NoneType. I am using OpenCV 4.2.0 and imutils 0.5.3. How can I resolve that?
Thank you
Adrian Rosebrock
It sounds like OpenCV cannot access your webcam, causing the frame read to return “None”. You can read more about “NoneType” errors, including how to resolve them, here.
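A sketch of the defensive pattern for this situation (the DummyStream class below is a hypothetical stand-in for a real VideoStream, used only to simulate a dead camera):

```python
class DummyStream:
    """Stand-in for a VideoStream whose camera cannot be read."""
    def read(self):
        return None

vs = DummyStream()
frame = vs.read()

# without this guard, the next cv2 call on `frame` raises a cryptic error
if frame is None:
    print("camera frame is None; check the webcam index and drivers")
```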
Julien Guégan
Did you think about using the Hough transform for detecting circles in the image? Using the ball’s color doesn’t seem very robust (if there are other objects of the same color in your environment).
Adrian Rosebrock
Hough circle methods are really hard to tune and prone to false positives. For a more reliable detector, consider using HOG + Linear SVM or a deep learning-based detector.