In this post, I interview my former UMBC lab mate, Dr. Huguens Jean, who was just hired to work at Google’s Video AI Group as an artificial intelligence researcher.
Huguens shares his inspirational story, starting from Port-au-Prince, Haiti where he was born and raised, to his schooling at UMBC, and now to his latest position at Google.
He also shares details on his humanitarian efforts, in which he successfully applied computer vision and deep learning in rural Rwanda to help count footfall traffic.
The data he and his team gathered through footfall traffic analysis was used to help a non-profit organization construct infrastructure, such as bridges and roads, to better connect villages in sub-Saharan Africa.
Let’s give a warm welcome to Dr. Huguens Jean as he shares his story.
An interview with Dr. Huguens Jean, video AI researcher at Google
Adrian: Hi Huguens! Thank you for doing this interview. It’s such a wonderful pleasure to have you here on the PyImageSearch blog.
Huguens: It’s my pleasure to be here with you.
Adrian: Can you tell us a bit about yourself? Where did you go to school and how did you become interested in computer vision?
Huguens: I’m from Port-au-Prince, Haiti. I went to Institution Saint-Louis de Gonzague.
After the Haitian earthquake of 2010, I filmed a very intimate documentary with Philip Knowlton, an alum of UMBC. The film tells the story of two brothers keeping a promise to their grandfather. In it, I talk more about my family and life in Haiti.
When I came to the United States in 1997, I went to Howard High School. Coach David Glenn introduced me to the high jump. I was recruited by Coach Jim Frogner and Coach David Bobb of UMBC. That led to me studying Computer Engineering and Electrical Engineering at UMBC.
I worked at NASA during graduate school. I went into the private sector after the Haitian Earthquake of 2010 and began working as a Software Engineer. I think the tragedy caused me to stop believing in my advisors’ vision of attaining a PhD.
One day, the Dean of the UMBC graduate school, Dr. Janet Rutledge, took me out for lunch. She said: “You’re making me look bad.” I quit my job and went to see Dr. Tim Oates. We won some research funding and I eventually graduated in 2015 with a PhD.
I didn’t believe I could do it until I went to Tanzania. I read about Kuang Chen’s research at UC Berkeley. His work inspired me. At Captricity, he and I wrote a patent together on analyzing the content of digital images, and I lived in Oakland, CA for about 3 years after graduating.
Adrian: You were recently offered a position at Google’s Video AI group, congratulations! How did you land such an amazing opportunity?
Huguens: I reconnected with my recruiter from two years ago. After failing a Google interview, you have to wait a year before trying again. I had tried two years ago and was not successful. I interviewed at the NY office, and my performance was not that strong. I knew it going in.
But 6 weeks ago, I was a different engineer. I felt different about computer science. I studied my ass off for about two weeks prior to interviewing.
I followed their guide and focused on really knowing data structures: lists, stacks, queues, trees, heaps, graphs, and tries. I practiced algorithms like DFS, BFS, A*, and sorting. I wanted to be ready for whatever came up. As for the computer vision and data science part, a lot of it I learned from you.
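Note: For readers who haven’t done this kind of interview prep, here is a minimal sketch of the flavor of problem Huguens is describing: a breadth-first search (BFS) that finds a shortest path in a small graph. The graph and function names are purely illustrative and are not taken from his interview.

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    # Breadth-first search over an adjacency-list graph (dict of node -> list of neighbors).
    # Returns a shortest path from start to goal as a list of nodes, or None if unreachable.
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Example usage on a tiny graph
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs_shortest_path(graph, "A", "D"))  # ['A', 'B', 'D']
```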
Adrian: We all know Google is notorious for challenging interviews. What was the interview process like for a computer vision/deep learning job?
Huguens: As you put it, it was notoriously difficult. In one week, I did 7 technical interviews: 5 video interviews in one day, plus 2 technical screens, one at Google and the other at Facebook.
At Google, I was interviewing for two positions at the same time: a machine learning generalist role and a data science role. For the machine learning generalist role, the first two interviews were on data structures. Problem solving with data structures takes practice. You have to think fast and avoid overthinking the solution. I’m not the best test taker, and solving problems in a Google Doc without a way to run the code is nerve-wracking.
The third interview was on Googleyness. The fourth and fifth interviews were on computer vision. This happened because my recruiter made a special request to make sure I was given a fair shot at showing my strengths in machine learning. The field is vast.
There is so much to know, and Google was ready to ask about NLP and reinforcement learning. I’m not that strong in those areas.
For the data science role, after the technical screen, Google felt that I would be a better fit for their Video AI Group.
Adrian: Before working with Google, you were involved with some incredible humanitarian efforts that utilized computer vision and deep learning in rural sub-Saharan Africa. Can you tell us about this project and how you came to submit a paper for publication on this topic?
Huguens: A researcher hired Synaptiq to work on this project. Synaptiq.ai is owned by Dr. Tim Oates, who advised both you and me as PhD students at UMBC.
I needed to be close to my daughter and working locally in Maryland provided the right opportunity. Dr. Oates needed someone for an OCR project, and I started working there as a consultant. Tim and I did similar research in the past.
My work there eventually led me to this project. The researcher had set up video cameras to watch pedestrians cross bridges in rural sub-Saharan Africa.
At first, the researchers tried using the code from your people counting tutorial, but the pre-trained MobileNet SSD it uses to detect objects performed poorly on this footage. With the help of Synaptiq, we were able to upgrade the detector to YOLOv3 running on a GPU and reinforce the centroid tracker with Deep SORT.
Note: Originally I had included a figure demonstrating the work of Huguens and the research team in action; however, the team asked I take down the figure until their paper is officially published.
Referencing both tutorials in our paper was truly an honor. Using these new models on GPU, we were able to extract meaningful information from hours of video in a timely manner.
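Note: To give a concrete sense of the detector upgrade Huguens describes above, here is a minimal sketch of loading YOLOv3 through OpenCV’s DNN module and keeping only “person” detections. The file paths, thresholds, and helper name are placeholders of my own, and the team’s actual pipeline pairs detections like these with Deep SORT for tracking rather than the standalone example below.

```python
import cv2
import numpy as np

# Load YOLOv3 with OpenCV's DNN module (config/weights paths are placeholders)
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
# Uncomment to run on a CUDA-capable GPU (requires OpenCV built with CUDA support):
# net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
# net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

ln = net.getLayerNames()
out_layers = [ln[i - 1] for i in net.getUnconnectedOutLayers().flatten()]

def detect_people(frame, conf_thresh=0.5, nms_thresh=0.4):
    """Return bounding boxes [x, y, w, h] for people detected in a frame."""
    (H, W) = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(out_layers)

    boxes, confidences = [], []
    for output in outputs:
        for detection in output:
            scores = detection[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            # Class 0 in the COCO label map is "person"
            if class_id == 0 and confidence > conf_thresh:
                (cx, cy, w, h) = detection[0:4] * np.array([W, H, W, H])
                boxes.append([int(cx - w / 2), int(cy - h / 2), int(w), int(h)])
                confidences.append(confidence)

    # Non-maxima suppression collapses overlapping boxes for the same person
    idxs = cv2.dnn.NMSBoxes(boxes, confidences, conf_thresh, nms_thresh)
    return [boxes[i] for i in np.array(idxs).flatten()] if len(idxs) > 0 else []
```

In a footfall counting pipeline, boxes like these are then handed to a tracker (Deep SORT in their case) so each pedestrian keeps a consistent ID across frames, which is what makes the counts reliable.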
Adrian: What was the most difficult aspect of your rural footfall counter project and why?
Huguens: Even on a GPU machine, processing hours of video to collect the data took a long time. The end of my contract was approaching, and we needed an NVIDIA Docker container that could automatically run the code on the hours of remaining footage on an RTX 2080 machine at UMBC, otherwise known as the Synaptiq Machine. That’s when Tim and another mutual friend of ours, Zubair Ahmed, got things over the finish line.
Adrian: If you had to pick the most important technique you applied during your research, what would it be?
Huguens: If you’re talking about computer science techniques, recursion wins. But if you are talking about computer vision and machine learning, clustering motion vectors is a good one.
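Note: Huguens doesn’t go into detail here, so the following is only a rough sketch of one common way to cluster motion vectors: compute dense optical flow between consecutive frames with OpenCV, then group the moving pixels with k-means. The feature choice, thresholds, and parameters are my own assumptions for illustration, not the exact technique from his research.

```python
import cv2
import numpy as np

def cluster_motion_vectors(prev_gray, gray, k=2, mag_thresh=1.0):
    """Group dense optical-flow vectors into k motion clusters (illustrative sketch)."""
    # Dense optical flow between two consecutive grayscale frames (Farneback method)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Keep only pixels with noticeable motion
    mag = np.linalg.norm(flow, axis=2)
    ys, xs = np.where(mag > mag_thresh)
    if len(xs) < k:
        return None, None

    # Feature vector per moving pixel: image position plus flow components
    features = np.column_stack(
        [xs, ys, flow[ys, xs, 0], flow[ys, xs, 1]]).astype(np.float32)

    # k-means groups coherent motions, e.g. pedestrians walking in opposite directions
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(features, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    return labels.ravel(), centers
```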
Adrian: What deep learning/computer vision tools and libraries do you normally use? Which ones are your favorites?
Huguens: I use a lot of OpenCV. It is by far my favorite Python library for computer vision. As for deep learning, again, a lot of what I know I learned from you. I’m a big fan of Keras and TensorFlow.
Adrian: What advice would you give to someone who wants to perform computer vision/deep learning research but doesn’t know how to get started?
Huguens: After finishing graduate school, I wasn’t sure where to start myself until I purchased a lot of materials from PyImageSearch and started following your blog. We learn by doing. You say that in your book. That’s no lie.
If you want to become really good at something, you have to practice. I think like an athlete. When learning something new, I try to push more weight than I did the day before. My mind has the benefit of not getting sore like my body, so I don’t have to skip a day. I get on LinkedIn or Facebook and search for an eye-catching repository to fork or some amazing tech/book to read next.
Adrian: You’ve been a longtime reader and customer of PyImageSearch, having read Deep Learning for Computer Vision with Python, Raspberry Pi for Computer Vision, and gone through the PyImageSearch Gurus course. How have these books and courses helped you throughout your career?
Huguens: They’ve helped me enormously. Like my friend, Salette Thimot-Campos, CEO of Studio Jezette, writes on Facebook:
The only way to silence the doubt has been through education. The more I learn, the more powerful and connected to the world I feel. I’m exploring topics that, four years ago, I never thought I had any business inquiring about. And with each piece of tech terminology and each function I demystify and master, I feel more empowered and brave.
My experience with your books and blogs echoes her words. A PhD only helps to remind me that I was always good enough to learn anything.
I’m not sure if you remember Professor Fow-Sen Choa. He co-advised me with Tim. He would say “breadth and depth.” To me, that always translated to knowing a lot about one thing and a little about everything. He encouraged me to always be curious.
In addition to providing your readers with well-commented code, you have a creative way of explaining things, a lot of the time in pictures and videos. I wait on your next blog post like the next iPhone because I have no idea what’s coming. Sometimes I’m busy, but every Monday morning I at least try to remember what you did. You just never know where you might see a similar idea again.
Adrian: Would you recommend Deep Learning for Computer Vision with Python, Raspberry Pi for Computer Vision, and the PyImageSearch Gurus course to other developers, students, and researchers who are trying to learn computer vision and deep learning?
Huguens: Absolutely. Learn the fundamentals, like “wax on, wax off” in the movie The Karate Kid. I had to install OpenCV many times on Linux. Doing it for GPU machines takes patience. Training deep learning models takes patience, but experiencing the magic is worth it.
Adrian: Is there any advice you would give to someone who wants to follow in your footsteps, learn computer vision and deep learning, and then land an amazing job at Google?
Huguens: I encourage people to think of their education like a sport, a mental one, something like chess, and to always be open to learning from people both older and younger than you. Practice. Practice. Practice, and shoot for the Moon.
Adrian: If a PyImageSearch reader wants to chat, what’s the best place to contact you?
Huguens: They can follow me on LinkedIn, email me at me@huguensjean.com or check out my website at huguensjean.ai.
Summary
In this blog post, we interviewed Dr. Huguens Jean, an artificial intelligence researcher at Google’s Video AI Group.
Huguens and I were lab mates during our time in graduate school at UMBC. We’ve been friends ever since (he even came to my wedding).
It’s truly an honor to share Huguens’ work; he’s made a real difference in the world.
If you want to successfully apply computer vision and deep learning to real-world projects (like Huguens has done), be sure to pick up a copy of Deep Learning for Computer Vision with Python.
Using this book you can:
- Successfully apply deep learning and computer vision to your own projects at work
- Switch careers and obtain a CV/DL position at a respected company/organization
- Obtain the knowledge necessary to finish your MSc or PhD
- Perform research worthy of being published in reputable journals and conferences
- Complete your hobby CV/DL projects you’re hacking on over the weekend
I hope you’ll join myself, Dr. Huguens Jean, and thousands of other PyImageSearch readers who have not only mastered computer vision and deep learning, but have taken that knowledge and used it to change their lives.
I’ll see you on the other side.
To be notified when future blog posts and interviews are published here on PyImageSearch, just enter your email address in the form below, and I’ll be sure to keep you in the loop.