Low-cost autonomous taxis for the world: AMA with Tarin Ziyaee, Director of AI at Voyage

Mikel Bober-Irizar
Imploding Gradients
6 min read · Apr 21, 2017


Voyage is a recently launched self-driving taxi startup, led by Oliver Cameron, with a mission to bring ultra-low-cost autonomous taxis to the world. Today in the Udacity Self-Driving Car Slack channel, we had an “Ask Me Anything” session with Tarin Ziyaee, Director of AI at Voyage and former deep learning researcher at Apple.

Here is a compilation of all the answers that both Tarin and Oliver gave throughout the session!

Q: If you were to take someone and train them from scratch for the next few months to be a machine learning engineer, how much would that training focus on mathematics versus computer systems and programming?

Tarin: Good question! By far the most important factor I look for is a good understanding of fundamentals. This is really important when it comes to being creative when designing algos. If there is no strong intuition on fundamentals, then creativity can take a hit. In the end, an algorithm designer / researcher is an artist! :)

Q: What advice would you give someone just starting in deep learning?

Tarin: Passion Passion Passion! There is certainly something to be said about raw grit. Do not be discouraged if you do not understand a concept. Always always push through, learn, and ask questions. Some people might get discouraged by what “everyone else” is doing — but do not think like that. Focus on yourself! :)

Similarly, I would also recommend not being distracted by the litany of papers, blogs, etc. Definitely track them, but focus on fundamentals and intuition: Think about it like learning a language. Do backprop on paper. Then code it. Then do it for a CNN, code it, etc etc.
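
To make the “do backprop on paper, then code it” exercise concrete, here is a minimal sketch: a tiny two-layer network trained with hand-written backprop in NumPy. The toy XOR-style data, layer sizes, and learning rate are my own choices for illustration, not anything Tarin or Voyage prescribes.

```python
import numpy as np

# Toy XOR-style data; the third input column is a constant bias feature.
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # input -> hidden
W2 = rng.normal(size=(4, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: the chain rule written out by hand,
    # exactly what you would derive on paper first.
    d_out = (out - y) * out * (1 - out)    # dLoss/dz2 for squared error
    d_h = (d_out @ W2.T) * h * (1 - h)     # dLoss/dz1

    W2 -= h.T @ d_out
    W1 -= X.T @ d_h

print(out.round(2))   # should approach [[0], [1], [1], [0]]
```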

Q: I work in the automotive field, in the area of testing. How do you think it is possible to test the reliability of a system that has AI functionality? Especially when this testing should prove the manufacturer is not liable for a particular accident? Is this even necessary to prove when it comes to AI? Are standards like ISO 26262 relevant for systems with AI?

Tarin: You touched on a very important point: current standards have come from assumptions about a human being at the wheel. There are honestly no easy answers to those questions as such — I think we will have to be more granular as the technology progresses, and update our rules as we go. There is still no clarity about what form of AI will eventually be deployed for SDCs.

Q: Would you rather fight 100 duck-sized horses or 1 horse-sized duck? (I’ll own up, this was my question)

Tarin: I would rather design algos that will be able to detect and track 100 duck-sized horses! This is actually an example of where the classical “train a classifier for x” approach fails, and we need to think about building generic obstacle detectors.

Q: Are there any types of neural network (say recurrent vs convolutional) you think students of deep learning should definitely be trying to learn?

Tarin: I always recommend the following: deeply understand fully connected networks, followed by CNNs, followed by RNNs, and then GANs and RL. However, you can get really, really far with CNNs alone as well.
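
For readers who want to see those building blocks side by side, here is a minimal sketch of the three layer types in PyTorch. The framework choice and all shapes are mine, purely to show what each block consumes and produces.

```python
import torch
import torch.nn as nn

x_img = torch.randn(8, 3, 32, 32)   # batch of 8 RGB 32x32 images
x_seq = torch.randn(8, 20, 16)      # batch of 8 sequences of 20 steps

# Fully connected: every input unit connects to every output unit.
fc = nn.Linear(3 * 32 * 32, 10)
print(fc(x_img.flatten(1)).shape)   # torch.Size([8, 10])

# Convolutional: shares weights across spatial positions.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
print(conv(x_img).shape)            # torch.Size([8, 16, 32, 32])

# Recurrent: shares weights across time steps.
rnn = nn.GRU(input_size=16, hidden_size=32, batch_first=True)
out, hidden = rnn(x_seq)
print(out.shape)                    # torch.Size([8, 20, 32])
```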

Q: Will Voyage be using deep learning for things like path planning/decision making/behaviour as opposed to just deep learning for perception?

Tarin: We will certainly not shy away from any technology. At Voyage we try to be as non-ideological as possible about the tech we use to get there.

Q: What do you think about how far we can get, in terms of autonomous driving functionality, with the sensors available on the market today? Please disregard the sensors from Luminar or any emerging sensors from startups. My question is focused on: how much can AI improve the current systems?

Tarin: This is actually a GREAT question. Sensors can limit algorithms, which can limit perception, which can limit path planning, etc. etc., all the way down. Part of the challenge with SDCs is actually trying to make do with the sensor limitations we have today. There are also a lot of un-sexy details related to how to time-sync different sensors to each other, if that is even necessary, and the corner cases therein. Like I said — great question.

Some of the many sensors on the Voyage Car
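
The time-sync point is easy to underestimate. As one illustration (not a description of Voyage’s pipeline), here is a minimal sketch of pairing each camera frame with the nearest lidar sweep by timestamp, rejecting pairs that are too far apart. The sensor rates and the 50 ms tolerance are assumptions for the example.

```python
import bisect

def nearest_match(camera_ts, lidar_ts, max_skew=0.05):
    """Pair each camera timestamp with the closest lidar timestamp.

    Both lists are sorted, in seconds. Pairs further apart than
    max_skew (50 ms here, an arbitrary choice) are dropped.
    """
    pairs = []
    for t in camera_ts:
        i = bisect.bisect_left(lidar_ts, t)
        # Candidates: the sweep just before and just after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(lidar_ts)]
        if not candidates:
            continue
        best = min(candidates, key=lambda j: abs(lidar_ts[j] - t))
        if abs(lidar_ts[best] - t) <= max_skew:
            pairs.append((t, lidar_ts[best]))
    return pairs

# Hypothetical clocks: a 30 Hz camera against a 10 Hz lidar.
cam = [k / 30 for k in range(9)]
lid = [k / 10 for k in range(3)]
print(nearest_match(cam, lid))
```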

Q: What are the most commonly used frameworks for deep learning? There’s obviously TensorFlow, but which others are widely used in the industry? Also, does the framework really matter, or are concepts more important?

Tarin: Yes, there are certainly many frameworks, and the community hasn’t really settled on one or the other. What matters is what a developer is most comfortable with, and how fast they can code and train on it. At the end of the day a model is a model. If a dev can get their model running on a car with minimal fuss, then that is the method that should be used. Rapid prototyping is key.

Q: Do you plan on labelling your own training data, or do you have any clever strategies to get labelled image data?

Oliver: We actually plan to open source a tool to annotate point cloud maps efficiently. Watch this space!

A LIDAR point-cloud collected from the Voyage Car

Q: What is the ratio of real-world raw training data (images/radar) collected from the vehicles versus simulated/augmented data that the models are trained on? How has this changed over time?

Tarin: Well, certainly with simulations the data is near “infinite”, although questions can be raised in regards to its diversity. The opposite can be said about real data (very diverse, but hard to acquire/label). When it comes to training DNNs, though, there has been interesting work on hybrid approaches.
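
One common shape for such a hybrid approach (an illustration on my part, not a description of Voyage’s training setup) is to mix a small fraction of scarce real examples into every batch of plentiful simulated ones:

```python
import random

def mixed_batches(real_data, sim_data, batch_size=32, real_fraction=0.25):
    """Yield batches blending scarce real data with abundant sim data.

    real_fraction is a knob to tune: too low and the model overfits
    the simulator's quirks; too high and the scarce real data is
    overused and memorised.
    """
    n_real = max(1, int(batch_size * real_fraction))
    n_sim = batch_size - n_real
    while True:
        batch = random.sample(real_data, n_real) + random.sample(sim_data, n_sim)
        random.shuffle(batch)
        yield batch
```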

Q: Do you envision the availability of a kit to transform a traditional car into a self-driving car?

Oliver: It’ll be cheaper/better to just summon a Voyage!

Q: Where do you stand on DL versus traditional control theory for vehicle autopilot? Do you think that an end-to-end DL autopilot is viable, or would it be a mix between a DL perception module and classic approaches?

Tarin: End-to-end can mean a lot of things to different people, so let me say this: Is it possible to learn the statistical mapping between image sequences and steering angles? Yes. Is it useful? Is it safe? — This is where it gets very grey. How do you enforce global planning constraints on such a mapping? Ultimately all techniques will have to be assessed based on how we use the vehicle, and where it is being deployed.
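
For context, the “statistical mapping” here is often demonstrated with a small convolutional network that regresses a steering angle directly from a camera frame, loosely in the style of NVIDIA’s PilotNet. The sketch below is my own simplification (single frames rather than sequences, and the layer sizes are assumptions):

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Camera image in, one steering angle out."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 3 * 20, 100),  # 64 x 3 x 20 after the convs, for a 66x200 input
            nn.ReLU(),
            nn.Linear(100, 1),            # the predicted steering angle
        )

    def forward(self, img):
        return self.head(self.features(img))

model = SteeringNet()
angle = model(torch.randn(1, 3, 66, 200))   # 66x200 input, as in the NVIDIA paper
print(angle.shape)   # torch.Size([1, 1])
```

Nothing in this stack knows about a route or a global plan, which is exactly the gap Tarin points at.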

Q: What do you think of the possibility of using “direct mapping of images to steering angles” alongside traditional approaches? Or is it just a waste of time then?

Tarin: Not so much a waste of time, as much as potentially being ill-posed: for example, a driver going down a middle lane with a car in front of them may choose to turn right, or turn left. Which one is the correct one? Aggregated over all the data, both possibilities are sensible. So I see it as potentially useful as a tactical decision-maker, maybe to guide path plans/controls.
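
The ill-posedness is easy to demonstrate numerically. If half of the recorded drivers steered left around an obstacle and half steered right, a network trained with mean-squared error converges toward the conditional mean of the targets, which is the one clearly wrong answer. The numbers below are made up for illustration:

```python
import numpy as np

# Hypothetical labels for visually identical situations: an obstacle
# ahead, 500 drivers steered left (-30 deg), 500 steered right (+30 deg).
angles = np.array([-30.0] * 500 + [30.0] * 500)

# The MSE-optimal prediction for identical inputs is the mean target:
print(angles.mean())   # 0.0 -> "drive straight", into the obstacle
```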

For more news about Voyage, follow Oliver Cameron and Tarin Ziyaee on Twitter!

For more machine learning-related AMAs, analysis and posts, follow Imploding Gradients on Medium!
