The three companies poised to deploy Level 4 AVs first: a 4-part series
Autonomous Vehicles (AVs) seem to be everywhere these days, as technology and regulations accelerate towards live pilots and deployment readiness. What will the initial waves of adoption look like? What does that mean for different types of cities? Which players are best positioned and why? These are the types of questions I’ve been exploring during my final year at Harvard Business School with Professor John Macomber, whose work spans infrastructure finance, economic development, urban planning and cutting-edge technology. That exploration sparked this 4-part miniseries on near-term AV deployment, incorporating hours of interviews with experts in the field.
If you’re already versed in AV fundamentals, jump ahead to the following subjects:
Part II: Where in the world will Level 4 autonomous vehicles land first? How to segment cities and what each type means for the future of AV.
Part III: Integrated vs modular approaches to AV development: comparing Tesla, Waymo & Uber. A closer look at these contrasting business models.
Part IV: The first to the Level 4 finish line: Tesla’s race to lose? The keys to near-term deployment and who fares best.
AVs offer a chance to dramatically enhance the way our world works. For starters, the technology strives to address the estimated 20–50 million road crash injuries and fatalities each year with a ~10x safer experience, improve mobility options for the disabled and elderly, decrease congestion and commuting time, and reclaim swathes of cities dedicated to parking.
Those are profound changes and tech players and carmakers alike are preparing for a shifting landscape. Partnerships captured the headlines in 2016 and 2017 — a smattering of which are below — as players sought to integrate new and evolving autonomous capabilities. These partnerships have created a motley value chain with many hedged bets and vague advantages.
- Pilots provide ample data for deep learning-informed decision making software. This data serves as a critical input to train the algorithms that power and learn from AV rides — turning camera images and sensor data into the rationale behind car turns and movement. While simulation data is powerful, existing deep-learning algorithms are quite fragile, so the more real-life scenarios across weather conditions, human and animal encounters and infrastructure anomalies, the better. For more on the underpinnings of AV software (deep learning, artificial neural networks and computer vision), read on here.
- Pilots serve as a bridge to regulators and the public. Cities and end users have yet to fully understand how AVs will mesh with our existing society. This has prompted questions like: will humans be able to “bully” AVs into always stopping for them, and will AVs drive around aimlessly because that’s cheaper/easier than parking (zombie cars)? As such, pilots allow consumers, regulators and urban planners to interact with the technology and become comfortable with what our future will look like.
That said, these pilots raise the question: now what? With the technology improving daily, how fast and how broadly should companies look to deploy? And where should AVs go next? For example (in Part II), how do companies weigh regulatory climate against total addressable market size?
What does deployment look like?
Numerous players — Ford, Toyota, Volvo, Waymo to name a few — have stated publicly that they intend to jump straight to Level 4 autonomy. That essentially means they want to bypass Level 3 (levels explained below), where cars are autonomous but require a backup driver at all times.
Ford’s AV expert Jim McBride explained this move: “We’re not going to ask the driver to instantaneously intervene — that’s not a fair proposition.” That is because situational awareness takes 35 seconds on average to achieve, but the technology likely only allows for 10 seconds, meaning the average backup driver would fail to re-engage. Recent tests have shown that trained backup drivers struggle to even stay awake in an AV.
If we assume Level 4 (we’ll cycle back to this assumption in Part IV), then we’re likely targeting the 90% case, not the 99% case. That is, we are trying to maximize utility within specific “operational design domains” (ODDs), not in every scenario and on every road. According to NHTSA’s 2016 guidance, ODDs are defined by roadway type, geographical location, speed range, lighting conditions, weather conditions, etc. So a basic ODD might be highway driving in clear conditions. By constraining AV deployment within specified ODDs, AVs will be “geofenced” to routes or zones, and possibly by time or weather.
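To make the ODD idea concrete, here is a minimal sketch of how a dispatcher might gate trips against one narrow ODD. The constraint dimensions (roadway type, geofenced zone, speed, weather, lighting) follow NHTSA’s 2016 guidance as described above; all names, zones, and thresholds are illustrative assumptions, not any company’s actual system.

```python
# Hypothetical ODD gate: a trip is dispatched only if every constraint
# of the operational design domain is satisfied. All values illustrative.
from dataclasses import dataclass

@dataclass
class TripRequest:
    roadway_type: str    # e.g. "highway", "urban_arterial"
    zone: str            # geofenced zone identifier
    max_speed_mph: int
    weather: str         # e.g. "clear", "rain", "snow"
    daylight: bool

# One narrow ODD: highway driving, daytime, clear conditions only.
HIGHWAY_CLEAR_ODD = {
    "roadway_types": {"highway"},
    "zones": {"zone_a", "zone_b"},     # assumed geofence
    "speed_limit_mph": 65,
    "weather": {"clear"},
    "requires_daylight": True,
}

def within_odd(trip: TripRequest, odd: dict) -> bool:
    """Return True only if every ODD constraint is satisfied."""
    return (
        trip.roadway_type in odd["roadway_types"]
        and trip.zone in odd["zones"]
        and trip.max_speed_mph <= odd["speed_limit_mph"]
        and trip.weather in odd["weather"]
        and (trip.daylight or not odd["requires_daylight"])
    )

print(within_odd(TripRequest("highway", "zone_a", 60, "clear", True),
                 HIGHWAY_CLEAR_ODD))   # True
print(within_odd(TripRequest("highway", "zone_a", 60, "snow", True),
                 HIGHWAY_CLEAR_ODD))   # False: outside the weather constraint
```

The design point is that every check is a hard gate: a Level 4 vehicle declines the trip rather than handing off to a human mid-ride.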
Geofencing is also well matched to the current state of the supervised learning that AVs rely on, since supervised learning depends on human labeling to create context. That labeling bottleneck is exactly what Drive.ai set out to resolve, attempting to use deep learning to supplement human labeling. Cofounder Carol Reiley said:
“We’ve learned that certain companies have a large army of people annotating…Thousands of people labeling boxes around things. For every one hour driven, it’s approximately 800 human hours to label.”
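To put that ratio in perspective, a back-of-the-envelope calculation shows how quickly labeling demand compounds across a fleet. Only the ~800:1 ratio comes from the quote; the fleet size and daily driving hours are illustrative assumptions.

```python
# Labeling demand implied by the quoted ~800 human hours of labeling
# per hour of driving. Fleet size and driving hours are assumptions.
LABEL_HOURS_PER_DRIVE_HOUR = 800   # from the Drive.ai quote

fleet_size = 100          # assumed number of cars
drive_hours_per_day = 8   # assumed hours per car per day

drive_hours = fleet_size * drive_hours_per_day
label_hours = drive_hours * LABEL_HOURS_PER_DRIVE_HOUR
print(f"{drive_hours:,} drive hours/day -> {label_hours:,} labeling hours/day")
# 800 drive hours/day -> 640,000 labeling hours/day
```

Even a modest hypothetical fleet implies hundreds of thousands of labeling hours per day, which is why narrowing the ODD (and automating labeling) matters so much.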
Thus, AVs would ideally be as attuned to their real routes as possible, and a narrow ODD allows for the highest likelihood of software awareness and success. In the short run, Level 4 AVs will be best suited for geofenced recurring short-range trips in high density areas. This will also alleviate some of the nuanced problems around deployment, such as mapping and infrastructure (e.g. clear lane lines, construction diversions).
So, those are the implications of most players going straight to Level 4. What should cities do to harness that? And how can companies sell into cities best? These questions are explored further in Part II here.
Still curious about the future of AV and who will fare best? Stay tuned for Part III to explore Waymo, Uber and Tesla’s emerging business models, and Part IV to understand how those may affect successful deployment.