Zoox is transforming mobility-as-a-service by developing a fully autonomous, purpose-built fleet designed for AI to drive and humans to enjoy.
The Perception Object Detection and Tracking team at Zoox is responsible for perceiving all people and objects capable of movement. In this role, you will work with the ML model teams to bring cutting-edge models into the vehicle stack.
In this role, you will have access to the best sensor data in the world and an incredible infrastructure for testing and validating your algorithms. We are creating new algorithms for segmentation, tracking, classification, and high-level scene understanding, and you could work on any (or all!) of these components.
About Zoox
Zoox is developing the first ground-up, fully autonomous vehicle fleet and the supporting ecosystem required to bring this technology to market. Sitting at the intersection of robotics, machine learning, and design, Zoox aims to provide the next generation of mobility-as-a-service in urban environments. We’re looking for top talent that shares our passion and wants to be part of a fast-moving and highly execution-oriented team.
Accommodations
If you need an accommodation to participate in the application or interview process, please reach out to [email protected] or your assigned recruiter.
A Final Note:
You do not need to match every listed expectation to apply for this position. Here at Zoox, we know that diverse perspectives foster the innovation we need to be successful, and we are committed to building a team that encompasses a variety of backgrounds, experiences, and skills.
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
In this role, you will...
- Define the on-vehicle architecture for producing the core tracking results from the Perception stack
- Work with both the model teams and optimization teams to develop a highly performant and efficient system that can run on the vehicle
- Work with Perception data on both the input and output of machine-learned models
- Take tracking output and integrate it into the larger behavioral system in the Autonomy stack (see the sketch after this list)
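To give a flavor of what that hand-off can look like, here is a minimal sketch of a tracked-object interface passed to a downstream behavior consumer. The class, field names, and publish_tracks function are illustrative assumptions, not Zoox's actual on-vehicle API.

```python
# Minimal, hypothetical sketch of a tracking output interface handed to a
# downstream behavior/planning consumer. All names and fields are illustrative
# assumptions, not Zoox's actual on-vehicle API.
from dataclasses import dataclass
from typing import List
import numpy as np


@dataclass
class TrackedObject:
    track_id: int             # stable identity across frames
    label: str                # e.g. "pedestrian", "vehicle", "cyclist"
    position: np.ndarray      # (x, y) in a vehicle-centric frame, meters
    velocity: np.ndarray      # (vx, vy) in meters/second
    covariance: np.ndarray    # 4x4 uncertainty over [x, y, vx, vy]
    timestamp: float          # seconds, on a sensor-synchronized clock


def publish_tracks(tracks: List[TrackedObject]) -> None:
    # Placeholder for the hand-off to the behavior layer; a real on-vehicle
    # system would serialize these onto its IPC/message bus instead.
    for t in tracks:
        print(f"track {t.track_id}: {t.label} at {t.position}, moving {t.velocity}")
```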
Qualifications
- BS or higher in Computer Science or a related field
- Fluency in C++ and Python
- Experience delivering ML model integration in latency-sensitive systems
- Experience implementing tracking systems
- Strong mathematical skills and an understanding of probabilistic techniques (a minimal example follows this list)
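As context for the tracking and probabilistic-techniques expectations above, here is a textbook constant-velocity Kalman filter predict/update cycle, one of the standard probabilistic building blocks in tracking systems. It is a self-contained sketch, not a representation of Zoox's implementation; the noise parameters are arbitrary.

```python
# Textbook constant-velocity Kalman filter for a single track -- a common
# probabilistic building block in tracking systems. Illustrative only.
import numpy as np


def predict(x, P, dt, q=1.0):
    """Propagate state [x, y, vx, vy] and covariance P forward by dt seconds."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    Q = q * np.eye(4)                      # simplified process noise
    return F @ x, F @ P @ F.T + Q


def update(x, P, z, r=0.5):
    """Fuse a position measurement z = [x, y] into the track."""
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    R = r * np.eye(2)                      # measurement noise
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P


# Example: one predict/update cycle for a track at the origin moving along +x.
x = np.array([0.0, 0.0, 1.0, 0.0])
P = np.eye(4)
x, P = predict(x, P, dt=0.1)
x, P = update(x, P, z=np.array([0.12, 0.01]))
```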
Bonus Qualifications
- Familiarity with modern Sparse-BEV joint detection and tracking models
- Experience with CUDA code
- Experience contributing to the ongoing development of new architectures based on state-of-the-art ML research
- Experience investigating, prototyping, and training/evaluating networks for causal, recursive multi-target, multi-modal tracking problems (a toy association sketch follows this list)
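As a toy illustration of the data-association step at the heart of multi-target tracking, the sketch below greedily matches detections to existing tracks by Euclidean distance. Production systems typically rely on the Hungarian algorithm or learned (e.g. joint detection-and-tracking) association; the function name, gating threshold, and data here are purely illustrative.

```python
# Greedy nearest-neighbor association of detections to existing tracks --
# the simplest baseline for the data-association step in multi-target tracking.
import numpy as np


def associate(track_positions, detection_positions, gate=2.0):
    """Return {track_index: detection_index} pairs within a distance gate."""
    pairs = {}
    used = set()
    # Cost matrix of Euclidean distances between every track and detection.
    cost = np.linalg.norm(
        track_positions[:, None, :] - detection_positions[None, :, :], axis=-1)
    for ti in np.argsort(cost.min(axis=1)):        # handle closest tracks first
        for di in np.argsort(cost[ti]):
            if di not in used and cost[ti, di] < gate:
                pairs[int(ti)] = int(di)
                used.add(di)
                break
    return pairs


tracks = np.array([[0.0, 0.0], [5.0, 5.0]])
detections = np.array([[5.2, 4.9], [0.1, -0.1], [20.0, 20.0]])
print(associate(tracks, detections))   # {0: 1, 1: 0}; detection 2 would seed a new track
```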