Work

Waymo

Software Engineering Intern, Perception | Summer 2018

If you have been to Mountain View, you have likely seen one of Waymo’s self-driving cars making its rounds.

Waymo is truly bringing the next era of mobility into the world. Each year, over a million people die in car accidents around the world, most of them due to human error, and many people working in cities spend hours of every day commuting. Self-driving vehicles can drive without the impairment of drowsiness or alcohol, and they can give passengers that time back.

Formerly Google’s self-driving car project, Waymo has been at the forefront of autonomous vehicles, and I am lucky to have seen firsthand the immense research and development it takes to put an autonomous vehicle on the road. I interned with Waymo’s perception team, developing a deep learning pipeline for automatically detecting sensor miscalibrations. This was my first experience building an entire learning pipeline from start to finish: dataset generation and cleaning, network architecture design, and performance analysis and benchmarking. Given a large degree of independence on my project, I learned how to survey state-of-the-art methods in the field and draw inspiration from them for my own implementations.
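
The details of that pipeline are Waymo’s, so purely as illustration, here is a minimal PyTorch sketch of the general shape of such a detector, assuming a toy setup where a small CNN looks at a two-channel camera/lidar alignment image and classifies the extrinsics as calibrated or not. Every name, shape, and design choice below is a hypothetical stand-in, not the actual system.

```python
# Hypothetical sketch only: the real Waymo pipeline is proprietary.
# A toy CNN classifies a 2-channel "alignment image" (e.g. camera edges
# plus projected lidar depth) as calibrated vs. miscalibrated.
import torch
import torch.nn as nn

class MiscalibrationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool to (N, 32, 1, 1)
        )
        self.classifier = nn.Linear(32, 1)  # one logit: miscalibrated?

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = MiscalibrationNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Random tensors standing in for a dataset of clean / perturbed pairs.
images = torch.randn(8, 2, 128, 128)
labels = torch.randint(0, 2, (8, 1)).float()

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```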

Like at Tesla, I was humbled by my coworkers. Look in one direction and you’ll see PhD graduates from MIT and CMU; look the other way and you’ll see one of the authors of TensorFlow, or engineers from the winning teams of the DARPA Grand Challenges. I am grateful for the opportunity to begin my career at Waymo in 2019, but I know I still have an incredible amount to learn, and that amount is constantly growing.

Tesla

Autopilot Intern, Perception | Spring 2018

An Elon Musk company. Before starting my internship I felt obligated to read Ashlee Vance’s biography of Musk, and after learning about his life I was so excited to experience the atmosphere of his workplace. I worked with Tesla’s Autopilot Perception team, where I created a tool for quantitatively assessing localization algorithms that estimate the vehicle’s pose in the world, analyzed the performance of several candidate sensors, and implemented improved inertial navigation algorithms.
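
Tesla’s internal tooling stays internal, so the sketch below is only a generic illustration of the kind of metric such an assessment tool computes: absolute translational error (RMSE) between an estimated trajectory and ground truth after timestamp alignment. The function name, data layout, and numbers are all hypothetical.

```python
# Minimal sketch of a trajectory-evaluation metric; everything here is an
# assumption for illustration, not Tesla's actual tool.
import numpy as np

def translational_rmse(t_est, p_est, t_gt, p_gt):
    """Interpolate ground truth to the estimate's timestamps and return
    the RMSE of the position error in meters.

    t_*: (N,) timestamps in seconds; p_*: (N, 3) xyz positions.
    """
    p_gt_interp = np.stack(
        [np.interp(t_est, t_gt, p_gt[:, i]) for i in range(3)], axis=1
    )
    err = np.linalg.norm(p_est - p_gt_interp, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

# Toy example: the estimate carries a constant 0.1 m lateral offset.
t = np.linspace(0.0, 10.0, 101)
gt = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
est = gt + np.array([0.1, 0.0, 0.0])
print(translational_rmse(t, est, t, gt))  # -> 0.1
```

A fuller tool would typically also report rotational error and relative per-segment drift, but position RMSE is the usual starting point.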

Tesla was a transformative and amazing experience. It was incredible to see how much people truly cared about their work. All of my coworkers could easily have been at a Bay Area tech company drowning in perks, but Tesla’s mission was more than enough motivation. I was so drawn to my work and fueled by my environment that 12+ hour days were the norm. It also helped that my desk was one of the closest to Elon’s. During my time at Tesla, I watched a Roadster get launched into space aboard the Falcon Heavy, saw the Fremont assembly line with its massive robots and machines, and heard how an innovator like Elon solves problems. I talked to the human benchmark for ImageNet, Andrej Karpathy, and saw how deep learning is applied to autonomy at scale. My team came from diverse backgrounds, but what they all had in common was talent and passion. Tesla taught me how to work effectively at a pace significantly faster than that of similar companies, and to appreciate the importance of its mission to accelerate the world’s transition toward sustainable energy.

NASA Jet Propulsion Laboratory

Robotics Software Intern | Summer 2017

Quite possibly the home of the coolest robots in the world. I cannot imagine a better first internship than my time at JPL. As part of JPL’s robotics computer vision group (347H), I developed underwater 3D reconstruction algorithms that fuse sonar and stereo camera images to give a robot the ability to interact with subsea structures here on Earth, and perhaps one day on faraway bodies like Europa.


You can read my final report here.

Underwater robotics involves several major challenges. Light is degraded by backscattering and absorption, and turbidity can limit visibility to less than a meter. 3D reconstructions of an underwater scene are necessary to identify the pose of objects we wish to interact with, and high-fidelity reconstructions often require some form of depth sensing. In robotics, LIDAR is typically used to estimate range, but it is extremely difficult to use underwater, especially when turbidity can completely mask the target of interest. Sonar works well underwater, but its measurements are often too noisy for object grasping and manipulation. A stereo pair of cameras can return a dense reconstruction of a scene as long as object surfaces have enough features. Using active stereo, which projects light patterns onto the scene to create more features for stereo matching, combined with sonar readings, I was able to give our robot the capability to make denser reconstructions of the target scene.
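
As an illustration of the stereo half of that approach, here is a minimal sketch using OpenCV’s semi-global block matcher on a rectified image pair. Active stereo changes the inputs rather than the algorithm: the projected pattern adds texture so more pixels find a match. The filenames and calibration numbers are hypothetical, and the sonar fusion step from the project is not shown.

```python
# Dense disparity from a rectified stereo pair with OpenCV's SGBM matcher.
# Filenames and calibration values below are hypothetical placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left image
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right image

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # search range in pixels; must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,         # penalty for small disparity changes
    P2=32 * 5 * 5,        # penalty for large disparity changes
)

# SGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# With focal length f (pixels) and baseline B (meters): depth = f * B / d.
f, B = 700.0, 0.12  # hypothetical calibration
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * B / disparity[valid]
```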

Nowhere else in the world could I have a team like the one I had at JPL. One team member might spend several weeks of the year controlling the Curiosity rover on Mars, while another worked on four different robots at a time, including a rock-climbing robot and RoboSimian, JPL’s entry in the DARPA Robotics Challenge. Each week there were captivating talks from incredible people, from the engineers of Pathfinder to current and former directors of JPL. I will never forget my experience at JPL, and I look forward to seeing all of its amazing missions in the future.

