Neil Nie

MSCS @ Stanford, 2025
B.S., Columbia University, 2023
Contact: [neilnie] at stanford.edu

Thanks for stopping by. I’m Neil, a first-year master’s student studying computer science at Stanford. I received my B.S. in computer science (with a minor in applied math) from Columbia University’s School of Engineering. My background and interests are in computer vision, robotics, algorithms, and software engineering.

News & Updates:

2023

Returned to Apple as an algorithms scientist intern on the HID (human interface devices) team, working on the Apple Vision Pro.

2022

Returned to Apple as a software engineering & algorithms intern on the CoreMotion fitness team.

2021–2023

Researcher at the Columbia Artificial Intelligence & Robotics Lab, advised by Prof. Shuran Song, working on articulated object manipulation, computer vision, and embodied AI. First-author publication: https://arxiv.org/abs/2207.08997

2021–2023

Teaching assistant for Computational Aspects of Robotics (COMS 4733) since spring 2021; head teaching assistant for Artificial Intelligence (COMS 4701) and Discrete Mathematics.

2021

Returned to Apple as a software & algorithms intern on the CoreMotion fitness team. Invented a new multi-sensing fitness tracking algorithm.

2019–2021

Served as President and Software Engineering Lead for the Columbia University Robotics Club, leading the MATE ROV project and the autonomous vehicles project.

2020

Worked at Apple as a software engineer & algorithms intern on the CoreMotion team. Invented a new multi-sensing algorithm (patent: ).

2019

TEDxDeerfield Executive Committee member and TEDx speaker.

Selected Projects

Structure From Action: Learning Interactions for Articulated Object 3D Structure Discovery

Neil Nie, Samir Yitzhak Gadre, Kiana Ehsani, Shuran Song

We introduce Structure from Action (SfA), a framework to discover 3D part geometry and joint parameters of unseen articulated objects via a sequence of inferred interactions. (Appearing in IROS 2023)

Self-Driving Golf Cart

From 2017 to 2019, I developed a self-driving golf cart. From the drive-by-wire system to the autonomous navigation stack, every component was built from the ground up to learn as much as possible about robotics, electrical engineering, software engineering, and machine learning. I also developed, trained, and deployed deep neural networks for behavioral cloning and semantic segmentation.
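A minimal sketch of the behavioral-cloning idea: a small convolutional network regresses a steering command directly from a camera frame, trained on (image, steering) pairs collected from human driving. The architecture, input size, and hyperparameters below are illustrative assumptions, not the network that ran on the cart.

import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Toy end-to-end model: camera frame in, steering angle out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 50), nn.ReLU(), nn.Linear(50, 1))

    def forward(self, x):                    # x: (batch, 3, H, W) camera frames
        return self.head(self.features(x))   # predicted steering command

model = SteeringNet()
loss_fn = nn.MSELoss()                        # regression against recorded steering
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)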

Robotic Arm Gripping System

The project focused on using deep learning, semantic segmentation, ICP, and RRT to build a complete robotic arm gripping system. The arm uses image segmentation to detect the objects in the bins, ICP for pose estimation, and RRT to plan collision-free paths around obstacles. (Credit: COMS 4733 at Columbia; not open-sourced.)
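For the pose-estimation step, below is a minimal point-to-point ICP sketch in NumPy/SciPy. It only illustrates the ICP stage; the segmentation and RRT stages are omitted, and the iteration count and tolerance are illustrative assumptions rather than the course solution.

import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst via SVD (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # correct a reflection if one appears
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source, target, max_iters=50, tol=1e-6):
    """Align `source` (Nx3) to `target` (Mx3); return a 4x4 homogeneous transform."""
    tree = cKDTree(target)
    src = source.copy()
    T = np.eye(4)
    prev_err = np.inf
    for _ in range(max_iters):
        dists, idx = tree.query(src)          # nearest-neighbor correspondences
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t                   # apply the incremental transform
        T_inc = np.eye(4)
        T_inc[:3, :3], T_inc[:3, 3] = R, t
        T = T_inc @ T
        err = dists.mean()
        if abs(prev_err - err) < tol:         # stop when the mean error plateaus
            break
        prev_err = err
    return T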

Quadcopter

I built a quadcopter and programmed its flight controller from the ground up. The biggest challenge was writing a PID controller on an Arduino using IMU feedback. Eventually, the quadcopter was able to fly around and self-stabilize in the air.
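A minimal sketch of a discrete PID loop of the kind described above, written in Python for readability rather than as the actual Arduino firmware; the gains, setpoint, loop rate, and output limit are placeholder assumptions.

class PID:
    """Discrete PID controller: correction from the error between setpoint and measurement."""
    def __init__(self, kp, ki, kd, output_limit=400.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.output_limit = output_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt                       # accumulate steady-state error
        derivative = (error - self.prev_error) / dt       # rate of change of the error
        self.prev_error = error
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp so the correction stays within the motor command range.
        return max(-self.output_limit, min(self.output_limit, output))

# Example: drive the roll angle toward level (0 degrees) at a 250 Hz control loop,
# using the IMU's roll estimate as the measurement.
roll_pid = PID(kp=1.2, ki=0.02, kd=15.0)
correction = roll_pid.update(setpoint=0.0, measurement=3.5, dt=1 / 250)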

Talks & Presentations

In the spring of 2017, I gave a TEDx talk on artificial intelligence at TEDxDeerfield, which now has over half a million views on YouTube. You can find my talk here. I also served as an executive board member and event coordinator for TEDxDeerfield from 2017 to 2019.