I am a Postdoctoral Associate at the MIT Biomimetic Robotics Lab, working with Prof. Sangbae Kim. Previously, I was an AI Research Fellow at the Korea Institute for Advanced Study (KIAS) from March 2023 to February 2025. I earned my Ph.D. at the SNU Robotics Lab under the guidance of Prof. Frank C. Park (March 2018 – February 2023). Before that, I completed my B.S. in Mechanical Engineering and Physics at SNU.
Research Interests: Geometric Learning, Robotics, 3D Computer Vision
Contact: yhl@mit.edu, yhlee.gabe@gmail.com
Recent talks
A Geometric Take on Motion Manifold Learning from Demonstration (May 17th, 2024 - Yokohama, Japan)
Riemann and Gauss meet Asimov: 2nd Tutorial on Geometric Methods in Robot Learning, Optimization and Control at ICRA 2024 (SITES)
Research area
Robotics
3D Vision Recognition: Humans understand the world in 3D. Even with partial observations -- e.g., we often view objects from one side, unable to see the back -- we automatically infer 3D geometric structures. The ability to recognize 3D structures is essential to human manipulation skills, not only for collision avoidance but also because we instinctively exploit global and local 3D symmetries.
The challenges of 3D recognition arise from various factors, such as the transparency of object surfaces, which leads to unreliable depth measurements, and limited camera viewpoints. Additionally, it is essential to consider which 3D representations are most useful for downstream robotic tasks, ranging from simple primitives to more expressive models like neural networks or voxels. Our research below addresses some of these issues.
DSQNet (T-ASE 2022): Recognize a 3D scene as a set of deformable superquadrics given a partial point cloud input.
NFL (RA-L 2023): Fit a voxelized surface normal vector field from multi-view RGB images, even for transparent object surfaces.
T2SQNet (CoRL 2024): Recognize a 3D scene as a set of deformable superquadrics from partial-view RGB images, even for transparent object surfaces.
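For concreteness, here is a minimal NumPy sketch of the standard superquadric inside-outside function that primitive-based recognizers such as DSQNet and T2SQNet build on. The parameterization below omits the deformations and is an illustrative simplification, not the papers' exact formulation:

```python
# Minimal sketch (NumPy): superquadric inside-outside function.
# scale = (a1, a2, a3) are the axis lengths; eps = (e1, e2) are the
# shape exponents. Deformations (tapering, bending) are omitted here.
import numpy as np

def superquadric_f(points, scale, eps):
    # F < 1: point is inside, F = 1: on the surface, F > 1: outside.
    x, y, z = np.abs(points / scale).T
    e1, e2 = eps
    return ((x ** (2 / e2) + y ** (2 / e2)) ** (e2 / e1)
            + z ** (2 / e1))

points = np.random.randn(5, 3)
print(superquadric_f(points, scale=np.array([1.0, 1.0, 2.0]), eps=(0.5, 1.0)))
```

Fitting such primitives to a partial point cloud amounts to finding scale, exponents, and pose that push observed points toward the F = 1 level set.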
We can explicitly leverage 3D symmetries when generating motions conditioned on 3D world representations. One prominent approach is equivariant neural networks, which are guaranteed to remain equivariant with respect to specified symmetry groups and transformations. See our related works below (a minimal equivariance check is sketched after the list).
Equivariant Pushing Dynamics (CoRL 2022): Learn a planar pushing dynamics model conditioned on superquadric representations, with equivariance to SE(2) transformations.
EMMP (CoRL 2023): Fit a manifold of trajectories conditioned on the 3D positions and orientations of objects, with equivariance to 3D rotations and translations.
EquiGraspFlow (CoRL 2024): Generate 6-DoF grasp poses conditioned on 3D point clouds, with equivariance to SE(3) transformations.
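Concretely, a map f is equivariant to a group G if f(g · x) = g · f(x) for all g in G. The toy NumPy sketch below (hypothetical code, not from the papers above) verifies this property for a simple SO(3)-equivariant point-cloud map:

```python
# Minimal sketch (NumPy): checking SO(3)-equivariance of a toy point-cloud map.
import numpy as np

def random_rotation():
    # QR decomposition of a Gaussian matrix yields a random orthogonal matrix;
    # flipping the sign if needed ensures det = +1, i.e., a proper rotation.
    q, _ = np.linalg.qr(np.random.randn(3, 3))
    return q * np.sign(np.linalg.det(q))

def model(points):
    # A toy map built only from rotation-equivariant operations
    # (shrinking toward the centroid), so it commutes with rotations.
    centroid = points.mean(axis=0, keepdims=True)
    return 0.5 * (points - centroid) + centroid

points = np.random.randn(100, 3)
R = random_rotation()

# Equivariance: model(R x) == R model(x) for every rotation R.
lhs = model(points @ R.T)
rhs = model(points) @ R.T
print("equivariant:", np.allclose(lhs, rhs))  # True
```

Equivariant architectures enforce this identity by construction for every group element, rather than approximating it through data augmentation.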
Our current work on equivariance guarantees focuses on global transformations. A promising research direction is to develop models that are also equivariant under local transformations, since the world contains diverse objects, including deformable ones, where each object or part can transform independently. Humans naturally leverage these local symmetries.
Another direction is to utilize knowledge from vision foundation models. Humans have two notable abilities that enable generalizable manipulation skills. First, we can imagine unseen, occluded parts of 3D scenes. Second, we can identify semantic correspondences across different 3D scenes. Mimicking both abilities now seems feasible, thanks to the generative and semantic-feature-extraction capabilities of vision foundation models trained on large-scale datasets.
Motion representations: Given a recognized 3D scene and an identified physics model, one might think that motion is simply the outcome of path planning, motion planning, and trajectory optimization -- end of story. This is both correct and incorrect.
It is correct in that, if we knew the environment (such as obstacle positions) and the physics model precisely -- already a challenging task -- generating motion would be a well-defined mathematical problem solvable in simulation. It is incorrect in that these methods typically take an extremely long time to compute, and a system with such lengthy processing times cannot generate motions adaptively and reactively in changing environments.
Encoding a set of relevant motions for the tasks at hand -- similar to how humans encode motor patterns in muscle memory -- and generating desired motions within this encoded set, which is much smaller than the entire motion space, is a promising approach for quickly adapting to changing environments. This concept corresponds to movement primitives or skills.
A natural question then arises: what mathematical models should be used to encode motions? Are there additional criteria for effective primitives? Our recent research focuses on two main directions: first, encoding a manifold of trajectories diverse enough to offer multiple adaptable candidates; second, representing motions through stable dynamical systems that are globally defined in space and robust to external perturbations. See our related works below (a minimal stable-dynamics sketch follows the list).
EMMP (CoRL 2023): Fit a manifold of discrete-time trajectories conditioned on 3D inputs, with a focus on achieving equivariance.
MMP++ (T-RO 2024): Fit a manifold of parametric, continuous-time trajectories with specific biases (e.g., via-points) to enhance adaptability.
MMFP (arXiv): Fit a manifold of discrete-time trajectories and flow models in the latent coordinate space to capture the complex dependencies of the manifold on the conditioning variables.
Trajectory Manifold Optimization (arXiv): Fit a manifold of continuous-time trajectories that encode motions satisfying kinodynamic constraints.
Stable dynamical systems -- BCSDM (ICRA 2024 workshop): Construct a stable dynamical system with a controllable rate of convergence to a given trajectory, adjustable based on user intent.
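To illustrate the second direction, here is a minimal, hypothetical NumPy sketch of a stable dynamical-system motion policy: a single point attractor with a tunable convergence gain, far simpler than BCSDM but sharing the same stability-by-construction idea:

```python
# Minimal sketch (NumPy): a globally stable dynamical-system motion policy.
# The target x* is a point attractor; k tunes the convergence rate. All names
# are illustrative.
import numpy as np

def stable_policy(x, target, k=2.0):
    # f(x) = -k (x - x*): V(x) = ||x - x*||^2 is a Lyapunov function,
    # so trajectories converge to the target from any starting state.
    return -k * (x - target)

x = np.array([1.0, -0.5])          # perturbed start state
target = np.zeros(2)
dt = 0.01
for _ in range(1000):              # simple Euler rollout
    x = x + dt * stable_policy(x, target)
print(np.linalg.norm(x - target))  # ~0: converges regardless of perturbations
```

Because stability holds for the vector field itself, an external perturbation simply restarts convergence from the new state, with no replanning required.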
In our current work on motion manifold primitives, we generate trajectories and then apply trajectory tracking control, such as computed torque control. This approach requires a highly accurate dynamics model, making it challenging to apply directly to contact-involved manipulation tasks, as contact dynamics are difficult to identify. One promising approach to achieve robust control despite model inaccuracies is to apply model-predictive control or receding-horizon control. Extending our motion manifold primitives to a framework that enables iterative replanning within a feedback loop could be an interesting direction for future research.
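As a rough, generic illustration of the receding-horizon idea (random-shooting MPC on a 1-D double integrator; a toy sketch, not our method), the key mechanism is re-solving a short-horizon problem from the measured state at every step, which is what absorbs model error:

```python
# Minimal sketch (NumPy): receding-horizon control on a 1-D double integrator.
import numpy as np

def rollout_cost(x0, v0, accels, target, dt=0.1):
    # Predicted cost of applying an acceleration sequence from state (x0, v0).
    x, v, cost = x0, v0, 0.0
    for a in accels:
        v += a * dt
        x += v * dt
        cost += (x - target) ** 2 + 0.01 * a ** 2
    return cost

def mpc_step(x, v, target, horizon=10, n_samples=256):
    # Sample candidate action sequences, keep the cheapest, execute only its
    # first action, then replan at the next step (receding horizon).
    best = min((np.random.uniform(-1.0, 1.0, horizon) for _ in range(n_samples)),
               key=lambda seq: rollout_cost(x, v, seq, target))
    return best[0]

x, v, target, dt = 0.0, 0.0, 1.0, 0.1
for _ in range(100):
    a = mpc_step(x, v, target)
    v += a * dt
    x += v * dt
print(round(x, 2))  # ends up near the target despite the crude optimizer
```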
Most existing stable dynamical systems approaches currently focus on achieving a single point attractor. However, in many problems, there are cases with multiple attractors, and sometimes the stable points can even form a continuous manifold. For example, imagine placing a dish into a dishwasher. The dish doesn’t have a single destination; instead, there are multiple possible locations. Developing a dynamical system that allows for a set of stable points would be an important step toward adaptable motion generation for such tasks.
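A hypothetical toy example of such a system: a vector field whose attractor is an entire manifold (here, the unit circle), so any point on it is an acceptable destination, much like the many valid dish placements:

```python
# Minimal sketch (NumPy): a dynamical system whose attractor is a manifold
# (the circle ||x|| = radius) rather than a single point; illustrative only.
import numpy as np

def circle_attractor(x, radius=1.0, k=2.0):
    r = np.linalg.norm(x)
    # Radial contraction toward ||x|| = radius; every point on the circle
    # is an equilibrium, so the stable set is a continuous manifold.
    return -k * (r - radius) * x / max(r, 1e-9)

x = np.array([0.2, -0.1])
for _ in range(1000):
    x = x + 0.01 * circle_attractor(x)
print(np.linalg.norm(x))  # ~1.0: converged to the attractor manifold
```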
Machine learning
Manifold representation learning:
Manifold of functional data (ICLR 2025)
Riemannian geometry of data:
News
(March, 2025) I will be joining MIT's Biomimetic Robotics Lab as a Postdoctoral Associate, working with Prof. Sangbae Kim. Very excited!
(Feb, 2025) I will be hosting a mini workshop on AI and robotics on February 18th, spending the entire day engaging in exciting discussions with 12 speakers and other participants! Feel free to join us at KIAS.
(Feb, 2025) Our paper on Isometric Regularization for Manifolds of Functional Data is accepted at ICLR 2025.
(Feb, 2025) Our paper on Diverse Policy Learning via Random Obstacle Deployment for Zero-Shot Adaptation is accepted at RA-L 2025.
(Sep 4, 2024) Our two papers, EquiGraspFlow: SE(3)-Equivariant 6-DoF Grasp Pose Generative Flows and T2SQNet: A Recognition Model for Manipulating Partially Observed Transparent Tableware Objects, are accepted at CoRL 2024.
(Aug, 2024) Our paper on MMP++: Motion Manifold Primitives with Parametric Curve Models is accepted at T-RO.
(May 12, 2024 ~ May 18, 2024) I will be at ICRA in Yokohama, Japan, presenting one poster and two workshop posters, and giving one tutorial talk.
(May 02, 2024) Our paper, Graph Geometry-Preserving Autoencoders, is accepted at ICML 2024.
(April, 2024) Two ICRA 2024 workshop papers, Behavior-Controllable Stable Dynamics Models in Riemannian Configuration Manifolds and Leveraging Equivariant Representations of 3D Point Clouds for SO(3)-Equivariant 6-DoF Grasp Pose Generation, are accepted.
(Nov 4, 2023) Our paper on NFL: Normal Field Learning for 6-DoF Grasping of Transparent Objects is accepted at RA-Letters.
(Nov 4, 2023 ~ Nov 12, 2023) I will be at CoRL in Atlanta, US, presenting two posters!
(Aug 30, 2023) Our two papers, Equivariant Motion Manifold Primitives and Leveraging 3D Reconstruction for Mechanical Search on Cluttered Shelves, are accepted at CoRL 2023.
(July 22 ~ Aug 1, 2023) I will be at ICML in Hawaii, US.
(July 7, 2023) Our paper, On Explicit Curvature Regularization in Deep Generative Models, is accepted at the 2nd Annual Topology, Algebra, and Geometry in Machine Learning Workshop in ICML 2023.
(June 15 ~ July 6, 2023) I will be at the Nonsan Korea Army Training Center.
(Mar 1, 2023) I have become an AI Research Fellow at the KIAS Center for AI and Natural Sciences.
(Feb 24, 2023) Great news! I have successfully completed my Ph.D. program and am thrilled to announce that I will be awarded the Outstanding Doctoral Dissertation Award by the Mechanical Engineering Department!
(Feb 8~10, 2023) I will be at the KIAS AI Center winter workshop and will give a 30-minute talk on "Geometric Methods for Machine Learning". (slide)
(Jan 21, 2023) Our paper, Geometrically regularized autoencoders for Non-Euclidean data, is accepted at ICLR 2023.
(Nov 4, 2022) I will give my Ph.D. thesis defense seminar, "Geometric Methods for Manifold Representation Learning", at 4 pm at SNU (301-306); all are welcome! (slide)
(Sep 10, 2022) Our paper, SE(2)-Equivariant Pushing Dynamics Models for Tabletop Object Manipulations, is accepted at CoRL 2022 for an oral presentation.
(July 2022) I received the Youlchon AI STAR Fellowship.
(July 17~27, 2022) I will be at ICML in Baltimore, US, giving a spotlight talk and a poster presentation.
(June 10, 2022) Our paper, DSQNet: A Deformable Model-Based Supervised Learning Algorithm for Grasping Unknown Occluded Objects, is accepted at T-ASE 2022.
(May 15, 2022) Our paper, A Statistical Manifold Framework for Point Cloud Data, is accepted at ICML 2022.
(Apr 15, 2022) I gave a presentation at the 2022 AIIS Spring Retreat and won the 3rd prize for the poster presentation.
(Jan 29, 2022) Our paper, Regularized Autoencoders for Isometric Representation Learning, is accepted at ICLR 2022.
(Sep 29, 2021) Our paper, Neighborhood Reconstructing Autoencoders, is accepted at NeurIPS 2021.