Daniel Lenton

I'm a PhD student in the Dyson Robotics Lab at Imperial College London. My research explores the intersection of learning-based geometric representations, egocentric perception, spatial memory, and reinforcement learning for robotics.

Task-driven optimization streamlines learnt geometric representations so that they are directly useful for acting in the world; the principles of 3D reconstruction add structure to the computation graph, improving generalization over more naive networks; and egocentric representations lend themselves to learning action selection.

In addition to my research, I am a teaching assistant at Imperial College London, where I have taught courses on robot navigation, algorithms, introduction to AI, deep learning, and software engineering design.

Email  /  Google Scholar  /  Twitter

Research and Publications
End-to-End Egospheric Spatial Memory
Daniel Lenton, Stephen James, Ronald Clark, Andrew Davison
International Conference on Learning Representations (ICLR), 2021
paper / video / code / project page

ESM encodes memory in an ego-sphere around the agent, enabling expressive 3D representations. It can be trained end-to-end via either imitation or reinforcement learning, and improves both training efficiency and final performance over other memory baselines on visuomotor control tasks.
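As a rough illustration of the geometric idea, the sketch below maintains features over a polar/azimuthal grid around the agent and writes the nearest observed surface into each cell. This is a minimal NumPy sketch, not the released ESM implementation, and all names in it are hypothetical; in particular it omits the differentiable, end-to-end trainable machinery the paper describes.

```python
# Hypothetical sketch of an egospheric memory write (not the released ESM code).
import numpy as np

class EgosphericMemory:
    def __init__(self, n_polar=90, n_azimuth=180, feat_dim=8):
        # Feature store over a polar x azimuthal grid centred on the agent,
        # plus a per-cell depth used for nearest-surface fusion.
        self.feats = np.zeros((n_polar, n_azimuth, feat_dim))
        self.depth = np.full((n_polar, n_azimuth), np.inf)
        self.n_polar, self.n_azimuth = n_polar, n_azimuth

    def write(self, points, feats):
        # points: (N, 3) xyz in the agent's ego frame; feats: (N, F).
        x, y, z = points.T
        r = np.linalg.norm(points, axis=-1)
        polar = np.arccos(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))
        azimuth = np.arctan2(y, x) % (2 * np.pi)
        p_idx = np.minimum((polar / np.pi * self.n_polar).astype(int),
                           self.n_polar - 1)
        a_idx = np.minimum((azimuth / (2 * np.pi) * self.n_azimuth).astype(int),
                           self.n_azimuth - 1)
        # Keep the closest observed surface per cell (ties among points
        # falling in the same cell are resolved arbitrarily in this sketch).
        closer = r < self.depth[p_idx, a_idx]
        self.depth[p_idx[closer], a_idx[closer]] = r[closer]
        self.feats[p_idx[closer], a_idx[closer]] = feats[closer]

mem = EgosphericMemory()
mem.write(np.random.randn(1000, 3), np.random.randn(1000, 8))
```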

Ivy: Templated Deep Learning for Inter-Framework Portability
Daniel Lenton, Fabio Pardo, Fabian Falck, Stephen James, Ronald Clark
arXiv, 2021
paper / video / code / project page

Ivy is a templated Deep Learning (DL) framework which abstracts existing DL frameworks such that their core functions all exhibit consistent call signatures, syntax and input-output behaviour. Ivy allows high-level framework-agnostic functions to be implemented through the use of framework templates.
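The toy below illustrates the templating idea in spirit only, not Ivy's actual API: a high-level function is written once against a dictionary of core functions (a "framework template"), and runs on any backend that fills in that template. All names here are hypothetical.

```python
# Toy illustration of framework templating (not Ivy's actual API).
import numpy as np

# A "framework template": the set of core functions a backend must provide.
# A TensorFlow or PyTorch backend would fill in the same keys with its own ops.
NUMPY_BACKEND = {
    "matmul": np.matmul,
    "tanh": np.tanh,
}

def dense_layer(x, w, b, backend):
    # Framework-agnostic layer: only the template's functions are used,
    # so the same code runs on any backend that fills in the template.
    return backend["tanh"](backend["matmul"](x, w) + b)

x = np.ones((2, 3))
w = np.ones((3, 4))
b = np.zeros(4)
print(dense_layer(x, w, b, NUMPY_BACKEND).shape)  # (2, 4)
```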

Learning To Find Shortest Collision-Free Paths From Images
Michal Pándy, Daniel Lenton, Ronald Clark
arXiv, 2020
paper

We propose a novel cost function whose minimum is guaranteed to yield collision-free shortest paths. Our approach works seamlessly with RGB-D input and predicts high-quality paths in 2D, 3D, and 6-DoF robotic manipulator settings.
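The paper's learned cost is not reproduced here, but a generic sketch can show why such a minimum property is plausible: a cost that sums path length and assigns infinite penalty to colliding segments is minimized exactly by the shortest collision-free path. Everything below is a hypothetical illustration, not the paper's formulation.

```python
# Hypothetical path cost: Euclidean length, with infinite penalty on collision.
import numpy as np

def path_cost(waypoints, occupied):
    # waypoints: (N, 2) points; occupied(p) -> bool collision test.
    cost = 0.0
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        # Densely sample the segment for the collision check.
        for t in np.linspace(0.0, 1.0, 20):
            if occupied(a + t * (b - a)):
                return np.inf  # collision: cost exceeds any collision-free path
        cost += np.linalg.norm(b - a)  # accumulate Euclidean length
    return cost

# Disc obstacle at (0.5, 0.5): the straight path collides, the detour does not.
occupied = lambda p: np.linalg.norm(p - np.array([0.5, 0.5])) < 0.2
straight = np.array([[0.0, 0.0], [1.0, 1.0]])
detour = np.array([[0.0, 0.0], [0.9, 0.1], [1.0, 1.0]])
print(path_cost(straight, occupied), path_cost(detour, occupied))  # inf, ~1.8
```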

MoreFusion: Multi-object Reasoning for 6D Pose Estimation from Volumetric Fusion
Kentaro Wada, Edgar Sucar, Stephen James, Daniel Lenton, Andrew Davison
Conference on Computer Vision and Pattern Recognition (CVPR), 2020
paper / video / code / project page

MoreFusion makes 3D object pose proposals from single RGB-D views, accumulates pose estimates and non-parametric occupancy information from multiple views as the camera moves, and performs joint optimization to estimate consistent, non-intersecting poses for multiple objects in contact.
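A high-level sketch of these three stages, with hypothetical stand-in callables rather than the released MoreFusion code:

```python
# Sketch of the described pipeline stages (all names are hypothetical).
def estimate_scene_poses(frames, propose_poses, fuse_occupancy, joint_refine):
    hypotheses, occupancy = [], None
    for frame in frames:
        hypotheses.extend(propose_poses(frame))       # per-view 6D proposals
        occupancy = fuse_occupancy(occupancy, frame)  # accumulate occupancy
    # Joint optimisation: poses must agree across views and must not place
    # objects inside each other or inside observed occupied space.
    return joint_refine(hypotheses, occupancy)

# Trivial stubs so the sketch runs end to end.
poses = estimate_scene_poses(
    frames=[{"rgb": None, "depth": None}] * 3,
    propose_poses=lambda f: [("object", "pose")],
    fuse_occupancy=lambda occ, f: (occ or 0) + 1,
    joint_refine=lambda hyps, occ: hyps,
)
print(len(poses))  # 3
```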


Website template by Jon Barron.