Toronto Robotics and Artificial Intelligence Laboratory (TRAIL)

Associate Professor Steven L. Waslander
University of Toronto
Institute for Aerospace Studies
4925 Dufferin St., Toronto, Ontario, Canada M3H 5T6

Phone: +1-416-667-TBD
Fax: +1-416-667-TBD
Email: stevenw (at) utias.utoronto.ca
Web: trailab.utias.utoronto.ca/

Research Overview

Prof. Steven Waslander is a leading authority on autonomous aerial and ground vehicles, including multirotor drones and autonomous driving vehicles, Simultaneous Localization and Mapping (SLAM), and multi-vehicle systems. He received his B.Sc.E. in 1998 from Queen’s University, and his M.S. in 2002 and Ph.D. in 2007, both in Aeronautics and Astronautics from Stanford University, where as a graduate student he created the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC), the world’s most capable outdoor multi-vehicle quadrotor platform at the time. He was a Control Systems Analyst at Pratt & Whitney Canada from 1998 to 2001. Recruited to the University of Waterloo from Stanford in 2008, he founded and directed the Waterloo Autonomous Vehicle Laboratory (WAVELab), extending the state of the art in autonomous drones and autonomous driving through advances in localization and mapping, object detection and tracking, integrated planning and control methods, and multi-robot coordination. In 2018, he joined the University of Toronto Institute for Aerospace Studies (UTIAS), where he founded the Toronto Robotics and Artificial Intelligence Laboratory (TRAILab).

Prof. Waslander’s innovations have been recognized by the Ontario Centres of Excellence Mind to Market Award for the best industry/academia collaboration (2012, with Aeryon Labs), best paper and best poster awards at the Conference on Computer and Robot Vision (2018), and two Outstanding Performance Awards and two Distinguished Performance Awards while at the University of Waterloo. His work on autonomous vehicles resulted in the Autonomoose, the first autonomous vehicle created at a Canadian university to drive on public roads. His insights into autonomous driving have been featured in The Globe and Mail, the Toronto Star, the National Post, the Rick Mercer Report, and on national CBC Radio. He is an Associate Editor of the IEEE Transactions on Aerospace and Electronic Systems, and has served as General Chair of the International Autonomous Robot Racing Competition (IARRC, 2012-15), Program Chair of the 13th and 14th Conference on Computer and Robot Vision (CRV 2016-17), and Competitions Chair of the International Conference on Intelligent Robots and Systems (IROS 2017).

Prof. Waslander’s current research focuses on two main areas: simultaneous localization and mapping with dynamic camera clusters, and perception for autonomous driving. Dynamic camera clusters are groups of cameras attached to a robotic system in which at least one camera can move relative to the others, such as the gimballed camera common on multirotor drones or an actuated camera on a mobile manipulator arm. These systems require dynamic calibration to identify an accurate transformation from each camera frame to a base vehicle frame, and this work has led to minimal parameterizations that produce such transformations from the joint angles of the actuated mechanism. We are developing active vision techniques both for calibration of the dynamic camera cluster and for localization and mapping during operation. This work will enable robotic platforms to exploit their best sensors and reduce overall sensor requirements by identifying the regions of the environment that are most helpful to a given task and focusing sensor attention in those directions.
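As a toy illustration of the kind of joint-angle-dependent transformation such a calibration provides (the frames, offsets, and single pan axis here are hypothetical, not the lab's actual parameterization), the camera-to-base transform of a one-axis gimbal can be composed from fixed mounting extrinsics and the current joint angle:

```python
import numpy as np

def rot_z(theta):
    """Homogeneous transform for a rotation of theta radians about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def translate(x, y, z):
    """Homogeneous transform for a pure translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def camera_to_base(theta, T_base_mount, T_joint_cam):
    """Compose the kinematic chain: base <- mount <- pan joint (angle theta) <- camera."""
    return T_base_mount @ rot_z(theta) @ T_joint_cam

# Hypothetical fixed extrinsics: gimbal mount 10 cm ahead of the base frame,
# camera optical center 5 cm from the pan axis.
T_base_mount = translate(0.10, 0.0, 0.0)
T_joint_cam = translate(0.05, 0.0, 0.0)

# A point 1 m along the camera's x axis, expressed in the base frame
# after panning the camera 90 degrees.
p_cam = np.array([1.0, 0.0, 0.0, 1.0])
p_base = camera_to_base(np.pi / 2, T_base_mount, T_joint_cam) @ p_cam
```

In a real dynamic camera cluster the mounting extrinsics would come from calibration rather than being known a priori, and the chain would typically include a rotation offset per joint as well.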

Perception for autonomous driving involves numerous challenging tasks: identifying, localizing, tracking and predicting static and dynamic objects in the environment; constructing multi-faceted maps for route planning, local path planning and obstacle avoidance; and estimating the localization and state of the ego vehicle. Our research in this area spans both classical and deep learning approaches, and seeks new ways of extracting estimate uncertainty from deep networks to improve sensor fusion and provide a holistic perceptual representation in real time on in-vehicle hardware. These efforts are aided by data collection and public road driving evaluations on the Autonomoose testbed, a fully capable autonomous vehicle developed at the University of Waterloo. The team’s emphasis is on robust methods that operate in all weather and lighting conditions and use multiple sources of information to improve both performance and fault tolerance.