Research
My work focuses on understanding how novel technologies can mediate interactions between people and robotic systems. In particular, my research agenda involves:
- Designing new methods for understanding, measuring, and analyzing the efficacy of robotic technologies
- Building empirical understandings of the fundamental principles that support human use of interactive systems
- Applying theoretical knowledge, experimental findings, and practical insights in the creation of new algorithms, systems, and integrated solutions that provide enhanced capabilities for assisting human endeavors
Projects
Developing Principles for Effective Human Collaboration with Aerial Robots
Small aerial robots represent a novel platform uniquely suited to assist humans in exploratory, surveillance, inspection, and telepresence tasks across a variety of domains. Recently, there has been an explosive growth in the development of such platforms, which is expanding their availability for potential use in military, commercial, and personal contexts. This project explores methods to enhance robot usability, perceived safety, and collaborative potential to support the deployment of flying robots working alongside humans in environments such as warehouses, construction sites, and the International Space Station.
Publications
- Daniel Szafir, Bilge Mutlu, and Terrence Fong. (2017). Designing Planning and Control Interfaces to Support User Collaboration with Flying Robots. International Journal of Robotics Research (IJRR), 36 (5–7), 514–542. doi: 10.1177/0278364916688256
- Daniel Szafir. (2015). Human Interaction with Assistive Free-Flying Robots. Doctoral Dissertation, University of Wisconsin–Madison.
- Daniel Szafir, Bilge Mutlu, and Terrence Fong. (2015). Communicating Directionality in Flying Robots. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI 2015).
- Daniel Szafir, Bilge Mutlu, and Terrence Fong. (2014). Communication of Intent in Assistive Free Flyers. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014).
Supporting User Interaction with Virtual, Mixed, and Augmented Reality
Virtual, augmented, and mixed reality technologies hold great promise for providing users with new ways to interact with information via immersive 3D visuals. These technologies are rapidly maturing and opening up novel possibilities for applications across robotics, computer-supported collaborative work, and visualization. This project characterizes the perceptual affordances of modern mixed reality technologies and leverages their capabilities to develop new user interfaces for intuitively interacting with data, robots, and other people.
Publications
- Catherine Diaz, Michael Walker, Danielle Albers Szafir, and Daniel Szafir. (2017). Designing for Depth Perceptions in Augmented Reality. Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2017).
- Kevin Ponto, Ross Tredinnick, Aaron Bartholomew, Carrie Roy, Daniel Szafir, Daniel Greenheck, and Joe Kohlmann. (2013). SculptUp: A Rapid, Immersive 3D Modeling Environment. Proceedings of the 2013 IEEE Symposium on 3D User Interfaces (3DUI 2013).
Repurposing Robots for Novel Interactions
This project explores how robots might be repurposed such that their use transcends traditional notions of “put-and-place” or data collection tasks. We explore a wider design space for robotics, for instance examining robot abilities to support opportunistic tangible input and haptic output for traditional Graphical User Interfaces (GUIs) or act as assistive devices for users with disabilities.
Publications
- Darren Guinness, Daniel Szafir, and Shaun Kane. (2017). GUI Robots: Using Off-the-Shelf Robots as Tangible Input and Output Devices for Unmodified GUI Applications. Proceedings of the ACM SIGCHI Conference on Designing Interactive Systems (DIS 2017).
Designing Adaptive Robotic Products and Educational Technologies
In this project, we explore how to develop educational technologies such as robots or virtual tutoring systems. A major focus of this work is developing systems that can autonomously adapt to changes in users’ behavioral, cognitive, and task states. We follow a transdisciplinary design process that draws on robotics, educational psychology, and neuroergonomics to design and evaluate the effectiveness of such adaptive educational technologies.
Publications
- Allison Sauppé, Daniel Szafir, Chien-Ming Huang, and Bilge Mutlu. (2015). From 9 to 90: Engaging Learners of All Ages. Proceedings of the 2015 ACM Special Interest Group on Computer Science Education (SIGCSE 2015).
- Daniel Szafir and Bilge Mutlu. (2013). ARTFul: Adaptive Review Technology for Flipped Learning. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2013).
- Daniel Szafir and Bilge Mutlu. (2012). Pay Attention! Designing Adaptive Agents that Monitor and Improve User Engagement. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2012).
- Daniel Szafir and Robert Signorile. (2011). An Exploration of the Utilization of Electroencephalography and Neural Nets to Control Robots. Proceedings of the 2011 IFIP TC 13 International Conference on Human-Computer Interaction (INTERACT 2011).
Artistic Expressions with Robotics and Interactive Technologies
Advances in sensing, projection, and robotics are creating new design spaces for merging technology with artistic performances and athletic activities. This project develops new methods for human activity recognition within the context of non-traditional movements, such as in sports or dance performances, and designs interactive exhibitions that combine human movement and adaptive artistic expressions.
Exhibitions and Performances
- Merideth Burgess and Daniel Szafir. (2016). Graphic Impulse, Pole Theatre USA. Winner of the Pole Art category.