

Human-Computer Interaction

Human-Computer Interaction (HCI) is concerned with the study, design, construction, and implementation of human-centric interactive computer systems. Work in the Rochester HCI group (ROCHI) includes human nonverbal behavior analysis, social skills training, applied machine learning, educational technology, accessible computing, and ubiquitous computing. Visit the HCI website.


Henry Kautz

Henry Kautz is the Robin and Tim Wentworth Director of the Institute for Data Science and a professor in the Department of Computer Science. He performs research in social media, machine learning, pervasive computing, automated planning, and assistive technology. His academic degrees include an AB in mathematics from Cornell University, an MA in creative writing from Johns Hopkins University, an MSc in computer science from the University of Toronto, and a PhD in computer science from the University of Rochester. He was a researcher and department head at Bell Labs and AT&T Laboratories until becoming a professor in the Department of Computer Science and Engineering at the University of Washington in 2000. He joined the University of Rochester in 2006. He was president (2010-2012) of the Association for the Advancement of Artificial Intelligence, and is a fellow of the AAAI, a fellow of the American Association for the Advancement of Science, and a recipient of the IJCAI Computers and Thought Award.

M. Ehsan Hoque

M. Ehsan Hoque is an assistant professor in the Department of Computer Science. His research focuses on (1) designing and implementing new algorithms to sense subtle human nonverbal behavior, (2) enabling new behavior sensing for human-computer interaction, and (3) inventing new applications of emotion technology in high-impact social domains such as social skills training, public speaking, and assisting individuals who experience difficulties with social interactions.

Hoque received his PhD in 2013 from the Massachusetts Institute of Technology. His PhD thesis, "Computers to Help with Conversations: Affective Framework to Enhance Human Nonverbal Skills," was the first to demonstrate that people can learn and improve their social skills by interacting with an automated system; the thesis is exhibited at the MIT Museum as one of MIT's most unconventional inventions. Dr. Hoque has received numerous awards, including an IEEE Gold Humanitarian Fellowship, a best paper award at Ubiquitous Computing (UbiComp), best paper nominations at Automatic Face and Gesture Recognition (FG) and Intelligent Virtual Agents (IVA), and an NSF CRII (pre-CAREER) award.

Zhen Bai

Zhen Bai is an Assistant Professor in the Department of Computer Science at the University of Rochester. Her research focuses on creating embodied and intelligent interfaces that enhance learning, communication, and wellbeing for people with diverse abilities and backgrounds. Her main research fields include human-computer interaction, augmented reality, tangible user interfaces, embodied conversational agents, technology-enhanced collaborative learning, and assistive technology. Her work is published in premier human-computer interaction and learning science conferences such as CHI, ISMAR, IDC, IVA, and AIED.

Zhen received her Ph.D. degree from the Graphics & Interaction Group at Computer Laboratory, University of Cambridge in 2015 and was a postdoctoral fellow of the Human-Computer Interaction Institute and Language Technology Institute at Carnegie Mellon University before joining the University of Rochester.

Project Pages


VizWiz is an iPhone application aimed at enabling blind people to recruit remote sighted workers to help them with visual problems in nearly real-time.


WebAnywhere is a free web-based screen reader enabling blind web users to benefit from the availability of public computers. Information on the web can be accessed from any computer that has a sound card, without the need to install screen-reader software.


Legion was motivated by the need for a quick way to bootstrap highly robust, intelligent assistive robots. Such systems usually require significant (and costly) training to work automatically, are prone to errors, and so are often controlled remotely by experts instead. Legion supports flexible crowd control of such existing remote-control interfaces.


Scribe is a new approach in which groups of non-expert captionists (anyone who can hear and type) collectively caption speech in real-time.