Aligning Robot Representations with Humans

Workshop CoRL 2022 - December 15th (Hybrid)

Robots deployed in the real world will interact with many different humans to perform many different tasks over their lifetimes, making it difficult (perhaps even impossible) for designers to specify everything that might matter ahead of time. Instead, robots can extract these aspects implicitly when they learn to perform new tasks from their users' input. The challenge is that the resulting representations often pick up on spurious correlations in the data and fail to capture the human's representation of what matters for the task, producing behaviors that do not generalize to new scenarios.

In this workshop, we are interested in exploring ways in which robots can align their representations with those of the humans they interact with, so that they can learn more effectively from human input. By bringing together experts from representation learning, human-robot interaction, and cognitive science, we aim to foster an environment for exchanging ideas on how the robot learning community can best benefit from learning representations from human input and vice versa, and on how the HRI community can best direct its efforts towards discovering more effective human-robot teaching strategies. We encourage participation from researchers working in robot learning, human-robot interaction, cognitive science, and representation learning. The workshop will adopt a hybrid format, including in-person presentations, live streams, and a hybrid poster session.

Speakers and Panelists

Jacob Andreas

Massachusetts Institute of Technology

Daniel S. Brown

University of Utah

Matthew Gombolay

Georgia Institute of Technology

Mark Ho

Princeton University

George Konidaris

Brown University

Lerrel Pinto

New York University

Dorsa Sadigh

Stanford University


Schedule (all times NZST)

08:30 am - 08:45 am  Introductory Remarks (Organizers)
08:45 am - 09:45 am  Invited Speakers: 30-minute talks 1 & 2
09:45 am - 10:00 am  Coffee Break
10:00 am - 11:00 am  Invited Speakers: 30-minute talks 3 & 4
11:00 am - 12:00 pm  Contributed Talks
12:00 pm - 12:30 pm  Panel Session 1
12:30 pm - 01:30 pm  Lunch Break
01:30 pm - 02:00 pm  Invited Speaker: 30-minute talk 5
02:00 pm - 02:30 pm  Conference Opening Session
02:30 pm - 03:00 pm  Invited Speaker: 30-minute talk 6
03:00 pm - 03:30 pm  Coffee Break
03:30 pm - 04:30 pm  Invited Speakers: 30-minute talks 7 & 8
04:30 pm - 05:00 pm  Panel Session 2
05:00 pm - 05:10 pm  Concluding Remarks (Organizers)
05:10 pm - 06:00 pm  Poster Session (In person: TBD; Virtual: on Gather.Town)

Call for papers
New: The call for papers is now open.

Areas of interest
We will accept submissions focusing on Aligning Robot Representations with Humans. Topics include but are not limited to:

  1. Human Representations: What kind of representations do humans form about their surrounding world to plan and accomplish their goals effectively?
  2. Robot Representations: Conversely, what kinds of representations should robots learn in order to be most aligned with what humans care about? Should we represent the world using features? Knowledge graphs? Object-centric representations? Is it important to learn representations that generalize across many tasks, or should we specialize representations to each task directly?
  3. The Role of Human Input for Learning Representations: When and to what extent is human input necessary for learning good robot representations? Should we try to eliminate human input from representation alignment as much as possible, or should we focus our efforts on enabling people to give the right kinds of input to distill their knowledge into the robot? What types of human input are best for distilling a person's knowledge of the world into the robot and aligning their representations?
  4. The Role of Simulators for Building Representations: What is the value of simulation for representation alignment? As a community, should we invest human effort in building simulators with good assets and simply collecting large amounts of human data, or should we focus our research effort on figuring out effective teaching strategies?
  5. Bi-directional Human-Robot Communication: How can robots be more transparent about which representations they have or lack, so that humans can more appropriately communicate what they care about? What is the role of modalities such as vision and natural language in helping humans communicate representations to robots, and in robots communicating representations to humans?
We welcome research papers of 4-8 pages, not including references or appendices.

The paper submission deadline is October 21st, 11:59 pm (AOE). Submissions should follow the CoRL template and be submitted as .pdf files. The review process will be double blind, so papers should be appropriately anonymized.

All accepted papers will be given oral presentations (lightning talks or spotlight talks) as well as poster presentations. For the oral presentations, authors will have the option to present in person or remotely. The poster session will be held both in person and virtually on Gather.Town. Accepted papers will be made available online on the workshop website as non-archival reports, allowing submission to future conferences or journals.

Important Dates
  • Submission deadline: Friday, October 21st, 2022 (11:59 pm AOE).
  • Author Notifications: Friday, November 11th, 2022.
  • Camera Ready: Friday, November 25th, 2022 (11:59 pm AOE).
  • Workshop: Thursday, December 15th, 2022, Auckland, NZ.


Organizers

Andreea Bobu

University of California Berkeley

Andi Peng

Massachusetts Institute of Technology

Pulkit Agrawal

Massachusetts Institute of Technology

Julie Shah

Massachusetts Institute of Technology

Anca Dragan

University of California Berkeley

Reach out to the organizers with any questions.