Agentic AI in the Wild: From Hallucinations to Reliable Autonomy

An interdisciplinary workshop exploring uncertainty, reliability, and hallucinations in large-scale Agentic AI systems

April 26th, 2026
Rio de Janeiro, Brazil

News & Updates

29th Dec 2025

Call for papers! Do you have exciting findings that fit the workshop? The call for papers is now live; see the Call for Papers section below.

29th Dec 2025

Call for reviewers! Are you passionate about reliable AI? We are looking for experienced reviewers in the domain to help us assess the workshop submissions. Please fill out the form here.

3rd Dec 2025

Workshop Proposal Accepted! We are excited to announce that our workshop proposal has been accepted at ICLR 2026.

Archive

Missed our previous workshop? Watch the recording of the ICLR 2025 Workshop here: View Recording

About the Workshop

When we delegate tasks to AI agents, can we count on them to get it right?

Agentic AI systems are increasingly moving beyond static content generation to perform autonomous decision-making tasks, such as scheduling meetings, booking travel, managing workflows, and supporting scientific research. In these settings, reliability is not merely beneficial—it is a fundamental requirement. Yet today’s foundation models remain prone to a critical failure mode: hallucination.

Understanding Hallucinations in AI Systems

Foundation models have demonstrated remarkable capabilities across various domains, but they often generate outputs that are inconsistent with facts or user intent, a phenomenon known as "hallucination".
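To make this concrete, a simple and widely used detection baseline treats disagreement across repeated samples as a warning sign: if a model gives different answers to the same question, the answer is less likely to be grounded. The sketch below illustrates the idea only; the sample_answers stub is a hypothetical stand-in for any stochastic model API and is hard-coded so the example runs on its own.

    from collections import Counter
    import math

    def sample_answers(question: str, n: int = 10) -> list[str]:
        # Hypothetical stand-in for n temperature > 0 samples from a model.
        # Hard-coded so the sketch runs without external dependencies.
        return ["Paris"] * 6 + ["Lyon"] * 3 + ["Marseille"]

    def predictive_entropy(answers: list[str]) -> float:
        # Shannon entropy (in bits) of the empirical answer distribution.
        # High entropy means the samples disagree, a common proxy signal
        # that the answer may be hallucinated.
        counts = Counter(answers)
        n = len(answers)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    answers = sample_answers("What is the capital of France?")
    h = predictive_entropy(answers)
    print(f"entropy = {h:.2f} bits")  # ~1.30 bits for the 6/3/1 split above
    if h > 1.0:  # illustrative threshold; tuned per task in practice
        print("samples disagree: flag the answer for verification")

Variants of this idea, such as clustering semantically equivalent answers before computing the entropy, are exactly the kind of detection and uncertainty-quantification work the workshop invites.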

This workshop brings together experts from machine learning, natural language processing, cognitive science, and human-computer interaction to address the challenges of uncertainty and reliability in large-scale AI systems.

Key Focus Areas

Detection methods, uncertainty quantification, human-AI collaboration, and mitigation strategies for hallucinations

Target Audience

Researchers, practitioners, and industry professionals working with foundation models

Format

Keynote talks, panel discussions, paper presentations, and interactive sessions

Outcomes

Shared understanding of challenges and collaborative roadmap for future research

Call for Papers

We invite researchers to submit their latest work on reliable agentic AI

Important Dates

Submission Deadline: 5th February 2026, AoE
Author Notification: 26th February 2026, AoE

Submission link: https://openreview.net/group?id=ICLR.cc/2026/Workshop/Reliable_Autonomy

Submission format: ICLR template. 6-8 pages for regular submissions, 2-4 pages for tiny papers.


Policy on Tiny Papers

This year, ICLR is requiring each workshop to accept short (2-4 pages in ICLR format) paper submissions, with an eye towards inclusion.

Authors of these papers will be considered for potential funding from the main ICLR conference, but they must submit a separate Financial Assistance application that evaluates their eligibility. The Financial Assistance application for attending ICLR 2026 will become available on the ICLR website; it is managed centrally by ICLR, not by the workshop organizers.

Tiny Paper Track Instructions

For submissions to the Tiny Paper Track, the Appendix will not be reviewed. The submission must be limited to 2-4 pages of content.

Requirement: Include the label "TINY" at the start of the title to distinguish it from regular submissions. If this label is not included, the submission will be treated as a regular one.

Note: We follow ICLR's reciprocal reviewing policy for authors. We are committed to keeping this burden low, limiting reviewing assignments to at most 3 papers.

Note: This workshop is non-archival. The workshop provides a platform for researchers to present and discuss their latest findings without the pressure of formal publication.

Note: All authors are required to have an up-to-date OpenReview profile. Please register with an institutional email; otherwise, account activation may take up to two weeks.

Featured Speakers

Topics: reliable AI agents, uncertainty quantification, and hallucinations

Dawn Song

Professor

University of California, Berkeley

Florian Buettner

Professor

Goethe University Frankfurt and German Cancer Research Center

Hamed Hassani

Associate Professor

University of Pennsylvania

Stefano Soatto

Professor & VP Applied Science

UCLA & AWS AI

James Zou

Associate Professor

Stanford University

Mohan Kankanhalli

Provost's Chair Professor

National University of Singapore

Workshop Schedule

One day of focused sessions on hallucinations in agentic systems

9:00 - 9:10 AM

Opening Remarks

9:10 - 9:45 AM

Invited Talk: Florian Buettner

Theme: Hallucination in agentic systems

9:45 - 10:20 AM

Invited Talk: Hamed Hassani

Theme: Hallucination in agentic systems

10:20 - 11:20 AM

Poster Session I

Theme: Hallucination in agentic systems

11:20 - 11:55 AM

Invited Talk: Dawn Song

Theme: Hallucination in agentic systems

11:55 AM - 12:55 PM

Lunch Break

12:55 - 1:55 PM

Poster Session II

Theme: Towards reliable AI agents

1:55 - 2:25 PM

Invited Talk: Stefano Soatto

Theme: Towards reliable AI agents

2:25 - 3:00 PM

Invited Talk: James Zou

Theme: Towards reliable AI agents

3:00 - 3:30 PM

Invited Talk: Mohan Kankanhalli

Theme: Towards reliable AI agents

3:30 - 4:30 PM

Poster Session III

Theme: Towards reliable AI agents

4:30 - 4:40 PM

Best Paper Award

4:40 - 5:40 PM

Panel Discussion

Can we trust AI agents with critical tasks?

5:40 - 5:50 PM

Closing Remarks

Workshop Organizers

Grigorios Chrysos

Assistant Professor

University of Wisconsin-Madison

Sharon Li

Associate Professor

University of Wisconsin-Madison

Etsuko Ishii

Applied Scientist

Amazon

Sean Xuefeng Du

Assistant Professor

Nanyang Technological University, Singapore

Katia Sycara

Research Professor

Carnegie Mellon University (CMU)