An interdisciplinary workshop exploring uncertainty, reliability, and hallucinations in large-scale Agentic AI systems
Call for papers! Do you have exciting findings that fit the workshop? The call for papers is now live.
Call for reviewers! Are you passionate about reliable AI? We are looking for experienced reviewers in the field to help us assess the workshop submissions. Please fill out the form here.
Workshop Proposal Accepted! We are excited to announce that our proposal has been accepted for ICLR 2026.
Missed our previous workshop? Watch the recording of the ICLR 2025 Workshop here: View Recording
When we delegate tasks to AI agents, can we count on them to get it right?
Agentic AI systems are increasingly moving beyond static content generation to perform autonomous decision-making tasks, such as scheduling meetings, booking travel, managing workflows, and supporting scientific research. In these settings, reliability is not merely beneficial; it is a fundamental requirement. Yet today's foundation models remain prone to a critical failure mode: hallucination.
Foundation models have demonstrated remarkable capabilities across various domains, but they often generate outputs that are inconsistent with facts or user intent, a phenomenon known as "hallucination".
This workshop brings together experts from machine learning, natural language processing, cognitive science, and human-computer interaction to address the challenges of uncertainty and reliability in large-scale AI systems.
Focus: detection methods, uncertainty quantification, human-AI collaboration, and mitigation strategies for hallucinations
Audience: researchers, practitioners, and industry professionals working with foundation models
Format: keynote talks, panel discussions, paper presentations, and interactive sessions
Outcomes: a shared understanding of the challenges and a collaborative roadmap for future research
We invite researchers to submit their latest work on reliable agentic AI.
Submission link: https://openreview.net/group?id=ICLR.cc/2026/Workshop/Reliable_Autonomy
Submission format: ICLR template. 6-8 pages for regular submissions, 2-4 pages for tiny papers.
This year, ICLR is requiring each workshop to accept short (2-4 pages in ICLR format) paper submissions, with an eye towards inclusion.
Authors of these papers will be considered for potential funding from the main ICLR conference, but they must submit a separate Financial Assistance application, which evaluates their eligibility. This application for Financial Assistance to attend ICLR 2026 will become available on the ICLR website; it is centrally managed by ICLR, not by the workshop organizers.
For submissions to the Tiny Paper Track, the Appendix will not be reviewed. The submission must be limited to 2-4 pages of content.
Requirement: Include the label "TINY" at the start of the title to distinguish tiny papers from regular submissions. If this label is not included, the submission will be treated as a regular one.
Note: We follow ICLR's reciprocal reviewing policy for authors. We are committed to keeping this burden low and will limit assignments to at most 3 papers.
Note: This workshop is non-archival; it provides a platform for researchers to present and discuss their latest findings without the pressure of formal publication.
Note: All authors are required to have an up-to-date OpenReview profile. Please register with an institutional email; otherwise, account activation may take up to two weeks.
Topics: reliable AI agents, uncertainty quantification, and hallucination
Professor, University of California, Berkeley
Professor, Goethe University Frankfurt and German Cancer Research Center
Associate Professor, University of Pennsylvania
Professor & VP Applied Science, UCLA & AWS AI
Associate Professor, Stanford University
Provost's Chair Professor, National University of Singapore
One day of focused sessions on hallucinations in agentic systems
Theme: Hallucination in agentic systems
Theme: Towards reliable AI agents
Can we trust AI agents with critical tasks?