Welcome to SERI Industry Day Workshop 2026
SERI (Software Engineering Research and Innovation) Industry Day Workshop 2026 is co-hosted by Infosys and IIIT Hyderabad.
This full-day workshop bridges the gap between cutting-edge academic research and industry practice, bringing together researchers, practitioners, and industry leaders to collaborate on one of the most critical challenges in modern software engineering.
The workshop focuses on Testing and Verification of Agentic AI Systems — a pressing need as autonomous AI agents transition from research prototypes to production deployments in safety-critical and mission-critical applications.
Join us on Saturday, July 18, 2026 at IIIT Hyderabad for a day of keynote presentations, invited talks, and in-depth panel discussions featuring 2 keynote speakers, 5 invited speakers, and 10 industry and academic panelists.
Objectives
- Bridge the Academia–Industry Gap: Foster meaningful collaboration between academic researchers and industry practitioners working on the testing, evaluation, and assurance of autonomous and agentic AI systems.
- Share Practical Experience from Real-World Deployments: Enable the exchange of insights, challenges, and lessons learned from deploying and operating agentic AI systems in production and applied research environments.
- Explore Emerging Research Directions: Provide visibility into current and emerging research questions in the testing, robustness, and verification of autonomous agents, highlighting areas where academic and industrial perspectives intersect.
- Understand Robustness and Alignment Considerations: Discuss high-level approaches to achieving dependable agent behavior under uncertainty, environmental variability, and deployment drift, drawing from both research and practice.
- Discuss Adversarial and Stress-Testing Perspectives: Examine how autonomous agents can be evaluated under challenging, unexpected, or adversarial conditions, and how system resilience can be reasoned about and assessed.
- Reason About Systematic Evaluation and Coverage: Explore principles for designing meaningful evaluation strategies that provide adequate coverage of agent behaviors and interactions in realistic settings.
- Broaden the View on Performance Assessment: Encourage discussion on performance evaluation beyond narrow metrics, considering reliability, safety, consistency, robustness, and other multi-dimensional aspects relevant to agentic systems.
- Survey Practical Evaluation and Testing Approaches: Increase awareness of practitioner and research-driven approaches to testing and evaluating autonomous agents, without prescribing specific tools or implementation choices.
- Build Sustained Professional Networks: Strengthen long-term connections across academia, industry, and applied research labs to enable continued collaboration, joint publications, and follow-on initiatives.
Motivation
Why Traditional Testing Falls Short for Autonomous and Agentic Systems
Autonomous and agentic AI systems introduce fundamental challenges that are not well addressed by traditional software testing and verification practices:
- Non-Deterministic Execution: Agent behavior can vary across runs due to stochastic decision processes, adaptive policies, and changing environments, limiting the effectiveness of conventional regression testing.
- Emergent System Behavior: System-level outcomes arise from interactions between agents, environments, and users, making behavior difficult to infer from component-level tests alone.
- High-Impact Autonomous Decisions: Agents increasingly make decisions with real-world consequences in operational settings, elevating the importance of rigorous pre-deployment evaluation and assurance.
- Behavioral Drift Over Time: As agents adapt to new data, tasks, or environments, their behavior may gradually diverge from original intent, expectations, or safety assumptions.
- Combinatorial State Spaces: The number of possible states, interactions, and execution paths grows rapidly with system complexity, rendering exhaustive testing infeasible.
- Evolving Regulatory Expectations: Emerging governance and regulatory regimes increasingly require demonstrable evidence of testing, evaluation, and responsible system behavior.
Why Now: Industry and Research Urgency
Several converging factors make this an urgent and timely moment to focus on the testing and assurance of agentic AI systems:
- Rapid Transition from Research to Production: Agent-based systems are moving quickly from experimental settings into large-scale, real-world deployments across industries.
- Growing Accountability and Compliance Demands: Organizations are under increasing pressure to justify system behavior, manage risk, and provide evidence of responsible AI practices.
- Rising Safety and Trust Expectations: High-visibility failures and public scrutiny have heightened expectations around reliability, robustness, and trustworthiness.
- Need for Shared Understanding Across Communities: Academia and industry are tackling the same problems from different angles, yet often lack common forums to align perspectives and priorities.
- Opportunity for Collective Leadership: Bringing together researchers and practitioners at this stage creates a valuable opportunity to shape future research directions, evaluation norms, and collaborative efforts.
Venue: IIIT Hyderabad
International Institute of Information Technology, Hyderabad
Gachibowli, Hyderabad, Telangana 500 032, India
Program
Date: Saturday, July 18, 2026 | Duration: Full day, 09:00–14:35
| Time | Session |
|---|---|
| 09:00 | Opening Remarks & Welcome (15 min) |
| 09:15 | Keynote 1 – International Speaker (45 min) |
| 10:00 | Tea Break (15 min) |
| 10:15 | Keynote 2 – India Industry Leader (45 min) |
| 11:00 | Invited Talks 1–3 (60 min: 3 × 20 min) |
| 12:00 | Lunch Break (45 min) |
| 12:45 | Invited Talks 4–5 (40 min: 2 × 20 min) |
| 13:25 | Transition & Discussion (10 min) |
| 13:35 | Panel 1 – "Industry Challenges in Agentic AI Testing" (25 min, 5 panelists) |
| 14:00 | Panel 2 – "Future of AI Verification Research and Practice" (20 min, 5 panelists) |
| 14:20 | Closing Remarks & Networking (15 min) |
| 14:35 | End |
Organization
Co-hosts: Infosys & IIIT Hyderabad
Organizing Committee: Details will be announced soon.
Contact: For inquiries, please reach out to the organizing committee.
Sponsorship
For sponsorship opportunities beyond co-hosting, please contact the organizing committee.
| Co-host | Role |
|---|---|
| Infosys | Industry Co-host |
| IIIT Hyderabad | Academic Co-host |