Thank you for participating in our event!
Scroll to see how amazing it was
Thank you to everyone who took part in AgentStack!
It was great to see so many of you come together to explore the full journey of building intelligent agents — from early ideas to real-world implementation.
For those who couldn’t make it, or if you’d like to revisit any of the talks, all session recordings and presentations are available below.
18:00-18:30
Gathering and Mingling
18:30-19:00
AI Agents: From Self-Reflection to Multi-Agent Discussion
This talk will present the fundamental concepts of AI agents and highlight key challenges in their development and deployment. We will also explore strategies for enhancing agentic workflows, such as self-refinement and debate techniques, to improve agent performance.
Assaf Rabinowicz, Ph.D.
AI & Cloud Expert at Oracle
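To give a flavor of the self-refinement idea mentioned in the abstract, here is a minimal sketch of a draft-critique-revise loop. It is illustrative only, not code from the talk; the `llm` callable is a hypothetical stand-in for whatever chat-completion client you use:

```python
from typing import Callable

# Illustrative self-refinement loop: draft an answer, critique it, and
# revise until the critique passes or the round budget runs out.
# `llm` is any text-in/text-out completion function supplied by the caller.

def self_refine(llm: Callable[[str], str], task: str, max_rounds: int = 3) -> str:
    draft = llm(f"Solve the following task:\n{task}")
    for _ in range(max_rounds):
        critique = llm(
            f"Task:\n{task}\n\nDraft answer:\n{draft}\n\n"
            "List concrete flaws in the draft, or reply exactly 'OK' if it is sound."
        )
        if critique.strip().upper() == "OK":
            break  # the critic found no remaining flaws
        draft = llm(
            f"Task:\n{task}\n\nDraft:\n{draft}\n\nCritique:\n{critique}\n\n"
            "Rewrite the draft, fixing every flaw the critique lists."
        )
    return draft
```

Multi-agent debate follows the same pattern, except that two or more model instances critique each other's drafts before a final answer is selected.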
19:00-19:30
IntellAgent: The Open-Source Multi-Agent Framework for Evaluating Agentic Systems
Prototyping AI agents is one thing—ensuring they are robust, reliable, and production-ready is another. As AI agents take on increasingly complex tasks, from autonomous decision-making to tool execution, traditional evaluation methods fall short, lacking scalability and the ability to capture real-world multi-step reasoning and dynamic interactions.
In this talk, I will introduce IntellAgent, an open-source multi-agent framework designed to rigorously test AI agents in diverse, high-stakes environments. IntellAgent generates real-world scenarios, simulates complex user interactions, and systematically evaluates policy adherence, reasoning accuracy, and failure patterns. By leveraging policy graphs, complexity-driven test case generation, and automated critique mechanisms, it helps developers identify weaknesses, optimize agent performance, and prevent failure modes before deployment.
As AI systems become more autonomous and deeply integrated into critical workflows, reliable evaluation is no longer optional—it’s essential. IntellAgent provides the missing evaluation layer, offering a structured, scalable testing framework that ensures AI agents perform as expected before real-world deployment, setting new reliability standards for AI-driven systems.
Elad Levi
CTO & Co-Founder at Plurai
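For a feel of the evaluation loop the abstract describes, here is a small conceptual sketch. The names (`PolicyGraph`, `evaluate`, and the `agent`, `user_sim`, and `critic` callables) are hypothetical illustrations, not IntellAgent's actual API; see the open-source project for the real interfaces:

```python
import random
from dataclasses import dataclass, field

# Conceptual sketch of policy-graph-driven evaluation (hypothetical names,
# not IntellAgent's real API). Edges link policies that plausibly co-occur,
# so a random walk over the graph yields multi-policy scenarios whose
# difficulty grows with the requested complexity budget.

@dataclass
class PolicyGraph:
    complexity: dict[str, int]                   # policy -> difficulty score
    edges: dict[str, list[str]] = field(default_factory=dict)

    def sample_scenario(self, target_complexity: int) -> list[str]:
        """Walk the graph until accumulated difficulty meets the budget."""
        current = random.choice(list(self.complexity))
        scenario, total = [current], self.complexity[current]
        while total < target_complexity and self.edges.get(current):
            current = random.choice(self.edges[current])
            scenario.append(current)
            total += self.complexity[current]
        return scenario

def evaluate(agent, user_sim, critic, graph: PolicyGraph, n: int = 50):
    """Run n simulated dialogues and report per-policy failure counts."""
    failures: dict[str, int] = {}
    for i in range(n):
        policies = graph.sample_scenario(target_complexity=3 + i % 5)
        dialogue = user_sim(policies)            # synthesize a user interaction
        transcript = agent(dialogue)             # run the agent under test
        for p in critic(transcript, policies):   # policies the agent violated
            failures[p] = failures.get(p, 0) + 1
    return failures
```

The point of the complexity budget is that failure modes often surface only when several policies interact in one conversation, which single-policy unit tests never exercise.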
19:30-20:00
Agent Supervision or Simple Workflow? Insights from the Journey of Implementing monday.com Prepme – AI-powered Meeting Preparation
Itay Maslovich
Software Engineer at monday.com
Ronnie Mindlin Miller
Data Scientist at monday.com