Hi.

Oh, okay. Good afternoon, everybody, and thanks for coming. Welcome to our AI webinar series for this year, 2026.

My name is Muhammad Ali. I'm an AI fellow at Incorta. And with me is Hania Anwar, a software engineer on the Nexus team.

I think we can wait for, like, one more minute for other people to come, and then we can start.

Okay, I think we can start. So for this webinar series, we prepared four lectures. We're focusing on agentic AI applications: design patterns, different aspects, and different considerations.

We're gonna start with this first one on agentic design patterns, where we'll talk about architectures, frameworks, and some applications.

So this is the agenda for the talk today. So we're gonna give a quick overview about LLMs, then we'll talk about intelligent agents, chatbots, then also a quick overview of popular frameworks.

And Hania will take it from there to talk about two design patterns: human in the loop and agent-to-agent communication.

And feel free to ask if you have any questions; type them in the chat or the Q&A, and we'll get to them towards the end.

So, large language models. LLMs are advanced AI systems that everybody knows by now; they are designed to understand and generate human language.

They're basically very large neural networks. Neural networks have been around since the eighties and nineties, but these are huge neural networks. They are designed to take in text, process it, and then output original text.

They are very expensive to develop and run, and that's why we only got them in the last, like, seven, eight years.

What made it possible to develop and deploy LLMs at scale now were breakthroughs in network architectures, specifically what's called the transformer architecture or transformer layer.

Also availability of large amounts of data, large amounts of compute and GPUs, and also large amounts of investments.

Early examples of LLMs were GPT-1 by OpenAI, and BERT, RoBERTa, T5, and LaMDA. These are all by Google, Microsoft, and Facebook, or Meta. Modern examples that we all know are GPT-5, Gemini, Claude, and DeepSeek.

They are very important because they have unique capabilities, specifically for text-based applications like text understanding, summarization, and text generation.

They also form kind of the brain or the core of LLM-based agents.

They are also a catalyst for economic growth, hopefully increasing productivity and kind of sparking economic investment and development.

Chatbots. Chatbots were the very first application for LLMs, and this is what actually sparked interest in LLMs.

Typical interaction with a chatbot is the user inputs a prompt or a question along with some context or some information.

The LLM takes this input, processes it, and then outputs an answer. The user then takes this answer, possibly asks follow-up questions, and continues this cycle.
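To make that cycle concrete, here is a minimal sketch of the prompt, answer, follow-up loop. The `llm_complete` helper is a placeholder standing in for whichever LLM API you use; it is not a specific vendor's call.

```python
# Minimal sketch of the chatbot interaction cycle described above.
# `llm_complete` is a placeholder for whatever LLM API you use
# (OpenAI, Gemini, a local model, etc.); it is not a real library call.

def llm_complete(messages: list[dict]) -> str:
    """Placeholder: send the conversation to an LLM and return its reply."""
    raise NotImplementedError("wire this to your LLM provider of choice")

def chat_loop() -> None:
    # The running conversation is the only "context" the model sees.
    history = [{"role": "system", "content": "You are a helpful assistant."}]
    while True:
        user_input = input("You: ")
        if user_input.lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user_input})
        answer = llm_complete(history)           # prompt + context in, text out
        history.append({"role": "assistant", "content": answer})
        print(f"Assistant: {answer}")            # user reads, asks a follow-up
```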

So, yeah, these were the catalysts for increasing the interest in LLMs, and they have been applied in many applications like text understanding, text summarization, problem solving, and code completion.

But they also have what you could call severe limitations. The first one is that they have limited context.

So with an LLM, you have to give it all the information it would need to answer in the prompt, unless the data is in the training set, which brings us to the second point: it has very limited knowledge.

It doesn't know anything beyond what's in its training data. So if you need it to know something new or to extract some new information from some document, you have to give it the document along with your question or prompt.

Chatbots or LLMs in general, they also have very limited abilities in terms of interacting with the physical world or invoking tools or making decisions.

That's why LLMs are usually augmented with extra capabilities like memory, tool calling, and abilities to interact with the physical world, like turning on the light or turning on the TV. These are extra capabilities that are enabled using modern frameworks.
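As a rough illustration of that tool-calling idea, here is a hedged sketch. The tool names, the `llm_decide` helper, and the JSON contract between them are assumptions made for the example, not any particular framework's API.

```python
# Hedged sketch: augmenting an LLM with simple tool calling, as described above.
# The tools, the `llm_decide` helper, and the JSON contract are illustrative.
import json
from datetime import datetime

TOOLS = {
    "get_time": lambda: datetime.now().isoformat(),
    "turn_on_light": lambda: "light is now on",   # stand-in for a real device call
}

def llm_decide(prompt: str) -> str:
    """Placeholder: ask the LLM for either a final answer or a tool call,
    returned as JSON like {"tool": "get_time"} or {"answer": "..."}."""
    raise NotImplementedError

def answer_with_tools(prompt: str) -> str:
    decision = json.loads(llm_decide(prompt))
    if "tool" in decision:                        # the model chose to act, not answer
        result = TOOLS[decision["tool"]]()
        # Feed the tool result back so the model can ground its final answer.
        return json.loads(llm_decide(f"{prompt}\nTool result: {result}"))["answer"]
    return decision["answer"]
```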

So agents. So what is an intelligent agent? Again, agents or the idea of intelligent agents has been around since the start of artificial intelligence, like sixties and seventies.

An agent is an entity that can perceive its environment, it can make informed decisions based on these perceptions and observations, then execute actions to achieve predefined goals.

A very famous example of an intelligent agent is a chess agent, which is an agent that can play chess. It can perceive the chessboard and look at the current configuration. It can then decide on the next best move and execute it, with the goal of winning the game.

Also, customer service agents, which are agents designed to satisfy customers' demands or answer customers' questions.

So they perceive the environment in the sense of reading the customer's question, then they gather information from the available data or knowledge sources, like internal FAQs or internal manuals for the company. Then they decide on the next reply and respond to the customer, hopefully satisfying their goal of resolving the query or answering the question.
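That perceive, decide, act cycle can be sketched in a few lines. The `Agent`, `environment`, and `goal_reached` names below are illustrative, not taken from a specific library.

```python
# A minimal sketch of the perceive -> decide -> act cycle described above.
# Environment, Agent, and goal_reached are illustrative names, not a framework API.

class Agent:
    def perceive(self, environment) -> dict:
        """Read the current state, e.g. the chessboard or a customer question."""
        return environment.observe()

    def decide(self, observation: dict) -> str:
        """Pick the next action that best serves the predefined goal."""
        ...

    def act(self, action: str, environment) -> None:
        """Execute the chosen action, changing the environment."""
        environment.apply(action)

def run(agent: Agent, environment, goal_reached) -> None:
    # Loop until the goal (win the game, resolve the ticket) is achieved.
    while not goal_reached(environment):
        observation = agent.perceive(environment)
        action = agent.decide(observation)
        agent.act(action, environment)
```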

An LLM agent is a special type of agent which has an LLM as its intelligent core. So the LLM is the brain of this agent.

So the way this agent interacts with the world or executes actions is through text.

So the observations or the inputs are text and outputs are also text.

So they interact with the environment, they sense the environment, everything through language.

These agents have many features, but four of the main ones are: they are autonomous, so they should be able to work independently. They are proactive, so they can initiate actions towards their goal. They are also reactive, so they can respond to changes or observations in their environment.

And finally, they are goal oriented. So everything they do is towards achieving their goal. So every agent has a predefined goal and everything it does is hopefully towards achieving the goal.

LLM based applications have evolved over the past, like, less than a decade.

So the very first applications were simple LLM-based workflows where the user would ask something and the LLM would take some action, like invoking a very limited set of tools, and then execute them.

Then came RAG, which is retrieval-augmented generation. We're gonna talk about this hopefully next time.

So RAG has the ability to generate dynamic context for the user's query so that the output of the LLM is grounded in that context.

And it does this using what's called text embeddings.
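As a rough sketch of that idea, here is what retrieval with text embeddings might look like, assuming placeholder `embed` and `llm_complete` functions rather than any specific API.

```python
# Hedged sketch of the RAG idea mentioned above: embed documents, retrieve the
# ones closest to the query, and prepend them to the prompt so the LLM's answer
# is grounded. `embed` and `llm_complete` are placeholders, not a specific API.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return a vector embedding of the text."""
    raise NotImplementedError

def llm_complete(prompt: str) -> str:
    """Placeholder: call your LLM of choice."""
    raise NotImplementedError

def rag_answer(question: str, documents: list[str], top_k: int = 3) -> str:
    doc_vectors = [embed(d) for d in documents]
    q = embed(question)
    # Rank documents by cosine similarity to the question embedding.
    scores = [float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d)))
              for d in doc_vectors]
    best = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)[:top_k]
    context = "\n\n".join(documents[i] for i in best)
    return llm_complete(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
```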

The third step is the AI agent. It's an agent that has a lot of tools or capabilities: it can have its own memory, both short-term and long-term, it can interact with external databases or knowledge bases, it can plan, it can reason, and it can access tools.

And the final iteration is agentic AI systems, workflows, or pipelines, which include more than one agent: a collection of cooperating agents doing various tasks, again with a specific goal in mind.
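To illustrate, here is one possible shape of such a multi-agent pipeline, with made-up roles and a placeholder `run_agent` helper; it is a sketch of the idea, not any particular framework's API.

```python
# Illustrative sketch of a multi-agent pipeline: several specialised agents
# cooperating toward one goal. The roles and `run_agent` are assumptions.

def run_agent(role: str, task: str) -> str:
    """Placeholder: run one LLM agent with a role-specific system prompt."""
    raise NotImplementedError

def research_and_report(topic: str) -> str:
    plan   = run_agent("planner",    f"Break this request into steps: {topic}")
    facts  = run_agent("researcher", f"Gather the facts needed for: {plan}")
    report = run_agent("writer",     f"Write a short report from: {facts}")
    review = run_agent("reviewer",   f"Check this report for errors: {report}")
    return review
```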

Lastly, agentic frameworks. To be able to build all these agentic applications or workflows that involve many agents, we need facilities to connect to various LLMs from various vendors like Google or OpenAI, to build complex agentic workflows with many agents, and to monitor, debug, and log the pipeline runs.

So there exist many, many such frameworks.

Some are open source, some are closed source, and they differ in many aspects, even in focus and philosophy. Some are geared towards quick prototyping, others towards enterprise-grade, robust applications.

In terms of ease of use, some offer a drag-and-drop visual UI, others are code based. They differ in the technologies and languages they use, the amount of control you can have over your workflows, the number of parameters or knobs you can play with, and also in terms of cost.

And these are just five of the most popular frameworks. We're not gonna go into the details; this is just for reference.

But the popular frameworks kind of cover the whole spectrum in terms of easy to use or hard to use, fine control or coarse control, free or paid, and so on.

Finally, patterns. So why study design patterns? Agentic AI systems and workflows are very complex, and they can solve many problems. These problems tend to share a predefined set of patterns or characteristics.

So in order to not reinvent the wheel: when we get a new problem, we don't have to come up with a new architecture or a new paradigm to solve it. We can just find the closest design pattern that can solve our problem and use it, because it has been thoroughly studied and implemented many times, and we know the common pitfalls in terms of design or implementation.

So we can just pattern match, find the closest pattern, and then you have all the wisdom of the people who applied this pattern, tried it, and discovered the best practices on how to use it.

And in this webinar and the next, we're gonna explore some of these most popular design patterns.

We're gonna explore two of them with Hania, and next time, we're gonna explore a few more.

And with this, I'm gonna leave it to Hania.

I'm gonna stop sharing it.

Thank you so much, Mohammad. I can take the slides from here.

Okay. So can you see my screen clearly?

Yes. Okay. So as Mohammad mentioned, the importance of design patterns is that we don't re-solve the same problem each time we encounter it. I'll start by focusing on two of the most important design patterns in the agentic frameworks world, starting with human in the loop.

I consider it a very critical pattern.

I like to call it the safety valve for agentic AI.

Why do we call it that?

We all want autonomous agents; we want them fast, and we want them independent. But let's be real: in the enterprise world, especially in sectors like finance or legal, "mostly right" is actually wrong. You cannot afford to hallucinate a quarterly report or, for example, a legal contract. That brings us to the equation you see on the screen.

It's all about synergy. On one side, you have the AI agent. It gives you massive scale and speed, crunching numbers faster than any army of analysts. And on the other side, you have the human, who brings the context, knows the corporate strategy, and knows the why behind the numbers.

When you combine those two together, you don't just get speed, you get a trusted outcome.

So when should you apply this pattern? I advise that you use it whenever the cost of an error is higher than the need for speed. If a mistake is expensive, put a human in the loop.

So, now let's pop the hood. How does this actually work in practice? If you look at the blue path on the screen, that's your standard day to day pattern. You prompt the agent, and it answers instantly.

That's going to happen about ninety percent of the time. But the real magic happens on the dashed orange path. When the agent detects ambiguity or, for example, a high stakes decision, it doesn't just guess. It hits the brakes and the workflow physically stops.

At that moment, the agent turns back to you; you stop being just a requester, and you become the expert in the loop. Here, the agent waits for you to do one of three things: clarify a vague question, correct a wrong assumption, or simply approve a critical step. And once you reply, the agent unpauses and crosses the finish line correctly. Now, let me show you exactly what that pause looks like in Incorta in reality.
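Before the demo, here is a rough sketch of those two paths in code. The `agent_step` helper and the `needs_human` flag are assumptions made for the example, not the actual product implementation.

```python
# Minimal human-in-the-loop sketch of the two paths described above: the agent
# answers directly when it is confident, and pauses for the human when it flags
# ambiguity or a high-stakes step. The helper names are illustrative assumptions.

def agent_step(request: str) -> dict:
    """Placeholder: run the agent once; it returns either a final answer or a
    flag such as {"needs_human": True, "question": "..."}."""
    raise NotImplementedError

def handle(request: str) -> str:
    result = agent_step(request)
    while result.get("needs_human"):          # dashed orange path: pause here
        clarification = input(f"Agent asks: {result['question']}\nYou: ")
        result = agent_step(f"{request}\nHuman clarification: {clarification}")
    return result["answer"]                   # blue path: straight to the answer
```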

We've talked already about the theory, and now let's look at the reality inside Incorta. I want to walk you through the two specific moments where the agent will stop and ask for your help. First, on the left, we have the ambiguity check. This is considered a safeguard against ambiguity.

For example, the agent technically could run the query, but it realizes in the middle that the request is vague. So, instead of guessing and risking a hallucination, it pauses and asks for clarification.

And second, on the right, we have the strategic stop, where it's considered a safeguard against failures. And this happens when the agent hits a hard wall, for example, like missing data, and it needs you to decide on a new structure. And now we are going to start by seeing that first scenario in action.

Okay. In this first example, we have a classic case of ambiguity. I'm going to ask the agent for the total sales of my favorite product category. Now the agent has the data to calculate the sales, but it has no idea what my favorite product category is. So a standard AI might hallucinate and, for example, guess the bikes category just to be helpful. But watch our agent: it stops in the middle, admits it doesn't know your favorite category, and waits for you to clarify before writing a single line of code.

As you can see here, it pops up an input box asking which favorite category you mean. And once you give it the answer, it continues by generating the final report that answers the question you just asked.

Okay, now I'll show you a much harder problem, where a strategic dead end happens. In the second demo, I ask for the longest flight distance, but here is the catch: my dataset is for a retail store. There is no flight data. Let me increase the quality for you to see better.

So, instead of crashing or making up a number, watch how the agent analyzes the situation, realizes it cannot proceed, and presents me with three strategic options on how to move forward.

Here, I chose to cancel the request, as I don't have the needed data in the dataset I provided, and the agent continues where it stopped and provides me with the final report. In this situation, the agent treats me like a partner, not just a prompter who asks a question and waits for the final answer.

And here are our two demos that demonstrate how the human in the loop can really make a difference in situations that need human intervention.

So we've just seen how human in the loop solves the problem of trust and safety. That moves us to another problem, which is scale. This problem brings us to our second design pattern, which is agent-to-agent communication, also known as A2A.

Looking at the red box on the left, this is a single AI agent. If you ask one agent to do everything, like write code, do research, and manage a project, it will surely fail. It gets confused and surely will make mistakes. And of course, we can't rely on a human to micromanage every single step of that process. So the solution here is in the blue box, the multi agent team. So instead of one bot doing five jobs, we have five bots doing exactly one job each.

And that's how we solve for complexity without losing the speed.

And that's what brings the agent to agent communication to the table. A2A is simply the language these agents use to talk to each other. It organizes the team so they can finish the job together.

And the best part about it is that it's an open standard protocol. This means an agent from one company can easily work with an agent from a completely different company, regardless of which AI model they use. They all speak the same language.

So how do these agents talk to each other? The A2A protocol breaks it down into four simple steps. First, we have discovery. This is like looking at a phone book, for example.

So before an agent can ask for help, it looks up who is available in the registry. And second is identity. This is like swapping business cards. That's what we call an agent card, where each agent introduces itself and lists its skills: "Here is who I am, and here are the specific skills I can perform for you."

And once all the agents are identified, there comes the third step, which is communication, or what we consider the handshake between the agents. They start assigning tasks to each other and working together, either synchronously or asynchronously in the case of heavy jobs. And finally, security: because we are in the enterprise, we can't have agents whispering in the dark. Every interaction is encrypted and logged, and this gives you a full audit trail of exactly who did what and when.
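To make the discovery and identity steps concrete, here is an illustrative sketch of an agent card and a tiny registry lookup. The exact field names are assumptions for the example, not the normative A2A schema.

```python
# Hedged sketch of the discovery/identity steps above: an "agent card" that an
# agent publishes so others can find it and see its skills. Field names are
# illustrative, not the official A2A schema.
agent_card = {
    "name": "sales-analyst",
    "description": "Answers questions about retail sales data",
    "url": "https://agents.example.com/sales-analyst",   # hypothetical endpoint
    "skills": [
        {"id": "total-sales", "description": "Compute total sales by category"},
        {"id": "trend-report", "description": "Summarise sales trends over time"},
    ],
}

# Discovery: a registry maps agent names to their cards, like a phone book.
registry = {agent_card["name"]: agent_card}

def find_agent_with_skill(skill_id: str) -> dict | None:
    for card in registry.values():
        if any(s["id"] == skill_id for s in card["skills"]):
            return card
    return None
```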

So, to wrap up, I want to take you behind the scenes of the demos we've just watched. What you see on the screen is the actual A2A protocol, the open standard that powers these interactions. On the left is the request; notice how straightforward it is. This is platform-independent JSON-RPC: whether you are on our web UI, a mobile app, or a custom Slack bot, the request is just that simple, a standard message. For example, here: what are the total sales for bikes?

And on the other hand, the response. This is where the power lies. Focus on the artifact section, where the agent doesn't just chat; it returns a structured data object. It gives you the answer, and it gives you the evidence of how the final answer was produced, whether that's the logic used or the SQL that was generated to produce the final chart or insight.

It gives you the full details on what happened behind the scenes. So, why does this actually matter? Why do we need A2A? It matters because it proves that Incorta Nexus is open: you aren't locked into our interface, because this is standard JSON. You can programmatically access these agents from any application in your enterprise.
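As a rough illustration of the request/response shape just described, here is a pair of Python dicts mirroring the JSON. The exact keys and the sample SQL are assumptions for the example, not the published schema or the demo's actual output.

```python
# Illustrative shape of the request/response pair described above, written as
# Python dicts. The structure mirrors JSON-RPC with an "artifacts" section; the
# keys and the SQL string are assumptions made for this sketch.

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "message/send",
    "params": {"message": {"role": "user",
                           "parts": [{"text": "What are the total sales for bikes?"}]}},
}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "status": "completed",
        "artifacts": [
            {"name": "answer", "parts": [{"text": "Total bike sales: ..."}]},
            {"name": "evidence",
             "parts": [{"text": "SELECT SUM(sales) FROM orders WHERE category = 'Bikes'"}]},
        ],
    },
}
```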

And I'm excited to share that this open A2A protocol will be released in the next quarter, allowing you to start building your own agentic applications in Incorta very soon.

Finally, I want to acknowledge the main source for today's talk: these design patterns come from the book Agentic Design Patterns. If you want to dive deeper into the architecture, this is your go-to book. Thank you for your time. Before we sign off, I want to make sure you know about our next event on January twentieth.

We're bringing together a panel of experts, Mohammad, Usaina, and Abd Rahman, to discuss real-world implementations. They will be tackling practical architectures and high-value use cases. It starts at four PM, and registration is open now. So please secure your spot to continue the conversation with us.

And now we're happy to receive any questions that you have.

No questions? Mohammad?

Okay.

Yeah. I mean, feel free, if you have any questions, to put them in the Q&A or in the chat.

Otherwise, we can handle that. As Hania said, our next webinar will be January twentieth, and we hope to see you then.

Alright. Thank you, everybody. Great.

Thank you, everybody, for your time.

Yep. Thank you.

Presented by:

Mohamed Aly, AI Fellow, Incorta

Hania Hani, Software Engineer III, Incorta
