
SE Radio 689: Amey Desai on the Model Context Protocol

Amey Desai, the Chief Technology Officer at Nexla, speaks with host Sriram Panyam about the Model Context Protocol (MCP) and its role in enabling agentic AI systems. The conversation begins with the fundamental challenge that led to MCP’s creation: the proliferation of “spaghetti code” and custom integrations as developers tried to connect LLMs to various data sources and APIs. Before MCP, engineers were writing extensive scaffolding code using frameworks such as LangChain and Haystack, spending more time on integration challenges than solving actual business problems. Desai illustrates this with concrete examples, such as building GitHub analytics to track engineering team performance. Previously, this required custom code for multiple API calls, error handling, and orchestration. With MCP, these operations can be defined as simple tool calls, allowing the LLM to handle sequencing and error management in a structured, reasonable manner.
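
To make that concrete, here is a minimal sketch of what "defining these operations as simple tool calls" can look like: each tool is advertised with a name, a description, and a JSON Schema for its inputs, and the LLM decides which tools to call and in what order. The tool names and schemas below are hypothetical illustrations, not Nexla's actual implementation or the exact output of any particular MCP server.

```python
import json

# Hypothetical GitHub-analytics tools, in the general shape MCP servers
# advertise: a name, a description, and a JSON Schema for the inputs.
GITHUB_TOOLS = [
    {
        "name": "list_pull_requests",
        "description": "List pull requests for a repository.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "repo": {"type": "string"},
                "state": {"type": "string", "enum": ["open", "closed", "all"]},
            },
            "required": ["repo"],
        },
    },
    {
        "name": "get_review_latency",
        "description": "Average hours from PR open to first review.",
        "inputSchema": {
            "type": "object",
            "properties": {"repo": {"type": "string"}},
            "required": ["repo"],
        },
    },
]

def tools_manifest() -> str:
    """Serialize the tool catalog as a server might return it."""
    return json.dumps({"tools": GITHUB_TOOLS}, indent=2)

print(tools_manifest())
```

With definitions like these, the sequencing and error handling mentioned above move out of hand-written orchestration code and into the model's tool-use loop.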

The discussion reveals how LLM capabilities have evolved to enable MCP’s success. Desai argues that MCP wouldn’t have succeeded with earlier models like ChatGPT 3.5, but improved reasoning capabilities in modern LLMs make them effective orchestrators. He presents the controversial view that hallucination should be treated as a feature rather than a bug, enabling LLMs to explore solution spaces more creatively when solving complex multi-step problems.

The episode explores emerging patterns in MCP development, including auction bidding patterns for multi-agent coordination and orchestration strategies. Desai shares detailed examples from Nexla’s work, including a PDF processing system that intelligently routes documents to appropriate tools based on content type, and a data labeling system that coordinates multiple specialized agents. The conversation also touches on Google’s competing A2A (Agent-to-Agent) protocol, which Desai positions as solving horizontal agent coordination versus MCP’s vertical tool integration approach. He expresses skepticism about A2A’s reliability in production environments, comparing it to peer-to-peer systems where failure rates compound across distributed components.
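
The routing idea behind such a PDF-processing system can be sketched in a few lines. A production system would more plausibly use an LLM or a trained classifier to decide, but a keyword-based stand-in (with hypothetical document categories and tool names) shows the shape of the dispatch:

```python
# Minimal sketch of content-based routing: inspect a document's text and
# pick a specialized downstream tool. Categories and tool names are
# illustrative; the episode does not describe Nexla's system at this level.
def route_document(text: str) -> str:
    lowered = text.lower()
    if "invoice" in lowered or "amount due" in lowered:
        return "invoice_extractor"
    if "whereas" in lowered or "agreement" in lowered:
        return "contract_parser"
    return "generic_ocr_summarizer"

print(route_document("INVOICE #42 -- Amount due: $310"))  # invoice_extractor
```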

Desai concludes with practical advice for enterprises and engineers, emphasizing the importance of embracing AI experimentation while focusing on governance and security rather than getting paralyzed by concerns about hallucination. He recommends starting with simple, high-value use cases like automated deployment pipelines and gradually building expertise with MCP-based solutions.

Brought to you by IEEE Computer Society and IEEE Software magazine.



Show Notes


Transcript

Transcript brought to you by IEEE Software magazine.

This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.

Sri Panyam 00:00:18 Hello, this is Sri Panyam for Software Engineering Radio. Today with me I have Amey Desai, CTO of Nexla and we’ll be talking about the Model Context Protocol or MCP. Welcome to the show, Amey.

Amey Desai 00:00:32 Hey Sriram, glad to be here.

Sri Panyam 00:00:34 Before we dive in, you want to tell us about yourself and Nexla?

Amey Desai 00:00:38 So I am Amey, I am CTO of Nexla, and Nexla is a data integration platform that allows anyone to move, connect, and transform data using no-code as well as low-code processing. Over the last 18 months, we have also built agentic AI systems, using MCP as well as non-MCP approaches, to help people with the data problem and specifically the data engineering problem. And before Nexla I worked at both kind of small and big companies, but I've largely been with, let's say, data systems as a problem for the good part of the last decade. Did machine learning before it became deep learning, and before it became, let's say, LLMs.

Sri Panyam 00:01:20 Thank you Amey. Exciting. I'm glad you mentioned MCP because that's what we're talking about today. Let's dive in. In the last 18 months to two years, LLMs have taken over, and MCP is the latest thing. What actually led to MCP in today's world, and why MCP? What is MCP?

Amey Desai 00:01:37 Good question. To start off, I think what happened is, post ChatGPT having its aha moment, everybody wanted to use it for pretty much everything, and that was the promise of ChatGPT: you could have a simple UI where you could ask it questions and it would do impressive things, whether they were right or wrong. And one of the key things then, as people started incorporating Large Language Models into software, just building software, there was a lot of scaffolding. The spaghetti code that started getting written when integrating to a variety of different data systems, tools, APIs, call it. And Anthropic came out with the Model Context Protocol, protocol being the key word, for how LLMs should connect to external data sources and tools. So it's a standardization that I think MCP brought to the table, and that helps a lot in, I would say, software engineering and agentic systems today. And that's not just relevant to MCP; a few other protocols have come out. But MCP, I would say, is the cleanest protocol and the simplest protocol that we have today.
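
For readers who have not seen the protocol on the wire: MCP is built on JSON-RPC 2.0, and a client typically asks a server for its tool catalog (`tools/list`) and then invokes a tool by name (`tools/call`). Here is a rough sketch of those two request payloads, with a hypothetical tool name and arguments; see the MCP specification for the full handshake, capabilities, and result shapes:

```python
import json

# Two illustrative JSON-RPC 2.0 requests in the shapes MCP clients send.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "list_pull_requests",  # hypothetical tool name
        "arguments": {"repo": "acme/api", "state": "open"},
    },
}

wire = json.dumps(call_request)
print(wire)
```

The standardization Desai describes is exactly this: any client that speaks these messages can use any server's tools, instead of each pairing needing custom glue.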

Sri Panyam 00:02:43 So before MCP, how were these integrations happening? I believe there were tools. What was the state of the world there?

Amey Desai 00:02:50 I think what was happening before was people were using a variety of open-source packages as well as REST APIs, quite a few: LangChain, Haystack, LlamaIndex, whatnot. And using those packages to write the LLM part of the code, which is, I would say, the sexy part. And then there were, I would say, the harder parts, which is: how do I communicate with a variety of different data systems? So you would go and explicitly write code for talking to APIs, managing credentials, and all of those details. And how do you also embed those APIs within the LLM calls that the frameworks would allow? This was making the problems that people were trying to solve with LLMs a little bit harder, as now you are spending a lot more effort on just the integration part rather than the actual solving part of the business logic. This ended up creating a lot of, I would say, bad code: hard to test, hard to maintain, and hard to reason through, which is where I think MCP at least gave you a structure for how to do it. And I think that's the difference right now between MCP and what was there before MCP: somebody can follow a structure to solve the same problem. It still has its own pitfalls, but at least you start from a much better setting rather than a clean slate of, hey, I have to just write code.
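
As an illustration of the pre-MCP glue code described here, below is a hedged sketch of one hand-rolled integration: manual HTTP, manual credential plumbing, and a homegrown error convention, all of which had to be rewritten per API. The endpoint is GitHub's public pull-request listing; the environment variable and the error convention are placeholders, not anyone's production code.

```python
import json
import os
import urllib.error
import urllib.request

def fetch_pull_requests(repo: str) -> list:
    """Hand-rolled GitHub integration of the kind MCP replaces.

    Every such function repeats the same chores: build the URL, attach
    credentials, set a timeout, parse JSON, and invent error handling.
    """
    token = os.environ.get("GITHUB_TOKEN", "")
    req = urllib.request.Request(
        f"https://api.github.com/repos/{repo}/pulls?state=all",
        headers={"Authorization": f"Bearer {token}"} if token else {},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return json.loads(resp.read())
    except urllib.error.HTTPError as err:
        # Each caller invents its own error convention -- the "spaghetti".
        raise RuntimeError(f"GitHub API failed: {err.code}") from err
```

Multiply this by every API an agent touches, and by the orchestration code that chains the calls together, and the maintenance burden Desai describes follows.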

Sri Panyam 00:04:06 So getting a bit more of a visual framing of this: before MCP, developers would write code to, let's say for example, access their own internal things, tools, and so on?

Amey Desai 00:04:16 Right. Everyone was writing, you'd say, custom connectors or custom glue logic, by and large.

Sri Panyam 00:04:21 Can you give an example of one of these things, like in a real setting? We've all seen the get weather example in OpenAI's docs, right?

Amey Desai 00:04:26 No, I think get weather is tool calling at its,

Sri Panyam 00:04:28 It’s too simple. So what’s a real-world messy example? So

[...]

