Understanding MCP security implications
My talk at APISEC|CON 2025 covered agentic AI and MCP security risks and mitigations
Today I presented at
APIsec University's APISEC|CON event, sharing my (limited) knowledge about MCP security implications. Since some attendees asked for them, here are my slides:
As I covered on The New Stack recently, researchers have discovered that MCP is not secure by default. It's prone to vulnerabilities such as tool poisoning, rug pulls, tool shadowing, and remote control execution (RCE).
My presentation covered the hype around agentic AI and the excitement around MCP. It then looks at these risks and suggests some mitigations.
It was very helpful for me to put this together, and I'll post the recording of the session once it's out.
I'm looking forward to closely following autonomous AI, MCP, and related standards, and what all this means for protecting access to underlying APIs.