Can LLMs accelerate software requirements engineering?

Bill Doerrfeld | March 6, 2025

New research shows AI tends to outperform humans in software requirements

There's a lot of AI-bashing going on right now (and I know I've stoked the fire a bit 😅). But I still see plenty of positives in using it, especially in areas ancillary to the actual act of coding. One such area is software requirements.


Studies show poor requirements are the leading cause of software project failures (InfoTech, 2024). Other reports find that setting these guidelines up front correlates with success. Yet, ironically, this is an area often minimized in project roadmaps.


According to new research from Crowdbotics, LLMs outperform human requirements generation on average, compressing a process that typically takes two to four weeks into seconds. They tend to improve completeness by 10.2% and lead to slightly better alignment at a fraction of the cost.


Cory Hymel, VP of Research and Innovation at Crowdbotics, sees using AI as having big implications for requirements engineering. "It'll become bifurcated like everything else: either you'll use AI, or you'll fall behind."


Of course, human-in-the-loop review matters here, since it's hard for an LLM to capture everything: environmental factors, company culture, and access to internal data. "Having humans in the mixture helps bring creativity into the mix," he says.


Just thought I'd give a shoutout to some explorations on the sidelines of AI-generated code, which seems to take up most of the literature on using LLMs in software engineering.


I'll be curious to see if engineering leaders agree. Are you seeing benefits from embedding LLMs into the requirements engineering phase? Or does the approach still leave too many gaps in practice?

