Will AI-generated code be normalized in 2025?

Bill Doerrfeld | November 1, 2024

This story is in progress: I'm collecting input here.


Over the last two years, we've seen monumental promises around AI’s impact on developer productivity. Tools like GitHub Copilot are becoming ubiquitous, while OpenAI continues to push boundaries with groundbreaking features. Yet, the AI landscape is still a Wild West and must mature. We've witnessed the downsides of hallucination, questionable code quality, bloat and technical debt, copyright/IP restrictions, friction with data regulations, security issues with LLMs, confusion about what open-source AI is, and more. Not to mention the high energy and environmental costs of AI processing.


Valuable use cases for AI are still rare. Some analysts say the genAI bubble has popped, and some tech leaders are pulling the plug on major AI investments. There is also competing research on the actual productivity gains of using AI agents for code development. For consumer-facing AI-driven apps, "Disclaimer: results may not be accurate" has become the motto of the age.


Despite this, there are signs of AI-generated code improving. Engineering leads are increasingly embedding it in development workflows, and developers are beginning to expect AI assistance. 2025 may be a breakout year for AI-generated code to become more stable, secure, and normalized. What will be done to get there?


I'm putting together a feature to discuss both the potential for widespread AI adoption and the practical steps necessary for overcoming current limitations. By forecasting these advancements, I hope to predict whether AI-generated code will truly be part of mainstream development by next year.


The deadline for submission is Friday, 11/15/24, 5 pm PST.


I’m looking for commentary from respected individuals in the software development field, offering practical examples (no product pitches). For efficiency's sake, I'll only be accepting input through the form below. I cannot guarantee the publication of any or all responses. Responses are capped at 1,000 characters (~150 words). Looking forward to reading your thoughts!


This is probably for InfoWorld but may run elsewhere.


Form: Will AI-generated code be normalized in 2025?
