
AI Can Never Build Software on Its Own

“Techies are claiming that AI can never build software on its own! That is blatantly false!” said Abacus AI chief Bindu Reddy, adding that AI already creates reliable test scripts, simple websites, and chatbots without any technical assistance.

She anticipates that in the future, it will be able to do even more. 

Code editing platforms like Cursor, AI agents like Cognition Labs’ Devin and Cosine’s Genie, and code assistant tools like OpenAI’s ChatGPT, Amazon Q, and GitHub Copilot are helping developers generate code snippets from natural language descriptions, streamlining the coding process and reducing errors.

Amazon CEO Andy Jassy recently revealed that by leveraging Amazon Q, the company was able to save 4,500 developer-years of work. “Yes, the number is crazy, but real,” he posted on X.

In the realm of testing, AI-powered platforms such as Applitools and Testim automate the creation and execution of test cases, effectively mimicking human cognitive functions to identify UI discrepancies and performance issues. 

Looking ahead, as AI continues to evolve, its potential will expand even further. Future developments may enable AI to tackle more intricate software design tasks, optimise code for performance, and even contribute to user experience design. 

This evolution signifies a transformative shift in software development, where AI not only assists but actively participates in the creation of software solutions. GitHub Copilot Workspace, currently in technical preview, is designed to do exactly that. 

However, can AI write software on its own? It remains to be seen.

Limitations of AI

Many tech professionals assert that AI cannot independently develop software due to its reliance on human input and oversight. While AI can assist in software development through automation and optimisation, it fundamentally lacks the ability to autonomously create complex software systems without human intervention.

Discussing its dependence on human expertise, a user on X remarked, “It’s not ready.”

The user asked both Claude and ChatGPT to create a roulette game using HTML, CSS, and JavaScript—both attempts fell short. Claude’s version came closer, but only after extensive debugging. 

While both understood the requirements, they struggled with implementing the spin function, numeric display, and alignment. In response, Reddy said, “You can’t simply ask an LLM—you need to give it detailed instructions and build an entire agentic system around it!”
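To illustrate the kind of logic the models reportedly fumbled, here is a minimal sketch of a roulette spin function in JavaScript. This is an illustrative reconstruction, not the code from the thread: the wheel constant, function name, and angle math are all assumptions about how such a game is typically structured.

```javascript
// European wheel pocket order, clockwise from 0 — 37 pockets in total.
const WHEEL = [
  0, 32, 15, 19, 4, 21, 2, 25, 17, 34, 6, 27, 13, 36, 11, 30, 8, 23,
  10, 5, 24, 16, 33, 1, 20, 14, 31, 9, 22, 18, 29, 7, 28, 12, 35, 3, 26,
];

// Pick a winning pocket and compute the wheel rotation (in degrees)
// needed to land the pointer on it. The random source is injectable
// so the logic can be tested deterministically.
function spin(rng = Math.random) {
  const index = Math.floor(rng() * WHEEL.length);   // 0..36
  const pocketAngle = 360 / WHEEL.length;           // ≈9.73° per pocket
  const fullTurns = 4 * 360;                        // extra turns for the animation
  return {
    number: WHEEL[index],
    rotation: fullTurns + index * pocketAngle,
  };
}
```

In a browser, the returned `rotation` would drive a CSS transform on the wheel element; keeping that angle consistent with the rendered pocket positions is precisely the alignment problem where generated code tends to drift.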

Getting useful output from an LLM for such tasks requires extensive effort. Despite the model’s ability to produce code, the quality was often inadequate. After multiple iterations, the roulette game generated by ChatGPT still fell short, producing results that a junior programmer could easily outperform. 

It was noted that if an entire agentic system is necessary to support such requests, it might be more practical to write the code directly. This situation underscores the need for the technology to mature.

Then what’s the point? I don’t want to argue, but yes I should be able to simply ask. I mean it produces code already, its just not that good. Here is how it rendered my roulette game for me on chatgpt – and this is after many iterations before I threw my hands up. A junior… pic.twitter.com/i9OlYAh8ym

— Dan Little (@DannyLLittle) August 25, 2024

Despite all this, AI systems still require human expertise to define problems, gather and clean data, select appropriate algorithms, and train models. This process involves significant human judgement and creativity, which AI currently cannot replicate. For example, building an AI model requires understanding specific business needs and ensuring that the data used is relevant and accurate.

Moreover, AI operates within the parameters set by its developers. It can automate repetitive tasks and optimise certain processes, but the conceptualisation of software—such as user experience design, feature prioritisation, and strategic planning—remains a distinctly human domain. 

AI can enhance productivity and streamline workflows, yet it cannot independently navigate the complexities of software architecture or adapt to unforeseen challenges without human guidance. That said, what Reddy has envisioned might come true someday; however, the technology is not there yet. 

Is the Demand for CS Going Down?

The demand for computer science professionals is experiencing a notable decline, influenced by various market dynamics and technological advancements. 

“This doesn’t mean that human engineers will disappear; we will still have human experts and supervisors. It just means that the demand for CS graduates will come down over time. We are already experiencing some of this today!” said Reddy. 

However, Reddy’s prediction remains uncertain. Currently, the job market shows a noticeable oversaturation of CS graduates, with many recent graduates struggling to secure positions and facing heightened competition for available roles. 

As enrollment in CS programs continues to rise, the number of graduates may exceed job creation, resulting in fewer opportunities for new entrants, according to a Reddit post.

This reflects a broader market correction after a period of aggressive hiring during the pandemic, driven by inflated valuations and unsustainable growth strategies.

Additionally, the rise of AI tools is reshaping job roles within the tech industry. 

Despite these challenges, the overall need for qualified CS talent remains, particularly in medium to large companies that continue to seek skilled professionals to navigate the evolving tech landscape.



About the Author:

Early Bird