The Power and Risks of AI-Generated Code


In June 2021, GitHub introduced Copilot, a tool that suggests code as you type, powered by OpenAI’s text-generation technology. It offered an early demonstration of generative artificial intelligence’s potential to automate valuable work. Two years on, Copilot has matured and now handles tasks that programmers once did by hand.

GitHub recently released a report based on data from nearly a million programmers using Copilot. On average, they accepted about 30 percent of the AI assistant’s suggestions, which suggests the system is remarkably good at predicting useful code.

Chart: GitHub report on Copilot

The chart above shows that users tend to accept more of Copilot’s suggestions the longer they use the tool, which suggests that AI-assisted coding becomes more productive with practice. Less experienced developers appeared to benefit the most from this boost.

The technology’s promise comes with caveats, though. A Stanford University study found that programmers who used a code-generating AI assistant tended to include more bugs in their code, yet, counterintuitively, they believed their code to be more secure. Coding with AI assistance, in other words, carries risks alongside its benefits.

Clive Thompson wrote in 2022 that Copilot’s suggestions are drawn from patterns in other programmers’ work, flaws and all. Those flaws can produce bugs that are hard to spot, especially when programmers come to rely too heavily on the tool.
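To make that risk concrete, here is a hypothetical sketch; it is not drawn from Copilot’s actual output, and the function names and schema are invented for illustration. A query built by string interpolation mirrors a pattern common in existing code, so an assistant trained on that code can plausibly suggest it. It runs fine in testing yet leaves the door open to SQL injection, while the parameterized version below it is the safer pattern.

```python
import sqlite3


def find_user(db_path: str, username: str):
    """Look up a user record by name (hypothetical example).

    Interpolating `username` directly into the SQL is the kind of
    plausible-looking suggestion that compiles, runs, and passes a
    casual review, but it is vulnerable to SQL injection.
    """
    conn = sqlite3.connect(db_path)
    try:
        # Flawed but familiar pattern: string-formatted query.
        cursor = conn.execute(
            f"SELECT id, email FROM users WHERE name = '{username}'"
        )
        return cursor.fetchone()
    finally:
        conn.close()


def find_user_safely(db_path: str, username: str):
    """Same lookup using a parameterized query, which the driver escapes."""
    conn = sqlite3.connect(db_path)
    try:
        cursor = conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        )
        return cursor.fetchone()
    finally:
        conn.close()
```

The difference between the two versions is a single line, which is exactly why a reviewer skimming AI-suggested code can miss it.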

In other fields of engineering, overreliance on automation is a known hazard. Pilots who lean too heavily on autopilot can see their manual flying skills erode, and operators of self-driving cars must stay vigilant to catch the system’s mistakes.

This paradox is crucial to understanding where generative AI might take us. The technology can be genuinely beneficial, yet it has also contributed to a decline in the quality of web content as AI-generated spam and chatbot-written text spread.

None of this means generative AI is a failure. It has shown promise in boosting performance and happiness in some fields, like customer support, and some studies have found no increase in security bugs when developers use AI. GitHub is actively researching how to code safely with AI assistance and has introduced new Copilot features aimed at catching vulnerabilities.

However, the complex effects of code generation serve as a cautionary tale for companies deploying generative algorithms in other areas. Regulators and lawmakers should also pay attention to the potential downsides of AI and carefully assess its impact.

With software underpinning much of our future, we must be cautious about AI-generated bugs that could proliferate if not managed properly. Striking the right balance between human skill and AI assistance will be crucial as we move forward into an increasingly AI-driven world.