GitHub foresees a pivotal role for AI across the software development lifecycle, including security. Over the past year, the company has incorporated over 70 features into GitHub Advanced Security, and at GitHub Universe 2023, held in November, it announced adding generative AI to the mix.
The company now believes security vulnerabilities can be identified at the very stage when the code is being written. By leveraging an LLM, GitHub now not only identifies potential vulnerabilities but also provides developers with secure code suggestions from the start.
“With auto fix, we’re going to suggest the fix in the pull request for them. So the developers will not just see the alert, but also a suggested fix powered by AI right there,” Jacob Depriest, VP and deputy chief security officer at GitHub, told AIM.
These are not ordinary fixes. They are concise, actionable suggestions that help developers swiftly comprehend and address vulnerabilities, resolve issues faster, and prevent new vulnerabilities from creeping into their codebases.
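The alert-plus-suggested-fix flow can be pictured with a classic example. The snippet below is purely illustrative, not GitHub's actual autofix output: it shows a SQL injection flaw of the kind code scanning flags, alongside the parameterized-query rewrite a fix suggestion would typically propose.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is interpolated directly into the SQL string,
    # so a payload like "x' OR '1'='1" changes the query's meaning
    cur = conn.execute(f"SELECT id FROM users WHERE name = '{username}'")
    return cur.fetchall()

def find_user_fixed(conn, username):
    # The kind of fix a scanner would suggest: a parameterized query,
    # where the driver treats the input strictly as data
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # injection returns every row
print(find_user_fixed(conn, payload))   # parameterized query returns nothing
```

The value of surfacing this in the pull request itself is that the developer sees both the vulnerable line and the concrete rewrite before the code ever merges.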
Depriest said GitHub has already seen great success with code scanning’s current fix rate. “This implies that when developers receive an alert while working, they address the issue approximately 50% of the time before it reaches production and this is huge. With the new AI-powered code scanning auto fix feature, developers can build on an already strong fix rate.”
Protecting secrets with AI
GitHub is not using LLMs solely to uncover potential code vulnerabilities; the company also uses these models to detect leaked passwords with fewer false positives.
Nearly 80 percent of breaches originate from leaked credentials or secrets, according to Depriest. “Secret scanning has been integral to GitHub’s advanced security, forming a key component of our security programme.
“Now with AI, we’re also going to detect generic secrets and low confidence patterns in code as well, which is going to really improve that capability and capture more and protect secrets before they’re even hitting production.”
Moreover, Depriest believes security starts with the developer, and more precisely, with the developer’s account. Given the high number of credential leakages, GitHub opted to lean heavily into enabling multi-factor authentication for all contributors on github.com.
“That wasn’t an easy thing to do. That wasn’t a quick thing to do. It took a lot of planning and a lot of investment to make that work. But we really believe that’s the right thing to do,” he said.
And now, the introduction of the new secret scanning feature allows GitHub to detect generic or unstructured secrets in the code.
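The difference between structured and generic secrets can be sketched roughly as follows. This is a simplified illustration of the general technique, not GitHub's implementation: well-known token formats match fixed patterns, while unstructured secrets need heuristics such as entropy, which is exactly where false positives arise and an AI classifier can help.

```python
import math
import re
from collections import Counter

# Structured token formats are easy to match with regexes.
# (Illustrative patterns only; real scanners ship hundreds of
# provider-specific rules.)
KNOWN_PATTERNS = {
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; random-looking secrets score high."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def scan_line(line: str):
    findings = []
    for name, pattern in KNOWN_PATTERNS.items():
        for match in pattern.finditer(line):
            findings.append((name, match.group()))
    # Generic, unstructured secrets: long, high-entropy tokens.
    # A crude heuristic that is prone to false positives, which is the
    # gap an LLM-based "generic secret" detector aims to close.
    for token in re.findall(r"[A-Za-z0-9+/=_-]{20,}", line):
        if shannon_entropy(token) > 4.0:
            findings.append(("generic_high_entropy", token))
    return findings
```

For example, `scan_line("aws_key = AKIAABCDEFGHIJKLMNOP")` flags a structured AWS-style key, while a random 30-character password assigned to a variable would only be caught by the entropy heuristic.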
Protecting against AI vulnerabilities
Despite GitHub’s optimistic stance on leveraging AI in cybersecurity, the era of generative AI has produced various instances where it emerges as a substantial cybersecurity threat. For instance, prompt injection attacks, to which LLMs have repeatedly been shown to be vulnerable, remain a significant challenge for cybersecurity teams.
Given GitHub’s close alliance with Microsoft, it might be leveraging OpenAI’s GPT models, most notably GPT-4, the most advanced LLM to date. However, even GPT-4 has been found to be vulnerable to prompt injection attacks.
Depriest believes responsible integration and security measures within the tooling are crucial in safeguarding against such manipulations. This approach is fundamental to ensuring protection in scenarios involving prompt injection and similar vulnerabilities.
Moreover, according to Depriest, safeguarding both the infrastructure and the overall network workspace remains a key aspect of cybersecurity, even in the AI age.
“We approach this responsibility with the same diligence as we do for the rest of our infrastructure, including protecting github.com. This entails threat detection, security operations, and ensuring code security, and this extends to AI models. We uphold uniform controls and compliance principles across all facets of our core responsibilities.”
Does AI fundamentally change cybersecurity?
While GitHub is banking on generative AI capabilities in a big way, including for cybersecurity, Depriest does not believe it will fundamentally change the cybersecurity landscape.
“The reality is every new technology is dual-use. This pattern has been consistent across various technologies throughout the past two decades. I still don’t think fundamentally it will change how we approach the security of what we need to do, what our job is, and how we will keep the platform safe.”
He believes the advantages of generating secure code from the outset, and keeping it secure over time, far exceed the potential risks posed by threats in the landscape.
“We strongly believe that our efforts at GitHub, incorporating AI into the developer workflow, will yield a substantial and highly valuable impact, far outweighing any potential risks in the threat landscape. While scaling remains an industry challenge, we envision this as the future for enhancing security in the realm of developers.”