As AI rises in popularity, AI development is creating new cybersecurity risks and vulnerabilities. In this blog, we’ll talk about ways of developing AI securely from code to cloud while maintaining the speed of innovation.
AI security is a crtical issue in today’s landscape. With developers, teams, employees and lines of business racing ahead to compete, security teams consistently fall short in an ecosystem where new risks are rising up every day. The result is that we are seeing an unprecedented amount of AI breaches in 2025.
According to Capgemini, 97% of organizations suffered incidents related to generative AI initiatives in the past year.
It is unclear whether these incidents were all breaches, or whether some were merely vulnerabilities, however, around half of these organizations reported the loss impact would be estimated at $50M+ per incident. This shows the scale of data involved, as well as that each incident would indicate a systemic flaw, likely exposing an entire data set.
So how do developers and security teams work together to continue innovating in the AI space, without sacrificing security? The issue is complicated and requires a multilayered approach.
One of the best ways to ensure your AI is secure is to start in the design phase. At FireTail, we talk a lot about protecting your cyber assets from “code to cloud.” Designing your models with security in mind enables you to stay ahead of threats instead of having to play a constant game of whackamole when new risks pop up.
Security should be a prime concern from code to cloud.
Development teams and security teams need to work together on the design phase to ensure the mutual success of them both. We’ve talked before about the growing developer/security team gap, but in order to have a holistic security posture, this gap needs to be bridged from the beginning by involving security teams in the early stages of design and development.
It is common knowledge that visibility and discovery are the cornerstones of any strong cybersecurity posture. Having full visibility allows security teams to stay ahead of threats by spotting vulnerabilities and misconfigurations before they pop up.
Everyone in your team should know what AI models you are using, what they are being used for, what information is okay to input into them and what is not, et cetera. And security teams need to be vigilant in monitoring AI interactions and activity. A centralized dashboard can help to keep all of these interactions in one place, in order to ensure nothing slips between the cracks.
Any strong AI security posture should involve constant monitoring to see how things change. Usage cases for AI change over time, with new technologies, et cetera, so it is essential to stay on top of which models are being used for what functions in your team, and what data is being fed to them. Visibility is only the first step into keeping track of your AI use and interactions. But with consistent monitoring and alerting systems, security teams and developers alike can see changes in real-time and respond immediately, staying ahead of threats.
AI logging is one of the biggest challenges for AI security. One of the reasons for this is that new AI providers will often create novel log formats based on their own LLMs. Security teams can try to learn about and understand their known LLMs, but each time they adopt a new model, they have to essentially relearn the wheel, slowing the pace of innovation.
As tedious as it may seem, the only way to stay on top of logging is to log each LLM on a case-by-case basis, to avoid errors and ensure that each log is accurate before moving on. Prioritizing accuracy over efficiency may seem counterintuitive, but in the long run, if teams do not pay attention to proper logging protocols, they will end up wasting more time fixing mistakes and spending just as long as they would have meticulously doing it right in the first place.
Many organizations rush to test AI with their own data, but some of this data may be subject to regulatory compliance requirements.So sending this data to third-parties, such as an LLM, may require user consent.
Compliance frameworks such as GDPR and CCPA (California Consumer Protection Act) dictate terms around things like data sharing, which developers may not realize they are subject to until it’s too late. Often specific criteria slip through the cracks when they are listed in small print, and do not result in immediate consequences.
So what is the solution, with compliance constantly updating and changing? The best method for ensuring compliance is to continually monitor your landscape and every interaction as you go. It may seem tedious, but it’s the only surefire way to avoid consequences.
The OWASP Top Ten Risk List for LLMs was assembled by AI security experts and based on real-world understanding of threats and vulnerabilities. The list provides information and mitigation techniques for the top ten most urgent risks to LLMs today, from prompt injection to sensitive information disclosure, and more.
The OWASP LLM Top Ten can serve as a risk model for teams to measure their LLMs against.
While the OWASP Top Ten list is extensive, it is not complete nor is it a framework that teams can follow. However, it is a great jumping off point for developers and security teams alike to learn about the most prevalent risks in the ecosystem and how to protect against them.
FireTail’s AI security platform can help developers and security teams alike stay ahead of threats. FireTail provides a centralized dashboard to see all your AI interactions and activity, as well as your API endpoints and more so you can stay on top of visibility and discovery from the design phase onward. To see how it works, set up a demo or try the platform out for free, here.