Google Cloud has announced an intriguing experiment using Gemini 1.5 Pro, a powerful multimodal AI model, for detecting vulnerabilities in code. This technology stands out for its ability to analyze large amounts of code stored in Google Cloud Storage, thanks to its extended context window of up to 2 million tokens.
This larger window allows the model to take in more information, leading to more consistent, relevant, and useful outputs. It enables efficient scanning of large codebases, analysis of multiple files in a single call, and a deeper understanding of complex relationships and patterns within the code.
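To illustrate the idea of multi-file analysis in a single call, the sketch below assembles several source files into one long-context prompt. The in-memory file contents, the 4-characters-per-token heuristic, and the instruction wording are illustrative assumptions, not Google Cloud's actual pipeline:

```python
# Sketch: assemble several source files into one long-context prompt.
# The files, the token heuristic, and the instruction text are
# illustrative assumptions, not the actual Google Cloud experiment.

CONTEXT_LIMIT_TOKENS = 2_000_000  # Gemini 1.5 Pro's extended context window


def estimate_tokens(text: str) -> int:
    """Crude heuristic: roughly 4 characters per token."""
    return len(text) // 4


def build_prompt(files: dict[str, str]) -> str:
    """Concatenate labeled source files behind a single instruction."""
    parts = ["Review the following files for security vulnerabilities.\n"]
    for path, code in files.items():
        parts.append(f"--- FILE: {path} ---\n{code}\n")
    prompt = "\n".join(parts)
    if estimate_tokens(prompt) > CONTEXT_LIMIT_TOKENS:
        raise ValueError("Prompt exceeds the model's context window")
    return prompt


files = {
    "app/db.py": 'def query(user_input):\n    return f"SELECT * FROM t WHERE id={user_input}"',
    "app/auth.py": "PASSWORD = 'hunter2'  # hard-coded secret",
}
prompt = build_prompt(files)
```

Because the whole codebase sits in one prompt, the model can relate a query built in one file to a credential defined in another, which per-file scanners would miss.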
Using Gemini 1.5 Pro, potential vulnerabilities in the code can be identified and contextual fixes suggested. The findings, along with the relevant code snippets, are extracted from the model's response, organized into a Pandas DataFrame, and finally exported as CSV and JSON reports, ready for further analysis.
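The reporting step described above can be sketched as follows. The `findings` list stands in for structured data parsed from the model's response; its field names and values are illustrative assumptions, not the experiment's actual schema:

```python
# Sketch: organize parsed model findings in a DataFrame and export
# CSV/JSON reports. The `findings` records simulate model output;
# the field names are illustrative assumptions.
import pandas as pd

findings = [
    {"file": "app/db.py", "line": 2, "issue": "SQL injection",
     "snippet": 'f"SELECT * FROM t WHERE id={user_input}"',
     "suggestion": "Use parameterized queries."},
    {"file": "app/auth.py", "line": 1, "issue": "Hard-coded credential",
     "snippet": "PASSWORD = 'hunter2'",
     "suggestion": "Load secrets from a secret manager."},
]

# One row per finding, one column per extracted field.
df = pd.DataFrame(findings)

# Emit both report formats for downstream analysis.
df.to_csv("vulnerability_report.csv", index=False)
df.to_json("vulnerability_report.json", orient="records", indent=2)
```

The tabular form makes it easy to filter or group findings (for example, by file or issue type) before handing the reports to a triage workflow.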
While this technology is still experimental, it shows great potential for vulnerability detection and improving code security. However, it is important to note that this experiment does not include any data anonymization or de-identification techniques and should not be relied upon for data protection purposes.
Overall, this experiment is a promising step towards a more secure future of software development, where AI can play a key role in helping developers build more robust and resilient applications.