In a recent Google Cloud blog post, Phil Venables, VP, TI Security & CISO at Google Cloud, argues that AI vendors should share their vulnerability research, and that this transparency is crucial for building trust in AI as the technology rapidly evolves. Google has invested heavily in researching AI risks and developing stronger cyber defenses; notably, its Cloud Vulnerability Research team discovered and remediated previously unknown vulnerabilities in Vertex AI.

Venables urges AI developers to normalize the sharing of AI security research, reasoning that concealing vulnerabilities only increases the risk that similar issues will persist on other platforms, whereas openness makes vulnerabilities easier for the whole industry to find and fix. Google Cloud aims to lead this effort by promoting transparency, sharing insights, and fostering open discussion of AI vulnerabilities. He concludes by calling for a collective effort to raise the bar for AI security and to work toward a future where AI models are secure by default.