By moving beyond guidelines to enforce accountability, encourage innovation, and prioritize the safety and well-being of our communities in the digital age, we can build a more secure software future.