AI coding assistants twice as likely to leak secrets, as overall leaks rise 34%
Overview
A sharp rise in hardcoded secrets found in public GitHub commits has raised concern among cybersecurity experts. In 2025, researchers identified 28.65 million instances of sensitive data, such as API keys and passwords, embedded directly in code, a 34% increase over the previous year. Notably, code produced with AI coding assistants was found to be twice as likely to contain leaked secrets as code written without them. Exposed secrets pose a serious risk to organizations, potentially enabling unauthorized access and data breaches. Companies and developers must manage their code carefully and ensure that sensitive information is not inadvertently pushed to public repositories.
Key Takeaways
- Affected Systems: GitHub repositories, AI coding assistants
- Action Required: Developers should avoid hardcoding secrets in their code and use environment variables or secret management tools instead.
- Timeline: Newly disclosed
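As a rough illustration of how hardcoded secrets can be caught before they reach a public commit, the sketch below scans text for a few common credential shapes. The patterns are simplified examples for this write-up; dedicated scanners (e.g. gitleaks or GitHub secret scanning) ship far more comprehensive rule sets.

```python
import re

# Hypothetical patterns, loosely modeled on common key formats.
# Real scanners use much larger, regularly updated rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),          # AWS access key ID shape
    re.compile(r"ghp_[A-Za-z0-9]{36}"),       # GitHub personal token shape
    re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def find_secrets(text: str) -> list[str]:
    """Return substrings of `text` that match any known secret pattern."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits
```

A check like this can run in a pre-commit hook so a flagged diff is rejected before it ever leaves the developer's machine.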
Original Article Summary
A total of 28.65 million hardcoded secrets were found in public GitHub commits in 2025.
Impact
GitHub repositories, AI coding assistants
Exploitation Status
No active exploitation has been reported at this time. However, any secret exposed in a public commit should be treated as compromised: rotate or revoke it promptly, as attackers routinely scan public repositories for leaked credentials.
Timeline
Newly disclosed
Remediation
Developers should avoid hardcoding secrets in their code and use environment variables or secret management tools instead.
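A minimal sketch of the recommended pattern: read the credential from the environment at runtime instead of embedding it in source. The variable name `SERVICE_API_KEY` is a placeholder for illustration, not one taken from the article.

```python
import os

def get_api_key(var_name: str = "SERVICE_API_KEY") -> str:
    """Fetch a secret from the environment rather than hardcoding it.

    Fails loudly when the variable is missing, so a misconfigured
    deployment is caught immediately instead of silently using a
    placeholder value.
    """
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; configure it in the deployment "
            "environment or a secret manager, never in source code."
        )
    return key
```

In production the environment variable would typically be populated by a secret management tool (e.g. a cloud secret manager or vault) rather than set by hand.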
Additional Information
This threat intelligence is aggregated from trusted cybersecurity sources. For the most up-to-date information, technical details, and official vendor guidance, please refer to the original article linked below.