1Password open sources a benchmark to stop AI agents from leaking credentials

Help Net Security

Overview

1Password has introduced a new open source benchmark called the Security Comprehension and Awareness Measure (SCAM) to address a gap in AI security. Research indicates that while some AI models can accurately identify phishing websites, when these models operate as autonomous agents with access to tools like email and password managers, they can still fall for scams. The SCAM benchmark aims to evaluate whether these AI agents can safely handle sensitive information without leaking credentials. This initiative is important as it seeks to enhance the security of AI applications, helping to prevent potential misuse by attackers. By focusing on the behavior of AI in real-world scenarios, 1Password is taking a proactive step in AI safety.

Key Takeaways

  • Affected Systems: AI models, password vaults, email systems
  • Action Required: Implement the SCAM benchmark for testing AI agents.
  • Timeline: Newly disclosed

Original Article Summary

Research has shown that some AI models can identify phishing websites with near-perfect accuracy when asked. When those same models are used as autonomous agents with access to tools like email, web browsers, and password vaults, they can still carry out the scam. That gap is the focus of a new open source benchmark from 1Password called the Security Comprehension and Awareness Measure, or SCAM. The benchmark tests whether AI agents behave safely during real … More → The post 1Password open sources a benchmark to stop AI agents from leaking credentials appeared first on Help Net Security.

Impact

AI models, password vaults, email systems

Exploitation Status

No active exploitation has been reported at this time. However, organizations should still apply patches promptly as proof-of-concept code may exist.

Timeline

Newly disclosed

Remediation

Implement the SCAM benchmark for testing AI agents

Additional Information

This threat intelligence is aggregated from trusted cybersecurity sources. For the most up-to-date information, technical details, and official vendor guidance, please refer to the original article linked below.

Related Topics: This incident relates to Phishing.

Related Coverage

Gone With the Shame: One in Two Americans Are Reluctant to Talk About Romance Scam Incidents

darkreading

As Valentine's Day approaches, a new report reveals that men are nearly twice as likely as women to fall victim to romance scams. These scams typically involve fraudsters posing as potential romantic partners online, often leading to significant financial losses for victims. The reluctance to discuss these incidents is prevalent, with many individuals feeling ashamed or embarrassed about being scammed. This silence can hinder awareness and prevention efforts, making it crucial for people to openly share their experiences. Given the emotional and financial toll these scams can take, men should be particularly vigilant this Valentine's Day to avoid falling prey to such deceitful tactics.

Feb 12, 2026

CISA to host industry feedback sessions on cyber incident reporting regulation

CyberScoop

The Cybersecurity and Infrastructure Security Agency (CISA) is planning to hold feedback sessions to gather input on new regulations regarding cyber incident reporting. This initiative follows the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA), which aims to improve how organizations report cyber attacks. However, some industry experts believe that these town halls may not effectively address the pressing needs of the current cybersecurity landscape. They argue that the timing may not be ideal for such discussions, given the urgency surrounding cyber threats. The outcome of these sessions could influence how well organizations prepare for and respond to future cyber incidents, making it crucial for stakeholders to engage in the process.

Feb 12, 2026

Lazarus Campaign Plants Malicious Packages in npm and PyPI Ecosystems

The Hacker News

Researchers have identified a series of malicious packages in both the npm and Python Package Index (PyPI) repositories, linked to a recruitment-themed campaign by the Lazarus Group, which is associated with North Korea. This operation, dubbed graphalgo, reportedly began in May 2025, aiming to trick developers into downloading harmful software disguised as legitimate packages. The malicious payloads can compromise user systems and potentially lead to data theft or other cybercrimes. Developers using these package repositories should be particularly cautious and verify the authenticity of packages before installation, as this incident emphasizes the ongoing risks associated with open-source software ecosystems. Awareness and vigilance are crucial for maintaining security in the software development community.

Feb 12, 2026

The AI threat isn't one exploit: It's attackers hijacking trusted workflows

SCM feed for Latest

The article discusses how attackers are increasingly hijacking trusted AI workflows rather than relying on single exploits. This method allows cybercriminals to exploit established processes, making detection harder for organizations. Companies are urged to manage AI workflows with the same level of security as they do their production applications to prevent such attacks. This shift in tactics poses a significant risk, as it can undermine the trust in AI systems used across various industries. Ensuring robust security measures in AI implementations is essential to protect sensitive data and maintain operational integrity.

Feb 12, 2026

These 4 critical AI vulnerabilities are being exploited faster than defenders can respond

Latest news

Researchers have identified four significant vulnerabilities in artificial intelligence systems, including prompt injection and deepfake fraud. These flaws are being exploited faster than security teams can respond, raising concerns about the safety of AI applications. The vulnerabilities lack known fixes, which means that users and companies relying on AI technologies are at risk. This situation poses a challenge not only for tech firms but also for consumers who may fall victim to fraud or misinformation propagated by malicious actors. As AI continues to integrate into various sectors, the urgency to address these vulnerabilities becomes increasingly critical.

Feb 12, 2026

Published CVEs could hit record-breaking 50,000-plus in 2026

SCM feed for Latest

A recent forecast from FIRST anticipates that the number of Common Vulnerabilities and Exposures (CVEs) could surpass 50,000 in 2026. This significant increase poses a challenge for security teams who will need to manage and address these vulnerabilities effectively. As organizations continue to rely heavily on technology, the growing number of CVEs could strain resources and complicate planning for security measures. Security teams will need to prioritize their responses and strengthen their strategies to handle the influx of vulnerabilities. This situation emphasizes the need for companies to invest in better tools and training to keep up with the evolving security landscape.

Feb 12, 2026