Anthropic's most capable AI model has identified thousands of critical zero-days across all major operating systems and browsers, including a 27-year-old bug. We are following Mythos closely and will evaluate integration as access broadens. In the meantime, we always use the most capable AI available, and our platform helps you close the gaps today, before the next generation of threats arrives.
Claude Mythos is a frontier model from Anthropic with exceptional capabilities in coding, reasoning, and autonomous agent work. It is not specifically trained for cybersecurity — but its raw capability makes it extraordinarily effective at security tasks.
Anthropic is providing limited access to Mythos Preview via Project Glasswing, an initiative with ~9 enterprise partners (AWS, Apple, Cisco, CrowdStrike, Google, Microsoft, Nvidia, and others) for defensive security work. Public API access will follow once new safeguards are in place.
We have Mythos on our radar and are closely monitoring access developments. Access is currently limited to a small number of enterprise partners via Project Glasswing. We will evaluate integration when broader access opens — and if Mythos reasoning meaningfully raises the bar, we will build it in. We always aim to use the most capable technology available.
Try the platform today →
We already augment manual pentesting with Anthropic Claude models for deep reasoning around attack vectors, business logic flows, and complex authentication. With Mythos access, we will take this further: frontier model reasoning in every engagement.
Request a quote →
Not directly — Mythos Preview is limited to ~9 enterprise partners via Project Glasswing. Anthropic plans broader access once new safeguards are in place. We are monitoring the situation closely and will evaluate integration if and when access broadens.
Yes — we already run Claude models (Sonnet/Opus) for vulnerability analysis, false positive filtering, and report generation. Mythos is the next step in that roadmap.
Mythos-class reasoning can understand complex attack chains, grasp business logic vulnerabilities, and correlate findings in ways that simpler models and rule-based scanners cannot.
Anthropic deliberately restricts access due to the risks — the same capability that empowers defenders can be misused offensively. We work exclusively in defensive contexts with written customer authorization.
Responsible use
Pentesting.se uses AI models exclusively for defensive security work with explicit customer authorization. All scanning and testing takes place under written agreements and in line with ethical practice.
Don't wait for next-generation AI tools — the threats are already here. Our platform with Claude-powered analysis, 30+ scanning tools, and self-learning false positive filtering helps you identify and close vulnerabilities now, before more powerful automated attacks become commonplace.
Connect via our MCP server for AI agent integration • GDPR-compliant • Hosted in Sweden
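For teams wiring the platform into an AI agent, an MCP client is typically pointed at a remote server through a small JSON configuration. The sketch below uses the Claude Desktop `mcpServers` config format with the `mcp-remote` bridge; the server name and endpoint URL here are placeholders, not real addresses — the actual URL and credentials come from your account.

```json
{
  "mcpServers": {
    "pentesting-se": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.example.invalid/mcp"]
    }
  }
}
```

Once the client restarts with this config, the agent can discover and call the server's exposed tools over the Model Context Protocol.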