Anthropic's Mythos: Here There Be Monsters
On AI security warnings, industry shock, and why understanding unfamiliar systems is still the best way to keep your footing.
If you plot the upward curve of model capabilities against the aggregate security posture of real systems, at some point the curves cross. Every piece of software running in production has bugs, many of them decades old, surviving only because finding and exploiting them required scarce human expertise and enormous patience. The question was never whether an automated system could find them all but when.
Anthropic claims that moment has arrived with their latest model, Mythos, which is why they’ve declined to release it publicly. Instead they’ve launched Project Glasswing, giving about forty organizations access to use the model for defensive security work, backed by $100 million in usage credits. The rest of us get to read about it.
Whether the threat is as immediate as Anthropic says is genuinely unknowable from the outside. We’ve been here before: in 2019, OpenAI declared GPT-2 “too dangerous to release,” and it turned out to be a toy. But we’ve also been watching the same trajectory with quantum computing, where everyone agrees the threat to existing cryptographic systems is real and the only debate is timing. Sometimes “the curves will cross eventually” means next year, and sometimes it means next decade, and nobody on the outside can tell the difference until it happens.
What interests me more than the security question is the effect these repeated claims of unprecedented danger have on the people who hear them. There is a cognitive dissonance at the center of the entire industry’s pitch that is becoming impossible to ignore. The message is always both of these things simultaneously: you must adopt these tools immediately because anyone who doesn’t will be left behind, and also, these tools will ultimately replace everything you do and there’s nothing you can do about it.
Both framings produce the same result: paralysis. Naomi Klein wrote about this pattern in The Shock Doctrine, describing how crisis, real or manufactured, creates a window where people are too disoriented to resist whatever comes next. I’m not suggesting a conspiracy, but the effect is the same regardless of intent. I see it in blog posts from developers who sound gut-punched, in YouTube videos from people who believe their careers are now worthless, in the quiet desperation of creatives and white-collar workers trying to figure out whether they still have a future.
Should you be concerned? Yes. With the hindsight we now have about what social media did to public discourse, to adolescent mental health, to the information environment, we would likely have had very different conversations at the outset if we could go back. Those are conversations we should be having now about AI, while the technology is still taking shape. Burying your head in the sand is not the answer, and neither is declaring that the sky is falling.
Will AI take my job? I don’t know. But corporations have always made impersonal spreadsheet decisions about staffing based on whims none of us can control. There have always been periods of downsizing, executed without care, just numbers on a page. Nothing about that has changed except the risk that every boardroom gets AI-pilled at once and makes the same cuts. That’s worth taking seriously, but it is not a reason to stop moving.
When I was a kid I got passionate about technology in a world of BBSes and dial-up modems, of BASIC programs saved to floppy disks. That world was replaced by the web, which was replaced by mobile, which was replaced by cloud, which is now being replaced by whatever this is. Each era looked nothing like the one before it, and each arrived with predictions that everything prior was now obsolete. Some of that was true and a lot of it wasn’t, and the only way to find out which was which was to keep going.
I’ve been writing all my code with AI for the past year, and the things I’m building and the productivity gains are real. But I’m also going deeper into the languages that code is written in: TypeScript, Go, Rust, Java. I’m working through Thorsten Ball’s Writing an Interpreter in Go, not because I need to compete with a model but because I want to understand what the model is doing. Understanding has always been how you keep your footing on unfamiliar ground.
If there is any comfort in facing the unknown now, it’s that it has always been so. The title of this note references old cartography, the edges of the known world where mapmakers drew sea serpents and wrote “here there be monsters.” Those maps were made by and for people who were already on the water, who understood that the unknown is not the same thing as the hopeless. The best stories always have monsters in them because that’s what makes them worth telling, and the only real choice is to stay still or to keep moving forward, to explore, and to build.