Discussion about this post

Dan Tinsley:

This is the article the industry needs to sit with. The maths is not complicated. More code, worse code, cheaper exploitation. What I keep coming back to is that this is not just a software problem.

richardstevenhack:

"Anthropic did not explicitly train Mythos for these capabilities. They emerged as a downstream consequence of general improvements in code reasoning and autonomy."

That's what THEY claim.

I'm not so sure. I suspect the model was explicitly trained in cybersecurity solely to provide Anthropic with a model it could offer to major corporations (with government approval) and to governments themselves, so it could recover from the Pentagon fiasco.

Also note the apparent source of Mythos' capability in this regard: as Gary Marcus noted, it has a neurosymbolic pattern-matching capability embedded in the code. In other words, it is NOT a strictly generative LLM.

That implies the ability was ADDED to its basic LLM technology precisely for the purpose of being good at pattern matching for cybersecurity.

I can't prove this hypothesis, of course, absent examination of internal Anthropic documents. But given Anthropic's continued pattern of deception and hyping of its models' capabilities, I cannot trust them to be truthful about anything. And I would advise everyone else to be as distrustful of Amodei as everyone is of Sam Altman. Amodei and Altman are a pair, their known antipathy for each other notwithstanding.

