Yes, today we talk about AI. No, we're not becoming an AI newsletter; this is just what was on my mind this week.

Is Mythos Dangerous?

On April 7th, Anthropic published news that their new LLM was too dangerous to release, but we've heard these types of claims before. Back in 2019, OpenAI claimed GPT-2 was too dangerous to release, citing potential misuse. They ended up doing a staged rollout, and the claimed threats never materialized. Heck, just days after the Mythos announcement, OpenAI basically said they have the same thing.

Opinions on the Anthropic announcement range from assertions that it's a nothing burger, to more tempered takes that it's just another step forward, to fears that our software security is totally fucked.

I think I fall in with Steve Gibson: Mythos is impressive, and we have an opportunity right now to run current LLM tools against our code to find bugs and get ahead of any future bug-finding capabilities. Human coders are often really bad at spotting subtle bugs, but an LLM doesn't have this problem. It can run and run, testing many variations of attacks and data inputs/outputs, and find the problems.

At my job we've done this and found weird edge cases we'd been chasing for months, along with a number of code changes that tighten up the security of our WordPress plugins and our infrastructure. We're using the LLM to generate the reports and then fixing the issues by hand. We've been lucky not to find any big problems, and some of the flagged issues were actually working as intended, but we've found a number of ways to make the code we run safer.

I know some of the readers here are totally against AI. Yes, these companies did take the work of the general public without asking. Yes, AI is causing the price of computer parts to skyrocket. Yes, these big AI companies claim to be working for the public good, but are really just looking to make a bunch of money.

My job is to make sure our customers have fast, secure sites. I use many tools to do this, and one of them is certainly Claude Code. Over the last few months I've used it to fix subtle bugs and to ship new tools in a few days that would have taken weeks without the help. I have to take a pragmatic approach and use the tools that help me get my job done quickly with secure code.

Claude is one of those tools.

AI Taking Jobs?

Jesse has a great post about making yourself obsolete. He worries that giving AI tools to those without foundational skills breaks the pipeline to build good coders.

Adoption of AI assistants threatens to disrupt journeys like mine, circumventing the years I spent in search of my own obsolescence where I built the foundation I still stand on today. They provide the automation, but not the architecture. It makes employees immediately more responsible without affording them the time to learn what having that responsibility really means. To a degree, giving an employee an AI assistant is like promoting them to a mini-manager. We should take care to avoid promoting someone to their respective level of incompetence. Similar to managing a junior role, AI assistants require guidance via the provision of abundant context, and what they produce requires review. If we shortcut the career pathways of junior developers and make them mini-managers, we should consider what their foundations will be built out of.

I worry about the same thing, and even worry that my own skills as a programmer will atrophy if I let the LLM do too much of my job. My current attempt to keep my skills sharp is to write more about the code I'm working with, and to share the code I write with Claude's help so that I understand what we did and why.

Will this work? Honestly, I won't know for a few years, not until the day the leading LLM tool is suddenly unavailable. Can I still produce good work without it?

To be sure, my job has changed a bit. I write far less boilerplate than I used to, which means I get to ship things faster. I spend part of that time reviewing the boilerplate so that it's set up properly, and part of it trying to ensure I'm learning. That has still left me with more time to work on the next task and ship more features for my company.

Be nice to your chatbot?

Yes, it would seem your LLM might perform better if you're nice to it, which seems a bit crazy to me. The LLM doesn't "feel" anything, and we need to be careful about anthropomorphizing it lest we fall in love with our chatbot, but telling the AI it's doing a good job and is smart enough to do the work does make it perform better.

Honestly the more I look at AI the crazier some of this shit feels.