Want to keep this newsletter coming? Become a member.

Anyone know the name of the music album this blog post title refers to? No looking it up.

Quantum Computers Aren’t Real

While I had heard of this paper debunking the breathless media headlines about quantum factorization, I admit it was over my head. Luckily, Security Now 1034 took a look at the paper and explained it. The short version: any current quantum computer that can “break encryption” is really relying on some hand-waving, reductionist trickery to make it look like progress is being made.

This matters to you because it’s good to know that we won’t need quantum-resistant encryption for a long time, which was already the case if the proofs above held. So current standard encryption methods are safe for even longer than we thought, and your data is safe, barring other vulnerabilities that let encryption be bypassed.

So why do we get breathless hype for quantum computing? Because it helps tech companies sustain high stock valuations, which makes shareholders money. It’s the same reason we get breathless hype about superintelligence from AI companies: it makes them seem more valuable than they are.

AI Coders Are 19% Slower

Another interesting study found that devs using AI directly on their own codebases thought they’d be 20% faster with AI, but were actually 19% slower than peers who simply did the work themselves.

AI suggestions needed lots of cleanup. Waiting on AI isn’t productive time. AI is bad at dealing with large, complex codebases.

While I have issues with both DHH and Lex, I did like this interview where DHH talks about how he uses AI, which is similar to how I use it in my own coding. If I have an error I can’t see, I’ll drop a function or a few lines of code into the agent, and most of the time it finds the comma I was missing right away. I’ve spent hours looking for dumb errors like that, and now I don’t have to.

More than this, though, I agreed with DHH when he said that coders wanting to learn a new language shouldn’t rely on AI almost at all in the beginning, because it takes the learning out of your hands. Much like we wouldn’t say we “learned” an instrument if AI did the work and we just pasted the audio into an editor and exported the result.

This idea applies to all realms of cognitive work. Outsourcing all your thinking to AI and then pasting the results into your work doesn’t teach you anything. If you want to produce good writing and good thoughts in any domain, you need to expend the energy to think deeply about things and wrestle with your words.

Automation Hype

Aaron Benanav looks at the hype around automation in his book Automation and the Future of Work. The key idea that sticks out here is the hype portion.

“It is the reason that predictions of a coming wave of pandemic-induced automation ring so hollow. They mistake the technical feasibility of automation (itself more a shaky hypothesis than a proven result) for its economic viability.” (Automation and the Future of Work, pg. 44)

Benanav tackles the why of the hype and concludes that it’s mostly about keeping workers feeling that their jobs are precarious, so they feel they have less power in negotiations and are willing to work more hours for less money and fewer perks. Over the last decade, coders have been so in demand that they were given some crazy perks, but those perks have been rolled back as tech layoffs have continued to skyrocket.

Toss AI hype on top of that, with managers assuming their coders are going to be 20% more efficient so they don’t have to hire.

Hype is often a tool of big tech, mainly rich white dudes who went to similar schools, to make themselves more money. We have these cloned robots of Silicon Valley leaders making decisions, when we know that the more of a monoculture you have, the worse your decisions are. But they keep fooling enough people, so they keep making money and keep gaining power.
