The current hype machine in tech is all about ChatGPT and other AI models, but is there much credibility behind them? Should we even call them AI when they’re really large language models that merely predict statistically likely word combinations?
Not so long ago cryptocurrency was all the rage; then the bank crashes came, scammers abounded, and crypto now has little chance of overtaking the banking market as was once predicted. Before that, it was the robots that were coming for our jobs…so let’s start there.
Robots Were Coming for Our Jobs
It was only about 10 years ago that robots and automation were coming for all our jobs. It was claimed that automation would take 47% of the current job market[1]. Companies spent billions on robots and automation, and the companies selling them made billions on the hype that workers could be cut from the equation, letting owners make far more money without the pesky expense of people.
The reality is that less than 10% of jobs were lost to automation and robots. Robots lift heavy things well, but often require a human to ensure the job is done as expected, and then to fix them when they break. Robots and automation brought a job shift, not the wholesale replacement of almost 50% of the labour market as once claimed.
Take a look at lists of the jobs that will be taken by AI, and among the most commonly listed are teachers[2] and customer service agents[3]. Sure, ChatGPT may be able to convey the base knowledge a teacher would use in class, but I help out in a number of classes at my kid’s school and wonder how on earth ChatGPT could react to the mayhem that happens in a classroom.
As for customer service agents, we’ve all been on automated phone tree systems, and most of us are trying to find the fastest way to reach a person. We don’t want to waste our time with the endless tree, so ChatGPT may steal the job of the “Press 1 if…” menu, and if it works well we may welcome that shift. But a real person who is empowered to solve your problem won’t be replaced by the statistical word prediction systems passed off as current AI.
Will some jobs change? Sure. Lawyers may use AI to get first drafts of wills. It also wouldn’t surprise me if the only legal counsel available to those without lots of money ends up being some type of AI, but that’s a different structural problem with how society divides money between the wealthy and the poor. Wealthy people will still have in-person lawyers, who may use AI in part of their process, but they won’t move to a totally AI-based lawyer any time soon.
The Language Used Around LLMs Is Misleading
One of the big issues with ChatGPT is how it’s spoken about. We call it AI, but it’s really a large language model that is good at statistical word prediction[4]. Given a set of input words, it predicts which words should come next and produces something that is plausibly close to the answer we expect.
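To make the “statistical word prediction” point concrete, here’s a toy sketch of my own (an illustration of the idea, not how ChatGPT is actually built): a bigram model that counts which word most often follows each word in a tiny corpus, then uses those counts to predict the next word. Real LLMs use neural networks over vastly more context, but the core job is the same: predict a plausible next word.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows each word in a
# small corpus, then predict the next word from those statistics.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → "cat" (follows "the" twice in the corpus)
```

The model has no understanding of cats or mats; it only knows which words tended to follow which, which is the statistical heart of the criticism above.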
The thing is, it benefits companies selling ChatGPT and other LLMs to use words like “understand” because it makes it seem like they’re further along in building something that is truly cognitively aware. The more you believe the hype around the progression of their statistical word prediction machine, the more money companies like OpenAI make.
We see this type of marketing language used in “smart” watches, which are not smart at all. They regularly mess up what we expect them to show, and are often worse at the job of telling time than previous watches. But calling them “smart” watches makes us believe they’re better than “computer” watches, so we are more likely to purchase them.
Hype based language and marketing also hides the environmental impact of ChatGPT behind excitement.
The Environmental Impact of LLMs
It’s estimated that ChatGPT needs to “drink” one large bottle of water for every 20 to 50 questions you ask it[5]. This doesn’t even account for the water used to train the model.
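For a rough sense of scale, here’s a back-of-envelope sketch based on that estimate. The bottle size and query volume are my own assumptions for illustration, not measured figures:

```python
# Back-of-envelope sketch of the cited estimate: roughly one bottle
# of water per 20-50 questions. Assumed numbers, for illustration only.
BOTTLE_ML = 500               # assume a 500 ml bottle
QUESTIONS_PER_BOTTLE = 20     # low end of the 20-50 range

ml_per_question = BOTTLE_ML / QUESTIONS_PER_BOTTLE  # 25 ml each

# Scale to a hypothetical one million questions per day:
daily_litres = 1_000_000 * ml_per_question / 1000
print(f"{ml_per_question} ml per question, "
      f"{daily_litres:,.0f} litres per day per million questions")
```

Even at the low end, a service answering millions of questions a day consumes water on a scale that individual paper-straw sacrifices can’t offset.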
Didn’t we just get out from under all the crypto hype that strained the Texas power grid and increased costs to general users by 5%[6]? Crypto miners were using all that electricity while returning few jobs to an area, unlike manufacturing with its electricity use. LLMs are in the same boat, requiring huge data centers, lots of electricity, and few people on site to deal with the machines. It may look good on the surface to be “tech forward” and court data centers, but when your power grid fails[7] for everyday people, those same tech companies aren’t footing the bills for the failure they helped cause.
Why do we endorse these companies using a shared resource while we have to use soggy paper straws to save the environment? Why is it okay for these corporations to reap the profits of cheap energy built with public funds, then not dig into those profits when they strain the resource and damage the lives of those living under their sphere of influence? Corporations take the profits from their environmental destruction while socializing the losses, leaving the rest of us to deal with the environmental hell they helped create.
Using Public Resources for Private Profits
LLMs are doing the same thing companies have done for generations: using public resources for their own profit. LLMs are trained on public data, including my site, since much of it is freely available. I didn’t get paid for the use of my data because it was public, but OpenAI will want to get paid for any output of a tool trained on my data.
They want to privatize the profit and socialize the costs so they can be profitable at all. We saw this with the collapse of Silicon Valley Bank. Silicon Valley spent decades getting out from under any type of regulation, repeating the mantra that government should be small, but when the collapse hit their pocketbooks it was time for a bailout and “big” government to help them[8].
The 2008 crash and bailout was similar: executives and banks were saved while homeowners had their mortgages foreclosed on by those same banks. Yes, the SVB bailout will be assessed to banks so taxpayer-contributed funds won’t be used directly, but banks will simply pass the cost of that levy on to their customers, so we’ll still pay for the bailout.
Ultimately, I find ChatGPT interesting, but I’m not on the hype train. I’m not using it with my notes. It’s not creating my content. It sometimes helps with headline ideas for articles, but even then I spend a bunch of time reworking the headlines.
The only people the hype train is good for are those selling LLMs as AI. Everyone else should approach with caution, just as ChatGPT itself tells you to[9].
1. The revolution will not be brought to you by ChatGPT
2. ChatGPT may be coming for our jobs
3. These 20 jobs are the most “exposed” to AI
4. Climbing Towards NLU
5. ChatGPT needs to “drink” a water bottle’s worth of fresh water for every 20 to 50 questions you ask
6. Bitcoin mining has raised Texas electricity pricing 5%
7. Texas’ fragile grid isn’t ready for crypto mining’s explosive growth
8. With demands for bank bailout, Silicon Valley shows its ‘small government’ mantra was just a pose
9. When I asked ChatGPT about the percentage of jobs it’s claimed to replace, it told me employment data is hard and I should be skeptical of any numbers, because it’s notoriously hard to predict what technology will be able to do and people are often way too bullish on the outcomes.