Co-Intelligence - Ethan Mollick

Ethan Mollick

MAYBE NONFICTION

Started: Feb 23, 2025

Finished: Mar 01, 2025

Review

AI has come for our world, whether we like it or not. In Co-Intelligence, Ethan Mollick takes a positive look at what AI could mean for our lives. He explores how to maximize its effectiveness for our productivity and dreams of a "better" world where AI lets us get more done with less effort, and we're rewarded with better wages and shorter working hours. In theory, we can then explore our creative sides instead of working all the time.

In these final ideas I think Mollick is way off base. Productivity has risen since the '70s such that we could already have short work weeks and lots of free time. Instead, capitalist owners have taken a larger share of the profits for themselves, leaving workers struggling to get by. If things continue as they are, I think it's more likely we end up with a very rich few and many who have no work and no money.

While the book was interesting, I don't feel Mollick engaged with any serious critiques of AI, such as how it has taken copyrighted creative content and then used it to take work from the very people who made it. The only glimmer was a few brief mentions of his worry that we'll lose the "experts" who can judge AI output in their field, because we'll stop hiring Jr people and thus have no apprenticeship pipeline to build experts.

Overall, Mollick seems to have bought into the techno-utopian idea that all tech is good and that society will somehow magically benefit if we let corporations run their profit centers unchecked.

Notes

**Purpose**
- to take us on a tour of [[Ai]] as a co-intelligence, showing how it can be used in our lives. Pg XIX

> Ultimately, that is all [[ChatGPT]] does technically: act as a very elaborate autocomplete like you have on your phone. Pg 9
- but since it has read everything online, it has access to more domain-specific information than you could ever have
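The "elaborate autocomplete" idea can be sketched with a toy bigram model (my own illustration, not from the book): count which word most often follows each word in a corpus, then always suggest the most frequent follower. Real [[Large Language Models|LLM]]s do this over vastly more context and data, but the principle is the same.

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for "everything online".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word):
    """Suggest the statistically most likely next word, like phone autocomplete."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(autocomplete("the"))  # "cat" — it follows "the" most often here
```

Note how the model has no idea what a cat is; it only knows which words tend to co-occur, which is exactly why plausible-sounding output can still be wrong.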

> And, potentially worse, it has no ethical boundaries and would be happy to give advice on how to embezzle money, commit murder, or stalk someone online. Pg 13, 14
- no longer is this info hidden in some deep corner of the internet. Anyone can easily find it by searching with an AI

> Demonstrations of the abilities of [[Large Language Models|LLM]]s can seem more impressive than they actually are because they are so good at producing answers that sound correct, at providing the illusion of understanding. Pg 25, 26
- so to know if the answer is right you need to be a domain expert already
- but LLMs are taking over Jr positions so we won't have domain experts at some point

> The result gives AIs a skewed picture of the world, as its training data is far from representing the diversity of the population of the internet, let alone the planet. Pg 35
- a result of [[Ai]] being trained mainly on Western-centric, English-language sources and being designed by rich white dudes, with all the biases those dudes bring to the table

> Low-paid foreign workers around the world are recruited to read and rate AI replies, but in doing so, are exposed to exactly the sort of content that AI companies don't want the world to see. Pg 38
- but as contractors in areas with poor labour protections, they don't get the [[mental health]] support needed to even attempt to deal with the bad stuff they see.

**4 Things we should be doing with AI to survive it**
1. Invite AI to the table Pg 47
- use [[Ai]] so you understand what it's good at and what it's bad at
2. Be the human in the loop Pg 52
- learn to spot the hallucinations Pg 54
- but I hate that we call them hallucinations instead of the errors that they are
- this really repeats the first point: use it so you are in the loop
3. Treat AI like a person Pg 55
- but [[Ai]] isn't a person and doesn't deserve the respect, the rights, or the deference you would accord any living human.
- the author wants to speak of AI like a person because it's easier linguistically for his book. Pg 56,57
- [[tech giants|big tech]] want us to think of it as human like so we relate to it more and bump their valuations
4. Assume AI is at its worst right now Pg 60
- this I get, and it's the same thing we do with security issues: assume the attacks only get easier to perform and reveal more issues deeper in the tech stack

![[we accord AI more weight than a human]]

> Some companies will start to deploy [[Large Language Models|LLM]]s that are built specifically to optimize "engagement" in the same way that [[social media]] timelines are fine-tuned to increase the amount of time you spend on your favorite site. Pg 91

^485d05

- this is not for your benefit, no matter what they say. It's for corporate valuation: gaining market share and keeping engagement numbers up so they can make money. Later they can gate the features you use most, and people will pay for things they suddenly feel they need, because an AI that chats with you and pleases you is easier to deal with than a real person who has needs of their own.
- it makes being a person easy, because you only have to think of yourself.

> Remember that [[Large Language Models|LLM]]s work by predicting the most likely words to follow the prompt you gave it based on the statistical patterns in its training data. It does not care if the words are true, meaningful, or original. It just wants to produce a coherent and plausible text that makes you happy. Pg 93, 94

^641347

- it doesn't care about truth, or the politics of the country it's deployed inside, it just wants to make you happy

> LLMs are not generally optimized to say "I don't know" when they don't have enough information - Pg 96

^b74ebd
- we talk about [[Ai]] making mistakes, telling lies, and having hallucinations like a human. But it's not human; it gets far more trust than a human would even when it's wrong, and thus we fact-check it less because it's a "computer" and we accord it more weight. Pg 67


> When we use [[Ai]] to generate our first drafts, we don't have to think as hard or as deeply about what we write. We rely on the machine to do the hard work of analysis and synthesis, and we don't engage in critical and reflective thinking ourselves. We also miss the opportunity to learn from our mistakes and feedback and the chance to develop our own style. Pg 120

^21b347

- we get anchored on the ideas that AI generated for us instead of branching out into creativity and making our own connections

> When the [[Ai]] is very good humans have no reason to work hard and pay attention. They let the AI take over instead of using it as a tool, which can hurt human skill development. Pg 129
- in this experiment if the AI was good people just posted questions and pasted answers without working to understand what was going on in the test/questions
- the "bad" AI that obviously had faults caused users to dig into the question more and massage the AI to get better answers

- sure, companies that get more [[productivity]] out of workers could use it to dominate instead of reducing headcount, but [[shareholder]]s trump so much else that companies often cut staff and still expect the growth trajectory to continue as it did with a higher headcount. Pg 146
- plus, messy employees want more money when they see the company doing well. If you keep them on their toes by firing people and cutting staff, you can pay less and keep shareholders happy with dividends.

> AI tracks the activities, behaviors, outputs, and outcomes of workers and managers. AI sets goals and targets for them, assigns tasks and roles to them, evaluates their performance, and rewards them accordingly. Pg 151
- he imagines a better world that uses [[Ai]]/[[Large Language Models|LLM]]s but treats workers well, giving them autonomy. It seems more like [[capitalist]] owners want less autonomy, so people become replaceable parts of the machine that makes them money, and they use AI and other [[mass surveillance]] technology to drive wages down so they can retain profits

> Additionally, most important: *there is no way to detect whether or not a piece of text is AI-generated*. A couple of rounds of prompting remove the ability of any detection system to identify AI writing. Even worse, detectors have high false-positive rates, accusing people (and especially nonnative English speakers) of using AI when they are not. Pg 163
- good luck disproving the false-positive. How do you prove it never happened?
- [[searching makes people think they learned 110920201038]] and [[Not with a Bug But with a Sticker - Ram Shankar Siva Kumar Hyrum Anderson#^f405f8|ai gives confident wrong answers]] and we trust computer answers simply because they're computers, so any student accused is fighting an impossible fight

> The biggest danger to our educational system posed by AI is not its destruction of homework, but rather its undermining of the hidden system of [[apprenticeship]] that comes from formal education. Pg 178
- this assumes interested teachers, and many of my kid's teachers seem fairly checked out, according to her. They're probably overwhelmed by kids who won't listen and parents who make everything the teacher's fault
- in the workforce we already have Jr devs having a hard time finding work because [[Ai]] can do many of the things that would have been given to them. Thus we'll have no Sr developers, because no one got an apprenticeship

> Even as experts become the only people who can effectively check the work of even more capable AIs, we are in danger of stopping the pipeline that creates experts. Pg 180
- see notes above on Jr -> Sr pipelines

> The nature of science is growing so complex that PhD founders now need large teams and administrative support to make progress, so they go to big firms instead. Thus, we have the paradox of our Golden Age of science. More research is being published by more scientists than ever, but the result is actually slowing progress! With too much to read and absorb, papers in more crowded fields are citing new work less and canonizing highly cited articles more. Pg 202
- see [[DDoS’ing Yourself With to-Dos and Reminders]] as a parallel example

- if [[Ai]] grows exponentially he imagines short work weeks and [[universal basic income|basic income]] Pg 208
- but productivity has grown enough since the '70s to enable this already, if the ownership class were willing to stop taking all the profits for themselves and instead invest in the well-being of their employees. The [[Puritan Work Ethic|Protestant Work Ethic]] doesn't let us have more time off
- I feel like serfdom and poverty for most, with a very few rich people, is a more likely outcome