Dan's main premise is that many of our organisations have become so complex that no one feels responsible for the organisation's actions, which makes it easy for any level of management to deflect responsibility for the results. He calls the processes that push responsibility away from decision makers accountability sinks.
An accountability sink arises when a front-line worker has to enforce a policy that they cannot change, and the end users affected by the policy cannot change it either. The only way to effect change is to reach someone further up the chain, who isn't available and thus never has to experience the results of the policies they put in place.
While he mentions AI and algorithms and draws a small connection between them, I don't think he takes it far enough: companies now roll out either of these options and then deflect any consequences onto the unknowable black box of AI tools. If algorithms can't be held accountable, why should we let them make decisions? IBM is famously claimed to have warned against this future in a 1979 presentation slide:
“A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION.”
I think we should hold the designers and companies accountable for the output of their AI and algorithms, and not let them point to the black box as a way of escaping the results of their choices in designing a tool.
Overall, a good book, though I admit that much of the discussion of cybernetics and the ideas of Stafford Beer took some intense focus to get through. I loved the idea of accountability sinks and couldn't help but keep finding areas where they apply.
- Purchase on Bookshop.org - support local bookstores
- Purchase on Amazon