
Eight books you should read to really understand AI

Welcome back to Neural Notes, a weekly column where I look at how AI is influencing Australia. In this edition: some books you might want to consider reading.


The generative AI hype train has been going full steam ahead for almost three years now, and depending on which echo chamber you're in, the conversation around AI can lack nuance. Unsurprisingly, there's a lot more to discuss than productivity gains and alleged money saved.

Fortunately, those conversations are indeed happening. There are quite a few really detailed books on AI, written by some of the field’s sharpest thinkers (and critics). 

This doesn’t mean they’re anti-AI, but they do explore the ethics and risks, as well as the opportunities, behind the technology that is rapidly shaping our future.

Man-Made by Tracey Spicer (2023)

Spicer’s part-memoir, part-investigation digs into how gender bias is coded into the technology we use every day, from facial recognition to voice assistants that default to female voices.


Drawing on interviews with engineers, ethicists and users, she shows how AI can either reinforce or challenge inequality. 

AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan and Sayash Kapoor (2024)

Born from their viral newsletter, Narayanan and Kapoor’s book is a myth-busting field guide for the generative-AI era. 

They unpack what large language models actually do (and what they don’t), grounding the conversation in data rather than hype. It’s smart, funny and clarifying.

Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell (2019)

Mitchell combines accessible science with philosophical depth, exploring why pattern recognition isn’t the same as understanding. 

Through examples like self-driving cars and image-recognition failures, she shows how AI’s limits are just as revealing as its capabilities.

The AI Con: How to Fight Big Tech’s Hype Machine and Create the Future We Want by Emily M Bender and Alex Hanna (2025)

Two of the industry’s most vocal critics dissect how corporate narratives shape public perception. They expose the rhetorical sleight of hand behind “responsible AI” marketing and offer tools to resist the idea that innovation automatically equals progress. 


Artificial Intelligence and the Value Alignment Problem by Travis LaCroix (2025)

LaCroix, a philosopher of science, tackles one of AI’s most persistent dilemmas: how to embed human values into systems that don’t share them. 

Moving beyond sci-fi speculation, he focuses on the messy ethics of real-world deployment, from autonomous vehicles to decision-making tools already influencing policy.

Weapons of Math Destruction by Cathy O’Neil (2016)

A modern classic that predicted today’s accountability crisis. O’Neil shows how opaque algorithms can quietly amplify inequality across finance, education and criminal justice. 

Her case studies remain a warning for every new AI application that treats fairness as an afterthought.

Feeding the Machine: The Hidden Human Labour Powering AI by James Muldoon, Mark Graham and Callum Cant (2024)

Behind every “automated” system are thousands of workers labelling data, moderating content, and training models, often for a few dollars an hour. 

This book exposes that invisible global workforce and asks what responsibility the AI industry has toward them. It’s an unflinching look at the human cost of convenience.

Checkmate Humanity: The How and Why of Responsible AI by Catriona Wallace, Sam Kirshner and Richard Vidgen (2022)

An Australian blueprint for ethical AI in business and government. Wallace, Kirshner and Vidgen map the taxonomy of AI harms, from bias to misinformation, and outline concrete steps for governance, transparency and accountability.

A must-read for anyone trying to build or regulate AI responsibly.
