
Geoffrey Hinton, AI, And Google’s Ethics Problem

Talk about the dangers of artificial intelligence, actual
or imagined, has become feverish, much of it induced by the
growing world of generative chatbots. When scrutinising the
critics, attention should be paid to their motivations. What
do they stand to gain from adopting a particular stance? In
the case of Geoffrey Hinton, immodestly seen as the
“Godfather of AI”, the scrutiny levelled should be
sharper than most.

Hinton hails from the
“connectionist” school of thinking in AI, the once
discredited field that envisages neural networks which mimic
the human brain and, more broadly, human behaviour. Such a
view is at odds with the “symbolists”, who focus on AI
as machine-governed, the preserve of specific symbols and
rules.

John Thornhill, writing
for the Financial Times, notes Hinton’s rise, along
with other members of the connectionist tribe: “As
computers became more powerful, data sets exploded in size,
and algorithms became more sophisticated, deep learning
researchers, such as Hinton, were able to produce ever more
impressive results that could no longer be ignored by the
mainstream AI community.”

In time, deep learning
systems became all the rage, and the world of big tech
sought out such names as Hinton’s. He, along with his
colleagues, came to command absurd salaries at the summits
of Google, Facebook, Amazon and Microsoft. At Google, Hinton
served as vice president and engineering
fellow.

Hinton’s departure from Google, and more
specifically from his role with the Google Brain team, got
the wheel of speculation whirring. One line of thinking was
that it took place so that he could criticise the very
company whose achievements he had aided over the years.
It was certainly a bit rich, given Hinton’s own role in
pushing the cart of generative AI. In 2012, he pioneered a
self-training neural network capable of identifying common
objects in pictures with considerable accuracy.

The
timing is also of interest. Just over a month prior, an open
letter was published by the Future of Life Institute
warning of the terrible effects of AI beyond the wickedness
of OpenAI’s GPT-4 and other cognate systems. A number of
questions were posed: “Should we let machines flood our
information channels with propaganda and untruth? Should we
automate away all the jobs, including the fulfilling ones?
Should we develop nonhuman minds that might eventually
outnumber, outsmart, obsolete and replace us? Should we risk
loss of control of our civilization?”

In calling for a
six-month pause on developing such large-scale AI projects,
the letter attracted a number of names that somewhat
diminished the value of the warnings; many signatories had,
after all, played a far from negligible role in creating
automation, obsolescence and encouraging the “loss of
control of our civilization”. To that end, when the likes
of Elon Musk and Steve Wozniak append their signatures to a
project calling for a pause in technological developments,
bullshit detectors the world over should stir.

The
same principles should apply to Hinton. He is obviously
seeking other pastures, and in so doing, preening himself
with some heavy self-promotion. This takes the form of mild
condemnation of the very thing he was responsible for
creating. “The idea that this stuff could actually get
smarter than people – a few people believed that. But most
people thought it was way off. And I thought it was way off.
[…] Obviously, I no longer think that.” He, you would
think, should know better than most.

On Twitter,
Hinton put
to bed any suggestions that he was leaving Google on a
sour note, or that he had any intention of dumping on its
operations. “In the NYT today, Cade Metz implies that I
left Google so that I could criticize Google. Actually, I
left so that I could talk about the dangers of AI without
considering how this impacts Google. Google has acted very
responsibly.”

This somewhat bizarre form of
reasoning suggests that any criticism of AI will exist
independently of the very companies that develop and profit
from such projects, all the while leaving the developers –
like Hinton – immune from any accusations of complicity.
The fact that he seemed incapable of developing critiques of
AI, or of suggesting regulatory frameworks, within Google itself
undercuts the sincerity of the move.

In reacting
to his long-time colleague’s departure, Jeff Dean, Google’s
chief scientist, also revealed that the waters remained calm,
much to everyone’s satisfaction.
“Geoff has made foundational breakthroughs in AI, and we
appreciate his decade of contributions to Google […] As
one of the first companies to publish AI Principles, we
remain committed to a responsible approach to AI. We’re
continually learning to understand emerging risks while also
innovating boldly.”

A number in the AI community did
sense that something else was afoot. Computer scientist
Roman Yampolskiy, in responding
to Hinton’s remarks, pertinently observed that concern for
AI safety and research within the organisation were not
mutually exclusive – nor should they be. “We should
normalize being concerned with AI Safety without having to
quit your [sic] job as an AI researcher.”

Google
certainly has what might be called an ethics problem when it
comes to AI development. The organisation has been rather
keen to muzzle internal discussions on the subject. Margaret
Mitchell, formerly of Google’s Ethical AI team, which she
co-founded in 2017, was given
the heave-ho after conducting an internal inquiry into
the dismissal of Timnit Gebru, who had been a member of the
same team.

Gebru was scalped in December 2020 after co-authoring
work that took issue with the dangers arising from using
AI trained and gorged on huge amounts of data. Both Gebru
and Mitchell have also been critical about the conspicuous
lack of diversity in the field, described by the latter as a
“sea of dudes”.

As for Hinton’s own
philosophical dilemmas, they are far from sophisticated.
Whatever Frankenstein role he played in the creation of the
very monster he now warns of, his sleep is unlikely to be
troubled. “I console myself
with the normal excuse: If I hadn’t done it, somebody else
would have,” Hinton explained
to the New York Times. “It is hard to see how you
can prevent the bad actors from using it for bad
things.”

Dr. Binoy Kampmark was a Commonwealth
Scholar at Selwyn College, Cambridge. He currently lectures
at RMIT University. Email: bkampmark@gmail.com
