
Artificial Intelligence Alert

It just keeps coming! I thought that I had said or written everything that I needed to say or write – at least for a couple of months. But my metaphorical ink is barely dry and I am hit by another article highlighting another aspect (I might say risk). I have made it clear that you cannot expect ChatBots to give advice, or even answers, as a human agent would. I have said that they have inherent bias. I have said that their responses have no moral or ethical standards on which to draw.

Now I have to add that they have no desire… yet, no wish to do anything except respond… yet. Barely three paragraphs into his long article (Computers that want things) in the London Review of Books 47:18, 9th October, James Meek states:

…existing iterations of AI can’t do that – care. The chatbot doesn’t not care like a human not caring: it doesn’t care like a rock doesn’t care, or a glass of water. AI doesn’t want anything.

and then:

[Artificial General Intelligence – the next generation] will have to have some approximation of initiative, imagination and conscience, and the scientist-coders can’t set aside the part of the human brain that is inextricably bound up with reason: motivation. At this level, how could there be AI, artificial intelligence, without AD, artificial desire?

And that sounds like something that most of us would not want. Meek suggests that

we stand in a position to transcend evolution by defining the advanced AI we make as unselfish and benign. [But (quoting Geoffrey Hinton) he continues], ‘Suppose we make one million superintelligent AI entities, and all but three of them are kind, non-expansionist, selfless and non-tribal. But three of them are expansionist and self-interested. Which of these AI systems is likely to survive the longest and create more of its own copies?’

The ‘dangers’ of AI are not confined to job losses!

This is a must-read article!

P.S.

I have written about the inherent bias in ChatBots and Large Language Models before. An article in Aeon, ‘Holes in the Web’, points out that the under-representation of many languages means that local or indigenous knowledge is often lost or significantly under-represented. LLMs also reinforce this: because they are trained on data shaped by previous AI outputs, under-represented knowledge can become less visible – not because it lacks merit, but because it is less frequently retrieved. The more one source is used, the further into the background the others sink.

“The disappearance of local knowledge is not a trivial loss. It is a disruption to the larger web of understanding that sustains both human and ecological wellbeing.”

By Chris

Poet and writer: I have travelled the world in the Merchant Navy, worked on the farm where I now live, and re-invented myself as an information scientist. Born in Sussex, I moved to Swansea and have lived in the same farm cottage in mid-Wales for almost 50 years.

I have three collections of poems in print, Mostly Welsh, Book of the Spirit and the recent Lost Time. Although initially entirely focussed on poetry, my writing has branched into short stories, and my first full-length work of fiction, The Dark Trilogy, and the collection of short stories, When I Am Not Writing Poetry, are also available.
