
Good AI / Bad AI Revisited

There has been a lot in the news recently about AI – none of which, as it happened, did anything to change my views expressed in the 6th February post. But it – and a conversation I had with an AI user – did remind me of another issue.

As an information scientist, I was – I suppose like any researcher – taught to consult multiple sources and to verify them. So, for example, if I were looking for medical advice I would favour NHS (or the US equivalent) sites over most others. And – certainly – if I were being advised on a course of action or on a medication – I would compare a number of sites and read (evaluate) what they all had to say. I would also build into my evaluation a weighting based on the source. Weighting sounds like a complicated algorithm, but all I mean is that I would favour information from known sites (NHS, etc.) over that from an unknown blog. Because I could evaluate in that way.

It seems to me that while AI search engines/chatbots may search far wider and faster than I could ever hope to do, there is (in my limited experience) little or no information provided about sources. I know that there is weighting built into their algorithms (a sort of sequential word probability at the lowest level), but I do not know whether that weighting extends to analysing sources, nor – if it does – on what that weighting is based. (For a simple explanation of how basic weighting and large language models (LLMs) work, see this Aeon video – which does not mention sources!)

This means – I think – that if you use an AI chatbot/LLM to do your research, you are relying on a probability that the answer is correct based – mainly – on word probabilities (of the… is the word ‘green’ more likely to be followed by ‘leaf’ than by ‘face’ variety), with little attention to the various URLs/sources from which the information presented – if ‘information’ it is (can you call a collection of words increasingly likely to occur together ‘information’?) – is culled. I am not even sure whether source information is built into the algorithms at all.
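To make the ‘green leaf’ versus ‘green face’ idea concrete, here is a toy sketch of word-probability counting – my own illustration, not how any real chatbot actually works (real LLMs are vastly more sophisticated). It simply counts, in a tiny made-up corpus, how often one word follows another:

```python
# Toy illustration of "sequential word probability": count which word
# follows which in a tiny corpus, then compare candidate continuations.
# This is a deliberate simplification, not a description of any real system.
from collections import Counter, defaultdict

corpus = (
    "the green leaf fell from the tree "
    "a green leaf in spring "
    "the green face in the painting"
).split()

# Count bigrams: for each word, tally the words that follow it.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def next_word_probability(prev, candidate):
    """Fraction of times `candidate` follows `prev` in the toy corpus."""
    counts = follow_counts[prev]
    total = sum(counts.values())
    return counts[candidate] / total if total else 0.0

print(next_word_probability("green", "leaf"))  # 2 of 3 occurrences
print(next_word_probability("green", "face"))  # 1 of 3 occurrences
```

Note that nothing in those counts records *where* each sentence came from – an NHS page and an anonymous blog contribute identically – which is precisely the worry about sources.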

And I have no information on how trustworthy that makes AI research-bots. Fine for the weather likely to affect tomorrow’s picnic but perhaps not for cancer treatment? Better than that? Worse?

Although – possibly – if you are searching for ‘facts’ (I mean ‘important facts’ such as the right medication as opposed to the correct ending of a quotation from Byron), the AI system goes beyond the LLM. But most of us do not know whether that is true… or indeed how the AI would interpret my word ‘facts’!

Or ‘important’!

And I think we are getting back into the ‘morality’ realms I dealt with in the earlier posting.

For now – while I am still able to choose – I shall use search engines (and I know these all have some ‘intelligence’ built in) that allow me to assess the degree to which I can trust the answer.

By Chris

Poet and writer: I have travelled the world in the Merchant Navy, worked on the farm where I now live, and re-invented myself as an information scientist. Born in Sussex, I moved to Swansea and have lived in the same farm cottage in mid-Wales for almost 50 years.

I have three collections of poems in print: Mostly Welsh, Book of the Spirit and the recent Lost Time. Although initially entirely focussed on poetry, my writing has branched into short stories, and my first full-length work of fiction, The Dark Trilogy, and the collection of short stories – When I Am Not Writing Poetry – are also available.
