Categories
Essay

Good AI / Bad AI Addendum

I made a brief reference in my first AI post to:

the well-rehearsed issue of copyright infringement as LLMs hoover up any text found on the web

but it is too important an issue to be left at that: Kate Bush, Annie Lennox and Damon Albarn are among 1,000 artists on a silent ‘AI protest’ album launched to emphasise the impact on musicians of the UK’s plans to let AI train on their work without permission (see this Guardian article). Copyright is important to ALL creative publishers of music, poems, literature, scholarly articles, etc., as it protects their work from unauthorised use and ensures fair recompense. The new UK government exemption allows AI companies to train their algorithms on the work of such creative professionals without compensation.

The issue is explained more fully in another Guardian article from the same issue (Tuesday 25th February). Andrew Lloyd Webber and Alistair Webber’s clearly argued opinion piece, “It’s grand theft AI and UK ministers are behind it. Oppose this robbery of people’s creativity” explains the problem in some detail and with some force, noting that the government’s consultation which ended this week “is not regulation, it is a free pass for AI to exploit creativity without consequence.”

Copyright ensures creators retain control and are fairly compensated. It underpins the creative economy. Put simply, it allows artists and creatives to make a living.

The point that both I and Robert Griffiths have made (see my first AI post) is made again here:

AI can replicate patterns, but it does not create. If left unregulated, it will not just be a creative crisis, but an economic failure in the making. AI will flood the market with machine-generated imitations, undercutting human creativity … 

… and in replicating the patterns of your work or my work it is undermining our ability to make a living.

Copyright protections are the foundation that allows creators to produce the high-quality work AI depends on. Without strong copyright laws, human creativity will be devalued and displaced by machines.

Both articles are essential reading if you are interested in understanding how AI is set to move forward. Or indeed the stage it has already reached. We need to understand and deal with the problems as they arise. There needs to be more open debate and more understanding about ‘good AI’ and ‘bad AI’.

And, I repeat, the man and woman in the street need both to understand AI and to have a choice as to whether they use (or are exposed to) it.

Postscript:

The Authors’ Licensing and Collecting Society (ALCS) has just made public its 24-page response to the Government Consultation. It is introduced by CEO Barbara Hayes here, and the link to the full PDF document is at the foot of that page. It makes interesting reading (very!), but perhaps the most interesting issue highlighted is the number of legal challenges that are likely to ensue if the proposed exception-based approach is taken:

The central issue giving rise to this uncertainty is encapsulated well in a paper co-authored by US and German academics: ‘The training of generative AI models does not limit the use of the training data to a simple analysis of the semantic information contained in the works. It also extracts the syntactic information in the works, including the elements of copyright-protected expression. This comprehensive utilization results in a representation of the training data in the vector space of the AI models and thus in a copying and reproduction in the legal sense. Consequently, the training of generative AI models does not fall under the exceptions for text and data mining.’ (Dornis, Tim W. and Stober, Sebastian, Urheberrecht und Training generativer KI-Modelle – technologische und juristische Grundlagen, September 2024).

In the US there is already a significant number of lawsuits relating to the use of copyright material by AI systems.

If you have concerns over the use of supposedly copyright-protected material, this report is a ‘must-read’ document.

Categories
Essay

Good AI / Bad AI Revisited

There has been a lot in the news recently about AI – none of which, as it happened, did anything to change my views expressed in the 6th February post. But it – and a conversation I had with an AI user – did remind me of another issue.

As an information scientist, I was – I suppose like any researcher – taught to look at multiple sources and to verify my sources. So, for example, if I were looking for medical advice I would favour NHS (or the US equivalent) sites over most others. And – certainly – if I was being advised on a course of action or on a medication – I would compare a number of sites and read (evaluate) what they all had to say. I would also build into my evaluation a weighting based on the source. Weighting sounds like a complicated algorithm, but all I mean is that I would favour information from known sites (NHS, etc.) over that from an unknown blog. Because I could evaluate in that way.

It seems to me that while AI search engines/chatbots may search far wider and faster than I could ever hope to do, there is (in my limited experience) no (or little) information provided about sources. I know that there is weighting built into their algorithms (a sort of sequential word probability at the lowest level), but I do not know whether that weighting extends to analysing sources, nor do I know – if it does – on what that weighting is based. (For a simple explanation of how basic weighting and large language models (LLMs) work, see this Aeon video – which does not mention sources!)

This means – I think – that if you use an AI chatbot/LLM to do your research, you are relying on a probability that the answer is correct, based mainly on word probabilities (of the ‘is the word “green” likely to be followed by “leaf” rather than “face”’ variety), with little attention paid to the various URLs/sources from which the information presented is culled – if ‘information’ it is (can you call a collection of words increasingly likely to occur together ‘information’?). I am not even sure whether source information is built into the algorithms.
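For readers curious what ‘sequential word probability’ looks like at its very crudest, here is a toy sketch (a bigram count, nothing remotely like the scale or sophistication of a real LLM; the tiny ‘corpus’ is invented purely for illustration):

```python
from collections import Counter, defaultdict

# A toy "corpus" - real models train on billions of words; this one is invented
corpus = ("the green leaf fell from the tree "
          "the green leaf turned brown "
          "the green face appeared").split()

# Count which word follows which - the crudest form of
# "sequential word probability"
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word_probability(word, candidate):
    """Probability that `candidate` follows `word` in the toy corpus."""
    total = sum(follows[word].values())
    return follows[word][candidate] / total if total else 0.0

# 'green' is followed by 'leaf' twice and 'face' once in this corpus,
# so 'leaf' comes out as the likelier continuation
print(next_word_probability("green", "leaf"))
print(next_word_probability("green", "face"))
```

Notice that nothing in this sketch records where any word came from: the probabilities survive, the sources do not – which is essentially the concern raised above.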

And I have no information on how trustworthy that makes AI research-bots. Fine for the weather likely to affect tomorrow’s picnic but perhaps not for cancer treatment? Better than that? Worse?

Although – possibly – if you are searching for ‘facts’ (I mean ‘important facts’ such as the right medication as opposed to the correct ending of a quotation from Byron), the AI system goes beyond the LLM. But most of us do not know whether that is true… or indeed how the AI would interpret my word ‘facts’!

Or ‘important’!

And I think we are getting back into the ‘morality’ realms I dealt with in the earlier posting.

For now – while I am still able to choose – I shall use search engines (and I know these all have some ‘intelligence’ built in) that allow me to assess the degree to which I can trust the answer.

Categories
Poetry

A New Nursery Rhyme

Following on from the short February 4th ‘Miracles’ poem with its ‘time’ theme, here is another poem that recognises the variability of time!

Tick, tock, distant star
How I wonder when you are
Just a timepiece in the sky
Your every ‘now’ another lie!

Categories
Essay

Good AI / Bad AI

As a writer – possibly even a poet – I have concerns about the Large Language Models (LLMs) of Artificial Intelligence. As Robert Griffiths wrote recently in PNR 281:

“But even if these programs could train on ‘good’ poetry, it is not clear how, in their production of what is statistically most likely in a word-string, they could produce anything original. It is not obvious that any analysis of the best poetry written before 1915 would have come up with the third line of Prufrock [‘Like a patient etherized upon a table’, since you ask]. That line was not already waiting in that poetry; it was not even waiting in language.”

Crucially he reminded readers that it arose “from a particular human being’s unique relationship to that poetry and the world.”

This echoed a part of something I wrote about a month ago. I too have concerns about AI producing art, fiction and poems for that very reason. AI used for necessary processes – such as NHS image scanning to speed up analysis and consultations – is a wonderful step forward. Unnecessary AI simply to make money for the lazy is not. In essence my concerns boil down to three issues which I have classified as matters of morality, followed by three further issues:

  1. Morality 1. The ability to produce (in seconds, apparently) novels or poems or works of art – forgeries basically – is extraordinarily clever but… why? Apart from the amusement value of the last, who needs them? What value are they/do they have? A novel or a poem (even one of mine) is a representation of the author’s thinking: it has his/her imprint and imprimatur. It is essentially – leaving aside the individual creator – the art/creation of this planet’s life and represents a version of this planet’s thinking/beliefs/understanding of life, etc., at the point in time at which it was written. An AI creation is just some cleverly jumbled words with no life or meaning other than the lexical. Essentially I would suggest it has no value. Ditto the works of art. This is a waste of resources.
    Another thought is that it may seriously mislead readers – uncritical children learning (or having recently learned) to read, future generations – and that could have serious repercussions.
    Additionally there is the well-rehearsed issue of copyright infringement as LLMs hoover up any text found on the web.
  2. Morality 2. Like data centres and bitcoin, AI systems use huge amounts of electricity and water. Is this morally acceptable at a time when we are having trouble producing enough/enough cleanly? I suppose I would argue that it is OK for the image scanning type of work but not for creating valueless, gimmicky novels or pictures, or for enhancing – a questionable word – search results or providing a voice response when I ask about the weather – something I could do more easily on my iPhone!
  3. Morality 3. Finally – and maybe this should have been the first of the three – AI systems have no inherent morals or ethics. Arguably, neither do many of our leaders who make choices on our behalf, but at least they exist in the same bubble of morality as I do. Remember Asimov’s laws for robots – basically, do no harm to humans – do AI systems have even that basic ‘morality’ built in? AI is (I think) used in legal as well as medical work – what moral and ethical safeguards are there? (Even at a lesser level than morality/ethics, can we be sure that the rules built in are the same ones that a judge would make?) Can the system vary them? Should it be able to? Should we – the general public – know what they are? Who decides on the morals/ethics?
  4. Security is definitely an issue – not just in government or armed forces systems. It does need to be addressed IN EVERY APPLICATION of AI. That probably means a minimum level should be set and regulated. (By someone!)
  5. Definition: what do we mean by AI? The term sweeps in everything from general robotics – as on a production/assembly line, which probably has very limited intelligence beyond recognising parts, etc. – through image recognition and control to Large Language Models, which swallow and assimilate and ‘learn’ from huge, uncontrolled and unfiltered vats of text. Without permission. Without (so far as I am aware) any human interference, value adding or ‘explaining’. Shouldn’t there be some understood vocabulary or classification or Linnaean taxonomy beyond/below the ambidextrous ‘AI’? And shouldn’t we all (have the opportunity to) understand it?
  6. Choice. In many cases AI is being foisted on us whether we will or not. If I buy a new car my interaction with it may be largely via ChatGPT (I may ask the navigation system out loud to re-route me to a shop and it may reply, ‘But that shop is currently closed, I’ll take you to…’). Already search engines may incorporate it. What else does? Who knows? I believe that users should have the right to know – and the ability, built into the interface, without having to argue with a nameless bit of AI on the phone! – to decide whether we want ‘ordinary, vanilla’ search or enhanced AI search. After all, what is AI doing in the search that the search engine hasn’t been doing (more or less) (more or less satisfactorily) for years? Essentially, I – as a human being – want to remain in control!
    And – another aside – shouldn’t users be able to decide whether the thing on the other end of a help line is human or artificial?

Link to my short story on Artificial Intelligence.

Categories
Poetry, Uncategorized

Miracles

Around me, the unseen miracle
Of the past: the birds in the tree—the hills—the clouds—the people passing by,
I am at the centre of my time
                And they are all in the past
What stranger miracles are there?

After Walt Whitman:

To me the sea is a continual miracle,
The fishes that swim—the rocks—the motion of the waves—the ships with men in them,
What stranger miracles are there?