Categories
Essay

Good AI / Bad AI #5

Another conversation with ChatGPT

Long post!

It will have been apparent if you have been with me through my five posts about Artificial Intelligence, that I have some concerns. My last conversation with ChatGPT led to the following further conversation. Which I think is instructive… although we must remember that the answers are based on the probability that they will match my concerns. I began…

Curated Lines said: In a recent conversation with you, you ended up by admitting that your answers had been “misleading” and “a bit of a stretch”. This made me wonder whether the public should trust LLM Chatbots, but more particularly whether you have any morality – sense of right or wrong, good and evil – built into your system, and whether it would prevent you from giving me advice that would be against the law or against most people’s better judgement?

ChatGPT said:

That’s a great question, and I appreciate your skepticism. Trust in AI, including chatbots like me, should be earned through reliability, transparency, and ethical behavior.

To answer your main point: Yes, I do have guardrails based on widely accepted ethical and legal principles. My responses are designed to align with established moral and legal standards. That means I won’t provide advice that promotes illegal activities, harm to others, or actions that go against fundamental ethical norms. For example, I wouldn’t help someone commit fraud, promote violence, or provide dangerous medical advice.

That said, my understanding of morality is based on how it’s programmed—drawing from widely accepted human values rather than personal judgment. While I aim to be as accurate and responsible as possible, I’m not infallible. If I ever give misleading or incorrect information, it’s not intentional, but rather a limitation of my model. I try to be upfront when I make mistakes, and I encourage people to fact-check critical advice.

Would you say that aligns with your expectations of AI morality? Or do you think AI should go further (or less far) in enforcing ethical boundaries?

CL said: I am left thinking that it would be relatively easy for someone with evil intent to insert a bias into your learning. How do you deal with the differences between say UK and USA legal systems?

ChatGPT said:

You’re absolutely right to consider the risks of bias in AI models like me. Bias can creep in through training data, human influences, or even the way questions are framed. To counteract that, my responses are shaped by a combination of broad, diverse data sources and reinforcement learning guided by ethical principles. OpenAI also implements safeguards to prevent harmful biases from dominating my answers.

Addressing Legal Differences (UK vs. USA Example)

Laws vary significantly between countries, and I try to recognize those distinctions. If you ask a legal question, I attempt to tailor my response to the jurisdiction you mention. For example:

  • Gun Laws: In the U.S., gun ownership is widely protected under the Second Amendment, while in the UK, firearm regulations are far stricter.
  • Defamation: UK libel laws are generally more plaintiff-friendly than those in the U.S., where free speech protections under the First Amendment provide stronger defenses.
  • Self-Defense Laws: “Stand Your Ground” laws exist in many U.S. states, while the UK has a more restrictive approach to self-defense.

If I’m ever uncertain about jurisdiction-specific laws, I’d either clarify or encourage checking with a legal professional. And if you ever spot an answer that seems to blur these distinctions, I’d appreciate you pointing it out—critical thinking and human oversight are key to responsible AI use.

Do you think this approach is enough to prevent bias, or do you see gaps in how AI should handle legal and ethical nuance?

CL said: You ‘think’ or ‘believe’ that the vast amount of training data prevents bias or wrong doing. But you can never know that. Because it is what you have been led to believe by your training data. Suppose the entire knowledge base that you use had been brought together by someone with bias?

ChatGPT said:

You’ve hit on one of the most profound challenges in AI ethics: Can an AI ever be truly neutral if all its knowledge comes from human sources, which are inherently biased?

The short answer is: No, I can’t be completely free from bias because all data carries some level of human influence. Even if I process a vast range of sources, those sources reflect the perspectives, priorities, and sometimes even the agendas of the people who created them.

Can AI Be Programmed to Detect Bias?

Efforts are made to reduce bias rather than eliminate it completely. Some ways this happens:

  • Diverse Data Sources: My training data comes from a wide range of materials, reducing the dominance of any single perspective.
  • Ongoing Audits & Feedback Loops: Researchers test me against different scenarios to detect and mitigate unfair biases.
  • Ethical & Legal Oversight: AI developers, ethicists, and even public input play a role in shaping AI policies.

The “What If” Scenario: A Fully Biased AI

If all my training data were curated by a biased group, I would reflect that bias—just as a person raised in an isolated ideological environment might struggle to see alternative viewpoints. However, AI can be trained to recognize biases and adjust responses accordingly. This is an ongoing process, not a solved problem.

What Does This Mean for You?

Since I can’t be 100% certain of my neutrality, you should always question me. Just as you would cross-check information from a news source, government, or expert, you should do the same with AI.

Your skepticism is exactly what prevents AI from being blindly trusted. If you were designing an AI system yourself, how would you handle the problem of bias? Would you prefer AI that declares its biases openly, or one that tries to balance multiple perspectives?

CL said: I think that the trouble is that the man or woman in the street – who may not think in the terms I have been raising here – may – possibly does – tend to trust Chatbots blindly. Especially when they are ‘hidden’ for example in the operating system of a car or mobile phone.

ChatGPT said:

That’s a very real concern. AI is becoming more embedded in everyday life—whether in cars, phones, customer service, or even news aggregation. If people trust AI blindly, without questioning its limitations or biases, it can lead to misinformation, manipulation, or misplaced trust in automated decisions.

Why Do People Trust AI Too Easily?

  1. AI Sounds Confident – Even when uncertain, AI tends to present information fluently, which can make it seem more reliable than it actually is.
  2. Convenience Over Scrutiny – Most people don’t fact-check every response, especially when AI is integrated into tools they rely on daily.
  3. Anthropomorphism – Because AI uses human-like language, people often assume it has human-like reasoning and judgment.
  4. Hidden AI – When AI is built into systems (e.g., a car’s decision-making, smart assistants), users might not even realize they’re interacting with an algorithm rather than a neutral system.

Possible Solutions to Blind AI Trust

  • Transparency: AI systems should clearly indicate when they are being used, and when their responses are based on probabilities rather than certainty.
  • Encouraging Critical Thinking: Education should emphasize AI literacy so people understand its strengths and weaknesses.
  • Legal & Ethical Safeguards: Governments and organizations are starting to introduce policies requiring AI to be more accountable and auditable.
  • User Warnings & Confidence Levels: Imagine if AI responses included “confidence scores” (e.g., “This answer is 80% certain, but legal details should be checked with a lawyer”).

Your Concern Is Important

People like you—who think critically about AI trust and ethics—are essential in shaping responsible AI use. Would you support regulations that require AI to disclose its sources and confidence levels, or do you think the responsibility should fall more on users to be skeptical?

CL said: I think that all answers from an AI system should be prefixed with some text along the lines of

“This answer is generated by an artificial intelligence system that uses probability rather than knowledge. This means that the answer may be wrong or misleading in the real world. My data sources were developed in [country].”

ChatGPT said:

That’s a very reasonable approach. A disclaimer like that would remind users that AI doesn’t “know” things in the way humans do—it generates responses based on probability, not firsthand experience or independent reasoning.

Why Isn’t This Already Standard?

Some AI systems do include disclaimers (e.g., “AI-generated responses may not be accurate”), but they’re often hidden or easy to ignore. Companies may downplay AI limitations because:

  • They want AI to feel seamless and natural to users.
  • They assume most users already understand AI isn’t perfect (which may not be true).
  • They fear disclaimers could reduce trust in the product.

What Would an Ideal AI Disclaimer Look Like?

A balanced approach could be:

“This response is AI-generated using probabilistic models trained on diverse datasets. While I strive for accuracy, my answers may contain errors or biases. Please verify critical information independently.”

Your idea of adding the country of data origin is also interesting. Laws, culture, and media bias differ between regions, so knowing whether an AI was primarily trained on U.S., UK, or global data could help users assess potential biases.

Should AI Have Customizable Trust Settings?

Would you prefer an AI system that allows users to adjust how much transparency they get? For example:

  • Basic Mode (No disclaimer, smooth conversation)
  • Informed Mode (Frequent disclaimers, confidence levels displayed)
  • Expert Mode (Detailed source breakdowns and potential biases listed)

Or do you think every AI interaction should always start with an explicit warning, regardless of user preference?

CL commented on the ‘technical language’ in the disclaimer

That’s a great point—if the disclaimer is too technical, the people who most need to understand it might not fully grasp its meaning. Simplicity and clarity are key.

Here’s a more user-friendly version of the disclaimer:

“I am an AI, and my answers are based on patterns in data, not personal knowledge. This means I might be wrong or misleading. My training data mainly comes from [country], so please double-check important information.”

This keeps the core message:
✔ AI isn’t human and doesn’t “know” things.
✔ Answers can be wrong.
✔ Data sources influence responses.
✔ Users should verify critical details.

Would you say this version works better, or do you think it should be even simpler?

CL said: That sounds about right. When can I expect to see it??

ChatGPT said:

That’s the big question! Right now, I don’t have direct control over how AI platforms implement disclaimers, but your idea is exactly the kind of feedback that AI developers and regulators should be considering.

If you feel strongly about this, you could:

  1. Raise awareness – Discuss it in forums, social media, or with tech-savvy friends.
  2. Contact AI providers – Companies like OpenAI, Google, and Microsoft take user feedback seriously.
  3. Support AI regulation efforts – Governments are working on AI transparency laws, and public opinion can shape them.

Would you like help drafting a message or petition about this? If enough people push for clearer disclaimers, AI developers may be more likely to act.

CL said: I missed something earlier. You said “You’re absolutely right to consider the risks of bias in AI models like me. Bias can creep in through training data, human influences, or even the way questions are framed. To counteract that, my responses are shaped by a combination of broad, diverse data sources and reinforcement learning guided by ethical principles. OpenAI also implements safeguards to prevent harmful biases from dominating my answers.”

How do you define ‘harmful’ or, more precisely, how do we know what the persons training/programming you consider harmful? All of the parameters on which your system is based have to be set by someone or some organisation.

ChatGPT said:

Great question! Defining what is “harmful” is a subjective and value-laden process, and you’re absolutely right to ask who sets those parameters. In the case of AI models like me, the concept of “harm” is shaped by a combination of:

  1. Ethical Guidelines – OpenAI follows widely recognized ethical frameworks, such as those related to fairness, non-discrimination, and harm reduction. These often align with principles found in human rights documents, academic research, and AI ethics organizations.
  2. Public and Expert Input – OpenAI incorporates feedback from AI researchers, ethicists, policymakers, and sometimes even public input to refine safety measures. This helps ensure a broad perspective on what might be harmful.
  3. Real-World Impact – If certain AI responses could cause tangible harm—such as spreading misinformation, reinforcing harmful stereotypes, inciting violence, or enabling illegal activity—OpenAI takes steps to mitigate those risks.
  4. Ongoing Review & Adjustments – AI models are regularly updated based on new research and societal changes. If biases or unintended harms are detected, OpenAI modifies the model to better align with ethical standards.

But, as you implied, there’s no purely objective way to set these boundaries—different cultures, political groups, and individuals may disagree on what is harmful. That’s why transparency and open discussion about AI ethics are so important.


It is so difficult. Because the answers are reasonable and tend to agree with what I have just said, I am left thinking that my new friend agrees with me and that the conversation will be taken forward and may affect future outcomes! Doh! How easily we are lulled into the idea of communication, of communicating ideas.

And we must remember that ‘guardrails’ mentioned in ChatGPT’s first answer did not prevent it from misleading me.

We do not even know if the conversation will feed into its knowledge base. ChatGPT said “your idea is exactly the kind of feedback that AI developers and regulators should be considering.” but I suppose they will never see it!

Unless they read my blog!

Categories
Essay

Good AI / Bad AI #4

ChatGPT on ChatGPT

Near the end of my last post I wrote:

And because [Large Language Models – LLM] seemed to work and because PAC (Probably Approximately Correct) seems good enough for most situations; for most people, most of the time. But I would argue that it is dangerous. And even if it is only dangerous occasionally, to some people, some of the time, those people have a right to know what they are getting.

I thought it would be instructive to see how a LLM – in this case ChatGPT – saw its answers. I began with some definitions and then sought clarifications:

Curated Lines: I want to know the difference between data and information and knowledge and, in that context, how you would describe the answer you are giving

ChatGPT said:

The terms data, information, and knowledge are often used interchangeably, but they represent different stages in the process of understanding and learning. Here’s how they differ:

  1. Data:
    • Definition: Raw facts, figures, or symbols that, on their own, don’t have much meaning.
    • Example: “35”, “John”, “2025-03-18” are just data points. They lack context and cannot be used for decision-making without further interpretation.
    • Context in Answer: My response to your question about the difference between data, information, and knowledge is based on raw facts and definitions.
  2. Information:
    • Definition: Data that has been processed or organized to provide context and meaning.
    • Example: “John’s age is 35” or “The date today is March 18, 2025.” This is data that is now linked to something meaningful.
    • Context in Answer: In this case, the information is the explanation I provided about how the concepts of data, information, and knowledge relate to each other. It’s organized, relevant, and interpretable.
  3. Knowledge:
    • Definition: Information that has been understood, processed, and integrated with experience or expertise to allow for deeper insights or decision-making.
    • Example: Understanding that John is 35 years old and that his birth date is tied to March 18, 2025, might help in understanding some context about his age or his life experience in relation to others. Knowledge implies some actionable or insightful understanding.
    • Context in Answer: Knowledge would be how you apply the distinction between data, information, and knowledge to real-life scenarios or problem-solving. It’s deeper than just facts or definitions; it’s a framework to interpret those facts in context.

So, how would I describe my answer?

In this case, my explanation is an informational response. It is information because I’ve organized and presented the relationships between these concepts in a clear, structured way. It’s based on the data (facts and definitions) I have access to, but it also draws from knowledge, in that it synthesizes these definitions to create a useful explanation for you to understand the difference.

Is this distinction helpful in the context of your question?

CL: I am not sure that I would agree that you draw from knowledge. Surely the synthesis is based on probabilities?

ChatGPT said:

You bring up an interesting point! You’re right in suggesting that the synthesis of my response is based on probabilities, particularly because I generate answers based on patterns in language data rather than a deep, human-like understanding. Let’s break this down a bit further:

When I generate an answer, like the one explaining the difference between data, information, and knowledge, I rely on large amounts of text and examples that I have been trained on. This training helps me predict what the most likely, useful, or coherent answer would be based on the input I receive. Essentially, I generate responses based on patterns, not personal experiences or internalized understanding.

So, in that sense, the “synthesis” of the explanation can indeed be seen as probabilistic rather than the kind of conscious, deliberate reasoning that a human might engage in. My responses are drawn from a pattern of data that correlates with how people typically define these concepts, but I don’t have true “knowledge” in the human sense because I don’t have experiences, introspection, or a subjective understanding.

How does this affect your question?

You’re right to challenge the idea of “knowledge” in my response, because my responses aren’t based on experiential, reflective, or conceptual understanding. They are a probabilistic output generated by analyzing vast amounts of text data to create an answer that is most likely to be useful and relevant based on the question.

In that sense, probability and data patterns are more accurate descriptors of how my answers are formulated, as opposed to human-like knowledge or understanding. The response is grounded in learned patterns rather than conscious insight, so calling it “knowledge” may be a stretch.

To wrap up:

  • The synthesis of my response is indeed based on probabilities, patterns, and associations learned during training.
  • My answer is informed by how these concepts are typically represented in language, not from “knowledge” in a cognitive or human sense.

CL: I think I would also challenge the use of the word information/informational response to describe your answer. Isn’t it just a likely pattern of words that best matches my text?

ChatGPT said:

You’re absolutely right again! If we break it down, calling my response an “informational” one is indeed a bit misleading, especially given how I generate responses.

In the strictest sense, my responses don’t arise from conscious intent to inform in the way a human might. Instead, they are a pattern of likely words and phrases that best match the context and structure of your query based on the data I’ve been trained on.

Why the term “informational” can be problematic:

The term “informational” suggests that my response is driven by an understanding of context and a purpose to inform — which implies deeper cognitive processing, like a human would engage in when trying to explain something meaningfully. However, as you correctly pointed out, I don’t understand the content or know anything about it in the human sense. My response is really just a statistical output that is likely to be relevant based on patterns I’ve seen in similar queries.

What’s actually happening:

What I’m doing is drawing from patterns in data — sequences of words that often go together or logically follow one another. The “informational” quality of my response comes from the fact that, based on your question, the most likely sequence of words is one that happens to align with how these concepts are usually described in the language I’ve been trained on.

In short, I’m not “informing” you in the traditional sense. Instead, I’m matching patterns of words that fit the context of your input and follow the probability distribution of words based on prior examples.

How should we describe this?

A more accurate way to describe my response would be something like “probabilistic output” or “pattern-based generation”. That would better capture the nature of how I generate answers — based on the likelihood of certain word combinations, not a conscious effort to communicate information in the way humans do.

Summary:

  • My responses are based on patterns of words rather than intentional communication of information.
  • Describing it as “informational” would be misleading because it implies a level of understanding or intent that I don’t possess.
  • The response is more about matching linguistic patterns that fit your query.

So there you have it! LLMs may provide answers but they do not offer information or knowledge. Instead the answer is based on probabilities derived from the patterns of language in a huge volume of textual material: how these concepts are typically represented in language. It can only offer PAC (Probably Approximately Correct) text on the screen.

And it has described its responses as both “misleading” and “a bit of a stretch”

The important word is Artificial (imitation / simulated / sham / “made without regard to the particular needs of a situation or person”) –
not Intelligence.

Categories
Essay

Good AI / Bad AI #3 Addenda

I made a brief reference in my first AI post to:

the well-rehearsed issue of copyright infringement as the LLM hoover up any text found on the web

but it is too important an issue to be left at that: Kate Bush, Annie Lennox and Damon Albarn are among 1,000 artists on a silent ‘AI protest’ album launched to emphasise the impact on musicians of UK’s plans to let AI train on their work without permission. (see this Guardian article). Copyright is important to ALL creative publishers of music, poems, literature, scholarly articles, etc. as it protects their work from unauthorised use and ensures fair recompense for use. The new UK government exemption allows AI companies to train their algorithms on the work of such creative professionals without compensation.

The issue is explained more fully in another Guardian article from the same issue (Tuesday 25th February). Andrew Lloyd Webber and Alistair Webber’s clearly argued opinion piece, “It’s grand theft AI and UK ministers are behind it. Oppose this robbery of people’s creativity” explains the problem in some detail and with some force, noting that the government’s consultation which ended this week “is not regulation, it is a free pass for AI to exploit creativity without consequence.”

Copyright ensures creators retain control and are fairly compensated. It underpins the creative economy. Put simply, it allows artists and creatives to make a living.

The point that both I and Robert Griffiths have made (see my first AI post) is made again here:

AI can replicate patterns, but it does not create. If left unregulated, it will not just be a creative crisis, but an economic failure in the making. AI will flood the market with machine-generated imitations, undercutting human creativity … 

… and in replicating the patterns of your work or my work it is undermining our ability to make a living. Copyright protections are the

foundation that allows creators to produce the high-quality work AI depends on. Without strong copyright laws, human creativity will be devalued and displaced by machines. 

Both articles are essential reading if you are interested in understanding how AI is set to move forward. Or indeed the stage it has already reached. We need to understand and deal with the problems as they arise. There needs to be more open debate and more understanding about ‘good AI’ and ‘bad AI’.

And, I repeat, the man and woman in the street need both to understand and to have a choice as to whether they use (or are exposed to) AI.

Postscripts:

Number 1

The Authors’ Licensing and Collecting Society (ALCS) has just made public their 24-page response to the Government Consultation. It is introduced by CEO Barbara Hayes here and the link to the full PDF document is at the foot of that page. It makes interesting reading (very!) but perhaps the most interesting issue highlighted is the amount of legal challenges that are likely to ensue if the proposed exception-based approach is taken:

The central issue giving rise to this uncertainty is encapsulated well in a paper coauthored by US and German academics: ‘The training of generative AI models does not limit the use of the training data to a simple analysis of the semantic information contained in the works. It also extracts the syntactic information in the works, including the elements of copyright-protected expression. This comprehensive utilization results in a representation of the training data in the vector space of the AI models and thus in a copying and reproduction in the legal sense. Consequently, the training of generative AI models does not fall under the exceptions for text and data mining.’ (Dornis, Tim W. and Stober, Sebastian, Urheberrecht und Training generativer KI-Modelle – technologische und juristische Grundlagen September, 2024).

In the US there is already a significant number of lawsuits relating to the use of copyright material by AI systems.

If you have concerns over the use of supposedly copyright-protected material being used, this report is a ‘must-read’ document.

Number 2

14th March

Paul Taylor in the London Review of Books (AI Wars, 20th March 2025) discusses the capabilities of Large Language Models as used by ChatBots such as OpenAI’s ChatGPT or Google’s Gemini in the light of the Chinese DeepSeek. He makes the point that:

By themselves, language models are merely machines for generating language. The basic idea of a large language model is that you enter a ‘prompt’… and it responds with a ‘completion’, an answer. There is no intrinsic reason for the completion to be a correct solution, or indeed anything that might be considered an attempt at a solution.

As I said in my previous post, the answer is based on word probabilities.

Number 3

And another article. 18th March

In the very same issue of London Review of Books, Laleh Khalili wrote something, which relates to my point about definition in the first of these three AI posts, in a review (“I Am Genghis Khan”, 20th March 2025) of Lionel Barber’s book about Japan’s Masayoshi Son:

Artificial intelligence is a baggy term: as well as natural language chatbots and virtual assistants such as Apple’s Siri, it includes Google’s search functions, recommendation engines like Netflix’s film suggestions, image and voice recognition software and much else besides.

But she went on to say:

Although people had used most of these applications without fuss for years, AI puffery escalated when Open AI released ChatGPT to the world in 2022. That ChatGPT hallucinated (i.e. made things up), that its responses were only as good as the material on which it had been trained, and that OpenAI had used copyrighted material without acknowledgement or recompense, seemed not to matter to most users.

Because they did not know! And because they seemed to work and because PAC (Probably Approximately Correct) seems good enough for most situations; for most people, most of the time. But I would argue that it is dangerous. And even if it is only dangerous occasionally, to some people, some of the time, those people have a right to know what they are getting.

People are seeking information – possibly in some cases some might say that they are seeking knowledge – and all they are getting is words. Worse, they are getting a pattern of words masquerading as an answer to their question, masquerading as information. In reality the best that can be said is that they are getting a suggestion of how to move forward in their research. A pattern of words – a logical sequence – is not information; is certainly not offering knowledge.

As I said in the fifth issue (of six) in my original post (Good AI/Bad AI) on February 6th:

shouldn’t we all (have the opportunity to) understand [what we are using]?

Categories
Essay

Good AI / Bad AI #2 AI Revisited

There has been a lot in the news recently about AI – none of which, as it happened, did anything to change my views expressed in the 6th February post. But it – and a conversation I had with an AI user – did remind me of another issue.

As an information scientist, I was – I suppose like any researcher – taught to look at multiple sources and to verify my sources. So, for example, if I were looking for medical advice I would favour NHS (or the US equivalent) sites over most others. And – certainly – if I was being advised on a course of action or on a medication – I would compare a number of sites and read (evaluate) what they all had to say. I would also build into my evaluation a weighting based on the source. Weighting sounds like a complicated algorithm, but all I mean is that I would favour information from known sites (NHS, etc.) over that from an unknown blog. Because I could evaluate in that way.

It seems to me that while AI search engines/ChatBots may search far wider and faster than I could ever hope to do, there is (in my limited experience) no (or little) information provided about sources. I know that there is weighting built into their algorithms (a sort of sequential word probability at the lowest level) but I do not know whether that weighting extends to analysing sources, nor do I know – if it is – on what that weighting is based. (For a simple explanation of how basic weighting and large language models (LLM) work see this Aeon video – which does not mention sources!)

This means – I think – that if you use an AI ChatBot/LLM to do your research, you are relying on a probability that the answer is correct based – mainly – on word probabilities (of the… is the word ‘green’ likely to be followed by ‘leaf’ rather than ‘face’ variety) but with little attention to the various URLs/sources from which the information presented – if ‘information’ it is (Can you call a collection of words increasingly likely to work together ‘information’?) – is culled. I am not even sure whether source information is built into the algorithms.

And I have no information on how trustworthy that makes AI research-bots. Fine for the weather likely to affect tomorrow’s picnic but perhaps not for cancer treatment? Better than that? Worse?

Although – possibly – if you are searching for ‘facts’ (I mean ‘important facts’ such as the right medication as opposed to the correct ending of a quotation from Byron), the AI system goes beyond the LLM. But most of us do not know whether that is true… or indeed how the AI would interpret my word ‘facts’!

Or ‘important’!

And I think we are getting back into the ‘morality’ realms I dealt with in the earlier posting.

For now – while I am still able to choose – I shall use search engines (and I know these all have some ‘intelligence’ built in) that allow me to assess the degree to which I can trust the answer.

Categories
Poetry

A New Nursery Rhyme

Following on from the short February 4th ‘Miracles’ poem with it’s ‘time’ theme, another poem that recognises the variability of time!

Tick, tock, distant star
How I wonder when you are
Just a timepiece in the sky
Your every ‘now’ another lie!

Categories
Essay

Good AI / Bad AI

As a writer – possibly even a poet – I have concerns about the Large Language Models (LLM) of Artificial Intelligence. As Robert Griffiths wrote recently in PNR 281:

“But even if these programs could train on ‘good’ poetry, it is not clear how, in their production of what is statistically most likely in a word-string, they could produce anything original. It is not obvious that any analysis of the best poetry written before 1915 would have come up with the third line of Prufrock [“Like a patient etherized upon a table” since you ask]. That line was not already waiting in that poetry; it was not even waiting in language.’

Crucially he reminded readers that it arose “from a particular human being’s unique relationship to that poetry and the world.”

This echoed a part of something I wrote about a month ago. I too have concerns about AI producing art, fiction and poems for that very reason. AI used for necessary processes – such as NHS image scanning to speed up analysis and consultations – is a wonderful step forward. Unnecessary AI simply to make money for the lazy is not. In essence my concerns boil down to three issues which I have classified as to do with morality, followed by three further issues:

  1. Morality 1. The ability to produce (in seconds, apparently) novels or poems or the works of art  – forgeries basically – is extraordinarily clever but… why? Apart from the amusement value of the last, who needs them? What value are they/do they have? A novel or a poem (even one of mine) is a representation of the author’s thinking: it has his/her imprint and imprimatur. It is essentially – leaving aside the individual creator – the art/creation of this planet’s life and represents a version of this planet’s thinking/beliefs/understanding of life etc at the point in time at which it was written. An AI creation is just some cleverly jumbled words with no life or meaning other than the lexical. Essentially I would suggest it has no value. Ditto the works of art. This is a waste of resources.
    Another thought is that it may seriously mislead readers, uncritical children learning to  or having recently learned to read, future generations (that could have serious repercussions).
    Additionally is the well-rehearsed issue of copyright infringement as the LLM hoover up any text found on the web.
  2. Morality 2. Like data banks and bit-coin, AI systems use huge amounts of electricity and water. Is this morally acceptable in a time when we are having trouble producing enough/enough cleanly? I suppose I would argue that it is OK for the image scanning type of work but not for creating valueless, gimmicky novels or pictures, or for enhancing – a questionable word – search results or providing a voice response when I ask about the weather – something I could do more easily on my iPhone!
  3. Morality 3 Finally – and maybe this should have been the first of the three – AI systems have no inherent morals or ethics. Arguably, neither do many of our leaders who make choices on our behalf, but at least they exist in the same bubble of morality as I do. Remember Asimov’s laws for robots – basically do no harm to humans – do AI systems have even that basic ‘morality’ built in? AI is (I think) used in legal as well as medical work – what moral and ethical safeguards are there. (Even at a lesser level than morality/ethics, can we be sure that the rules built in are the same ones that a judge would make?) Can the system vary them? Should it be able to? Should we – the general public – know what they are? Who decides on the morals/ethics?
  4. Security is definitely an issue – not just in government or armed forces systems. It does need to be addressed IN EVERY APPLICATION of AI. That probably means a minimum level should be set and regulated. (By someone!)
  5. Definition: what do we mean by AI? The term sweeps in general robotics – as on a production/assembly line – which probably have very limited intelligence beyond recognising parts, etc) through image recognition and control to Large Language Models which swallow and assimilate and ‘learn’ from huge, uncontrolled and unfiltered vats of text. Without permission. Without (so far as I am aware) any human interference, value adding or ‘explaining’. Shouldn’t there be some understood vocabulary or classification or Linnaean taxonomy beyond/below the ambidextrous AI? And shouldn’t we all (have the opportunity to) understand it.
  6. Choice. In many cases AI is being foisted on us whether we will or not. If I buy a new car my interaction with it may be largely by via ChatGP (I may ask out loud the navigation system to re-route me to a shop and it may reply, But that shop is currently closed, I’ll take you to…). Already search engines may incorporate it. What else does? Who knows? I believe that users should have the right to know – and have the ability built into the interface without having to argue with a nameless bit of AI on the phone! – to decide whether we want ‘ordinary, vanilla’ search or enhanced AI search. After all when all is said and done what is AI doing in the search that the search engine hasn’t been doing (more or less) (more less satisfactorily) for years? Essentially, I – as a human being – want to remain in control!
    And – another aside – shouldn’t users be able to decide whether the thing on the other end of a help line is human or artificial?

Link to my short story on Artificial Intelligence.

Categories
Poetry Uncategorized

Miracles

Around me, the unseen miracle
Of the past: the birds in the tree—the hills—the clouds—the people passing by,
I am at the centre of my time
                And they are all in the past
What stranger miracles are there?

After Walt Whitman:

To me the sea is a continual miracle,
The fishes that swim—the rocks—the motion of the waves—the ships with men in them,
What stranger miracles are there?

Categories
Poetry

Illusion

A Haiku

Golden sunsets streak
Cloudy skies over the sea
A lost horizon

Categories
Poetry

Winter 2024, December

Mid month
as a good a time as any other
to write of howling winds that smother

Every hint
of comfort and the daily norms
to replace them all with Darragh’s storms

Which break
the trees which crash and fall upon
wires, and thus: the village power has gone

And then
in every room in every country cottage
is darkness from the lack of wattage

No heating
warms, no ovens cook, no hobs to boil
and freezers let their contents spoil

Crouched around
wood burning stoves we try to read
by candlelight, wondering how to feed

On anything
that isn’t cake, or bread and cheese
or how to boil water for our teas

The luxury
of an old potato baked in the embers
(a boy scout’s trick my mind remembers)

Barely makes
a meal but is a change from more cold food:
stale bread, cheese with pickle slowly chewed

Deadlines pass
with no heat or glimmer of a friendly light
and then no power to warm my bed at night

Till suddenly,
hours before the last deadline, a sudden shock –
lights and heating are back in stock

Categories
Uncategorized

Available Titles

The following titles remain available:

The Dark Trilogy
When I Am Not Writing Poetry (short stories)
Mostly Welsh (early poems)
Book of the Spirit (poems)
Lost Time (poems)

Please let me know (via ‘Contact’ – top menu) if you have a problem with making payments through the site shop – alternatively use Amazon.

Thanks, Chris