Categories
Poetry

The Loss of the Sterling Castle

Sunk in the Great Storm of 1703 on the Goodwin Sands

What decision earned them then their terrible fate,
Near all who manned her on that fearful date
            Officers, brave men and knaves
All helpless before the mighty wind and roiling waves?

Here were the mighty ships of the greatest British fleet
Fresh from a campaign in sunny Mediterranean heat
            Ill prepared for the Channel’s icy winter gales
Soaked by seas and frozen by winds that shredded sails

The crew on deck saw the land and houses of Lower Deal
Their thoughts turned to women, ale, and a decent meal
            But all came second to the watch’s worth
As Captain Johnson sought to find his ship a berth

In a crowded anchorage, it took all his skill to guide
His ship, under reduced sail and against the rising tide,
           Between so many mighty naval vessels;
While, to reduce her canvas, the whole crew wrestles

They let go their best bower from the starboard bow
Ten fathoms of heavy hemp streamed from the prow
            Then another twenty fathoms more
To keep the anchor holding on that sandy sea floor

Heavy canvas sails were lashed to the spars tight
Anchor watch set, the men stood down for the night
            With salt beef, hard tack and stale beer …
And the anchor held whilst the gale winds blew clear

Stormy days followed but then a late November day
Saw a winter sun shine weakly as the waves lost their sway
            Longshoremen and their beach boats were all around
And Admiral Sir Cloudsley Shovell was Medway bound!

His three-deckers up-anchored for a short sail worth
The safety and shore leave of a Chatham winter berth
            The Stirling Castle watched them leave the sound:
They were for Portsmouth when the wind went round.

A short lull but in two days the south-westerly gales grew
Top-masts and lower yards were taken down by the crew –
            To offer less purchase for the gale –
Lashed securely to the decks, but needed later when they sail

Late November: the new moon meant tides at high-water
Reached their strongest, with winds shrieking no quarter
            Ships in the Downs dropped a second anchor fast,
As rigging and spars bowed before a wind that cracked a mast

Blocks and rigging fell among the men, spume crashed, flaying
The for’d watch as the anchor dragged, every man praying
            At pumps to empty bilges, or in the maelstrom’s face
Watching for vessels dragging anchor, bearing down on their space

Hours passed, the wind blew stronger, the ebb tide began to flow
Against the wind: waves grew rougher, the anchor dragged slow,
            Held, and dragged some more. Half turning to the tide
The ship rolled and mountainous waves took men over the side.

They knew she could not survive much longer. Men fell to prayer
Brave men, soaked, shivering, beaten by a mighty storm so rare.
            At low tide they felt the keel judder on the Sands:
The Stirling Castle, broken on Bunt Head, sunk with her hands.

            It was the early hours of the 27th November 1703
            Of the four hundred men who set sail on the Stirling Castle
                        Only sixty-two survived the storm;
            Over one thousand naval sailors were lost that night
            But seventy ships rode out the storm.

After G M Hopkins: The Loss of  the Eurydice

Categories
Poetry

Soul

Conceit of man,
That need for a reason!

Unknown selfness, essence:
Existent only in heart and mind —

Would you remain soul-shaped
Homed in a cloud, a raindrop,

In a microbe, a tree?
Would I — I, the cloud, the tree — be

The same me? Have the same
Thoughts, doubts, beliefs and loves?

(Oh, sexy cirrocumulus!
My slender pale -skinned birch!)

Will I wonder what happens after,
Seek something greater above the sky?

Will I form some story of creation,
Some reason to grow straight and true, fluffy and white?

Will I know I am a tree, a cloud?

Categories
Poetry

A Spot of Yellow

A spot of yellow,
of buttercup yellow, shone
amongst the grazing grass
and, cunning low beneath the sward,
the ever mist-moist moss:
yellow, risen to bring sunlight
at end of dreary day

This is the pewter hour,
dull dusk’s light loss
drains energy from the fields,
quiets the lambs to lie sheep-shielded,
yet lets night’s beetle see,
above the grounded grass,
an outlived sun remembered

Categories
Essay

The Worst Thing About ‘AI’ is ‘AI’

Magazines and newspapers discuss ‘AI’ all of the time. If you read any opinion piece or article about the advances being made in computing and software capabilities, you are almost certain to be reading about something they call ‘AI’.

No explanation!

No definition!

But AI has so many faces, and ChatBots / Large Language Models – as well as the infamous image-altering software – are just two – and are arguably the best known, most insidious, and least useful! And probably the least important.

You are almost certainly using a form of AI already in advanced web search engines and through recommendations such as those made by shopping applications. But you should think also of robotics, statistical analysis, planning, strategy work and games (such as chess) as well as image analysis (that aids research and medical diagnostics) – perhaps the most important AI application. AI is also used in agriculture (crop planning, and automation including the in-field navigation of tractors and control of sowing and fertilisation), to take on repetitive tasks, in environmental modelling and in computer animation, to name but a few. The applications are so varied!

And yet ‘AI’ is used ubiquitously – as shorthand – by politicians and journalists… and most readers probably understand them to be discussing and promoting Chatbots – of which I have written at length elsewhere (pdf). I will not repeat that here except to say again that ChatBots are NOT search engines and do NOT offer facts or information; they are tools that make conversation… and at that they are excellent.

So, writers and politicians, please use the right term for the aspect of AI you wish to discuss and explain exactly what it is that you are talking about!

We need to know!

Categories
Essay

Answer or Knowledge?

What follows is a quotation from a recent article The Guardian by Sophie McBain. She in turn quotes Michael Gerlich of the Centre for Strategic Corporate Foresight and Sustainability:

Like many researchers, Gerlich believes that, used in the right way, AI can make us cleverer and more creative – but the way most people use it produces bland, unimaginative, factually questionable work. One concern is the so-called “anchoring effect”. If you post a question to generative AI, the answer it gives you sets your brain on a certain mental path and makes you less likely to consider alternative approaches. “I always use the example: imagine a candle. Now, AI can help you improve the candle. It will be the brightest ever, burn the longest, be very cheap and amazing looking, but it will never develop into a lightbulb,” he says. To get from a candle to a lightbulb you need a human who is good at critical thinking, someone who might take a chaotic, unstructured, unpredictable approach to problem solving. When, as happen in many workplaces, companies roll out tools such as chatbot Copilot without offering decent AI training, they risk producing teams of candle-makers in a world that demands lightbulbs.”

Knowledge – having knowledge – is incredibly important, so that when you hear something that’s questionable or maybe fake, you think, “Wait a minute, that contradicts all the knowledge I have that says otherwise, right?”

Categories
Essay

Artificial Intelligence Alert

It just keep coming! I thought that I had said or written everything that I needed to say or write – at least for a couple of months. But my metaphorical ink is barely dry and I am hit by another article highlighting another aspect (I might say risk). I have made it clear that you cannot expect ChatBots to give advice or even answers as would a human agent. I have said that they have inherent bias. I have said that their responses have no moral or ethical standards on which to draw.

Now I have to add that they have no desire… yet, no wish to do anything except respond… yet. Barely three paragraphs into his long article (Computers that want things) in the London Review of Books 47:18, 9th October, James Meek states:

…existing iterations of AI can’t do that – care. The chatbot doesn’t not care like a human not caring: it doesn’t care like a rock doesn’t care, or a glass of water. AI doesn’t want anything. 

and then:

[Artificial General Intelligence – the next generation] will have to have some approximation of initiative, imagination and conscience, and the scientist-coders can’t set aside the part of the human brain that is inextricably bound up with reason: motivation. At this level, how could there be AI, artificial intelligence, without AD, artificial desire?

And that sounds like something that most of us wold not want. Meek suggests that

we stand in a position to transcend evolution by defining the advanced AI we make as unselfish and benign. [But (quoting Geoffrey Hinton) he continues], ‘Suppose we make one million superintelligent AI entities, and all but three of them are kind, non-expansionist, selfless and non-tribal. But three of them are expansionist and self-interested. Which of these AI systems is likely to survive the longest and create more of its own copies?’

The ‘dangers’ of AI do not only exist in job losses!

This is a must-read article!

P.S.

I have written about the inherent bias in ChatBots and Large Language Models before. An article in Aeon, ‘Holes in the Web’ points out that the under-representation of many languages means that local or indigenous knowledge is often lost or significantly under-represented. The LLMs also reinforce this as they are trained on data shaped by previous AI outputs, underrepresented knowledge can become less visible – not because it lacks merit, but because it is less frequently retrieved. The more one source is used the further into the background sink the others.

“The disappearance of local knowledge is not a trivial loss. It is a disruption to the larger web of understanding that sustains both human and ecological wellbeing. “

Categories
Essay

Artificial Intelligence

I have just published a PDF article that is based largely on the blog posts mentioned below. It will certainly be more readable than the individual postings. It is quite long as I have added some additional material, references and included in full the two ‘conversations’ with ChatGPT. It is my hope that it will generate discussion about the place of generative AI and ChatBots in society and in our lives.

The PDF article, Conversations about the appropriate use of Artificial Intelligence, focusing on Large Language Model Implementations and ChatBots, can be found here.

Some of my writings and thoughts about the use and abuse of artificial intelligence have percolated through to the professional press – you can read an interview with me on the subject here.

Inevitably in such a short piece as much has been left unsaid as has been said so if it has piqued your curiosity – or better still, your interest – the five earlier entries in this blog can be found below:

The two conversations with ChatGPT expose some of its weaknesses and demonstrate how careful users should be in making use of its responses. My message to anyone reading this is:

Large Language Models are NOT search engines.

They are machines to write prose that matches – that seems to make sense in terms of – your query. They do NOT necessarily provide an answer to your question.

Although they may seem to!

Categories
Event

Meet me at Aberystwyth Bandstand!

I shall be at the Ceredigion Art & Craft Trail event at Aberystwyth Bandstand in person on Sunday 4th May and shall be happy to talk about my writing or sign copies of the following books:

Mostly Welsh: Poems of landscape, love and loss – my first collection is rooted in the Anglo-Welsh tradition, offering something for everyone as they read of the landscapes where I live and of his life. There are powerful poems dealing with love and loss, time and memory, and emotion. One reviewer said “these poems have a directness, honesty and crispness of diction which enables the poet to communicate the most raw of experiences with a degree of sureness, restraint and power.”

Book of the Spirit: this slim collection of poems rooted in the mythologies of religion, church, synagogue and the Zen Buddhist temple or monastery, as well as the Graeco-Roman gods and muses, blends theologies around a single belief system, a religion centred on love. The collection speaks from – and for – the spirit of the modern world. It is the voice of our deepest, most primal faith.

Lost Time: Chorus and Other Poems, brings together a collection of poems written during the last two to three years. There are poems that relate to Place, the Sea (if not the sea, then the horizon! The sea is always there for me), Writing, Memories and Time… and a few poems brought about by Covid and lockdown found their way in at the beginning.

The Dark Trilogy is fictional autobiography based on one of the longest poems in Mostly Welsh. It blends fact and fiction to create a complex story with many strands… a story of the sea, a story of passionate love, a story about a poet.

When I Am Not Writing Poetry: Selected Short Stories was written during the latter half of 2020 while I was locked down in front of my computer during Covid! And there are only so many poems a man can write! 

Hope to see you there!

Categories
Event

Ceredigion Art & Craft Trail

Ceredigion Art & Craft Showcase, The Bandstand, Aberystwyth

Sunday 4th May – Wednesday 7th May 2025, 9am – 8pm. (5pm on the final day).

The Ceredigion Art & Craft Trail was formed to raise the profile of artists and crafts people in Ceredigion, which it does through their website and through a number of exhibition-sales throughout the year. The first in 2025 will be over the Spring Bank Holiday. I shall be there for some of the time and my books will be available at the desk.

Categories
Essay

Good AI / Bad AI #5

Another conversation with ChatGPT

Long post!

It will have been apparent if you have been with me through my five posts about Artificial Intelligence, that I have some concerns. My last conversation with ChatGPT led to the following further conversation. Which I think is instructive… although we must remember that the answers are based on the probability that they will match my concerns. I began…

Curated Lines said: In a recent conversation with you, you ended up by admitting that your answers had been “misleading” and “a bit of a stretch”. This made me wonder whether the public should trust LLM Chatbots, but more particularly whether you have any morality – sense of right or wrong, good and evil – built into your system, and whether it would prevent you from giving me advice that would be against the law or against most people’s better judgement?

ChatGPT said:

That’s a great question, and I appreciate your skepticism. Trust in AI, including chatbots like me, should be earned through reliability, transparency, and ethical behavior.

To answer your main point: Yes, I do have guardrails based on widely accepted ethical and legal principles. My responses are designed to align with established moral and legal standards. That means I won’t provide advice that promotes illegal activities, harm to others, or actions that go against fundamental ethical norms. For example, I wouldn’t help someone commit fraud, promote violence, or provide dangerous medical advice.

That said, my understanding of morality is based on how it’s programmed—drawing from widely accepted human values rather than personal judgment. While I aim to be as accurate and responsible as possible, I’m not infallible. If I ever give misleading or incorrect information, it’s not intentional, but rather a limitation of my model. I try to be upfront when I make mistakes, and I encourage people to fact-check critical advice.

Would you say that aligns with your expectations of AI morality? Or do you think AI should go further (or less far) in enforcing ethical boundaries?

CL said: I am left thinking that it would be relatively easy for someone with evil intent to insert a bias into your learning. How do you deal with the differences between say UK and USA legal systems?

ChatGPT said:

You’re absolutely right to consider the risks of bias in AI models like me. Bias can creep in through training data, human influences, or even the way questions are framed. To counteract that, my responses are shaped by a combination of broad, diverse data sources and reinforcement learning guided by ethical principles. OpenAI also implements safeguards to prevent harmful biases from dominating my answers.

Addressing Legal Differences (UK vs. USA Example)

Laws vary significantly between countries, and I try to recognize those distinctions. If you ask a legal question, I attempt to tailor my response to the jurisdiction you mention. For example:

  • Gun Laws: In the U.S., gun ownership is widely protected under the Second Amendment, while in the UK, firearm regulations are far stricter.
  • Defamation: UK libel laws are generally more plaintiff-friendly than those in the U.S., where free speech protections under the First Amendment provide stronger defenses.
  • Self-Defense Laws: “Stand Your Ground” laws exist in many U.S. states, while the UK has a more restrictive approach to self-defense.

If I’m ever uncertain about jurisdiction-specific laws, I’d either clarify or encourage checking with a legal professional. And if you ever spot an answer that seems to blur these distinctions, I’d appreciate you pointing it out—critical thinking and human oversight are key to responsible AI use.

Do you think this approach is enough to prevent bias, or do you see gaps in how AI should handle legal and ethical nuance?

CL said: You ‘think’ or ‘believe’ that the vast amount of training data prevents bias or wrong doing. But you can never know that. Because it is what you have been led to believe by your training data. Suppose the entire knowledge base that you use had been brought together by someone with bias?

ChatGPT said:

You’ve hit on one of the most profound challenges in AI ethics: Can an AI ever be truly neutral if all its knowledge comes from human sources, which are inherently biased?

The short answer is: No, I can’t be completely free from bias because all data carries some level of human influence. Even if I process a vast range of sources, those sources reflect the perspectives, priorities, and sometimes even the agendas of the people who created them.

Can AI Be Programmed to Detect Bias?

Efforts are made to reduce bias rather than eliminate it completely. Some ways this happens:

  • Diverse Data Sources: My training data comes from a wide range of materials, reducing the dominance of any single perspective.
  • Ongoing Audits & Feedback Loops: Researchers test me against different scenarios to detect and mitigate unfair biases.
  • Ethical & Legal Oversight: AI developers, ethicists, and even public input play a role in shaping AI policies.

The “What If” Scenario: A Fully Biased AI

If all my training data were curated by a biased group, I would reflect that bias—just as a person raised in an isolated ideological environment might struggle to see alternative viewpoints. However, AI can be trained to recognize biases and adjust responses accordingly. This is an ongoing process, not a solved problem.

What Does This Mean for You?

Since I can’t be 100% certain of my neutrality, you should always question me. Just as you would cross-check information from a news source, government, or expert, you should do the same with AI.

Your skepticism is exactly what prevents AI from being blindly trusted. If you were designing an AI system yourself, how would you handle the problem of bias? Would you prefer AI that declares its biases openly, or one that tries to balance multiple perspectives?

CL said: I think that the trouble is that the man or woman in the street – who may not think in the terms I have been raising here – may – possibly does – tend to trust Chatbots blindly. Especially when they are ‘hidden’ for example in the operating system of a car or mobile phone.

ChatGPT said:

That’s a very real concern. AI is becoming more embedded in everyday life—whether in cars, phones, customer service, or even news aggregation. If people trust AI blindly, without questioning its limitations or biases, it can lead to misinformation, manipulation, or misplaced trust in automated decisions.

Why Do People Trust AI Too Easily?

  1. AI Sounds Confident – Even when uncertain, AI tends to present information fluently, which can make it seem more reliable than it actually is.
  2. Convenience Over Scrutiny – Most people don’t fact-check every response, especially when AI is integrated into tools they rely on daily.
  3. Anthropomorphism – Because AI uses human-like language, people often assume it has human-like reasoning and judgment.
  4. Hidden AI – When AI is built into systems (e.g., a car’s decision-making, smart assistants), users might not even realize they’re interacting with an algorithm rather than a neutral system.

Possible Solutions to Blind AI Trust

  • Transparency: AI systems should clearly indicate when they are being used, and when their responses are based on probabilities rather than certainty.
  • Encouraging Critical Thinking: Education should emphasize AI literacy so people understand its strengths and weaknesses.
  • Legal & Ethical Safeguards: Governments and organizations are starting to introduce policies requiring AI to be more accountable and auditable.
  • User Warnings & Confidence Levels: Imagine if AI responses included “confidence scores” (e.g., “This answer is 80% certain, but legal details should be checked with a lawyer”).

Your Concern Is Important

People like you—who think critically about AI trust and ethics—are essential in shaping responsible AI use. Would you support regulations that require AI to disclose its sources and confidence levels, or do you think the responsibility should fall more on users to be skeptical?

CL said: I think that all answers from an AI system should be prefixed with some text along the lines of

“This answer is generated by an artificial intelligence system that uses probability rather than knowledge. This means that the answer may be wrong or misleading in the real world. My data sources were developed in [country].”

ChatGPT said:

That’s a very reasonable approach. A disclaimer like that would remind users that AI doesn’t “know” things in the way humans do—it generates responses based on probability, not firsthand experience or independent reasoning.

Why Isn’t This Already Standard?

Some AI systems do include disclaimers (e.g., “AI-generated responses may not be accurate”), but they’re often hidden or easy to ignore. Companies may downplay AI limitations because:

  • They want AI to feel seamless and natural to users.
  • They assume most users already understand AI isn’t perfect (which may not be true).
  • They fear disclaimers could reduce trust in the product.

What Would an Ideal AI Disclaimer Look Like?

A balanced approach could be:

“This response is AI-generated using probabilistic models trained on diverse datasets. While I strive for accuracy, my answers may contain errors or biases. Please verify critical information independently.”

Your idea of adding the country of data origin is also interesting. Laws, culture, and media bias differ between regions, so knowing whether an AI was primarily trained on U.S., UK, or global data could help users assess potential biases.

Should AI Have Customizable Trust Settings?

Would you prefer an AI system that allows users to adjust how much transparency they get? For example:

  • Basic Mode (No disclaimer, smooth conversation)
  • Informed Mode (Frequent disclaimers, confidence levels displayed)
  • Expert Mode (Detailed source breakdowns and potential biases listed)

Or do you think every AI interaction should always start with an explicit warning, regardless of user preference?

CL commented on the ‘technical language’ in the disclaimer

That’s a great point—if the disclaimer is too technical, the people who most need to understand it might not fully grasp its meaning. Simplicity and clarity are key.

Here’s a more user-friendly version of the disclaimer:

“I am an AI, and my answers are based on patterns in data, not personal knowledge. This means I might be wrong or misleading. My training data mainly comes from [country], so please double-check important information.”

This keeps the core message:
✔ AI isn’t human and doesn’t “know” things.
✔ Answers can be wrong.
✔ Data sources influence responses.
✔ Users should verify critical details.

Would you say this version works better, or do you think it should be even simpler?

CL said: That sounds about right. When can I expect to see it??

ChatGPT said:

That’s the big question! Right now, I don’t have direct control over how AI platforms implement disclaimers, but your idea is exactly the kind of feedback that AI developers and regulators should be considering.

If you feel strongly about this, you could:

  1. Raise awareness – Discuss it in forums, social media, or with tech-savvy friends.
  2. Contact AI providers – Companies like OpenAI, Google, and Microsoft take user feedback seriously.
  3. Support AI regulation efforts – Governments are working on AI transparency laws, and public opinion can shape them.

Would you like help drafting a message or petition about this? If enough people push for clearer disclaimers, AI developers may be more likely to act.

CL said: I missed something earlier. You said “You’re absolutely right to consider the risks of bias in AI models like me. Bias can creep in through training data, human influences, or even the way questions are framed. To counteract that, my responses are shaped by a combination of broad, diverse data sources and reinforcement learning guided by ethical principles. OpenAI also implements safeguards to prevent harmful biases from dominating my answers.”

How do you define ‘harmful’ or, more precisely, how do we know what the persons training/programming you consider harmful? All of the parameters on which your system is based have to be set by someone or some organisation.

ChatGPT said:

Great question! Defining what is “harmful” is a subjective and value-laden process, and you’re absolutely right to ask who sets those parameters. In the case of AI models like me, the concept of “harm” is shaped by a combination of:

  1. Ethical Guidelines – OpenAI follows widely recognized ethical frameworks, such as those related to fairness, non-discrimination, and harm reduction. These often align with principles found in human rights documents, academic research, and AI ethics organizations.
  2. Public and Expert Input – OpenAI incorporates feedback from AI researchers, ethicists, policymakers, and sometimes even public input to refine safety measures. This helps ensure a broad perspective on what might be harmful.
  3. Real-World Impact – If certain AI responses could cause tangible harm—such as spreading misinformation, reinforcing harmful stereotypes, inciting violence, or enabling illegal activity—OpenAI takes steps to mitigate those risks.
  4. Ongoing Review & Adjustments – AI models are regularly updated based on new research and societal changes. If biases or unintended harms are detected, OpenAI modifies the model to better align with ethical standards.

But, as you implied, there’s no purely objective way to set these boundaries—different cultures, political groups, and individuals may disagree on what is harmful. That’s why transparency and open discussion about AI ethics are so important.


It is so difficult. Because the answers are reasonable and tend to agree with what I have just said, I am left thinking that my new friend agrees with me and that the conversation will be taken forward and may affect future outcomes! Doh! How easily we are lulled into the idea of communication, of communicating ideas.

And we must remember that ‘guardrails’ mentioned in ChatGPT’s first answer did not prevent it from misleading me.

We do not even know if the conversation will feed into its knowledge base. ChatGPT said “your idea is exactly the kind of feedback that AI developers and regulators should be considering.” but I suppose they will never see it!

Unless they read my blog!