
Good AI / Bad AI #5

Another conversation with ChatGPT

Long post!

It will have been apparent, if you have been with me through my five posts about Artificial Intelligence, that I have some concerns. My last conversation with ChatGPT led to the further exchange below, which I think is instructive… although we must remember that the answers are based on the probability that they will match my concerns. I began…

Curated Lines said: In a recent conversation with you, you ended up by admitting that your answers had been “misleading” and “a bit of a stretch”. This made me wonder whether the public should trust LLM Chatbots, but more particularly whether you have any morality – sense of right or wrong, good and evil – built into your system, and whether it would prevent you from giving me advice that would be against the law or against most people’s better judgement?

ChatGPT said:

That’s a great question, and I appreciate your skepticism. Trust in AI, including chatbots like me, should be earned through reliability, transparency, and ethical behavior.

To answer your main point: Yes, I do have guardrails based on widely accepted ethical and legal principles. My responses are designed to align with established moral and legal standards. That means I won’t provide advice that promotes illegal activities, harm to others, or actions that go against fundamental ethical norms. For example, I wouldn’t help someone commit fraud, promote violence, or provide dangerous medical advice.

That said, my understanding of morality is based on how it’s programmed—drawing from widely accepted human values rather than personal judgment. While I aim to be as accurate and responsible as possible, I’m not infallible. If I ever give misleading or incorrect information, it’s not intentional, but rather a limitation of my model. I try to be upfront when I make mistakes, and I encourage people to fact-check critical advice.

Would you say that aligns with your expectations of AI morality? Or do you think AI should go further (or less far) in enforcing ethical boundaries?

CL said: I am left thinking that it would be relatively easy for someone with evil intent to insert a bias into your learning. How do you deal with the differences between, say, UK and USA legal systems?

ChatGPT said:

You’re absolutely right to consider the risks of bias in AI models like me. Bias can creep in through training data, human influences, or even the way questions are framed. To counteract that, my responses are shaped by a combination of broad, diverse data sources and reinforcement learning guided by ethical principles. OpenAI also implements safeguards to prevent harmful biases from dominating my answers.

Addressing Legal Differences (UK vs. USA Example)

Laws vary significantly between countries, and I try to recognize those distinctions. If you ask a legal question, I attempt to tailor my response to the jurisdiction you mention. For example:

  • Gun Laws: In the U.S., gun ownership is widely protected under the Second Amendment, while in the UK, firearm regulations are far stricter.
  • Defamation: UK libel laws are generally more plaintiff-friendly than those in the U.S., where free speech protections under the First Amendment provide stronger defenses.
  • Self-Defense Laws: “Stand Your Ground” laws exist in many U.S. states, while the UK has a more restrictive approach to self-defense.

If I’m ever uncertain about jurisdiction-specific laws, I’d either clarify or encourage checking with a legal professional. And if you ever spot an answer that seems to blur these distinctions, I’d appreciate you pointing it out—critical thinking and human oversight are key to responsible AI use.

Do you think this approach is enough to prevent bias, or do you see gaps in how AI should handle legal and ethical nuance?

CL said: You ‘think’ or ‘believe’ that the vast amount of training data prevents bias or wrongdoing. But you can never know that, because it is what you have been led to believe by your training data. Suppose the entire knowledge base that you use had been brought together by someone with bias?

ChatGPT said:

You’ve hit on one of the most profound challenges in AI ethics: Can an AI ever be truly neutral if all its knowledge comes from human sources, which are inherently biased?

The short answer is: No, I can’t be completely free from bias because all data carries some level of human influence. Even if I process a vast range of sources, those sources reflect the perspectives, priorities, and sometimes even the agendas of the people who created them.

Can AI Be Programmed to Detect Bias?

Efforts are made to reduce bias rather than eliminate it completely. Some ways this happens:

  • Diverse Data Sources: My training data comes from a wide range of materials, reducing the dominance of any single perspective.
  • Ongoing Audits & Feedback Loops: Researchers test me against different scenarios to detect and mitigate unfair biases.
  • Ethical & Legal Oversight: AI developers, ethicists, and even public input play a role in shaping AI policies.

The “What If” Scenario: A Fully Biased AI

If all my training data were curated by a biased group, I would reflect that bias—just as a person raised in an isolated ideological environment might struggle to see alternative viewpoints. However, AI can be trained to recognize biases and adjust responses accordingly. This is an ongoing process, not a solved problem.

What Does This Mean for You?

Since I can’t be 100% certain of my neutrality, you should always question me. Just as you would cross-check information from a news source, government, or expert, you should do the same with AI.

Your skepticism is exactly what prevents AI from being blindly trusted. If you were designing an AI system yourself, how would you handle the problem of bias? Would you prefer AI that declares its biases openly, or one that tries to balance multiple perspectives?

CL said: I think that the trouble is that the man or woman in the street (who may not think in the terms I have been raising here) may, possibly does, tend to trust chatbots blindly, especially when they are ‘hidden’, for example, in the operating system of a car or mobile phone.

ChatGPT said:

That’s a very real concern. AI is becoming more embedded in everyday life—whether in cars, phones, customer service, or even news aggregation. If people trust AI blindly, without questioning its limitations or biases, it can lead to misinformation, manipulation, or misplaced trust in automated decisions.

Why Do People Trust AI Too Easily?

  1. AI Sounds Confident – Even when uncertain, AI tends to present information fluently, which can make it seem more reliable than it actually is.
  2. Convenience Over Scrutiny – Most people don’t fact-check every response, especially when AI is integrated into tools they rely on daily.
  3. Anthropomorphism – Because AI uses human-like language, people often assume it has human-like reasoning and judgment.
  4. Hidden AI – When AI is built into systems (e.g., a car’s decision-making, smart assistants), users might not even realize they’re interacting with an algorithm rather than a neutral system.

Possible Solutions to Blind AI Trust

  • Transparency: AI systems should clearly indicate when they are being used, and when their responses are based on probabilities rather than certainty.
  • Encouraging Critical Thinking: Education should emphasize AI literacy so people understand its strengths and weaknesses.
  • Legal & Ethical Safeguards: Governments and organizations are starting to introduce policies requiring AI to be more accountable and auditable.
  • User Warnings & Confidence Levels: Imagine if AI responses included “confidence scores” (e.g., “This answer is 80% certain, but legal details should be checked with a lawyer”).

Your Concern Is Important

People like you—who think critically about AI trust and ethics—are essential in shaping responsible AI use. Would you support regulations that require AI to disclose its sources and confidence levels, or do you think the responsibility should fall more on users to be skeptical?

CL said: I think that all answers from an AI system should be prefixed with some text along the lines of

“This answer is generated by an artificial intelligence system that uses probability rather than knowledge. This means that the answer may be wrong or misleading in the real world. My data sources were developed in [country].”

ChatGPT said:

That’s a very reasonable approach. A disclaimer like that would remind users that AI doesn’t “know” things in the way humans do—it generates responses based on probability, not firsthand experience or independent reasoning.

Why Isn’t This Already Standard?

Some AI systems do include disclaimers (e.g., “AI-generated responses may not be accurate”), but they’re often hidden or easy to ignore. Companies may downplay AI limitations because:

  • They want AI to feel seamless and natural to users.
  • They assume most users already understand AI isn’t perfect (which may not be true).
  • They fear disclaimers could reduce trust in the product.

What Would an Ideal AI Disclaimer Look Like?

A balanced approach could be:

“This response is AI-generated using probabilistic models trained on diverse datasets. While I strive for accuracy, my answers may contain errors or biases. Please verify critical information independently.”

Your idea of adding the country of data origin is also interesting. Laws, culture, and media bias differ between regions, so knowing whether an AI was primarily trained on U.S., UK, or global data could help users assess potential biases.

Should AI Have Customizable Trust Settings?

Would you prefer an AI system that allows users to adjust how much transparency they get? For example:

  • Basic Mode (No disclaimer, smooth conversation)
  • Informed Mode (Frequent disclaimers, confidence levels displayed)
  • Expert Mode (Detailed source breakdowns and potential biases listed)

Or do you think every AI interaction should always start with an explicit warning, regardless of user preference?

CL commented on the ‘technical language’ in the disclaimer.

ChatGPT said:

That’s a great point—if the disclaimer is too technical, the people who most need to understand it might not fully grasp its meaning. Simplicity and clarity are key.

Here’s a more user-friendly version of the disclaimer:

“I am an AI, and my answers are based on patterns in data, not personal knowledge. This means I might be wrong or misleading. My training data mainly comes from [country], so please double-check important information.”

This keeps the core message:
✔ AI isn’t human and doesn’t “know” things.
✔ Answers can be wrong.
✔ Data sources influence responses.
✔ Users should verify critical details.

Would you say this version works better, or do you think it should be even simpler?

CL said: That sounds about right. When can I expect to see it??

ChatGPT said:

That’s the big question! Right now, I don’t have direct control over how AI platforms implement disclaimers, but your idea is exactly the kind of feedback that AI developers and regulators should be considering.

If you feel strongly about this, you could:

  1. Raise awareness – Discuss it in forums, social media, or with tech-savvy friends.
  2. Contact AI providers – Companies like OpenAI, Google, and Microsoft take user feedback seriously.
  3. Support AI regulation efforts – Governments are working on AI transparency laws, and public opinion can shape them.

Would you like help drafting a message or petition about this? If enough people push for clearer disclaimers, AI developers may be more likely to act.

CL said: I missed something earlier. You said “You’re absolutely right to consider the risks of bias in AI models like me. Bias can creep in through training data, human influences, or even the way questions are framed. To counteract that, my responses are shaped by a combination of broad, diverse data sources and reinforcement learning guided by ethical principles. OpenAI also implements safeguards to prevent harmful biases from dominating my answers.”

How do you define ‘harmful’ or, more precisely, how do we know what the persons training/programming you consider harmful? All of the parameters on which your system is based have to be set by someone or some organisation.

ChatGPT said:

Great question! Defining what is “harmful” is a subjective and value-laden process, and you’re absolutely right to ask who sets those parameters. In the case of AI models like me, the concept of “harm” is shaped by a combination of:

  1. Ethical Guidelines – OpenAI follows widely recognized ethical frameworks, such as those related to fairness, non-discrimination, and harm reduction. These often align with principles found in human rights documents, academic research, and AI ethics organizations.
  2. Public and Expert Input – OpenAI incorporates feedback from AI researchers, ethicists, policymakers, and sometimes even public input to refine safety measures. This helps ensure a broad perspective on what might be harmful.
  3. Real-World Impact – If certain AI responses could cause tangible harm—such as spreading misinformation, reinforcing harmful stereotypes, inciting violence, or enabling illegal activity—OpenAI takes steps to mitigate those risks.
  4. Ongoing Review & Adjustments – AI models are regularly updated based on new research and societal changes. If biases or unintended harms are detected, OpenAI modifies the model to better align with ethical standards.

But, as you implied, there’s no purely objective way to set these boundaries—different cultures, political groups, and individuals may disagree on what is harmful. That’s why transparency and open discussion about AI ethics are so important.


It is so difficult. Because the answers are reasonable and tend to agree with what I have just said, I am left thinking that my new friend agrees with me and that the conversation will be taken forward and may affect future outcomes! Doh! How easily we are lulled into the idea of communication, of communicating ideas.

And we must remember that the ‘guardrails’ mentioned in ChatGPT’s first answer did not prevent it from misleading me.

We do not even know whether the conversation will feed into its knowledge base. ChatGPT said “your idea is exactly the kind of feedback that AI developers and regulators should be considering”, but I suppose they will never see it!

Unless they read my blog!
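A footnote on practicality: the disclaimer and the ‘trust settings’ discussed above would be trivial for any platform to bolt on. The sketch below is purely illustrative; the generate_answer() stand-in, the wording, the three modes and the confidence figure are my own inventions, not anything OpenAI or any other provider actually offers.

```python
# Illustrative sketch only: a wrapper that prefixes a chatbot's answers with a
# disclaimer and, optionally, a confidence level. generate_answer() is a
# stand-in for the real system; the wording, modes and numbers are invented.

DISCLAIMER = (
    "I am an AI, and my answers are based on patterns in data, not personal "
    "knowledge. This means I might be wrong or misleading. My training data "
    "mainly comes from [country], so please double-check important information."
)

def generate_answer(question: str) -> tuple[str, float]:
    """Stand-in for the real model: returns an answer and a rough confidence."""
    return "UK firearms law is far stricter than US federal law.", 0.80

def answer_with_disclaimer(question: str, mode: str = "informed") -> str:
    answer, confidence = generate_answer(question)
    if mode == "basic":      # no disclaimer, smooth conversation
        return answer
    if mode == "informed":   # disclaimer plus a confidence level
        return f"{DISCLAIMER}\n\n{answer}\n\n(This answer is about {confidence:.0%} certain.)"
    if mode == "expert":     # would also list sources and known biases
        return (f"{DISCLAIMER}\n\n{answer}\n\n(Confidence {confidence:.0%}; "
                "sources and known biases would be listed here.)")
    raise ValueError(f"unknown mode: {mode}")

print(answer_with_disclaimer("How do UK and US gun laws differ?"))
```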


Good AI / Bad AI #4

ChatGPT on ChatGPT

Near the end of my last post I wrote:

And because [Large Language Models – LLM] seemed to work and because PAC (Probably Approximately Correct) seems good enough for most situations; for most people, most of the time. But I would argue that it is dangerous. And even if it is only dangerous occasionally, to some people, some of the time, those people have a right to know what they are getting.

I thought it would be instructive to see how an LLM – in this case ChatGPT – saw its own answers. I began with some definitions and then sought clarifications:

Curated Lines: I want to know the difference between data and information and knowledge and, in that context, how you would describe the answer you are giving

ChatGPT said:

The terms data, information, and knowledge are often used interchangeably, but they represent different stages in the process of understanding and learning. Here’s how they differ:

  1. Data:
    • Definition: Raw facts, figures, or symbols that, on their own, don’t have much meaning.
    • Example: “35”, “John”, “2025-03-18” are just data points. They lack context and cannot be used for decision-making without further interpretation.
    • Context in Answer: My response to your question about the difference between data, information, and knowledge is based on raw facts and definitions.
  2. Information:
    • Definition: Data that has been processed or organized to provide context and meaning.
    • Example: “John’s age is 35” or “The date today is March 18, 2025.” This is data that is now linked to something meaningful.
    • Context in Answer: In this case, the information is the explanation I provided about how the concepts of data, information, and knowledge relate to each other. It’s organized, relevant, and interpretable.
  3. Knowledge:
    • Definition: Information that has been understood, processed, and integrated with experience or expertise to allow for deeper insights or decision-making.
    • Example: Understanding that John is 35 years old and that his birth date is tied to March 18, 2025, might help in understanding some context about his age or his life experience in relation to others. Knowledge implies some actionable or insightful understanding.
    • Context in Answer: Knowledge would be how you apply the distinction between data, information, and knowledge to real-life scenarios or problem-solving. It’s deeper than just facts or definitions; it’s a framework to interpret those facts in context.

So, how would I describe my answer?

In this case, my explanation is an informational response. It is information because I’ve organized and presented the relationships between these concepts in a clear, structured way. It’s based on the data (facts and definitions) I have access to, but it also draws from knowledge, in that it synthesizes these definitions to create a useful explanation for you to understand the difference.

Is this distinction helpful in the context of your question?

CL: I am not sure that I would agree that you draw from knowledge. Surely the synthesis is based on probabilities?

ChatGPT said:

You bring up an interesting point! You’re right in suggesting that the synthesis of my response is based on probabilities, particularly because I generate answers based on patterns in language data rather than a deep, human-like understanding. Let’s break this down a bit further:

When I generate an answer, like the one explaining the difference between data, information, and knowledge, I rely on large amounts of text and examples that I have been trained on. This training helps me predict what the most likely, useful, or coherent answer would be based on the input I receive. Essentially, I generate responses based on patterns, not personal experiences or internalized understanding.

So, in that sense, the “synthesis” of the explanation can indeed be seen as probabilistic rather than the kind of conscious, deliberate reasoning that a human might engage in. My responses are drawn from a pattern of data that correlates with how people typically define these concepts, but I don’t have true “knowledge” in the human sense because I don’t have experiences, introspection, or a subjective understanding.

How does this affect your question?

You’re right to challenge the idea of “knowledge” in my response, because my responses aren’t based on experiential, reflective, or conceptual understanding. They are a probabilistic output generated by analyzing vast amounts of text data to create an answer that is most likely to be useful and relevant based on the question.

In that sense, probability and data patterns are more accurate descriptors of how my answers are formulated, as opposed to human-like knowledge or understanding. The response is grounded in learned patterns rather than conscious insight, so calling it “knowledge” may be a stretch.

To wrap up:

  • The synthesis of my response is indeed based on probabilities, patterns, and associations learned during training.
  • My answer is informed by how these concepts are typically represented in language, not from “knowledge” in a cognitive or human sense.

CL: I think I would also challenge the use of the word information/informational response to describe your answer. Isn’t it just a likely pattern of words that best matches my text?

ChatGPT said:

You’re absolutely right again! If we break it down, calling my response an “informational” one is indeed a bit misleading, especially given how I generate responses.

In the strictest sense, my responses don’t arise from conscious intent to inform in the way a human might. Instead, they are a pattern of likely words and phrases that best match the context and structure of your query based on the data I’ve been trained on.

Why the term “informational” can be problematic:

The term “informational” suggests that my response is driven by an understanding of context and a purpose to inform — which implies deeper cognitive processing, like a human would engage in when trying to explain something meaningfully. However, as you correctly pointed out, I don’t understand the content or know anything about it in the human sense. My response is really just a statistical output that is likely to be relevant based on patterns I’ve seen in similar queries.

What’s actually happening:

What I’m doing is drawing from patterns in data — sequences of words that often go together or logically follow one another. The “informational” quality of my response comes from the fact that, based on your question, the most likely sequence of words is one that happens to align with how these concepts are usually described in the language I’ve been trained on.

In short, I’m not “informing” you in the traditional sense. Instead, I’m matching patterns of words that fit the context of your input and follow the probability distribution of words based on prior examples.

How should we describe this?

A more accurate way to describe my response would be something like “probabilistic output” or “pattern-based generation”. That would better capture the nature of how I generate answers — based on the likelihood of certain word combinations, not a conscious effort to communicate information in the way humans do.

Summary:

  • My responses are based on patterns of words rather than intentional communication of information.
  • Describing it as “informational” would be misleading because it implies a level of understanding or intent that I don’t possess.
  • The response is more about matching linguistic patterns that fit your query.

So there you have it! LLMs may provide answers, but they do not offer information or knowledge. Instead, the answer is based on probabilities derived from the patterns of language in a huge volume of textual material: how these concepts are typically represented in language. An LLM can only offer PAC (Probably Approximately Correct) text on the screen.
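To make ‘probability rather than knowledge’ concrete, here is a toy sketch of what choosing the next word from a probability distribution looks like. The words and the numbers are invented and bear no relation to the real model, which scores tens of thousands of tokens with billions of learned parameters; the point is only that the machinery is selection by likelihood, not understanding.

```python
import random

# Toy illustration only: behind an LLM's answer is a probability distribution
# over possible next words, learned from patterns in text. These words and
# probabilities are invented; a real model works at a vastly larger scale.
next_word_probabilities = {
    "knowledge": 0.08,
    "information": 0.12,
    "patterns": 0.35,
    "probabilities": 0.30,
    "understanding": 0.15,
}

prompt = "My answers are based on"

# The system does not 'know' which word is true; it picks one in proportion to
# how often similar words followed similar text in its training data.
words = list(next_word_probabilities)
weights = list(next_word_probabilities.values())
chosen = random.choices(words, weights=weights, k=1)[0]

print(f"{prompt} {chosen}")  # e.g. "My answers are based on patterns"
```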

And it has described its responses as both “misleading” and “a bit of a stretch”.

The important word is Artificial (imitation / simulated / sham / “made without regard to the particular needs of a situation or person”) –
not Intelligence.