<h1>Good AI / Bad AI #2 AI Revisited</h1>

<p><em>26 February 2025</em></p>

<p>There has been a lot in the news recently about AI &#8211; none of which, as it happened, did anything to change the views I expressed in the <a href="https://curatedlines.online/index.php/2025/02/06/good-ai-bad-ai-artificial-intelligence-concerns/">6th February post</a>. But it &#8211; and a conversation I had with an AI user &#8211; did remind me of another issue.</p>

<p>As an information scientist, I was &#8211; I suppose like any researcher &#8211; taught to consult multiple sources and to verify them. So, for example, if I were looking for medical advice, I would favour NHS (or the US equivalent) sites over most others. And &#8211; certainly &#8211; if I were being advised on a course of action or on a medication, I would compare a number of sites and evaluate what they all had to say. I would also build into my evaluation a weighting based on the source. Weighting sounds like a complicated algorithm, but all I mean is that I would favour information from known sites (the NHS, etc.) over that from an unknown blog. Because <strong>I could evaluate</strong> in that way.</p>

<p>It seems to me that while AI search engines/chatbots may search far wider and faster than I could ever hope to, there is (in my limited experience) little or no information provided about sources. I know that there is weighting built into their algorithms (a sort of sequential word probability at the lowest level), but I do not know whether that weighting extends to analysing sources, nor &#8211; if it does &#8211; on what it is based. (For a simple explanation of how basic weighting and large language models (LLMs) work, see this <a href="https://aeon.co/videos/why-large-language-models-are-mysterious-even-to-their-creators" target="_blank" rel="noreferrer noopener"><em>Aeon</em> video</a> &#8211; which does not mention sources!)</p>

<p>This means &#8211; I think &#8211; that if you use an AI chatbot/LLM to do your research, you are relying on a probability that the answer is correct, based mainly on word probabilities (of the &#8216;is the word &#8220;green&#8221; more likely to be followed by &#8220;leaf&#8221; than by &#8220;face&#8221;&#8217; variety), with little attention paid to the various URLs/sources from which the information presented &#8211; if &#8216;information&#8217; it is (can you call a collection of words increasingly likely to work together &#8216;information&#8217;?) &#8211; is culled. I am not even sure whether source information is built into the algorithms at all.</p>

<p>And I have no information on how trustworthy that makes AI research-bots. Fine for the weather likely to affect tomorrow&#8217;s picnic, but perhaps not for cancer treatment? Better than that? Worse?</p>

<p>Although &#8211; possibly &#8211; if you are searching for &#8216;facts&#8217; (I mean &#8216;important facts&#8217;, such as the right medication, as opposed to the correct ending of a quotation from Byron), the AI system goes beyond the LLM. But most of us do not know whether that is true&#8230; or indeed how the AI would interpret my word &#8216;facts&#8217;!</p>

<p>Or &#8216;important&#8217;!</p>

<p>And I think we are getting back into the &#8216;morality&#8217; realms I dealt with in the earlier posting.</p>

<p>For now &#8211; while I am still able to choose &#8211; I shall use search engines (and I know these all have some &#8216;intelligence&#8217; built in) that allow me to assess the degree to which I can trust the answer.</p>
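<p>(A purely illustrative aside: the &#8216;sequential word probability&#8217; idea can be sketched as a toy bigram counter. This is a deliberate oversimplification &#8211; real LLMs learn neural weights from vast corpora rather than counting raw word pairs &#8211; and, tellingly, nothing in the sketch records <em>where</em> any word came from. The little corpus below is invented for the example.)</p>

```python
from collections import Counter, defaultdict

# Tiny invented corpus: three short "sentences" run together.
corpus = (
    "the green leaf fell from the tree "
    "the green leaf turned brown "
    "a green face appeared at the window"
).split()

# Count, for each word, which words follow it and how often.
follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def next_word_probs(word):
    """Relative frequency of each continuation of `word` in the corpus."""
    counts = follow[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# In this corpus, 'green' is followed by 'leaf' twice and 'face' once,
# so 'leaf' comes out twice as probable -- with no notion of a source.
print(next_word_probs("green"))
```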