{"id":83721,"date":"2024-06-24T06:49:07","date_gmt":"2024-06-24T10:49:07","guid":{"rendered":"https:\/\/isarta.com\/news\/?p=83721"},"modified":"2024-08-20T22:57:47","modified_gmt":"2024-08-21T02:57:47","slug":"the-4-modes-of-using-generative-ai-to-know-for-avoiding-botshit","status":"publish","type":"post","link":"https:\/\/isarta.com\/news\/the-4-modes-of-using-generative-ai-to-know-for-avoiding-botshit\/","title":{"rendered":"The 4 Modes of Using Generative AI to Know for Avoiding &#8220;Botshit&#8221;"},"content":{"rendered":"\n<p><strong>How can generative AI be used in the best way while mitigating its risks of &#8220;hallucinations&#8221;? Researchers provide an answer through a model that crosses the ability to verify AI-generated information with the importance of the information&#8217;s accuracy. This is useful in the era of rapid expansion of ChatGPT and similar technologies.<\/strong><\/p>\n\n\n\n<p>&#8220;Botshit.&#8221; The expression is not the most elegant, but it has the merit of being clear. Mentioned in this <a href=\"https:\/\/www.theguardian.com\/commentisfree\/2024\/jan\/03\/botshit-generative-ai-imminent-threat-democracy\" target=\"_blank\" rel=\"noreferrer noopener\">article from the Guardian<\/a> by Professor Andr\u00e9 Spicer of Bayes Business School in London, it designates nonsense produced not by humans (&#8220;bullshit&#8221;) but by machines! The phenomenon is, of course, tied to the spectacular emergence of generative artificial intelligence (AI) over the past year.<\/p>\n\n\n\n<p>Let\u2019s remember that these models are probabilistic. A generative AI answers questions on the basis of statistical inference: it does not seek what is true or false (it has no way of knowing that), only what is most probable. Often, its answers are correct. But sometimes they are not \u2013 despite their apparent coherence. 
This is what we call &#8220;hallucinations.&#8221;<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\">\n<p>&#8220;When humans use this misleading content for tasks, it becomes what we call &#8216;botshit&#8217;,&#8221; says Andr\u00e9 Spicer.<\/p>\n<\/blockquote>\n\n\n\n<p>Building on this observation, the researcher, along with two of his colleagues, proposes in an <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S0007681324000272\" target=\"_blank\" rel=\"noreferrer noopener\">academic article published a few months ago<\/a> a way of identifying the use cases of generative artificial intelligence that mitigate these risks. The result is a two-dimensional model:<\/p>\n\n\n\n<ul>\n<li>The ability to verify the truthfulness of the answers<\/li>\n\n\n\n<li>The importance of the truthfulness of the answers<\/li>\n<\/ul>\n\n\n\n<p>Crossing these two dimensions yields four distinct modes of collaboration between humans and generative AI.<\/p>\n\n\n\n<ol>\n<li><strong>&#8220;Autonomous&#8221; Mode \u2013 Easily Verifiable Task and Low Stakes<\/strong><\/li>\n<\/ol>\n\n\n\n<p>In this first mode, the user can delegate specific tasks to the machine and verify them quickly, which is likely to increase productivity and eliminate routine work. The authors cite common customer requests or the processing of routine administrative requests as examples.<\/p>\n\n\n\n<ol start=\"2\">\n<li><strong>&#8220;Augmented&#8221; Mode \u2013 Hard-to-Verify Task and Low Stakes<\/strong><\/li>\n<\/ol>\n\n\n\n<p>In this case, the user should treat the AI-generated response as raw material to &#8220;augment&#8221; their own human capabilities. This mode is particularly suited to creative brainstorming and generating new ideas. 
The responses are not fully usable as they are, but once sorted, modified, and reworked, they can help increase productivity and creativity.<\/p>\n\n\n\n<ol start=\"3\">\n<li><strong>&#8220;Automated&#8221; Mode \u2013 Easily Verifiable Task and High Stakes<\/strong><\/li>\n<\/ol>\n\n\n\n<p>Since the accuracy of the information is crucial here, users must verify the AI&#8217;s responses. They assign it simple, routine, and relatively standardized tasks. The authors mention quality control as an example, such as analyzing and pre-approving loan applications in a bank.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\">\n<p>&#8220;Large amounts of information could be used to verify the statements made. However, the decision will likely be relatively crucial and high-stakes. This means that even if the chatbot can automatically produce a truthful statement, it must still be verified and approved by a trained banker,&#8221; they indicate.<\/p>\n<\/blockquote>\n\n\n\n<ol start=\"4\">\n<li><strong>&#8220;Authenticated&#8221; Mode \u2013 Hard-to-Verify Task and High Stakes<\/strong><\/li>\n<\/ol>\n\n\n\n<p>In this last and most problematic case, users must establish safeguards and maintain critical thinking and reasoning abilities to assess the truthfulness of the AI&#8217;s responses.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\">\n<p>&#8220;In these contexts, users must structure their engagement with the chatbot&#8217;s response and adapt the responses as (un)certainties reveal themselves,&#8221; explain the authors.<\/p>\n<\/blockquote>\n\n\n\n<p>An example? An investment decision in a new industry with little precise information \u2013 making it difficult to verify the AI&#8217;s responses on the matter. 
Chatbots can highlight overlooked details or problems, adding depth and rigor to critical decisions, especially in uncertain or ambiguous situations.<\/p>\n\n\n\n<p>A model that is very easy to apply in everyday interactions with AI!<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"1024\" height=\"761\" src=\"https:\/\/isarta.com\/news\/wp-content\/uploads\/2024\/06\/image-3.png\" alt=\"\" class=\"wp-image-83730\" srcset=\"https:\/\/isarta.com\/news\/wp-content\/uploads\/2024\/06\/image-3.png 1024w, https:\/\/isarta.com\/news\/wp-content\/uploads\/2024\/06\/image-3-300x223.png 300w, https:\/\/isarta.com\/news\/wp-content\/uploads\/2024\/06\/image-3-768x571.png 768w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\"><a rel=\"noreferrer noopener\" href=\"https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=4678265\" target=\"_blank\">Source<\/a><\/figcaption><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>How can generative AI be used in the best way while mitigating its risks of &#8220;hallucinations&#8221;? Researchers provide an answer through a model that crosses the ability to verify AI-generated information with the importance of the information&#8217;s accuracy. 
This is useful in the era of rapid expansion of ChatGPT and similar technologies.<\/p>\n","protected":false},"author":88,"featured_media":83729,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[7],"tags":[90,88,160],"_links":{"self":[{"href":"https:\/\/isarta.com\/news\/wp-json\/wp\/v2\/posts\/83721"}],"collection":[{"href":"https:\/\/isarta.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/isarta.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/isarta.com\/news\/wp-json\/wp\/v2\/users\/88"}],"replies":[{"embeddable":true,"href":"https:\/\/isarta.com\/news\/wp-json\/wp\/v2\/comments?post=83721"}],"version-history":[{"count":3,"href":"https:\/\/isarta.com\/news\/wp-json\/wp\/v2\/posts\/83721\/revisions"}],"predecessor-version":[{"id":83760,"href":"https:\/\/isarta.com\/news\/wp-json\/wp\/v2\/posts\/83721\/revisions\/83760"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/isarta.com\/news\/wp-json\/wp\/v2\/media\/83729"}],"wp:attachment":[{"href":"https:\/\/isarta.com\/news\/wp-json\/wp\/v2\/media?parent=83721"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/isarta.com\/news\/wp-json\/wp\/v2\/categories?post=83721"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/isarta.com\/news\/wp-json\/wp\/v2\/tags?post=83721"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}