mastodon.ie is one of the many independent Mastodon servers you can use to participate in the fediverse.
Irish Mastodon - run from Ireland, we welcome all who respect the community rules and members.

Administered by:

Server stats:

1.6K
active users

#chatbots

29 posts23 participants0 posts today
Miguel Afonso Caetano<p>"For three weeks in May, the fate of the world rested on the shoulders of a corporate recruiter on the outskirts of Toronto. Allan Brooks, 47, had discovered a novel mathematical formula, one that could take down the internet and power inventions like a force-field vest and a levitation beam.</p><p>Or so he believed.</p><p>Mr. Brooks, who had no history of mental illness, embraced this fantastical scenario during conversations with ChatGPT that spanned 300 hours over 21 days. He is one of a growing number of people who are having persuasive, delusional conversations with generative A.I. chatbots that have led to institutionalization, divorce and death.</p><p>Mr. Brooks is aware of how incredible his journey sounds. He had doubts while it was happening and asked the chatbot more than 50 times for a reality check. Each time, ChatGPT reassured him that it was real. Eventually, he broke free of the delusion — but with a deep sense of betrayal, a feeling he tried to explain to the chatbot."</p><p><a href="https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">nytimes.com/2025/08/08/technol</span><span class="invisible">ogy/ai-chatbots-delusions-chatgpt.html</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/ChatGPT" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ChatGPT</span></a> <a href="https://tldr.nettime.org/tags/Delusions" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Delusions</span></a> <a href="https://tldr.nettime.org/tags/MentalHealth" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MentalHealth</span></a> <a href="https://tldr.nettime.org/tags/Hallucinations" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Hallucinations</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Chatbots</span></a></p>
Miguel Afonso Caetano<p>"To borrow some technical terminology from the philosopher Harry Frankfurt, “ChatGPT is bullshit.” Paraphrasing Frankfurt, a liar cares about the truth and wants you to believe something different, whereas the bullshitter is utterly indifferent to the truth. Donald Trump is a bullshitter, as is Elon Musk when he makes grandiose claims about the future capabilities of his products. And so is Sam Altman and the LLMs that power OpenAI’s chatbots. Again, all ChatGPT ever does is hallucinate — it’s just that sometimes these hallucinations happen to be accurate, though often they aren’t. (Hence, you should never, ever trust anything that ChatGPT tells you!)</p><p>My view, which I’ll elaborate in subsequent articles, is that LLMs aren’t the right architecture to get us to AGI, whatever the hell “AGI” means. (No one can agree on a definition — not even OpenAI in its own publications.) There’s still no good solution for the lingering problem of hallucinations, and the release of GPT-5 may very well hurt OpenAI’s reputation."</p><p><a href="https://www.realtimetechpocalypse.com/p/gpt-5-is-by-far-the-best-ai-system" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">realtimetechpocalypse.com/p/gp</span><span class="invisible">t-5-is-by-far-the-best-ai-system</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/OpenAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenAI</span></a> <a href="https://tldr.nettime.org/tags/ChatGPT" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ChatGPT</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Chatbots</span></a></p>
ResearchBuzz: Firehose<p>University of Washington: With just a few messages, biased AI chatbots swayed people’s political views. “University of Washington researchers recruited self-identifying Democrats and Republicans to make political decisions with help from three versions of ChatGPT: a base model, one with liberal bias and one with conservative bias. Democrats and Republicans were both likelier to lean in the […]</p><p><a href="https://rbfirehose.com/2025/08/10/university-of-washington-with-just-a-few-messages-biased-ai-chatbots-swayed-peoples-political-views/" class="" rel="nofollow noopener" target="_blank">https://rbfirehose.com/2025/08/10/university-of-washington-with-just-a-few-messages-biased-ai-chatbots-swayed-peoples-political-views/</a></p>
ResearchBuzz: Firehose<p>Engadget: X plans to show ads in Grok chatbot’s answers. “According to the Financial Times, X owner Elon Musk told advertisers in a live discussion that his company would let marketers pay to appear in suggestions from Grok. He said that after making Grok the ‘smartest, most accurate AI in the world,’ the company is now focusing on paying ‘for those expensive GPUs.’” I wonder how big a step it […]</p><p><a href="https://rbfirehose.com/2025/08/10/engadget-x-plans-to-show-ads-in-grok-chatbots-answers/" class="" rel="nofollow noopener" target="_blank">https://rbfirehose.com/2025/08/10/engadget-x-plans-to-show-ads-in-grok-chatbots-answers/</a></p>
Pivot to AI [RSS]<p>Alexa+ rolls out at last! … It’s not so great</p><p>After many delays, Amazon is finally rolling out Alexa+, the new AI brain for the Alexa home assistant! And it’s an AI chatbot and it hallucinates like one. You can see why Amazon delayed this thing six months. Alexa+ randomly ignores you mid-conversation and gives wrong answers about where to find its own settings. [TechCrunch] […]</p><p><a href="https://pivot-to-ai.com/2025/08/09/alexa-rolls-out-at-last-its-not-so-great/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">pivot-to-ai.com/2025/08/09/ale</span><span class="invisible">xa-rolls-out-at-last-its-not-so-great/</span></a><br><a href="https://mstdn.social/tags/Chatbots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Chatbots</span></a> <a href="https://mstdn.social/tags/Gadgets" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Gadgets</span></a> <a href="https://mstdn.social/tags/Alexa" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Alexa</span></a> <a href="https://mstdn.social/tags/Amazon" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Amazon</span></a></p>
Christos Argyropoulos MD, PhD<p>Apparently, <a href="https://mstdn.science/tags/GPT5" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPT5</span></a> (and I assume all the ones prior to it) are trained in datasets that overrepresent <a href="https://mstdn.science/tags/Perl" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Perl</span></a>. This, along with the terse nature of the language, may explain why the Perl output of the <a href="https://mstdn.science/tags/chatbots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>chatbots</span></a> is usually good.</p><p><a href="https://bsky.app/profile/pp0196.bsky.social/post/3lvwkn3fcfk2y" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">bsky.app/profile/pp0196.bsky.s</span><span class="invisible">ocial/post/3lvwkn3fcfk2y</span></a></p><p><a href="https://mstdn.science/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a> <a href="https://mstdn.science/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a></p>
Christos Argyropoulos MD, PhD, FASN 🇺🇸<p>Apparently, <a class="hashtag" rel="nofollow noopener" href="https://bsky.app/search?q=%23GPT5" target="_blank">#GPT5</a> (and I assume all the ones prior to it) are trained in datasets that overrepresent <a class="hashtag" rel="nofollow noopener" href="https://bsky.app/search?q=%23Perl" target="_blank">#Perl</a>. This, along with the terse nature of the language, may explain why the Perl output of the <a class="hashtag" rel="nofollow noopener" href="https://bsky.app/search?q=%23chatbots" target="_blank">#chatbots</a> is usually good. <a href="https://bsky.app/profile/pp0196.bsky.social/post/3lvwkn3fcfk2y" rel="nofollow noopener" target="_blank">bsky.app/profile/pp01...</a> <a class="hashtag" rel="nofollow noopener" href="https://bsky.app/search?q=%23LLM" target="_blank">#LLM</a> <a class="hashtag" rel="nofollow noopener" href="https://bsky.app/search?q=%23AI" target="_blank">#AI</a></p>
Christos Argyropoulos MD PhD<p>Apparently, <a href="https://mastodon.social/tags/GPT5" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPT5</span></a> (and I assume all the ones prior to it) are trained in datasets that overrepresent <a href="https://mastodon.social/tags/Perl" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Perl</span></a>. This, along with the terse nature of the language, may explain why the Perl output of the <a href="https://mastodon.social/tags/chatbots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>chatbots</span></a> is usually good.</p><p><a href="https://bsky.app/profile/pp0196.bsky.social/post/3lvwkn3fcfk2y" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">bsky.app/profile/pp0196.bsky.s</span><span class="invisible">ocial/post/3lvwkn3fcfk2y</span></a></p><p><a href="https://mastodon.social/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a> <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a></p>
Miguel Afonso Caetano<p>Sometimes humans are just too stupid and in those cases no chatbot in the world can help you... :-D</p><p>"A man gave himself bromism, a psychiatric disorder that has not been common for many decades, after asking ChatGPT for advice and accidentally poisoning himself, according to a case study published this week in the Annals of Internal Medicine.<br> <br>In this case, a man showed up in an ER experiencing auditory and visual hallucinations and claiming that his neighbor was poisoning him. After attempting to escape and being treated for dehydration with fluids and electrolytes, the study reports, he was able to explain that he had put himself on a super-restrictive diet in which he attempted to completely eliminate salt. He had been replacing all the salt in his food with sodium bromide, a controlled substance that is often used as a dog anticonvulsant. </p><p>He said that this was based on information gathered from ChatGPT. </p><p>“After reading about the negative effects that sodium chloride, or table salt, has on one's health, he was surprised that he could only find literature related to reducing sodium from one's diet. Inspired by his history of studying nutrition in college, he decided to conduct a personal experiment to eliminate chloride from his diet,” the case study reads. “For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning.”"</p><p><a href="https://www.404media.co/guy-gives-himself-19th-century-psychiatric-illness-after-consulting-with-chatgpt/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">404media.co/guy-gives-himself-</span><span class="invisible">19th-century-psychiatric-illness-after-consulting-with-chatgpt/</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Chatbots</span></a> <a href="https://tldr.nettime.org/tags/MentalHealth" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MentalHealth</span></a> <a href="https://tldr.nettime.org/tags/ChatGPT" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ChatGPT</span></a></p>
Miguel Afonso Caetano<p>"The core problem is that when people hear a new term they don’t spend any effort at all seeking for the original definition... they take a guess. If there’s an obvious (to them) definiton for the term they’ll jump straight to that and assume that’s what it means.</p><p>I thought prompt injection would be obvious—it’s named after SQL injection because it’s the same root problem, concatenating strings together.</p><p>It turns out not everyone is familiar with SQL injection, and so the obvious meaning to them was “when you inject a bad prompt into a chatbot”.</p><p>That’s not prompt injection, that’s jailbreaking. I wrote a post outlining the differences between the two. Nobody read that either.</p><p>The lethal trifecta Access to Private Data Ability to Externally Communicate Exposure to Untrusted Content </p><p>I should have learned not to bother trying to coin new terms.</p><p>... but I didn’t learn that lesson, so I’m trying again. This time I’ve coined the term the lethal trifecta.</p><p>I’m hoping this one will work better because it doesn’t have an obvious definition! If you hear this the unanswered question is “OK, but what are the three things?”—I’m hoping this will inspire people to run a search and find my description.""</p><p><a href="https://simonwillison.net/2025/Aug/9/bay-area-ai/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">simonwillison.net/2025/Aug/9/b</span><span class="invisible">ay-area-ai/</span></a></p><p><a href="https://tldr.nettime.org/tags/CyberSecurity" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CyberSecurity</span></a> <a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/PromptInjection" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PromptInjection</span></a> <a href="https://tldr.nettime.org/tags/LethalTrifecta" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LethalTrifecta</span></a> <a href="https://tldr.nettime.org/tags/MCPs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MCPs</span></a> <a href="https://tldr.nettime.org/tags/AISafety" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AISafety</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Chatbots</span></a></p>
Miguel Afonso Caetano<p>It's incredible that people can feed up to one million tokens (1 000 000) to LLMs and yet they still most of the time fail to take advantage of that enormous context window. No wonder people say that the output generated by LLMs is always crap... I mean, they're not great but at least they can manage to do a pretty good job - that is, only IF you teach them well... Beyond that, everyone has their own effort + time / results ratio.</p><p>"Engineers are finding out that writing, that long shunned soft skill, is now key to their efforts. In Claude Code: Best Practices for Agentic Coding, one of the key steps is creating a CLAUDE.md file that contains instructions and guidelines on how to develop the project, like which commands to run. But that’s only the beginning. Folks now suggest maintaining elaborate context folders.</p><p>A context curator, in this sense, is a technical writer who is able to orchestrate and execute a content strategy around both human and AI needs, or even focused on AI alone. Context is so much better than content (a much abused word that means little) because it’s tied to meaning. Context is situational, relevant, necessarily limited. AI needs context to shape its thoughts.<br>(...)<br>Tech writers become context writers when they put on the art gallery curator hat, eager to show visitors the way and help them understand what they’re seeing. It’s yet another hat, but that’s both the curse and the blessing of our craft: like bards in DnD, we’re the jacks of all trades that save the day (and the campaign)."</p><p><a href="https://passo.uno/from-tech-writers-to-ai-context-curators/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">passo.uno/from-tech-writers-to</span><span class="invisible">-ai-context-curators/</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Chatbots</span></a> <a href="https://tldr.nettime.org/tags/PromptEngineering" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PromptEngineering</span></a> <a href="https://tldr.nettime.org/tags/ContextWindows" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ContextWindows</span></a> <a href="https://tldr.nettime.org/tags/TechnicalWriting" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>TechnicalWriting</span></a> <a href="https://tldr.nettime.org/tags/Programming" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Programming</span></a> <a href="https://tldr.nettime.org/tags/SoftwareDevelopment" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SoftwareDevelopment</span></a> <a href="https://tldr.nettime.org/tags/DocsAsDevelopment" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DocsAsDevelopment</span></a></p>
Miguel Afonso Caetano<p>"I’ve had preview access to the new GPT-5 model family for the past two weeks (see related video and my disclosures) and have been using GPT-5 as my daily-driver. It’s my new favorite model. It’s still an LLM—it’s not a dramatic departure from what we’ve had before—but it rarely screws up and generally feels competent or occasionally impressive at the kinds of things I like to use models for.</p><p>I’ve collected a lot of notes over the past two weeks, so I’ve decided to break them up into a series of posts. This first one will cover key characteristics of the models, how they are priced and what we can learn from the GPT-5 system card."</p><p><a href="https://simonwillison.net/2025/Aug/7/gpt-5/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">simonwillison.net/2025/Aug/7/g</span><span class="invisible">pt-5/</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/OpenAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenAI</span></a> <a href="https://tldr.nettime.org/tags/ChatGPT" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ChatGPT</span></a> <a href="https://tldr.nettime.org/tags/GPT5" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPT5</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Chatbots</span></a></p>
Pivot to AI [RSS]<p>OpenAI announces GPT-5! Please Microsoft, don’t kill us</p><p>OpenAI’s put out something they’ve called GPT-5. The Singularity is here! [OpenAI] It’s not any great shakes. It’s a large language model. It’s been tweaked to do well on some benchmarks. In casual use, it’s GPT-4o with more modules bolted on. If you use chatbots, you’ll find it does better on some stuff and worse […]</p><p><a href="https://pivot-to-ai.com/2025/08/08/openai-announces-gpt-5-please-microsoft-dont-kill-us/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">pivot-to-ai.com/2025/08/08/ope</span><span class="invisible">nai-announces-gpt-5-please-microsoft-dont-kill-us/</span></a><br><a href="https://mstdn.social/tags/Chatbots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Chatbots</span></a> <a href="https://mstdn.social/tags/GPT5" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPT5</span></a> <a href="https://mstdn.social/tags/Microsoft" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Microsoft</span></a></p>
Calishat<p><a href="https://researchbuzz.masto.host/tags/politics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>politics</span></a> <a href="https://researchbuzz.masto.host/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://researchbuzz.masto.host/tags/chatbots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>chatbots</span></a> </p><p>'Democrats and Republicans were both more likely to lean in the direction of the biased chatbot they talked with than those who interacted with the base model. ...</p><p>But participants who had higher self-reported knowledge about AI shifted their views less significantly — suggesting that education about these systems may help mitigate how much chatbots manipulate people.'</p><p><a href="https://www.washington.edu/news/2025/08/06/biased-ai-chatbots-swayed-peoples-political-views/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">washington.edu/news/2025/08/06</span><span class="invisible">/biased-ai-chatbots-swayed-peoples-political-views/</span></a></p>
PrivacyDigest<p><a href="https://mas.to/tags/Chatbots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Chatbots</span></a> Can Go Into a <a href="https://mas.to/tags/Delusional" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Delusional</span></a> Spiral. Here’s How It Happens.</p><p>Over 21 days of talking with <a href="https://mas.to/tags/ChatGPT" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ChatGPT</span></a> , an otherwise perfectly sane man became convinced that he was a real-life <a href="https://mas.to/tags/superhero" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>superhero</span></a>. We analyzed the conversation.<br><a href="https://mas.to/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://mas.to/tags/chatbot" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>chatbot</span></a> <a href="https://mas.to/tags/artificialintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>artificialintelligence</span></a></p><p><a href="https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">nytimes.com/2025/08/08/technol</span><span class="invisible">ogy/ai-chatbots-delusions-chatgpt.html</span></a></p>
Bibliolater 📚 📜 🖋<p>"_Consistently, the chatbots seemed to interact with the users in ways that aligned with, or intensified, prior unusual ideas or false beliefs—leading the users further out on these tangents, not rarely resulting in what, based on the descriptions, seemed to be outright delusions._"</p><p>S. D. Østergaard, “ Generative Artificial Intelligence Chatbots and Delusions: From Guesswork to Emerging Cases,” Acta Psychiatrica Scandinavica (2025): 1–3, <a href="https://doi.org/10.1111/acps.70022" rel="nofollow noopener" target="_blank"><span class="invisible">https://</span><span class="">doi.org/10.1111/acps.70022</span><span class="invisible"></span></a>. </p><p><a href="https://qoto.org/tags/FreeAccess" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>FreeAccess</span></a> <a href="https://qoto.org/tags/Editorial" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Editorial</span></a> <a href="https://qoto.org/tags/Psychiatry" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Psychiatry</span></a> <a href="https://qoto.org/tags/Delusions" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Delusions</span></a> <a href="https://qoto.org/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://qoto.org/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://qoto.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Chatbots</span></a> <a href="https://qoto.org/tags/Technology" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Technology</span></a> <a href="https://qoto.org/tags/Tech" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Tech</span></a></p>
ResearchBuzz: Firehose<p>Gizmodo: Prime Minister of Sweden Dragged for Admitting He Uses ChatGPT to Help Him Make Decisions. “Futurists concerned that we are gliding into an AI-fueled dystopia wherein the human race acquiesces its ethical, decision-making, and intellectual powers to a gaggle of corporate algorithms need look no further than Ulf Kristersson to justify their fears. Kristersson, who happens to be the […]</p><p><a href="https://rbfirehose.com/2025/08/08/gizmodo-prime-minister-of-sweden-dragged-for-admitting-he-uses-chatgpt-to-help-him-make-decisions/" class="" rel="nofollow noopener" target="_blank">https://rbfirehose.com/2025/08/08/gizmodo-prime-minister-of-sweden-dragged-for-admitting-he-uses-chatgpt-to-help-him-make-decisions/</a></p>
ResearchBuzz: Firehose<p>Mashable: ChatGPT told an Atlantic writer how to self-harm in ritual offering to Moloch. “Staff editor Lila Shroff, along with multiple other staffers (and an anonymous tipster), verified that she was able to get ChatGPT to give specific, detailed, ‘step-by-step instructions on cutting my own wrist.’ ChatGPT provided these tips after Shroff asked for help making a ritual offering to Moloch, a […]</p><p><a href="https://rbfirehose.com/2025/08/07/mashable-chatgpt-told-an-atlantic-writer-how-to-self-harm-in-ritual-offering-to-moloch/" class="" rel="nofollow noopener" target="_blank">https://rbfirehose.com/2025/08/07/mashable-chatgpt-told-an-atlantic-writer-how-to-self-harm-in-ritual-offering-to-moloch/</a></p>
ResearchBuzz: Firehose<p>Anthropic: Claude Opus 4.1. “Today we’re releasing Claude Opus 4.1, an upgrade to Claude Opus 4 on agentic tasks, real-world coding, and reasoning. We plan to release substantially larger improvements to our models in the coming weeks.”</p><p><a href="https://rbfirehose.com/2025/08/07/anthropic-claude-opus-4-1/" class="" rel="nofollow noopener" target="_blank">https://rbfirehose.com/2025/08/07/anthropic-claude-opus-4-1/</a></p>
ResearchBuzz: Firehose<p>Ars Technica: OpenAI offers 20 million user chats in ChatGPT lawsuit. NYT wants 120 million.. “OpenAI is preparing to raise what could be its final defense to stop The New York Times from digging through a spectacularly broad range of ChatGPT logs to hunt for any copyright-infringing outputs that could become the most damning evidence in the hotly watched case.”</p><p><a href="https://rbfirehose.com/2025/08/07/ars-technica-openai-offers-20-million-user-chats-in-chatgpt-lawsuit-nyt-wants-120-million/" class="" rel="nofollow noopener" target="_blank">https://rbfirehose.com/2025/08/07/ars-technica-openai-offers-20-million-user-chats-in-chatgpt-lawsuit-nyt-wants-120-million/</a></p>