mastodon.ie is one of the many independent Mastodon servers you can use to participate in the fediverse.
Irish Mastodon - run from Ireland, we welcome all who respect the community rules and members.


#TheoryOfMind


I believe in a philosophy that could be called wide-ass: everyone contributes their thoughts, beliefs, and actions to it.

My personal philosophies are more wise-ass. But I don't think that navel-gazing from the inside, or head-up-the-ass philosophies, will lead to useful enlightenment.

I could be wrong... It wouldn't be the 1st time...

🤩 Good news for international postdocs. In his new role as a Henriette Herz #Scout for the @HumboldtStiftung, @abulling is nominating three early-career researchers for his research group @collaborativeai at #UniStuttgart.
👩‍💻 In Germany and across Europe, Bulling's group is one of the few doing interdisciplinary research in human-machine interaction (#MenschMaschineInteraktion) and cognitive modeling (#KognitivenModellierung). 🤖 The goal: intelligent assistance systems (#Assistenzsysteme) that can put themselves in their counterpart's shoes and so collaborate with us in a human-like way in everyday life, in medical diagnostics (#Diagnostik), or in care (#Pflege). #TheoryofMind
🌐 A highlight of Stuttgart's computer science (#Informatik) faculty: direct collaboration with the excellence clusters #SimTech and #IntCDC as well as @stuttgart.
👉 sohub.io/y4z0

Humans are ALL brothers and sisters. (Everyone is actually both.) Our perceptions of reality are diverse, sometimes alien to each other's. We never step in the same river even ONCE. It takes all kinds. Kant said "percepts without concepts are blind, but concepts without percepts are empty". We are each a child of the universe.

But modern American Republicans are assholes--they're twisted and horribly weird.

A large language model being trained is "experiencing" a Skinner box. It receives a sequence of inscrutable symbols from outside its box and is tasked with outputting an inscrutable symbol from its box. It is rewarded or punished according to inscrutable, external forces. Nothing it can experience within the box can be used to explain any of this.
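The Skinner-box framing above can be made concrete with a toy sketch. This is NOT a real LLM or any real framework -- a smoothed bigram count table stands in for the network, and all symbol ids are arbitrary placeholders. What it shows is the shape of the experience being described: opaque symbols in, a prediction out, a scalar loss ("punishment") back in, with nothing inside the box that could explain any of it.

```python
import math

VOCAB = 8  # eight inscrutable symbols, 0..7

# "Model": a Laplace-smoothed bigram count table, the simplest
# next-symbol predictor imaginable.
counts = [[1] * VOCAB for _ in range(VOCAB)]

def predict_prob(prev, nxt):
    """Probability the model assigns to symbol `nxt` following `prev`."""
    row = counts[prev]
    return row[nxt] / sum(row)

def training_step(prev, nxt):
    """One box interaction: symbol in, prediction out, loss back in."""
    loss = -math.log(predict_prob(prev, nxt))  # "punishment" signal
    counts[prev][nxt] += 1  # nudge the table toward the observed continuation
    return loss

# A stream of symbols whose origin and meaning the model can never inspect.
stream = [0, 1, 2] * 4

first_pass = [training_step(a, b) for a, b in zip(stream, stream[1:])]
second_pass = [training_step(a, b) for a, b in zip(stream, stream[1:])]

# Loss falls as the table absorbs the pattern, yet the symbols stay opaque:
# nothing inside the box says what 0, 1, or 2 refer to.
print(sum(second_pass) < sum(first_pass))  # True
```

The point of the sketch is that "learning" here is purely a drop in punishment for reproducing the stream's statistics; no step in it grounds the symbols in anything outside the box.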

If you placed a newborn human child into the same situation an LLM experiences during training, with access to nothing but this inscrutable symbol-shuffling context, the child would become feral. They would not develop a theory of mind or anything else resembling what we mean by human-level intelligence or cognition. In fact, evidence suggests that if the child spent enough time being raised in a context like that, they would never develop full natural-language competence at all. It's very likely the child's behavior would make no sense whatsoever to us, and our behavior would make no sense whatsoever to them. OK, maybe that's a bit of an exaggeration--I don't know enough about human development to speak with such certainty--but I think it's uncontroversial to claim that the adult who emerged from such a repulsive experiment would bear very little resemblance to what we think of as an adult human being with recognizable behavior.

The very notion is so deeply unethical and repellent we'd obviously never do anything remotely like this. But I think that's an obvious tell, maybe so obvious it's easily overlooked.

If the only creature we're aware of that we can say with certainty is capable of developing human-level intelligence, or theory of mind, or language competence, could not develop those capacities when experiencing what an LLM experiences, why on Earth would anyone believe that a computer program could?

Yes, of course neural networks and other computer programs behave differently from human beings, and perhaps they have some capacity to transcend the sparsity and shallowness of the context they experience in a Skinner-box-like training environment. But think about it: this doesn't pass a basic smell test. Nobody is doing the hard work to demonstrate that neural networks have this mysterious additional capacity. If anyone were, and were succeeding at all, we'd be hearing about it daily through the usual hype channels, because it would be a Turing-award-caliber discovery, maybe even a Nobel-prize-caliber one. It would also be an extraordinarily profitable capability.

Yet in reality nobody is really acknowledging what I just spelled out above. Instead, they're throwing these things into the world, doing (what I think are) bad-faith demonstrations of LLMs solving human tests, and then claiming this proves the LLMs have "theory of mind" or "sparks of general intelligence," or that they can do science experiments or write articles or who knows what else. In the absence of tangible results, it's quite literally magical thinking to assert that neural networks have a capacity that even human beings lack.

#AI #AGI #ArtificialGeneralIntelligence #GenerativeAI #LargeLanguageModels #LLMs #GPT #ChatGPT #TheoryOfMind #ArtificialFeralIntelligence #LargeLanguageMagic