mastodon.ie is one of the many independent Mastodon servers you can use to participate in the fediverse.
Irish Mastodon - run from Ireland, we welcome all who respect the community rules and members.


#artificialgeneralintelligence


Artificial Intelligence Then and Now | Communications of the ACM
dl.acm.org/doi/10.1145/3708554

Interesting summary of the current AI hype, how it compares with the previous one in the 80s, and whether we are that close to AGI. tl;dr: no.

Including an amusing example where ChatGPT is unable to differentiate the real Monty Hall problem en.wikipedia.org/wiki/Monty_Ha from lookalikes, offering the same counter-intuitive solution to all of them, even when the actual solution is obvious. No logical reasoning at all here, fine or otherwise.
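For context, the counter-intuitive part of the genuine Monty Hall problem is that switching wins two thirds of the time. A minimal Python simulation (my own sketch, not from the article) makes the gap between staying and switching concrete:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of the classic game; returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host knowingly opens a door that is neither the pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

random.seed(0)
trials = 100_000
stay = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
swap = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(f"stay: {stay:.3f}, switch: {swap:.3f}")  # stay ≈ 1/3, switch ≈ 2/3
```

In the lookalike variants the article alludes to (e.g. a host who opens a door at random and merely happens to reveal a goat), the 2/3 advantage for switching disappears, which is exactly the distinction a system doing no logical reasoning will miss.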

What some forget is that capitalism doesn't want a sentient machine that has rights.
They want a machine that does what it is told, in the most energy-inefficient way possible, because the environmental harm lets them sell more products to deal with the damage.

They want a glorified autocomplete that looks sentient enough, purpose-built to cheat on flawed sentience tests, so they can lie about it.
#ai #llm #technology #tech #agi #artificialintelligence #artificialgeneralintelligence #computer #computers

The world may well see the development of #ArtificialGeneralIntelligence during Trump’s final term as #president. How many folks think he has the competence to see that regulations ensure that it is safe and not used for nefarious purposes? Likewise, who trusts #Trump to not abuse the US #surveillanceState, which no doubt has become more powerful with advancements in #artificialIntelligence ? #USPolitics #USPol #UnitedStates #Surveillance #USElection #AIRace #AI

Replied in thread

@mina @si_irini @2ndStar @SilviaMarton

What is #ArtificialGeneralIntelligence? #AGI
(3a/n)

...chess and go players. And as we’ve discussed before on #Babbage, they’re tackling some of the biggest challenges in science, like predicting the three dimensional structures of proteins. One of the company’s goals from the outset, though, has been to build a system that would achieve AGI. And recently they’ve been 👉 trying to formalise what that actually means.👈 The...

Replied in thread

@mina @si_irini @2ndStar @SilviaMarton

What is #ArtificialGeneralIntelligence? #AGI
(2/n)

...made me change my mind. - Since The Economist is now finally allowing AI-generated transcripts of its podcasts on a trial basis, I can share an excerpt (paywall):

economist.com/podcasts/2024/09

"Alok Jha, The Economist’s science and technology editor:"

"...#GoogleDeepMind is one of the most influential AI companies in the world. Its models have beaten the world’s best...

The Economist · "What is artificial general intelligence?"
Replied in thread

@mina

#ArtificialGeneralIntelligence
#AGI
(1/n)

I see the point about a "further stage of human self-disempowerment" exactly the same way. Since AI is evolving even faster than the (presumably progressive) warming of the climate, many of us will still get the chance to experience dystopias à la "I, Robot" and "#SecondVariety".

Incidentally, until a few days ago I held a different opinion. The interviews in this month's first episode of #TheEconomist's #Babbage podcast on the topic of AGI made...

Replied in thread

@si_irini
"I don't know whether such a thing can capture true feelings - well, they can't anyway, but can they explain and interpret them?..."

So far, #AI systems cannot do that.*

The Economist, however, argues that with today's AIs we have already reached the first evolutionary stage of an #ArtificialGeneralIntelligence.

From my perspective, this precise hermeneutic analysis is by no means just the babbling of a #StochasticParrot.

*
#TheEconomist Podcast (paywalled)

@mina @2ndStar @SilviaMarton

A large language model being trained is "experiencing" a Skinner box. It receives a sequence of inscrutable symbols from outside its box and is tasked with outputting an inscrutable symbol from its box. It is rewarded or punished according to inscrutable, external forces. Nothing it can experience within the box can be used to explain any of this.

If you placed a newborn human child into the same situation an LLM experiences during training and gave them access to nothing else but this inscrutable-symbol-shuffling context, the child would become feral. They would not develop a theory of mind or anything else resembling what we think of when we think of human-level intelligence or cognition. In fact, evidence suggests that if the child spent enough time being raised in a context like that, they would never be able to develop full natural language competence. It's very likely the child's behavior would make no sense whatsoever to us, and our behavior would make no sense whatsoever to them. OK, maybe that's a bit of an exaggeration--I don't know enough about human development to speak with such certainty--but I think it's not controversial to claim that the adult who emerged from such a repulsive experiment would bear very little resemblance to what we think of as an adult human being with recognizable behavior.

The very notion is so deeply unethical and repellent we'd obviously never do anything remotely like this. But I think that's an obvious tell, maybe so obvious it's easily overlooked.

If the only creature we're aware of that we can say with certainty is capable of developing human-level intelligence, or theory of mind, or language competence, could not develop those capacities when experiencing what an LLM experiences, why on Earth would anyone believe that a computer program could?

Yes, of course neural networks and other computer programs behave differently from human beings, and perhaps they have some capacity to transcend the sparsity and lack of depth of the context they experience in a Skinner-box-like training environment. But think about it: this does not pass a basic smell test. Nobody's doing the hard work to demonstrate that neural networks have this mysterious additional capacity. If they were, and they were succeeding at all, we'd be hearing about it daily through the usual hype channels, because that'd be a Turing-award-caliber discovery, maybe even a Nobel-prize-caliber one. It would also be an extraordinarily profitable capability. Yet in reality nobody's really acknowledging what I just spelled out above. Instead, they're throwing these things into the world and doing (what I think are) bad-faith demonstrations of LLMs solving human tests, and then claiming this proves the LLMs have "theory of mind" or "sparks of general intelligence" or that they can do science experiments or write articles or who knows what else. In the absence of tangible results, it's quite literally magical thinking to assert neural networks have this capacity that even human beings lack.

#AI #AGI #ArtificialGeneralIntelligence #GenerativeAI #LargeLanguageModels #LLMs #GPT #ChatGPT #TheoryOfMind #ArtificialFeralIntelligence #LargeLanguageMagic

I enjoy the This Day in AI podcast, but one thing really gets on my nerves. One of them keeps saying, 'If we get AGI tomorrow, what would you even ask it?' 🤔 He seems to imply that they, and by extension we, wouldn't be able to benefit from it in any meaningful way. I could rattle off 2 or 3 dozen questions I'd like answers to off the top of my head! 🧠✨ Let's hope the people who finally create AGI have more imagination. Maybe they don't really believe it's going to be all that powerful? 🤷‍♀️ #AI #ArtificialGeneralIntelligence #AGI #ImaginationMatters #ThisDayInAI

The TESCREAL bundle: #Eugenics and the promise of utopia through #artificialgeneralintelligence

[...] many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability.

firstmonday.org/ojs/index.php/


The deepest problem with deep learning is its utter inability to tell correlations from causation. Gary Marcus's article (tinyurl.com/6vnpd8xu) is rather insightful, and comparing it with Francis Crick's similar effort (tinyurl.com/54y3975f) from 1989 immediately highlights the lack of academic and scientific integrity in corporate backed research. They don't care about finding things out. They just want to sell you crap...
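The correlation-vs-causation gap the post describes can be shown with a toy example (my own illustration, using only the standard library): a confounder drives two variables that have no causal link, yet any purely correlational learner would treat one as predictive of the other until an intervention breaks the pattern.

```python
import random

random.seed(1)
n = 10_000
# Confounder z drives both x and y; x has no causal effect on y.
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

print(f"corr(x, y)     = {pearson(x, y):.2f}")  # strong (~0.8), purely via z

# Intervene: set x by fiat, do(x), severing its link to the confounder z.
x_do = [random.gauss(0, 1.12) for _ in range(n)]  # same marginal spread as x
print(f"corr(do(x), y) = {pearson(x_do, y):.2f}")  # ~0: no causal effect
```

A model fit only on observational (x, y) pairs would confidently "predict" y from x, and be useless the moment x is set by intervention, which is the failure mode being attributed to deep learning here.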


Woot, we brought up a website for my new company with a little bit of information. Hopefully will make the hiring process a bit smoother too.

Naturally our company website also doubles as a #Fedipage based single-user instance.

You can follow it on the fediverse here: @cleverthis

And the website itself is here: CleverThis.com
