mastodon.ie is one of the many independent Mastodon servers you can use to participate in the fediverse.

With a moment's contemplation after reading this, I just realized how spectacularly bad this could go if, for example, you searched for a chemical's MSDS and it gave you back some bullshit advice to take in the event of exposure or fire.

Human brains melt at the concept of what to do with chemical incompatibles & water-reactive substances during a fire. Machine learning will train on the typical responses... but the zebras will kill you.
twitter.com/edzitron/status/16

Ed Zitron on Twitter: “Today's Substack is about the outright recklessness of Google and Microsoft integrating experimental conversational AI into real search engines, and how the ramifications of doing so could genuinely get people hurt or killed. https://t.co/NIZ6Ui3jXM”

However, an AI trained to think about the zebras is going to return useless info to you most of the time.

The very first thing students do before an experiment is "check the literature," and step one is almost always a Google search. I grimly await a lab blowing up due to AI advice.

It's nice that I have a new thing to add to safety training now: people should absolutely not use any conversational-AI-generated advice unless they are actively seeking a Darwin Award.

As someone in a safety role, considering all the things people regularly search Google for, like material safety data sheets, I'd say asking an AI what to do in any dangerous situation is:

1) Asking to die from inaccurate/blatantly wrong advice.

2) A liability nightmare. Not sure Section 230 is going to protect the AI's owner.

Will the AI tell you to put out a vinyl chloride fire with water? Will it tell you to wear a HEPA-filter respirator? If it does and you follow that advice, you are about to die.