#ai

Only in SF:

- a bunch of explicitly anti-social, pro-AI ads show up
- people get mad
- ads had things like ‘you’re a bad parent. Just use AI to fix your kids before they go wrong’ and ‘replace humans’
- they revealed that this was all a joke to make a statement about irresponsible AI

(Phew, right?)

- it was an elaborate stunt by a startup that believes they are ‘responsible AI’ and others are not

(Checks notes)

- said startup wants to replace receptionists with AI

https://www.abby.com/news-and-announcements/yes-it-was-us-why-abby-launched-a-campaign-about-the-end-of-humanity/

I AM SO TIRED

🌶 take: the anti-AI crowd is getting so absurdly obsessed over anything-having-A-and-I-next-to-each-other that it's getting more annoying than the AI itself…

"The openness adopted by Chinese AI companies, which sees them routinely publishing papers detailing new engineering and training tricks, stands in stark contrast to the increasingly closed ethos of big US companies, which seem afraid of giving away their intellectual property, Kowinski says. A paper from the Qwen team, detailing a way to enhance the intelligence of models during training, was named as one of the best papers at NeurIPS this year."

https://www.wired.com/story/expired-tired-wired-gpt-5/

replied to Esther Plomp's status

I think this should indeed be public knowledge, since we already pay for it with public money via the universities. The same is not true for the creative industry, including the many individual artists who need to make a living off their creative works. But instead of addressing this copyright issue (the fact that people don't have access to knowledge generated with their own money), the blog posts are instead complaining that they should be exempted?

2/3

I like the way Anthony Moser talks about Large Language Models.
https://www.rollingstone.com/culture/culture-features/ai-chatbot-journal-research-fake-citations-1235485484/ or https://archive.vn/0Iuo6

Moser tells Rolling Stone that to even claim LLMs “hallucinate” fictional publications misunderstands the threat they pose to our comprehension of the world, because the term “implies that it’s different from the normal, correct perception of reality.” But the chatbots are “always hallucinating,” he says.

“It’s not a malfunction. A predictive model predicts some text, and maybe it’s accurate, maybe it isn’t, but the process is the same either way. To put it another way: LLMs are structurally indifferent to truth.”

h/t @fsinn