I like seeing how @pluralistic is refining his anti-#AI arguments over time. In this interview, I love the idea of reframing "hallucinations" as "defects", the analogy that trying to get #AGI out of #LLMs is like breeding faster horses and expecting one to give birth to a locomotive, and ridiculing the premise that "if you teach enough words to the word-guessing machine it will become God."
#agi
RE: https://chaos.social/@afelia/115622015031682948
Urgent listening recommendation! Thanks @afelia 🙏🏽 #KI #AGI #Techbros
Generative AI is currently wiping out entire professional fields. The results produced by humans, with their creativity and experience, are probably better than those of so-called AI. But employers and clients don't care: AI is cheaper. Submission doesn't help either: "Look at my LinkedIn, I have an AI certificate / I support my clients in working with AI" / "I find this development really exciting."
Michael Blume replied to Michael Blume's status
Saw, and still see, too few sustainable corporate business models for #AGI & #Krypto. "I believe the bubble will burst within a few years; there is too much money, too much energy in the system. The dominance of the current corporations that collect everyone's data globally will end. Instead, we will move to decentralized open-source applications and AI systems," Blume said on Thursday at the Bodensee Business Forum of the "Schwäbische Zeitung" in Friedrichshafen. (2/2) https://www.newsroom.de/news/aktuelle-meldungen/multimedia-9/medienethiker-blume-monopole-von-google-meta-und-x-werden-enden-975689/
Essential reading for understanding the influence that The Extropians' outlandish ideas have had on many Big Tech CEOs' obsession with AGI.
"Watching tech moguls throw caution to the wind in the AI arms race or equivocate on whether humanity ought to continue, it’s natural to wonder whether they care about human lives.
The earnest, in-depth answer to this question is just as bleak as the glib one. As moral philosopher Émile Torres argues, many Silicon Valley leaders embrace a vision of a transhumanist future in which biological humans will be replaced by digital beings endowed with superintelligence. This vision helps explain their obsession with artificial general intelligence (AGI) and sits at the core of what Torres describes as human extinctionist preferences.
In 2023, Torres and his colleague Timnit Gebru coined the acronym TESCREAL to describe a constellation of ideologies — Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism — that have become highly influential within Silicon Valley. Torres is a philosopher, intellectual historian, and journalist whose work focuses on the ethics of emerging technologies, particularly AI and human extinction.
In this conversation with journalist Doug Henwood recorded for the Jacobin podcast Behind the News, Torres explains the TESCREAL worldview, its connections to eugenics and IQ realism, and why figures like Elon Musk, Peter Thiel, and Sam Altman embrace visions of a post-human future."
https://jacobin.com/2025/11/musk-thiel-altman-ai-tescrealism/
#BigTech #AI #AGI #TESCREAL #SuperIntelligence #PostHumanism
Christian Peach replied to Anne Roth's status
@anneroth There is already an open letter from scientists calling on Ursula von der Leyen to retract her nonsensical claims about #AGI: https://www.iccl.ie/wp-content/uploads/2025/11/20251110_Scientists-letter-to-the-President-AI-Hype.pdf
Reading 'Empire of AI' by Karen Hao.
2 things stand out in the first 3 chapters alone:
1) From #OpenAI's new 2018 for-profit charter, this definition of #AGI: "highly autonomous systems that outperform humans at most economically valuable work"
2) From Karen's interview that same year, OpenAI's reasoning for wanting to build AGI: "We think it can help solve complex problems that are just out of reach of humans" - like climate change and health care.
An interesting article in @mittechreview on how AI is a kind of cult, believed in with conspiratorial fervor and sustained by short-term promises that never quite arrive.
Hank Green applying sentience to AI and warning of AGI. This doesn't seem great.
I mean, it makes sense, Hank is very much in his white-guy tech bubble like he is in his white-guy political bubble.
Still, Hank has access to a lot of non-tech-savvy people, and he's parroting talking points that tech CEOs love to hear. It would be great if he made room to platform someone like @timnitGebru to his audience.
"The 85-year-old computer pioneer Alan Kay, a revered figure in the industry, offered some perspective. He argued that AI could undoubtedly bring real benefits. Indeed, it had helped detect his cancer in an MRI scan. “AI is a lifesaver,” he said.
However, Kay worried that humans are easily fooled and that AI companies cannot always explain how their models produce the results. Software engineers, like aeroplane designers or bridge builders, he said, had a duty of care to ensure their systems did not cause harm or fail. The main theme of this century should be safety.
The best way forward would be to harness humanity’s collective intelligence, steadily amassed over generations. “We already have artificial superhuman intelligence,” Kay said. “It is science.” AI has already produced some exquisite breakthroughs, such as Google DeepMind’s AlphaFold model that predicted the structures of over 200mn proteins — winning the researchers a Nobel.
Kay highlighted his particular concerns about the vulnerabilities of AI-generated code. Citing his fellow computer scientist Butler Lampson, he said: “Start the genies off in bottles and keep them there.” That’s not a bad adage for our AI age."
#AI #GenerativeAI #AGI #LLMs #Chatbots
https://www.ft.com/content/34748e3e-92d1-4b42-9528-f98cf6b9f2f2
"Don’t believe the hype and hyperbole. AGI is not imminent. The large language models (LLMs) that power current “AI” — a marketing term coined in 1955 to attract funding from the US war machine — will never by themselves lead to AGI. Scaling up these models will not yield superintelligent machines that take over the world or usher in “utopia.”
Even Richard Sutton, an anti-democracy pro-extinctionist who won the Turing Award last year for his contributions to reinforcement learning, now “contends that LLMs are not a viable path to true general intelligence.” He concedes that they are a “dead end.”
This is exactly what Gary Marcus has been arguing for years. In his 2019 book Rebooting AI, Marcus and his coauthor contend that neural networks (e.g., LLMs) are “forever stuck in the realm of correlations.” Consequently, they will “never, with any amount of data or compute, be able to understand causal relationships — why things are the way they are — and thus perform causal reasoning.”
Indeed, LLMs have no “world model,” i.e., an understanding of the world as containing entities with stable properties that persist through time and are linked together in a complex network of causal relations. Such models “include all the things required to plan, take actions and make predictions about the future.”"
https://www.realtimetechpocalypse.com/p/stop-believing-the-lie-that-agi-is
The development of today's generative AI (#IAgenerativa) is just one more step toward achieving AGI. Silicon Valley's main goal is eugenics (#eugenesia) through the colonization of humanity. This technology is neither harmless nor a "tool"; it is an instrument that sacrifices the present in pursuit of a utopian future.
Here is the Spanish translation, produced by Arte es Ética, of the report "The TESCREAL bundle" by Timnit Gebru and Émile P. Torres.
On the cult of #AGI in 4 steps: 1) we give it all our online knowledge, 2) we give it all our energy, 3) knowing the world is in flames, we ask it for a solution and 4) the answer is...
That is what @davidrevoy painted in this great comic https://framapiaf.org/@davidrevoy/115180874986726269
Check out his other fun works of art :-)
Big Tech's money-printing machine
Apple, Amazon, Meta, NVidia are considered the stars of the stock market. Their shares keep rising and rising; investors scramble for them. That gives them seemingly unlimited possibilities, as long as the hype holds. Author and net activist Cory Doctorow explains how Big Tech's money-printing machine works.
👉 https://kaffeeringe.de/2025/08/27/aktien-als-gelddruckmaschinen-fuer-big-tech/
#AGI #Aktien #Amazon #Apple #AppleVisionPro #AugmentedReality #BigTech #Blockchain #ChatGPT #CoryDoctorow #ElonMusk #Facebook #KI #MarkZuckerberg #Meta #Metaverse #NFT #NVidia #OpenAI #SamAltman #Tesla #VirtualReality #Wirtschaft #zerschlagtDieMonopole