#agi


RE: https://chaos.social/@afelia/115622015031682948

Urgent listening recommendation! Thanks @afelia 🙏🏽

Generative AI is currently wiping out entire professional fields. The results produced by humans, with their creativity and experience, are probably better than those of so-called AI. But employers / clients don't care. AI is cheaper. Submission doesn't help either: "Look at my LinkedIn, I have an AI certificate / I support my clients in working with AI" / "I find this development totally exciting."

Saw & still see too few sustainable corporate business models for & . "I believe the bubble will burst in a few years - there is too much money, too much energy in the system. The dominance of the current corporations, which collect all data globally, will end. Instead, we will move to decentralized open-source applications and AI systems," Blume said on Thursday at the Bodensee Business Forum of the "Schwäbische Zeitung" in Friedrichshafen. (2/2) https://www.newsroom.de/news/aktuelle-meldungen/multimedia-9/medienethiker-blume-monopole-von-google-meta-und-x-werden-enden-975689/

Essential reading for understanding the influence that The Extropians' outlandish ideas have had on many Big Tech CEOs' obsession with AGI.

https://www.technologyreview.com/2025/10/30/1127057/agi-conspiracy-theory-artifcial-general-intelligence/

"Watching tech moguls throw caution to the wind in the AI arms race or equivocate on whether humanity ought to continue, it’s natural to wonder whether they care about human lives.

The earnest, in-depth answer to this question is just as bleak as the glib one. As moral philosopher Émile Torres argues, many Silicon Valley leaders embrace a vision of a transhumanist future in which biological humans will be replaced by digital beings endowed with superintelligence. This vision helps explain their obsession with artificial general intelligence (AGI) and sits at the core of what Torres describes as human extinctionist preferences.

In 2023, Torres and his colleague Timnit Gebru coined the acronym TESCREAL to describe a constellation of ideologies — Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism — that have become highly influential within Silicon Valley. Torres is a philosopher, intellectual historian, and journalist whose work focuses on …

Reading 'Empire of AI' by Karen Hao.

2 things stand out in the first 3 chapters alone:

1) From the new 2018 for-profit charter, this definition of AGI: "highly autonomous systems that outperform humans at most economically valuable work"

2) From Karen's interview that same year, OpenAI's reasoning for wanting to build AGI: "We think it can help solve complex problems that are just out of reach of humans" - like climate change and health care.

Hank Green applying sentience to AI and warning of AGI. This doesn't seem great.

I mean, it makes sense, Hank is very much in his white-guy tech bubble like he is in his white-guy political bubble.

Still, Hank has access to a lot of non-tech-savvy people, and he's parroting talking points that tech CEOs love to hear. It would be great if he made room to platform someone like @timnitGebru for his audience.

https://youtu.be/90C3XVjUMqE?si=zHzHiAoQb3q1T9e8

"The 85-year-old computer pioneer Alan Kay, a revered figure in the industry, offered some perspective. He argued that AI could undoubtedly bring real benefits. Indeed, it had helped detect his cancer in an MRI scan. “AI is a lifesaver,” he said.

However, Kay worried that humans are easily fooled and that AI companies cannot always explain how their models produce the results. Software engineers, like aeroplane designers or bridge builders, he said, had a duty of care to ensure their systems did not cause harm or fail. The main theme of this century should be safety.

The best way forward would be to harness humanity’s collective intelligence, steadily amassed over generations. “We already have artificial superhuman intelligence,” Kay said. “It is science.” AI has already produced some exquisite breakthroughs, such as Google DeepMind’s AlphaFold model that predicted the structures of over 200mn proteins — winning the researchers a …

"Don’t believe the hype and hyperbole. AGI is not imminent. The large language models (LLMs) that power current “AI” — a marketing term coined in 1955 to attract funding from the US war machine — will never by themselves lead to AGI. Scaling up these models will not yield superintelligent machines that take over the world or usher in “utopia.”

Even Richard Sutton, an anti-democracy pro-extinctionist who won the Turing Award last year for his contributions to reinforcement learning, now “contends that LLMs are not a viable path to true general intelligence.” He concedes that they are a “dead end.”

This is exactly what Gary Marcus has been arguing for years. In his 2019 book Rebooting AI, Marcus and his coauthor contend that neural networks (e.g., LLMs) are “forever stuck in the realm of correlations.” Consequently, they will “never, with any amount of data or compute, be able …

The development of today's AI is just one more step toward achieving AGI. Silicon Valley's main goal is to attain AGI through the colonization of humanity. This technology is not harmless, nor is it a "tool"; it is an instrument that sacrifices the present in pursuit of a utopian future.

Here is the Spanish translation, produced by Arte es Ética, of the report "The TESCREAL bundle" by Timnit Gebru and Émile P. Torres.

https://arteesetica.org/el-paquete-tescreal/

Big Tech's money-printing machine

Apple, Amazon, Meta, and NVidia are regarded as the stars of the stock market. Their shares keep rising and rising. Investors scramble for them. That gives them seemingly unlimited possibilities - as long as the hype holds. Author and net activist Cory Doctorow explains how Big Tech's money-printing machine works.

👉 https://kaffeeringe.de/2025/08/27/aktien-als-gelddruckmaschinen-fuer-big-tech/