<https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space>
"Last week's spectacular OpenAI soap-opera hijacked the attention of millions
of normal, productive people and nonconsensually crammed them full of the fine
details of the debate between "Effective Altruism" (doomers) and "Effective
Accelerationism" (AKA e/acc), a genuinely absurd debate that was allegedly at
the center of the drama.
Very broadly speaking: the Effective Altruists are doomers, who believe that
Large Language Models (AKA "spicy autocomplete") will someday become so
advanced that they could wake up and annihilate or enslave the human race. To
prevent this, we need to employ "AI Safety" – measures that will turn
superintelligence into a servant or a partner, not an adversary.
Contrast this with the Effective Accelerationists, who also believe that LLMs
will someday become superintelligences with the potential to annihilate or
enslave humanity – but they nevertheless advocate for faster AI development,
with fewer "safety" measures, in order to produce an "upward spiral" in the
"techno-capital machine."
Once-and-future OpenAI CEO Altman is said to be an accelerationist who was
forced out of the company by the Altruists, who were subsequently bested,
ousted, and replaced by Larry fucking Summers. This, we're told, is the
ideological battle over AI: should we cautiously progress our LLMs into
superintelligences with safety in mind, or go full speed ahead and trust to
market forces to tame and harness the superintelligences to come?"
Via Linux Weekly News:
https://lwn.net/Articles/951633/
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics