First deployed $ULTRA was sniped, dip ape?
0x6774238fcbb98c2169716f02734c1d47ea3f3e21
Forwarded from BDuez Journal's
JUST IN: TRUMP: "We are going to invade the Middle East and parts of Europe as well!"
Forwarded from Redacted Systems Bot | @RedactedSystemsBot
VitalikButerin Replied To SkyeSharkie
@SkyeSharkie we know lots of things that are clearly very against human values (eg. killing or imprisoning innocent people)
and the argument is that we're not good enough at alignment to ensure that a singleton superintelligent AI will avoid _even that_
Utah teapot (@SkyeSharkie)
How is AI alignment even a thing? Everyone is always asking "aligned to what" and no one ever has an answer other than "human values". Coherent, self-consistent human values are not a thing. I still don't get it. What are we researching? It's ridiculous that…
Forwarded from Redacted Systems Bot | @RedactedSystemsBot
VitalikButerin Replied To TheDevanshMehta
the basic case for why *p(doom) reduction* may be better achieved with open source is: superintelligent agency is the most dangerous thing, a more pluralistic environment where no single AI hyper-dominates everyone else makes the world less agentic
(where I depart from the usual "we'll have many competing ASIs" boosters is that they seem to think that this outcome will (i) happen by default as a result of capitalist market forces, and (ii) be automatically safe; I think both require intentional work including changes to legal incentives that may be inconvenient to big corporations)
Devansh Mehta (@TheDevanshMehta)
@SkyeSharkie @VitalikButerin Yeah it doesn't help that the current AI safety crowd thinks that open-source LLMs are incompatible with making aligned models :/
I've actually yet to see anyone put forth a steelman case for why alignment is better achieved via…