6 Minutes Ago: Elon Musk Shared Terrifying Message

Submitted by fgoa on Sun, 11/27/2022 - 17:35

Could artificial superintelligence destroy humanity? Musk shared a terrifying message:

Video file

 

Transcript:

Elon Musk: We're headed towards digital superintelligence that far exceeds any human. I think it's very obvious.

We're headed towards either superintelligence or civilization ending.

Elon Musk: I would argue that AI is unequivocally something that has the potential to be dangerous to the public. Digital intelligence will be able to outthink us in every way, and it'll soon be able to simulate what we consider consciousness, to the degree that you would not be able to tell the difference. And we'll not be able to be smarter than a digital supercomputer. So therefore, if you can't beat them, join them! As the algorithms and the hardware improve, that digital intelligence will exceed biological intelligence by a substantial margin. It's obvious.

When you say they'll exceed human intelligence at some point soon, the machine's gonna be smart... not just smarter, like exponentially smarter than any of us?

Elon Musk: Ensuring that the advent of AI is good, or at least that we try to make it good, seems like a smart move.

But we're way behind on that?

Elon Musk: Yes, we're not paying attention.

We worry more about...

Elon Musk: ...what name somebody called someone else, than whether AI will destroy humanity.

We're like children in a playground. This could be a huge problem for society. What are the scenarios that scare you most?

Elon Musk: Humanity really has not evolved to think of existential threats in general. We're evolved to think about things that are very close to us, near term, to be upset at other humans, and not really to think about things that could destroy humanity as a whole. But then in recent decades, really just in the last century, we got nuclear bombs, which could potentially destroy civilization. Obviously. We've had AI, which could destroy civilization. We have global warming, which could destroy civilization, or at least severely disrupt civilization.

How could AI destroy civilization?

Elon Musk: You know, it would be something the same way that humans destroy the habitat of primates. It wouldn't necessarily be destroyed, but they might be relegated to a small corner of the world. When Homo sapiens became much smarter than other primates, they pushed all the other ones into small habitats. They were just in the way. You could make a swarm of assassin drones for very little money, just by taking the Face ID chip that's used in cell phones, a small explosive charge, and a standard drone, and having them do a grid sweep of the building until they find the person they're looking for, ram into them, and explode. You could do that right now. No new technologies needed. Probably a bigger risk than being hunted down by a drone is that AI would be used to make incredibly effective propaganda of a kind we've not seen.

There are deepfakes?

Elon Musk: Yeah. Influence the direction of society, influence elections. Artificial intelligence: how is the message doing? It looks at the feed, looks at the feedback, makes the message slightly better within milliseconds; it can adapt the message, shift it, and react to news. And there are so many social media accounts out there that are not people. How do you know it's a person or not a person?

People in the AI community refer to the advent of digital superintelligence as a singularity. That's not to say that it's good or bad, but that it's very difficult to predict what will happen after that point. There's some probability it will be bad, some probability it will be good. We obviously want to affect that probability and have it be more good than bad. The point at which we have digital superintelligence is when we pass the singularity and things become very uncertain. It doesn't mean that they're necessarily bad or good, but at the point at which we pass the singularity, things become extremely unstable. So we want to have a human brain interface before the singularity, or at least not long after it, to minimize the existential risk for humanity and consciousness as we know it.

Another point that I think is really important to appreciate is that all of us are already cyborgs. You have a machine extension of yourself in the form of your phone and your computer and all your applications. You're already superhuman. By far, you have more power, more capability than the President of the United States had 30 years ago. If you have an internet link, you have an oracle of wisdom. You can communicate to millions of people, you can communicate to the rest of Earth instantly. I mean, these are magical powers that didn't exist not that long ago. So everyone is already superhuman and a cyborg.

The limitation is one of bandwidth. We're bandwidth-constrained, particularly on output. Our input is much better, but our output is extremely slow. If you want to be generous, you could say it's maybe a few hundred bits per second, or a kilobit or something like that, of output. The way we output is we have our little meat sticks that we move very slowly to push buttons or tap a little screen, and that's just extremely slow. Compare that to a computer, which can communicate at the terabit level; that's a very big, orders-of-magnitude difference. Our input is much better because of vision, but even that could be enhanced significantly.
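A minimal back-of-the-envelope sketch of the gap being described here, using the rough figures from the interview (about a kilobit per second of human output versus terabit-level machine links; both are ballpark assumptions, not measurements):

```python
import math

# Ballpark figures from the interview (assumptions, not measurements):
# human output via typing or tapping: ~1 kilobit per second (generous),
# machine-to-machine links: ~1 terabit per second.
human_output_bps = 1e3    # ~1 kbit/s
machine_link_bps = 1e12   # ~1 Tbit/s

ratio = machine_link_bps / human_output_bps
print(f"machine link / human output ~ {ratio:.0e}")          # ~1e+09
print(f"about {math.log10(ratio):.0f} orders of magnitude")  # ~9
```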
I think the thing that is needed for a future that we would look at and conclude is good, most likely, is that we solve that bandwidth constraint with a direct neural interface: a high-bandwidth interface to the cortex, so we can have a digital tertiary layer that's more fully symbiotic with the rest of us. We've got the cortex and the limbic system, which seem to work together pretty well; they've got good bandwidth, whereas the bandwidth to our digital tertiary layer is weak. So I think if we can solve that bandwidth issue, then AI can be widely available. The analogy to a nuclear bomb is not exactly correct. It's not as though it's going to explode and create a mushroom cloud. It's more like, if there were just a few people that had it, they would be able to be essentially dictators of Earth, you know, whoever acquired it. If it was limited to a small number of people and it was ultra-smart, they would have dominion over Earth. So I think it's extremely important that it be widespread and that we solve the bandwidth issue.

And if we do those things, then it will be tied to our consciousness, tied to our will, tied to the sum of individual human will, and everyone would have it. So it would still be a roughly even playing field. In fact, it would probably be more egalitarian than today.

The way in which regulation is put in place is slow and linear, and we're facing an exponential threat. If you have a linear response to an exponential threat, it's quite likely the exponential threat will win. That, in a nutshell, is the issue. I think we should try to take the set of actions that are most likely to make the future good for humans. I'm pro-human! My faith in humanity has been a little bit shaken this year, but I'm still pro-humanity!
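The point about a linear response to an exponential threat can be made concrete with a toy calculation. The growth factor and step size below are arbitrary placeholders chosen only to show the shape of the race, not figures from the interview:

```python
# Toy model: a capability that doubles every period versus a response
# that grows by a fixed amount per period. All numbers are arbitrary.
threat = 1.0
response = 10.0        # the responder even starts ahead
DOUBLING = 2.0         # exponential growth factor per period
STEP = 5.0             # linear increment per period

for period in range(1, 21):
    threat *= DOUBLING
    response += STEP
    if threat > response:
        print(f"exponential overtakes linear at period {period}")
        break
```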

 

Original Link: https://youtu.be/OkP9cwsEx0o