导 读
在《联结:从石器时代到AI的信息网络简史》一书中,尤瓦尔·诺亚·哈拉瑞探讨了人类如何通过信息网络塑造历史,并对人类与AI的未来关系提出了深刻见解。
哈拉瑞认为,人类如何使用和误用信息,是历史走向的关键;而AI作为一种能够自主处理信息的“能动体”,可能会在决策中取代人类。
与哈拉瑞不同,本文作者阿里安娜·赫芬顿认为,AI不仅可能被用来利用人类的弱点,也可以用来强化人类的美德。借助AI的超个性化能力,我们可以更好地了解自己,从而完善个人和机构的自我修正机制。
哈拉瑞强调,技术本身并非决定性的,AI的未来取决于我们如何使用它。如果我们同时投资于技术发展和自我提升,AI将有助于我们成为更好的自己。反之,如果我们忽视自我发展,过度依赖技术,这将对人类产生不利影响。我们正处于人类与技术交汇的关键时刻,未来的决策将决定我们是走向新的希望,还是陷入不可挽回的错误。
原 文 Time
How AI Can Guide Us on the Path to Becoming the Best Versions of Ourselves
By Arianna Huffington October 8, 2024 Time
The Age of AI has also ushered in the Age of Debates About AI. And Yuval Noah Harari, author of Sapiens and Homo Deus, and one of our foremost big-picture thinkers about the grand sweep of humanity, history and the future, is now out with Nexus: A Brief History of Information Networks from the Stone Age to AI.
Harari generally falls into the AI alarmist category, but his thinking pushes the conversation beyond the usual arguments. The book is a look at human history through the lens of how we gather and marshal information. For Harari, this is essential, because how we use—and misuse—information is central to how our history has unfolded and to our future with AI.
In what Harari calls the “naïve view of information,” humans have assumed that more information will necessarily lead to greater understanding and even wisdom about the world. But of course, this hasn’t been true. “If we are so wise, why are we so self-destructive?” Harari asks. Why do we produce things that might destroy us if we can’t control them?
For Harari—to paraphrase another big-picture thinker—the fault, dear Brutus, is not in ourselves, but in our information networks. Bad information leads to bad decisions. Just as we’re consuming more and more addictive junk food, we’re also consuming more and more addictive junk information.
He argues that the problem with artificial intelligence is that “AI isn’t a tool—it’s an agent.” And unlike other tools of potential destruction, “AI can process information by itself, and thereby replace humans in decision making.” In some ways, this is already happening. For example, in the way Facebook was used in Myanmar—the algorithms had “learned that outrage creates engagement, and without any explicit order from above they decided to promote outrage.”
Where I differ with Harari is that he seems to regard human nature as roughly fixed, and algorithms as inevitably exploiting human weaknesses and biases. To be fair, Harari does write that “as a historian I do believe in the possibility of change,” but that possibility of change at the individual level is swamped in the tide of history he covers, with a focus very much on systems and institutions, rather than the individual humans that make up those institutions.
Harari acknowledges that AI’s dangers are “not because of the malevolence of computers but because of our own shortcomings.” But he discounts the fact that we are not defined solely by our shortcomings and underestimates the human capacity to evolve. Aleksandr Solzhenitsyn, who was no stranger to systems that malevolently use networks of information, still saw the ultimate struggle as taking place within each human being: “The line separating good and evil,” he wrote, “passes not through states, nor between classes, nor between political parties either—but right through every human heart—and through all human hearts.”
So yes, AI and algorithms will certainly continue to be used to exploit the worst in us. But that same technology can also be used to strengthen what’s best in us, to nurture the better angels of our nature. Harari himself notes that “alongside greed, hubris, and cruelty, humans are also capable of love, compassion, humility, and joy.” But then why assume that AI will only be used to exploit our vices and not to fortify our virtues? After all, what’s best in us is at least as deeply imprinted and encoded as what’s worst in us. And that code is also open source for developers to build on.
Harari laments the “explicit orders from above” guiding the algorithms, but AI can allow for very different orders from above that promote benevolence and cooperation instead of division and outrage. “Institutions die without self-correcting mechanisms,” writes Harari. And the need to do the “hard and rather mundane work” of building those self-correcting mechanisms is what Harari calls the most important takeaway of the book. But it’s not just institutions that need self-correcting mechanisms. It’s humans, as well. By using AI, with its power of hyper-personalization, as a real time coach to strengthen what is best in us, we can also strengthen our individual self-correcting mechanisms and put ourselves in a better position to build those mechanisms for our institutions. “Human life is a balancing act between endeavoring to improve ourselves and accepting who we are,” he writes. AI can help us tip the balance toward the former.
Harari raises the allegory of Plato’s Cave, in which people are trapped in a cave and see only shadows on a wall, which they mistake for reality. But the technology preceding AI has already trapped us in Plato’s Cave. We’re already addicted to screens. We’re already completely polarized. The algorithms already do a great job of keeping us captive in a perpetual storm of outrage. Couldn’t AI be the technology that in fact leads us out of Plato’s Cave?
As Harari writes, “technology is rarely deterministic,” which means that, ultimately, AI will be what we make of it. “It has enormous positive potential to create the best health care systems in history, to help solve the climate crisis,” he writes, “and it can also lead to the rise of dystopian totalitarian regimes and new empires.”
Of course, there are going to be plenty of companies that continue to use algorithms to divide us and prey on our basest instincts. But we can also still create alternative models that augment our humanity. As Harari writes, “while computers are nowhere near their full potential, the same is true of humans.”
As it happens, it was in a conversation with Jordan Klepper on The Daily Show that Harari gave voice to the most important and hopeful summation of where we are with AI: “If for every dollar and every minute that we invest in developing artificial intelligence, we also invest in exploring and developing our own minds, it will be okay. But if we put all our bets on technology, on AI, and neglect to develop ourselves, this is very bad news for humanity.”
Amen! When we recognize that humans are works in progress and that we are all on a journey of evolution, we can use all the tools at our disposal, including AI, to become the best versions of ourselves. This is the critical point in the nexus of humanity and technology that we find ourselves in, and the decisions we make in the coming years will determine if this will be, as Harari puts it, “a terminal error or the beginning of a hopeful new chapter in the evolution of life.”
词汇词块积累
alarmist adj. 危言耸听的
category n. 类别
naïve adj. 天真的;幼稚的
self-destructive adj. 自我毁灭的
agent n. 能动体;主体
bias n. 偏见;偏向
unfold v. 展开;发展
addictive adj. 使人上瘾的
shortcoming n. 缺点;不足
fortify vt. 强化;巩固
evolve vi. 进化;发展
vice n. 恶习;缺点
virtue n. 美德;优点
deterministic adj. 决定论的;起决定作用的
augment vt. 增强;提升
big-picture thinker 宏观思考者
information network 信息网络
junk information 垃圾信息
decision making 决策
human nature 人性
self-correcting mechanism 自我修正机制
tip the balance 使天平倾斜;打破平衡
Plato’s Cave 柏拉图的洞穴寓言(文中喻指人类被技术困在“虚假认知”中)
hyper-personalization 超个性化
balancing act 平衡之举
terminal error 致命失误
参 考 译 文
AI如何引导我们成为最好的自己
阿里安娜·赫芬顿 2024年10月8日 《时代周刊》
AI时代也开启了关于AI的辩论时代。《人类简史》和《未来简史》的作者尤瓦尔·诺亚·哈拉瑞,是当今着眼于人类、历史与未来之宏观全局的最重要思想家之一,他最近推出了新书《联结:从石器时代到AI的信息网络简史》。
哈拉瑞大体上属于对AI持警示态度的一派,但他的思考把这场讨论推向了常见论点之外。这本书从我们如何收集和整合信息的视角来审视人类历史。在哈拉瑞看来,这一点至关重要,因为我们如何使用——以及误用——信息,既是历史演进的核心,也是我们与AI的未来的核心。
在哈拉瑞所说的“信息的天真观”中,人类一直假定,更多的信息必然带来对世界更深的理解,甚至智慧。但事实显然并非如此。“如果我们如此智慧,为什么我们又如此自我毁灭?”哈拉瑞问道。为什么我们要制造那些一旦失控就可能毁灭我们的东西?
对哈拉瑞而言——套用另一位宏观思想家的话——亲爱的布鲁特斯,错不在我们自身,而在我们的信息网络。坏信息导致坏决策。正如我们在消费越来越多使人上瘾的垃圾食品一样,我们也在消费越来越多使人上瘾的垃圾信息。
他认为,人工智能的问题在于“AI不是工具——而是能动体”。与其他具有潜在破坏力的工具不同,“AI可以自行处理信息,从而在决策中取代人类”。在某种程度上,这已经在发生。例如Facebook在缅甸被使用的方式——算法“发现愤怒能制造参与度,在没有任何上层明确指令的情况下,它们自行决定推送愤怒内容”。
我与哈拉瑞的分歧在于:他似乎认为人性大体是固定的,而算法必然会利用人类的弱点和偏见。公平地说,哈拉瑞确实写道“作为历史学家,我确实相信改变的可能性”,但在他所描绘的历史大潮中,个体层面改变的可能性被淹没了——他的关注点主要放在系统和制度上,而非构成这些制度的一个个人。
哈拉瑞承认,AI的危险“不是因为计算机的恶意,而是因为我们自身的缺点”。但他忽视了一个事实:我们并不仅仅由缺点来定义;他也低估了人类进化的能力。亚历山大·索尔仁尼琴对恶意利用信息网络的体制并不陌生,但他仍然认为,终极的斗争发生在每个人的内心:“分隔善与恶的那条线,”他写道,“既不穿过国家,也不在阶级之间,更不在政党之间——它径直穿过每一个人的心,穿过所有人的心。”
所以,是的,AI和算法肯定会继续被用来利用我们最坏的一面。但同样的技术也可以用来强化我们最好的一面,滋养我们本性中更善良的天使。哈拉瑞本人也指出,“除了贪婪、傲慢和残忍,人类同样拥有爱、同情、谦卑和快乐的能力”。那么,凭什么假定AI只会被用来利用我们的恶习,而不会被用来巩固我们的美德?毕竟,我们最好的一面至少和最坏的一面一样,深深地烙印并编码在我们身上。而且这套“代码”同样是开源的,开发者可以在它的基础上继续构建。
哈拉瑞悲叹算法受“来自上层的明确指令”的支配,但AI也可以接受截然不同的上层指令——促进仁慈与合作,而非分裂与愤怒。“没有自我修正机制,制度就会消亡,”哈拉瑞写道。他说这本书最重要的启示,就是要去做构建这些自我修正机制的“艰苦而相当平凡的工作”。但需要自我修正机制的不只是制度,还有人类自身。借助AI的超个性化能力,把它用作强化我们最好一面的实时教练,我们也能增强个体的自我修正机制,从而更有能力为我们的制度建立这样的机制。“人类的生活是在努力自我完善与接纳自我之间的平衡,”他写道。AI可以帮助我们让天平向前者倾斜。
哈拉瑞提到了柏拉图洞穴的寓言:人们被困在洞穴里,只能看到墙上的影子,并误把影子当作现实。但AI之前的技术早已把我们困在了柏拉图的洞穴里。我们早已沉迷于屏幕。我们早已彻底两极分化。算法早已非常擅长把我们囚禁在一场永不停歇的愤怒风暴之中。难道AI不能反过来成为带领我们走出柏拉图洞穴的那项技术吗?
正如哈拉瑞所写,“技术很少起决定作用”,这意味着,AI最终会成为什么样子,取决于我们怎样使用它。“它拥有巨大的积极潜力,可以缔造史上最好的医疗体系,帮助解决气候危机,”他写道,“它也可能导致反乌托邦式极权政权和新帝国的崛起。”
当然,还会有许多公司继续利用算法来分裂我们、猎取我们最卑劣的本能。但我们仍然可以创建能够增强我们人性的替代模式。正如哈拉瑞所写,“计算机固然远未发挥其全部潜力,人类亦然”。
碰巧,正是在《每日秀》上与乔丹·克莱珀的对话中,哈拉瑞说出了对我们当前AI处境最重要、也最充满希望的总结:“如果我们在开发人工智能上每投入一美元、每投入一分钟,也同样投入一美元、一分钟去探索和发展我们自己的心智,那就不会有问题。但如果我们把所有赌注都押在技术、押在AI上,而忽视发展我们自己,那对人类来说就是非常坏的消息。”
阿门!当我们认识到人类是未完成的作品、我们都走在进化的旅途上时,我们就可以利用手头的一切工具——包括AI——成为最好的自己。我们正处在人类与技术交汇的关键节点,未来几年我们做出的决定,将决定这究竟会是——如哈拉瑞所说——“一个致命的错误,还是生命进化史上一个充满希望的新篇章的开始”。