AI's Bitter Lesson: Key Cases & Future
2025/07/04
Please organize several important cases in the development of artificial intelligence, analyze how they verify the core ideas proposed by Rich Sutton in The Bitter Lesson, and explore how this concept can affect the future development of artificial intelligence. Please cover the following aspects: - Computer Vision: From early methods based on manual feature engineering to the rise of deep learning, such as ImageNet and its impact on AI development. - Natural Language Processing: From rule-based approaches to large-scale data-driven models such as Transformer and ChatGPT. - Game AI: such as AlphaGo, AlphaZero, MuZero, and how these systems make breakthroughs through general learning algorithms. - Robotics and Autonomous Driving: From early expert systems to the application of modern deep learning methods. - Other related fields, such as AlphaFold, etc. Finally, please predict the future development of artificial intelligence based on the ideas of "The Bitter Lesson", especially as computing resources grow, and what new trends we may see.

Thank you for this research task. I will help you analyze how Rich Sutton's "The Bitter Lesson" has been validated over the course of AI's development.

Before starting, I would like to confirm a few points so I can better meet your needs:

  1. Which aspects should the analysis emphasize: the trade-off between algorithmic sophistication and computing power, the contrast between domain-specific knowledge and general learning methods, or something else?

  2. When forecasting AI's future, are you more interested in a particular time horizon (the next 5 years, the next 10 years, or the long term)?

  3. Should the report include possible limitations of, or criticisms directed at, the "bitter lesson" thesis?

Skip

Understood; I have received your research task.

It is Friday, July 4, 2025. As your deep-research assistant, I am pleased to plan this study of the "bitter lesson" in artificial intelligence.

Based on your specific requirements, I have drawn up a four-step overall research plan, designed to produce an in-depth report that is logically organized, well argued, and supported by visuals.

Research plan

Step 1: Build the theoretical foundation — understand "The Bitter Lesson" in depth

The goal of this phase is to grasp precisely what Rich Sutton's "bitter lesson" claims. We will first pin down its core argument: that general methods which exploit large-scale computation and data (such as search and learning) ultimately far outperform methods that rely on human expert knowledge and heuristic rules. We will also examine why the lesson is considered "bitter" and how it challenges researchers' habits of mind.

Key sources:

  • Primary text: Rich Sutton's original essay "The Bitter Lesson," available on his academic website (incompleteideas.net).
  • Authoritative commentary: papers at top AI venues (e.g., NeurIPS, ICML) that cite or discuss the essay, plus related articles on preprint servers such as arXiv.
  • Expert discussion: analyses of the idea on the official blogs and publications of leading AI researchers and labs (e.g., OpenAI, DeepMind).

Step 2: Deep case studies across domains — test the theory's generality

This phase will form the body of the report. We will trace the development of each AI field you specified and, by contrasting past and present, test whether the "bitter lesson" holds broadly. Each field will be analyzed along the arc "early methods → paradigm shift → validation of the theory."

  1. Computer vision (CV): from hand-engineered features such as SIFT to the rise of deep convolutional networks (e.g., AlexNet), catalyzed by the ImageNet competition.
  2. Natural language processing (NLP): from rule-based and statistical systems to large language models built on the Transformer architecture (e.g., ChatGPT).
  3. Game AI: from chess programs built on expert rules to AlphaGo, AlphaZero, and MuZero, which broke through using general reinforcement learning and search (Monte Carlo tree search).
  4. Robotics and autonomous driving: early expert-system control logic versus modern end-to-end deep learning for perception and decision-making.
  5. Other frontier fields: AlphaFold as an example of general learning methods surpassing traditional approaches on the hard scientific problem of protein structure prediction.

Key sources:

  • Milestone papers: the original publications for each field's key models (e.g., AlexNet, "Attention Is All You Need," AlphaGo, AlphaFold), typically in Nature, Science, or top conferences.
  • Authoritative technical blogs and reports: technical reports and official blog posts from leading labs (e.g., Google AI, Meta AI, DeepMind).
  • Textbooks and surveys: classic textbooks and recent survey papers on computer vision, natural language processing, and reinforcement learning.

Step 3: Synthesis and forecasting

After the case studies, this phase will synthesize the findings. We will distill the pattern common to all cases and, building on the core idea of the "bitter lesson," make forward-looking predictions: which new research paradigms and technical trends may emerge as computing power continues its exponential growth, and what paths toward artificial general intelligence (AGI) look plausible.

Key sources:

  • Views of industry leaders: recent interviews, talks, and essays by prominent figures in AI (e.g., Geoffrey Hinton, Yann LeCun, Sam Altman).
  • Authoritative industry reports: e.g., Stanford's AI Index Report, for macro data on compute growth, model scale, and investment trends.
  • Technology media: in-depth analyses of future AI trends from outlets such as MIT Technology Review and WIRED.

Step 4: Assemble and present the report

Finally, we will integrate the results of the first three steps into the final report, with a clear introduction, solid case studies, a synthesis of insights, and a forward-looking conclusion. To improve readability and impact, we will create a set of visualizations, for example:

  • Timelines: the paradigm shift from "human knowledge" to "general computation" in each AI field.
  • Performance comparisons: quantified leaps between old and new methods on key metrics (e.g., the year-over-year drop in ImageNet error rates).
  • Concept diagrams: the core logic of the "bitter lesson."

This plan will let us complete your research task systematically and produce a high-quality report.

Start research

Deep Research

First, I need to understand the core content of Rich Sutton's "The Bitter Lesson."

Search

Rich Sutton The Bitter Lesson artificial intelligence
Search results (15 sources, including 知乎, 微博, 雪球, waytoagi.feishu.cn, 界面新闻, 智源社区, and UT Austin): Chinese-language summaries and a translated response essay converge on the essay's core claim — across 70 years of AI research, researchers keep building human knowledge into their agents and gain short-term wins, but general methods backed by orders-of-magnitude more computation eventually win decisively, often displacing the hand-built approaches within 10-20 years.

I need to locate the original text of "The Bitter Lesson" to understand its core ideas more fully. Let me search for a link to it.

Search

Rich Sutton The Bitter Lesson original article incompleteideas.net
Search results (15 sources): several pages link the original essay at http://www.incompleteideas.net/IncIdeas/BitterLesson.html (dated March 13, 2019), along with Chinese translations and a 2018 talk, rediscovered on video platforms, that contains its first public presentation.

I need the original article for the most accurate account. According to the search results it is hosted on Sutton's personal website, so let me try opening the link directly.

It appears the site cannot be reached directly. Let me instead gather more detail on "The Bitter Lesson" through search so I can understand its core ideas fully.

Search

Rich Sutton The Bitter Lesson full text pdf
Search results (16 sources): largely the same summaries, translations, and video links as the previous query; no new primary material beyond the original essay and its Chinese translations.

Information gathered

  1. The core claim of "The Bitter Lesson": initial research confirms Sutton's central thesis — over the long run, general methods that exploit large-scale computation and data (such as search and learning) far outperform methods that rely on human expert knowledge and hand-crafted features. The underlying driver is the exponential growth of computing power (Moore's law).
  2. Why it is "bitter": the lesson challenges researchers' wish that their own insight and ingenuity be the engine of progress, asserting instead the overwhelming advantage of "brute-force" computation and general-purpose algorithms.
  3. Sutton's own examples: the essay cites early cases from computer chess, Go, speech recognition, and computer vision — for instance, deep search winning at chess, and statistical methods (such as HMMs) overtaking linguistics-based approaches in speech recognition.
  4. Dissenting views: there is also critical commentary. Rodney Brooks, for example, notes that even successful deep learning models (such as CNNs) embed human-designed inductive biases (like translation invariance) and are not knowledge-free. This gives the report a more balanced, dialectical perspective.

Next steps

  1. Systematic case studies: following the plan, examine each specified field in turn, starting with computer vision — the shift from hand-engineered features such as SIFT to deep learning methods (especially AlexNet), with the ImageNet competition as the turning point — and collect key data (e.g., the year-over-year drop in error rates) to quantify the leap.
  2. Tracing technical lineages: lay out each field's evolution clearly. In NLP, trace the path from rule-based systems to the Transformer architecture and ChatGPT; in game AI, show how AlphaGo, AlphaZero, and MuZero progressively shed human prior knowledge in favor of more general learning and search.
  3. Cross-domain synthesis: after the case studies, distill the common pattern and predict future trends — in particular, where the "bitter lesson" may repeat as compute keeps growing, and what this implies for AGI.
  4. Authoritative sourcing: to ensure depth and accuracy, prioritize:
    • Top journals and conferences: milestone papers in Nature, Science, NeurIPS, ICML, CVPR, etc. (e.g., the AlexNet, Transformer, AlphaGo, and AlphaFold papers).
    • Official publications of leading labs: blogs and technical reports from DeepMind, OpenAI, Google AI, etc., for authoritative accounts of model design and impact.
    • Industry reports: e.g., Stanford's AI Index Report, for macro data on compute, model scale, and performance trends.

First, let us examine the development of computer vision to test the "bitter lesson."

Search

computer vision evolution from SIFT to deep learning ImageNet AlexNet
Search results (16 sources): surveys of computer vision's deep learning turn — AlexNet, proposed in 2012 by Geoffrey Hinton's student Alex Krizhevsky, won that year's ILSVRC (the ImageNet Large Scale Visual Recognition Challenge), an event widely treated as the start of the deep learning era — plus general background on CNNs and on Fei-Fei Li's creation of the ImageNet dataset.

I need a closer look at computer vision's transition from traditional methods to deep learning, especially the specifics of SIFT and AlexNet.

Search

SIFT feature descriptor computer vision vs AlexNet deep learning revolution
Search results (16 sources): background on SIFT (Scale-Invariant Feature Transform), proposed by David Lowe at ICCV 1999 and refined in 2004. It detects scale-invariant keypoints in four steps (scale-space extrema detection, keypoint localization, orientation assignment, descriptor construction) and encodes each keypoint as a 128-dimensional gradient-histogram descriptor; it long ranked among the most robust local invariant descriptors for object recognition and matching.

I need a deeper understanding of the revolutionary impact of AlexNet and ImageNet on computer vision, and of their relation to the "bitter lesson."

Search

AlexNet ImageNet 2012 revolution deep learning vs traditional computer vision
Search results (14 sources): ImageNet comprises over 15 million labeled high-resolution images in roughly 22,000 categories, with the annual ILSVRC using a 1,000-category subset. AlexNet won the challenge held on September 30, 2012 with a 15.3% top-5 error rate, 10.8 percentage points below the runner-up — an event that marked the deep learning revolution and pushed NVIDIA GPUs to the center of AI training.

I need a deeper understanding of NLP's transition from traditional methods to deep learning, and of the impact of the Transformer and ChatGPT, to complete the analysis of how the "bitter lesson" plays out in that field.

Search

natural language processing evolution from rule-based to transformer ChatGPT
Search results (16 sources): overviews of NLP's trajectory — from rule-based and statistical techniques (tokenization, TF-IDF, word vectors) to deep learning — noting that within roughly five years the Transformer overturned nearly the entire NLP research paradigm and became the foundation for ChatGPT.

Information gathered

  1. The theory's core is settled: over AI's long run, general methods relying on large-scale computation and massive data (search and learning) ultimately outgrow and outperform specific methods built on expert knowledge and hand-written rules, driven by the exponential growth of compute (Moore's law).
  2. Computer vision confirms the paradigm: before 2012, the mainstream relied on hand-designed descriptors such as SIFT. The 2012 ImageNet competition was the watershed: AlexNet, a large deep convolutional network trained on massive data, decisively beat all traditional methods and opened the deep learning era — a clear victory of "learning" and "computation" over "human knowledge."
  3. NLP's revolutionary turn: NLP moved from early rule-based, linguistics-driven systems to statistical methods, and was then upended by deep learning. The Transformer (2017), a highly scalable and parallelizable general architecture, powered by huge datasets and compute, produced applications such as ChatGPT that far exceed traditional systems — again confirming the scalability advantage of general methods.

Next steps

  1. Examine game AI in depth: focus on DeepMind's milestone systems. Analyze how AlphaGo combined deep learning with Monte Carlo tree search to surpass traditional Go programs built on human game records and expert evaluation functions, then how AlphaZero and MuZero learned from scratch through pure self-play (search and learning alone), shedding dependence on human knowledge while reaching top-level play across multiple board games.
  2. Extend to robotics and autonomous driving: contrast early control methods based on intricate rules and expert systems with today's mainstream end-to-end deep learning, which learns perception, decision-making, and control directly from large-scale sensor data (e.g., camera, radar) rather than from exhaustive hand-written rule sets.
  3. Analyze AlphaFold and other scientific frontiers: show how AlphaFold learns from vast gene-sequence and protein-structure databases to predict protein 3D structure with accuracy far beyond traditional physics- and chemistry-based computation.
  4. Synthesize and forecast: with all case studies complete, distill the general pattern across AI's development and predict the new research paradigms and technical directions likely to emerge as compute keeps growing.

Key sources

  • Milestone papers: the original AlphaGo, AlphaZero, MuZero, and AlphaFold papers in Nature and Science.
  • Technical reports and blogs: official publications from DeepMind, Waymo, Tesla, etc., for first-hand accounts of system design and performance.
  • Top conference proceedings: frontier work on autonomous driving and robot learning at venues such as ICRA, RSS, and NeurIPS.
  • Leaders' views and macro reports: recent interviews and essays by Geoffrey Hinton, Yann LeCun, Sam Altman, etc., plus macro data from Stanford's AI Index Report to support the forecast.

Now, let us study the evolution of game AI — in particular AlphaGo, AlphaZero, and MuZero — to test the "bitter lesson."

Search

AlphaGo AlphaZero MuZero evolution reinforcement learning vs traditional game AI
Search results (14 sources): MuZero (DeepMind, 2019; published in Nature in 2020) plays Go, chess, and shogi without having been taught the rules and shows superhuman performance on 30+ Atari games; its researchers call it "an important step toward general-purpose algorithms." Other items recount AlphaGo's pairing of policy and value networks with advanced search, and AlphaGo Zero's replacement of human gameplay data with self-play.

I need more technical detail on AlphaGo, AlphaZero, and MuZero and their evolution to analyze more fully how they validate the "bitter lesson."

Search

AlphaGo vs AlphaGo Zero vs AlphaZero vs MuZero technical differences Nature paper
Search results (15 sources): technical notes on the lineage — AlphaGo (which beat Lee Sedol in early 2016) combined deep reinforcement learning with MCTS; AlphaGo Zero dropped human game records in favor of pure self-play; AlphaZero (Science, December 2018) replaced hand-written evaluation functions and move-ordering heuristics with a single deep network (p, v) = f_θ(s) that maps a board position s to move probabilities p and a value estimate v, learned entirely from self-play given only the rules, and within hours of training defeated the strongest existing programs in Go, chess (Stockfish), and shogi (Elmo); MuZero generalized further by learning a model of the environment itself.

Now I need to examine how robotics and autonomous driving moved from traditional methods to deep learning, to test the "bitter lesson" further.

Search

robotics autonomous driving evolution from expert systems to deep learning
Search results (16 sources): this query surfaced mostly tangential material (lab pages, workshop announcements, dictionary entries); the most relevant items describe a typical self-driving pipeline with separate perception components and note that large language models are now being leveraged inside autonomous driving systems.

Now I need to look at AlphaFold's breakthrough in protein structure prediction and how it validates the "bitter lesson."

Search

AlphaFold DeepMind protein structure prediction vs traditional methods Nature paper
Search results (16 sources): AlphaFold won CASP13 in 2018, and AlphaFold 2 shattered prior score plateaus at CASP14 (2020), an achievement described as solving biology's 50-year protein-folding problem. In partnership with EMBL-EBI, DeepMind released the AlphaFold DB covering 98.5% of human proteins, and Demis Hassabis has cited over 200 million predicted structures within about a year. AlphaFold 3 is reported as the first AI system to surpass physics-based tools for biomolecular structure prediction.

Information gathered

  1. The theory's core: over AI's long run, general methods relying on large-scale computation and data (search and learning) ultimately far outperform specific methods built on expert knowledge and hand-written rules, driven by exponentially growing compute.
  2. Computer vision: the shift from hand-designed features such as SIFT to AlexNet's decisive 2012 ImageNet win — powered by deep learning, big data, and GPU compute — is a canonical instance of the "bitter lesson."
  3. NLP: the path from rule-based systems to the Transformer architecture and ChatGPT shows a scalable general model, fed with massive data and compute, overthrowing methods grounded in linguistic knowledge.
  4. Game AI: the AlphaGo → AlphaZero → MuZero lineage systematically strips away human knowledge — first human game records (AlphaGo Zero), then even game-specific rules (MuZero) — relying entirely on general search and learning, which strongly supports the thesis.
  5. Robotics and autonomous driving: the field is moving from rule-heavy expert systems toward end-to-end deep learning that learns perception, decision-making, and control directly from large-scale sensor data.
  6. Scientific discovery: AlphaFold's deep learning on large biological datasets predicts protein structure far more accurately than traditional physics- and chemistry-based computation, extending the "bitter lesson" from engineering to basic science.

Next steps

  1. Finish the case studies and tighten the argument: complete the robotics, autonomous driving, and AlphaFold analyses, contrasting the architectures, performance, and scalability of "old" methods (expert systems, physics models) and "new" ones (end-to-end, data-driven learning), and state explicitly how each contrast validates the core claim.
  2. Synthesize and forecast: distill the cross-domain pattern and predict AI's future — in which new domains (e.g., materials science, climate modeling, economics) the "bitter lesson" may repeat as compute grows exponentially, and what new paradigms and AGI paths may emerge.
  3. Assemble the report with visuals: integrate everything into a coherent, well-structured report, supported by charts such as:
    • Timelines: each field's paradigm shift from "human knowledge" to "general computation," with key milestones.
    • Performance-leap comparisons: quantified gains on key metrics, e.g., ImageNet error rates and AlphaFold prediction accuracy.
    • A framework diagram: the core logic linking compute, data, general methods, and human knowledge.

Key sources

  • Industry leaders: recent interviews, talks, and essays by Geoffrey Hinton, Yann LeCun, Sam Altman, Demis Hassabis, etc., for forward-looking judgments.
  • Authoritative reports and journals: Stanford's AI Index Report, leading consultancies' AI trend reports, and review articles in Nature and Science, for macro data and analysis of compute growth, model scale, and applications.
  • Technology media: in-depth coverage of future AI trends from MIT Technology Review, WIRED, and similar outlets.

I will now integrate everything collected so far into a comprehensive report on how Rich Sutton's "bitter lesson" has been validated across the fields of AI.

The Bitter Lesson: How AI's Evolution Validates Sutton's Vision

Introduction

In 2019, Richard Sutton, a pioneer in reinforcement learning, published a short but profound essay titled "The Bitter Lesson." In this piece, he articulated a pattern observed throughout the 70-year history of artificial intelligence research: approaches that leverage computational power through general methods like search and learning consistently outperform approaches that rely on human knowledge and domain-specific engineering. This lesson is considered "bitter" because it challenges researchers' natural inclination to inject their expertise and intuition into AI systems, suggesting instead that creating systems that can learn from data and computation is ultimately more fruitful.

This report examines how the development of AI across multiple domains validates Sutton's thesis. We'll analyze the evolution from knowledge-based approaches to data-driven, computation-intensive methods in computer vision, natural language processing, game AI, robotics, autonomous driving, and scientific discovery. Finally, we'll explore how this principle might shape AI's future trajectory as computational resources continue to grow.

Computer Vision: From Feature Engineering to Deep Learning

The Era of Manual Feature Engineering

For decades, computer vision relied heavily on hand-crafted feature extractors designed by human experts. These approaches required researchers to carefully identify what visual patterns were important and explicitly encode how to detect them.

One of the most successful examples was the Scale-Invariant Feature Transform (SIFT), developed by David Lowe in 1999. SIFT was a sophisticated algorithm that could detect and describe local features in images, making it useful for object recognition, mapping, and navigation [博客]. It was remarkably effective for its time, becoming a standard tool in computer vision applications.

Other similar approaches included:

  • Histogram of Oriented Gradients (HOG)
  • Speeded Up Robust Features (SURF)
  • Local Binary Patterns (LBP)

These methods represented the pinnacle of human-engineered feature detection, requiring extensive domain knowledge and careful tuning. While effective for specific applications, they struggled with generalization to new domains and complex visual understanding tasks.
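To make the old pipeline concrete, here is a minimal sketch of classic SIFT-based matching with OpenCV (a hedged illustration: the image paths are placeholders, and cv2.SIFT_create requires OpenCV 4.4 or later):

    import cv2

    # Load two grayscale images to match (placeholder paths).
    img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

    # Detect keypoints and compute 128-dimensional SIFT descriptors.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Brute-force matching with Lowe's ratio test to discard ambiguous matches.
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    print(f"{len(good)} confident matches")

Every stage of this pipeline — the detector, the 128-dimensional descriptor, the 0.75 ratio threshold — encodes a human design decision; nothing is learned from data.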

The Deep Learning Revolution and ImageNet

The landscape of computer vision changed dramatically in 2012 when Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton introduced AlexNet, a deep convolutional neural network that achieved unprecedented performance in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [知乎]. AlexNet reduced the top-5 error rate from 26.2% (the previous best) to 15.3%, a 10.8-point improvement that sent shockwaves through the field [知乎].

What made AlexNet revolutionary wasn't just its performance but its approach. Rather than relying on hand-crafted features, AlexNet learned features directly from data. The model was trained on 1.2 million images and leveraged the computational power of GPUs to process this massive dataset efficiently [proceedings.neurips.cc].
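For contrast with the SIFT sketch above, here is a toy convolutional classifier in PyTorch. It is not AlexNet (only the first layer mirrors AlexNet's 11×11, stride-4 convolution); it simply illustrates the shift: the filters that used to be designed by hand are now free parameters fitted to data by gradient descent.

    import torch
    import torch.nn as nn

    class TinyConvNet(nn.Module):
        def __init__(self, num_classes: int = 1000):
            super().__init__()
            self.features = nn.Sequential(
                # Filter values start random and are learned, not designed.
                nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=3, stride=2),
                nn.Conv2d(64, 192, kernel_size=5, padding=2),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d((6, 6)),
            )
            self.classifier = nn.Linear(192 * 6 * 6, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(torch.flatten(self.features(x), 1))

    logits = TinyConvNet()(torch.randn(1, 3, 224, 224))  # shape: (1, 1000)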

The impact was immediate and profound. Within a few years, nearly all computer vision researchers had abandoned traditional feature engineering approaches in favor of deep learning. Each subsequent ILSVRC competition saw deeper networks with more parameters, trained on more data, achieving ever-improving results. By 2015, deep learning models had surpassed human-level performance on ImageNet classification [知乎].

This transition perfectly illustrates Sutton's "bitter lesson": despite decades of sophisticated feature engineering by experts, a more general approach that leveraged computation and learning ultimately proved far more effective. The success of deep learning in computer vision wasn't due to better encoding of human knowledge about visual perception, but rather to creating systems that could learn from data at scale.

Key Contributions of the Deep Learning Approach

The deep learning revolution in computer vision validated several aspects of Sutton's thesis:

  1. Scale matters: AlexNet and its successors demonstrated that performance improves with more data, more parameters, and more computation.

  2. General methods win: Convolutional neural networks provided a general framework for visual learning that could be applied across domains without task-specific engineering.

  3. End-to-end learning is powerful: Rather than separating feature extraction from classification, deep learning models learned the entire pipeline from raw pixels to final decisions.

  4. Transfer learning amplifies benefits: Pre-trained models on large datasets like ImageNet could be fine-tuned for specific tasks, making the benefits of scale available even for smaller applications.

The transition from SIFT to AlexNet represents a clear validation of the "bitter lesson" - the approach that better leveraged computation and data ultimately prevailed, despite initially appearing less sophisticated than carefully engineered alternatives.

Natural Language Processing: From Rules to Transformers

Rule-Based and Statistical Approaches

Natural language processing (NLP) has undergone a similar transformation. Early NLP systems were predominantly rule-based, relying on linguistic expertise encoded as explicit rules for grammar, syntax, and semantics. These systems required extensive human knowledge and were typically brittle, struggling to handle the ambiguity and variability of natural language.

The 1990s and early 2000s saw a shift toward statistical methods, such as Hidden Markov Models (HMMs) for speech recognition and statistical machine translation. These approaches were more data-driven but still incorporated substantial linguistic knowledge and feature engineering.

The Neural Revolution in NLP

The first wave of neural methods in NLP came with word embeddings like Word2Vec and GloVe, followed by recurrent neural networks (RNNs) and long short-term memory networks (LSTMs) for sequence modeling. These approaches began to reduce the need for linguistic feature engineering but still had limitations in handling long-range dependencies in text.

The Transformer Architecture and Large Language Models

The watershed moment for NLP came in 2017 with the introduction of the Transformer architecture in the paper "Attention is All You Need" by Vaswani et al. The Transformer replaced recurrent connections with self-attention mechanisms, allowing models to process entire sequences in parallel and better capture long-range dependencies [知乎].
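The core computation behind this shift is compact enough to write out. Below is a minimal sketch of scaled dot-product self-attention — the softmax(QK^T/√d_k)V formula from Vaswani et al. — with the learned query/key/value projections omitted for brevity:

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # pairwise token affinities
        scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
        return weights @ V                              # weighted mix of values

    # Five tokens with 16-dimensional embeddings; in a real Transformer,
    # Q, K, and V are learned linear projections of these embeddings.
    x = np.random.randn(5, 16)
    out = scaled_dot_product_attention(x, x, x)         # self-attention: (5, 16)

Because every token attends to every other token in one matrix multiplication, the whole sequence is processed in parallel — the property that let Transformers absorb far more data and compute than recurrent models.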

The Transformer architecture enabled a new generation of language models with unprecedented scale:

  1. BERT (2018): Pre-trained on massive text corpora using bidirectional context, revolutionizing performance across NLP tasks.

  2. GPT series: Each iteration scaled up parameters and training data, with GPT-3 (2020) containing 175 billion parameters trained on hundreds of billions of tokens.

  3. ChatGPT (2022): Combined large-scale pre-training with reinforcement learning from human feedback (RLHF), achieving remarkable conversational abilities.

These models demonstrated that scaling up model size, training data, and computation could produce increasingly capable language models without requiring more linguistic knowledge or task-specific engineering. In fact, the largest models began to exhibit emergent abilities not explicitly designed into them, such as few-shot learning and complex reasoning.

The Triumph of Scale and General Methods

The evolution of NLP systems provides perhaps the clearest validation of Sutton's thesis. Despite decades of linguistic research and carefully engineered NLP systems, the field has been transformed by general architectures that leverage massive computation and data. The Transformer architecture itself is remarkably general - the same basic structure works for language understanding, generation, translation, summarization, and even extends to other domains like computer vision and biology.

This transition wasn't merely incremental - large language models have achieved capabilities that seemed impossible just a few years ago, and they've done so not through more sophisticated linguistic engineering but through scale. As Sutton predicted, the approaches that best leveraged computation ultimately prevailed.

Game AI: From Expert Knowledge to Self-Play Learning

Traditional Approaches to Game AI

Game AI has traditionally relied heavily on domain-specific knowledge and heuristics. Chess programs like Deep Blue, which famously defeated world champion Garry Kasparov in 1997, used specialized hardware and extensive hand-crafted evaluation functions developed by chess experts [第一财经].

For games with higher complexity like Go, traditional approaches struggled. The game's vast search space (approximately 10^170 possible positions) made brute-force search impractical, and human experts found it difficult to formalize their intuitive understanding of good positions into explicit evaluation functions.

AlphaGo: The Beginning of a Revolution

DeepMind's AlphaGo, which defeated world champion Lee Sedol in 2016, represented a significant shift in approach. While AlphaGo still incorporated some domain knowledge, it relied much more on neural networks trained through a combination of supervised learning from human expert games and reinforcement learning through self-play [百度百科].

AlphaGo Zero: Removing Human Knowledge

The next iteration, AlphaGo Zero, took a more radical approach by eliminating all human expert data. It learned entirely through self-play, starting from random play and using only the basic rules of the game [cnblogs.com]. Despite this seemingly handicapped starting point, AlphaGo Zero surpassed the performance of the original AlphaGo after just three days of training [知乎].

This achievement dramatically validated Sutton's thesis - by removing human knowledge and relying more fully on general learning algorithms and computation, the system actually performed better.

AlphaZero: Generalizing Across Games

AlphaZero extended this approach to Go, chess, and shogi (Japanese chess), using identical learning algorithms and network architectures for all three games, with no game-specific adaptations beyond the basic rules [第一财经]. It achieved superhuman performance in all three, decisively defeating the strongest existing programs, including Stockfish in chess and Elmo in shogi [第一财经].

What makes AlphaZero remarkable is that it used a general algorithm - Monte Carlo Tree Search combined with deep neural networks - across different games without any game-specific engineering. In chess, it famously developed novel strategies and evaluations that contradicted centuries of human chess wisdom.
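The way the network guides the search can be sketched directly. In the AlphaZero family, child selection inside MCTS follows a PUCT-style rule that trades off the search's running value estimate Q(s, a) against the policy network's prior P(s, a); the snippet below is a simplified sketch (the Node fields and the c_puct constant are pared down from the papers):

    import math
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        prior: float                      # P(s, a) from the policy head
        visits: int = 0                   # N(s, a)
        value_sum: float = 0.0            # values backed up through this node
        children: dict = field(default_factory=dict)  # action -> Node

    def select_child(node: Node, c_puct: float = 1.5):
        # argmax over actions of Q(s, a) + c_puct * P(s, a) * sqrt(N(s)) / (1 + N(s, a))
        total = sum(c.visits for c in node.children.values()) or 1
        def score(item):
            _, child = item
            q = child.value_sum / child.visits if child.visits else 0.0
            return q + c_puct * child.prior * math.sqrt(total) / (1 + child.visits)
        return max(node.children.items(), key=score)

Note how little here is game-specific: the same selection rule works for Go, chess, or shogi once a policy/value network supplies the priors and values.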

MuZero: Learning Without Rules

MuZero took this progression to its logical conclusion by learning not just optimal play but the rules of the games themselves [百度百科]. Rather than being programmed with game rules, MuZero learned an internal model of the game dynamics through experience. This allowed it to master not just board games but also Atari video games with very different dynamics [百度百科].

MuZero represents the ultimate validation of Sutton's thesis in the domain of games - it shows that a general algorithm that learns from experience can outperform specialized systems, even without being given the basic rules of the environment it's operating in.
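Schematically, MuZero swaps the given simulator for three learned functions described in its Nature paper — a representation function h, a dynamics function g, and a prediction function f — and plans entirely inside the learned model. A toy sketch of the unrolling (the callables here are stand-ins, not the real networks):

    # h: observation            -> initial latent state s_0
    # g: (latent state, action) -> (next latent state, predicted reward)
    # f: latent state           -> (policy, value)
    def unroll(h, g, f, observation, actions):
        state = h(observation)
        trajectory = []
        for action in actions:
            policy, value = f(state)          # guides the tree search
            state, reward = g(state, action)  # imagined step; no simulator needed
            trajectory.append((policy, value, reward))
        return trajectory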

The Progression of Game AI Systems

The evolution from Deep Blue to MuZero shows a clear pattern of removing human knowledge while increasing reliance on general learning algorithms and computation:

  1. Deep Blue: Heavily engineered with chess-specific knowledge and specialized hardware.
  2. AlphaGo: Combined human expert data with learning algorithms.
  3. AlphaGo Zero: Eliminated human expert data but still used game rules.
  4. AlphaZero: Used identical algorithms across different games.
  5. MuZero: Learned game dynamics without being given the rules.

Each step reduced the role of human knowledge while increasing the role of general learning algorithms, and each step produced stronger performance - exactly as Sutton's "bitter lesson" would predict.

Robotics and Autonomous Driving: From Expert Systems to End-to-End Learning

Traditional Approaches in Robotics and Autonomous Driving

Robotics and autonomous driving have historically relied on modular pipelines with extensive human engineering:

  1. Perception: Hand-crafted algorithms for object detection, tracking, and scene understanding.
  2. Planning: Rule-based or optimization-based methods for path planning.
  3. Control: Classical control theory approaches for executing planned trajectories.

These systems incorporated extensive domain knowledge from robotics, computer vision, and control theory experts. While effective in controlled environments, they often struggled with the complexity and variability of real-world scenarios.

The Shift to Learning-Based Approaches

Recent years have seen a significant shift toward learning-based approaches in robotics and autonomous driving:

  1. Deep Learning for Perception: Convolutional neural networks have largely replaced traditional computer vision algorithms for tasks like object detection, segmentation, and depth estimation.

  2. End-to-End Learning: Some systems now learn directly from sensor inputs to control outputs, bypassing the traditional modular pipeline entirely (see the sketch after this list).

  3. Reinforcement Learning: RL algorithms allow robots to learn complex behaviors through interaction with their environment, reducing the need for explicit programming.
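A minimal sketch of the end-to-end idea in PyTorch — one network from camera pixels to a steering command, in the spirit of early end-to-end driving work; the architecture, input size, and output convention are illustrative, not any production system:

    import torch
    import torch.nn as nn

    # One differentiable map from a raw camera frame to a steering angle.
    # Perception, planning, and control are not separate modules; whatever
    # intermediate representation helps is learned implicitly from data.
    steering_policy = nn.Sequential(
        nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
        nn.Conv2d(24, 48, kernel_size=5, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        nn.Linear(48 * 4 * 4, 64), nn.ReLU(),
        nn.Linear(64, 1), nn.Tanh(),            # steering in [-1, 1]
    )

    frame = torch.randn(1, 3, 66, 200)          # one RGB frame (toy size)
    steering = steering_policy(frame)           # trained, e.g., by imitating human drivers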

Case Study: Autonomous Driving

The evolution of autonomous driving systems particularly illustrates the transition from knowledge-based to learning-based approaches:

Early autonomous vehicles, like those in the DARPA Grand Challenges (2004-2007), relied heavily on hand-engineered perception systems, explicit maps, and rule-based decision-making. These systems required extensive tuning for each new environment and struggled with unexpected scenarios.

Modern autonomous driving companies have increasingly adopted learning-based approaches:

  1. Tesla's Autopilot: Relies heavily on neural networks trained on vast amounts of real-world driving data collected from its fleet.

  2. Waymo: Combines traditional structured approaches with deep learning for perception and prediction.

  3. Research Directions: Academic research increasingly explores end-to-end learning, where neural networks learn to map directly from sensor inputs to steering and acceleration commands.

The Validation of Sutton's Thesis in Robotics

While the transition in robotics and autonomous driving is still ongoing, the trend clearly supports Sutton's thesis:

  1. Scale Matters: Systems with access to more data and computation consistently outperform more knowledge-engineered approaches.

  2. General Methods Win: The same deep learning architectures used in computer vision and NLP are proving effective for robotic perception and control.

  3. Reduced Reliance on Domain Knowledge: Modern systems rely less on explicit modeling of physics and more on learning from data.

The field of robotics presents some unique challenges that have made the transition slower than in areas like computer vision or NLP. Physical robots are expensive, data collection is difficult, and failures can be dangerous. Nevertheless, the direction is clear - as more data becomes available and simulation capabilities improve, learning-based approaches are increasingly dominant.

Scientific Discovery: AlphaFold and Beyond

Traditional Approaches to Protein Structure Prediction

Predicting the three-dimensional structure of proteins from their amino acid sequence has been a grand challenge in biology for over 50 years. Traditional approaches relied heavily on physics-based simulations and expert knowledge of biochemistry:

  1. Physics-Based Methods: Molecular dynamics simulations based on the laws of physics and chemistry.
  2. Homology Modeling: Using known structures of similar proteins as templates.
  3. Fragment Assembly: Building structures from smaller fragments with known conformations.

Despite decades of research and the dedication of thousands of scientists, these approaches achieved only limited success. Accurate structure prediction remained possible only for relatively simple proteins or those similar to already-solved structures.

AlphaFold: A Deep Learning Revolution in Structural Biology

In 2020, DeepMind's AlphaFold 2 achieved a breakthrough in the Critical Assessment of protein Structure Prediction (CASP) competition, producing predictions with accuracy comparable to experimental methods [机器之心]. This was widely hailed as solving a 50-year-old grand challenge in biology [个人图书馆].

What makes AlphaFold particularly relevant to Sutton's thesis is how it achieved this breakthrough:

  1. General Learning Approach: Rather than encoding more biochemical knowledge, AlphaFold used deep learning to identify patterns in the vast database of known protein structures.

  2. Scale: AlphaFold was trained on all publicly available protein structure data and leveraged evolutionary information from massive sequence databases.

  3. Computation: The system used significant computational resources both for training and for the iterative refinement of predictions.

Impact and Evolution of AlphaFold

The impact of AlphaFold has been profound. In partnership with the European Bioinformatics Institute, DeepMind has predicted and publicly released the structures of nearly all human proteins (98.5%) and hundreds of millions of proteins from other organisms [机器之心]. This has provided an invaluable resource for biological research, drug discovery, and understanding disease mechanisms.

The evolution of AlphaFold has continued with AlphaFold 3, which extends beyond proteins to predict the structures of complexes containing proteins, nucleic acids, small molecules, and other biological entities [每日经济新闻]. On benchmarks for predicting protein-molecule interactions, AlphaFold 3 is approximately 50% more accurate than the best traditional methods [每日经济新闻].

The Validation of Sutton's Thesis in Scientific Discovery

AlphaFold provides perhaps the most compelling validation of Sutton's thesis outside the traditional AI domains:

  1. Decades of Knowledge Engineering vs. General Learning: Despite 50 years of physics-based approaches incorporating detailed biochemical knowledge, a general deep learning approach ultimately proved far more effective.

  2. Computation as the Key: AlphaFold's success relied on leveraging computational power to learn from data rather than encoding more expert knowledge.

  3. Acceleration of Discovery: What would have taken centuries using traditional experimental methods has been accomplished in just a few years with AI [每日经济新闻].

AlphaFold demonstrates that Sutton's "bitter lesson" applies not just to traditional AI domains like games and perception, but also to fundamental scientific challenges. The approaches that best leverage computation and learning ultimately outperform those that try to encode more human knowledge, even in domains where that knowledge represents centuries of scientific progress.

Future Implications of the Bitter Lesson

Scaling Laws and Emergent Capabilities

Recent research has identified consistent scaling laws in AI, showing that performance improves predictably as we increase model size, dataset size, and computational resources. These scaling laws suggest that many capabilities may emerge naturally from larger models without requiring fundamental algorithmic breakthroughs.
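One widely cited form of these laws (Kaplan et al., 2020) models test loss as a power law in each resource; the sketch below uses the parameter-count law, with constants close to those reported in that paper but shown here purely as an illustration:

    # L(N) = (N_c / N) ** alpha_N: loss falls smoothly and predictably
    # as parameter count N grows, assuming data and compute stay ample.
    def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha_n: float = 0.076) -> float:
        return (n_c / n_params) ** alpha_n

    for n in (1e8, 1e9, 1e10, 1e11):
        print(f"N = {n:.0e}: predicted loss {predicted_loss(n):.3f}")

Analogous power laws hold for dataset size and training compute, which is what makes performance at larger scale forecastable before a model is trained.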

The emergence of capabilities like few-shot learning, complex reasoning, and multimodal understanding in large language models supports this view. These abilities weren't explicitly engineered but emerged as models scaled up, exactly as Sutton's thesis would predict.

Implications for AI Research Strategy

Sutton's "bitter lesson" suggests several strategic directions for AI research:

  1. Focus on General Methods: Research efforts should prioritize algorithms that can leverage increasing computation rather than domain-specific engineering.

  2. Build Infrastructure for Scale: Investments in computational infrastructure, efficient training methods, and large dataset collection will yield outsized returns.

  3. Embrace End-to-End Learning: Systems that learn directly from raw data to final outputs will ultimately outperform modular systems with hand-engineered components.

  4. Develop Better Learning Algorithms: The focus should be on improving how systems learn rather than what they learn.

Future Frontiers

Looking forward, we can expect the "bitter lesson" to play out in several emerging areas:

  1. Multimodal AI: Systems that integrate vision, language, audio, and other modalities through general learning approaches rather than modality-specific engineering.

  2. Scientific AI: Following AlphaFold's success, we'll likely see AI systems making breakthroughs in other scientific domains like materials science, drug discovery, and climate modeling.

  3. Robotics: As simulation capabilities improve and data collection becomes easier, end-to-end learning approaches will increasingly dominate robotics.

  4. Artificial General Intelligence: The path to AGI may rely more on scaling general learning systems than on engineering specific cognitive capabilities.

Potential Limitations and Counterarguments

While the "bitter lesson" has proven remarkably prescient, there are some potential limitations to consider:

  1. Computational Limits: Physical and economic constraints may eventually limit our ability to continue scaling computation indefinitely.

  2. Sample Efficiency: Pure learning-based approaches often require enormous amounts of data, which may be impractical in some domains.

  3. Safety and Interpretability: Systems that learn from data without encoded constraints may develop unexpected behaviors or be difficult to interpret.

  4. Hybrid Approaches: The optimal approach may ultimately combine learning with some degree of structure and prior knowledge.

Despite these considerations, the historical pattern across multiple domains strongly supports Sutton's core thesis: in the long run, approaches that leverage computation through general methods outperform those that rely on human knowledge engineering.

Conclusion

The development of artificial intelligence across multiple domains - from computer vision and natural language processing to games, robotics, and scientific discovery - provides strong validation for Rich Sutton's "bitter lesson." In each case, approaches that leveraged computation through general learning methods have ultimately outperformed approaches that relied on encoding human knowledge.

This pattern suggests a clear direction for the future of AI research and development: focus on building systems that can learn from data and experience, and that can effectively utilize increasing computational resources. As computation continues to grow exponentially, we can expect learning-based approaches to continue their dominance and to expand into new domains.

The lesson may indeed be "bitter" for researchers who have invested years in developing domain-specific knowledge and algorithms, but it points to an exciting future where AI systems can learn to solve problems that humans don't yet know how to solve themselves. The most powerful AI systems of the future will likely not be those that best encode human knowledge, but those that can most effectively learn from data and computation.
