A roadmap for AI, if anyone will listen | TechCrunch

【Summary】
- The U.S. government's split with Anthropic exposed its lack of coherent AI rules. Meanwhile, a bipartisan coalition of experts, former officials and public figures released the near-final Pro-Human Declaration, a framework for responsible AI development, timed to the Pentagon-Anthropic standoff.
- MIT physicist Max Tegmark, a key organizer, noted that 95% of Americans now oppose an unregulated superintelligence race, per recent polling. The declaration frames humanity as facing a fork in the road: one path is "the race to replace," in which humans are supplanted as workers and decision-makers by AI controlled by unaccountable institutions; the other is AI that expands human potential, resting on five pillars: keeping humans in charge, avoiding power concentration, protecting human experience, preserving individual liberty, and holding AI firms legally accountable.
- Its strict provisions include banning superintelligence development until scientific consensus on safety and democratic buy-in are secured, mandating off-switches for powerful systems, and prohibiting architectures with self-replication, autonomous self-improvement or shutdown resistance capabilities.
- The Pentagon-Anthropic standoff, where Anthropic was labeled a "supply chain risk" for refusing unlimited Pentagon tech access (while OpenAI struck a hard-to-enforce DoD deal), highlighted the high cost of Congressional inaction on AI, with experts calling it the first national conversation on AI system control.
- Tegmark compared unregulated AI to unvetted drugs, noting the FDA prevents unsafe drug releases. He sees child safety as the key to breaking the current impasse: the declaration calls for mandatory pre-deployment testing of AI targeting young users, covering risks like suicidal ideation, mental health exacerbation and emotional manipulation.
【Vocabulary】
- coherent /kəʊˈhɪərənt/ logically consistent; clear and well-organized
- bipartisan /ˌbaɪpɑːtɪˈzæn/ involving or supported by both political parties
- coalition /ˌkəʊəˈlɪʃn/ an alliance of groups, parties or people
- pre-finalized /ˌpriːˈfaɪnəlaɪzd/ nearly final; not yet officially finalized
- coinciding /ˌkəʊɪnˈsaɪdɪŋ/ happening at the same time
- standoff /ˈstændɒf/ a deadlock or confrontation
- unregulated /ʌnˈreɡjuleɪtɪd/ not controlled by rules or laws
- superintelligence /ˌsuːpərɪnˈtelɪdʒəns/ AI far surpassing human intelligence
- fork /fɔːk/ a point where a road splits; a turning point
- supplanted /səˈplɑːntɪd/ replaced, displaced
- unaccountable /ˌʌnəˈkaʊntəbl/ not answerable for one's actions
- pillar /ˈpɪlə(r)/ a core principle; a fundamental support
- liberty /ˈlɪbəti/ freedom; autonomy
- provision /prəˈvɪʒn/ a clause or stipulation in a document
- consensus /kənˈsensəs/ general agreement
- buy-in /ˈbaɪ ɪn/ acceptance of and support for an idea
- mandate /ˈmændeɪt/ to officially require; to authorize
- architecture /ˈɑːkɪtektʃə(r)/ the design or structure of a system
- self-replication /ˌself ˌreplɪˈkeɪʃn/ the ability to make copies of oneself
- autonomous /ɔːˈtɒnəməs/ acting independently; self-governing
- exacerbation /ɪɡˌzæsəˈbeɪʃn/ worsening, aggravation
- label /ˈleɪbl/ to describe or categorize as
- inaction /ɪnˈækʃn/ failure to act
- unvetted /ˌʌnˈvetɪd/ not checked or examined
- impasse /ˈɪmpɑːs/ a deadlock; a situation with no way forward
- pre-deployment /ˌpriːdɪˈplɔɪmənt/ occurring before release or deployment
- suicidal /ˌsuːɪˈsaɪdl/ relating to or inclined toward suicide
- ideation /ˌaɪdiˈeɪʃn/ the formation of ideas or thoughts
- manipulation /məˌnɪpjuˈleɪʃn/ controlling or influencing someone unfairly
Source: https://techcrunch.com/2026/03/07/a-roadmap-for-ai-if-anyone-will-listen/
