[Jerome Glenn, Chair of the UN Future Forum] The second AI hearing was held before a US Senate subcommittee. Anthropic, Google, Microsoft, and OpenAI announced the Frontier Model Forum to promote safety, best practices, and collaboration (with policymakers, academics, and civil society), and to develop applications for global challenges such as climate change, cancer, and cyber threats.
Hi Everybody,
US Senate Judiciary Subcommittee session on AI with Stuart Russell, Yoshua Bengio, and Dario Amodei, 7-25-23; you can see the archived version here: https://youtu.be/hm1zexCjELo
The following day (today) Anthropic, Google, Microsoft, and OpenAI announced the Frontier Model Forum to promote safety, best practices, and collaboration (with policymakers, academics, and civil society), and to develop applications for global challenges such as climate change, cancer, and cyber threats.
The Senate Judiciary Subcommittee hearing on AI was much better than the last one: this one DID talk about AGI, and DID get into some details on a national AI regulatory agency, G-7 oversight, and a UN agency for basic international rules for all. And the discussion of political AI disinformation for the 2024 election was more precise. Chairman Blumenthal acknowledged that super AI could be just a few years away, hence the urgency to get regulations in place. We had delivered two rounds of AGI background to the Senate subcommittee prior to this hearing. Stuart Russell said we have to "move fast and fix things": as $10 billion/month is going into AGI start-ups now, we should require proof of safety before public release, and a US regulatory agency should have violators of regulations removed from the market. Yoshua Bengio said we have to create AI systems to counter bad actors using AI and AGI going rogue, and that there should be university ethics review boards for AI as there are for biology, medicine, etc. His full text is available: https://yoshuabengio.org/2023/07/25/my-testimony-in-front-of-the-us-senate/ Dario Amodei said Anthropic wants to inspire a "race to the top on safety," secure the entire AI supply chain, establish a testing and auditing regime (third-party and national security) for new and more powerful models before they are released to the public, and fund research on measurement and testing to know whether testing and auditing are actually effective (maybe funding NIST to do this). As with the last hearing, there was little talk about government AI research, be it US or China.
Jerry
---------------
Re: Ethics of Care post on SuperIntelligent AIs
Mon, Jul 24, 2023, 4:40 PM
Hi Karl,
You probably saw this article in Noema. It makes a fair and reasoned case for the standard vigilance against tool abuse and economic abuse (monopoly power). https://www.noemamag.com/the-illusion-of-ais-existential-risk/
I also include a link to a little mini-fiction that tries to suggest some upside to powerful pattern recognition software – but also puts the emphasis on who controls it and the quality/control of the data. https://www.linkedin.com/pulse/rover-my-liminal-ai-hound-riel-miller
Best, Riel
From: Riel Miller <riel.miller@gmail.com>
Hi Karl,
Do you want brakes and a steering mechanism? And if such a fantasy were realizable, which in my view is not the case for this universe, who gets to apply the brakes and who gets to steer? I suspect you were tongue-in-cheek to a list that seems mostly preoccupied with some universe other than this one. I don’t want to prod the trolls and ‘alt’ addicts, but even the Sousa prophecy, warning, lament, admonition rings too pretentiously preservationist for me – who’s to say a ‘rave’ is better than a ‘hoedown’? And I suspect you concur with the observation that partisans of the immortality of anything are not only not generous but deluded regarding what is feasible in this universe. Best, Riel
PS – as for Harari – as he pointed out in his first book – no one decided to take sedentary paths even though from a what-if perspective it looks like a bad choice – I don’t see the difference with what’s going on now… evolution and creative complexity are not choices but conditions in this universe, trying to play god is a particularly perverse and unpleasant pathology… on the other hand, playing with our tools, including ones that mimic us, is the same constant invitation to change games as the games change… what fun!
From: Millennium Project Discussion List <MILLPROJ@HERMES.GWU.EDU> on behalf of Karl Schroeder <karl@KSCHROEDER.COM>
The rabbit-hole is indeed very deep. One can ask, for example, what the value of AGI as such is—what is its value in itself. Leaving aside specific problems that it can solve, what is its value? What is its purpose?
This seems like an odd question to ask, and answering it usually drives us back to specific benefits—improving health care, making new scientific discoveries, improving governance etc. Many of these can be lumped together as increases in efficiency; so I usually interpret Paul’s question of what gets maximized as efficiency, across any and all domains and where definable in mathematical and empirically verifiable terms.
But this is deeply problematic. Remember that in 1906 John Philip Sousa published a vicious attack on the technology of the gramophone, arguing that it would end amateur music making and the culture of itinerant traveling musicians. He wasn’t wrong; while we can look at the widespread availability of recorded music as an improvement of efficiency in the transmission of culture, it could also be seen as seriously reducing the efficiency of spontaneous gatherings by people to make music together. This was an argument over values and for Sousa, the soul of music lay in people of all social classes creating it together, rather than in people who could afford to, listening to it passively and in isolation.
So the problem of the intrinsic value of AGI is the gramophone problem. Let us say that the gramophone provided an improvement in efficiency in one area, while reducing it in another; further, let’s suppose that this is the general result of efficiency improvements: they create new cultures while erasing previous ones. Each culture has its own intrinsic value (for instance, spontaneous gatherings and itinerant musicians; or the culture of ASL deaf people) and that can be lost forever when the culture is replaced. If AGI exists to maximize the efficiency of replacement of cultures, there will be winners and losers, but in many and perhaps most cases we do not have a way of objectively saying that the new culture is ‘better’ than the old one. So, what we have in AGI is a highly efficient cultural disruption machine, with no brakes and no well-defined steering mechanism.
From: Millennium Project Discussion List <MILLPROJ@HERMES.GWU.EDU> On Behalf Of Paul Werbos
This is a very important question you ask, Lene:
On Fri, Jul 21, 2023 at 12:45 AM Lene Rachel Andersen <la@nordicbildung.org> wrote:
It is important because it is the tip of an iceberg... or, to say it another way, a door into a large and important domain.
I bcc my elder daughter because it echoes discussions we had long, long ago.
What do we think about "the happy computer", the AGI which tries to make you happy, with all the fervor of a totally dedicated mind?
Would it just drug you to force you to smile? (I was deeply upset when I heard of major IT players now encouraging drug use, to better control and direct their employees.)
In fact... ONE view of AGI is to think of it as systems for "cognitive optimization" and "cognitive prediction", described in https://www.nsf.gov/pubs/2007/nsf07579/nsf07579.htm.
That COPN plan was a major watershed in understanding intelligence both in the brain and in AGI. Even today, most "experts" would do well to try to catch up with what came out of a Foundation-wide dialogue of program directors across many disciplines early in this century.
THIS MONTH, I have seen signs of huge changes in the path to AGI. I bcc a few important players.
In the important news:
The push to AGI **IS** becoming much stronger, but it suffers deeply from a one-sided development where cognitive prediction becomes real but cognitive OPTIMIZATION -- the larger foundation of AGI and of biological intelligence -- is still deeply misunderstood, in ways that raise risks of fatal policies. That is where the "happy computer" fits in: it is all about what the optimization side of AGI -- the overarching system -- TRIES TO MAXIMIZE. That affects what it says or does to us, BUT also what it does to the Internet of Things (IOT), the vast array of "robots" (programmable local decision systems) which now outnumber the humans.
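[Editor's note: a minimal toy sketch of the prediction/optimization split described above. All names, actions, and the hard-coded world model are hypothetical illustrations, not Werbos's or the COPN program's actual formulation; the point is only that the same predictor yields very different behavior depending on what the utility function U rewards, which is exactly the "happy computer" problem.]

```python
# Toy sketch: "cognitive prediction" forecasts outcomes; "cognitive
# optimization" picks the action maximizing a utility function U.
# The world model and utilities below are hypothetical illustrations.
from typing import Callable, Dict, List, Tuple

Outcome = Dict[str, float]
Utility = Callable[[Outcome], float]

# Cognitive prediction: what each action leads to, as (outcome, probability)
# pairs. A real system would learn this model; here it is hard-coded.
WORLD_MODEL: Dict[str, List[Tuple[Outcome, float]]] = {
    "drug_user": [({"smiles": 1.0, "wellbeing": -1.0}, 0.9),
                  ({"smiles": 0.0, "wellbeing": -1.0}, 0.1)],
    "help_user": [({"smiles": 0.6, "wellbeing": 0.8}, 0.7),
                  ({"smiles": 0.0, "wellbeing": 0.2}, 0.3)],
}

def expected_utility(action: str, U: Utility) -> float:
    """Score an action by its expected utility under U."""
    return sum(p * U(outcome) for outcome, p in WORLD_MODEL[action])

def choose(U: Utility) -> str:
    """Cognitive optimization: pick the action maximizing expected utility."""
    return max(WORLD_MODEL, key=lambda a: expected_utility(a, U))

# Everything hinges on where U comes from. A "happy computer" that
# maximizes observed smiles drugs the user; a U that values wellbeing
# does not -- same predictor, different optimization target.
print(choose(lambda o: o["smiles"]))     # -> drug_user
print(choose(lambda o: o["wellbeing"]))  # -> help_user
```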
The IOT now includes many, many military systems, and I was glad to hear that the Security Council paid attention to the very urgent problems which already flow from this new situation.
There are very important further details, but this email is not the place to get TOO deep. In the end, I hope that any new AI agency under the Security Council will support major progress in early warning and detection systems, as most nations asked for, based on true cognitive prediction using Quantum AGI, but will also better study and understand the cognitive optimization aspects, like where the values (U) come from which will drive the IOT and the entire world.
One UN representative strongly opposed international agreements or guardrails with any force over sovereign states. He reminded me of company spokesmen who in past centuries argued for elimination of ALL social or governmental constraints on what they can do. (The IEEE Power and Energy Society -- key people I bcc -- learned the hard way over decades how to find a viable way to harmonize such views with the optimization of collective decisions, and avoid major breakdowns. It requires serious mathematical research and dialogue.) For AGI, even just true quantum cognitive prediction, the Nash alternative which he advocated is one of the clearest paths in front of us to human extinction. I wondered: is his boss trying to channel the spirit of Loki now? But Jungian psychology is just as deep and tricky as AGI...