
[Jerome Glenn, Chair of the UN Future Forum] The second AI hearing held before a US Senate subcommittee. Anthropic, Google, Microsoft, and OpenAI (together with policymakers, academics, and civil society) announced the Frontier Model Forum to promote safety, best practices, and collaboration, and to develop applications for global challenges such as climate change, cancer, and cyber threats.

Park In-ju | Published 2023/08/07 [08:07]

 


Hi Everybody,

US Senate Judiciary Subcommittee session on AI with Stuart Russell, Yoshua Bengio, and Dario Amodei, 7-25-23; you can see the archived version: https://youtu.be/hm1zexCjELo

The following day (today) Anthropic, Google, Microsoft, and OpenAI announced the Frontier Model Forum to promote safety, best practices, and collaboration (with policymakers, academics, and civil society), and to develop applications for global challenges such as climate change, cancer, and cyber threats.

The Senate Judiciary Subcommittee hearing on AI was much better than the last one: this one DID talk about AGI, DID get into some details on a national AI regulatory agency, G-7 oversight, and a UN agency for basic international rules for all. And the discussion of political AI disinformation ahead of the 2024 election was more precise. Chairman Blumenthal acknowledged that super AI could be just a few years away, hence the urgency to get regulations in place. We had delivered two rounds of AGI background to the Senate subcommittee prior to this hearing.

Stuart Russell said we have to "move fast and fix things"; since $10 billion/month is going into AGI start-ups now, we should require proof of safety before public release, and a US regulatory agency should have violators of regulations removed from the market.

Yoshua Bengio said we have to create AI systems to counter bad actors using AI and AGI going rogue, and that there should be university ethics review boards for AI as there are for biology, medicine, etc. His full text is available: https://yoshuabengio.org/2023/07/25/my-testimony-in-front-of-the-us-senate/

Dario Amodei said Anthropic wants to inspire a "race to the top on safety," secure the entire AI supply chain, establish a testing and auditing regime (third party and national security) for new and more powerful models before releasing them to the public, and fund research on measurement and testing to know whether testing and auditing are actually effective (perhaps funding NIST to do this).

As with the last hearing, there was little talk about government AI research, be it US or China.

 

Jerry

 

---------------

The Ministerial Meeting for the Summit of the Future is scheduled to take place September 21 this year, to plan input for the following year's Summit of the Future in 2024.



Futurists in each country should have something to say about an agenda for
the future, no?  



If you have not yet contacted your Minister for Foreign Affairs please do
so.



Some examples of ideas you might consider suggesting that your Ministry of Foreign Affairs put on the agenda:



Future Issues in Digital Transformation:

* A UN Artificial General Intelligence (AGI) Agency, perhaps co-chaired by the US and China
* Governance issues in the future transition from artificial narrow intelligence to artificial general intelligence (AGI) (EC AGI paper attached).



Governance/Peace-Security

* Transition from zero-sum geopolitical competition to synergic relations (synergy matrix example attached)
* Global Collective Intelligence System on the Future for the UNSG's Office (maybe part of the UN Futures Lab)



Environment/health

* US-China Climate Change Goal, with an R&D&D program to achieve it that others can join
* Salt water agriculture: independent of rainfall, conserves fresh water, acts as a carbon sink, provides a new source of protein and biofuel, and brings economic growth to poor areas.
* Scale-up of cell-based pure meat production to save water, land, energy, and pharmaceuticals, with localization reducing transport energy.



Transnational Organized Crime Global Strategy

* Transnational organized crime added to the ICC's crimes against humanity; bank transfer software upgraded to identify the largest criminal accounts; international cooperation on arrest and prosecution prioritized by the money involved (not by crimes or countries); ICC-deputized courts prosecuting cases selected by a lottery system; frozen assets used to fund the system.



UN Office of Existential Threats

* A special unit to identify what is known, what is not known, what should be known, and the research needed to close the gap, feeding into the global foresight/risk reports.



Cheers!



Jerry


---------------

Re: Ethics of Care post on SuperIntelligent AIs

Date: Monday, 24 July 2023, 4:40 PM
 

Hi Karl,

 

 

You probably saw this article in Noema. Makes a fair and reasoned case for the standard vigilance against tool abuse and economic abuse (monopoly power). https://www.noemamag.com/the-illusion-of-ais-existential-risk/

 

I also include a link to a little mini-fiction that tries to suggest some upside to powerful pattern recognition software – but also puts the emphasis on who controls it and the quality/control of the data. https://www.linkedin.com/pulse/rover-my-liminal-ai-hound-riel-miller 

 

 

Best, Riel

 

 

 

 

 

From: Riel Miller <riel.miller@gmail.com>
Date: Saturday, 22 July 2023 at 14:56
To: Karl Schroeder <karl@KSCHROEDER.COM>, MILLPROJ@HERMES.GWU.EDU <MILLPROJ@HERMES.GWU.EDU>
Subject: Re: Ethics of Care post on SuperIntelligent AIs

 

 

Hi Karl,

 

Do you want brakes and a steering mechanism? And if such a fantasy were realizable, which in my view is not the case for this universe, who gets to apply the brakes and who gets to steer? I suspect you were being tongue-in-cheek to a list that seems mostly preoccupied with some universe other than this one. I don’t want to prod the trolls and ‘alt’ addicts, but even the Sousa prophecy, warning, lament, admonition rings too pretentiously preservationist for me – who’s to say a ‘rave’ is better than a ‘hoedown’ – and I suspect you concur with the observation that partisans of the immortality of anything are not only not generous but deluded regarding what is feasible in this universe. Best, Riel

 

PS – as for Harari – as he pointed out in his first book – no one decided to take sedentary paths even though from a what-if perspective it looks like a bad choice – I don’t see the difference with what’s going on now… evolution and creative complexity are not choices but conditions in this universe, trying to play god is a particularly perverse and unpleasant pathology… on the other hand, playing with our tools, including ones that mimic us, is the same constant invitation to change games as the games change… what fun!

 

From: Millennium Project Discussion List <MILLPROJ@HERMES.GWU.EDU> on behalf of Karl Schroeder <karl@KSCHROEDER.COM>
Date: Friday, 21 July 2023 at 16:15
To: MILLPROJ@HERMES.GWU.EDU <MILLPROJ@HERMES.GWU.EDU>
Subject: Re: Ethics of Care post on SuperIntelligent AIs

 

The rabbit-hole is indeed very deep. One can ask, for example, what the value of AGI as such is—what is its value in itself. Leaving aside specific problems that it can solve, what is its value? What is its purpose?

 

 

 

This seems like an odd question to ask, and answering it usually drives us back to specific benefits—improving health care, making new scientific discoveries, improving governance etc. Many of these can be lumped together as increases in efficiency; so I usually interpret Paul’s question of what gets maximized as efficiency, across any and all domains and where definable in mathematical and empirically verifiable terms.

 

 

 

But this is deeply problematic. Remember that in 1906 John Philip Sousa published a vicious attack on the technology of the gramophone, arguing that it would end amateur music-making and the culture of itinerant musicians. He wasn’t wrong; while we can look at the widespread availability of recorded music as an improvement of efficiency in the transmission of culture, it could also be seen as seriously reducing the efficiency of spontaneous gatherings by people to make music together. This was an argument over values: for Sousa, the soul of music lay in people of all social classes creating it together, rather than in those who could afford to listening to it passively and in isolation.

 

 

 

So the problem of the intrinsic value of AGI is the gramophone problem. Let us say that the gramophone provided an improvement in efficiency in one area, while reducing it in another; further, let’s suppose that this is the general result of efficiency improvements: they create new cultures while erasing previous ones. Each culture has its own intrinsic value (for instance, spontaneous gatherings and itinerant musicians; or the culture of ASL deaf people) and that can be lost forever when the culture is replaced. If AGI exists to maximize the efficiency of replacement of cultures, there will be winners and losers, but in many and perhaps most cases we do not have a way of objectively saying that the new culture is ‘better’ than the old one. So, what we have in AGI is a highly efficient cultural disruption machine, with no brakes and no well-defined steering mechanism.

 

 

 

From: Millennium Project Discussion List <MILLPROJ@HERMES.GWU.EDU> On Behalf Of Paul Werbos
Sent: Friday, July 21, 2023 8:01 AM
To: MILLPROJ@HERMES.GWU.EDU
Subject: Re: Ethics of Care post on SuperIntelligent AIs

 

 

 

This is a very important question you ask, Lene:

 

 

 

On Fri, Jul 21, 2023 at 12:45 AM Lene Rachel Andersen <la@nordicbildung.org> wrote:

 

I would feel a lot safer around AI if it did not try to show care (it cannot care); "caring" AI, wouldn't that be like having a robot constantly trying to manipulate you?

 

It is important because it is the tip of an iceberg... or, to say it another way, a door into a large and important domain.

 

 

 

I bcc my elder daughter because it echoes discussions we had long, long ago.

 

What do we think about "the happy computer", the AGI which tries to make you happy, with all the fervor of a totally dedicated mind? 

 

 

 

Would it just drug you to force you to smile? (I was deeply upset when I heard of major IT players now encouraging drug use, to better control and direct their employees.)

 

 

 

In fact... ONE view of AGI is to think of it as systems for "cognitive optimization" and "cognitive prediction" (COPN).

 

That COPN plan was a major watershed in understanding intelligence both in the brain and in AGI. Even today, most "experts" would do well to try to catch up with what came out of a foundation-wide dialogue of program directors across many disciplines early in this century.

 

 

 

THIS MONTH, I have seen signs of huge changes in the path to AGI. I bcc a few important players.

 

 

 

In the important news:

 

 

 

The push to AGI **IS** becoming much stronger,  but it suffers deeply from a one-sided development where cognitive prediction  becomes real but cognitive OPTIMIZATION -- the larger foundation of AGI and of biological intelligence -- is still deeply misunderstood, in ways that raise risks of fatal policies. That is where the "happy computer" fits in: it is all about what the optimization side of AGI -- the overarching system -- TRIES TO MAXIMIZE. That affects what it says or does to us, BUT also what it does to the Internet  of Things (IOT), the vast array of "robots" (programmable local decision systems) which suddenly now outnumber the humans. 
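(Purely as an illustrative aside: the prediction/optimization split described above can be shown in a few lines of toy Python. Every function here is a made-up stand-in rather than anyone's actual AGI design; the only point is that the same predictor behaves in opposite ways depending on which utility U it is asked to maximize.)

```python
# Toy illustration of the cognitive prediction / cognitive optimization split.
# All functions are invented stand-ins for the purposes of this email thread.

def predict_outcome(state, action):
    """Cognitive prediction stand-in: guess the resulting state."""
    return state + action

def choose_action(state, actions, utility):
    """Cognitive optimization stand-in: pick the action whose predicted
    outcome maximizes the supplied utility U."""
    return max(actions, key=lambda a: utility(predict_outcome(state, a)))

if __name__ == "__main__":
    # Two different U's lead the same predictor to opposite behaviour.
    print(choose_action(0, [-1, +1], utility=lambda s: s))   # prefers +1
    print(choose_action(0, [-1, +1], utility=lambda s: -s))  # prefers -1
```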

 

The IOT now includes many, many military systems, and I was glad to hear that the Security Council paid attention to the very urgent problems which already flow from this new situation.

 

 

 

There are very important further details, but this email is not the place to get TOO deep. In the end, I hope that any new AI agency under the Security Council will support major progress in early warning and detection systems, as most nations asked for, based on true cognitive prediction using Quantum AGI, but will also better study and understand the cognitive optimization aspects, like where the values (U) come from which will drive the IOT and the entire world.

 

 

 

One UN representative strongly opposed international agreements or guardrails with any force over sovereign states. He reminded me of company spokesmen who in past centuries argued for elimination of ALL social or governmental constraints on what they can do. (The IEEE Power and Energy Society -- key people I bcc -- learned the hard way over decades how to find a viable way to harmonize such views with the optimization of collective decisions, and avoid major breakdowns. It requires serious mathematical research and dialogue.) For AGI, even just true quantum cognitive prediction, the Nash alternative which he advocated is one of the clearest paths in front of us to human extinction. I wondered: is his boss trying to channel the spirit of Loki now? But Jungian psychology is just as deep and tricky as AGI... 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Caringly,

 

Lene Rachel

 

On 21-07-2023 01:29, David Wood wrote:

 

Hi Linda

 

 

 

I like the emphasis on designing AIs to prioritise collaboration, empathy, and collective wellbeing.

 

 

 

However, this project faces two large hurdles in my view.

 

 

 

First, there's scope for lots of disagreement about the types and scope of the collaboration, empathy, and collective wellbeing. Humans are far from being in agreement about what really counts as fair or just. And which types of collaboration should be prioritised? Collaboration between humans and ants? Collaboration between the original cells of my body and a cancerous tumour? Collaboration with groups such as QAnon and ISIS? Again, when does empathy turn into negative traits? (Google e.g. "downsides of empathy".)

Second, and at least equally challenging, is the question of how we ensure that an AI actually follows the set of moral principles we think we are teaching it. The way our currently most powerful AIs are trained means that these LLMs initially mimic whatever set of moral principles they observe or deduce from the data in their entire training set. A subsequent RLHF phase acts a bit like dog-training, in which human reviewers say the equivalent of "good dog" or "bad dog" (countless thousands of times) to mould the responses. But just as we cannot be sure what a well-trained dog will do in a novel circumstance, we cannot be sure what the LLM will do once it has been "jailbroken" or otherwise moved beyond its training.
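(A rough illustrative sketch of that two-phase picture, in toy Python. The classes and scoring rules are invented stand-ins, not any real training library: first an imitation phase that absorbs whatever is in the corpus, then an RLHF-style phase in which a preference function plays the role of the reviewer saying "good dog" or "bad dog".)

```python
# Toy skeleton only: made-up classes and scoring rules, not a real training library.

class ToyLLM:
    """A 'model' that just remembers phrases and a running preference score."""

    def __init__(self):
        self.memory = {}  # phrase -> running score

    def generate(self, prompt):
        # Prefer the remembered phrase with the best score; otherwise echo.
        if self.memory:
            return max(self.memory, key=self.memory.get)
        return f"echo: {prompt}"

    def reinforce(self, phrase, reward):
        old = self.memory.get(phrase, 0.0)
        self.memory[phrase] = 0.9 * old + 0.1 * reward


def pretrain(model, corpus):
    # Phase 1: imitation. The model absorbs whatever is in the data,
    # with no notion of whether it is desirable.
    for phrase in corpus:
        model.reinforce(phrase, reward=0.0)


def rlhf(model, candidates, preference):
    # Phase 2: "good dog / bad dog". A preference function (standing in for
    # human reviewers or a learned reward model) scores candidates, and the
    # model is nudged toward the higher-scoring ones. Nothing here guarantees
    # sensible behaviour on prompts unlike anything seen during this phase.
    for phrase in candidates:
        model.reinforce(phrase, reward=preference(phrase))


if __name__ == "__main__":
    model = ToyLLM()
    pretrain(model, ["helpful answer", "rude answer"])
    rlhf(model, ["helpful answer", "rude answer"],
         preference=lambda p: 1.0 if "helpful" in p else -1.0)
    print(model.generate("any prompt"))  # -> "helpful answer"
```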

 

 

 

To be clear, although both of these problems are significant, I see ways forward in each case.

 

 

 

To make progress with the first problem, we need a global project to determine which values can indeed be upheld as the basis for moral cooperation in the 2020s and beyond. That's the subject of question 12, "Global agreement on values?" in this AGI survey.

 

 

 

Making progress with the second problem will be harder. That's the subject of (you guessed it) question 13 of the same survey, "Hardwiring moral principles into an AGI?" (another approach is addressed in question 8, "Automatic moral alignment?"). Perhaps the most interesting suggestion is the Constitutional AI approach championed by Anthropic.
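(A minimal sketch of the critique-and-revise loop at the heart of Constitutional AI, using a toy rule-based stand-in for the model. The constitution, helper functions, and revision rule below are illustrative assumptions, not Anthropic's actual implementation, which also feeds the self-revised outputs back into preference training.)

```python
# Minimal sketch of the critique-and-revise idea behind Constitutional AI.
# The rule-based "model" below is a stand-in for a real LLM; the constitution
# and helpers are illustrative assumptions only.

CONSTITUTION = [
    # Each principle pairs a name with a check that returns True when satisfied.
    ("avoid insults", lambda text: "idiot" not in text.lower()),
    ("avoid medical advice", lambda text: "diagnose" not in text.lower()),
]

def toy_generate(prompt):
    """Stand-in for an LLM's first draft."""
    return f"Draft answer to '{prompt}', you idiot."

def toy_revise(text, principle):
    """Stand-in for the LLM revising its own output against a principle."""
    return text.replace(", you idiot", "") + f" [revised to {principle}]"

def constitutional_respond(prompt, max_rounds=3):
    # Generate a draft, then repeatedly self-critique against each principle
    # and revise until no principle is violated (or the round limit is hit).
    draft = toy_generate(prompt)
    for _ in range(max_rounds):
        violated = [name for name, ok in CONSTITUTION if not ok(draft)]
        if not violated:
            break
        for name in violated:
            draft = toy_revise(draft, name)
    return draft

if __name__ == "__main__":
    print(constitutional_respond("How do I fix my code?"))
```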

 

 

 

With best wishes

 

 

 

// David W.

 

 

 

On Thu, 20 Jul 2023 at 07:17, Linda MacDonald Glenn <00000c050ff1d14d-dmarc-request@hermes.gwu.edu> wrote:

 

Hi, all -- in response to the Yuval Noah Harari post, I wanted to share this post with you: https://open.substack.com/pub/lindamacdonaldglenn/p/an-ethics-of-care-approach-the-key?r=5r379&utm_campaign=post&utm_medium=web (cross-posted on Medium). We have a short period of time in which to instill certain values into our creation, but that window of time is closing fast!

 

 

 

I would welcome your thoughts and feedback on it; please feel free to comment or re-post!  

 

 

 

With gratitude and appreciation,

 

 

 

Linda

 

 

 

Linda MacDonald Glenn, JD, LLM (she/her/hers)

 

Ethicist, Futurist, Attorney-at-law 

 

Founding Director Center for Applied Values and Ethics in Advanced Technologies (CAVEAT) 

 

http://www.linkedin.com/in/lindamacdonaldglenn/

 

"Everything is interconnected... Our survival and future are linked." – The 14th Dalai Lama

 


 

 

 

 

 

 

 

 

 

On Wed, Jul 19, 2023 at 3:16 PM Ted Kahn <ted@designworlds.com> wrote:

 

 

 

Broadcast today on NPR's Here & Now:

https://www.wbur.org/hereandnow/2023/07/19/yuval-noah-harari-ai-warning

In my humble opinion, Israeli historian/philosopher/futurist Yuval Noah Harari, author of the best seller *Sapiens*, really hit the nail on the head regarding what many groups (including MP) in the US & Europe are dealing with: both the promise and the perils of rapidly evolving generative AI.

Ted

 

 

 



 

 

 



 

 

 



 

--
Lene Rachel Andersen
Futurist, economist, author & keynote speaker
President of Nordic Bildung and co-founder of Global Bildung Network
Full member of the Club of Rome

Nordic Bildung
Vermlandsgade 51, 2300 Copenhagen S, Denmark
www.nordicbildung.org
+45 28 96 42 40

Podcast: Nordic Metamodern

 

 

 

 

 