
Admin | Published 2022/01/20 [09:48]

FLI January 2022 Newsletter

image by barrow_motion

FLI Opens the Worldbuilding Contest

The Future of Life Institute has launched a Worldbuilding competition with a prize purse worth more than $100,000. Individuals and teams are invited to design visions of a plausible, aspirational future that includes artificial general intelligence.

Worldbuilding is the art and science of constructing a coherent and relatively detailed fictitious world. It is frequently practised by creative writers and scriptwriters, providing the context and backdrop for stories that take place in future, fantasy or alternative realities. FLI's contest challenges entrants to use worldbuilding to explore possible futures for our own world.

The contest is designed to appeal broadly, with artistic components as well as more conceptual ones. Whether you're a writer, an economist, a scientist, an AI researcher, a student, a filmmaker, an expert in geopolitics or simply a Sci-Fi enthusiast, we believe there's something here for you.

FLI hopes to encourage people to start thinking about the future in more positive terms, particularly with regard to powerful new technologies. To steer a technology's trajectory in a positive direction, we need to know what we're aiming for, and to know what future we would most like, we must first imagine the kinds of futures we could plausibly have. Unfortunately, not nearly enough effort goes into imagining what a good future might look like; mainstream media tends to focus on the dystopias we could end up in. This contest seeks to change that.

Applications are due on 15th April 2022. If you'd like to attend a worldbuilding workshop or you need help finding team members, visit this page. For more information about the contest and to enter, visit this website.

The judges can't wait to see your entries!

We're hiring an Operations Specialist!

The Future of Life Institute has opened applications for a new job: we are looking for an Operations Specialist, a talented individual to help develop highly effective tools, workflows, and strategies for operations at FLI and at new non-profit organisations. The role is a six-month project, carried out independently under the supervision of FLI staff and in consultation with external experts such as lawyers and consultants, to research and build highly efficient operations machinery for new non-profit organisations. Applications are for a standalone project, but the job could evolve into a position at FLI or another organisation. The ideal candidate will be highly motivated, entrepreneurial, self-directed, experienced in operations work, very comfortable learning new tools, and very organised. Work will be primarily remote, though being based in the Bay Area is a mild advantage.

The first deadline for applications is February 1st, 2022. More information here.

Policy & Outreach Efforts

Fine-Tuning Definitions in the AI Act

The proposed AI Act prohibits 'practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness or exploit vulnerabilities of specific vulnerable groups' if these lead those persons to act in a way that causes psychological or physical harm. Risto Uuk, Policy Researcher at FLI, analysed this definition of manipulation and provided these recommendations to strengthen the proposed regulation.

Fallout from the CCW UN Review Conference

The Review Conference of the United Nations Convention on Certain Conventional Weapons (CCW) concluded in mid-December with no binding treaty agreed on autonomous weapons. Will Knight wrote in a piece for Wired that 2021 'may be remembered as the year when the world learned that lethal autonomous weapons had moved from a futuristic worry to a battlefield reality. It’s also the year when policymakers failed to agree on what to do about it.'

Emilia Javorsky, who leads FLI's advocacy against lethal autonomous weapons, was quoted in Politico EU, where she condemned the CCW forum as 'utterly incapable' of meaningfully addressing this problem. Luckily, FLI believes there is hope for action in 2022: as Javorsky says in the article, 'This may arise at the United Nations General Assembly or as an independent treaty process, as successfully occurred for land mines, cluster munitions'. There remains hope to ban Slaughterbots.

Dutch Minister Called to Recognise Risks from AI

In this new op-ed for the Dutch daily newspaper Trouw, our Director of European Policy, Mark Brakel, and Otto Barten, Director of the Existential Risk Observatory in the Netherlands, call on the new Dutch minister for digitisation, Alexandra van Huffelen, to recognise AI in future policy both as a huge opportunity and as an existential risk for humanity, that is, a risk with the potential to eliminate all, or at least a significant fraction, of humanity.

Wider Policy Team Updates

Carlos Ignacio Gutiérrez, Policy Researcher at FLI, has just become Co-Chair of the AI Policy Committee of IEEE USA. Carlos is currently working on FLI’s upcoming response to NIST’s AI Risk Management Framework.

Risto Uuk, Policy Researcher at FLI, recently co-authored a World Economic Forum report on Positive AI Economic Futures. The report highlights several potential positive futures that include shared economic prosperity, human-centric artificial intelligence, fulfilling jobs, and human flourishing.

Claire Boine, Senior Research Fellow at FLI, led FLI’s response to the European Commission’s consultation on adapting liability rules to the digital age and artificial intelligence. She also appeared on the French existential risks podcast The Flares, where she and Gaëtan Selle discussed AI as an existential risk, Max Tegmark’s Life 3.0, the trolley problem, algorithmic bias, chipmunks, and much more. Listen here.

 
Nominate Your Own Unsung Hero

Nominations for next year's Future of Life Award are still open. For the unfamiliar, this award is a $50,000-per-person prize given to individuals who, without having received much recognition at the time, have helped make today dramatically better than it would otherwise have been.

Past winners include Stanislav Petrov, who helped prevent an all-out US-Russia nuclear war; William Foege and Viktor Zhdanov, who played key roles in the eradication of smallpox; and most recently, Joseph Farman, Susan Solomon and Stephen Andersen, for their work in saving the Earth's ozone layer.

Why not nominate someone like this who inspires you, and share the message of hope? To nominate an unsung hero, please follow the link here. If we decide to give the award to your nominee, you will receive a $3,000 prize from us!

News & Reading

Politico's 'AI: Decoded' Draws Attention to AI Medical Issues

The latest issue of 'AI: Decoded' from Politico EU covered how a group of researchers from the World Health Organisation and the University of Montreal are drawing attention to the limitations of today's medical AI and the dangers of rushing to delegate healthcare roles to computing models. The research was published in this BMJ article, which urges AI developers to think harder about the real-life uses of AI systems in hospitals from the start. To this end, it recommends greater collaboration with doctors, but also encourages medical professionals to learn more about data science and machine learning. Of course, when applied carefully enough, medical AI systems could do enormous good. The Financial Times reported how data sets could 'help AI predict medical conditions earlier'. But here too, researchers warn 'there are no good models without good data'.
