Preventing a Disaster: Examining Race Avoidance in the Development of… | by The AI Roadmap Institute | AI Roadmap Institute Blog

Several significant issues were raised during the workshop, among them the importance of distinguishing between different timescales and perspectives when creating roadmaps.

Roadmapping is subjective, and there are multiple approaches to building roadmaps. A key issue raised during the workshop was the variability of timescale: a roadmap built around short-term milestones will differ from a long-term roadmap, yet the two are interconnected. Rather than taking a definitive view on short-term versus long-term roadmaps, it may be more useful to treat timelines probabilistically. For example, what roadmap can be created if there is a 25% chance of general AI being developed within the next 15 years and a 75% chance of it arriving in 15-400 years?
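The workshop did not prescribe a formal model, but the probabilistic framing can be made concrete. The Python sketch below is a toy construction of our own: it takes the 25%/75% split above and adds the assumption, not stated in the text, that each branch is uniformly distributed. It then estimates how much probability mass a roadmap with a given planning horizon actually covers.

```python
import random

# Toy model (our construction, not from the workshop): AGI arrival time is
# drawn from a two-branch mixture -- a 25% chance of arrival within 15 years
# and a 75% chance of arrival in 15-400 years, each branch modeled as uniform.

def sample_arrival_year() -> float:
    """Draw one hypothetical AGI arrival time, in years from now."""
    if random.random() < 0.25:
        return random.uniform(0, 15)   # short-term branch
    return random.uniform(15, 400)     # long-term branch

def probability_within(horizon: float, n: int = 100_000) -> float:
    """Estimate P(arrival <= horizon) by Monte Carlo sampling."""
    hits = sum(sample_arrival_year() <= horizon for _ in range(n))
    return hits / n

if __name__ == "__main__":
    for horizon in (15, 50, 100, 400):
        print(f"P(AGI within {horizon:>3} years) ~ {probability_within(horizon):.2f}")
```

Under these assumptions, a 15-year roadmap covers only about a quarter of the probability mass, and even a 100-year roadmap covers less than half, which supports maintaining short-term and long-term roadmaps in parallel rather than choosing between them.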

Considering the AI race on different temporal scales highlights different aspects that deserve focus. Each actor may anticipate a different speed of arrival of the first general AI system, and that expectation can significantly shape its roadmap. A "Boy Who Cried Wolf" situation, in which repeated claims of imminent AGI prove false, can decrease trust between actors and weaken ties between developers, safety researchers, and investors. The result can be diminished belief that the first general AI system will be developed when claimed, and a miscalculation of the risks of unsafe AGI deployment by a rogue actor.
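To make the trust-erosion mechanism explicit, here is a minimal toy simulation, entirely our own, with hypothetical decay and recovery factors: each false alarm about imminent AGI multiplicatively shrinks trust, so a later genuine warning is heavily discounted.

```python
# Toy illustration (our own; the decay and recovery factors are hypothetical):
# each false alarm shrinks the trust other actors place in a developer's
# warnings, so a later true warning is heavily discounted.

def updated_trust(trust: float, claim_was_true: bool,
                  decay: float = 0.6, recovery: float = 1.1) -> float:
    """Multiplicative trust update after each public claim."""
    return min(1.0, trust * (recovery if claim_was_true else decay))

trust = 1.0
for round_no, was_true in enumerate([False, False, False, True], start=1):
    trust = updated_trust(trust, was_true)
    print(f"claim {round_no} ({'true' if was_true else 'false'}): trust = {trust:.2f}")

# Three false alarms drive trust from 1.0 down to about 0.22; the genuine
# warning in round 4 only recovers it to about 0.24.
```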

Furthermore, two distinct time "chunks" were identified, each with different problems to solve: the pre-AGI era, before the first general AI is developed, and the post-AGI era, after someone possesses such technology.

During the workshop, the discussion mainly focused on the pre-AGI era as efforts to avoid the AI race should be preventative rather than curative. The first roadmap (figure 1) presented here covers the pre-AGI era, while the second roadmap (figure 2), created by GoodAI prior to the workshop, focuses on the time around AGI creation.

Another issue addressed was viewpoint. The workshop identified the actors, their actions, the environment, and the intermediate states that together make up the AI race. Viewing the problem from various perspectives can help reveal new scenarios and risks.

Cooperation among the different actors, and a spirit of trust and cooperation in general, can dampen the race dynamics in the overall system. Starting with low-stakes cooperation, such as joint talent development or collaboration between safety researchers and industry, can build trust and a better understanding of the issues each side faces. Active cooperation between safety experts and AI industry leaders, including cooperation between different AI-developing companies on AI safety questions, can lead to closer ties and positive information propagation up to the regulatory level. Hands-on safety research with working prototypes is likely to yield better results than theoretical argumentation.
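One way to see why starting with low-stakes cooperation matters is a simple threshold model. This is a hedged sketch of our own; the thresholds, the trust gain, and the starting trust level are all hypothetical parameters. Repeated low-stakes exchanges each raise mutual trust a little, until the higher bar for high-stakes cooperation is cleared.

```python
# Hedged sketch (our own toy model; all parameters are hypothetical):
# low-stakes exchanges, such as sharing safety results, raise mutual trust
# until the higher bar for high-stakes cooperation, such as joint
# development, is cleared.

LOW_STAKES_THRESHOLD = 0.2   # trust needed to share safety research
HIGH_STAKES_THRESHOLD = 0.8  # trust needed for joint development
TRUST_GAIN = 0.15            # trust gained per successful low-stakes round

trust = 0.3  # hypothetical starting trust between two actors
for round_no in range(1, 10):
    if trust >= HIGH_STAKES_THRESHOLD:
        print(f"round {round_no}: high-stakes cooperation unlocked")
        break
    if trust >= LOW_STAKES_THRESHOLD:
        trust = min(1.0, trust + TRUST_GAIN)
        print(f"round {round_no}: low-stakes exchange, trust -> {trust:.2f}")
```

The point of the toy model is the ordering: without the low-stakes rungs, trust never reaches the level that high-stakes cooperation requires.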

However, forms of cooperation that seem intuitive can actually reduce the safety of AI development; for example, openly publishing all capability research could allow actors who disregard safety to advance faster. It is important to find robust incentives that would push even unknown actors towards beneficial AI, or at least towards AI that can be controlled.

Tying the timescale and cooperation issues together can help prevent negative scenarios. Concrete problems in AI safety need to be addressed immediately and collectively, since they bear on both the short- and long-term horizons of AGI development.

Encouraging the AI community to discuss and solve issues such as the AI race is necessary, but more incentives are needed to involve actors beyond those traditionally associated with AI development. Cooperation can be fostered through scenarios such as open and transparent AI safety research, free and anonymous access to safety research, inclusive alliances, and gradually allowing new members to enter cooperation programs.

The AI Roadmap Institute will continue its work on AI race roadmapping, identifying additional actors and perspectives and searching for risk-mitigation scenarios. It will organize workshops to discuss these ideas and publish the resulting roadmaps. The aim is to engage the wider research community and provide a solid foundation for solving this challenging problem.