A Focused Engagement to Maximize Impact

[Image: a scenic landscape of a person standing on a mountain]

AI Safety

A staggering 83% of organizations worldwide now recognize AI as a cornerstone of their business processes, underscoring its pervasive impact across industries. At the same time, 53% of companies have already used AI technology to innovate their products and services, while 64% expect AI to boost productivity. In 2024, an estimated 35% of companies globally are projected to use AI technologies, a swift adoption rate that makes the need for established safety protocols all the more urgent. As AI technologies proliferate into everyday life, their potential to influence every aspect of human existence grows, from daily tasks to complex decision-making processes. An anticipated 97 million people are expected to work in AI by 2025, and more than 375 million people will need to change careers by 2030. This massive shift in the labor market highlights the critical need for safety measures that protect workers and uphold ethical standards in deploying AI technologies. Predictions suggest the AI market will grow twentyfold by 2030 and reach a value of 2.5 trillion dollars by 2032, further attesting to AI’s impact across the economic spectrum.

As AI continues to permeate the fabric of our global society, the necessity for rigorous safety measures, ethical considerations, and robust governance structures becomes paramount. AI safety is not merely a technical challenge but a societal imperative to prioritize human welfare, ethical integrity, and risk mitigation. At the center of our working method is an integrated approach, stemming from a core belief that platforms where people with different perspectives and competencies interact are catalysts for innovation. To create a safer, more responsible AI future, we address both immediate harms and long-term existential risks through public outreach, media relations, policy proposals, the enforcement of international AI safety standards, and specific risk mitigation strategies.

Faithful to its mission, the AI Safety Network publishes research on risk management and mitigation, AI model analyses, decision-making in AI training, safe and secure AI deployment, public and governmental awareness of AI developments, cybersecurity for AI systems, the prevention of deceptive content distribution, misuse monitoring, data input controls, and responsible data management practices. We connect this research with broader impactful topics, including international law and relations, information security, pandemic prevention, and civilizational resilience. Because frontier AI organizations bear both the responsibility and the capability to conduct AI safety research and to invest in tools that address these risks, we aim to collaborate with researchers, institutes, and organizations to assess the societal risks and impacts of frontier AI systems.

[Image: smoke rising from a forest, showing climate change effects]

AI and Misinformation

The World Economic Forum (WEF) has identified AI-driven misinformation and disinformation as significant threats, particularly amid the persistent cost-of-living crisis and the 2024 elections, with the potential to impact the global economy, deepen political polarization, and compromise the integrity of the public sphere. Misinformation (false or inaccurate information) and disinformation (deliberately false content) can be used to spread propaganda and sow suspicion. They are consistently counted among the most significant current risks, alongside extreme weather events, societal polarization, cybersecurity threats, and interstate armed conflicts. Diminishing reach and degraded information ecosystems risk increasing polarization, facilitating government instability, and reducing civic engagement.

We address these issues by analyzing the digital information ecosystem and supporting policies that foster a safer, more transparent media landscape and make ethical civic journalism possible. Our commitment extends to enhancing AI safety and accountability and evaluating the challenges of misinformation. We promote research and regulations that improve trust and ensure the authenticity of digital media, balancing the need for effective authentication methods against the structural problems of information systems, including centralized control, the attention economy, declining news source credibility, and media audience fragmentation. In this engagement area, we publish AI model risk evaluations, strategic analyses of the potential uses and misuses of AI models and their ensuing societal consequences, studies of the role of deepfakes and other generative AI in fraud and misinformation, AI usage models, and safety guidelines.

[Image: wind turbines on a grassy hill, representing energy efficiency]

AI and the Climate

The AI Safety Network promotes pioneering approaches to the environmental impact of artificial intelligence, social and technical approaches to climate justice, and the intertwined dynamics of media, technology, ecology, and society in working toward climate repair and an eco-conscious future. With over 75% of the world's land area already degraded and over 90% at risk of degradation by 2050, new technologies ranging from AI to geospatial intelligence could enable a better understanding of ecosystems and support biodiversity restoration.

We support research that tackles both AI’s potential to address climate challenges and AI’s significant energy consumption, including the surging energy demand from data centers. Addressing AI and the climate crisis requires policies that favor a just transition to resilient futures, prioritize those most affected by climate change, take decisive climate action, protect societies and ecosystems, foster interspecies coexistence, and reform environmental regulations. We therefore promote multidisciplinary approaches combining legal, technological, ecological, and social strategies to address the complex challenges of climate change.

With an increased need for empirically informed support that can be integrated into domestic decision-making, our mission is to inform public and policy debates on the climatological impact of frontier technologies and AI as we face overlapping crises of sustainable technology and economic transition. At the same time, we seek to promote research that explores alternative notions of repair and questions universalizing structures and systems.

AI in Education and Society

We analyze AI’s societal implications, emphasizing its impact on education, jobs, privacy, and human rights. These implications include social isolation, loss of empathy, and technology-driven exacerbation of economic and social divides. In this context, digital literacy and ethical AI development are crucial to generating equal opportunities. We collaborate with interdisciplinary experts to explore how AI environments affect social groups, advocate for social data reforms, and examine bias in information organization and retrieval. We aim to address urgent challenges through democratic solutions, conduct societal impact evaluations, promote informed policy-making, and ensure AI safety by exploring its ethical and societal dimensions. Our mission extends to advocating for justice, equity, and human dignity, supporting diverse voices and organizations worldwide.

Children and youth are significantly affected by data-driven technologies, toxic online environments, surveillance, and data-collection tools, which calls for broader data privacy and governance agendas. Existing educational systems are not prepared for the impact of AI technologies, accelerated automation, market concentration, or unregulated policies and governance. Bias, structural oppression, and exclusion in information organization and retrieval are not merely technical matters but carry profound socio-cultural and political implications. One key component of our engagement program is building trust among the public, experts, and governing bodies to shape AI behavior, since societal actors with greater power than others can use AI to widen existing power differentials, and the consumerization of AI will soon displace a range of traditional work roles. Additionally, the increased possibility of censorship, exacerbated social credit systems, ubiquitous surveillance, misleading information, a lack of information literacy, and the deprioritization of ethical considerations are further reasons to argue for increased social protection.

[Image: a frog on a branch, representing wildlife]

AI and Frontier Technologies

Frontier technologies emerge at the intersection of radical scientific breakthroughs and real-world implementation, changing how we communicate, innovate, work, and live. They include digital technologies such as artificial intelligence, big data, blockchains, the metaverse, quantum computing, and the Internet of Things (IoT); physical technologies such as hardware innovations; and biological technologies such as bioprinting, organoids, genetic engineering, human augmentation, and brain-computer interfaces. Frontier technologies already represent a 350 billion dollar market that could grow to 3.2 trillion dollars by 2025. The blockchain market has grown from 708 million dollars in 2017 to 61 billion in 2024, the big data market is expected to grow from 32 billion dollars in 2017 to 157 billion by 2026, and the artificial intelligence market is poised to grow from 16 billion dollars in 2017 to over 190 billion in 2024.
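
To put these projections in perspective, the short Python sketch below computes the compound annual growth rates implied by the figures cited above. It is a minimal illustration: the cagr() helper and the market tuples are constructed here for demonstration and are not drawn from any published methodology.

# A minimal sketch; the dollar figures are taken from the paragraph above,
# and cagr() is a hypothetical helper for illustration.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate, returned as a fraction."""
    return (end_value / start_value) ** (1 / years) - 1

# (market, start year, start value in billions of dollars, end year, end value)
markets = [
    ("blockchain", 2017, 0.708, 2024, 61.0),
    ("big data", 2017, 32.0, 2026, 157.0),
    ("artificial intelligence", 2017, 16.0, 2024, 190.0),
]

for name, y0, v0, y1, v1 in markets:
    rate = cagr(v0, v1, y1 - y0)
    print(f"{name}: {v0}B -> {v1}B over {y1 - y0} years (~{rate:.0%}/year)")

# Approximate output:
# blockchain: 0.708B -> 61.0B over 7 years (~89%/year)
# big data: 32.0B -> 157.0B over 9 years (~19%/year)
# artificial intelligence: 16.0B -> 190.0B over 7 years (~42%/year)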

Artificial intelligence may be the most consequential technology yet developed, which increases the need for appropriate safety and equity measures. Experts predict AI will revolutionize fields like medical diagnosis and astronomical research, offering new tools to detect diseases, discover drugs, and analyze large amounts of data. This potential underscores AI’s role in improving human well-being, enhancing daily life, driving health innovation, and advancing scientific knowledge. But while AI promises significant benefits, concerns about overreliance and potentially catastrophic threats demand careful ethical consideration. Developed unsafely, artificial intelligence could pose existential risks or catastrophic failure points that require consistent mitigation programs. The challenges and opportunities of AI reflect the economic impact of frontier technologies, underscoring the need for policies that foster access to data, protect innovations, and address technology gaps.

The AI Safety Network seeks to address the governance gap in emerging technologies, challenging existing norms and demanding innovative policy solutions. We advocate for reforms and international coordination to bridge these gaps and ensure responsible technological advancement. This approach can lead to innovations that reduce disinformation, improve security, and foster social connectedness, aligning with goals such as climate change mitigation and the UN Sustainable Development Goals. A human-centered approach to AI development can lead to positive outcomes, provided that increasing levels of automation remain under human control and coherent regulation.