We Are Building a Hub
for AI Research Amplification

An image of an office room with a table and chairs

About

The AI Safety Network is a publishing, networking, support, and amplification platform for pivotal AI safety research, outreach, and communication. We seek to deepen understanding of AI risks, raise public awareness, and establish innovative collaboration frameworks underpinned by a commitment to publishing expert- and community-vetted ethical AI safety research. We identify the most critical immediate and long-term existential risks and disseminate impactful and transformative AI insights. We aim to collaborate closely with renowned research organizations to help mitigate the social, economic, and technological impacts of emerging AI technologies. We focus on communicating the risks of frontier AI, advocating for safe development practices and extreme risk reduction, safeguarding a flourishing future, and enhancing transparency to build public trust.

Artificial intelligence is poised to dramatically transform society in the near future and risks becoming a fundamental threat. We therefore approach AI safety research and governance collectively, evaluating AI trajectories and their implications, addressing AI-related existential risks, and highlighting safety concerns such as disinformation, bias, and irresponsible AI deployments. Our mission is to create a centralized publishing platform and collaborative research hub that amplifies significant contributions and mitigates extreme risks. We bridge disciplines to address the intertwined aspects of society, environment, design, and technology, and we are committed to creating opportunities for expert and public engagement.

While practice-based research in AI is an emerging field, there is little infrastructure bridging the gap between research distribution practices and open research repositories. Through our work, we seek to inspire the development of community standards for collaborative AI research and to intervene in AI debates where there are opportunities to act. Our mission is also to help others develop critical skills, access impactful roles, and launch promising initiatives by providing an open-access publishing infrastructure and learning environment. By creating a collaborative hub, we consolidate the knowledge held by different actors and build new research-based learning and publishing competencies.

Strategic Mission

The AI Safety Network publishes and disseminates impactful research and information, fostering a culture of excellence that supports researchers and entrepreneurs. By partnering with research teams to develop tailored communication strategies, we nurture collaboration and openness, interdisciplinary and holistic approaches, and a collective mindset that promotes real-world impact for the benefit of humankind. Our strategies are designed to be flexible, measurable, and actionable, ensuring tangible outcomes and cementing our status as a trusted research communication partner. We engage with AI researchers and organizations as publishing consultants, editors, and strategic communication partners, covering activities from strategic research, selection, planning, data analysis, and media surveying to concept and communication stewardship. The AI Safety Network is a community broadcaster that provides research amplification, editorial planning, external communications, strategic advice, and reputation management.

To support the AI safety community, we aim to provide actionable insights for responsible decision-making, vetted research and data, ethical standards, and journalistic rigor to keep the topic prominent and educate audiences as we build a global readership. By supporting our partners across a wide range of activities, we act as an extension of AI research teams: engaging audiences, building partnerships, deepening connections, targeting engagement, streamlining research, and amplifying impact, whether by supporting existing structures or establishing in-house capabilities. These activities may extend to media surveying, ethnographic analysis, tool implementation, long-form research, short-form informative pieces, activations, documentation, podcasting, and social media.

To ensure consistency, transparency, and effectiveness, we developed a three-stage process:

Identify and approve. We select research and data that align with our mission and meet vetting standards established in collaboration with a supervising board of renowned experts.

Review and organize. We review and organize approved research according to established qualitative criteria.

Publish and communicate. We schedule publication releases, select channels, coordinate communication efforts, and monitor responses.

Foundational Transparency

As a foundational platform for editorial support, publishing, and research dissemination, the AI Safety Network champions operational autonomy and collaboration to foster solidarity and engagement. We are a support structure, and our strategic priorities are to amplify research on a global scale, prioritize impactful collaborations, and establish a self-governed consultative board with a mandate to vet the most meaningful contributions to AI safety research in their technical, social, economic, and cultural dimensions. At the same time, we aim for significant social impact and sustained advocacy for inclusive public policies.


Our operational plans include:

The start-up phase. We seek funding for the labor of creating and operating the network, for capability building, and for the material costs of setting up the platform, identity, financial infrastructure, and practical operating capacity.

The pioneering phase. We seek to cover structural costs to increase the benefits provided to the AI safety community, from operational activities to further developing our publishing infrastructure.

The acknowledgement phase. Once the AI Safety Network is fully established and its continuation is secure, we aim to develop our structural capabilities, institute Fair Practice codes, hire contributors on permanent contracts, and attract multi-year support for the team.


Our immediate management plans include:

Outlining priorities. We have identified five overarching research and engagement strands that we seek to reinforce and amplify: AI Safety, AI and Misinformation, AI and the Climate, AI in Education and Society, and AI and Frontier Technologies. We now seek to identify cross-program collaborations for maximum impact.

Drafting a mission mandate, theory of transformation, and program statement. We craft a detailed brief outlining the objectives, goals, challenges, and expectations involved in transforming the AI Safety Network into a sustainable entity as we research, develop, and exchange ideas with our project partners.

Establishing a consultative board. A consultative board and vetting committee are to be appointed to oversee the publication plan of the AI Safety Network and contribute to relevant operational decision-making.

Drafting a code of conduct and protocol. This code is designed to ensure the project runs consistently and aligns with best-practice codes, maximizing social impact and helping address the most complex challenges in AI safety.

Drafting the publishing and editorial strategy. Together with the consultative board, we aim to generate a comprehensive publication agenda and editorial guidelines based on the five foundational engagement strands, marrying strategic thinking and channel expertise to help organizations reach target audiences more effectively.

Establishing the research and publication ethical principles. We will create a heterodox set of ideas, speculative concepts, and ethical principles that encompass new forms of open collaboration and contribute to a transparent intersection between research, technology, and society.

Consolidating institutional partnerships in the AI safety community. In consultation with our partners, we seek to engage in direct advocacy programs that promote fair and just public policies and help the world move forward for the good of humankind.

Advocating for change. We will develop coherent advocacy pillars to strengthen the AI safety community and research environment, provide ample support for educational programs, and establish criteria for ethical engagement and industry transparency. In addition, together with our board members, we will consolidate a detailed accreditation system that grants industry credits, for peer recognition, for projects and work that researchers have created or participated in. This entails drafting coherent eligibility criteria, the requirements for acceptance, and a community-acknowledged recognition system.

Public transparency, collaboration with key research institutions, strategic partnerships, and advocacy for change will underpin the AI Safety Network’s operational ethos.