February 10, 2025
Fellowships

Apply to the Re-imagining Trust and Safety (T&S) for Artificial Intelligence (AI) programme

Re-imagining Trust and Safety (T&S) for Artificial Intelligence (AI) is a critical building block for closing the AI equity gap and advancing human development.

For AI to benefit humanity and sustainable development, empowering local AI ecosystems to build robust safety measures must be a priority. Shaping the future of AI means identifying and mitigating local risks to ensure that the interests of people and planet remain at the centre of this technology.

There is an urgent need and opportunity to strengthen developing countries’ ecosystems and the responsibilities of global stakeholders in local contexts. This calls for a shift from reactive, crisis-focused responses towards proactive, anticipatory and adaptive measures focused on people’s safety, inclusion and wellbeing.

The speed of AI adoption today demands new approaches and an expanded scope for T&S. Existing AI Trust and Safety models need to evolve to become more culturally responsive to the different ways that AI harms can manifest locally, and they ought to be grounded in the realities of different contexts. This goes beyond technical safeguards: it also means investing in human and institutional capabilities, fostering digital literacy, and empowering people and communities to participate meaningfully in shaping AI and its safety measures. Read our reflection on the current state of T&S and opportunities to re-imagine the space.

Achieving and scaling a new model for AI Trust and Safety will require public and private partners to collaborate.

About the AI Trust and Safety Re-imagination Programme
As the United Nations Development Programme (UNDP) advances its commitment to closing the global AI equity gap, a pivot to a more proactive re-envisioning of AI Trust and Safety that centres on equitable growth and is sensitive to local contexts is more important than ever.

The AI Trust and Safety Re-imagination Programme translates insights from UNDP’s forthcoming 2025 Human Development Report on AI into practical action. Aligned with the AI Hub for Sustainable Development’s goal of ‘growing together’ in the AI revolution, the programme is designed to strengthen local enabling environments and complement ongoing research and policy efforts on fostering AI advancement.

Trust and Safety (T&S) refers to the practical methods of managing emerging AI escalations, as well as approaches to identifying, defining and mitigating risks. T&S also pertains to how policies and laws are translated into product designs, business operations, escalation processes, and corporate communications to achieve their intended purposes.

The AI Trust and Safety Re-imagination Programme invites innovators across the public and private sectors to collectively re-imagine T&S through measures that both prioritize equitable and practical approaches and foster shared, public–private sector responsibility. The programme aims to advance T&S beyond reactive measures by creating practices that actively anticipate and prevent harm and create safer development environments. These practices ought to be tailored to the sensitivities of local contexts and the needs of impacted communities.

More specifically, the programme seeks to:

Gather practical experiences and data on how AI products and systems manifest new risks that create harm in local contexts;
Lay the foundations for equitable and safe AI development ecosystems that support local startups and the safe application of AI in developing countries; and
Explore innovative partnerships that ensure safety-by-design in the early stage of AI development and deployment.

Who can apply?
Submissions are encouraged from innovators working at the intersection of AI and T&S. Submissions may range from prototypes and unpublished or recently published research to a fully fledged product or solution that has been deployed. Successful submissions must be beyond the ideation stage and demonstrate substantive outputs or findings.

Submissions are also welcome from individuals with significant expertise from private sector companies, startups, industry leaders integrating AI into their business sectors, relevant civil society groups, research institutions, universities and other similar organizations.

Applying teams could comprise a university department, a corporate R&D team, a non-governmental organization working in the field of T&S, an industry alliance, a think tank, etc. Please note that no more than three people may present for a team, and at least one team member must be an English speaker (see submission requirements below).

Applicants should currently be working on impactful research, interventions or solutions in one of the following areas:

Scalable local insights and solutions: Approaches to T&S that are sensitive and adaptive to local contexts and industries. Examples may include a locally specific taxonomy of AI harms (see the illustrative sketch after this list); red-teaming for language-specific vulnerabilities; and interventions specific to auditing for AI fraud in the finance sector.
Local risk forecasting and prevention: Projects that forecast AI risk by leveraging local knowledge on the digital risk and vulnerabilities landscape, or projects that seek to develop locally focused, proactive strategies (e.g. technical AI auditing systems for business-to-business (B2B) financial industry scams in local languages).
Shared responsibility: Public–private collaborative approaches to AI escalations and risk management practices that impact developing countries and distribute risk and responsibility (e.g. product-feedback mechanisms).
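To make the first focus area more concrete, below is a minimal, purely illustrative sketch in Python of how a locally specific taxonomy of AI harms might be represented as a data structure. Every region, category, language and severity label in it is a hypothetical placeholder chosen for illustration, not an example drawn from or endorsed by the programme.

# Purely illustrative sketch: a locally specific taxonomy of AI harms
# represented as a simple data structure. All names below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class HarmCategory:
    name: str                  # a locally observed harm, e.g. voice-cloning fraud
    description: str           # how the harm manifests in the local context
    languages: list[str] = field(default_factory=list)  # local languages affected
    severity: str = "unknown"  # hypothetical scale: "low" / "medium" / "high"

@dataclass
class LocalHarmTaxonomy:
    region: str
    categories: list[HarmCategory] = field(default_factory=list)

    def by_language(self, lang: str) -> list[HarmCategory]:
        # Look up the harms documented for a given local language.
        return [c for c in self.categories if lang in c.languages]

# Hypothetical usage with invented entries:
taxonomy = LocalHarmTaxonomy(
    region="East Africa (placeholder)",
    categories=[
        HarmCategory("voice-cloning fraud",
                     "B2B payment scams via cloned voice calls",
                     languages=["Swahili"], severity="high"),
        HarmCategory("synthetic misinformation",
                     "AI-generated hoaxes imitating local news outlets",
                     languages=["Swahili", "Amharic"], severity="medium"),
    ],
)
print([c.name for c in taxonomy.by_language("Swahili")])  # -> both entries

A structure like this is only a starting point: in practice, the categories, severity scales and language coverage would need to be defined with, and validated by, the local communities the taxonomy is meant to protect.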

Why apply?
Applicants with successful submissions will have an opportunity to:

Present their ideas in a multi-stakeholder forum consisting of leading AI researchers, experts, and innovators who can validate and potentially support implementation of their ideas at scale.
Engage with government stakeholders and experts to co-design and test collaborative strategies towards re-imagining AI Trust and Safety in different regional contexts.
Join UNDP’s AI Trust and Safety community, contribute to thought leadership publications and participate in programming development initiatives through the AI Hub for Sustainable Development and other related activities.

Shaping the conversations and actions for sustainable development
The groundbreaking ideas to re-imagine trust and safety will be showcased at key global events this year, including the Global AI Summit on Africa in Kigali, Rwanda (3-4 April 2025).

Submission requirements
Submissions should be in the form of a slide presentation (e.g. PowerPoint or PDF) with no more than 10 slides, or an abstract paper (e.g. in Microsoft Word or PDF) with no more than three pages, single-spaced.

The submissions should demonstrate:

Alignment with the objectives of the programme and a proven ability to deliver impact;
Awareness of the AI and T&S landscape with demonstrable understanding of local AI Trust and Safety challenges;
Proven skills, experience, relationships and expertise needed to re-imagine AI Trust and Safety in developing countries. Submissions addressing AI Trust and Safety for a specific region should be able to demonstrate an existing equitable partnership that validates the local relevance of their work;
Clarity of focus and a realistic understanding of regional challenges and potential impact, including limitations. Note: Effectiveness in a specific industry or with a specific population is highly valued;
A collaborative approach, strong cross-functional thinking and strategies that enable partnership-driven impact; and
A high potential for practical implementation or meaningful impact. Submissions may include evidence or research that brings attention to the nuances of how AI harms manifest locally among specific vulnerable populations, or a series of technical, practical, or educational interventions that enable the detection of, or protection against, AI risks and harms. Solution-based submissions must at least be in the early stages of field-testing and validation by the impacted populations.
At least one member of each team must possess professional-level proficiency in English. Teams will be expected to present online and be interviewed in English.

Selection process
Submissions will be evaluated by a selection committee consisting of Trust and Safety advisors from universities, tech companies and research centres.

The list of global advisors includes:

Dr. Rachel Adams
Founding CEO: Global Center on AI Governance

Dr. Hoda Al Khzaimi
Associate Vice Provost of Research Translation and Innovation, New York University Abu Dhabi

Gustavo Beliz
Member of the Pontifical Academy of Social Sciences

Dr. Nick Bradshaw
Founder & Director, South African AI Association

Armando Guio Español
Executive Director, Global Network of Internet and Society Centers (NoC)

Madhulika Srikumar
Head of AI Safety Governance, Partnership on AI

Dave Willner
Co-Founder, Zentropi

Submissions will be accepted on a rolling basis up to the selection cut-off dates listed below. Early submissions are strongly encouraged to increase opportunities for session presentations, coaching and mentoring from experts.

Please note that once presentation slots are filled, the selection process will be closed.

All materials must be submitted by 11:59 PM EST on the date of each selection round. Only complete applications will be considered.

Submit by theme
There will be four submission rounds, each focused on a distinct theme. These themes are expected to inform the conversations among global stakeholders in subsequent rounds and to aid coalition building across shared challenges and approaches. Applicants are required to align their submissions with the themes.

Early submission is encouraged to increase the number of convenings in which successful applicants can participate. The field of T&S is dynamic and quickly evolving; revised or new submissions are therefore welcome in future rounds.

ROUND 1: SYSTEMIC T&S REGIONAL CHALLENGES AND THE IMPACT OF AI

Description: Region-specific challenges (and opportunities) to realizing effective T&S implementation and protections. What existing T&S challenges could be made worse by AI and what existing systems make it difficult to protect local communities at scale?

ROUND 2: EMERGING AI RISKS AND ESCALATIONS

Description: How AI harms are manifesting in locally nuanced ways for diverse populations. What are the early indicators and detection mechanisms to forecast new AI risks? How should AI harms and escalations be handled? What new practices and norms are being established?

ROUND 3: THEME TO BE DETERMINED BY SUBMISSIONS AND PREVIOUS ROUNDS

ROUND 4: LOCALLY TAILORED SOLUTIONS

Link: https://www.undp.org/digital/ai-trust-and-safety
