Announcing the next round of the MozFest Trustworthy AI Working Groups!

If you would like to contribute to community-led projects that create and promote more Trustworthy AI (TAI), then this is the opportunity for you!

About the projects

Check out the projects below to get a sense of how you might contribute to one or several of them:

Building Trustworthy AI Working Group

  1. AI Unveiled: Discovering Patterns, Shaping Patterns. This project is building a serious game environment for users to discover the patterns driving the behavior of automated systems based on machine learning. Our goal is for users - even without technical knowledge - to gain insight into the workings of opaque systems based on so-called artificial intelligence. Join us in giving users a better understanding of how their data feeds algorithms.
  2. Fair EVA: A Toolkit for Evaluating Voice Assistant Fairness. This project will build and deploy an evaluation toolkit for developers to test the fairness of deep learning components that make up embedded voice assistants. We believe a fair voice assistant should work equally well for all users, irrespective of their demographic attributes. At the moment it is not possible to even test if this is the case, despite obvious discrepancies during regular use. Join us to create more equitable access to voice assistants.
  3. Visualizing Internet Subcultures on Social Media. This project will build an interactive visualization of the formation of and interactions between different communities on the internet, from die-hard fan bases to real-life social movements. Join us to help build this dynamic visualization.
  4. Foundational Tech for Low-Resource Languages. This project will re-imagine the foundational technology infrastructure needed to kickstart artificial intelligence / machine learning frameworks for some of the low-resource languages of South Asia. Join us in building a more diverse, equitable, and community-run AI ecosystem for global languages, starting in South Asia.
  5. Truth of Waste as a Public Good. This project is bringing transparency to waste recycling through standardized content authentication. Focusing on waste management, an AI-powered prototype will support the decision-making ecosystem, empowering better decisions around recycling or re-homing. Join us to streamline the cooperative, circular economy business model.
  6. MOSafely: An AI Community Making the Internet a Safer Place for Youth. Modus Operandi Safely (MOSafely.org) is an open-source community that leverages evidence-based research, data, and artificial intelligence to help youth and young adults engage more safely online. Join us in building technology that detects risk behaviors of youth online and/or unsafe online interactions with others.
  7. Atlanta University Center Data Science Initiative. As part of Mozilla’s HBCU Collaborative, The Atlanta University Center Data Science Initiative is working to develop a prototype of a tool or technology that promotes Trustworthy AI. Interested in supporting this open innovation project? Join the working group to learn more.

Civil Society Actors for Trustworthy AI Working Group

  1. A feminist dictionary in AI will create a reference for designing gender-inclusive algorithms. The dictionary will help AI builders understand feminist theory and gender equality, as well as describe the kinds of classification, analysis, and prediction-making necessary for developing a gender-inclusive algorithm. Join us to work towards a more equitable and inclusive era in AI.
  2. Accountability Case Labs will build community and common strategic goals across the full range of technical and social experts, researchers, builders, and advocates who care about AI accountability and auditing. Our cross-disciplinary project will also develop a body of collaborative case studies examining real-world examples across groups — from policy-making and regulation to design, development, procurement, deployment and security. Join us for a collaborative approach to increasing AI accountability.
  3. AI Governance in Africa will create an open and inclusive space for discussion and research into the development, application, and governance of AI in Africa. We will draft a white paper that tackles issues of societal concern like bias in AI. Join us to help create this resource for more open and inclusive AI in Africa.
  4. Audit of Delivery Platforms, in order to reinforce labour rights defense activities, will design and perform a technological audit of the algorithms used by delivery services in Ecuador. With help from several community organizations, this audit will help further workers’ labour rights and efforts to unionize. Join us to help model this audit in Ecuador and consider how it might help labour rights where you live, as well.
  5. Black Communities - Data Cooperatives (BCDC) will build a community of impact that helps organizers and leaders work with their communities to identify the links between their data and the AI technology around them. The project will also share practices for combating the negative effects of AI in Black communities and spaces like educational technology. Join us to work with like-minded community members who want Black communities to have more control over their data and its use in AI.
  6. Harnessing the civic voice in AI impact assessment will develop guidance for meaningful participation of external stakeholders such as civil society organizations and affected communities engaged in Human Rights Impact Assessments (HRIA) of AI systems. These assessments should be a requirement for any AI system deployed in communities. Join us to amplify civic voices in AI development and design, safeguard human rights, and promote rights-based AI through HRIAs that benefit affected communities.
  7. The Trustworthy AI in Education Toolkit will help science educators introduce and contextualize AI in their classrooms and other learning environments. Educators and learners need a better grasp of AI to judge what makes it helpful and trustworthy - or not. Join us to gather, organize, and document AI education frameworks, curricula, and activities, and put them in a Trustworthy AI context to help educators and learners think critically about AI in their lives.
  8. As part of Mozilla’s HBCU Collaborative, students at Spelman College will work with academics and industry experts on a white paper that centers Black women in the conversation on Trustworthy AI and approaches it from an intersectional lens. Interested in contributing to this research project? Join the working group to learn more.

How to get involved

First, complete this form to share your interest in participating in a working group with us.

That’s it! Now you’ll get all our updates about the next round of the MozFest Trustworthy AI Working Groups. You’ll also receive an invitation to register for this round of working group community calls.

Once you join the working group, there are many ways to get involved with a project. For example, you might join a project as:

  • A contributor who helps plan a project, complete its tasks, and deliver its outcomes.
  • An observer who keeps up with a project and gives feedback at major milestones.
  • A potential user who gives feedback and helps file bugs or issues on a project’s output(s).

Our call for interest and participation will stay open throughout October 2021. Once you’ve completed the form, we’ll send you a follow-up email inviting you to future working group meetings. Please note that our kick-off meeting will be held the week before this call closes, so be sure to sign up early if you’d like to attend that onboarding call.

We cannot wait to begin working on the next round of projects alongside you all! Thank you for all you do to create and promote more trustworthy AI for your global and local communities.

Stay connected

Remember, you can share your interest in the working groups and ensure that you get updates about them by completing this form.

If you have any questions about our next round of MozFest Trustworthy AI Working Groups, please reach out to either of the MozFest team working group chairs, Temi Popo or Chad Sansing.

To keep up with the latest news from the MozFest team in general, subscribe to our newsletter, follow MozFest on Twitter, and join us on LinkedIn. You can also join the MozFest community Slack to meet other people contributing to the internet health and TAI movements.

About the authors

A photo of MozFest Program Manager, Technical Audiences, Temi Popo.

Temi Popo is an open innovation practitioner and creative technologist leading Mozilla's developer-focused strategy around Trustworthy AI and MozFest.

A photo of MozFest team member, Chad Sansing.

Chad Sansing works on leadership and training, as well as facilitator and newcomer support, for MozFest. When he’s not at work, you can find him gaming, reading, or enjoying time with his family. Prior to joining Mozilla, he taught middle school for 14 years.

MozFest is part art, tech and society convening, part maker festival, and the premier gathering for activists in diverse global movements fighting for a more humane digital world. To learn more, visit www.mozillafestival.org.

Sign up for the MozFest newsletter here to stay up to date on the latest festival and internet health movement news.
