Introducing the AI IRL Space at MozFest 2021
AI IRL: a practitioner-driven space to gain insights on AI in the real world
Artificial Intelligence (AI) systems can impact society at an unprecedented scale, and it’s important to investigate them as a mechanism that causes and perpetuates harm. At the same time, we need a holistic understanding of AI as a step toward identifying effective interventions that promote Trustworthy AI.
In the AI IRL space at MozFest 2021, we aim to create a neutral space for investigating AI in the real world. We’re inviting AI builders, practitioners from various domains working on AI projects, and researchers conducting socio-technical case studies of AI technologies to share their observations. This may include, but is not limited to, how we make decisions to build and evaluate AI solutions, and how AI algorithms and technologies are deployed in the real world and integrated into our lives. We especially welcome insights and evidence from non-Western societal contexts.
In addition, we are looking to help bring current research on Trustworthy AI into practice through tutorials and workshops. Your audience would be AI builders, AI educators/trainers from global communities, or practitioners from other domains who are involved in decision-making within the AI building and deployment pipeline. This may expand to include pedagogical discussion of responsible AI practice for educators.
If this applies to you, we invite you to submit an AI IRL session proposal. We envision the following as our space elements, but please feel free to propose something else if it fits the purpose of our space. The Call For Proposals is open until ~~November 23, 2020~~ November 30, 2020, and the (all-virtual) MozFest will take place in March 2021.
Lessons learned from practitioners & socio-technical case studies
Are you an AI builder with experience implementing responsible practices within your organization? What did it take to successfully implement a responsible pipeline within your organization or team to produce Trustworthy AI? This may include technical best practices you adapted from the current literature, or internal procedures and negotiations you took part in.
Are you a practitioner with experience deploying AI who can share lessons learned? This applies to various fields (commercial, medical, government, humanitarian, etc.) and to the various roles involved in the decision-making process of integrating the technology. How did your team ensure that the AI was developed and deployed responsibly and effectively?
Are you a researcher working on socio-technical case studies of real-world AI usage and deployment? We would love to have your work presented! As an example, we like AI on the Ground’s work on Repairing Innovation: A Study of Integrating AI in Clinical Care.
**We understand that sharing your work is a sensitive issue that may require internal clearance within your organization/team, or some level of privacy. We accept work in progress that could be shared in March 2021, so you don’t need finished research or ready-to-share materials at the proposal stage. If you require accommodations to share your lessons learned, please indicate your requirements in your proposal.**
Tutorial / Workshop on Trustworthy AI methodologies and toolkits
Are you currently developing toolkits or tutorials based on current research in AI fairness, explainability, and beyond that contribute to building Trustworthy AI? Share your project with our practitioner communities to get feedback and grow your community base. Examples of such projects include:
- Tutorials on explainability or interpretability methodologies for ML algorithms
- Statistical fairness toolkits for biased real-world data
- Diagnostic tools for ML algorithms, such as the What-If Tool
- Responsible practice guidelines for AI building pipelines at large, which may include internal audits and decision-making processes for ML models concerning trade-offs (explainability, fairness properties, performance, efficiency, etc.)
- Discussions of methodologies for verifying AI-generated information, or anti-misinformation strategies
- Privacy-preserving data sharing/governance models, or re-identification diagnosis toolkits
We prefer workshops that include real-world scenarios and a discussion of the limitations of their methodologies, if any. Remember, this is not an academic conference, and your audience is practitioners rather than AI researchers. We also welcome workshops that facilitate pedagogical discussion of Trustworthy AI methodologies and toolkits.
Help us gain insights on AI in the real world
Submit a session to the AI IRL space for MozFest 2021! If you have any questions about our Call For Proposals (CFP) process, or want help designing your session proposal, check out this awesome blog post of CFP support resources. If you have any questions specific to the AI IRL space, feel free to reach out to me, or join us in the #ai-irl channel on the MozFest Slack.