Diversity and Inclusion in AI: Lessons from Human and AI Collaboration

3 Jul 2024


(1) Muneera Bano;

(2) Didar Zowghi;

(3) Vincenzo Gervasi;

(4) Rifat Shams.


Conclusion and Future Work and References


In this paper we have presented our extensive research exploring D&I in AI guidelines and our attempt to operationalise them in the process of specifying D&I requirements for AI. From our literature review we identified 23 unique themes related to D&I considerations in AI. We introduced a user story template for articulating D&I in AI requirements, and we conducted a focus group with four human analysts who developed user stories for two cases of AI systems, giving us insight into the process of writing such requirements. Furthermore, we explored the utility of GPT-4 as an agent for automating the generation of D&I in AI requirements. By comparing the user stories developed by the human analysts with those generated by GPT-4, we gained insight into the pros and cons of using an LLM in this activity and into the complementary nature of this form of human-machine collaboration.

There remains a need for further exploration of how cultural and legal contexts influence the implementation of diversity and inclusion requirements in AI. Future investigations could examine how variations in privacy laws, data protection regulations, and cultural perspectives affect AI system development across different regions. Research could also assess the effects of particular AI systems, such as recognition technologies, on individuals from diverse backgrounds encompassing race, gender, and age.


  1. Cachat-Rosset, G. and A. Klarsfeld, Diversity, Equity, and Inclusion in Artificial Intelligence: An Evaluation of Guidelines. Applied Artificial Intelligence, 2023. 37(1): p. 2176618.

  2. Nyariro, M., E. Emami, and S. Abbasgholizadeh Rahimi. Integrating Equity, Diversity, and Inclusion throughout the lifecycle of Artificial Intelligence in health. in 13th Augmented Human International Conference. 2022.

  3. Bano, M., D. Zowghi, and F. da Rimini, User satisfaction and system success: an empirical exploration of user involvement in software development. Empirical Software Engineering, 2017. 22: p. 2339-2372.

  4. Ahmad, K., et al., Requirements practices and gaps when engineering human-centered Artificial Intelligence systems. Applied Soft Computing, 2023. 143: p. 110421.

  5. Ahmad, K., et al., Requirements engineering for artificial intelligence systems: A systematic mapping study. Information and Software Technology, 2023: p. 107176.

  6. Ahmad, K., et al. What’s up with requirements engineering for artificial intelligence systems? in 2021 IEEE 29th International Requirements Engineering Conference (RE). 2021. IEEE.

  7. Roche, C., P. Wall, and D. Lewis, Ethics and diversity in artificial intelligence policies, strategies and initiatives. AI and Ethics, 2022: p. 1-21.

  8. Hickok, M., Lessons learned from AI ethics principles for future actions. AI and Ethics, 2021. 1(1): p. 41-47.

  9. Dattner, B., et al., The legal and ethical implications of using AI in hiring. Harvard Business Review, 2019. 25.

  10. Hagendorff, T., The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 2020. 30(1): p. 99-120.

  11. Leavy, S., B. O'Sullivan, and E. Siapera, Data, power and bias in artificial intelligence. arXiv preprint arXiv:2008.07341, 2020.

  12. Xivuri, K. and H. Twinomurinzi. A systematic review of fairness in artificial intelligence algorithms. in Responsible AI and Analytics for an Ethical and Inclusive Digitized Society: 20th IFIP WG 6.11 Conference on e-Business, e-Services and eSociety, I3E 2021, Galway, Ireland, September 1–3, 2021, Proceedings 20. 2021. Springer.

  13. Felzmann, H., et al., Towards transparency by design for artificial intelligence. Science and Engineering Ethics, 2020. 26(6): p. 3333-3361.

  14. Arrieta, A.B., et al., Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information fusion, 2020. 58: p. 82-115.

  15. Fosch-Villaronga, E. and A. Poulsen, Diversity and inclusion in artificial intelligence. Law and Artificial Intelligence: Regulating AI and Applying AI in Legal Practice, 2022: p. 109-134.

  16. Zowghi, D. and F. da Rimini, Diversity and Inclusion in Artificial Intelligence. arXiv preprint arXiv:2305.12728, 2023.

  17. Jobin, A., M. Ienca, and E. Vayena, The global landscape of AI ethics guidelines. Nature Machine Intelligence, 2019. 1(9): p. 389-399.

  18. Syed, J. and M. Ozbilgin, Managing diversity and inclusion: An international perspective. 2019: Sage.

  19. Klarsfeld, A., et al., Diversity in under-researched countries: new empirical fields challenging old theories? Equality, Diversity and Inclusion: An International Journal, 2019.

  20. Mhlambi, S. and S. Tiribelli, Decolonizing AI Ethics: Relational Autonomy as a Means to Counter AI Harms. Topoi, 2023: p. 1-18.

  21. Attard-Frost, B., A. De los Ríos, and D.R. Walters, The ethics of AI business practices: a review of 47 AI ethics guidelines. AI and Ethics, 2022: p. 1-18.

  22. Mittelstadt, B., Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 2019. 1(11): p. 501-507.

  23. Munn, L., The uselessness of AI ethics. AI and Ethics, 2022: p. 1-9.

  24. Lundgren, B., In defense of ethical guidelines. AI and Ethics, 2023: p. 1-8.

  25. Schwartz, R., et al. Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. March 2022; Available from: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf

  26. World-Economic-Forum. A Blueprint for Equity and Inclusion in Artificial Intelligence. 29 June 2022; Available from: https://www.weforum.org/whitepapers/a-blueprint-for-equity-and-inclusion-in-artificial-intelligence.

  27. Department of Industry Science and Resources, A.G. Australia’s Artificial Intelligence Ethics Framework. 2019; Available from: https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework.

  28. Shams, R.A., D. Zowghi, and M. Bano, Challenges and Solutions in AI for All. arXiv preprint arXiv:2307.10600, 2023.

  29. Cohn, M., User stories applied: For agile software development. 2004: Addison-Wesley Professional.

  30. Cohn, M., Advantages of user stories for requirements. InformIT Network, 2004.

  31. Zhao, J., et al., Men also like shopping: Reducing gender bias amplification using corpus-level constraints. arXiv preprint arXiv:1707.09457, 2017.

  32. West, S.M., M. Whittaker, and K. Crawford, Discriminating systems. AI Now, 2019.

  33. World-Economic-Forum, How to prevent discriminatory outcomes in machine learning. Global Future Council on Human Rights 2016–2018, 2018.

  34. Voigt, P. and A. Von dem Bussche, The EU General Data Protection Regulation (GDPR). A Practical Guide, 1st Ed., Cham: Springer International Publishing, 2017. 10(3152676): p. 10-

1st Author: Muneera Bano, PhD, is a Senior Research Scientist and a member of the Diversity and Inclusion team at CSIRO’s Data61. An award-winning scholar, she is a passionate advocate for gender equity in STEM. She is a Diversity, Inclusion and Belongingness (DIB) officer at Data61 and a member of the 'Equity, Diversity and Inclusion’ committee of Science and Technology Australia. Muneera graduated with a PhD in Software Engineering from UTS in 2015. She has published more than 50 research articles in notable international forums on Software Engineering. Her research, shaped by her interest in AI and in Diversity and Inclusion, emphasizes human-centric technologies.

Contact her at muneera.bano@csiro.au

Official Webpage: https://people.csiro.au/B/M/muneera-bano

2nd Author: Didar Zowghi (PhD, IEEE Member since 1995) is a Senior Principal Research Scientist and leads the science team for Diversity and Inclusion in AI at CSIRO’s Data61. She is an Emeritus Professor at the University of Technology Sydney (UTS) and a conjoint professor at the University of New South Wales (UNSW). She has decades of experience in Requirements Engineering research and practice. In 2019 she received the IEEE Lifetime Service Award for her contributions to the RE research community, and in 2022 the Distinguished Educator Award from the IEEE Computer Society TCSE. She has published over 220 research articles in prestigious conferences and journals and has co-authored papers with over 100 researchers from 30+ countries.

Contact her at didar.zowghi@csiro.au

Official Webpage: https://people.csiro.au/z/D/Didar-Zowghi

3rd Author: Vincenzo Gervasi, PhD, is an associate professor in the University of Pisa’s Computer Science Department. His research focuses on natural language processing applied to requirements engineering, formal specifications, and software architectures, fields in which he has published over 140 papers in international venues. Prof. Gervasi received his PhD in computer science from the University of Pisa and is a member of IFIP WG 2.9.

Contact him at vincenzo.gervasi@unipi.it

This paper is available on arXiv under a CC 4.0 license.