According to the report “Artificial Intelligence, platform work and gender equality” by the European Institute for Gender Equality, in the EU and UK, only 16% of individuals with skills in the Artificial Intelligence field are women.
The gender gap widens with career length: women account for 20% of AI professionals with 0-2 years of experience in the sector, a share that drops to 12% among those with over 10 years of experience.
Several obstacles make it hard for women to start and sustain a career in Artificial Intelligence: barriers to entering the industry, rooted in the gender stereotypes that shape educational choices, as well as obstacles along their professional path, where women often face the so-called “glass ceiling” that limits their advancement.
Despite persistent obstacles and challenges, the number of women employed in the field is growing. It is therefore crucial to give space and recognition to the women who have contributed, and continue to contribute significantly, to its evolution.
From Computer Vision to Medicine: Fei-Fei Li and Suchi Saria
Born and raised in China, Fei-Fei Li moved to the United States with her family at the age of 15. After earning a degree in physics at Princeton University, she continued her studies in electrical engineering at the California Institute of Technology. Since 2009, she has been an associate professor at Stanford University and director of the Stanford Artificial Intelligence Lab (SAIL).
Fei-Fei Li is renowned for her work in computer vision, and specifically for leading the ImageNet project. Before it, research in AI focused mainly on algorithms, which experts and researchers considered more important than the data itself. Fei-Fei Li overturned this paradigm by highlighting the value of large amounts of real data for training machine learning algorithms more effectively and accurately. Launched in 2009, ImageNet is a database of 14 million manually annotated images, with details about the objects they depict and their spatial positioning. Objects are classified into over 20,000 categories, including common ones such as “balloon” or “strawberry”.
The availability of a wide range of annotated images in ImageNet allowed machine learning and computer vision algorithms to be trained on a broader and more diversified dataset, with a significant improvement in performance and reliability. This gain in computer vision accuracy had a direct impact on several practical applications, such as facial recognition, medical image analysis, automated surveillance systems, and autonomous vehicles.
The success of ImageNet also fostered innovation in Deep Learning. Artificial Neural Networks – Convolutional Neural Networks (CNNs) in particular – made significant advances in image classification accuracy thanks to the availability of a large training dataset.
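To make this concrete, the sketch below shows the kind of workflow that large annotated datasets made possible: training a convolutional network on labeled images with PyTorch and torchvision. It is a minimal, illustrative example only, not the original ImageNet pipeline; the dataset path is a placeholder, and the small ResNet-18 model and hyperparameters are assumptions chosen for brevity.

```python
# Minimal sketch (not the original ImageNet pipeline): fine-tuning a small CNN
# on an annotated image dataset using PyTorch/torchvision.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard preprocessing: resize and normalize images as expected by ImageNet-trained models.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Any folder organized as root/<class_name>/<image>.jpg works here;
# "path/to/annotated_images" is a placeholder, not a real dataset.
train_set = datasets.ImageFolder("path/to/annotated_images", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# A CNN pre-trained on ImageNet; the final layer is replaced to match our classes.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# One training pass: the larger and more varied the annotated data,
# the better the learned features generalize, which is what ImageNet showed at scale.
model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```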
Fei-Fei Li’s research and the development of ImageNet catalyzed a major change in the field of AI, shifting attention to the importance of data and fostering the development of more advanced and reliable computer vision technologies.
In recent years, Fei-Fei Li’s research has expanded into the healthcare sector, with the goal of reducing medical errors while enhancing the benefits of ambient intelligence (the use of devices and sensors that make physical spaces sensitive and responsive to human presence).
In healthcare, the contribution of Suchi Saria also deserves mention.
Suchi Saria is associate professor of computer science, statistics and health policy at Johns Hopkins University – where she also directs the Machine Learning and Healthcare Lab – and founder & CEO of Bayesian Health, a company developing software and AI solutions to assist hospital staff in managing high-risk situations and therapeutic decision-making.
Saria played a pioneering role in applying AI in healthcare, implementing systems able to support disease diagnosis and treatment. She conducted groundbreaking research in early sepsis diagnosis through machine learning and in the development of sensor-based systems to monitor the progression of Parkinson’s symptoms.
Described by the Sloan Foundation as “the future of 21st-century medicine,” her work has earned her numerous awards and recognitions, including the Sloan Research Fellowship and the title of Young Global Leader from the World Economic Forum.
From Robotics to Ethics: Cynthia Breazeal and Joanna Bryson
American scientist and entrepreneur, founder and director of the Personal Robots Group at the MIT Media Lab, Cynthia Breazeal is considered a pioneer in the field of social robotics and human-machine interaction.
Social robotics is a research field focused on developing robots designed to interact with humans in a social context. Built to be perceived as friendly and to understand and respond to human needs, these robots require advanced Artificial Intelligence systems and natural language understanding (Natural Language Processing). But there is more: Breazeal has also explored the impact of non-verbal signals in communication, which can enhance human-machine interaction and relationships.
Breazeal has focused on the concept of “living with AI,” investigating the impact of integrating social robots into daily life; today her research group’s studies explore areas such as education, pediatrics, wellness, and support for aging individuals.
In 2022, Cynthia Breazeal was appointed MIT’s Dean for Digital Learning, with the aim of expanding knowledge about Artificial Intelligence both within and beyond the academic environment. To this end, she launched the MIT RAISE initiative, which provides primary and secondary school students and teachers worldwide with materials and resources to promote awareness and understanding of AI.
However, the social impact of Artificial Intelligence is not a recent topic; it’s been extensively studied since the 1990s by Joanna Bryson, a researcher and professor of Ethics & Technology at the Center for Digital Governance at the Hertie School in Berlin.
Bryson has dedicated her studies to the interaction between Artificial Intelligence systems and human societies, making a major contribution to the definition of governance principles and rights protection. In 1998, she published “Just Another Artifact”, her first article on the ethics of Artificial Intelligence, addressing how AI is perceived and conceptualized in society, and how excessively positive or negative expectations can distract from the real underlying issues, issues that remain crucial and debated today. Bryson advises governments, agencies, and companies worldwide, particularly on AI-related policies.
From Today to Tomorrow: Women Changing AI
In 2017, Margaret Mitchell founded Google’s Ethical AI research team, which she co-led with Timnit Gebru. In 2021, both left the company following the publication of a controversial paper in which, together with linguist Emily Bender, they pointed out the risks and costs of Large Language Models (LLMs), described as “Stochastic Parrots,” a term presumably coined by Emily Bender.
Since then, Mitchell and Gebru have been recognized as among the leading and most influential advocates for Diversity & Inclusion in AI. They have emphasized how the absence of diverse voices can negatively affect the quality of new technologies, and the importance of building models that are not just larger and more efficient but also safe and ethical.
The linguist Emily Bender, albeit not an AI researcher herself, is now one of the leading voices in the AI debate. Indeed, people now speak of a “Bender rule,” since her work urged AI researchers to state which languages their models are built on, countering the false and potentially harmful assumption that they work equally well outside the English-speaking world.
Last but not least, Daniela Amodei stands out among the prominent women in AI: together with her brother Dario, she co-founded Anthropic, an emerging and promising startup in the field. Daniela Amodei was previously vice president for safety & policy at OpenAI, contributing to the work that laid the groundwork for ChatGPT. In 2021, with her brother and other collaborators, she left the company to start Anthropic. The startup has drawn attention for its pioneering research on mechanistic interpretability, a technique that allows developers to perform something similar to a brain scan of an AI system, examining its internal processes rather than relying solely on its textual outputs. The chatbot they developed, Claude, operates according to a “constitution” that guides its behavior, based on principles drawn from trustworthy sources, including the United Nations’ Universal Declaration of Human Rights. Amodei argues that initiatives of this kind are essential to foster an innovative and safer approach to Artificial Intelligence.
Once again, we have seen how ethics and inclusivity must play a pivotal role in the field of Artificial Intelligence. Only through the diversification of perspectives and skills, in particular by strengthening women’s presence and active participation, can we count on a more ethical, balanced, and truly representative development of AI, one that meets the needs of society as a whole.