Explores the tools and methodologies we can use to de-bias AI outputs
AI is a powerful tool. But what happens when that tool reflects our own biases?
When it perpetuates inequalities?
AI systems can inherit biases from their training data,
amplifying existing societal inequities.
Aanya's research explores the prevalence of bias in AI education tools and proposes a framework for
identifying and mitigating those biases.
Facial recognition systems, for example, show higher error rates for minority women.
In criminal justice, risk-assessment algorithms mislabel African-American defendants as
"high risk" nearly twice as often as white defendants.
Her work highlights the importance of transparent and inclusive AI. It's about ensuring fairness and accuracy in AI-generated content.
It's about building a responsible society.
This paper has the potential to contribute to tangible advancements in AI technology.