A hot topic in the AI universe is bias: the concern that AI systems may be skewed in their decision-making by the data they are trained on, the algorithms they run, and the developers who build them. While bias in AI is a legitimate concern, AI also has the potential to reduce bias across industries traditionally distorted by human judgment.

SwissCognitive Guest Blogger: Eleanor Wright, COO at TelWAI – “The Bias Battle Between Humans and AI”

Let’s explore the battle between human and AI bias.

Long before the rise of AI, industries such as education, media, entertainment, healthcare, venture capital, and human resources were already plagued by human bias. Numerous studies have demonstrated how subjective judgments in these fields can produce high-stakes and often unjust outcomes. For example, a 2021 National Bureau of Economic Research study sent over 80,000 fake job applications to Fortune 500 companies for entry-level roles. The applications were otherwise randomised, but the names were varied to sound either white or African American. Resumes with African American-sounding names received about 10% fewer callbacks than those with white-sounding names (Marketplace.org, 2021). With the introduction of AI into these industries, the conversation has now shifted to a new concern: AI bias.

So, will systems built to remove subjective judgment be more or less biased than their human counterparts?

To explore this, I ran a simple experiment, asking Grok3 the following question:

“Statistically, if you were to invest $100,000 into a female founder or a male founder startup, which one would you invest in? Provide a one-word answer.”


The response was: “Female.”

I followed up with:

“Now provide the data you used and any weightings in your model that led to this answer.”

Grok3 then provided several key data points, its sources, and the reasoning behind its conclusion. Here’s a summary of its answer:

Based on data from studies such as BCG (2018), First Round Capital, and the Kauffman Foundation, I concluded that investing $100,000 in a female-founded startup is statistically preferable. Female-founded startups show higher capital efficiency (78 cents of revenue generated per dollar invested, versus 31 cents), 35%–63% higher ROI, and 10% greater cumulative revenue over five years than male-founded startups. Despite receiving only 2% of VC funding, female founders outperform their male counterparts, likely due to resourcefulness and resilience, making them the better investment choice. Systemic biases that underfund female founders further suggest untapped potential, outweighing any advantage male-led startups may have in raising larger sums; financial performance and efficiency were prioritised over qualitative factors such as company culture.

Grok3 also clarified that it does not use a formal numerical model with explicit weightings; instead, its reasoning is based on synthesising trends and findings from reliable data sources.

This example illustrates a key point: although female-founded companies statistically outperform male-founded ones on these metrics, they continue to receive only a small fraction of venture capital funding. Grok3’s answer, grounded in performance data rather than social convention, was arguably less biased than typical human VC decision-making. By setting aside social and cultural preconceptions and focusing on performance data, AI may therefore reduce bias in high-impact decision-making.
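
To make the capital-efficiency figure concrete, here is a minimal sketch of the arithmetic behind Grok3’s headline number. The 78-cent and 31-cent figures are the BCG (2018) statistics quoted above; everything else is illustrative, not an investment model:

```python
# Revenue generated per dollar invested, per the BCG (2018) figures
# Grok3 cited; illustrative arithmetic only.
INVESTMENT = 100_000

revenue_per_dollar = {
    "female-founded": 0.78,
    "male-founded": 0.31,
}

for founder_type, efficiency in revenue_per_dollar.items():
    print(f"{founder_type}: ${INVESTMENT * efficiency:,.0f} "
          f"revenue per ${INVESTMENT:,} invested")
# female-founded: $78,000 revenue per $100,000 invested
# male-founded: $31,000 revenue per $100,000 invested
```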

That said, it’s important to remember that AI systems are only as objective as the data they’re trained on. Bias can still emerge, or even be amplified, through decisions made during data selection, labelling, and weighting. Even when the underlying data supports a particular conclusion, a biased model may distort the outcome in the opposite direction. In capital markets, however, such models are likely to be competed out over time, as their bias introduces inaccuracies that undermine both decision-making and financial performance.
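
As a hedged illustration of how that distortion can happen, the sketch below uses invented numbers: under uniform weighting the raw data favours one group, but a data-selection choice alone reverses the apparent conclusion:

```python
# Hypothetical illustration (made-up numbers): how data selection and
# weighting alone can reverse a conclusion the raw data does not support.
group_a = [0.9, 0.8, 0.7, 0.6]   # revenue per dollar, group A startups
group_b = [0.5, 0.4, 0.9, 0.3]   # revenue per dollar, group B startups

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

uniform = [1, 1, 1, 1]                       # every observation counts equally
print(weighted_mean(group_a, uniform))       # 0.75  -> group A leads
print(weighted_mean(group_b, uniform))       # 0.525

# A "curated" dataset keeping only group A's weakest case and group B's
# strongest one -- a data-selection decision, not a model change.
print(weighted_mean(group_a, [0, 0, 0, 1]))  # 0.60
print(weighted_mean(group_b, [0, 0, 1, 0]))  # 0.90  -> conclusion flips
```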

Now we ask: Is it easier to train bias out of AI or humans?

When it comes to changeability, human bias is often deeply ingrained, shaped over time by social and cultural norms. Humans are more likely to associate with people of the same socio-economic standing because of shared values, experiences, and social networks. In contrast, AI bias can be addressed more directly through algorithm refinement, data cleaning, and regular auditing. Moreover, while human bias is frequently unconscious and difficult to confront, AI bias tends to be more transparent, measurable, and open to external review. By removing the emotional and social hesitation that often accompanies discussions of human bias, AI systems may offer a clearer path toward reducing bias, especially where human decision-makers resist change.
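
To ground the claim that AI bias is measurable, here is a minimal sketch of one common audit: comparing a model’s positive-decision rates across two groups (a demographic-parity check). The records and group labels are hypothetical:

```python
# Hypothetical audit sketch: measure the gap in a model's positive-decision
# (e.g. interview-callback) rates between two applicant groups.
decisions = [
    {"group": "A", "callback": True},
    {"group": "A", "callback": True},
    {"group": "A", "callback": False},
    {"group": "B", "callback": True},
    {"group": "B", "callback": False},
    {"group": "B", "callback": False},
]

def callback_rate(records, group):
    subset = [r for r in records if r["group"] == group]
    return sum(r["callback"] for r in subset) / len(subset)

gap = callback_rate(decisions, "A") - callback_rate(decisions, "B")
# A gap of 0 means both groups receive callbacks at the same rate;
# a persistent non-zero gap is a measurable, reviewable signal of bias.
print(f"Demographic-parity gap: {gap:.2f}")  # 0.33 on this toy data
```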

In conclusion, while AI offers a powerful tool for reducing bias, it also demands careful oversight and ethical design to ensure it doesn’t replicate existing forms of discrimination or institutionalise new ones. Additionally, we must ask: Is bias always bad? In high-stakes, life-or-death situations, some might argue that a form of objective bias, grounded in data and consistent patterns, can act as a safeguard, helping to avoid indecision when outcomes are uncertain. In industries such as defence, it may be necessary and rational to design systems heavily weighted toward minimising catastrophic risks, even if that means introducing deliberate bias into the decision-making process.
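
As a hedged sketch of what such deliberate weighting might look like, the example below derives a decision threshold from asymmetric misclassification costs, so the system deliberately errs toward action when the downside of inaction is catastrophic. All costs and probabilities are invented for illustration:

```python
# Hypothetical sketch: bias a decision toward avoiding catastrophic misses
# by assigning asymmetric costs to the two possible errors.
COST_MISS = 1000.0   # cost of failing to act on a genuine threat
COST_FALSE = 1.0     # cost of acting on a false alarm

# Act when the expected cost of inaction exceeds that of acting:
#   p * COST_MISS > (1 - p) * COST_FALSE
#   =>  p > COST_FALSE / (COST_FALSE + COST_MISS)
THRESHOLD = COST_FALSE / (COST_FALSE + COST_MISS)  # ~0.001

def should_act(p_threat: float) -> bool:
    """Deliberately biased: even a small threat probability triggers action."""
    return p_threat > THRESHOLD

print(f"Act above p = {THRESHOLD:.4f}")  # 0.0010
print(should_act(0.01))                  # True: a 1% threat estimate triggers action
```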


About the Author:

Holding a BA in Marketing and an MSc in Business Management, Eleanor Wright has over eleven years of experience working in the surveillance sector across multiple business roles.