Massachusetts Institute of Technology, Legatum Center for Development & Entrepreneurship

Legatum Center Blog


Systemic Inequality Begets AI Inequality – A Call to Action for Entrepreneurs

Image source: The Garage. Artist: Vincent Roche

 

“Widely used algorithm for follow-up care in hospitals is racially biased.” That was the headline of a STAT News article in October 2019. The underlying cause? Biased data. The lesson? Systemic inequality begets AI inequality. According to Nature, there were 64 FDA-approved artificial intelligence (AI) healthcare applications as of the end of 2020, primarily in radiology, cardiology, and internal medicine. The problem lies not in the AI algorithms themselves but in the underlying bias of the training data. At its core, AI is simply data and math. You can have the most advanced mathematical algorithms powered by the most advanced tensor processing units (TPUs) known to science, but if the data is inherently biased, the outcomes will be biased too.

AI in healthcare has seen several waves of hype over the decades, each with its own fanfare. The most recent resurgence has been driven by the digitization of petabytes of healthcare data, spurred in large part by the passage of the Affordable Care Act of 2010, and by the ability to compute over that data on processing units originally developed for the gaming industry. This led large companies such as IBM (with Watson), Amazon (which backed Haven), and Google (which backs Verily) to attempt to revolutionize the industry with their expertise in computational science and AI. Scores of startups backed by private funding have followed, seeking to upend a traditionally slow-moving industry. Accenture estimates that the market for AI in healthcare has grown to roughly 6 billion USD over the course of a few years, and Gartner predicts that by 2021 over 75% of healthcare delivery organizations will have invested in AI.

In the case of racial bias discussed at the opening of this article, the issue stemmed from training the AI on biased insurance claims. Because of long-standing systemic inequalities in America’s healthcare system, Black patients and others from underserved communities generate fewer insurance claims. The AI treated lower historical spending as lower need and incorrectly predicted that these patients required less follow-up care, thereby propagating the bias already embedded in the data. Populations that were already at risk were placed at greater risk by the deployment of a biased AI.
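The mechanics of this failure mode can be shown in a toy sketch (hypothetical numbers, not the actual hospital model): when historical spending is used as the training label for health *need*, a group that historically received less care looks healthier than it really is.

```python
# Toy illustration (hypothetical data): using past healthcare *cost* as a
# proxy label for health *need* under-scores groups that historically
# received less care, even when their true need is identical.

# (true_need, historical_cost) pairs for two groups with equal true need
group_a = [(10, 1000), (10, 1000)]   # group with full access to care
group_b = [(10, 600), (10, 600)]     # group with historically less access

def predicted_need(cost, dollars_per_need_unit=100):
    """A 'model' trained on cost simply rescales past spending."""
    return cost / dollars_per_need_unit

for name, group in (("A", group_a), ("B", group_b)):
    avg_pred = sum(predicted_need(c) for _, c in group) / len(group)
    avg_true = sum(n for n, _ in group) / len(group)
    print(f"group {name}: true need {avg_true:.1f}, predicted need {avg_pred:.1f}")
# Group B's predicted need comes out lower despite identical true need,
# so it would be flagged for less follow-up care.
```

The model never sees race at all; the bias rides in entirely on the choice of label.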

These issues will persist in large part because of where data is aggregated and which datasets companies continue to use. The largest and most robust medical datasets are known to exclude our most vulnerable. Yet companies continue to rely heavily on sources such as the Framingham Heart Study, the European SCORE model, and the Greulich and Pyle (G&P) skeletal bone-age atlas, the last of which was created using only North American Caucasian children of high socioeconomic status. Research has also found that the vast majority of AI applications in healthcare come from only a handful of top-tier institutions, such as Massachusetts General Hospital and Stanford, which builds bias into the data from the start. These biased datasets are often why touted AI applications fail when commercialized and tested outside those centers, even when implemented by technology giants such as IBM and Google.

Obstacles like these are pain points for an industry that continues to grow exponentially, and solving pain points is how entrepreneurs revolutionize an industry and a society. There are many avenues entrepreneurs can follow to close the bias gap.

First, companies can create tools to identify bias. 

Several such tools already exist, ranging from market- and industry-specific offerings like Microsoft’s LinkedIn Fairness Toolkit (LiFT), which helps reduce bias in job searches, to more general applications like IBM’s AI Fairness 360 and Google’s What-If Tool.
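Tools like these typically begin with simple group-level metrics. As a minimal sketch (toy data and plain Python, not any vendor’s actual API), the widely used “disparate impact” ratio compares favorable-outcome rates across groups; a common rule of thumb flags ratios below 0.8:

```python
# Minimal disparate-impact check on toy model predictions (illustrative
# only; real toolkits such as AI Fairness 360 offer many more metrics).

def positive_rate(predictions):
    """Fraction of instances receiving the favorable outcome (1)."""
    return sum(predictions) / len(predictions)

def disparate_impact(privileged, unprivileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged."""
    return positive_rate(unprivileged) / positive_rate(privileged)

# Hypothetical model outputs for two demographic groups
privileged   = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% favorable
unprivileged = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% favorable

ratio = disparate_impact(privileged, unprivileged)
print(f"disparate impact ratio: {ratio:.2f}")  # prints 0.50
if ratio < 0.8:
    print("potential bias: below the 0.8 rule-of-thumb threshold")
```

A passing ratio is not proof of fairness, but a failing one is a cheap, early warning that a model deserves scrutiny.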

Second, companies can create data sets from traditionally marginalized communities.

There are also national campaigns to mitigate this long-standing bias, such as the NIH’s All of Us program, which aims to create the most diverse health dataset in history. This is also an area where entrepreneurs focused on underserved communities or developing markets can make a real impact.

Entrepreneurs are already making data collection more inclusive. BloomerTech, for example, is revolutionizing cardiovascular data gathering for women with its smart-bra technology. Ever Medical Technologies (Ever) is driving medical data digitization in Southeast Asia, enabling millions of people to take part in the future of AI in medicine; by amassing millions of longitudinal records through its blockchain-based electronic health record (EHR), Ever is breaking down barriers to AI. This data could drive a paradigm shift in medical AI applications for people of Asian ancestry, from tailored drug discovery to medical diagnostics. SOMOS is another startup, focused on becoming the 23andMe for Hispanic and Indigenous people throughout the Americas and on ensuring that Latinos are not left out of the coming AI revolution.

Entrepreneurs are also being smarter about how data is used. HUED is a US-based startup tackling racial disparities with a platform that connects patients from marginalized communities to doctors they can trust and who understand the nuances of their communities. HUED’s platform considers each patient’s physical, mental, cultural, and socioeconomic situation and matches the healthcare provider to that patient, reducing disparities and improving healthcare delivery to patients of color.

Third, startups can create less biased models.

Once bias has been detected, companies can use several methods to improve their models. To do so, they must spend time understanding how different data points can act as proxies for discrimination on the basis of race or gender. ZIP code is a classic example: because of the racially biased practice of “redlining,” instituted in 1934 by the Federal Housing Administration, ZIP code has long served as a proxy for race. It is easy to see how data originally used to disincentivize investment in Black communities could lead to biased outcomes if used for AI training.
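One simple way to probe for such a proxy (sketched here on hypothetical data) is to ask how well the suspect feature alone predicts the protected attribute: if ZIP code recovers race much better than chance, it can smuggle race into a model that never sees it directly.

```python
# Sketch of a proxy check (hypothetical data): does ZIP code alone
# predict the protected attribute better than always guessing the
# overall majority class?
from collections import Counter, defaultdict

# (zip_code, race) pairs from a hypothetical training set
records = [
    ("02115", "black"), ("02115", "black"), ("02115", "white"),
    ("02138", "white"), ("02138", "white"), ("02138", "white"),
]

# Majority-class-per-ZIP predictor
by_zip = defaultdict(Counter)
for zip_code, race in records:
    by_zip[zip_code][race] += 1

correct = sum(counts.most_common(1)[0][1] for counts in by_zip.values())
accuracy = correct / len(records)

# Baseline: always guess the overall majority class
baseline = Counter(r for _, r in records).most_common(1)[0][1] / len(records)

print(f"ZIP-only accuracy {accuracy:.2f} vs. majority baseline {baseline:.2f}")
# A large gap means ZIP code is leaking the protected attribute.
```

In production one would use a stronger classifier and held-out data, but even this crude gap is enough to flag a feature for review.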

Companies that invest the time to understand these biased data points can apply counterfactual modeling, which flips the underlying sensitive attributes to test for and reduce known bias. One company tackling this type of gender bias in healthcare and beyond is Equilo, whose algorithms use the transforming agency, access, and power (TAAP) approach to hack equality and transform social and gender analysis. Synthesized, a UK startup, can analyze a dataset in minutes and determine whether a subgroup is tied to a criterion that generates bias, then produce a “fairness score” that lets the user judge how balanced the data is overall.
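A bare-bones version of the counterfactual test (a toy model and toy data, not any named company’s actual method) flips the sensitive attribute for each instance and flags any case where the prediction changes:

```python
# Counterfactual fairness probe: flip the sensitive attribute and flag
# any instance whose score changes. The scoring function below is a
# deliberately biased toy, standing in for a trained model.

def biased_score(features):
    """Toy model that (wrongly) uses gender directly in its score."""
    score = features["income"] / 10000
    if features["gender"] == "female":   # the bias we want to detect
        score -= 1
    return score

def counterfactual_flip(features):
    """Return a copy of the instance with the sensitive attribute flipped."""
    flipped = dict(features)
    flipped["gender"] = "male" if features["gender"] == "female" else "female"
    return flipped

applicants = [
    {"gender": "female", "income": 50000},
    {"gender": "male", "income": 50000},
]

for person in applicants:
    original = biased_score(person)
    flipped = biased_score(counterfactual_flip(person))
    if original != flipped:
        print(f"{person}: score changes {original} -> {flipped} (biased)")
```

If flipping only the sensitive attribute moves the score, the model depends on it (or on a proxy), and the affected data points or features are candidates for correction.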

The promise of AI in healthcare has always been to democratize medicine and provide a better future for all: a future where health is for everyone regardless of age, country of origin, sex, or socioeconomic status. Unfortunately, bias threatens to widen a digital divide that already exacerbates inequality. But any time of great challenge is a time of great opportunity, an opportunity for the best of us to rise and begin to address the wrongs of our past.

To the entrepreneurs who rise to this challenge, a great reward awaits: not just the economic reward of solving industry-wide challenges, but the reward of moving humanity forward, toward a future that is a little more just for all.

This article was written by Dr. Jose Morey, Founder and CEO of Ad Astra Media.
