“Any technology in the history of humanity can be used as a tool and as a weapon,” Mr. Juan Lavista Ferres, chief scientist and co-founder of Microsoft’s “AI for Good Lab,” told us in a recent interview. In this sense, Artificial Intelligence (AI), as we all know, is no different from any other major technology.
But while much attention in recent decades has been paid to the weaponization of AI and to its commercial potential, far less focus has been placed on the many ways AI can help address global problems.
This is where the AI for Good Lab’s recently published book comes in, pulling together the experience and knowledge of a team of veteran researchers at the lab — launched in 2018 to leverage AI for positive social impact — under the guidance of Mr. Lavista Ferres.
Titled “AI for Good: Applications in Sustainability, Humanitarian Action, and Health,” the book provides unique insights into AI’s immense and exciting potential. It is intended to help those at the forefront of addressing these problems envision how AI could help, Lavista Ferres explains.
“A lot of the organizations we work with have a very good understanding of the problem they want to solve, whether they’re working on pancreatic cancer or sustainability or accessibility or some other area. What is sometimes difficult for them to understand is how they can use AI to help solve the problem, and that is in part because they don’t have a deep understanding of AI,” he elaborates.
The AI for Good Lab team realized early on that the best way to help organizations they work with identify the opportunities of AI was to show them examples of the work the lab was doing; even when, as Lavista Ferres underlines, “these examples have nothing to do with what they are doing.”
Mr. Lavista Ferres is a well-known data scientist whose work has been published in leading academic journals and who has spoken at TEDx, Strata, IEEE, Cornell University, and UC Berkeley, among other places, bringing his ideas to an international audience.
To illustrate what he means, he points to one of his team’s early projects, which the lab shared with an NGO supporting people in Syria during the conflict. Despite having nothing to do with the NGO’s work, insight into this project helped them devise a solution to one of the issues they were working on.
Among other things, the NGO collected evidence of war crimes. The project the AI for Good Lab shared with the NGO, done in collaboration with the US National Oceanic and Atmospheric Administration (NOAA), involved the use of acoustic underwater sensors and AI to detect and track Beluga whales.
The NGO, Lavista Ferres tells us, “asked if they could use something like that to understand potential weapons that should not be used because they were restricted by the Geneva Convention. The answer was that, as long as those weapons — in this particular case it was cluster munitions — have a particular sound fingerprint, and a lot of them do, we could use the same models. These are two examples that have nothing to do with one another, they cannot be further apart, but from an AI perspective, they are basically the same problem.”
Since then, when meeting with organizations, the AI for Good Lab team has pointed to examples of its own projects, helping the organizations “brainstorm how they could use AI to help them solve their problem.”
Thus, each of the book’s 30 chapters presents a different real-world example, a project the Microsoft AI for Good Lab has worked on with partners and experts (the lab does not work on projects without a subject matter expert). Each one exemplifies the applications of AI in sustainability, humanitarian action, and health.
AI: A case of “the only viable solution”
Asked to share what he thinks is the most exciting application of AI presented in the book, Lavista Ferres smiles and compares the projects to children. “It’s difficult to select your favorite child,” he says. He is excited about all of them.
Still, and perhaps unsurprisingly given the impact and scale of the issue the project is helping solve, as well as Mr. Lavista Ferres’s background (he holds a degree in computer science, a graduate degree in data mining and machine learning, and a PhD focused on AI in healthcare), he singles out one of his lab’s projects:
“There’s a chapter in the book about retinopathy of prematurity, the leading cause of blindness among children worldwide. The disease didn’t exist a few decades ago, but now we’re seeing exponential growth in many countries, particularly in the global South. The reason it didn’t exist is that it affects premature babies that wouldn’t have survived in the past. But now, thanks to improvements in health and healthcare, more premature babies are surviving and, as a result, there are more babies suffering from retinopathy of prematurity.”
According to research cited in the book, in 2010 an estimated 184,700 preterm babies developed retinopathy of prematurity (ROP) globally, with around 20,000 becoming blind or severely visually impaired. Blindness from ROP, as Lavista Ferres stresses, is preventable.
“All you need to do is diagnose it and have surgery, but you have a very small window. If you have a severe case of ROP, you have 24 hours,” he explains. “The problem is that we only have 200,000 ophthalmologists in the world, of which 10,000 are pediatric ophthalmologists. And we have millions of babies born every year that could be affected by ROP and should be screened. But we don’t have enough pediatric ophthalmologists. Even if we could distribute them perfectly equally around the world, which would be impossible to begin with — some countries do not even have one ophthalmologist — and even if these people would work 24/7, 365 days a year, they would still not be enough to do that work.”
This is where AI becomes the “only viable solution”: the lab worked with ophthalmologists to develop an AI-powered smartphone app that uses the phone’s camera to detect ROP, replacing costly diagnostic equipment.
In terms of accuracy, these algorithms are “as good as a very good ophthalmologist,” Lavista Ferres notes, underlining that they are “not going to replace ophthalmologists”:
“The tool will not do the surgery, but it will help with the screening. It will help doctors scale to the point that they can only focus on those cases that are either severe or have ROP. This is a case where AI is not just a solution, but the only solution we have.”
Leaving no one behind: Bridging the “electricity gap”
In the AI for Good book, Mr. Lavista Ferres writes that AI’s potential reminds him of electricity, noting, however, that today, some 150 years after electricity was first harnessed, access to it “remains out of reach for over 700 million people around the world.” To ensure that a similar gap isn’t created with AI, he says “we need to have the tools to access AI.”
“We know for a fact that, independent of where one is born, the distribution of intelligence is kind of equal around the world,” he goes on. “But the opportunities are not. I say we need to work in that whole pyramid. The first one is access to electricity. If you don’t have access to electricity, you will not have access to the Internet and you will not have access to AI.” To bridge the electricity access gap, we need to “understand that there are people left behind in this world” and “continue to invest in the infrastructure to help people access electricity.”
“Second, we need access to the Internet, in an affordable way,” Lavista Ferres adds.
The talent problem: The shortfall in AI expertise
One concern presented in the book is that governments and organizations “often don’t have the capacity to attract or retain AI experts” needed to solve problems they are working on.
“The majority of the AI experts are working in the financial services or in the tech sector,” Lavista Ferres notes. “Unfortunately, right now there is a huge demand for AI talent, which makes it more difficult for organizations and NGOs around the world to hire or retain these talents.”
But with “many more people getting into AI,” Mr. Lavista Ferres is hopeful. “I think that sometimes those trains move slowly. I do expect and I do hope that a few years from now, people being trained as medical doctors will also be trained in AI.”
“No matter the problem that you’re working with, whether you are an architect, an accountant, a historian, or a physicist, you need to be working with data and because you’re working with data, AI will be able to help,” he adds.
Doing the right thing
Regarding how AI’s risks can be mitigated, Lavista Ferres returns to electricity, reminding us of the “war of the currents” between alternating current and direct current in the early days of electrification, and specifically the debate over the risks of alternating current.
Alternating current allows electricity to be moved across very long distances; power from generating plants, for example, is transmitted to homes as alternating current. Direct current powers computers, remote controls, LEDs, and phones, among other things. While safer, direct current could not, at the time, carry electricity efficiently over long distances.
“If you cannot do that, you cannot electrify cities,” Lavista Ferres notes. “But there is risk — alternating current can kill people. Thomas Edison, the biggest promoter of direct current, was very much against it. He would electrocute poor dogs in front of people to show them the risk of alternating current. And he was right in the sense that this was a risk for society.”
The way Lavista Ferres sees it, the problem was that Edison “couldn’t think of how society could work together to make sure we have safe electricity.”
“Today people are not questioning whether alternating current is risky. Why? Because we have mitigated a significant amount of those risks. The risk of AC is not zero, but we have mitigated it and that’s something that people need to understand,” he says.
When it comes to AI, Lavista Ferres believes it is our responsibility to “work together to minimize the risks,” and he is “eternally optimistic.” We’ve done it with all other technologies, he says — “AI is just one more.”
“Society always does the right thing, even if sometimes it takes some time,” he adds, reminding us that the majority of improvements over the last 200 years, such as in life expectancy, access to services, water, and sanitation, are all a result of using technology for good.
Editor’s Note: The opinions expressed here by the authors are their own, not those of Impakter.com — Cover Photo Credit: Microsoft.