Fr. Vicini Discusses the Bioethical Challenges of Artificial Intelligence

On February 10, Fr. Andrea Vicini, S.J. spoke to the Bioethics Society about the bioethical challenges of artificial intelligence. He defined artificial intelligence (AI) as the ability of a given technology to sense, comprehend, act, and learn, either independently or in tandem with human activity, following the definition set out in the European Commission’s 2019 guidelines. This technology is already widely integrated into our social fabric and has, in Vicini’s words, the power to “grant our freedom—or to take it away from us.”

Vicini began with an example of AI implementation. In India, there are only 11 eye doctors per one million people. In Madurai, Google’s use of AI has expanded access to screening for retinal problems and, as a result, improved the diagnosis and treatment of diabetes. However, the process of incorporating AI into medicine has raised questions about the potential for misdiagnosis, the value of the patient-doctor relationship, and the risk of bias.

With AI developing rapidly within and beyond the medical field, it is critical that these questions find answers at a comparable pace, lest unforeseen dangers take the world by surprise.

Fr. Vicini is able to address these questions from his experience as a Catholic priest, physician, theologian, and bioethicist. He was recently appointed the Michael P. Walsh, S.J., Professor of Bioethics at Boston College.

Medicine is a major target for AI development. AI programs are already used for note-taking, delicate microsurgeries, and the detection of cancer and cardiovascular conditions. Vicini highlighted the potential dangers of further integrating AI into medicine, namely worsened inequity and heightened risks to data security.

Beyond the threats to equity and security, Vicini expressed his own concern for the preservation of the patient-doctor relationship. He explained, “[the] medical profession is an art… an embracing of something that goes beyond the expertise” to encompass “compassion, expertise, accompaniment, support… How can I experience that with a machine?”

A more broadly influential use of AI is the invisible omnipresence of facial recognition. Once restricted to government databases of names and faces, facial recognition technology is now developed by private companies that scrape images from the internet to compile databases of more than three billion images. Clearview AI, for example, sells access to its database to government security organizations such as the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI).

In 16 US states, AI programs are used for “pre-trial risk assessment” to predict a defendant’s future behavior and to set bail; in the United Kingdom, similar technology is used to rate teenagers’ likelihood of becoming criminals. The dangers of this technology are evident in its reliance on judgments based on age, sex, and race. Vicini raised ethical concerns about access to these now-privatized databases and about the risk of misidentification and bias. As the databases grow, so too does the risk of misidentification, and the lack of transparency and regulation among these private companies hinders accountability.

A third key sector to be impacted by AI is manufacturing. The McKinsey Global Institute predicts that 33% of American workers will be displaced and forced to change occupations due to the automation of manufacturing. While these jobs will be eliminated in the short term, Accenture estimates that AI has the potential to increase labor productivity and boost economic growth in the long term. To support this, Vicini cited the absorption of displaced workers into delivery jobs over the past couple of years, and he expressed hope that other growing sectors, namely health and environmental protection, could absorb more displaced workers.

To guide the ethical implementation of AI in medicine, security, and the workplace, Vicini referred to the European Commission’s 2019 Ethics Guidelines for Trustworthy AI and its four guiding principles: Respect for Human Autonomy, Prevention of Harm, Fairness, and Explicability. He also cited Thilo Hagendorff’s astute observation that such ethical guidelines are not easily implemented or enforced.

To supplement the EC guidelines, Vicini recommended reframing the discussion in terms of virtues, contexts, and the common good, and he reflected on these themes through a series of questions to the audience. Virtues, which guide our being and our doing, lead us to act for the good; conversely, in what ways could vices play a role in the misuse of AI? Analysis of data includes analysis of contexts; in what ways could AI misinterpret contextual data and lead to bias and discrimination? The common good is the ultimate realization of all, individually and collectively; how can AI be used to serve it?

Vicini concluded his remarks with the hope that these conversations will continue and develop further at Boston College with the establishment of the new Schiller Institute for Integrated Science and Society.

Annemarie Arnold