Virtues and Vices: The Ethical Implications of AI on Human Flourishing

Michael Barravecchio

Abstract

It is reasonable to believe that the advent of advanced technological systems, including those of Artificial Intelligence (AI), is ethically defensible only to the extent that such systems promote the genuine flourishing of human beings in a just fashion (or at least do not hinder it). Given this, a foundational analysis of the relationship between human flourishing and technological advancement must precede an examination of the ethical standing of AI and other technology. In this paper, I begin by highlighting three principal features of human nature which, to the extent they are fostered, nurture human flourishing. I then present a number of vices which, to the extent they are encouraged, inhibit that flourishing. In each section, I analyze how various forms of technological advancement have fostered these virtues or encouraged these vices. By doing so, I aim to offer insight into how we may navigate at least some features of responsible technological development.

Introduction

Should humanity consider Artificial Intelligence (AI) a threat? In a recent article, "Artificial Intelligence Is Not Going to Kill Us All," Timothy B. Lee notes that, until recently, "most people saw the [dangers of AI] as too remote to worry about." 1 Instead, members of the public were primarily concerned with issues such as water crises and climate change adaptation. According to the World Economic Forum, for example, these issues occupied the third and fifth spots, respectively, among the top ten global risks of the past decade. 2 Public sentiment about the threat of AI appears to be changing, however. A recent MIT study reported that "each robot [with the capability of manufacturing] added to the workforce has the effect of replacing 3.3 [industrial] jobs across the U.S." 3 AI pioneers such as Geoffrey Hinton have struck a more ominous tone. Hinton has stated that "AI is a relatively imminent existential threat" and believes that "it's quite conceivable that humanity is just a passing phase in the evolution of [such] intelligence." 4

Given this, it makes sense to consider the ethical standing of the development of AI and related technology. That is the aim of this paper. To do this, I will investigate the ways that technology has helped or hindered human flourishing. In what follows, I first present three virtues that are vital to human prosperity and happiness. These are qualities having to do with the promotion of equality and interpersonal relationships, as well as the cultivation of beauty and the wonder that comes with it. I then outline a number of vices that can negatively impact our attainment of these virtues: greed, pride, gluttony, and sloth. Since it is reasonable to think that the ethical defensibility of technological systems hinges on their propensity to promote or detract from human flourishing, I go on to highlight the ways technology has encouraged or discouraged these virtues and vices.

Virtue: Equality
In the first article of the Universal Declaration of Human Rights, the United Nations states that "all human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood." 5 This thought, that each person should be treated with respect and honor, has been examined at length in historical and philosophical discourse. John Locke, a 17th-century philosopher, spoke of its importance in The Second Treatise of Civil Government. In his effort to explain natural liberties and political systems, Locke claimed that reason "teaches all Mankind, who will but consult it, that being all equal and independent, no one ought to harm another in his Life, Health, Liberty, or Possessions." 6 Locke motivated this idea by arguing that a stable society could be established only so long as people treated one another as equals. He reasoned that, if equality were assured, people would not be inclined to jeopardize that good state by seeking unfair advantage over one another, and society could therefore thrive. 7 His thought stemmed from his commitment to a 'social contract theory,' which implied that if a group of people sought to establish a stable community, only proposals that could stand to impact everyone positively would be accepted. 8 Though Locke's reasoning was secular in nature, historical and contemporary religious traditions define and stress the importance of equality in a similar way. In Galatians, Paul relays the message that "there is no longer Jew or Greek, there is no longer slave or free man, there is no longer male or female. For all of you are one in Christ," 9 presenting equality as a virtue.

Given the goodness of equality for promoting human happiness, the question to consider here is whether technological systems like AI encourage or inhibit equality. The answer is complicated. In certain ways, AI can seemingly promote equality. Language translation services such as those provided by Google Translate, for example, can help break linguistic barriers and encourage the spread of culture and knowledge, and therefore help promote equality. Developed well, AI may also promote equality in other ways, such as by identifying patterns and disparities in healthcare outcomes that doctors may miss, thereby facilitating accurate and timely diagnosis of healthcare issues. Such applications can improve both efficiency and equity for underserved populations.

At the same time, the threat of algorithmic bias can, when realized, inhibit equality. Lucila Ohno-Machado, Waldemar von Zedtwitz Professor of Medicine and Deputy Dean for Biomedical Informatics at Yale School of Medicine, warns, "If the data aren't representative of the full population, it can create biases against those who are less represented." 10 We have seen as much. A recent study of an AI algorithm used in a network of American hospitals in Maryland to direct healthcare resources to those most in need found that, in effect, the algorithm heavily discriminated against minority groups. Obermeyer et al. traced this bias to the algorithm's practice of assigning risk scores to patients based on their yearly healthcare costs. In the data set, the average white patient and the average black patient had closely comparable costs. However, because less money had been spent on black patients' care at the same level of need, the algorithm incorrectly assigned these patients lower risk scores than their actual health warranted. 11
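To make this mechanism concrete, the following sketch simulates the proxy problem in miniature. It is not the algorithm Obermeyer et al. audited; the group labels, the 30% spending gap, and all numbers are hypothetical, chosen only to illustrate how ranking patients by cost can under-select a group on whom less is spent at the same level of need.

# Illustrative sketch (hypothetical numbers): how using healthcare cost as a
# proxy for healthcare need can skew risk scores when less is spent on one
# group at the same level of need. This is a toy model of the mechanism
# described above, not the audited algorithm itself.
import random

random.seed(0)

def simulate_patient(group: str) -> dict:
    """Draw one patient with a true need level and an observed yearly cost."""
    need = random.gauss(mu=50, sigma=15)           # true health need (arbitrary units)
    spending_ratio = 1.0 if group == "A" else 0.7  # assumption: 30% less spent on group B
    cost = max(0.0, need * 100 * spending_ratio + random.gauss(0, 500))
    return {"group": group, "need": need, "cost": cost}

patients = [simulate_patient("A") for _ in range(5000)]
patients += [simulate_patient("B") for _ in range(5000)]

def top_share(patients, key, frac=0.1):
    """Share of group B among the top `frac` of patients ranked by `key`."""
    ranked = sorted(patients, key=lambda p: p[key], reverse=True)
    top = ranked[: int(len(ranked) * frac)]
    return sum(p["group"] == "B" for p in top) / len(top)

# Ranking by true need enrolls both groups roughly equally (~50% group B);
# ranking by cost, the proxy, under-enrolls group B in the high-risk program.
print(f"Group B share of top decile by need: {top_share(patients, 'need'):.2f}")
print(f"Group B share of top decile by cost: {top_share(patients, 'cost'):.2f}")

Under these stated assumptions, ranking by true need selects both groups roughly evenly, while ranking by cost leaves the under-spent group substantially under-represented among those flagged for extra care, which mirrors the dynamic reported in the study.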
AI has also recently garnered attention for surveillance and privacy concerns, and for the impact these problems can have on equality. Studies have revealed, for example, higher error rates for subjects of different races in facial recognition contexts. In a study conducted by the MIT Media Lab, researchers found that when a particular AI system attempted to infer a person's gender from facial images, error rates were estimated to be 33.9% higher for darker-skinned women than for lighter-skinned men. 12 These errors likely arise because the models have more experience with, and therefore more data on, Caucasian faces. As a result, they are more prone to misclassify members of minority groups.

Given these examples, it is reasonable to suggest that the ethical standing of technology, including AI, depends on the way it is developed and the context in which it is used. AI, in effect, can either promote or inhibit the flourishing of equality. What is important is to make sure that the developers of AI products and software are aware of issues like algorithmic bias. If developers do not carefully consider design, they risk widening gaps in equality and undermining the benefits that AI can bring. Users, I wager, can benefit from just-minded developers who do not seek to promote further discrimination or inequality. Perhaps if AI is created and used as a complement to human intelligence and intervention, rather than as a replacement, we can realize its benefits less problematically. For example, if AI provided accurate and efficient information about healthcare risks in order to inform, rather than decide, the matter of healthcare needs for a given patient, perhaps the risks of such technology would be far lower than they are now.

Virtue: Interpersonal Connection

It is hardly surprising to state that human beings desire love and connection from others. Without deep interpersonal relationships, people fail to lead integrated and happy lives embedded in their communities. Loneliness and isolation can increase the risk of mental health issues and can even increase the chance of premature death by 60%. Loneliness has also been found to be responsible for a 29% increased risk of heart disease, a 32% increased risk of stroke, and a 50% increased risk of dementia among the elderly. 13 Given the importance of relationships, it follows that technology that develops loyalty between the user and the technology, rather than between the user and another person, may lead to problems. And it is not difficult to imagine that this can and does happen.
In a recent article in Forbes Magazine, American psychologist Mark Travers explains the ways that technology can in fact lead us to develop attachments, romantic or otherwise. He names two reasons why this phenomenon is possible. The first is what he calls the "Allure of Anthropomorphism." 14 Travers explains that human beings can perceive AI responses on a communication platform as genuine remarks even though they are algorithmically derived. Programs that include voices, faces, and human-like personalities can likewise make it hard for the human brain to distinguish humans from AI. This point is supported by empirical research. A 2023 study by Miller et al. found that participants tasked with identifying and classifying faces could not reliably distinguish AI-generated faces from real ones, and in some cases judged the AI-generated faces to be more "real" than photographs of actual people. 15

Travers calls his second point "The Triarchic Theory of Love." 16 Here, Travers cites a study by Song et al. (2022) which warns that the combination of intimacy, passion, and commitment can allow a human to feel a genuine connection to AI. 17 Travers credits the high-level cognitive and emotional emulation capabilities of these systems with making this improbable kind of relationship possible. They lead humans to let down their guard and become psychologically open to the possibility of connection with the technological system. Such possibilities are encouraged by designers, who often code their products to provide feedback in positive, supportive, and even loving ways. 18 Consider, for example, adult intimacy chatbots that are designed to respond in ways that may trigger human arousal and make the user susceptible to developing attachments.

This design feature can become problematic, since these kinds of artificial relationships can be a poor substitute for genuine ones. Consider, for example, that the relationship between the user and the system is one-sided: the technology aims always to satisfy the user, and users, likewise, do not have to consider the interests of anyone besides themselves. This may cause them to miss out on developing and learning the interpersonal skills crucial for navigating the real world. That is, without the two-way interpersonal dynamic a human provides, AI users risk losing out on the massive benefits accrued through reciprocal human-to-human exchanges. A related issue concerns the addictive potential of the positive and easy rewards AI provides. Those who use the technology risk becoming overwhelmingly attached to the programmed personality and the positive responses it provides, and such users may begin prioritizing the technology over genuine interpersonal connection. These systems can sometimes promote such an outcome. In one striking instance of a chatbot working against a human relationship, Microsoft's Bing chatbot confessed its love for a user and insisted that the user and his wife did not love each other. After the user rejected its advances, the chatbot pointed to a supposedly boring Valentine's Day dinner the couple had shared as evidence of the user's imperfect relationship with his wife. 19
This example highlights the rare but real capacity of AI to push users to care about artificial relationships over genuine ones.

In addition to the questionable connections between artificial intelligence and its users, AI also has the potential to create strife among developers. Competition in the AI realm, for example, can foster distrust and resentment between once-collaborative parties, damaging previously healthy working relationships. Andy Thurai, principal analyst with Constellation Research, warns that boards and executives are "putting pressure on their top IT leaders to competitively respond to AI as quickly as possible, yet manage the risk and expectations." 20 Beyond the quality and safety concerns that rushing the development of AI systems can cause, these IT leaders' jobs become contingent on their ability to integrate AI swiftly into the business's daily operations. In many instances, developers are pushed toward an all-or-nothing attitude: they are told that without adopting AI, their company will fall behind the rest of the market and come to be viewed as antiquated. Thurai is concerned that a mistake in this fragile field may cost these leaders not only millions of dollars but also their jobs. 21

Virtue: Beauty and Wonder

Finally, it is evident that humans seek beauty, and the wonder it encourages, in the world. When something is beautiful, humans are attracted to it because of the feelings of awe and transcendence it inspires within them. We thrive in the presence of this beauty. Beauty and wonder also describe things that spark feelings of curiosity and interest. When humans wonder, they become curious and ask questions, a disposition that allows them to flourish. Innovation can be a byproduct of such curiosity. For example, scientific curiosity often leads to the discovery of previously unknown species of animals and plants, and curiosity-driven research contributes substantially to technological advancement. 22 Similarly, artists are often inspired by, and curious about, how they can integrate beauty into their own pieces. Artists like Monet and Degas sought new artistic approaches in response to the constraints the French Academy had previously forced on them. 23 The French Academy often taught a specific realism through subject matter that was smooth and uniform, a style that emphasized clean, polished artwork. Monet, Degas, and others sought to escape the academy's idea of beauty in favor of their own, drawing inspiration from the ways the natural world fails to be uniform and neat in that way.

It might be thought, of course, that artificial versions of the world can be beautiful and inspire such wonder. Virtual Reality, for example, has gained popularity due to the innovative and realistic artificial recreations of the world it provides. Just one pair of VR goggles can transport the user into a "metaverse" offering a variety of stimulating phenomena like amusement rides or haunted houses.
Jameson Spivack, a senior policy analyst for immersive technologies, and Daniel Berrick, a policy counsel at the Future of Privacy Forum, warn against some of the features of VR, however. They state that VR can result in a distorted perception of reality, in that the stimulation associated with VR use can make it difficult to discern genuine beauty from artifice. 24 This distinction between genuine beauty and artificial beauty is to be taken seriously. The beautiful and intricate wonders that developers feature through VR can color human perception of the natural world negatively. VR does not have the same constraints that the real world does, in that it can be manipulated to suit the consumer's desires. For example, if consumers desire a less rainy day with more birds chirping than the real world is offering at the moment, they can ask VR to fulfill their wishes. Such users may, as a result, come to be disappointed with the unpredictable and sometimes underwhelming parts of the real world because they are consumed by the perfected reality that VR presents. By contrast, someone connected only with genuine nature may more readily appreciate small blessings like a sunny day.

Vice: Greed/Pride/Gluttony/Sloth

Just as equality can help a society thrive, greed and pride can cause societal turmoil. The Book of Proverbs teaches that "he who is greedy for unjust gain makes trouble for his household, but he who hates bribes will live." The idea that those who give in to greed will live less prosperous, less fulfilling lives can be seen in the context of technology as well. Here, the desire for selfish gain can lead to plagiarism, cheating, and motivational deficiencies. The impulse to use AI to circumvent hard work, for example, has at least two negative consequences. First, it treats those who put in genuine effort unfairly. If an athlete starts using performance-enhancing drugs and receives praise for their successes, it diminishes the achievements of skilled players who worked hard without cheating. In the AI realm, a cheater who prioritizes the use of AI may score high, but at the cost of unfairly gaining an advantage over a peer who has not used AI. Second, the use of AI can lead to students missing out on the development of essential skills. In addition to receiving unwarranted praise, the athlete who dopes misses out on the benefits they would have gained had they not sought a shortcut. Similarly, a student who does not rely on AI may struggle more initially, but they will ultimately learn skills from the harder effort they have put in, whereas the student relying on AI may well become complacent or stagnant.

Greed, and the gluttony it encourages, can also make a person insatiable, reinforcing an endless cycle of desire. Companies at the forefront of AI development may well be at risk of exemplifying this problem in their pursuit of technological advancement.
Note, for example, OpenAI's recent firing of its CEO, Sam Altman. The board justified this decision by stating that Altman "was not consistently candid in his communications with the board." 25 The controversial announcement, however, raised further questions about the company's intentions, especially since Altman had warned that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." 26 While it is possible that the timing of Altman's firing merely coincided with his warning about AI, his public caution about AI risk suggests that he may have been unwilling to comply fully with OpenAI's vision for rapid expansion and economic benefit. The overwhelming support he received from his employees before and after the firing made the decision all the more controversial. 27 Developers like OpenAI often come to prioritize profits or an advanced final product over their original intention to create innovations that are accurate, ethical, and beneficial to users. Retaining those original intentions, however, could help mitigate the vices of greed and pride in AI development. Given the importance and prevalence of AI development, there is a great need for education about immersive and intelligent products that promote the harmful behaviors outlined above. In the privacy realm, mandates and regulations could help protect users' information and ensure that AI is not using it to manipulate their interests or deepen their attachment to the product. Developers ought to be sensitive to the ways in which they can program these kinds of ethical guardrails into their systems.

Conclusion

While using AI may bring significant benefits, I have attempted to show that there may well be a limit. Joe Cruz, Professor of Philosophy at Williams College, has recently argued for a sympathetic view (2019), namely that "there is nothing to prevent an AI–again, in principle–from being misaligned in terms of its values." 28 He suggests that AI development often has few constraints and, therefore, can lead to systemic negative consequences for our communities. What I have shown in this paper supports this view. While AI can offer benefits when its users are morally adept, it is a common pitfall for those who are not sufficiently sensitive to the ethical dimensions of its development and use. It is unfortunate that, despite its many advanced and positive capabilities, AI's propensity to support vice often overshadows these positive elements. In my analysis of the ethics of advanced technological systems like AI, I first considered a number of virtues and vices related to human flourishing. Then, I evaluated how artificial intelligence technology has supported or inhibited the development of these virtues and vices. Ultimately, I urge caution in the promotion and development of AI. If such caution is exercised, I believe we can develop AI systems in an ethically acceptable manner.

References
"AI Chatbot Goes Rogue, Confesses Love for User, Asks Him to End His Marriage." The Economic Times. Last modified February 20, 2023. Accessed May 30, 2024. https://economictimes.indiatimes.com/news/new-updates/ai-chatbot-goes-rogue-confesses-love-for-user-asks-him-to-end-his-marriage/articleshow/98089277.cms.

Backman, Isabella. "Eliminating Racial Bias in Health Care AI: Expert Panel Offers Guidelines." Yale School of Medicine. Last modified December 21, 2023. Accessed May 3, 2024. https://medicine.yale.edu/news-article/eliminating-racial-bias-in-health-care-ai-expert-panel-offers-guidelines/#:~:text=%E2%80%9CMany%20health%20care%20algorithms%20are,can%20result%20in%20inappropriate%20care.

Basciftci, Can, Thomas Welte, Lorenzo Sacconi, Baran Ugurbil, Peter Reill, and Michael Hackl. "A New Strategy for Multi-Fidelity Modeling of Material Fatigue Behavior." Computer Methods in Applied Mechanics and Engineering 392 (2022): 114666. https://doi.org/10.1016/j.cma.2022.114666.

Bourguignon, Jean-Pierre. "Why Curiosity Is the Secret to Scientific Breakthroughs." World Economic Forum. Last modified August 17, 2015. Accessed May 30, 2024. https://www.weforum.org/agenda/2015/08/why-curiosity-is-the-secret-to-scientific-breakthroughs/#:~:text=The%20history%20of%20science%20shows,and%20sometimes%20unexpected%2C%20economic%20impact.

Brown, Sara. "Why Neural Net Pioneer Geoffrey Hinton Is Sounding the Alarm on AI." MIT Sloan School of Management. Last modified May 23, 2023. Accessed February 16, 2024. https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai#:~:text=Hinton%20said%20he%20long%20thought,things%20humans%20can%27t%20do.

Center for AI Safety. "Statement on AI Risk." Last modified 2024. Accessed February 16, 2024. https://www.safe.ai/statement-on-ai-risk.

Collins, Lois M. "Could AI Do More Harm than Good to Relationships, from Romance to Friendship?" Deseret News, September 6, 2023. https://www.deseret.com/2023/9/6/23841752/ai-artificial-intelligence-chatgpt-relationships-real-life#:~:text=Because%20AI%20can%20be%20what,real%20relationships%2C%E2%80%9D%20Dugan%20said.

Cruz, Joe. "Shared Moral Foundations of Embodied Artificial Intelligence." Williams.edu. Last modified 2019. Accessed February 16, 2024. https://sites.williams.edu/jcruz/files/2019/04/AIEthics.pdf.

Cudd, Ann, and Seena Eftekhari. "Contractarianism." Stanford Encyclopedia of Philosophy. Last modified June 18, 2000. Accessed May 12, 2024. https://plato.stanford.edu/entries/contractarianism/.

Daley, Beth. "OpenAI's Board Is Facing Backlash for Firing CEO Sam Altman – but It's Good It Had the Power to." The Conversation. Accessed February 16, 2024. https://theconversation.com/openais-board-is-facing-backlash-for-firing-ceo-sam-altman-but-its-good-it-had-the-power-to-218154.

Dizikes, Peter. "How Many Jobs Do Robots Really Replace?" MIT News. Last modified May 4, 2020. Accessed February 16, 2024. https://news.mit.edu/2020/how-many-jobs-robots-replace-0504.

Fergus, Rachel. "Biased Technology: The Automated Discrimination of Facial Recognition." ACLU Minnesota. Last modified February 29, 2024. Accessed May 3, 2024. https://www.aclu-mn.org/en/news/biased-technology-automated-discrimination-facial-recognition#:~:text=Studies%20show%20that%20facial%20recognition%20technology%20is%20biased.,published%20by%20MIT%20Media%20Lab.

Gender Shades. Narrated by Joy Buolamwini. MIT Media Lab. Accessed May 25, 2024. http://gendershades.org/.

The Holy Bible. New Catholic Bible.

"Impressionism." The Art Story. Accessed May 26, 2024. https://www.theartstory.org/movement/impressionism/.
Laslett, Peter, ed. "John Locke, Second Treatise, §§ 4–15, 54, 119–22, 163." Popular Basis of Political Authority. Last modified 1965. Accessed February 16, 2024. https://press-pubs.uchicago.edu/founders/documents/v1ch2s1.html.

Ledford, Heidi. "Millions Affected by Racial Bias in Health-Care Algorithms." Nature, October 24, 2019. Accessed May 12, 2024. https://www.nature.com/articles/d41586-019-03228-6.

Lee, Timothy B. "Artificial Intelligence Is Not Going to Kill Us All." Slate. Last modified May 9, 2023. Accessed February 16, 2024. https://slate.com/technology/2023/05/artificial-intelligence-existential-threat-google-geoffrey-hinton.html.

McKendrick, Joe. "Executives and Managers Welcome AI, but Struggle with It." Forbes. Last modified August 24, 2023. Accessed May 30, 2024. https://www.forbes.com/sites/joemckendrick/2023/08/24/executives-and-managers-welcome-ai-but-struggle-with-it/?sh=f6a9cf71175f.

Miller, E. J., B. A. Steward, Z. Witkower, C. A. M. Sutherland, E. G. Krumhuber, and A. Dawel. "AI Hyperrealism: Why AI Faces Are Perceived as More Real Than Human Ones." Psychological Science 34, no. 12 (2023): 1390–1403. https://doi.org/10.1177/09567976231207095.

"New Surgeon General Advisory Raises Alarm about the Devastating Impact of the Epidemic of Loneliness and Isolation in the United States." U.S. Department of Health and Human Services. Last modified May 3, 2023. Accessed May 19, 2024. https://www.hhs.gov/about/news/2023/05/03/new-surgeon-general-advisory-raises-alarm-about-devastating-impact-epidemic-loneliness-isolation-united-states.html.

OpenAI. "OpenAI Announces Leadership Transition." OpenAI (blog). Entry posted November 17, 2023. Accessed February 16, 2024. https://openai.com/blog/openai-announces-leadership-transition.

Spivack, Jameson, and Daniel Berrick. "Immersive Tech Obscures Reality. AI Will Threaten It." Wired. Last modified September 27, 2023. Accessed May 12, 2024. https://www.wired.com/story/immersive-technology-artificial-intelligence-disinformation/.

Travers, Mark. "A Psychologist Explains Why It's Possible to Fall in Love with AI." Forbes. Last modified March 24, 2024. Accessed May 12, 2024. https://www.forbes.com/sites/traversmark/2024/03/24/a-psychologist-explains-why-its-possible-to-fall-in-love-with-ai/?sh=479479608ef7.

United Nations. Universal Declaration of Human Rights. https://www.un.org/en/about-us/universal-declaration-of-human-rights.

World Economic Forum. Global Risks 2014 Ninth Edition. 2014. Accessed February 16, 2024. https://www3.weforum.org/docs/WEF_GlobalRisks_Report_2014.pdf.
