Pros and Cons of Artificial Intelligence Essay (Critical Writing)

Artificial intelligence (AI) has developed significantly over the last decade. AI has both pros and cons, and the consequences of its growing influence remain controversial. Artificial intelligence can make many processes more efficient because it works quickly and accurately, and in the areas where it is applied it brings mainly positive results. Despite its merits, AI has disadvantages related to probable errors, the duration of the learning process, and public fear of a rebellion against humanity. For these reasons, AI researchers are divided into two opposing camps. To reach a clear opinion, it is necessary to analyze each side thoroughly.

The first group of researchers regards AI as a tool for developing technology, raising the standard of living, and advancing the prosperity of humankind as a whole. Representatives of this view are Groom and Jones, doctors of philosophy and eminent figures in the information and communication sciences with successful technological careers. Their views are multifaceted and valuable, as they consider both the technological and the philosophical sides of the issue.

They highlight several essential characteristics that distinguish AI from human performance. The first is that “the introduction of AI makes it possible to process large amounts of data quickly and efficiently, minimizing human involvement and, by reducing the human factor, minimizing error” (Groom and Jones 87). The second is that “neural networks do not have a performance threshold,” which means that if a person can check, for example, 100 parts for quality in a day, the system will review as many as server capacity allows (89). These statements are well founded and emphasize the advantages of artificial intelligence.

I agree with the authors on this point and can only add that AI can work around the clock without the need for sleep or rest. Its productivity and quality of work do not decline with working hours, fatigue, or personal circumstances. This means that the main advantages of currently existing solutions are the ability to automate many areas of activity while minimizing human participation and to expand the areas where software can be used instead of human labor.

Groom and Jones state that “with the help of AI technologies, the speed and level of automation of processing large amounts of information are significantly growing while improving the quality and manufacturability” (110). I agree that, at the moment, AI is especially good at analyzing large amounts of data, where a person would take too much time and conventional programs that do not use machine learning could not achieve the necessary accuracy. With the right approach to new technologies, the use of data increases, along with the efficiency and quality of management decisions.

On the other hand, in my opinion, not only intellectual and computational work should be shifted to AI. Among its other advantages, AI technology is well suited to optimizing mechanical activities, automating routine operations, and operating in hazardous industries. Proper use of robotics on conveyor lines may allow a switch to non-stop operation, reduce enterprise costs, and improve product quality. It is essential to understand that progress in the world is still strongly connected to technology (Shaw 5). Moreover, once automation is introduced, these operations will be performed faster and more cheaply than a person could perform them.

To fully understand the pros and cons of AI, a different perspective needs to be considered. I consider Hurley a prominent representative of the view that artificial intelligence cannot replace a person and has many weaknesses. Hurley argues that “each task that is being solved now is RnD in its purest form: a person needs to define, systematize, come up with a solution, and implement this solution” (107). That is a creative process requiring a high level of scientific knowledge and expertise in the field where the solution is applied, whether it be FMCG, space, medicine, or the implementation of neural network systems (Hurley 97). The fundamental difficulty of such projects is the unpredictability of the result. This view is supported by Harkut and Kasat, who note that AI technology relies on a programmed, automated decision-making process that simulates human intelligence. AI follows a specific code for decision-making; however, this may lead to inconsistencies, since it does not account for potentially important human factors.

I do not fully agree with the authors on this question: modeling an AI system makes it possible to predict how a project will develop without actually implementing it. At the same time, descriptions of the underlying technologies and algorithms say practically nothing to a person without a mathematical education and practical experience, and only a few top managers among customers have such a background.

Another counterargument, and a weak point of artificial intelligence, is the lack of sufficiently large data sets for training AI. Technologies for collecting and processing data are constantly evolving, and companies can already implement data lake technologies, which are becoming an excellent platform for training artificial intelligence. However, this is often still not enough for fast neural network training.

I also consider it important to study the views on artificial intelligence of Dr. Garg and Dr. Agrawal. Dr. Garg is an assistant director of executive programs management at Amity University, Uttar Pradesh, India; he holds a Ph.D., is UGC-NET qualified, and has more than 15 years of academic experience. Dr. Agrawal likewise holds a Ph.D., is UGC-NET qualified, and works as a professor with more than 18 years of experience in teaching and research. Because of their significant experience and their ability to evaluate the use of AI in real business processes, their opinion seems authoritative and worthy of attention. They argue that implementing artificial intelligence in business is a financially costly process (Agrawal and Garg 21). For industrial enterprises, such solutions may lead to a delayed economic effect.

Transitional solutions and real-time data visualization allow the economic benefits to be approximated. Another difficulty is the need to restructure the business process when introducing intelligent systems (Agrawal and Garg 27). In my view, it is not enough to buy such a solution and simply set it down, like a flower in a vase or an application on a computer. It is necessary to fit the solution into the business process: create, reconfigure, or even cancel some operations, retrain people, and optimize staff.

Most existing and developing artificial intelligence products aim to perform routine tasks currently handled by many specialists. Although this makes the work more straightforward, it also reduces the number of jobs. In Oshida’s opinion, it is unprofitable to keep a certain number of professionals performing tasks under AI control (99). Accordingly, for the sake of economy and profit, employers will seek to eliminate redundant employees (Faris et al. 61). Unemployment increases, and retraining specialists takes much time and additional resources from the state and the educational system. Further complicating the situation, artificial intelligence does not reason in human terms (Faris et al. 54). As a result, a robot or computer does not take ethical norms and values into account. Its activity is designed to complete tasks, not to create a positive atmosphere or to consider other people’s interests and teamwork.

Based on all the above points of view, I can conclude that artificial intelligence is an ambiguous technology for humanity. However, most of the problems that arise as a result of AI integration can be solved by indirect methods. For example, the number of vacancies for programmers and other professionals whose work involves the control and maintenance of computers will increase. From my point of view, workers dismissed due to lack of demand can be retrained to the new standards, which will help keep the unemployment rate at the same level. At the same time, one should consider the considerable utility that AI provides to corporations and to human activity in general. I believe that this phenomenon has far more advantages than disadvantages, so abandoning artificial intelligence would be inefficient.

Artificial intelligence is a somewhat controversial issue, and the discussions around it are essential for its development. It is worth saying that artificial intelligence has two sides, positive and negative. Admittedly, the positive side is more significant, as it has enabled many industries to improve their operations and make them significantly more efficient. Solving the negative consequences of artificial intelligence requires a non-standard approach.

Works Cited

Agrawal, Rashmi, and Vikas Garg, editors. Transforming Management Using Artificial Intelligence Techniques. CRC Press, 2020.

Faris, Hossam, et al., editors. Evolutionary Machine Learning Techniques: Algorithms and Applications. Springer Singapore, 2017.

Groom, Frank M., and Stephan S. Jones, editors. Artificial Intelligence and Machine Learning for Business for Non-Engineers. CRC Press, 2020.

Harkut, Dinesh G., and Kashmira Kasat. “Introductory Chapter: Artificial Intelligence - Challenges and Applications.” Artificial Intelligence - Scope and Limitations, 2019.

Hurley, Richard. Big Data. Ationa Publishers, 2020.

Oshida, Yoshiki. Artificial Intelligence for Medicine: People, Society, Pharmaceuticals, and Medical Materials. De Gruyter, 2021.

Shaw, James, et al. “Artificial Intelligence and the Implementation Challenge.” Journal of Medical Internet Research, vol. 21, no. 7, 2019, p. 11.


Artificial Intelligence Pros and Cons: Essay Sample


Artificial Intelligence (AI) is a machine’s ability to demonstrate intelligence comparable to that of humans. AI algorithms are developed for specific tasks, meaning that a device can scan its environment and perform actions to achieve a set goal. The use of artificial intelligence brings both improvements in various domains of human activity, such as medicine or manufacturing, and dangers connected to the loss of jobs and the unknown implications of AI’s advancement.

AI is a technology that was first introduced in 1956 and has undergone several transformations over the years, including periods of skepticism about its capabilities and a lack of appropriate hardware to support its work. The current stage of AI’s development is innovation, meaning that different AI algorithms are being introduced to the market. Jennings states, “today, you will find AIs in factories, schools, hospital banks, police stations” (1). Learning from the given information or the environment and solving problems are the two critical characteristics of AI. With AI’s rapid development and improvement, many aspects of people’s lives are at stake, including millions of jobs and the way individuals receive healthcare or education services.

Since AI can be applied in different settings, the stakeholders of this technology are all people. The main conversation surrounding the increasing popularity of AI concerns the safety and reliability of the algorithms. Another aspect is the impact of AI’s use on employment prospects in many industries, including logistics and manufacturing. The positive effect of AI that some sources cite is connected to the superiority of its problem-solving and analytical abilities. AI can produce better analysis and lower the costs associated with manufacturing. Makridakis states that AI will help companies make better decisions based on data analysis, creating an additional competitive advantage (46). Hence, AI will help reduce costs and support more efficient decisions based on the algorithms’ analysis.

The funds available to companies because of improved efficiency will be invested in further development, which will lead to the introduction of better products and services. Jennings states that AI has already allowed many companies, specifically auto manufacturers, to minimize the number of people engaged in production by applying this innovative technology (2). Another example is Amazon Go, a fully autonomous grocery store in Seattle that does not have cashiers or sales representatives. Moreover, AI is used in medicine to analyze photographs, detect skin cancer, or help medical professionals diagnose conditions (Jennings 2).

It improves healthcare quality since people receive better diagnoses more quickly. Wilson et al. argue that although AI already disrupts the workforce, it will create a substantial number of jobs because the machinery will require maintenance and programming (14). The authors argue that newly created positions will require people to work with AI to produce better results for their companies.

The arguments supporting AI and its implementation in different domains highlight the positive impact that it will have on companies and people. These sources provide information about AI’s prospects, focusing on how the benefits will outweigh the negative impact. These stakeholders choose this approach because AI will disrupt many aspects of people’s lives, most notably by eliminating millions of jobs. Hence, by focusing on the jobs created to maintain and support the technology, and on other positive outcomes, these stakeholders can highlight the need to address the immediate issues that will arise soon, such as job shortages.

Currently, AI can perform varied tasks, and this technology will continue to evolve as new computers and other advanced hardware are introduced to the market. In his interview with Bill Gates, Holley discusses the potential dangers of AI and the destruction it could cause if not managed properly. Gates states that the technology industry will undergo rapid development and progress over the following thirty years, making accurate vision and speech recognition with AI possible.

Holley references the opinion of scientist Stephen Hawking, who stated that AI could end the human race. Other well-known technology experts and entrepreneurs, for instance, Elon Musk, have voiced a similar opinion. The latter stated that people should be “very careful about artificial intelligence” (Holley). The main argument is that it is unclear how AI will develop in the future and what capabilities it will have.

If AI surpasses humans’ ability to think and solve tasks, it is unclear how it will interact with people. Popular arguments against AI are presented as warnings. Technology specialists choose this approach to discuss their opinions on AI because the technology is developing rapidly and no governmental or international regulations are in place. This reasoning is most evident in Elon Musk’s commentary on AI, in which he states that regulatory oversight should be introduced to contain and oversee the development of AI.

The previous paragraph focused on stakeholders’ opinions about the future of AI. However, there are several ways in which AI already affects people’s day-to-day lives negatively. Knight and Hao cite the crashes of self-driving cars in recent months and the various manipulations of information carried out by bots as examples of AI’s misuse. The recent Cambridge Analytica case, a data-collection scandal, revealed that individuals’ news feeds could be manipulated to display particular information, which can potentially sway opinions regarding significant social and political problems.

Finally, stakeholders on both sides of the AI debate note that once the technology is advanced enough, it will be used in some significant domains of people’s lives. For example, Gates states that once robots can move objects appropriately, they will be used in hospitals to help with patient transportation (Holley). Similar applications will be possible in warehouses and other facilities with large inventories. Jennings states that self-driving trucks and other AI-supported technology will eliminate one-third of jobs in the United States (2). Similarly to the previous argument, these concerns are voiced as a warning for politicians and organizations developing AI.

Overall, AI was first introduced in 1956 as a technology mimicking human thinking and task-solving capabilities. The main stakeholders in the debate are all people, since AI is already used in different domains, for example, healthcare and education. The main argument supporting AI is the efficiency and capability of this technology, which surpass human abilities. However, technology specialists and scientists who argue against the uncontrolled development of AI point out that it is unclear how AI will develop in the future and how humanity will interact with it.

Works Cited

Holley, Peter. “Bill Gates on Dangers of Artificial Intelligence: ‘I Don’t Understand Why Some People Are Not Concerned.’” Washington Post, 2015. Web.

Jennings, Charles. Artificial Intelligence: Rise of the Lightspeed Learners. Rowman & Littlefield, 2019.

Knight, Will, and Karen Hao. “Never Mind Killer Robots—Here Are Six Real AI Dangers to Watch Out for in 2019.” MIT Technology Review, 2019.

Makridakis, Spyros. “The Forthcoming Artificial Intelligence (AI) Revolution: Its Impact on Society and Firms.” Futures, vol. 90, 2017, pp. 46-60.

Wilson, James, et al. “The Jobs That Artificial Intelligence Will Create.” MIT Sloan Management Review, vol. 58, no. 4, 2017, pp. 14-16.


14 Risks and Dangers of Artificial Intelligence (AI)

AI has been hailed as revolutionary and world-changing, but it’s not without drawbacks.

Mike Thomas

As AI grows more sophisticated and widespread, the voices warning against the potential dangers of  artificial intelligence grow louder.

“These things could get more intelligent than us and could decide to take over, and we need to worry now about how we prevent that happening,”  said Geoffrey Hinton , known as the “Godfather of AI” for his foundational work on  machine learning and  neural network algorithms. In 2023, Hinton left his position at Google so that he could “ talk about the dangers of AI ,” noting a part of him even  regrets his life’s work .

The renowned computer scientist isn’t alone in his concerns.

Tesla and SpaceX founder Elon Musk, along with over 1,000 other tech leaders, urged in a 2023 open letter  to put a pause on large AI experiments, citing that the technology can “pose profound risks to society and humanity.”

Dangers of Artificial Intelligence

  • Automation-spurred job loss
  • Privacy violations
  • Algorithmic bias caused by bad data
  • Socioeconomic inequality
  • Market volatility
  • Weapons automatization
  • Uncontrollable self-aware AI

Whether it’s the increasing automation of certain jobs,  gender and  racially biased algorithms or autonomous weapons that operate without human oversight (to name just a few), unease abounds on a number of fronts. And we’re still in the very early stages of what AI is really capable of.

14 Dangers of AI

Questions about who’s developing AI and for what purposes make it all the more essential to understand its potential downsides. Below we take a closer look at the possible dangers of artificial intelligence and explore how to manage its risks.

Is AI Dangerous?

The tech community has long debated the threats posed by artificial intelligence. Automation of jobs, the spread of fake news and a dangerous arms race of AI-powered weaponry have been mentioned as some of the biggest dangers posed by AI.

1. Lack of AI Transparency and Explainability  

AI and deep learning models can be difficult to understand, even for those who work directly with the technology. This leads to a lack of transparency about how and why AI reaches its conclusions, leaving it unclear what data AI algorithms use and why they may make biased or unsafe decisions. These concerns have given rise to the use of explainable AI, but there is still a long way to go before transparent AI systems become common practice.

To make matters worse, AI companies continue to remain tight-lipped about their products. Former employees of OpenAI and Google DeepMind have accused both companies of concealing the potential dangers of their AI tools. This secrecy leaves the general public unaware of possible threats and makes it difficult for lawmakers to take proactive measures ensuring AI is developed responsibly.      

2. Job Losses Due to AI Automation

AI-powered job automation is a pressing concern as the technology is adopted in industries like  marketing ,  manufacturing and  healthcare . By 2030, tasks that account for up to 30 percent of hours currently being worked in the U.S. economy could be automated — with Black and Hispanic employees left especially vulnerable to the change —  according to McKinsey . Goldman Sachs even states  300 million full-time jobs could be lost to AI automation.

“The reason we have a low unemployment rate, which doesn’t actually capture people that aren’t looking for work, is largely that lower-wage service sector jobs have been pretty robustly created by this economy,” futurist Martin Ford told Built In. With AI on the rise, though, “I don’t think that’s going to continue.”

As  AI robots become smarter and more dexterous, the same tasks will require fewer humans. And while AI is estimated to create  97 million new jobs by 2025 , many employees won’t have the skills needed for these technical roles and could get left behind if companies don’t  upskill their workforces .

“If you’re flipping burgers at McDonald’s and more automation comes in, is one of these new jobs going to be a good match for you?” Ford said. “Or is it likely that the new job requires lots of education or training or maybe even intrinsic talents — really strong interpersonal skills or creativity — that you might not have? Because those are the things that, at least so far, computers are not very good at.”

As technology strategist Chris Messina has pointed out,  fields like law and accounting are primed for an AI takeover as well. In fact, Messina said, some of them may well be decimated. AI already is having a significant impact on medicine. Law and accounting are next, Messina said, the former being poised for “a massive shakeup.”

“It’s a lot of attorneys reading through a lot of information — hundreds or thousands of pages of data and documents. It’s really easy to miss things,” Messina said. “So AI that has the ability to comb through and comprehensively deliver the best possible contract for the outcome you’re trying to achieve is probably going to replace a lot of corporate attorneys.”


3. Social Manipulation Through AI Algorithms

Social manipulation also stands as a danger of artificial intelligence. This fear has become a reality as politicians rely on platforms to promote their viewpoints, with one example being Ferdinand Marcos, Jr., wielding a  TikTok troll army to capture the votes of younger Filipinos during the Philippines’ 2022 election. 

TikTok, which is just one example of a social media platform that relies on  AI algorithms , fills a user’s feed with content related to previous media they’ve viewed on the platform. Criticism of the app targets this process and the algorithm’s failure to filter out harmful and inaccurate content, raising concerns over  TikTok’s ability to protect its users from misleading information. 

Online media and news have become even murkier in light of AI-generated images and videos, AI voice changers as well as  deepfakes infiltrating political and social spheres. These technologies make it easy to create realistic photos, videos, audio clips or replace the image of one figure with another in an existing picture or video. As a result, bad actors have another avenue for  sharing misinformation and war propaganda , creating a nightmare scenario where it can be nearly impossible to distinguish between credible and faulty news.

“No one knows what’s real and what’s not,” Ford said. “You literally cannot believe your own eyes and ears; you can’t rely on what, historically, we’ve considered to be the best possible evidence ... That’s going to be a huge issue.”

4. Social Surveillance With AI Technology

In addition to its more existential threat, Ford is focused on the way AI will adversely affect privacy and security. A prime example is  China’s use of facial recognition technology in offices, schools and other venues . Besides tracking a person’s movements, the Chinese government may be able to gather enough data to monitor a person’s activities, relationships and political views. 

Another example is U.S. police departments embracing predictive policing algorithms to anticipate where crimes will occur. The problem is that these algorithms are influenced by arrest rates, which  disproportionately impact Black communities . Police departments then double down on these communities, leading to over-policing and questions over whether self-proclaimed democracies can resist turning AI into an authoritarian weapon.

“Authoritarian regimes use or are going to use it,” Ford said. “The question is, ‘How much does it invade Western countries, democracies, and what constraints do we put on it?’”


5. Lack of Data Privacy Using AI Tools

A 2024 AvePoint survey found that the top concern among companies is data privacy and security . And businesses may have good reason to be hesitant, considering the large amounts of data concentrated in AI tools and the lack of regulation regarding this information. 

AI systems often collect personal data to  customize user experiences or to help train the AI models you’re using (especially if the AI tool is free). Data may not even be considered secure from other users when given to an AI system, as one bug incident that occurred with  ChatGPT in 2023 “allowed some users to see titles from another active user’s chat history.” While there are laws present to protect personal information in some cases in the United States, there is no explicit federal law that protects citizens from data privacy harm caused by AI.


6. Biases Due to AI

Various forms of AI bias are detrimental too. Speaking to the New York Times , Princeton computer science professor Olga Russakovsky said AI bias goes well beyond gender and race. In addition to data and  algorithmic bias (the latter of which can “amplify” the former), AI is developed by humans — and  humans are inherently biased .

“A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities,” Russakovsky said. “We’re a fairly homogeneous population, so it’s a challenge to think broadly about world issues.”

The narrow views of individuals have culminated in an AI industry that leaves out a range of perspectives. According to UNESCO , only 100 of the world’s 7,000 natural languages have been used to train top  chatbots . It doesn’t help that 90 percent of online higher education materials are already produced by European Union and North American countries, further restricting AI’s training data to mostly Western sources.   

The limited experiences of AI creators may explain why  speech-recognition AI often fails to understand certain dialects and accents, or why companies fail to consider the consequences of a  chatbot impersonating historical figures . If businesses and legislators don’t exercise greater care to avoid recreating powerful prejudices, AI biases could spread beyond corporate contexts and exacerbate societal issues like housing discrimination .   

7. Socioeconomic Inequality as a Result of AI  

If companies refuse to acknowledge the inherent biases baked into AI algorithms, they may compromise their  DEI initiatives through  AI-powered recruiting . The idea that AI can measure the traits of a candidate through facial and voice analyses is still tainted by racial biases, reproducing the same  discriminatory hiring practices businesses claim to be eliminating.  

Widening socioeconomic inequality sparked by AI-driven job loss is another cause for concern, revealing the class biases of how AI is applied. Workers who perform more manual, repetitive tasks have experienced  wage declines as high as 70 percent because of automation, with office and desk workers remaining largely untouched in AI’s early stages. However, the increase in  generative AI use is  already affecting office jobs , making for a wide range of roles that may be more vulnerable to wage or job loss than others.

8. Weakening Ethics and Goodwill Because of AI

Along with technologists, journalists and political figures, even religious leaders are sounding the alarm on AI’s potential pitfalls. In a  2023 Vatican meeting and in his  message for the 2024 World Day of Peace , Pope Francis called for nations to create and adopt a binding international treaty that regulates the development and use of AI.

Pope Francis warned against AI’s ability to be misused, and “create statements that at first glance appear plausible but are unfounded or betray biases.” He stressed how this could bolster campaigns of disinformation, distrust in communications media, interference in elections and more — ultimately increasing the risk of “fueling conflicts and hindering peace.” 

The rapid rise of generative AI tools gives these concerns more substance. Many users have applied the technology to get out of writing assignments, threatening academic integrity and creativity. Plus, biased AI could be used to determine whether an individual is suitable for a job, mortgage, social assistance or political asylum, producing possible injustices and discrimination, noted Pope Francis.


9. Autonomous Weapons Powered By AI

As is too often the case, technological advancements have been harnessed for warfare . When it comes to AI, some are keen to do something about it before it’s too late: In a  2016 open letter , over 30,000 individuals, including AI and  robotics researchers, pushed back against the investment in AI-fueled autonomous weapons. 

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” they wrote. “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”

This prediction has come to fruition in the form of Lethal Autonomous Weapon Systems , which locate and destroy targets on their own while abiding by few regulations. Because of the proliferation of potent and complex weapons, some of the world’s most powerful nations have given in to anxieties and contributed to a  tech cold war .  

Many of these new weapons pose  major risks to civilians on the ground, but the danger becomes amplified when autonomous weapons fall into the wrong hands. Hackers have mastered various  types of cyber attacks , so it’s not hard to imagine a malicious actor infiltrating autonomous weapons and instigating absolute armageddon.  

If political rivalries and warmongering tendencies are not kept in check, artificial intelligence could end up being applied with the worst intentions. Some fear that, no matter how many powerful figures point out the dangers of artificial intelligence, we’re going to keep pushing the envelope with it if there’s money to be made.   

“The mentality is, ‘If we can do it, we should try it; let’s see what happens,’” Messina said. “‘And if we can make money off it, we’ll do a whole bunch of it.’ But that’s not unique to technology. That’s been happening forever.”

10. Financial Crises Brought About By AI Algorithms

The  financial industry has become more receptive to AI technology’s involvement in everyday finance and trading processes. As a result, algorithmic trading could be responsible for our next major financial crisis in the markets.

While AI algorithms aren’t clouded by human judgment or emotions, they also  don’t take into account contexts , the interconnectedness of markets and factors like human trust and fear. These algorithms then make thousands of trades at a blistering pace with the goal of selling a few seconds later for small profits. Selling off thousands of trades could scare investors into doing the same thing, leading to sudden crashes and extreme market volatility.

Instances like the  2010 Flash Crash and the  Knight Capital Flash Crash serve as reminders of what could happen when trade-happy algorithms go berserk, regardless of whether rapid and massive trading is intentional.  

This isn’t to say that AI has nothing to offer to the finance world. In fact, AI algorithms can help investors make smarter and more informed decisions on the market. But finance organizations need to make sure they  understand their AI algorithms and how those algorithms make decisions. Companies should consider  whether AI raises or lowers their confidence before introducing the technology to avoid stoking fears among investors and creating financial chaos.

11. Loss of Human Influence

An overreliance on AI technology could result in the loss of human influence — and a lack in human functioning — in some parts of society. Using AI in healthcare could result in reduced  human empathy and reasoning , for instance. And applying generative AI for creative endeavors could diminish  human creativity and emotional expression . Interacting with AI systems too much could even cause  reduced peer communication and social skills. So while AI can be very helpful for automating daily tasks, some question if it might hold back overall human intelligence, abilities and need for community.

12. Uncontrollable Self-Aware AI

There also comes a worry that AI will progress in intelligence so rapidly that it will become  sentient , and act  beyond humans’ control — possibly in a malicious manner. Alleged reports of this sentience have already been occurring, with one popular account being from a former Google engineer who stated the AI chatbot  LaMDA was sentient and speaking to him just as a person would. As AI’s next big milestones involve making systems with  artificial general intelligence , and eventually  artificial superintelligence , cries to completely stop these developments  continue to rise .


13. Increased Criminal Activity 

As AI technology has become more accessible, the number of people using it for criminal activity has risen. Online predators can now generate images of children , making it difficult for law enforcement to determine actual cases of child abuse. And even in cases where children aren’t physically harmed, the use of children’s faces in AI-generated images presents new challenges for protecting children’s online privacy and digital safety .  

Voice cloning has also become an issue, with criminals leveraging AI-generated voices to impersonate other people and commit phone scams . These examples merely scratch the surface of AI’s capabilities, so it will only become harder for local and national government agencies to adjust and keep the public informed of the latest AI-driven threats.  

14. Broader Economic and Political Instability

Overinvesting in a specific material or sector can put economies in a precarious position. Like steel , AI could run the risk of drawing so much attention and financial resources that governments fail to develop other technologies and industries. Plus, overproducing AI technology could result in dumping the excess materials, which could potentially fall into the hands of hackers and other malicious actors.

How to Mitigate the Risks of AI

AI still has  numerous benefits , like organizing health data and powering self-driving cars. To get the most out of this promising technology, though, some argue that plenty of regulation is necessary .

“There’s a serious danger that we’ll get [AI systems] smarter than us fairly soon and that these things might get bad motives and take control,” Hinton  told NPR . “This isn’t just a science fiction problem. This is a serious problem that’s probably going to arrive fairly soon, and politicians need to be thinking about what to do about it now.”

Develop Legal Regulations

AI regulation has been a main focus for dozens of countries, and now the U.S. and European Union are creating more clear-cut measures to manage the rising sophistication of artificial intelligence. In fact, the White House Office of Science and Technology Policy (OSTP) published the AI Bill of Rights in 2022, a document intended to help guide responsible AI use and development. Additionally, President Joe Biden issued an executive order in 2023 requiring federal agencies to develop new rules and guidelines for AI safety and security.

Although legal regulations mean certain AI technologies could eventually be banned, this doesn’t prevent societies from exploring the field.

Ford argues that AI is essential for countries looking to innovate and keep up with the rest of the world.

“You regulate the way AI is used, but you don’t hold back progress in basic technology. I think that would be wrong-headed and potentially dangerous,” Ford said. “We decide where we want AI and where we don’t; where it’s acceptable and where it’s not. And different countries are going to make different choices.”


Establish Organizational AI Standards and Discussions

On a company level, there are many steps businesses can take when integrating AI into their operations. Organizations can  develop processes for monitoring algorithms, compiling high-quality data and explaining the findings of AI algorithms. Leaders could even make AI a part of their  company culture and routine business discussions, establishing standards to determine acceptable AI technologies.

Guide Tech With Humanities Perspectives

Though when it comes to society as a whole, there should be a greater push for tech to embrace the diverse perspectives of the humanities . Stanford University AI researchers Fei-Fei Li and John Etchemendy make this argument in a 2019 blog post that  calls for national and global leadership in regulating artificial intelligence:   

“The creators of AI must seek the insights, experiences and concerns of people across ethnicities, genders, cultures and socio-economic groups, as well as those from other fields, such as economics, law, medicine, philosophy, history, sociology, communications, human-computer-interaction, psychology, and Science and Technology Studies (STS).”

Balancing high-tech innovation with human-centered thinking is an ideal method for producing  responsible AI technology and ensuring the  future of AI remains hopeful for the next generation. The dangers of artificial intelligence should always be a topic of discussion, so leaders can figure out ways to wield the technology for noble purposes . 

“I think we can talk about all these risks, and they’re very real,” Ford said. “But AI is also going to be the most important tool in our toolbox for solving the biggest challenges we face.”

Frequently Asked Questions

What is AI?

AI (artificial intelligence) describes a machine's ability to perform tasks and mimic intelligence at a similar level as humans.

Is AI dangerous?

AI has the potential to be dangerous, but these dangers may be mitigated by implementing legal regulations and by guiding AI development with human-centered thinking.

Can AI cause human extinction?

If AI algorithms are biased or used in a malicious manner — such as in the form of deliberate disinformation campaigns or autonomous lethal weapons — they could cause significant harm toward humans. Though as of right now, it is unknown whether AI is capable of causing human extinction.

What happens if AI becomes self-aware?

Self-aware AI has yet to be created, so it is not fully known what will happen if or when this development occurs.

Some suggest self-aware AI may become a helpful counterpart to humans in everyday living, while others suggest that it may act beyond human control and purposely harm humans.

Is AI a threat to the future?

AI is already disrupting jobs, posing security challenges and raising ethical questions. If left unregulated, it could be used for more nefarious purposes. But it remains to be seen how the technology will continue to develop and what measures governments may take, if any, to exercise more control over AI production and usage. 

Hal Koss contributed reporting to this story.


The Case Against AI Everything, Everywhere, All at Once


I cringe at being called “Mother of the Cloud,” but having been part of the development and implementation of the internet and networking industry—as an entrepreneur, CTO of Cisco, and on the boards of Disney and FedEx—I am fortunate to have had a 360-degree view of the technologies that are at the foundation of our modern world.

I have never had such mixed feelings about technological innovation. In stark contrast to the early days of internet development, when many stakeholders had a say, discussions about AI and our future are being shaped by leaders who seem to be striving for absolute ideological power. The result is “Authoritarian Intelligence.” The hubris and determination of tech leaders to control society is threatening our individual, societal, and business autonomy.

What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as the only possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just productization and implementation of AI technology, but also the research.

Artificial Intelligence is not just chat bots, but a broad field of study. One implementation capturing today’s attention, machine learning, has expanded beyond predicting our behavior to generating content—called Generative AI. The awe of machines wielding the power of language is seductive, but Performative AI might be a more appropriate name, as it leans toward production and mimicry—and sometimes fakery—over deep creativity, accuracy, or empathy.

The very fact that the evolution of technology feels so inevitable is evidence of an act of manipulation, an authoritarian use of narrative brilliantly described by historian Timothy Snyder. He calls out the politics of inevitability: “... a sense that the future is just more of the present, ... that there are no alternatives, and therefore nothing really to be done.” There is no discussion of underlying values. Facts that don’t fit the narrative are disregarded.


Here in Silicon Valley, this top-down authoritarian technique is amplified by a bottom-up culture of inevitability. An orchestrated frenzy begins when the next big thing to fuel the Valley’s economic and innovation ecosystem is heralded by companies, investors, media, and influencers.

They surround us with language coopted from common values—democratization, creativity, open, safe. In behavioral psych classes, product designers are taught to eliminate friction—removing any resistance to our acting on impulse.

The promise of short-term efficiency, convenience, and productivity lures us. Any semblance of pushback is decried as ignorance, or a threat to global competition. No one wants to be called a Luddite. Tech leaders, seeking to look concerned about the public interest, call for limited, friendly regulation, and the process moves forward until the tech is fully enmeshed in our society.

We bought into this narrative before, when social media, smartphones and cloud computing came on the scene. We didn’t question whether the only way to build community, find like-minded people, or be heard, was through one enormous “town square,” rife with behavioral manipulation, pernicious algorithmic feeds, amplification of pre-existing bias, and the pursuit of likes and follows.

It’s now obvious that it was a path towards polarization, toxicity of conversation, and societal disruption. Big Tech was the main beneficiary as industries and institutions jumped on board, accelerating their own disruption, and civic leaders were focused on how to use these new tools to grow their brands and not on helping us understand the risks.

We are at the same juncture now with AI. Once again, a handful of competitive but ideologically aligned leaders are telling us that large-scale, general-purpose AI implementations are the only way forward. In doing so, they disregard the dangerous level of complexity and the undue level of control and financial return to be granted to them.

While they talk about safety and responsibility, large companies protect themselves at the expense of everyone else. With no checks on their power, they move from experimenting in the lab to experimenting on us, not questioning how much agency we want to give up or whether we believe a specific type of intelligence should be the only measure of human value.

The different types and levels of risks are overwhelming, and we need to focus on all of them: the long-term existential risks, and the existing ones. Disinformation, supercharged by deep fakes, data privacy issues, and biased decision making continue to erode trust—with few viable solutions. We do not yet fully understand risks to our society at large such as the level and pace of job loss, environmental impacts , and whether we want opaque systems making decisions for us.

Deeper risks call into question the very aspects of our humanity. When we prioritize “intelligence” to the exclusion of cognition, might we devolve to become more like machines? On the current trajectory we may not even have the option to weigh in on who gets to decide what is in our best interest. Eliminating humanity is not the only way to wipe out our humanity.

Human well-being and dignity should be our North Star—with innovation in a supporting role. We can learn from the open systems environment of the 1970s and 80s. When we were first developing the infrastructure of the internet, power was distributed between large and small companies, vendors and customers, government and business. These checks and balances led to better decisions and less risk.

AI everything, everywhere, all at once, is not inevitable if we use our powers to question the tools and the people shaping them. Private and public sector leaders can slow the frenzy through acts of friction: simply not giving in to the “Authoritarian Intelligence” emanating out of Silicon Valley and to our collective groupthink.

We can buy the time needed to develop impactful national and international policy that distributes power and protects human rights, and inspire independent funding and ethics guidelines for a vibrant research community that will fuel innovation.

With the right priorities and guardrails, AI can help advance science, cure diseases, build new industries, expand joy, and maintain human dignity and the differences that make us unique.


July 12, 2023

AI Is an Existential Threat—Just Not the Way You Think

Some fear that artificial intelligence will threaten humanity’s survival. But the existential risk is more philosophical than apocalyptic

By Nir Eisikovits & The Conversation US


AI isn’t likely to enslave humanity, but it could take over many aspects of our lives.


The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

The rise of ChatGPT and similar artificial intelligence systems has been accompanied by a sharp  increase in anxiety about AI . For the past few months, executives and AI safety researchers have been offering predictions, dubbed “ P(doom) ,” about the probability that AI will bring about a large-scale catastrophe.


Worries peaked in May 2023 when the nonprofit research and advocacy organization Center for AI Safety released  a one-sentence statement : “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” The statement was signed by many key players in the field, including the leaders of OpenAI, Google and Anthropic, as well as two of the so-called “godfathers” of AI:  Geoffrey Hinton  and  Yoshua Bengio .

You might ask how such existential fears are supposed to play out. One famous scenario is the “ paper clip maximizer ” thought experiment articulated by Oxford philosopher  Nick Bostrom . The idea is that an AI system tasked with producing as many paper clips as possible might go to extraordinary lengths to find raw materials, like destroying factories and causing car accidents.

A  less resource-intensive variation  has an AI tasked with procuring a reservation to a popular restaurant shutting down cellular networks and traffic lights in order to prevent other patrons from getting a table.

Office supplies or dinner, the basic idea is the same: AI is fast becoming an alien intelligence, good at accomplishing goals but dangerous because it won’t necessarily align with the moral values of its creators. And, in its most extreme version, this argument morphs into explicit anxieties about AIs  enslaving or destroying the human race .

Actual harm

In the past few years, my colleagues and I at UMass Boston’s Applied Ethics Center have been studying the impact of engagement with AI on people’s understanding of themselves, and I believe these catastrophic anxieties are overblown and misdirected.

Yes, AI’s ability to create convincing deep-fake video and audio is frightening, and it can be abused by people with bad intent. In fact, that is already happening: Russian operatives likely attempted to embarrass Kremlin critic Bill Browder by ensnaring him in a conversation with an avatar for former Ukrainian President Petro Poroshenko. Cybercriminals have been using AI voice cloning for a variety of crimes – from high-tech heists to ordinary scams.

AI decision-making systems that offer loan approval and hiring recommendations carry the risk of algorithmic bias, since the training data and decision models they run on reflect long-standing social prejudices.

These are big problems, and they require the attention of policymakers. But they have been around for a while, and they are hardly cataclysmic.

Not in the same league

The statement from the Center for AI Safety lumped AI in with pandemics and nuclear weapons as a major risk to civilization. There are problems with that comparison. COVID-19 resulted in almost 7 million deaths worldwide, brought on a massive and continuing mental health crisis and created economic challenges, including chronic supply chain shortages and runaway inflation.

Nuclear weapons probably killed more than 200,000 people in Hiroshima and Nagasaki in 1945, claimed many more lives from cancer in the years that followed, generated decades of profound anxiety during the Cold War and brought the world to the brink of annihilation during the Cuban missile crisis in 1962. They have also changed the calculations of national leaders on how to respond to international aggression, as currently playing out with Russia’s invasion of Ukraine.

AI is simply nowhere near gaining the ability to do this kind of damage. The paper clip scenario and others like it are science fiction. Existing AI applications execute specific tasks rather than making broad judgments. The technology is far from being able to decide on and then plan out the goals and subordinate goals necessary for shutting down traffic in order to get you a seat in a restaurant, or blowing up a car factory in order to satisfy your itch for paper clips.

Not only does the technology lack the complicated capacity for multilayer judgment that’s involved in these scenarios, it also does not have autonomous access to sufficient parts of our critical infrastructure to start causing that kind of damage.

What it means to be human

Actually, there is an existential danger inherent in using AI, but that risk is existential in the philosophical rather than apocalyptic sense. AI in its current form can alter the way people view themselves. It can degrade abilities and experiences that people consider essential to being human.

For example, humans are judgment-making creatures. People rationally weigh particulars and make daily judgment calls at work and during leisure time about whom to hire, who should get a loan, what to watch and so on. But more and more of these judgments are being automated and farmed out to algorithms. As that happens, the world won’t end. But people will gradually lose the capacity to make these judgments themselves. The fewer of them people make, the worse they are likely to become at making them.

Or consider the role of chance in people’s lives. Humans value serendipitous encounters: coming across a place, person or activity by accident, being drawn into it and retrospectively appreciating the role accident played in these meaningful finds. But the role of algorithmic recommendation engines is to reduce that kind of serendipity and replace it with planning and prediction.

Finally, consider ChatGPT’s writing capabilities. The technology is in the process of eliminating the role of writing assignments in higher education. If it does, educators will lose a key tool for teaching students how to think critically.

Not dead but diminished

So, no, AI won’t blow up the world. But the increasingly uncritical embrace of it, in a variety of narrow contexts, means the gradual erosion of some of humans’ most important skills. Algorithms are already undermining people’s capacity to make judgments, enjoy serendipitous encounters and hone critical thinking.

The human species will survive such losses. But our way of existing will be impoverished in the process. The fantastic anxieties around the coming AI cataclysm, singularity, Skynet, or however you might think of it, obscure these more subtle costs. Recall T.S. Eliot’s famous closing lines of “The Hollow Men”: “This is the way the world ends,” he wrote, “not with a bang but a whimper.”

This article was originally published on The Conversation. Read the original article.


Opinion Guest Essay

The True Threat of Artificial Intelligence


By Evgeny Morozov

Mr. Morozov is the author of “To Save Everything, Click Here: The Folly of Technological Solutionism” and the host of the forthcoming podcast “ The Santiago Boys .”

June 30, 2023

In May, more than 350 technology executives, researchers and academics signed a statement warning of the existential dangers of artificial intelligence. “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the signatories warned.

This came on the heels of another high-profile letter, signed by the likes of Elon Musk and Steve Wozniak, a co-founder of Apple, calling for a six-month moratorium on the development of advanced A.I. systems.

Meanwhile, the Biden administration has urged responsible A.I. innovation, stating that “in order to seize the opportunities” it offers, we “must first manage its risks.” In Congress, Senator Chuck Schumer called for “first of their kind” listening sessions on the potential and risks of A.I., a crash course of sorts from industry executives, academics, civil rights activists and other stakeholders.

The mounting anxiety about A.I. isn’t because of the boring but reliable technologies that autocomplete our text messages or direct robot vacuums to dodge obstacles in our living rooms. It is the rise of artificial general intelligence, or A.G.I., that worries the experts.

A.G.I. doesn’t exist yet, but some believe that the rapidly growing capabilities of OpenAI’s ChatGPT suggest its emergence is near. Sam Altman, a co-founder of OpenAI, has described it as “systems that are generally smarter than humans.” Building such systems remains a daunting — some say impossible — task. But the benefits appear truly tantalizing.

Imagine Roombas, no longer condemned to vacuuming the floors, that evolve into all-purpose robots, happy to brew morning coffee or fold laundry — without ever being programmed to do these things.

Sounds appealing. But should these A.G.I. Roombas get too powerful, their mission to create a spotless utopia might get messy for their dust-spreading human masters. At least we’ve had a good run.

Discussions of A.G.I. are rife with such apocalyptic scenarios. Yet a nascent A.G.I. lobby of academics, investors and entrepreneurs counters that, once made safe, A.G.I. would be a boon to civilization. Mr. Altman, the face of this campaign, embarked on a global tour to charm lawmakers. Earlier this year he wrote that A.G.I. might even turbocharge the economy, boost scientific knowledge and “elevate humanity by increasing abundance.”

This is why, for all the hand-wringing, so many smart people in the tech industry are toiling to build this controversial technology: not using it to save the world seems immoral.

They are beholden to an ideology that views this new technology as inevitable and, in a safe version, as universally beneficial. Its proponents can think of no better alternatives for fixing humanity and expanding its intelligence.

But this ideology — call it A.G.I.-ism — is mistaken. The real risks of A.G.I. are political and won’t be fixed by taming rebellious robots. The safest of A.G.I.s would not deliver the progressive panacea promised by its lobby. And in presenting its emergence as all but inevitable, A.G.I.-ism distracts from finding better ways to augment intelligence.

Unbeknown to its proponents, A.G.I.-ism is just a bastard child of a much grander ideology, one preaching that, as Margaret Thatcher memorably put it, there is no alternative, not to the market.

Rather than breaking capitalism, as Mr. Altman has hinted it could do, A.G.I. — or at least the rush to build it — is more likely to create a powerful (and much hipper) ally for capitalism’s most destructive creed: neoliberalism.

Fascinated with privatization, competition and free trade, the architects of neoliberalism wanted to dynamize and transform a stagnant and labor-friendly economy through markets and deregulation.

Some of these transformations worked, but they came at an immense cost. Over the years, neoliberalism drew many, many critics, who blamed it for the Great Recession and financial crisis, Trumpism, Brexit and much else.

It is not surprising, then, that the Biden administration has distanced itself from the ideology, acknowledging that markets sometimes get it wrong. Foundations, think tanks and academics have even dared to imagine a post-neoliberal future.

Yet neoliberalism is far from dead. Worse, it has found an ally in A.G.I.-ism, which stands to reinforce and replicate its main biases: that private actors outperform public ones (the market bias), that adapting to reality beats transforming it (the adaptation bias) and that efficiency trumps social concerns (the efficiency bias).

These biases turn the alluring promise behind A.G.I. on its head: Instead of saving the world, the quest to build it will make things only worse. Here is how.

A.G.I. will never overcome the market’s demands for profit.

Remember when Uber, with its cheap rates, was courting cities to serve as their public transportation systems?

It all began nicely, with Uber promising implausibly cheap rides, courtesy of a future with self-driving cars and minimal labor costs. Deep-pocketed investors loved this vision, even absorbing Uber’s multibillion-dollar losses.

But when reality descended, the self-driving cars were still a pipe dream. The investors demanded returns, and Uber was forced to raise prices. Users who relied on it to replace public buses and trains were left on the sidewalk.

The neoliberal instinct behind Uber’s business model is that the private sector can do better than the public sector — the market bias.

It’s not just cities and public transit. Hospitals, police departments and even the Pentagon increasingly rely on Silicon Valley to accomplish their missions.

With A.G.I., this reliance will only deepen, not least because A.G.I. is unbounded in its scope and ambition. No administrative or government services would be immune to its promise of disruption.

Moreover, A.G.I. doesn’t even have to exist to lure them in. This, at any rate, is the lesson of Theranos, a start-up that promised to “solve” health care through a revolutionary blood-testing technology and a former darling of America’s elites. Its victims are real, even if its technology never was.

After so many Uber- and Theranos-like traumas, we already know what to expect of an A.G.I. rollout. It will consist of two phases. First, the charm offensive of heavily subsidized services. Then the ugly retrenchment, with the overdependent users and agencies shouldering the costs of making them profitable.

As always, Silicon Valley mavens play down the market’s role. In a recent essay titled “Why A.I. Will Save the World,” Marc Andreessen, a prominent tech investor, even proclaims that A.I. “is owned by people and controlled by people, like any other technology.”

Only a venture capitalist can traffic in such exquisite euphemisms. Most modern technologies are owned by corporations. And they — not the mythical “people” — will be the ones that will monetize saving the world.

And are they really saving it? The record, so far, is poor. Companies like Airbnb and TaskRabbit were welcomed as saviors for the beleaguered middle class; Tesla’s electric cars were seen as a remedy to a warming planet. Soylent, the meal-replacement shake, embarked on a mission to “solve” global hunger, while Facebook vowed to “solve” connectivity issues in the Global South. None of these companies saved the world.

A decade ago, I called this solutionism, but “digital neoliberalism” would be just as fitting. This worldview reframes social problems in light of for-profit technological solutions. As a result, concerns that belong in the public domain are reimagined as entrepreneurial opportunities in the marketplace.

A.G.I.-ism has rekindled this solutionist fervor. Last year, Mr. Altman stated that “A.G.I. is probably necessary for humanity to survive” because “our problems seem too big” for us to “solve without better tools.” He’s recently asserted that A.G.I. will be a catalyst for human flourishing.

But companies need profits, and such benevolence, especially from unprofitable firms burning investors’ billions, is uncommon. OpenAI, having accepted billions from Microsoft, has contemplated raising another $100 billion to build A.G.I. Those investments will need to be earned back — against the service’s staggering invisible costs. (One estimate from February put the expense of operating ChatGPT at $700,000 per day.)

Thus, the ugly retrenchment phase, with aggressive price hikes to make an A.G.I. service profitable, might arrive before “abundance” and “flourishing.” But by then, how many public institutions will have mistaken fickle markets for affordable technologies and become dependent on OpenAI’s expensive offerings?

And if you dislike your town outsourcing public transportation to a fragile start-up, would you want it farming out welfare services, waste management and public safety to the possibly even more volatile A.G.I. firms?

A.G.I. will dull the pain of our thorniest problems without fixing them.

Neoliberalism has a knack for mobilizing technology to make society’s miseries bearable. I recall an innovative tech venture from 2017 that promised to improve commuters’ use of a Chicago subway line. It offered rewards to discourage metro riders from traveling at peak times. Its creators leveraged technology to influence the demand side (the riders), seeing structural changes to the supply side (like raising public transport funding) as too difficult. Tech would help make Chicagoans adapt to the city’s deteriorating infrastructure rather than fixing it in order to meet the public’s needs.

This is the adaptation bias — the aspiration that, with a technological wand, we can become desensitized to our plight. It’s the product of neoliberalism’s relentless cheerleading for self-reliance and resilience.

The message is clear: gear up, enhance your human capital and chart your course like a start-up. And A.G.I.-ism echoes this tune. Bill Gates has trumpeted that A.I. can “help people everywhere improve their lives.”

The solutionist feast is only getting started: Whether it’s fighting the next pandemic, the loneliness epidemic or inflation, A.I. is already pitched as an all-purpose hammer for many real and imaginary nails. However, the decade lost to the solutionist folly reveals the limits of such technological fixes.

To be sure, Silicon Valley’s many apps — to monitor our spending, calories and workout regimes — are occasionally helpful. But they mostly ignore the underlying causes of poverty or obesity. And without tackling the causes, we remain stuck in the realm of adaptation, not transformation.

There’s a difference between nudging us to follow our walking routines — a solution that favors individual adaptation — and understanding why our towns have no public spaces to walk on — a prerequisite for a politics-friendly solution that favors collective and institutional transformation.

But A.G.I.-ism, like neoliberalism, sees public institutions as unimaginative and not particularly productive. They should just adapt to A.G.I., at least according to Mr. Altman, who recently said he was nervous about “the speed with which our institutions can adapt” — part of the reason, he added, “of why we want to start deploying these systems really early, while they’re really weak, so that people have as much time as possible to do this.”

But should institutions only adapt? Can’t they develop their own transformative agendas for improving humanity’s intelligence? Or do we use institutions only to mitigate the risks of Silicon Valley’s own technologies?

A.G.I. undermines civic virtues and amplifies trends we already dislike.

A common criticism of neoliberalism is that it has flattened our political life, rearranging it around efficiency. “The Problem of Social Cost,” a 1960 article that has become a classic of the neoliberal canon, preaches that a polluting factory and its victims should not bother bringing their disputes to court. Such fights are inefficient — who needs justice, anyway? — and stand in the way of market activity. Instead, the parties should privately bargain over compensation and get on with their business.

This fixation on efficiency is how we arrived at “solving” climate change by letting the worst offenders continue as before. The way to avoid the shackles of regulation is to devise a scheme — in this case, taxing carbon — that lets polluters buy credits to match the extra carbon they emit.

This culture of efficiency, in which markets measure the worth of things and substitute for justice, inevitably corrodes civic virtues.

And the problems this creates are visible everywhere. Academics fret that, under neoliberalism, research and teaching have become commodities. Doctors lament that hospitals prioritize more profitable services such as elective surgery over emergency care. Journalists hate that the worth of their articles is measured in eyeballs.

Now imagine unleashing A.G.I. on these esteemed institutions — the university, the hospital, the newspaper — with the noble mission of “fixing” them. Their implicit civic missions would remain invisible to A.G.I., for those missions are rarely quantified even in their annual reports — the sort of materials that go into training the models behind A.G.I.

After all, who likes to boast that his class on Renaissance history got only a handful of students? Or that her article on corruption in some faraway land got only a dozen page views? Inefficient and unprofitable, such outliers miraculously survive even in the current system. The rest of the institution quietly subsidizes them, prioritizing values other than profit-driven “efficiency.”

Will this still be the case in the A.G.I. utopia? Or will fixing our institutions through A.G.I. be like handing them over to ruthless consultants? They, too, offer data-bolstered “solutions” for maximizing efficiency. But these solutions often fail to grasp the messy interplay of values, missions and traditions at the heart of institutions — an interplay that is rarely visible if you only scratch their data surface.

In fact, the remarkable performance of ChatGPT-like services is, by design, a refusal to grasp reality at a deeper level, beyond the data’s surface. So whereas earlier A.I. systems relied on explicit rules and required someone like Newton to theorize gravity — to ask how and why apples fall — newer systems like A.G.I. simply learn to predict gravity’s effects by observing millions of apples fall to the ground.

However, if all that A.G.I. sees are cash-strapped institutions fighting for survival, it may never infer their true ethos. Good luck discerning the meaning of the Hippocratic oath by observing hospitals that have been turned into profit centers.

Margaret Thatcher’s other famous neoliberal dictum was that “there is no such thing as society.”

The A.G.I. lobby unwittingly shares this grim view. For them, the kind of intelligence worth replicating is a function of what happens in individuals’ heads rather than in society at large.

But human intelligence is as much a product of policies and institutions as it is of genes and individual aptitudes. It’s easier to be smart on a fellowship in the Library of Congress than while working several jobs in a place without a bookstore or even decent Wi-Fi.

It doesn’t seem all that controversial to suggest that more scholarships and public libraries will do wonders for boosting human intelligence. But for the solutionist crowd in Silicon Valley, augmenting intelligence is primarily a technological problem — hence the excitement about A.G.I.

However, if A.G.I.-ism really is neoliberalism by other means, then we should be ready to see fewer — not more — intelligence-enabling institutions. After all, they are the remnants of that dreaded “society” that, for neoliberals, doesn’t really exist. A.G.I.’s grand project of amplifying intelligence may end up shrinking it.

Because of such solutionist bias, even seemingly innovative policy ideas around A.G.I. fail to excite. Take the recent proposal for a “Manhattan Project for A.I. Safety.” This is premised on the false idea that there’s no alternative to A.G.I.

But wouldn’t our quest for augmenting intelligence be far more effective if the government funded a Manhattan Project for culture and education and the institutions that nurture them instead?

Without such efforts, the vast cultural resources of our existing public institutions risk becoming mere training data sets for A.G.I. start-ups, reinforcing the falsehood that society doesn’t exist.

Depending on how (and if) the robot rebellion unfolds, A.G.I. may or may not prove an existential threat. But with its antisocial bent and its neoliberal biases, A.G.I.-ism already is: We don’t need to wait for the magic Roombas to question its tenets.

Evgeny Morozov, the author of “To Save Everything, Click Here: The Folly of Technological Solutionism,” is the founder and publisher of The Syllabus and the host of the podcast “The Santiago Boys.”


Is Artificial Intelligence Dangerous?

Published: Sep 16, 2023

Table of contents

  • The Promise of AI
  • The Perceived Dangers of AI
  • Responsible AI Development

The Promise of AI

  • Medical Advancements: AI can assist in diagnosing diseases, analyzing medical data, and developing personalized treatment plans, potentially saving lives and improving healthcare outcomes.
  • Autonomous Vehicles: Self-driving cars, powered by AI, have the potential to reduce accidents and make transportation more accessible and efficient.
  • Environmental Conservation: AI can be used to monitor and address environmental issues, such as climate change, deforestation, and wildlife preservation.
  • Efficiency and Automation: AI-driven automation can streamline processes in various industries, increasing productivity and reducing costs.

The Perceived Dangers of AI

  • Job Displacement
  • Bias and Discrimination
  • Lack of Accountability
  • Security Risks

Responsible AI Development

  • Transparency and Accountability
  • Fairness and Bias Mitigation
  • Ethical Frameworks
  • Cybersecurity Measures

This essay delves into the complexities surrounding artificial intelligence (AI), exploring both its transformative benefits and potential dangers. From enhancing healthcare and transportation to posing risks in job displacement and security, it critically assesses AI’s dual aspects. Emphasizing responsible development, it advocates for transparency, fairness, and robust cybersecurity measures.


There Is No A.I.


As a computer scientist, I don’t like the term “A.I.” In fact, I think it’s misleading—maybe even a little dangerous. Everybody’s already using the term, and it might seem a little late in the day to be arguing about it. But we’re at the beginning of a new technological era—and the easiest way to mismanage a technology is to misunderstand it.

The term “artificial intelligence” has a long history—it was coined in the nineteen-fifties, in the early days of computers. More recently, computer scientists have grown up on movies like “The Terminator” and “The Matrix,” and on characters like Commander Data, from “Star Trek: The Next Generation.” These cultural touchstones have become an almost religious mythology in tech culture. It’s only natural that computer scientists long to create A.I. and realize a long-held dream.

What’s striking, though, is that many of the people who are pursuing the A.I. dream also worry that it might mean doomsday for mankind. It is widely stated, even by scientists at the very center of today’s efforts, that what A.I. researchers are doing could result in the annihilation of our species, or at least in great harm to humanity, and soon. In a recent poll, half of A.I. scientists agreed that there was at least a ten-per-cent chance that the human race would be destroyed by A.I. Even my colleague and friend Sam Altman, who runs OpenAI, has made similar comments. Step into any Silicon Valley coffee shop and you can hear the same debate unfold: one person says that the new code is just code and that people are in charge, but another argues that anyone with this opinion just doesn’t get how profound the new tech is. The arguments aren’t entirely rational: when I ask my most fearful scientist friends to spell out how an A.I. apocalypse might happen, they often seize up from the paralysis that overtakes someone trying to conceive of infinity. They say things like “Accelerating progress will fly past us and we will not be able to conceive of what is happening.”

I don’t agree with this way of talking. Many of my friends and colleagues are deeply impressed by their experiences with the latest big models, like GPT-4, and are practically holding vigils to await the appearance of a deeper intelligence. My position is not that they are wrong but that we can’t be sure; we retain the option of classifying the software in different ways.

The most pragmatic position is to think of A.I. as a tool, not a creature. My attitude doesn’t eliminate the possibility of peril: however we think about it, we can still design and operate our new tech badly, in ways that can hurt us or even lead to our extinction. Mythologizing the technology only makes it more likely that we’ll fail to operate it well—and this kind of thinking limits our imaginations, tying them to yesterday’s dreams. We can work better under the assumption that there is no such thing as A.I. The sooner we understand this, the sooner we’ll start managing our new technology intelligently.

If the new tech isn’t true artificial intelligence, then what is it? In my view, the most accurate way to understand what we are building today is as an innovative form of social collaboration.

A program like OpenAI’s GPT-4, which can write sentences to order, is something like a version of Wikipedia that includes much more data, mashed together using statistics. Programs that create images to order are something like a version of online image search, but with a system for combining the pictures. In both cases, it’s people who have written the text and furnished the images. The new programs mash up work done by human minds. What’s innovative is that the mashup process has become guided and constrained, so that the results are usable and often striking. This is a significant achievement and worth celebrating—but it can be thought of as illuminating previously hidden concordances between human creations, rather than as the invention of a new mind.

As far as I can tell, my view flatters the technology. After all, what is civilization but social collaboration? Seeing A.I. as a way of working together, rather than as a technology for creating independent, intelligent beings, may make it less mysterious—less like HAL 9000 or Commander Data. But that’s good, because mystery only makes mismanagement more likely.

It’s easy to attribute intelligence to the new systems; they have a flexibility and unpredictability that we don’t usually associate with computer technology. But this flexibility arises from simple mathematics. A large language model like GPT-4 contains a cumulative record of how particular words coincide in the vast amounts of text that the program has processed. This gargantuan tabulation causes the system to intrinsically approximate many grammar patterns, along with aspects of what might be called authorial style. When you enter a query consisting of certain words in a certain order, your entry is correlated with what’s in the model; the results can come out a little differently each time, because of the complexity of correlating billions of entries.
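To make that description concrete, here is a deliberately tiny sketch in Python of the same idea at toy scale: a table of which words follow which, plus weighted random sampling, so the same prompt can produce different continuations each time. The corpus and function names are invented for illustration; a real model like GPT-4 works over billions of parameters and far richer statistics, not a bigram table.

```python
import random
from collections import defaultdict

# Toy version of a "cumulative record of how particular words coincide":
# count which word follows which in a tiny corpus, then sample the next
# word in proportion to those counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

follow_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def next_word(prev):
    options = follow_counts.get(prev)
    if not options:                       # dead end: restart from a random word
        return random.choice(corpus)
    words = list(options)
    weights = [options[w] for w in words]
    # Weighted random choice: the same prompt can yield different continuations.
    return random.choices(words, weights=weights, k=1)[0]

word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Run it twice and the output usually differs, which is the small-scale analogue of the non-repeating behavior described next.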

The non-repeating nature of this process can make it feel lively. And there’s a sense in which it can make the new systems more human-centered. When you synthesize a new image with an A.I. tool, you may get a bunch of similar options and then have to choose from them; if you’re a student who uses an L.L.M. to cheat on an essay assignment, you might read options generated by the model and select one. A little human choice is demanded by a technology that is non-repeating.

Many of the uses of A.I. that I like rest on advantages we gain when computers get less rigid. Digital stuff as we have known it has a brittle quality that forces people to conform to it, rather than assess it. We’ve all endured the agony of watching some poor soul at a doctor’s office struggle to do the expected thing on a front-desk screen. The face contorts; humanity is undermined. The need to conform to digital designs has created an ambient expectation of human subservience. A positive spin on A.I. is that it might spell the end of this torture, if we use it well. We can now imagine a Web site that reformulates itself on the fly for someone who is color-blind, say, or a site that tailors itself to someone’s particular cognitive abilities and styles. A humanist like me wants people to have more control, rather than be overly influenced or guided by technology. Flexibility may give us back some agency.

Still, despite these possible upsides, it’s more than reasonable to worry that the new technology will push us around in ways we don’t like or understand. Recently, some friends of mine circulated a petition asking for a pause on the most ambitious A.I. development. The idea was that we’d work on policy during the pause. The petition was signed by some in our community but not others. I found the notion too hazy—what level of progress would mean that the pause could end? Every week, I receive new but always vague mission statements from organizations seeking to initiate processes to set A.I. policy.

These efforts are well intentioned, but they seem hopeless to me. For years, I worked on the E.U.’s privacy policies, and I came to realize that we don’t know what privacy is. It’s a term we use every day, and it can make sense in context, but we can’t nail it down well enough to generalize. The closest we have come to a definition of privacy is probably “the right to be left alone,” but that seems quaint in an age when we are constantly dependent on digital services. In the context of A.I., “the right to not be manipulated by computation” seems almost correct, but doesn’t quite say everything we’d like it to.

A.I.-policy conversations are dominated by terms like “alignment” (is what an A.I. “wants” aligned with what humans want?), “safety” (can we foresee guardrails that will foil a bad A.I.?), and “fairness” (can we forestall all the ways a program might treat certain people with disfavor?). The community has certainly accomplished much good by pursuing these ideas, but that hasn’t quelled our fears. We end up motivating people to try to circumvent the vague protections we set up. Even though the protections do help, the whole thing becomes a game—like trying to outwit a sneaky genie. The result is that the A.I.-research community communicates the warning that their creations might still kill all of humanity soon, while proposing ever more urgent, but turgid, deliberative processes.

Recently, I tried an informal experiment, calling colleagues and asking them if there’s anything specific on which we can all seem to agree. I’ve found that there is a foundation of agreement. We all seem to agree that deepfakes—false but real-seeming images, videos, and so on—should be labelled as such by the programs that create them. Communications coming from artificial people, and automated interactions that are designed to manipulate the thinking or actions of a human being, should be labelled as well. We also agree that these labels should come with actions that can be taken. People should be able to understand what they’re seeing, and should have reasonable choices in return.

How can all this be done? There is also near-unanimity, I find, that the black-box nature of our current A.I. tools must end. The systems must be made more transparent. We need to get better at saying what is going on inside them and why. This won’t be easy. The problem is that the large-model A.I. systems we are talking about aren’t made of explicit ideas. There is no definite representation of what the system “wants,” no label for when it is doing a particular thing, like manipulating a person. There is only a giant ocean of jello—a vast mathematical mixing. A writers’-rights group has proposed that real human authors be paid in full when tools like GPT are used in the scriptwriting process; after all, the system is drawing on scripts that real people have made. But when we use A.I. to produce film clips, and potentially whole movies, there won’t necessarily be a screenwriting phase. A movie might be produced that appears to have a script, soundtrack, and so on, but it will have been calculated into existence as a whole. Similarly, no sketch precedes the generation of a painting from an illustration A.I. Attempting to open the black box by making a system spit out otherwise unnecessary items like scripts, sketches, or intentions will involve building another black box to interpret the first—an infinite regress.

At the same time, it’s not true that the interior of a big model has to be a trackless wilderness. We may not know what an “idea” is from a formal, computational point of view, but there could be tracks made not of ideas but of people. At some point in the past, a real person created an illustration that was input as data into the model, and, in combination with contributions from other people, this was transformed into a fresh image. Big-model A.I. is made of people—and the way to open the black box is to reveal them.

This concept, which I’ve contributed to developing, is usually called “data dignity.” It appeared, long before the rise of big-model “A.I.,” as an alternative to the familiar arrangement in which people give their data for free in exchange for free services, such as internet searches or social networking. Data dignity is sometimes known as “data as labor” or “plurality research.” The familiar arrangement has turned out to have a dark side: because of “network effects,” a few platforms take over, eliminating smaller players, like local newspapers. Worse, since the immediate online experience is supposed to be free, the only remaining business is the hawking of influence. Users experience what seems to be a communitarian paradise, but they are targeted by stealthy and addictive algorithms that make people vain, irritable, and paranoid.

In a world with data dignity, digital stuff would typically be connected with the humans who want to be known for having made it. In some versions of the idea, people could get paid for what they create, even when it is filtered and recombined through big models, and tech hubs would earn fees for facilitating things that people want to do. Some people are horrified by the idea of capitalism online, but this would be a more honest capitalism. The familiar “free” arrangement has been a disaster.

One of the reasons the tech community worries that A.I. could be an existential threat is that it could be used to toy with people, just as the previous wave of digital technologies has been. Given the power and potential reach of these new systems, it’s not unreasonable to fear extinction as a possible result. Since that danger is widely recognized, the arrival of big-model A.I. could be an occasion to reformat the tech industry for the better.

Implementing data dignity will require technical research and policy innovation. In that sense, the subject excites me as a scientist. Opening the black box will only make the models more interesting. And it might help us understand more about language, which is the human invention that truly impresses, and the one that we are still exploring after all these hundreds of thousands of years.

Could data dignity address the economic worries that are often expressed about A.I.? The main concern is that workers will be devalued or displaced. Publicly, techies will sometimes say that, in the coming years, people who work with A.I. will be more productive and will find new types of jobs in a more productive economy. (A worker might become a prompt engineer for A.I. programs, for instance—someone who collaborates with or controls an A.I.) And yet, in private, the same people will quite often say, “No, A.I. will overtake this idea of collaboration.” No more remuneration for today’s accountants, radiologists, truck drivers, writers, film directors, or musicians.

A data-dignity approach would trace the most unique and influential contributors when a big model provides a valuable output. For instance, if you ask a model for “an animated movie of my kids in an oil-painting world of talking cats on an adventure,” then certain key oil painters, cat portraitists, voice actors, and writers—or their estates—might be calculated to have been uniquely essential to the creation of the new masterpiece. They would be acknowledged and motivated. They might even get paid.
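As a purely illustrative sketch of what “calculating” influential contributors might look like, the toy Python below scores training items by word overlap with an output and splits a hypothetical royalty pool proportionally. The similarity measure, the names, and the payout rule are all invented here; this is not Lanier’s proposal or any deployed system, only a way to make the accounting idea tangible.

```python
# Naive "data dignity" accounting sketch: score each training item's
# similarity to a generated output (plain word overlap) and split a
# hypothetical royalty pool proportionally.
def overlap(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if (wa | wb) else 0.0

training_items = {
    "oil_painter_A": "oil painting of cats in a moonlit garden",
    "voice_actor_B": "recording of a talking cat character",
    "novelist_C":    "adventure story about children and their pets",
}
output = "animated oil painting adventure of talking cats"

scores = {name: overlap(text, output) for name, text in training_items.items()}
total = sum(scores.values()) or 1.0
royalty_pool = 100.0  # hypothetical credits to distribute

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    share = royalty_pool * score / total
    print(f"{name}: similarity {score:.2f} -> {share:.1f} credits")
```

A real system would need influence measures far subtler than word overlap, but the shape is the same: identify the key antecedents, then acknowledge and compensate them.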

There is a fledgling data-dignity research community, and here is an example of a debate within it: How detailed an accounting should data dignity attempt? Not everyone agrees. The system wouldn’t necessarily account for the billions of people who have made ambient contributions to big models—those who have added to a model’s simulated competence with grammar, for example. At first, data dignity might attend only to the small number of special contributors who emerge in a given situation. Over time, though, more people might be included, as intermediate rights organizations—unions, guilds, professional groups, and so on—start to play a role. People in the data-dignity community sometimes call these anticipated groups mediators of individual data (MIDs) or data trusts. People need collective-bargaining power to have value in an online world—especially when they might get lost in a giant A.I. model. And when people share responsibility in a group, they self-police, reducing the need, or temptation, for governments and companies to censor or control from above. Acknowledging the human essence of big models might lead to a blossoming of new positive social institutions.

Data dignity is not just for white-collar roles. Consider what might happen if A.I.-driven tree-trimming robots are introduced. Human tree trimmers might find themselves devalued or even out of work. But the robots could eventually allow for a new type of indirect landscaping artistry. Some former workers, or others, might create inventive approaches—holographic topiary, say, that looks different from different angles—that find their way into the tree-trimming models. With data dignity, the models might create new sources of income, distributed through collective organizations. Tree trimming would become more multifunctional and interesting over time; there would be a community motivated to remain valuable. Each new successful introduction of an A.I. or robotic application could involve the inauguration of a new kind of creative work. In ways large and small, this could help ease the transition to an economy into which models are integrated.

Many people in Silicon Valley see universal basic income as a solution to potential economic problems created by A.I. But U.B.I. amounts to putting everyone on the dole in order to preserve the idea of black-box artificial intelligence. This is a scary idea, I think, in part because bad actors will want to seize the centers of power in a universal welfare system, as in every communist experiment. I doubt that data dignity could ever grow enough to sustain all of society, but I doubt that any social or economic principle will ever be complete. Whenever possible, the goal should be to at least establish a new creative class instead of a new dependent class.

There are also non-altruistic reasons for A.I. companies to embrace data dignity. The models are only as good as their inputs. It’s only through a system like data dignity that we can expand the models into new frontiers. Right now, it’s much easier to get an L.L.M. to write an essay than it is to ask the program to generate an interactive virtual-reality world, because there are very few virtual worlds in existence. Why not solve that problem by giving people who add more virtual worlds a chance for prestige and income?

Could data dignity help with any of the human-annihilation scenarios? A big model could make us incompetent, or confuse us so much that our society goes collectively off the rails; a powerful, malevolent person could use A.I. to do us all great harm; and some people also think that the model itself could “jailbreak,” taking control of our machines or weapons and using them against us.

We can find precedents for some of these scenarios not just in science fiction but in more ordinary market and technology failures. An example is the 2019 catastrophe related to Boeing’s 737 MAX jets. The planes included a flight-path-correction feature that in some cases fought the pilots, causing two mass-casualty crashes. The problem was not the technology in isolation but the way that it was integrated into the sales cycle, training sessions, user interface, and documentation. Pilots thought that they were doing the right thing by trying to counteract the system in certain circumstances, but they were doing exactly the wrong thing, and they had no way of knowing. Boeing failed to communicate clearly about how the technology worked, and the resulting confusion led to disaster.

Anything engineered—cars, bridges, buildings—can cause harm to people, and yet we have built a civilization on engineering. It’s by increasing and broadening human awareness, responsibility, and participation that we can make automation safe; conversely, if we treat our inventions as occult objects, we can hardly be good engineers. Seeing A.I. as a form of social collaboration is more actionable: it gives us access to the engine room, which is made of people.

Let’s consider the apocalyptic scenario in which A.I. drives our society off the rails. One way this could happen is through deepfakes. Suppose that an evil person, perhaps working in an opposing government on a war footing, decides to stoke mass panic by sending all of us convincing videos of our loved ones being tortured or abducted from our homes. (The data necessary to create such videos are, in many cases, easy to obtain through social media or other channels.) Chaos would ensue, even if it soon became clear that the videos were faked. How could we prevent such a scenario? The answer is obvious: digital information must have context. Any collection of bits needs a history. When you lose context, you lose control.

Why don’t bits come attached to the stories of their origins? There are many reasons. The original design of the Web didn’t keep track of where bits came from, likely to make it easier for the network to grow quickly. (Computers and bandwidth were poor in the beginning.) Why didn’t we start remembering where bits came from when it became more feasible to at least approximate digital provenance? It always felt to me that we wanted the Web to be more mysterious than it needed to be. Whatever the reason, the Web was made to remember everything while forgetting its context.

Today, most people take it for granted that the Web, and indeed the Internet it is built on, is, by its nature, anti-contextual and devoid of provenance. We assume that decontextualization is intrinsic to the very idea of a digital network. That was never so, however; the initial proposals for digital-network architecture, put forward by the monumental scientist Vannevar Bush in 1945 and the computer scientist Ted Nelson in 1960, preserved provenance. Now A.I. is revealing the true costs of ignoring this approach. Without provenance, we have no way of controlling our A.I.s, or of making them economically fair. And this risks pushing our society to the brink.
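A minimal sketch of what provenance-carrying bits could look like, assuming nothing beyond a made-up record format: each piece of content keeps a hash of itself plus references to the items it was derived from, so its history can be reconstructed on demand. This is a toy illustration only, not a reference to any real provenance standard or to the proposals mentioned above.

```python
import hashlib
from dataclasses import dataclass, field

# Toy provenance record: a derived item remembers its own content hash
# and the assets it was made from, so lineage can be printed later.
@dataclass
class Asset:
    content: str
    sources: list = field(default_factory=list)

    def digest(self):
        return hashlib.sha256(self.content.encode()).hexdigest()[:12]

    def lineage(self, depth=0):
        lines = ["  " * depth + f"{self.digest()}  {self.content[:40]!r}"]
        for src in self.sources:
            lines.extend(src.lineage(depth + 1))
        return lines

script = Asset("a short scene about a lost wedding ring")
photo = Asset("street photo of a jeweler's window")
clip = Asset("generated clip combining the scene and the photo",
             sources=[script, photo])

print("\n".join(clip.lineage()))
```

The point of the sketch is simply that keeping the history attached costs little once it is designed in, and that losing it, as the Web did, is a choice rather than a necessity.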

If a chatbot appears to be manipulative, mean, weird, or deceptive, what kind of answer do we want when we ask why? Revealing the indispensable antecedent examples from which the bot learned its behavior would provide an explanation: we’d learn that it drew on a particular work of fan fiction, say, or a soap opera. We could react to that output differently, and adjust the inputs of the model to improve it. Why shouldn’t that type of explanation always be available? There may be cases in which provenance shouldn’t be revealed, so as to give priority to privacy—but provenance will usually be more beneficial to individuals and society than an exclusive commitment to privacy would be.

The technical challenges of data dignity are real and must inspire serious scientific ambition. The policy challenges would also be substantial—a sign, perhaps, that they are meaningful and concrete. But we need to change the way we think, and to embrace the hard work of renovation. By persisting with the ideas of the past—among them, a fascination with the possibility of an A.I. that lives independently of the people who contribute to it—we risk using our new technologies in ways that make the world worse. If society, economics, culture, technology, or any other spheres of activity are to serve people, that can only be because we decide that people enjoy a special status to be served.

This is my plea to all my colleagues. Think of people. People are the answer to the problems of bits. ♦


One Hundred Year Study on Artificial Intelligence (AI100)

SQ10. What are the most pressing dangers of AI?


As AI systems prove to be increasingly beneficial in real-world applications, they have broadened their reach, causing risks of misuse, overuse, and explicit abuse to proliferate. As AI systems increase in capability and as they are integrated more fully into societal infrastructure, the implications of losing meaningful control over them become more concerning. 1 New research efforts are aimed at re-conceptualizing the foundations of the field to make AI systems less reliant on explicit, and easily misspecified, objectives. 2 A particularly visible danger is that AI can make it easier to build machines that can spy and even kill at scale. But there are many other important and subtler dangers at present.

In this section

  • Techno-solutionism
  • Dangers of adopting a statistical perspective on justice
  • Disinformation and threat to democracy
  • Discrimination and risk in the medical setting

One of the most pressing dangers of AI is techno-solutionism, the view that AI can be seen as a panacea when it is merely a tool. 3 As we see more AI advances, the temptation to apply AI decision-making to all societal problems increases. But technology often creates larger problems in the process of solving smaller ones. For example, systems that streamline and automate the application of social services can quickly become rigid and deny access to migrants or others who fall between the cracks. 4

When given the choice between algorithms and humans, some believe algorithms will always be the less-biased choice. Yet, in 2018, Amazon found it necessary to discard a proprietary recruiting tool because the historical data it was trained on resulted in a system that was systematically biased against women. 5 Automated decision-making can often serve to replicate, exacerbate, and even magnify the same bias we wish it would remedy.

Indeed, far from being a cure-all, technology can actually create feedback loops that worsen discrimination. Recommendation algorithms, like Google’s PageRank, are trained to identify and prioritize the most “relevant” items based on how other users engage with them. As biased users feed the algorithm biased information, it responds with more bias, which informs users’ understandings and deepens their bias, and so on. 6 Because all technology is the product of a biased system, 7 techno-solutionism’s flaws run deep: 8 a creation is limited by the limitations of its creator.
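The feedback loop described above can be made concrete with a toy simulation; the numbers below are invented and the “ranker” is just two scores, but the dynamic is the one the report describes: a small initial preference, reinforced by clicks, grows into a large gap.

```python
import random

# Toy simulation of the engagement feedback loop: two items start nearly
# equal, users click the top-ranked item slightly more often, and the
# ranker boosts whatever gets clicked.
random.seed(0)
scores = {"item_a": 1.00, "item_b": 0.95}   # near-identical starting "relevance"
extra_click_pull = 0.05                     # small bias toward whatever is ranked first

for _ in range(1000):
    top = max(scores, key=scores.get)
    other = "item_b" if top == "item_a" else "item_a"
    clicked = top if random.random() < 0.5 + extra_click_pull else other
    scores[clicked] += 0.01                 # the ranker reinforces the click

print(scores)  # the tiny initial gap ends up far larger than any real difference in merit
```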

Automated decision-making may produce skewed results that replicate and amplify existing biases. A potential danger, then, is when the public accepts AI-derived conclusions as certainties. This determinist approach to AI decision-making can have dire implications in both criminal and healthcare settings. AI-driven approaches like PredPol, software originally developed by the Los Angeles Police Department and UCLA that purports to help protect one in 33 US citizens, 9 predict when, where, and how crime will occur. A 2016 case study of a US city noted that the approach disproportionately projected crimes in areas with higher populations of non-white and low-income residents. 10 When datasets disproportionately represent the lower-power members of society, flagrant discrimination is a likely result.

Sentencing decisions are increasingly decided by proprietary algorithms that attempt to assess whether a defendant will commit future crimes, leading to concerns that justice is being outsourced to software. 11 As AI becomes increasingly capable of analyzing more and more factors that may correlate with a defendant's perceived risk, courts and society at large may mistake an algorithmic probability for fact. This dangerous reality means that an algorithmic estimate of an individual’s risk to society may be interpreted by others as a near certainty—a misleading outcome even the original tool designers warned against. Even though a statistically driven AI system could be built to report a degree of credence along with every prediction, 12 there’s no guarantee that the people using these predictions will make intelligent use of them. Taking probability for certainty means that the past will always dictate the future.
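One hedged sketch of what “reporting a degree of credence along with every prediction” might look like in code: a toy logistic scorer that returns a probability and flags borderline cases for human review instead of emitting a bare label. The weights, features, and review band are invented for illustration and do not describe any real risk-assessment tool.

```python
import math

# Toy sketch of reporting credence rather than certainty.
def risk_probability(features, weights, bias):
    z = bias + sum(f * w for f, w in zip(features, weights))
    return 1.0 / (1.0 + math.exp(-z))       # value in (0, 1)

def report(features):
    p = risk_probability(features, weights=[0.8, -0.5, 0.3], bias=-0.2)
    return {
        "probability": round(p, 2),
        "label": "high risk" if p >= 0.5 else "low risk",
        # anything close to the 0.5 boundary is explicitly marked as uncertain
        "needs_human_review": abs(p - 0.5) < 0.2,
    }

print(report([1.0, 0.2, 0.1]))
# -> {'probability': 0.63, 'label': 'high risk', 'needs_human_review': True}
```

Of course, as the report notes, exposing the probability does not guarantee that courts or clinicians will use it intelligently; that is a human-factors problem, not a modeling one.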


There is an aura of neutrality and impartiality associated with AI decision-making in some corners of the public consciousness, resulting in systems being accepted as objective even though they may be the result of biased historical decisions or even blatant discrimination. All data insights rely on some measure of interpretation. As a concrete example, an audit of a resume-screening tool found that the two main factors it associated most strongly with positive future job performance were whether the applicant was named Jared, and whether he played high school lacrosse. 13 Undesirable biases can be hidden behind both the opaque nature of the technology used and the use of proxies, nominally innocent attributes that enable a decision that is fundamentally biased. An algorithm fueled by data in which gender, racial, class, and ableist biases are pervasive can effectively reinforce these biases without ever explicitly identifying them in the code. 

Without transparency concerning either the data or the AI algorithms that interpret it, the public may be left in the dark as to how decisions that materially impact their lives are being made. Lacking adequate information to bring a legal claim, people can lose access to both due process and redress when they feel they have been improperly or erroneously judged by AI systems. Large gaps in case law make applying Title VII—the primary existing legal framework in the US for employment discrimination—to cases of algorithmic discrimination incredibly difficult. These concerns are exacerbated by algorithms that go beyond traditional considerations such as a person’s credit score to instead consider any and all variables correlated with the likelihood that they are a safe investment. A statistically significant correlation has been shown among Europeans between loan risk and whether a person uses a Mac or a PC and whether they include their name in their email address, both of which turn out to be proxies for affluence.[14] Companies that use such attributes, even if they do indeed improve model accuracy, may be breaking the law when those attributes also clearly correlate with a protected class like race. Loss of autonomy can also result from AI-created “information bubbles” that narrowly constrict each individual’s online experience to the point that they are unaware that valid alternative perspectives even exist.
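
One practical response, sketched below with invented data and a made-up threshold, is to audit each candidate feature for how strongly it tracks a protected class before allowing it into a lending model; a feature like the hypothetical uses_mac flag would be sent for legal and fairness review rather than used by default.

    # Hypothetical audit: flag binary candidate features whose prevalence differs
    # sharply between protected groups. Records, features, and threshold are invented.
    def group_gap(records, feature, protected_attr, group_value):
        in_group = [r[feature] for r in records if r[protected_attr] == group_value]
        out_group = [r[feature] for r in records if r[protected_attr] != group_value]
        return abs(sum(in_group) / len(in_group) - sum(out_group) / len(out_group))

    records = [
        {"uses_mac": 1, "name_in_email": 1, "group": "X"},
        {"uses_mac": 1, "name_in_email": 0, "group": "X"},
        {"uses_mac": 0, "name_in_email": 0, "group": "Y"},
        {"uses_mac": 0, "name_in_email": 1, "group": "Y"},
    ]

    for feature in ("uses_mac", "name_in_email"):
        gap = group_gap(records, feature, "group", "X")
        verdict = "review before use" if gap > 0.25 else "ok"
        print(f"{feature}: gap={gap:.2f} -> {verdict}")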

AI systems are being used in the service of disinformation on the internet, giving them the potential to become a threat to democracy and a tool for fascism. From deepfake videos to online bots manipulating public discourse by feigning consensus and spreading fake news,[15] there is the danger of AI systems undermining social trust. The technology can be co-opted by criminals, rogue states, ideological extremists, or simply special interest groups to manipulate people for economic gain or political advantage. Disinformation poses serious threats to society, as it effectively changes and manipulates evidence to create social feedback loops that undermine any sense of objective truth. Debates about what is real quickly evolve into debates about who gets to decide what is real, resulting in renegotiations of power structures that often serve entrenched interests.[16]

While personalized medicine is a good potential application of AI, there are dangers. Current business models for AI-based health applications tend to focus on building a single system—for example, a deterioration predictor—that can be sold to many buyers. However, these systems often do not generalize beyond their training data. Even differences in how clinical tests are ordered can throw off predictors, and, over time, a system’s accuracy will often degrade as practices change. Clinicians and administrators are not well-equipped to monitor and manage these issues, and insufficient thought given to the human factors of AI integration has led to oscillation between mistrust of the system (ignoring it) and over-reliance on the system (trusting it even when it is wrong), a central concern of the 2016 AI100 report.
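
A minimal sketch of the kind of monitoring this would require is shown below, assuming a hypothetical deployment in which each prediction can eventually be compared against the observed outcome; the class name, window size, and tolerance are illustrative choices, not a standard clinical tool.

    from collections import deque

    class DriftMonitor:
        """Track a deployed predictor's rolling accuracy and flag degradation."""

        def __init__(self, baseline_accuracy: float, window: int = 500, tolerance: float = 0.05):
            self.baseline = baseline_accuracy
            self.tolerance = tolerance
            self.recent = deque(maxlen=window)   # 1 = correct prediction, 0 = incorrect

        def record(self, prediction, outcome) -> bool:
            """Log one prediction/outcome pair; return True if accuracy has drifted."""
            self.recent.append(1 if prediction == outcome else 0)
            if len(self.recent) < self.recent.maxlen:
                return False                     # not enough recent data yet
            current = sum(self.recent) / len(self.recent)
            return current < self.baseline - self.tolerance

    monitor = DriftMonitor(baseline_accuracy=0.88)
    # After each case is resolved (hypothetical escalation hook):
    #     if monitor.record(predicted_deterioration, observed_deterioration):
    #         alert_clinical_team()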

These concerns are troubling in any domain, and all the more so in a high-risk setting like healthcare, because marginalized populations—those that already face discrimination from the health system through both structural factors (like lack of access) and scientific factors (like guidelines that were developed from trials on other populations)—may lose even more. Today and in the near future, AI systems built on machine learning are being used in some settings to determine personalized post-operative pain-management plans and in others to predict the likelihood that an individual will develop breast cancer. AI algorithms are also playing a role in decisions about how to distribute organs, vaccines, and other elements of healthcare. Biases in these approaches can have literal life-and-death stakes.

In 2019, the story broke that Optum, a health-services algorithm used to determine which patients may benefit from extra medical care, exhibited fundamental racial bias. The system designers had excluded race from consideration, but they also asked the algorithm to consider the future cost of a patient to the healthcare system.[17] While intended to capture a sense of medical severity, this feature in fact served as a proxy for race: controlling for medical need, care for Black patients averages $1,800 less per year.
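
The mechanism can be shown with back-of-the-envelope arithmetic; aside from the reported $1,800 gap, the numbers below are invented purely for illustration.

    # Two patients with identical medical need, but historical spending differs by race.
    need_score = 7.0                      # identical underlying medical need (arbitrary units)
    cost_per_unit_of_need = 1_000         # hypothetical average dollars spent per unit of need
    reported_annual_gap = 1_800           # average yearly spending gap at equal need

    predicted_cost_white = need_score * cost_per_unit_of_need          # 7000
    predicted_cost_black = predicted_cost_white - reported_annual_gap  # 5200

    # A program that enrolls the highest-"cost" (seemingly sickest) patients first
    # ranks the white patient above the Black patient despite identical need.
    print(predicted_cost_white, predicted_cost_black)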

New technologies are being developed every day to treat serious medical issues. A new algorithm trained to identify melanomas was shown to be more accurate than doctors in a recent study, but the potential for the algorithm to be biased against Black patients is significant, as it was trained on datasets composed mostly of light-skinned patients.[18] The stakes are especially high for melanoma diagnoses, where the five-year survival rate is 17 percentage points lower for Black Americans than for white Americans. While technology has the potential to generate quicker diagnoses and thus close this survival gap, a machine-learning algorithm is only as good as its data set. An improperly trained algorithm could do more harm than good for patients at risk, missing cancers altogether or generating false positives. As new algorithms saturate the market with promises of medical miracles, losing sight of the biases ingrained in their outcomes could contribute to a loss of human biodiversity, as individuals who are left out of initial data sets are denied adequate care. While the exact long-term effects of algorithms in healthcare are unknown, their potential for bias replication means any advancement they produce for the population in aggregate—from diagnosis to resource distribution—may come at the expense of the most vulnerable.
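
One safeguard suggested by this concern is to evaluate such classifiers separately for each skin-tone group rather than reporting a single aggregate score; the sketch below uses a handful of fabricated cases purely to show the calculation.

    def sensitivity(y_true, y_pred):
        """Fraction of true melanomas the classifier actually catches."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        return tp / (tp + fn) if (tp + fn) else float("nan")

    # (skin_tone_group, true_melanoma, predicted_melanoma) fabricated examples
    cases = [
        ("light", 1, 1), ("light", 1, 1), ("light", 0, 0), ("light", 1, 1),
        ("dark",  1, 0), ("dark",  1, 1), ("dark",  0, 0), ("dark",  1, 0),
    ]

    for group in ("light", "dark"):
        y_true = [t for g, t, _ in cases if g == group]
        y_pred = [p for g, _, p in cases if g == group]
        print(f"{group}: sensitivity {sensitivity(y_true, y_pred):.2f}")
    # An aggregate number would hide the gap these per-group figures reveal.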

[1] Brian Christian, The Alignment Problem: Machine Learning and Human Values, W. W. Norton & Company, 2020.

[2] https://humancompatible.ai/app/uploads/2020/11/CHAI-2020-Progress-Report-public-9-30.pdf

[3] https://knightfoundation.org/philanthropys-techno-solutionism-problem/

[4] https://www.theguardian.com/world/2021/jan/12/french-woman-spends-three-years-trying-to-prove-she-is-not-dead; https://virginia-eubanks.com/ (“Automating Inequality”)

[5] https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

[6] Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism, NYU Press, 2018.

[7] Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code, Polity, 2019.

[8] https://www.publicbooks.org/the-folly-of-technological-solutionism-an-interview-with-evgeny-morozov/

[9] https://predpol.com/about

[10] Kristian Lum and William Isaac, “To predict and serve?” Significance, October 2016, https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1740-9713.2016.00960.x

[11] Jessica M. Eaglin, “Technologically Distorted Conceptions of Punishment,” https://www.repository.law.indiana.edu/cgi/viewcontent.cgi?article=3862&context=facpub

[12] Riccardo Fogliato, Maria De-Arteaga, and Alexandra Chouldechova, “Lessons from the Deployment of an Algorithmic Tool in Child Welfare,” https://fair-ai.owlstown.net/publications/1422

[13] https://qz.com/1427621/companies-are-on-the-hook-if-their-hiring-algorithms-are-biased/

[14] https://www.fdic.gov/analysis/cfr/2018/wp2018/cfr-wp2018-04.pdf

[15] Ben Buchanan, Andrew Lohn, Micah Musser, and Katerina Sedova, “Truth, Lies, and Automation,” https://cset.georgetown.edu/publication/truth-lies-and-automation/

[16] Britt Paris and Joan Donovan, “Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence,” https://datasociety.net/library/deepfakes-and-cheap-fakes/

[17] https://www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients/

[18] https://www.theatlantic.com/health/archive/2018/08/machine-learning-dermatology-skin-color/567619/

Cite This Report

Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report." Stanford University, Stanford, CA, September 2021. Doc: http://ai100.stanford.edu/2021-report. Accessed: September 16, 2021.

Report Authors

AI100 Standing Committee and Study Panel  

© 2021 by Stanford University. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report is made available under a Creative Commons Attribution-NoDerivatives 4.0 License (International): https://creativecommons.org/licenses/by-nd/4.0/.
