Exploring the Risks and Benefits of AGI vs Narrow AI:
Navigating the Path Towards Safe and Ethical Artificial Intelligence and the Problem of Hallucination
Introduction
From time to time, we will explore AI generally, both to provide a foundation for our readers and to make connections to the law. In this dispatch we provide a background on the major versions of AI; their risks, benefits, and ethical and other concerns; and the problem of hallucination, which is already affecting legal research via ChatGPT.
AGI versus Narrow AI
Artificial General Intelligence (AGI) refers to the development of intelligent machines that can perform a wide range of intellectual tasks that would normally require human-level intelligence. AGI aims to create machines that can learn, reason, and solve problems much like humans do. The goal is not merely intelligence approximating that of humans, but the ability for a machine or software to learn from past data and apply it to novel circumstances, as humans do. To date, AGI remains theoretical, and many experts believe it is still a distant goal, perhaps 50 to 100 years in the future.
The development of AGI raises legal and ethical issues such as liability for the actions of intelligent machines, the regulation of AGI development, and the impact of AGI on employment and privacy. Lawyers and policymakers will play an important role in establishing legal frameworks that ensure the responsible and ethical development and deployment of AGI.
Narrow AI
Narrow AI refers to the development of intelligent machines designed to perform specific tasks or solve particular problems, rather than to be generally intelligent like humans. Unlike Artificial General Intelligence (AGI), which seeks to create machines that can learn, reason, and solve problems across a wide range of domains, Narrow AI systems are developed to perform a specific task or set of tasks within a narrow domain, such as image or speech recognition, natural language processing, or game playing. Narrow AI systems are typically highly specialized and optimized for a single task, often outperforming humans at it. As such, Narrow AI has become increasingly prevalent in many industries and fields, from healthcare and finance to manufacturing and transportation. The development of Narrow AI has the potential to revolutionize these industries by providing more efficient and effective solutions to complex problems.
Narrow AI is already here. The self-driving software in Tesla vehicles is one example. It is a sophisticated tool, but it is good at only one thing: driving. That tool cannot be adapted to writing poetry or even to performing simple mathematical calculations. It is designed for one narrow task, and the AI on board is Narrow AI.
The reinforcement mechanisms used by YouTube and TikTok to keep users scrolling to the next video clip are also versions of Narrow AI. Those tools are constantly fed data from users and "learn" which videos to suggest to each individual user to maximize time spent in the app. This attention economy is at the heart of social media, and some have argued it amounts to a psychological experiment run at scale without any of the usual safeguards for such activities.
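The actual recommendation systems behind these platforms are proprietary and far more elaborate, but the basic feedback loop is easy to sketch. The toy Python example below uses a simple epsilon-greedy strategy over a handful of invented video categories and engagement rates; every name and number in it is hypothetical, chosen only to show how a system rewarded for watch time naturally converges on whatever content keeps users watching.

```python
import random

# Toy illustration of an engagement-driven recommender as a multi-armed bandit.
# This is NOT the actual YouTube/TikTok algorithm; the categories, the hidden
# engagement rates, and the epsilon value are all invented for illustration.

CATEGORIES = ["cooking", "news", "gaming", "music"]
TRUE_ENGAGEMENT = {"cooking": 0.30, "news": 0.20, "gaming": 0.55, "music": 0.40}

watch_counts = {c: 0 for c in CATEGORIES}      # how often each category was shown
reward_totals = {c: 0.0 for c in CATEGORIES}   # total observed watch "reward"
EPSILON = 0.1                                  # fraction of purely exploratory picks

def pick_category():
    """Mostly recommend the category with the best observed engagement,
    but occasionally explore a random one (epsilon-greedy)."""
    if random.random() < EPSILON or all(n == 0 for n in watch_counts.values()):
        return random.choice(CATEGORIES)
    return max(CATEGORIES, key=lambda c: reward_totals[c] / max(watch_counts[c], 1))

def simulate_user_watch(category):
    """Simulated user: watches the clip with that category's hidden probability."""
    return 1.0 if random.random() < TRUE_ENGAGEMENT[category] else 0.0

for _ in range(5000):
    category = pick_category()
    reward = simulate_user_watch(category)
    watch_counts[category] += 1
    reward_totals[category] += reward

# After enough feedback, the system overwhelmingly serves the most "engaging" category.
print(watch_counts)
```

The point of the sketch is not the particular algorithm but the loop itself: user behavior becomes training data, and the system's only objective is to keep the user engaged.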
The Control Problem
The control problem is a central concern in the development of advanced artificial intelligence systems. It refers to the challenge of ensuring that intelligent machines behave in ways that are aligned with human values and objectives. As AI systems become more capable, there is a risk that they may act in harmful or unintended ways, due to a lack of understanding of human values or a misalignment of objectives. This could lead to catastrophic consequences, such as the loss of human life or the destabilization of global systems. Addressing the control problem requires new approaches to designing and training AI systems, as well as robust safety mechanisms that keep AI behavior consistent with human values and objectives. It is an ongoing challenge, and continued research will be necessary to ensure that AI systems are developed in a way that is safe and beneficial for humanity.
The Control of Killing Machines
Many politicians and military leaders have been quoted as saying that "the purpose of the military is to kill people and break things." The military has been using AI-powered drone aircraft for more than ten years. AI-powered aircraft have already bested the most well-trained fighter pilots the U.S. military produces. However, the debate rages on within the military and among its civilian leaders as to when, if ever, to equip AI-powered drones with lethal weaponry. Who is responsible if the drone misses its target? Or, worse yet, selects a target at odds with the one chosen by the human controlling it? A host of articles, some quoting famous philosophers and cultural critics, have been published in the last decade seeking to outline the problems, and possible solutions, posed by unmanned AI aircraft carrying lethal weapons.
The Cultural Earthquake
AI is transforming many industries, including healthcare, finance, and transportation, and it has the potential to revolutionize the way we live, work, and communicate. The adoption of AI-powered tools will change our culture in ways both subtle and obvious. As we have written here (and will undoubtedly revisit in the months and years to come), AI will change the law itself and the way the legal system operates: the evidence jurors can see, the rules governing the admissibility of that evidence, and the way lawyers prepare pleadings for court, among other things.
While AI tools will undoubtedly aid members of society in important ways, they will also displace many people from professions to which they have dedicated their lives. Here is a summary of just some of the effects we will see in the years to come:
Improved healthcare: AI has the potential to transform healthcare by personalizing treatment plans and helping doctors make faster, more accurate diagnoses.
More efficient transportation: AI can help optimize traffic flow and reduce accidents, making transportation safer and more efficient.
Increased productivity: AI can automate repetitive tasks, freeing up workers to focus on more important work.
Enhanced customer service: AI can improve customer service by providing personalized assistance and support.
Better decision-making: AI can provide insights and recommendations that can help businesses make better decisions.
These benefits come with the risk that AI will be used, intentionally or accidentally, in ways that generate severe and unexpected consequences.
Risks and Challenges of AI
Job losses: One of the most significant risks of AI is job displacement. As AI becomes more advanced, it can automate many jobs previously performed by humans, which could lead to significant unemployment and economic disruption.
Bias: Another potential risk of AI is bias. AI algorithms can be biased if they are trained on biased data. This can lead to unfair treatment of certain groups of people.
Security: AI systems can be vulnerable to cyber attacks, which could have devastating consequences. As AI becomes more prevalent, it will become increasingly important to ensure that these systems are secure.
Ethical concerns: AI raises a number of ethical concerns, including issues related to privacy, transparency, and accountability. It will be important to address these concerns as AI becomes more widespread.
It Is Not Just a U.S. Problem
Many countries are investing heavily in AI research and development. Some of the countries that are leading the way in AI include:
The United States is home to some of the world's leading tech companies, and it has been investing heavily in AI research and development.
China is also investing heavily in AI research and development, and it has set a goal to become the world leader in AI by 2030.
Canada is home to several leading AI research institutions, and it has been investing heavily in AI research and development.
The United Kingdom has established itself as a leader in AI research, and it has set a goal to become a world leader in AI.
Job Losses and Jobs Created
As we mentioned earlier, one of the biggest risks of AI is job losses. However, it's important to note that AI will also create new jobs. Some of the jobs that will be created by AI include:
AI specialists: As AI becomes more prevalent, there will be a growing demand for experts who can develop and maintain AI systems.
Data analysts: AI relies on data, so there will be a growing demand for data analysts who can collect, analyze, and interpret data.
Human-AI interaction specialists: As AI becomes more integrated into our lives, there will be a growing need for experts who can design AI systems that are user-friendly and easy to interact with.
Ethicists: As we mentioned earlier, AI raises a number of ethical concerns. As a result, there will be a growing demand for ethicists who can help ensure that AI systems are developed and used in an ethical manner.
It's also worth noting that while some jobs will be automated by AI, new jobs will be created that we can't even imagine yet. This is a common pattern with technological advancements – while some jobs are lost, new jobs are created that require different skills and expertise.
Other Problems with AI
In addition to the risks and challenges we've discussed so far, there are other problems with AI that we must consider. These include:
Lack of transparency: AI systems can be difficult to understand, and this lack of transparency can be a problem. It can be difficult to determine how AI systems make decisions, which can make it hard to identify and correct errors or biases.
Lack of regulation: AI is a relatively new technology, and there is currently a lack of regulation surrounding its development and use. This can be a problem because it can lead to unethical or unsafe use of AI.
Dependence on data: AI relies on data to make decisions, and if this data is biased or incomplete, it can lead to biased or incomplete decisions.
Impact on social interactions: As AI becomes more prevalent, it could have an impact on social interactions. For example, if people begin to rely on AI for communication, it could lead to a decrease in face-to-face interactions.
The Hallucination Problem
As artificial intelligence (AI) technology advances, there are growing concerns about the tendency of AI systems to hallucinate. Hallucination in AI refers to the phenomenon in which an AI system generates output that is not grounded in any real input or data. Instead, the output is fabricated by the system itself, producing false or misleading information.
Causes of Hallucination in AI
Hallucination in AI can be caused by several factors, including:
Overfitting: Overfitting is a common problem in machine learning in which a model fits its training data too closely, memorizing the data instead of learning to generalize from it. When this happens, the model may generate output that tracks the training data but does not accurately reflect the real world. (A short illustration follows this list.)
Limited data: AI systems rely on data to learn and make decisions. If the data is limited or biased, the AI system may generate output that is not accurate or relevant to the real world.
Complexity: As AI systems become more complex, it can be more difficult to understand how they make decisions, leading to errors and hallucinations.
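As a concrete illustration of the overfitting point above, here is a minimal sketch in Python using NumPy. The data is synthetic and the model choice (a degree-9 polynomial fit to ten noisy points) is invented purely for illustration: the fit reproduces its training points almost exactly, yet its predictions degrade quickly on inputs it has not seen, which is the kind of behavior that surfaces in generative systems as confident but ungrounded output.

```python
import numpy as np

# Toy illustration of overfitting: a degree-9 polynomial fit to ten noisy points
# can reproduce the training data almost perfectly while giving poor answers on
# new inputs. All numbers here are synthetic, chosen only for illustration.

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, size=10)  # signal + noise

# High-capacity model: enough parameters to memorize every training point.
coeffs = np.polyfit(x_train, y_train, deg=9)

x_new = np.linspace(0, 1.2, 7)          # includes inputs just outside the training range
y_pred = np.polyval(coeffs, x_new)
y_true = np.sin(2 * np.pi * x_new)

for xv, pred, true in zip(x_new, y_pred, y_true):
    print(f"x={xv:.2f}  predicted={pred:9.2f}  true={true:6.2f}")
# Near-zero error on the memorized training points, but predictions drift far
# from the truth as soon as the input moves beyond what the model has seen.
```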
Hallucinations can lead to false or misleading information, which can have serious consequences in applications such as healthcare or autonomous vehicles. It is also something lawyers need to watch for before relying on the legal analysis (and even the citations) produced by LLMs like ChatGPT. In addition, the lack of transparency and explainability in AI systems can lead to a lack of trust in the technology, which could hinder its adoption and advancement.
I have already conducted my own limited experiments with ChatGPT and observed this hallucination problem up close. Try it yourself: pose a legal question to ChatGPT and specifically ask for case summaries and citations supporting your argument. What you will likely receive is a set of well-written case summaries, complete with citations in the format you would expect. However, if you search for those cases by citation in any legal database, you are likely to find that the citations are imaginary or, if they are real, point to a case completely different from the one being summarized. ChatGPT is already hallucinating case law, and the unsuspecting user, perhaps looking to save a few bucks on the hourly rate of a lawyer doing actual informed research, risks making decisions based on false information.
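For readers who want to reproduce the experiment programmatically rather than through the chat window, below is a minimal sketch assuming OpenAI's Python client library (the v1.x interface) and an API key available in the environment. The model name and the prompt are illustrative only; the closing warning is the real lesson, namely that nothing the model returns should be treated as a verified citation.

```python
import os
from openai import OpenAI  # assumes the openai Python package (v1.x interface) is installed

# Minimal sketch of the experiment described above: ask the model a legal question,
# request supporting case citations, and then treat every citation as unverified.
# The prompt and model name are illustrative, not recommendations.

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

prompt = (
    "Summarize three appellate cases, with full citations, holding that a landlord "
    "owes a duty to maintain common stairways in a multi-unit building."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
print(
    "\nWARNING: the citations above are unverified model output. Confirm each one "
    "in Westlaw, Lexis, or another authoritative source before relying on it."
)
```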
Conclusion
The development of AI is a rapidly evolving field with both risks and benefits. Artificial General Intelligence (AGI) has the potential to revolutionize the way we live and work, but it also raises legal and ethical concerns. Narrow AI is already prevalent in many industries and has the potential to improve efficiency and productivity. However, the control problem, job losses, bias, and security concerns are among the risks associated with even Narrow AI.
It is crucial to establish appropriate legal frameworks to ensure the responsible and ethical development and deployment of AGI. It is also important to address the risks and challenges associated with AI, such as job losses, bias, security, and ethical concerns. Those losses are already being felt. The time to begin considering the limits we, as a society, want to place on AI tools is already upon us. A recent letter signed by many prominent AI researchers and well-known tech entrepreneurs urges a moratorium of at least six months on the release of any new advances in LLMs like ChatGPT. A later edition of this newsletter will discuss that letter at length, along with the competing opinions as to whether AI poses an existential risk to humanity or whether that claim is overblown hyperbole.
The hallucination problem is a serious concern in AI; it can lead to false or misleading information and to a lack of transparency and trust in the technology. AI tools released to the general public may well have to be regulated in some way in the future, at a minimum to provide transparency into how they reach their decisions or produce their output.
Although this newsletter is focused on AI and the law, developments in AI outside the legal industry will undoubtedly make their way into it. Litigation, regulation, policymaking, education, client representation, and more will continue to be affected. We lawyers have to stay vigilant about these advances, from the benefits to our practice (AI research and writing tools) to the risks of AI-manipulated evidence being unwittingly admitted in criminal and civil cases.