Artificial intelligence (AI) has been around far longer than most people realize, and its use has been growing steadily for almost 50 years. In fact, the 1980s became known as the “AI boom” due to rapid technological breakthroughs and increased government funding.
Until recently, however, AI was confined to computation and organization. Rapid technological advances have since expanded its use into decision-making across many different industries.
Artificial intelligence is exactly what its name suggests. According to the Oxford dictionary, it is “the theory and development of computer systems able to perform tasks that normally require human intelligence.” As a specialty within computer science, the field focuses on creating systems that mimic human qualities such as problem solving, information processing and decision making.
New forms of AI have unprecedented capabilities that promise to revolutionize industries, enhance productivity and improve lives through personalized automation. However, this rapid advancement also raises concerns about pitfalls such as job displacement, ethical dilemmas and the potential for bias in decision-making algorithms.
In the 2020s, most people encounter AI in their everyday lives, in applications such as facial recognition in phones, email and digital voice assistants. The technology has infiltrated industries from banking to education.
Advocates argue that AI can make life easier, reduce human error and increase productivity, and they see endless possibilities for the benefits it can provide.
The education sector has been tremendously affected by the growth of AI, for better and for worse. Although AI can be helpful, students may misuse the tool, for instance by having it generate their essays. This obstructs the learning process, leading some educators to swear off AI completely. Others, however, accept that AI is here to stay and instead incorporate it into their curricula.
English teacher Todd Butler chose to embrace the presence of AI while staying conscious of its limitations. The newest addition to his syllabus acknowledges AI and permits students to use it to generate ideas, but only with the teacher’s approval beforehand.
“What that looks like to me is teaching students that the future depends on us being one step ahead of AI,” Butler said. “Because if we let AI do everything for us, then we crumble.”
Full transparency about one’s use of AI is extremely important; students who do not acknowledge that use are on a path toward academic dishonesty. In education, AI has to be intertwined with accountability.
Applications such as ChatGPT are often no better than a student’s original writing. While the output may read as fluid, elaborate and grammatically sophisticated to a casual reader, closer analysis shows that it lacks human reasoning and that the tool does not fact-check its own work.
Butler demonstrates this idea frequently to his students as a way to dissuade them from relying solely on AI in the classroom.
“One thing that we’ve been doing is giving ChatGPT an essay prompt,” Butler said. “We discover inaccuracies and flaws. Afterwards, I’ll put a student’s writing on the board and we’ll juxtapose the two. Ten out of ten times the student’s writing is better than GPT.”
Junior Arpith Prasad shares similar sentiments as he weighs the pros and cons of using AI in his education.
“As a student, AI would help me a lot with homework or doing tasks faster,” Prasad said. “But at the same time, I [would not actually be] doing the work, and it is not what I actually believe in.”
Last year, Butler followed a strict policy of outlawing AI, but the policy was not effective because there was no way to enforce it. By incorporating AI into the classroom instead, he hopes to communicate its limitations to students so they can see how the human brain will always be better.
Prasad agrees about the limitations of AI but also acknowledges its potential in aiding students with learning.
“AI helps students with breaking down a concept, explaining a topic, or studying for tests,” Prasad said. “It could become more advanced in the future, but right now, I think it is still in the earlier phases, and I think it needs more time to just be nuanced and get better.”
These sentiments regarding AI in education can be seen statewide: Ohio Lt. Governor Jon Husted partnered with The AI Education Project and announced the launch of an AI Toolkit for Ohio’s K-12 school districts on Feb. 15, 2024.
The toolkit acts as a guide for responding to the rapid proliferation of AI tools and for preparing students and educators alike for a future in an AI-driven world.
“AI technology is here to stay, and as a result, InnovateOhio took the lead on hosting forums over the summer to discuss the impacts,” Husted wrote in a public statement.
The push to integrate AI into education came primarily from teachers, who realized they needed coherent guidelines on how to let students use the tools in a trustworthy manner.
“The more resources we place in the hands of school leaders, educators, families and students, the better positioned we will be to use AI tools thoughtfully and responsibly,” said Stephen D. Dackin, director of the Ohio Department of Education and Workforce.
Without statewide action, educators worry that AI will be used unethically by students, hindering learning.
Workers are also concerned about the impact of AI on the world’s job markets.
A study by the McKinsey Global Institute found that in about 60% of occupations, at least 30% of work activities could be automated; if AI can take on this burden, efficiency will increase substantially.
Critics are concerned that AI could completely change the job market and render many jobs replaceable, leading to rising unemployment as people struggle to learn new skills to keep up with a rapidly advancing technological society.
According to a December 2023 CNBC/SurveyMonkey Workforce Survey, 60% of employees who use AI are concerned about its impact on their jobs.
Additionally, Goldman Sachs published a report in March 2023 estimating that 300 million jobs could be displaced by fast-growing technology. These statistics fuel ethical debates about whether the negatives of AI outweigh the positives.
So, what are some fields that AI will impact and what should prospective employees expect?
One industry AI has affected for the better is biomedical informatics.
Dr. Rong Xu is a Professor of Biomedical Informatics and founding Director of the Center for AI in Drug Discovery at the Case Western Reserve University (CWRU) School of Medicine. Her research goal is to aid biomedical discovery and improve healthcare through the development of AI algorithms.
“Since our center is research-focused, our goal is to develop new AI algorithms,” she said when asked why she founded the new center. “Specific AI algorithms that we commonly use are natural language processing, machine learning, networks, knowledge representation and engineering.”
Dr. Xu uses these algorithms daily in her research in drug discovery, drug toxicity predictions, gene-disease-environment interactions and more. A recent example is AlphaFold.
“[AlphaFold is] the AI system that tackled the long-standing challenge of predicting the three-dimensional structure of proteins from the one-dimensional sequence of their amino acids,” Xu said.
AlphaFold has been used to predict the structures of the proteins of SARS-CoV-2, the virus that causes COVID-19. With AlphaFold, scientists can better anticipate the virus’s mutations and design vaccines faster and with greater accuracy and effectiveness.
These algorithms lead to numerous AI-driven advances in the biomedical field. The potential of AI is all the more clear in light of the devastation caused by the COVID pandemic.
Yet despite the critical role AI plays in healthcare research, Xu also warns against the potential pitfalls of using AI incorrectly.
“AI can greatly facilitate novel discovery in the biomedical field as AlphaFold does,” she said. “[However], the cons are that sometimes powerful AI tools are used incorrectly since users don’t have a good understanding of them.”
Another field where new AI technologies could be effective is public health, where they can help mitigate disasters and humanitarian emergencies.
Lieutenant Colonel (Retired) Joanne E. McGovern is a highly decorated combat veteran with years of disaster, humanitarian and complex emergency experience internationally. She also served as a senior advisor at the Yale New Haven Health System Center for Emergency Preparedness and Disaster Response (YNHHS-CEPDR), and believes that AI can significantly help during disasters and public health emergencies.
McGovern is also seeing her field transformed by AI.
“In disaster management, AI is used to forecast natural disasters, extreme weather events, identify potential risks, and possible outcomes,” she said. “For example, Google’s Flood Hub uses AI to predict where floods are most likely to occur.”
Applications such as Flood Hub can enable rapid emergency responses by analyzing data from previous, similar disasters to help officials make an informed plan. These preventative measures can save countless human lives.
AI could also assist in emergency medical planning, mass casualty evacuation and detailed communication and reports to key organizations.
“During a natural disaster and public health emergencies, communication is critical,” McGovern said. “AI-powered chatbots are being used by organizations like the Red Cross to provide accurate information about emergency preparedness measures.”
Clara is the Red Cross’s AI emergency-response chatbot, which can quickly respond to inquiries from disaster survivors to provide information and resources. The system can also cater to an individual’s needs and provide personalized advice.
These programs help in disaster recovery efforts by assessing strategies and the possible courses of action.
During the COVID pandemic, McGovern served as the Program Director for Yale’s Contact Tracing and Outreach Program and understands how AI is utilized in stopping communicable diseases.
“A great example of this is Uganda, [which] is utilizing ChatGPT-4 to optimize the surveillance of zoonotic diseases and predict future outbreaks,” McGovern said.
McGovern also pointed to the Gates Foundation’s Global Grand Challenges grants, many of whose recipients are applying AI tools to public health problems.
“It shows how AI is being used to level the playing field with public health and preparedness,” she said.
Conducting surveillance of diseases and public health emergencies requires collecting and processing a vast amount of data, and managing that flood of data is itself one of the challenges that comes with any type of disaster.
“AI can be utilized to leverage all this [data] into useful tools for creating predictive risk analysis, predictive infrastructure failure assessment, situational awareness during the event and monitoring of recovery and rapid assessment of the impacts of disasters,” McGovern said.
Currently, Texas A&M is doing just that in its Urban Resilience AI Lab.
McGovern emphasizes that AI should serve only as an aid for humans because, in many ways, it lacks human reliability and critical judgment.
“[AI] still needs a human to review the work,” McGovern concluded. “It makes mistakes. As it becomes more sophisticated and refined it will continue to improve.”
Dr. Shaomin Hu, a pathologist at the Cleveland Clinic, shares similar ideas about the influence of AI in the field of pathology.
Pathology is the examination of body fluids and tissue samples; pathologists use laboratory tests and analytical skills to reach diagnoses of patients.
While Hu recognizes the noteworthy contributions made by AI in the medical field, he maintains that the technology is still in its preliminary stages and cannot fully replace human expertise.
According to Hu, AI has the potential to enhance efficiency and diagnostic precision for pathologists in clinical settings, as well as to aid research in identifying new biomarkers that could forecast disease prognosis, guide therapeutic interventions or facilitate diagnostic processes.
Currently, a few AI algorithms have been developed to assist with routine pathological diagnosis. One example is Paige Prostate, the first FDA-cleared AI solution in digital pathology.
“Traditionally, pathologists would meticulously scrutinize all Hematoxylin and Eosin-stained slides either under a microscope or via scanned images on a computer to determine a diagnosis for a prostate biopsy,” Hu said. “This process is inherently laborious and time-intensive. Paige Prostate employs AI-driven software capable of pinpointing critical areas of interest on scanned images, streamlining the diagnostic process.”
This technology can expedite the final diagnosis by allowing pathologists to focus on reviewing the areas identified by AI, saving time and reducing errors.
One of the AI programs Hu implements in his practice is AI-assisted Ki-67 counting.
Ki-67 is an immunohistochemical stain used to assess cell proliferation rates, an important indicator for determining tumor malignancy.
“Previously, the process involved manually counting 500 tumor cells to calculate the percentage of Ki-67 positive cells,” Hu said. “However, with AI-assisted software, this calculation can now be completed in just a few seconds.”
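The arithmetic behind that score is straightforward: a Ki-67 index is simply the share of counted tumor cells that stain positive. The following minimal Python sketch illustrates the calculation; it is not actual clinical software, and the classify step is a hypothetical stand-in for the AI image analysis Hu describes.

```python
# Minimal sketch of the Ki-67 scoring arithmetic, not actual clinical software.
# The `classify` argument stands in for the AI image-analysis step that labels
# each detected tumor cell as Ki-67 positive or negative.

def ki67_index(cells, classify):
    """Return the percentage of tumor cells that stain Ki-67 positive."""
    positive = sum(1 for cell in cells if classify(cell) == "positive")
    return 100.0 * positive / len(cells)

# Example: of 500 counted cells, 120 stain positive,
# giving a Ki-67 proliferation index of 24%.
cells = ["positive"] * 120 + ["negative"] * 380
print(ki67_index(cells, lambda cell: cell))  # 24.0
```

The counting itself is trivial; the hard, formerly manual work is the per-cell classification, which is exactly the step the AI software automates.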
Yet despite the promise of new technologies such as AI-assisted Ki-67 counting, there are still relatively few AI applications directly applicable to pathologists’ practices. Many concerns and limitations impede the widespread adoption of AI in pathology.
The main concern is diagnostic accuracy because no algorithm is infallible.
“Algorithms are only reliable within the specific contexts in which they were trained,” Hu said. “Encountering scenarios beyond their training may lead to serious consequences, such as misdiagnosis resulting in delayed or unnecessary treatment.”
While Hu is confident that AI will revolutionize the practice of pathology, he believes that AI is still not developed enough to make accurate diagnoses on the level of a trained pathologist and can only serve as an aid.
“It is important to emphasize that pathologists ultimately retain the responsibility for making the final diagnostic decision and maintaining full accountability for the contents of the pathology report,” Hu stated. “I do not foresee AI algorithms ever functioning entirely independently of pathologists.”
Another limitation of AI algorithms is data bias, which directly affects the quality of care hospitals provide.
“Data bias poses a significant risk, potentially leading to misdiagnosis of rare diseases prevalent in certain minority populations,” Hu said.
In other words, AI can exacerbate existing prejudices in healthcare.
Recently, there has been debate about how AI is used at the Centers for Medicare & Medicaid Services (CMS). CMS uses AI to help make decisions because of the enormous amount of data and the number of people enrolled in its healthcare programs.
However, this sets a dangerous precedent: AI is not foolproof, and its algorithms can decide to cut payments for patients’ treatments. A flawed CMS algorithm could issue recommendations that conflict with a patient’s individual circumstances because it fails to take a holistic view.
These concerns were addressed by the House of Representatives on Nov. 3, 2023, when more than 30 members sent a letter urging CMS to reevaluate how it uses AI for medical coverage decisions.
In fields such as science and technology, where AI can affect people’s quality of life, the technology should be much more strictly regulated, erring on the side of caution. But even in fields such as filmmaking, where AI poses no health threats, there are still concerns.
Last summer the entertainment industry witnessed historic strikes by actors in SAG-AFTRA and screenwriters in the Writers Guild of America, and one of their biggest concerns was the unregulated use of AI. Actors fear that their unique characteristics may be illegally mimicked by AI, and film extras worry about being replaced altogether. Writers are anxious that AI-generated storylines and scripts may become common and limit creativity.
The SAG-AFTRA strike ended on Nov. 9, 2023, but controversy over the role of AI in creative work is far from over. On Jan. 9, 2024, a new AI voice agreement was introduced between Replica Studios (an AI voice technology company) and SAG-AFTRA.
The new agreement provides protections for professional voiceover actors and allows digital voice replicas to be used ethically in video games.
SAG-AFTRA issued a statement describing the contract as a positive development.
“This contract marks an important step towards the ethical use of AI voices in creative projects by game developers, and sets the basis for fair and equitable employment of voice actors as they explore the new revenue opportunities provided by AI,” the statement said.
Among the new contract’s terms are a requirement that the voice actor consent to the use of their digital voice double and an option to opt out of projects without major repercussions.
“Our voice actor agreements ensure that game developers using our platform are only accessing licensed talent who have given permission for their voice to be used as a training data set,” Shreyas Nivas, CEO of Replica Studios, said in the statement by SAG-AFTRA. “As opposed to the wild west of AI platforms using unethical data-scraping methods to replicate and synthesize voices without permission.”
The new contract illustrates how an industry can effectively address the controversies posed by AI.
“Recent developments in AI technology have underscored the importance of protecting the rights of voice talent,” SAG-AFTRA National Executive Director and Chief Negotiator Duncan Crabtree-Ireland said. “Particularly as game studios explore more efficient ways to create their games.”
These legal battles and the stifling of creativity and innovation raise many worries about the ethics of AI. Furthermore, the dangers of mishandling such a risky tool, seen across the industries discussed above, necessitate regulatory frameworks and ethical guidelines to ensure its responsible use and mitigate potential harm.
But in a field so new and undefined, where should one draw the line to make sure AI does not hinder human activities?
According to the American Psychological Association, there is growing worry among employees about the role of AI in the workforce. In fact, nearly two out of every five workers (38%) reported anxiety that their jobs may be replaced and rendered useless.
As AI becomes more embedded in everyday life, concern grows over how lawmakers should ensure the technology does not harm people.
Shannon E. French, the Inamori Professor in Ethics, Director of the Inamori International Center for Ethics and Excellence, a professor in the philosophy department and the School of Law at CWRU, attempted to answer these difficult questions.
Currently, she is researching the ethics of emerging technologies such as AI and is writing a book about the impact of technology on ethics, with the working title “Artificial Ethics: Human Value(s) in a High-Tech Era.”
According to French, the biggest concerns with AI are autonomy and accountability. For example, if AI systems are used to control autonomous weapons systems, which many people, including French, oppose, it is unclear whether they have the decision-making capabilities to adhere to the principles of Just War Theory.
If AI makes a costly mistake violating the rules of war, who will be held accountable?
“Some decisions carry too much moral weight to be turned over to AI systems,” French wrote in an email. “It’s important for humans to remain accountable for actions with life-or-death consequences.”
This is why, in the military, AI is deployed mainly in non-lethal operations such as logistics and surveillance, where there is too much data for a human to process manually. Ideally, AI should only suggest possible courses of action, leaving the actual decision-making to humans.
“The decisions involved there are more straightforward than ethical decisions, so it’s a good use of the technology,” French said.
Given AI’s extensive capabilities, if the technology is not properly regulated, those capabilities will be exploited at people’s expense. Ultimately, relying solely on AI to make decisions about human life can have devastating consequences because algorithms lack human empathy.
“Humans are also able to make more nuanced and complex decisions, and to use emotions and instincts in a positive way in decision making,” French stated. “They are also more creative and capable of surprising others.”
Furthermore, AI tends to amplify existing societal biases, and one industry where this is evident is law and the judicial courts.
“In the law, there have been some experiments with AI systems to help guide sentencing decisions by judges,” French said. “But they have generally been failures, because they are biased against minorities.”
AI magnifies many other biases in society. For example, AI text detectors have been found to be biased against non-native English speakers: researchers at Stanford University found that over half of non-native English writing samples were mistakenly flagged as AI-generated, indirectly penalizing people, such as immigrants, who have limited English proficiency.
Additionally, MIT researchers discovered that AI algorithms promote gender stereotypes by correlating words, such as job titles, with masculinity or femininity. For example, the word “doctor” is associated with masculinity, while the word “secretary” is associated with femininity.
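The mechanism behind such findings can be sketched simply. In word-embedding models, each word is represented as a vector of numbers, and bias shows up when a job title’s vector sits measurably closer to “he” than to “she” (or vice versa). Below is a rough illustration using made-up three-dimensional vectors chosen to mimic the reported effect; real studies compute the same quantity on learned embeddings with hundreds of dimensions.

```python
import numpy as np

# Illustrative only: made-up 3-D vectors chosen to mimic the reported bias.
# Real studies measure the same cosine-similarity gap on learned embeddings.
toy_vectors = {
    "he":        np.array([ 1.0,  0.2, 0.1]),
    "she":       np.array([-1.0,  0.2, 0.1]),
    "doctor":    np.array([ 0.6,  0.8, 0.3]),
    "secretary": np.array([-0.7,  0.7, 0.2]),
}

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point the same way."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

for word in ("doctor", "secretary"):
    gap = cosine(toy_vectors[word], toy_vectors["he"]) \
        - cosine(toy_vectors[word], toy_vectors["she"])
    lean = "masculine" if gap > 0 else "feminine"
    print(f"{word}: gender lean {gap:+.2f} ({lean})")
```

A positive gap means the word’s vector sits closer to “he”; in embeddings trained on web text, “doctor” shows exactly that lean while “secretary” shows the opposite.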
To become more applicable in industries like law that directly affect people’s fates, AI must develop the capability to mitigate human biases by acting as an ethical adviser.
French believes the best option would be for AI to analyze questionable decisions on a case-by-case basis and ask key questions like “have you considered whether this individual might be seen as more threatening merely because of bias against their race or ethnicity?”
In other words, AI could be used to make us aware of biases formed as preconceived notions. Although the idea sounds nice on the surface, it will take a great deal of scientific ingenuity and hard work for AI to be utilized as an asset rather than an impediment.
French is not confident that this can be accomplished.
“The way these systems ‘think’ is extremely different from how humans do, and so it stands to reason that any ethics they develop will also be different,” French said. “It would be like dealing with aliens from another world.”
Ultimately, AI is here to stay in all industries and will profoundly impact the world’s scientific and technological development. It is up to humans to create proper regulations that use AI’s immense capabilities for the benefit of humanity.
“In my opinion, fighting AI and ignoring its existence may be doing a disservice,” English teacher Butler said. “If we can figure out a way to incorporate it creatively and avoid using it for substituting our own thoughts, we could have a good relationship with AI.”