The saying “AI might take over the world” is coming true in some aspects of human life. From helping Domino's prepare pizzas to driving robotaxis, AI has slowly been replacing humans in several industries. Human interactions are giving way to robots, Siri's voice, and chatbots posing as people. In a world where information is available at the click of a button, we need to understand the role of generative AI, how the U.S. government regulates, promotes, and improves it, and the effects it might have on our values.
Dr. Bailey's syllabus for PSYC101 offers a description of generative AI:
A type of computer system that is constructed by training with large sets of data. Generative AI systems learn patterns in the data sets and can generate compositions based on those patterns very quickly. In many cases, generative AI produces output that matches what a knowledgeable human would produce; in other cases, generative AI produces output that is equally confident and fluent but is incorrect or made-up. Human expertise is required to distinguish between these two cases; generative AI is not self-aware enough to distinguish between fact and fiction.
Some of the most common uses of generative AI are image creation and editing, game development, chatbots and virtual assistants, and 3D modeling. These platforms give users broader ideas, designs, and tools for developing their brand or industry.
Generative AI can be helpful in many industries, including healthcare. AI applications can detect early symptoms of certain diseases, and when paired with an app they can teach users to understand risks, self-examine, and address immediate concerns. Conversational AI is designed to enhance patient engagement and address staffing challenges. Tools such as Microsoft Bing let users search across a broader range of material, producing more comprehensive results. Generative AI can also help people manage their money, assisting users in saving, budgeting, and gaining financial knowledge.
The Department of State states that it “focuses on technological revolution; advances in AI technology present both great opportunities and challenges.” The Department collaborates globally to ensure that technology advances, protects citizens, and promotes safety in society. Various government offices regulate particular aspects of AI and its usage in the U.S. The Office of the Under Secretary of State for Economic Growth, Energy, and the Environment partners with global AI research and development efforts by building international support for the U.S. science and technology enterprise. It works to set fair rules for economic competition, advocates for U.S. companies, and promotes foreign policies and regulatory environments that benefit U.S. capabilities in AI. By comparison, the Office of the Under Secretary of State for Arms Control and International Security directs its attention to the security implications of AI, such as potential applications in weapon systems, the impact on U.S. military interoperability with allies and partners and on stability, and export controls related to AI.
As the 2024 presidential election quickly approaches, AI has slowly been integrated into the process and become an active participant in the race. During the February Chicago mayoral primary, a deepfaked video of candidate Paul Vallas surfaced on the web, making him appear to approve of police brutality. People have speculated that this video impacted his campaign, and that similar videos will ultimately impact future campaigns.
An article from CNBC states that “the battle between the falsely generated content and the detection of mechanisms that try to eradicate it will surely ramp up. Using AI itself to detect and mark AI-generated content is better than retroactively fact-checking content, because it can be applied during the posting of the content and doesn’t wait until people have already absorbed - and believed - the information.” In essence, AI is being turned against itself to stop misinformation before it spreads. On a positive note, if used to deliver and detect accurate information, AI could help diverse communities obtain trustworthy information in their own language and dialect, conveying the candidates’ messages and their nuances rather than mere word-for-word accuracy.
With this in mind, awareness of human values, and of how they differ from the values embedded in AI, is vital in a fast-growing technological world. Human values are defined as “the principles, beliefs, and ideals that guide our behavior and decision-making. These values are shaped by cultural, societal, and personal factors, and they can vary greatly from person to person.” AI can come into conflict with human values by making it harder for us to make decisions and take actions independently, ultimately threatening our autonomy to act without external influence or coercion. Taking this into account, humans need to be aware of their values and the impact that AI could potentially have on them.
Being aware of the pros and cons of AI is important as the world moves forward technologically. Amid this vastness of knowledge, we need to understand what we ultimately stand for and make sure that we are not persuaded or tricked into changing our values and ethics. As we navigate AI’s evolving place in our lives, we must ensure that transparency and societal well-being remain paramount in shaping a future where AI serves as a force for positive transformation, not degeneration.
The Student Movement is the official student newspaper of Andrews University. Opinions expressed in the Student Movement are those of the authors and do not necessarily reflect the opinions of the editors, Andrews University or the Seventh-day Adventist church.