Using AI Ethically: Can We Learn from Our Past?

When technology disrupts human patterns of behavior, the technology often moves faster than people can work out its implications. We saw this with the internet, broadband, and cloud computing. Now it's happening again with generative artificial intelligence (AI).

The numbers tell a compelling story. In the two years since ChatGPT launched, 72% of organizations have come to use generative AI regularly. More startling, between 80% and 92% of students say they use generative AI in their work, according to surveys by companies like Chegg and the Digital Education Council. This adoption rate is faster than what we saw with personal computers or the internet at similar points in their development. Bloomberg Intelligence has estimated that generative AI will add up to $1.4 trillion in value over ten years. It is clear why the race is on.

But here is the catch: while we are racing to adopt this powerful technology, we're not always thinking carefully about how to use it well.

Technology Moves Faster Than Us

History shows us that when breakthrough technologies appear, people rush to find ways to make money from them before fully understanding the consequences. The internet brought us amazing connectivity, but also cybercrime and privacy concerns. Social media reconnected us with old friends, but it also created mental health challenges and enabled the spread of misinformation.

Generative AI follows this same pattern. According to Harvard Business Review research, people are using AI for everything from writing emails to creating art, with use cases split almost evenly between personal and business needs (Harvard Business Review, 2025). In 2025, HBR found that the number one application of generative AI is personal companionship and therapy.

Companies are implementing AI solutions in under three months in some cases, driven by the potential for quick returns. The pressure to move fast is real: companies that invested early in generative AI are seeing returns of $3.70 for every dollar invested (AmplifAI, 2025). But this rush to capitalize creates gaps where ethical considerations get overlooked. Closing those gaps requires judgment, transparency, and better guardrails so that we can all make informed decisions and protect ourselves from bad actors.

Answers Over Learning

A recent survey found that 56% of college students have used AI on assignments or exams (BestColleges, 2023). Colleges were quick to create rules limiting the use of ChatGPT on papers, enforced by checkers that estimate the percentage of AI-generated content. This bandage does not get to the heart of the problem. While research shows that overall cheating rates haven't increased dramatically since ChatGPT arrived, staying around 60-70% as they have for years (Stanford Graduate School of Education, 2024), the way students approach learning is changing.
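For illustration, here is a minimal Python sketch of the threshold policy such checkers enforce; `detector` is a hypothetical stand-in for a third-party scoring service, not any specific vendor's API. The sketch also shows why the bandage is blunt: a single percentage says nothing about how the AI was actually used.

```python
from typing import Callable

def flag_submission(
    text: str,
    detector: Callable[[str], float],
    threshold: float = 0.30,
) -> bool:
    """Flag a paper when the detector's estimated AI-generated share exceeds the threshold.

    The policy's bluntness is the point: a single percentage cannot distinguish
    a student who used AI to brainstorm an outline from one who pasted in a
    finished essay, and detectors carry nontrivial false-positive rates.
    """
    return detector(text) > threshold

if __name__ == "__main__":
    # Stand-in detector that always reports an estimated 25% AI-generated share.
    stub_detector = lambda _text: 0.25
    print(flag_submission("student essay text...", stub_detector))  # False: under the 30% bar
```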

What happens when students get used to getting quick answers instead of working through difficult problems? In the U.S., the public education system is built around outcomes. For kids, this means getting to the answer so that they have free time for other pursuits. The journey of learning takes a back seat (or maybe even a ninth-row one). Research on human-AI decision making shows that humans need to develop their own judgment capabilities to properly evaluate AI outputs (Bearman & Ajjawi, 2024). When material is hard to learn, that struggle helps students develop critical thinking skills and judgment. Without the struggle, students shortchange the process, and they also miss out on secondary skills such as determination, patience, and experience. Elementary school students especially lack the patience to work through challenging experiences. Fast-forward 20 years: when these kids enter the workforce, fast and easy will be the dominant desire, diminishing individual judgment and reducing the opportunity for deep learning.

This matters because good judgment comes from experience, including the experience of making mistakes and learning from them. When AI provides the answers too easily, students miss opportunities to develop the reasoning skills they will need as adults.

When Rules Can't Keep Up

Governments and institutions typically move slowly when creating new policies, so the rules often lag behind the technology. In 2023, only 21% of organizations using AI had established policies governing employees' use of generative AI in their work, though that share is rising in 2025. Still, advances in technology are so rapid that the rules need to start with the tech giants themselves.

Google recently launched Veo 3, the latest version of its video generation model, which produces remarkable video from nothing more than prompts. These videos can include ambient noise, backgrounds, and even dialogue. In the New York Times article "A.I. Videos Have Never Been Better. Can You Tell What’s Real?" by Stuart A. Thompson, the Times presented ten videos that were either real or generated; I failed to identify three of the generated ones. AI still struggles to generate certain situations convincingly, but it is easy to imagine those “tells” being fixed soon. Tech leaders like Google need to create obvious markings so that this technology cannot be applied for nefarious purposes; otherwise, bad actors will take advantage of it to spread disinformation.
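To make "obvious markings" concrete, here is a minimal Python sketch of the naive version of the idea: writing a provenance declaration alongside a generated file. Everything here (`write_provenance_sidecar`, the label fields) is a hypothetical illustration, not any vendor's actual mechanism; real schemes, such as watermarks embedded imperceptibly in the media itself or cryptographically signed content credentials, are far harder to strip than a sidecar file.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_provenance_sidecar(media_path: str, model_name: str) -> Path:
    """Write a sidecar JSON file declaring that a media file is AI-generated.

    The label includes a SHA-256 hash of the file so the declaration can be
    checked against the content it describes. A sidecar file is trivially
    stripped, which is exactly why robust schemes embed the mark in the
    media itself.
    """
    media = Path(media_path)
    label = {
        "ai_generated": True,
        "model": model_name,
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(media.read_bytes()).hexdigest(),
    }
    sidecar = media.with_name(media.name + ".provenance.json")
    sidecar.write_text(json.dumps(label, indent=2))
    return sidecar
```

In this sketch, a generator would call the function immediately after rendering a video, and a verifier would recompute the hash to confirm the label still matches the file it claims to describe.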

The Key Elements of Ethical AI Use

When considering how to use AI responsibly, several key principles should guide our decisions:

  • Augment Instead of Replacing: AI can be an excellent partner, but it shouldn’t do the heavy lifting in areas that need human interpretation. AI has yet to demonstrate an ability to process nuance, and it lacks the lived experience that shapes judgment. AI should help us think, not think for us.
  • Privacy and Security: AI systems often require access to personal or sensitive data. We must ensure this information is protected and used only for intended purposes. We need to make the intended use clear and easy to access.
  • Transparency: People should know when the content in front of them is AI-generated and when they're interacting with an AI. Hidden AI use creates trust problems and can mislead people.
  • Bias: AI systems can amplify existing biases in their training data. As Harvard's Michael Sandel notes, AI "not only replicates human biases, but it also confers on these biases a kind of scientific credibility" (Harvard Gazette, 2024). We need to actively identify and address these issues.
  • Trust and Verification: We should verify AI outputs rather than blindly accepting them. Research shows that people tend to rely on AI for problems where the AI is more accurate and on their own judgment where it is less accurate (PMC, 2024). We all need to be more skeptical, more curious, and more willing to challenge what we see.
  • Human In the Loop: The most important principle is that humans must remain in control of important decisions. Human judgment is indispensable in decisions about people, aspirations, and levels of risk. AI should augment human capabilities, not replace human judgment entirely. (A minimal sketch of such a gate follows this list.)
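To make the human-in-the-loop principle concrete, here is a minimal Python sketch of an escalation gate. The names (`AiRecommendation`, `decide`, `ask_human`) and the confidence threshold are illustrative assumptions, not any production system's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AiRecommendation:
    decision: str      # e.g. "approve_loan"
    confidence: float  # model-reported confidence, 0.0-1.0
    rationale: str

def decide(
    rec: AiRecommendation,
    is_high_stakes: bool,
    ask_human: Callable[[AiRecommendation], str],
    confidence_floor: float = 0.90,
) -> str:
    """Return a final decision, escalating to a person whenever the call is
    high-stakes or the model's own reported confidence is low.

    The human reviewer sees the AI's rationale and can accept, amend, or
    reject it; the AI never decides alone on matters that affect people.
    """
    if is_high_stakes or rec.confidence < confidence_floor:
        return ask_human(rec)
    return rec.decision

if __name__ == "__main__":
    rec = AiRecommendation("approve_loan", confidence=0.97, rationale="stable income, low debt")
    # High-stakes calls always route to a person, regardless of model confidence.
    print(decide(rec, is_high_stakes=True, ask_human=lambda r: "needs_human_review"))
```

The design choice worth noting: the gate escalates on either condition, so a high-confidence model never bypasses a person on a high-stakes call.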

Moving Forward Responsibly

Now, this perspective is not meant to deter organizations or individuals from using generative AI. Personally, I use it every day and continue to be amazed by its capabilities. At the same time, I'm cautious about how and when I use it. In the process of writing this article, I found summarizations that misrepresented the original data. Yet the benefits are so vast that our skills would become outdated if we did not learn these tools. Instead, we need to be more intentional about how we adopt and apply these tools and the output they create. That means taking time to consider the ethical implications before implementing AI solutions.

Organizations that are actively creating policies and processes for responsible use need leaders and users alike who can learn rapidly and adapt quickly. That means establishing clear policies and training people to use AI responsibly. Most importantly, it means maintaining human oversight and judgment even as we rely more heavily on automated systems. Doing so will better position us to benefit from AI's advantages while avoiding its pitfalls.
