The Irresponsible AI Era: Why Are Responsible AI and Generative AI Opposites?

This article focuses on the core issues of Generative AI. We will follow a "bottom-up" approach to understand the problems with these models and why they may not responsibly lead us to the ultimate AI goal of "AGI for humanity".


More Data, More Computation, More AI: What’s the Cost?

Raise your hand if your first learning algorithm was Linear Regression 🙋

OpenAI's ChatGPT brought a revolution in the field of AI. AI is no longer just a tool that helps you gain business insights and make business decisions from some tabular data. It has gotten closer to what we always imagined AI would be! It can draft your emails, answer your questions, and even help you debug your code. How did we get to this AI era from that humble Linear Regression?

There are two ways to advance AI:

  • rely on better techniques and algorithms

  • rely on more computation, larger models, and loads of data

In the early days of AI, we worked to discover better methods, which led to the rise of the field of machine learning. Artificial Neural Networks soon caught attention, and we discovered activation functions to introduce non-linearity in ANNs and the backpropagation algorithm to help them "learn". The rise of cloud computing and GPUs, along with the availability of high volumes of data, accelerated this progress.

[Figure: the growth in AI model sizes over the years. Source: Information is Beautiful]

Deep Learning demanded more computation power and data than classical machine learning, but it did provide better results. It also reduced explainability and initiated the era of "black box" models. Today's AI era has simply continued this trend! Compared to early Deep Learning, large models give better results, require tremendous computation and data, and have reduced explainability even further. The image above shows the drastic size increase in these models over the years.


The Irresponsible AI Era

A general definition of Responsible AI is "the practice of developing and using AI in a way that is ethical, safe, and trustworthy". Here's why Generative AI contradicts Responsible AI:

  1. AI for Business: Generative AI's demand for tremendous computation and data translates directly into high financial needs. A company doing Generative AI must therefore allocate substantial funds or bring in investors, and in either case it must convince stakeholders that Generative AI will be profitable in return for the huge investment. Ethics, safety, and trust take a backseat when profit enters the picture. This is precisely the cause behind the OpenAI drama. The ultimate goal of AI is to serve humanity, not businesses, right?

  2. Lack of Explainability: It's not easy to decipher a large model's output. Yes, there are tools like BERTviz that help you visualize attention maps, and there are gradient-based approaches to understand which input tokens a prediction depends on (see the saliency sketch after this list). But are these approaches enough? Especially in sensitive areas like finance or healthcare, it's important to understand the true cause of a model's actions. If you cannot interpret, how can you trust?

  3. Lack of Safety: A large part of the Generative AI community focuses on image and video generation. Various models and platforms can generate a video from a single image and a prompt. This is concerning! Our images are publicly available on social media. So, if anyone can use my profile picture to generate a video without my consent, isn't that alarming? AI models do have guardrails, but are they sufficient to cover every possible misuse?

  4. Credentials are not enough: It's important to identify and trace the origin of AI-generated content. Companies like Meta, Google, Microsoft, Amazon, and OpenAI have joined the C2PA (Coalition for Content Provenance and Authenticity) project. Under the project, the companies attach "credentials", a form of watermark, to the output generated by their models; a credential can also keep track of every edit made to the output. But what if someone tampers with, or simply strips, these credentials? Is that impossible? Also, we are entering the SLM era: with the rise of 1-bit quantization and openly available weights, datasets, and training recipes, anyone can run models that attach no credentials at all (see the toy provenance sketch at the end of this section). Are credentials enough for digital safety? Do we need credential-free methods that can identify any given piece of AI-generated content?

  5. Data dependency: We have witnessed various lawsuits filed against companies for the "unethical" use of data in training their models. Also, even if individuals make their data (images/videos/code) publicly available, they might not like a company using that data to earn profits. If a company uses publicly available data to earn profits, is it ethical? You may also have noticed that a model that is good at generating Python code may give comparatively poor results for another programming language like Go. A primary reason likely lies in the training data: the dataset may have contained plenty of high-quality Python examples but lacked sufficient data for Go. This feels more like overfitting to the training distribution, and such heavy dependence on training data does not seem desirable.
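To make the explainability concern in point 2 concrete, here is a minimal sketch of gradient-based token attribution. It assumes the Hugging Face transformers and torch libraries and a small sentiment classifier checkpoint (an illustrative choice on my part, not the only option): the gradient norm of each input token's embedding serves as a rough saliency score.

```python
# Minimal gradient-based saliency sketch (an illustration, not a full
# interpretability method). Assumes: pip install torch transformers.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative checkpoint choice; any small fine-tuned classifier works.
name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

text = "The loan application was rejected without explanation."
inputs = tokenizer(text, return_tensors="pt")

# Look up embeddings and detach them into a leaf tensor we can differentiate.
embeds = model.get_input_embeddings()(inputs["input_ids"]).detach()
embeds.requires_grad_(True)

logits = model(inputs_embeds=embeds,
               attention_mask=inputs["attention_mask"]).logits
predicted = logits.argmax(dim=-1).item()

# Backpropagate the winning logit; each token's gradient norm is its saliency.
logits[0, predicted].backward()
saliency = embeds.grad.norm(dim=-1).squeeze(0)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in zip(tokens, saliency.tolist()):
    print(f"{token:>15s}  {score:.4f}")
```

Even with a score per token, we only see where the model looked, not why it decided. That gap is exactly what makes large models hard to trust in finance or healthcare.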

The aforementioned points contradict the notions of ethics, safety, and trust. The environmental impact of these models is altogether a different story. I do acknowledge that Generative AI has a positive side: it can help professionals increase their productivity, transform the cinema and animation industries, aid scientific discoveries, and do a lot more. However, today's Gen AI does not seem to be a responsible approach to AI.
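To see why the detachable credentials from point 4 are fragile, here is a toy provenance check using Python's standard hmac library. This is a deliberately simplified stand-in for C2PA-style manifests (which use proper certificate chains), purely to illustrate the failure mode: a credential stored alongside content can detect tampering, but it can also simply be stripped.

```python
# Toy provenance credential: an HMAC tag computed over the content bytes.
# A deliberately simplified stand-in for C2PA manifests, for illustration only.
import hashlib
import hmac

SIGNING_KEY = b"model-provider-secret"  # hypothetical provider key

def issue_credential(content: bytes) -> str:
    """Attach a tag binding this content to its (claimed) AI origin."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_credential(content: bytes, tag: str) -> bool:
    return hmac.compare_digest(issue_credential(content), tag)

image = b"...generated image bytes..."
tag = issue_credential(image)

print(verify_credential(image, tag))            # True: credential intact
print(verify_credential(image + b"edit", tag))  # False: tampering detected
# But delete the tag entirely, and nothing marks the image as AI-generated:
print(verify_credential(image, ""))             # False, and also no signal
```

This is the gap that credential-free approaches, such as watermarks baked into the generation process itself, would need to close.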


Responsible AI

Traditional machine learning was Responsible AI in the true sense. These were "white-box" models: you could interpret their predictions and make the final decision yourself, which helps you trust the model. These models did not require loads of data, and their training did not depend on publicly scraped personal data; the process was ethical. They were primarily used by companies to gain business insights and make data-driven business decisions; it was safe AI. Although not as powerful as today's Generative AI, machine learning was indeed a true example of Responsible AI. The small sketch below shows what that interpretability looks like in practice.
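Here is a minimal sketch of such a white-box model using scikit-learn. The feature names and numbers are hypothetical, purely for illustration; the point is that the entire decision rule fits in a couple of printed lines.

```python
# A "white-box" model: every prediction is a transparent weighted sum.
# The business data below is hypothetical, purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: [ad_spend (k$), store_visits]; target: monthly sales (k$).
X = np.array([[10, 200], [15, 240], [20, 310], [25, 330], [30, 400]])
y = np.array([120, 150, 200, 215, 260])

model = LinearRegression().fit(X, y)

# The full decision rule is readable by a human:
for feature, coef in zip(["ad_spend", "store_visits"], model.coef_):
    print(f"{feature}: {coef:+.3f}")
print(f"intercept: {model.intercept_:+.3f}")

# prediction = intercept + coef_1 * ad_spend + coef_2 * store_visits
print("forecast:", model.predict(np.array([[22, 300]])))
```

A stakeholder can audit those two coefficients directly; there is no need for post-hoc attribution tools like the saliency sketch above.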

Deep Learning and Generative AI provided better results than traditional machine learning, and we weighed those better results more heavily than ethics, safety, and trust. In today's AI era, it's impossible to simply shelve Gen AI. The best way to make it safe, ethical, and trustworthy is to augment it with new technologies and methods. Quantum Computing could be one of those technologies! So let's get back to the first way of advancing AI and rely on better methods.

Quantum Computing has multiple underlying physical architectures: superconducting qubits, trapped-ion qubits, photonic qubits, and more. The technology is still in its infancy, but these physical systems are indeed explainable: every gate corresponds to a well-defined physical transformation. The promise of Quantum Computing lies in faster and more efficient computation. Can we use and develop this technology to make Gen AI explainable, safe, and ethical? A tiny illustration of that transparency follows.
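Here is a two-qubit Bell-state circuit written with Qiskit (one of several quantum SDKs; an illustrative choice on my part). At this scale the full quantum state is exactly computable and every gate maps to a well-defined physical operation, which is precisely the kind of transparency black-box models lack.

```python
# A minimal Bell-state circuit sketch using Qiskit (illustrative SDK choice).
# Assumes: pip install qiskit.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)      # Hadamard: puts qubit 0 into an equal superposition
qc.cx(0, 1)  # CNOT: entangles qubit 1 with qubit 0

# At this scale the exact state is inspectable: (|00> + |11>) / sqrt(2)
print(Statevector.from_instruction(qc))
print(qc.draw())  # human-readable circuit diagram
```

Whether this kind of low-level transparency can be scaled up to explain or safeguard large generative models is, of course, still an open research question.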

Besides Quantum Computing, there are numerous other unconventional AI techniques that could be explored. While developing these technologies, it's important to consider ethics, safety and trust and not just prioritize better results.


If you found the article thought-provoking, I would love to hear your perspective in the comments! If you would like to discuss this idea with me, let's have a chat!

I'm a Quantum Machine Learning researcher, podcast host, quantum educator, and YouTuber. You can find my podcasts on YouTube here. In these podcasts, I discuss the effects of Quantum Computing on Finance, AI, Software Engineering, and a lot more with quantum computing experts and professionals.

Lalith Kanna Ramakrishnan

Pursuing B.Tech in Artificial Intelligence and Data Science at Dr. N.G.P. Institute of Technology (2023-2027)

Sir, I have had a doubt for a long time: can AI become as creative as humans? Generative AIs like Midjourney, ChatGPT, Blackbox, Copilot, and even the many public GPTs inside ChatGPT are now good at generating various creative content in a decent way. When ChatGPT launched in 2022, I never thought it would reach this level within two years, and it may get even better since models like o1 and Sora haven't fully launched yet. What I think is this: since companies like OpenAI use user interactions to train their models, if a creative person shares his creative ideas and thoughts with ChatGPT, is it possible that ChatGPT can learn from him and think like him in the future?
