Limitations and Potential Drawbacks of AI Chatbots in the Workplace
As a partner in a private equity firm, I am often asked about emerging technology. Lately my clients seem enamored with the potential of AI chatbots. These tools leverage natural language processing and large language models to answer queries with AI-generated but very human-like text, images, and video. As an example, a user might submit the following query: "Create four viral Instagram Reels ideas for Anheuser-Busch's Bud Light beer," and in less than a minute, the AI chatbot produces a human-like response.
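For the technically curious, here is roughly what that interaction looks like under the hood. This is a minimal sketch using OpenAI's Python SDK; the model name and prompt are illustrative, and it assumes an API key is available in the environment.

```python
# Minimal sketch: submitting a query to an LLM-backed chatbot via
# OpenAI's Python SDK. Assumes OPENAI_API_KEY is set in the environment;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model works here
    messages=[
        {"role": "user",
         "content": "Create four viral Instagram Reels ideas for "
                    "Anheuser-Busch's Bud Light beer."}
    ],
)

print(response.choices[0].message.content)
```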
When I've demonstrated ChatGPT (running GPT-4) to clients in a live environment and they see the high-quality content that can be coaxed out of the tool, the reactions are not surprising:
Where can I use this incredible tool in my business?
In which disciplines will I see the biggest ROI?
What's the best chatbot?
Is it better to start with marketing or information systems?
Will implementation allow me to radically reduce my IT infrastructure?
Can this technology transform our employee review process and let us downsize HR?
As is often the case with new technology, there's a strong tendency to jump into "how can we implement?" before the question of "should we implement?" is even explored.
That said, it is tempting to think of AI chatbots as omnipotent generators of professional content, and I will be among the first to agree that at times ChatGPT feels magical. But it's not all sunshine and rainbows. While far from an exhaustive list, below are six questions and concerns that business leaders should explore before they charge headlong into the breach:
Who owns the data entered into AI chatbot queries? By its very nature, natural language processing encourages users to disclose as much data as possible as part of a query because more data often leads to more contextually relevant output. But how much data is too much? Are queries, the query output, and attached files used to train the model? Can competitors steal such data with creative queries? When does the AI chatbot become a liability due to the amount of information employees have voluntarily disclosed? At the end of the day, AI chatbot vendors need to answer hard questions about data security and how proprietary data can be safeguarded.
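One practical mitigation is to scrub obvious identifiers before a query ever leaves the building. The sketch below is a deliberately simple, regex-based illustration and not a substitute for a real data-loss-prevention tool; the patterns and the project-codename convention are assumptions about what a given firm might consider sensitive.

```python
import re

# Illustrative patterns only; a real deployment would use a proper
# data-loss-prevention (DLP) service and a firm-specific taxonomy.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),        # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                # US SSN format
    (re.compile(r"\bProject [A-Z][a-z]+\b"), "[PROJECT CODENAME]"), # hypothetical codenames
]

def scrub(query: str) -> str:
    """Redact sensitive tokens before the query is sent to a third-party chatbot."""
    for pattern, replacement in REDACTIONS:
        query = pattern.sub(replacement, query)
    return query

print(scrub("Summarize Project Falcon for jane.doe@acmecapital.example"))
# -> "Summarize [PROJECT CODENAME] for [EMAIL]"
```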
Arguably, the value of content produced by any AI chatbot is a function of how the model was trained and what data was used. Unfortunately, for the vast majority of AI chatbots commercially available today, these processes are undisclosed: even popular AI chatbots like OpenAI's ChatGPT have not published their full datasets or their detailed training methodologies. In fact, most AI chatbots devour huge amounts of undisclosed, unstructured data based on human-to-human dialogue, where "facts" can be both contradictory and misleading. Why should anyone care about data or training? Think of it this way: suppose someone who claimed to be a surgeon refused to provide proof that they finished medical school, completed a residency or a fellowship, or that they were board certified. Would you let that person operate on your child? Your mother? Of course not, because training and experience matter. Content generated by undocumented AI chatbots might be only as reliable as an anonymous Reddit post.
AI chatbots are not arbiters of truth: Few people missed the story about the New York attorney who used ChatGPT to search for precedents to support a personal injury lawsuit. When the attorney submitted his AI-chatbot-generated brief, the judge questioned the veracity of the prior cases cited. That skepticism, it turns out, was warranted: the cases had been fabricated entirely by ChatGPT. Because the data used to train most natural language models is by its nature unreliable, AI chatbots occasionally "invent" facts. How does a business account for this kind of mistake? Businesses must deploy processes in which humans review AI-generated content and independently verify the key facts it presents as true.
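What might such a review process look like in practice? The sketch below is one hypothetical shape for it: a simple gate that holds an AI-drafted document until a named human has signed off on each flagged claim. The citation, names, and claim list are invented for illustration, and reliably extracting claims from a draft is itself a hard problem this sketch does not solve.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A factual assertion extracted from an AI-generated draft."""
    text: str
    verified: bool = False
    verified_by: str = ""

@dataclass
class Draft:
    content: str
    claims: list[Claim] = field(default_factory=list)

    def approve(self) -> bool:
        """The draft ships only if every flagged claim has a human sign-off."""
        return all(c.verified for c in self.claims)

# Hypothetical workflow: a chatbot-drafted legal brief with one flagged citation.
draft = Draft(
    content="Brief citing Smith v. Jones (2018) ...",
    claims=[Claim("Smith v. Jones (2018) is a real, correctly cited case")],
)

assert not draft.approve()  # blocked until a human checks the citation
draft.claims[0].verified = True
draft.claims[0].verified_by = "associate@firm.example"
assert draft.approve()      # now cleared for filing
```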
In their current iterations, AI chatbots provide a very narrow set of capabilities: ChatGPT uses natural language processing and relies primarily on information included in the query to interpret the appropriate context to answer questions or perform tasks. When compared to a person, AI chatbots have narrow subject matter expertise and limited reasoning abilities. As such, the relevance of the content is a direct function of precise query language. When asking an AI chatbot to build content, words matter. AI chatbots also have training cutoff dates, which leave their output oblivious to recent news and market developments.
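To make "words matter" concrete, compare the two prompts below. Both are invented examples: the vague one leaves the model guessing at audience, channel, and constraints, while the precise one supplies the context the model cannot infer on its own.

```python
# Two ways to ask for the same deliverable. Both prompts are invented
# for illustration; the difference is how much context the model gets.
vague_prompt = "Write something about our new beer."

precise_prompt = (
    "You are a social media copywriter for a US beer brand. "
    "Write three 15-second Instagram Reels scripts introducing a new "
    "light lager to drinkers aged 25-34. Tone: playful, not edgy. "
    "Each script must include an on-screen text hook in the first "
    "two seconds and end with the tagline 'Brewed for the moment.'"
)
```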
The potential for harmful or inappropriate content is not zero: Content created by AI chatbots can be problematic, biased, and even toxic. A poorly worded prompt can produce objectionable and occasionally offensive content even when there was no intent to generate such a response. Operating procedure should demand that all AI-produced content be proofread and approved before it leaves the office.
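Automated screening can serve as a first pass before that human proofread. The sketch below assumes OpenAI's moderation endpoint and an API key in the environment; the escalation policy is a judgment call for each firm, and automated checks supplement rather than replace human review.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def needs_human_escalation(text: str) -> bool:
    """Flag generated content for extra scrutiny before it leaves the office.

    Uses OpenAI's moderation endpoint as a first pass; a flagged result
    routes the draft to a dedicated reviewer rather than blocking it outright.
    """
    result = client.moderations.create(input=text)
    return result.results[0].flagged

draft = "AI-generated campaign copy goes here."
if needs_human_escalation(draft):
    print("Route to compliance review before publication.")
else:
    print("Proceed to standard proofreading and approval.")
```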
Never forget that the driving motivation for most AI chatbots is commercial viability: Most AI chatbots are deployed to generate revenue and their development is driven by business incentives such as increasing adoption or driving user interaction. Producing the "best" answer for the person making the query is rarely the first priority. This is more a cautionary flag than anything else. AI chatbots are not employees of the company who can be indoctrinated, specially trained or incentivized to do the right thing. AI chatbots are tools controlled by a third party just trying to make some money.
It's not all bad news. These tools can be impressive and on occasion seemingly clairvoyant. Just remember, AI chatbots such as ChatGPT are far from infallible. They may empower a single person to produce a veritable tidal wave of content, but as with many tradeoffs, improved speed and efficiency come at a price: in this case, often content quality and data security. Before we become too reliant on AI chatbots in the workplace, we should demand significant advances in first-party data protection, robustness, transparency, and AI safety. For the foreseeable future, AI chatbots should remain just that: a tool, best controlled by thoughtful humans.