The Business of IT Blog

9 Reasons Why ChatGPT is Not “Enterprise Ready”…Yet

Madhav Mehra

ChatGPT’s ability to converse on seemingly everything has taken the world by storm since its launch in November 2022. Social media is loving it. News media outlets are reporting on it. Educators are worried about it. But is ChatGPT ready to securely support and deliver enterprise applications? Read on to see how BMC is planning to support large language model (LLM)-based approaches for self-service and other enterprise use cases.

Zero shot falls short of enterprise needs

First, some background. Today's chatbot deployments require initial and ongoing training so that the natural language processing (NLP) models can understand users' intents and extract "entities."

For example, if I type in, "I need guest wifi for 2 at Dallas tomorrow," the training data that we have already entered is used to classify:

  • The intent is "order guest wifi"
  • The guest-count entity is "2"
  • The location entity is "Dallas"
  • The date range is the next day (e.g., 2/7/23 to 2/7/23)
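The target structure above can be sketched as the output of a toy classifier. This is a minimal illustration only: a real chatbot would use a trained NLU model rather than hard-coded patterns, and the intent and entity names here are assumptions for this example.

```python
import re
from datetime import date, timedelta

def classify(utterance: str, today: date) -> dict:
    """Toy intent/entity extraction for the guest-wifi example.
    A trained NLU model would learn these patterns from training data;
    this sketch hard-codes them just to show the target structure."""
    result = {"intent": None, "entities": {}}
    if "guest wifi" in utterance.lower():
        result["intent"] = "order guest wifi"
    count = re.search(r"for (\d+)", utterance)
    if count:
        result["entities"]["guest_count"] = int(count.group(1))
    location = re.search(r"\bat (\w+)", utterance)
    if location:
        result["entities"]["location"] = location.group(1)
    if "tomorrow" in utterance.lower():
        next_day = today + timedelta(days=1)
        result["entities"]["date_range"] = (next_day, next_day)
    return result

print(classify("I need guest wifi for 2 at Dallas tomorrow", date(2023, 2, 6)))
```

Running this prints the intent "order guest wifi" with the guest count, location, and next-day date range filled in, which is exactly the structured result the trained NLP model is expected to produce.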

LLMs such as ChatGPT, Macaw, and others offer a "zero-shot" approach, where minimal to no training data is needed to achieve the same classification result. This is attractive for customers, who can theoretically go live faster and spend less time tweaking and maintaining training data. However, LLMs are trained on vastly more data than a mid-sized enterprise's knowledge base contains, so in some cases we still need to override the general data to deliver customer-specific responses.
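In the zero-shot approach, the candidate intents are supplied in the prompt itself rather than learned from labeled examples. A minimal sketch of such a prompt follows; the intent labels are assumptions from the example above, and the actual LLM call is omitted since any model API could be used.

```python
# Zero-shot classification: the model picks among labels it was never
# fine-tuned on, guided only by the prompt. Intent labels are illustrative.
INTENTS = ["order guest wifi", "reset password", "request laptop"]

def zero_shot_prompt(utterance: str) -> str:
    """Build a prompt asking an LLM to classify an utterance with no
    task-specific training data (the 'zero-shot' setup)."""
    labels = ", ".join(f'"{intent}"' for intent in INTENTS)
    return (
        f"Classify the user request into exactly one of these intents: {labels}.\n"
        "Also extract any entities (guest count, location, dates) as JSON.\n"
        f'User request: "{utterance}"\n'
        "Answer with JSON only."
    )

print(zero_shot_prompt("I need guest wifi for 2 at Dallas tomorrow"))
```

Compared with the trained-NLP approach, the only "configuration" here is the label list in the prompt, which is why zero-shot deployments can go live faster.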

For example, while the general answer to, “How do I install Microsoft Office?” might be to visit Microsoft 365 and download installers, an enterprise might set up its laptops to receive downloads and updates automatically via a provisioning model. Or, while some clients may have hardware and software support queues, others may want Mac versus Windows support, or some other distinction. This level of customization on top of the LLM data is essential for enterprises.
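One way to layer that customization on top of an LLM is to check an enterprise knowledge base first and fall back to the model's general answer only when no customer-specific entry exists. The sketch below assumes a simple dictionary lookup; the function name and knowledge-base contents are illustrative, not an actual BMC implementation.

```python
# Hypothetical override layer: prefer a customer-specific answer from the
# enterprise knowledge base; otherwise fall back to the LLM's general answer.
ENTERPRISE_KB = {
    "how do i install microsoft office?":
        "Office is provisioned automatically on your laptop; "
        "no manual download is needed.",
}

def answer(question: str, general_llm_answer: str) -> str:
    """Return the enterprise-specific answer if one exists,
    else the general LLM answer."""
    return ENTERPRISE_KB.get(question.strip().lower(), general_llm_answer)

print(answer("How do I install Microsoft Office?",
             "Visit Microsoft 365 and download the installers."))
```

The same lookup-then-fallback pattern extends naturally to routing distinctions such as separate Mac and Windows support queues.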

ChatGPT’s path to enterprise deployment

ChatGPT is fun and a great demonstration of artificial intelligence (AI)'s potential. However, it needs to go further before it's suitable for enterprise chatbot use. Here's why:

  1. ChatGPT is a closed model, and it relies on data collected up through 2021. Enterprises require up-to-date information.
  2. ChatGPT cannot yet be used or integrated with the internal data sources of an enterprise. As such, ChatGPT cannot provide organization-specific knowledge such as, “Where do I download the VPN client?”
  3. Customers have no control over the language in the response since ChatGPT writes its own answers, which enterprises can’t update or style.
  4. There's no dialog management for turn-by-turn conversations that end in fulfillment by a backend API. For example, enterprise chatbots need to take a user through a service request like "Order a phone" > "Which phone do you want?" > "Which model do you need?" > "Please confirm" > "Here is a request number."
  5. ChatGPT's new commercial tier is limited to faster response times and general availability; enterprise requirements such as data segregation, security, and uptime are not yet covered.
  6. ChatGPT confidently fabricates incorrect and untruthful answers at an alarming rate.
  7. There’s no way to constrain topics. Customers may want their IT/sales/HR chatbots to stick to their domains and not chat about politics, art, or other random topics.
  8. There’s no clarity on ChatGPT’s data residency, security, or anonymization—all big concerns for an enterprise.
  9. OpenAI’s founders and leaders have publicly advised that ChatGPT is not ready for production use.
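The turn-by-turn service request in point 4 can be sketched as a simple slot-filling state machine: the dialog manager asks for each missing slot in order, then confirms and hands off to a backend API. The slot names and the fulfillment stub below are illustrative assumptions, not an actual product API.

```python
# Minimal dialog-management sketch for the "Order a phone" flow:
# a fixed sequence of slots to fill, then confirmation and fulfillment.
SLOTS = ["phone", "model"]
PROMPTS = {
    "phone": "Which phone do you want?",
    "model": "Which model do you need?",
}

def next_turn(filled: dict) -> str:
    """Ask for the first unfilled slot; confirm once all slots are filled."""
    for slot in SLOTS:
        if slot not in filled:
            return PROMPTS[slot]
    return f"Please confirm: {filled['phone']} {filled['model']}"

def fulfill(filled: dict) -> str:
    """Stub for the backend API call that creates the service request."""
    return "Here is a request number: REQ-0001"

state = {}
print(next_turn(state))          # Which phone do you want?
state["phone"] = "iPhone"
print(next_turn(state))          # Which model do you need?
state["model"] = "14 Pro"
print(next_turn(state))          # Please confirm: iPhone 14 Pro
print(fulfill(state))            # Here is a request number: REQ-0001
```

ChatGPT alone provides no equivalent of this state tracking or backend handoff, which is why enterprise chatbots still need a dialog-management layer around any LLM.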

Other enterprise LLMs are available now

BMC has already deployed a customized LLM for summarizing AIOps situations in BMC Helix Operations Management. We are also developing other use cases, such as extracting useful information from logs and tickets to answer user queries. This is possible because we can configure and prioritize the LLM to return answers from the enterprise knowledge base. This compares favorably with ChatGPT, which has no access to internal websites or knowledge.

The Autonomous Digital Enterprise moves closer

Here at BMC, we’re focused on the future, where enterprises can adapt to changes and thrive amid rapid transformation. LLMs are one of the tools at our disposal, and we are actively experimenting with them to reliably, securely, and transparently deliver accurate answers to employees. ChatGPT provides a fun experimental window for personal use at this early phase, especially for content generation topics. Solutions like BMC Helix Virtual Agent are a better fit for enterprise needs today. We will keep you posted on our progress with LLMs and other AI approaches, and meanwhile, we can all marvel at the progress made by our worldwide community of researchers.

Try BMC Helix Knowledge Management Trial for Free

BMC Helix Knowledge Management enables Customer Service and Support organizations to create excellent agent and customer experiences by building and sharing knowledge across channels.


These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.

See an error or have a suggestion? Please let us know by emailing blogs@bmc.com.

BMC Brings the A-Game

BMC works with 86% of the Forbes Global 50 and customers and partners around the world to create their future. With our history of innovation, industry-leading automation, operations, and service management solutions, combined with unmatched flexibility, we help organizations free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead.
Learn more about BMC ›

About the author

Madhav Mehra

Madhav Mehra leads virtual agent and live chat products for BMC. He has 7+ years of experience deploying conversational AI across commerce, banking, IT, and HR use cases.