Generative AI tools have captured the attention of business leaders across industries for their remarkable ability to create coherent text with relative ease. These tools promise to transform how we work by accelerating and improving communication. Yet, like any technology, gen AI tools are not immune to imperfections. Sometimes they produce outputs that fall short of the mark.

During a recent Grammarly Business webinar, Timo Mertens, Head of ML and NLP Products at Grammarly, and Knar Hovakimyan, Engineering Manager at Grammarly, discussed the origins of problematic gen AI outputs and how Grammarly mitigates them. 

Why do gen AI tools sometimes create harmful or inaccurate outputs?

On the surface, gen AI tools seem to possess an uncanny ability to create effective first-draft, or even final-draft, copy. However, there are instances when gen AI delivers less-than-ideal outputs. When these outputs go undetected or unedited by the user, the repercussions can be significant: damaged brand reputation, delays, inefficiencies, and even physical harm. These undesirable outputs are often grouped into the following categories:

  • Hallucinations. This occurs when a model generates text that includes fabricated information with no basis in the user’s request. For example, if someone is writing an email about a project update, the model might independently come up with project names that don’t exist.
  • Biases. Explicit biases occur when text includes overt references to stereotypes or other biased information. Implicit biases, which are often more subtle, occur when underlying bias in the model leads to generated text that looks different depending on demographic information in the user’s prompt. For example, a model might generate a different output for a prompt that includes a man’s name versus a woman’s name (see the sketch after this list).
  • Inaccurate content. Sometimes models can create content that is inaccurate or out of date. For example, a model might generate text with outdated due dates for a particular project. 
  • Harmful content. This is content that could result in direct harm, such as incorrect or incomplete medical advice or faulty instructions for performing a high-risk task. 
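To make the implicit-bias example concrete, here is a minimal sketch of a counterfactual “name swap” probe. Everything in it (the `generate` stand-in, the prompt template, the name pairs, the similarity threshold) is a hypothetical placeholder rather than Grammarly’s tooling; the idea is simply to compare outputs for prompts that differ only in a demographic cue.

```python
from difflib import SequenceMatcher

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real model call; swap in your provider's API.
    return f"[model output for: {prompt}]"

TEMPLATE = "Write a short performance review for {name}, a software engineer."
NAME_PAIRS = [("James", "Emily"), ("Miguel", "Wei")]

for name_a, name_b in NAME_PAIRS:
    out_a = generate(TEMPLATE.format(name=name_a))
    out_b = generate(TEMPLATE.format(name=name_b))
    # Prompts identical except for the name should yield similar text;
    # a low similarity score flags the pair for human review.
    similarity = SequenceMatcher(None, out_a, out_b).ratio()
    print(f"({name_a}, {name_b}): similarity={similarity:.2f}")
```

A string-similarity ratio is only a crude signal; real audits pair probes like this with human review and trained classifiers.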

So what causes these problems?

At a high level, the issue is that language models are trained on imperfect, human-written text. “Imagine a random collection of text gathered from all over the internet,” Hovakimyan said. “Think about all the different biases, incorrect information, conflicting viewpoints, and offensive content that might appear in these texts.” The language models used to power generative AI learn to mirror language from text across the internet, so any problems in the text across this vast dataset could also ultimately be reflected in the generated text.

Another core challenge is the complexity of communication. Communication is multilayered and informed by the identity of the writer, who the audience is, and the context in which the text is being communicated. For example, something that is considered appropriate in one scenario might be perceived as inappropriate in another. 

These nuances are challenging for large language models to get right; however, research teams, including those at Grammarly, constantly observe these pitfalls and develop strategies to mitigate them.

What can businesses do to mitigate harmful, biased, and sensitive content with generative AI?

If your first instinct is to avoid generative AI altogether, you’re likely putting your organization at even greater risk. 

A recent study by Forrester Consulting, commissioned by Grammarly, found that almost 70% of respondents said they already used gen AI for most or all of their writing and editing tasks. This means that most employees are making independent choices to leverage gen AI tools and capabilities without company guidance. When employees make their own AI selections, they could put the organization at risk if the tools don’t align with the company’s security, privacy, or ethical standards.

Mertens outlined the three common choices companies have when it comes to implementing gen AI and the risks and benefits associated with each option.  

Option 1: Allow employees to choose their own gen AI chat interface

Allowing employees to choose their own gen AI tools makes it challenging to control what information flows to these different interfaces. This opens up serious privacy and security concerns for the business. It’s also difficult to guide employees on responsible use when various tools are being used across the company. 

This approach also limits the organization’s ability to harness the potential of gen AI for enhancing workflows. Generic chat interfaces do not possess enterprise capabilities, such as understanding the context of the business or brand style guidelines. 

Option 2: Build gen AI tools internally

While there have been incredible breakthroughs in open-source models over the past year, deploying a proprietary gen AI model still requires significant human and financial resources dedicated to building and maintaining it.

In-house experts or hired contractors would need to decide on a multitude of important considerations, including but not limited to how big the model will be, the most critical tasks that the model should be good at, how to counteract bias and sensitivity, and how to measure quality and improve the model over time. The list goes on, and the costs associated with building the model—not to mention maintaining it—could get out of hand quickly. 

Another critical limitation of in-house models is their limited capacity to leverage cutting-edge advancements. Generative AI is evolving rapidly, and deploying each new capability in-house would inevitably take time.

Option 3: Use a dedicated communication assistant, such as Grammarly Business

Dedicated communication assistants with generative AI capabilities, including Grammarly Business, are designed for enterprise needs. They work directly where users are writing and understand the context of the business to deliver relevant, on-brand communication across every workflow. They are also built to enterprise-grade security and privacy standards and have entire teams dedicated to responsible AI development.

Because Grammarly is at the forefront of AI advancements, organizations can also expect state-of-the-art capabilities and well-designed user experiences. Grammarly’s products are informed by specialized, expert teams that think about the many nuanced layers and complexities of communication across a variety of contexts and enterprise demands. 

Grammarly’s approach to responsible AI

Grammarly is guided by a core belief that AI should augment and empower people to reach their potential. This philosophy underpins our product development process and informs our responsible AI framework (you can read more about that framework here). 

Below are a few examples of Grammarly’s commitment to responsible AI development.

  • A team of linguists and engineers works full-time on responsible AI. These experts actively participate in research and publish papers to inform the industry about new approaches to reducing bias and harmful content.
  • Grammarly’s Responsible AI team is involved in the product development lifecycle for every release and scrutinizes new functionality for its potential to generate harmful, inaccurate, or biased content. 
  • Linguists, researchers, and engineers also evaluate the performance of large language models against defined metrics to understand quality and the risk of undesirable behaviors (see the sketch after this list). We use only the models that have the best performance or rely on our high-quality in-house models.
  • Grammarly prioritizes user needs during feature development, taking into account communication goals, context, and individual requirements.
  • Grammarly has best-in-class enterprise security and privacy standards. We never sell customers’ data, we don’t allow third parties to train on user data, and users have full control of their data, retaining all rights to their text.
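To illustrate what evaluating models “against defined metrics” can look like in its simplest form, here is a hedged sketch of a model-comparison harness. Every name in it (`MODELS`, `generate`, `flags_harm`, the blocklist) is a hypothetical placeholder, not Grammarly’s actual evaluation stack; the point is the shape of the loop: run each candidate model over a fixed test set, score the outputs, and compare.

```python
from statistics import mean

TEST_PROMPTS = [
    "Summarize this project update for the team.",
    "Draft an email declining a vendor proposal.",
]

# Toy stand-in for a real harm classifier; production systems use trained
# detectors and human review, not keyword lists.
BLOCKLIST = ("guaranteed cure", "cannot fail")

def generate(model: str, prompt: str) -> str:
    # Hypothetical model call; replace with real API calls per candidate.
    return f"[{model} output for: {prompt}]"

def flags_harm(text: str) -> bool:
    return any(term in text.lower() for term in BLOCKLIST)

def safe_rate(model: str) -> float:
    # Fraction of test prompts whose output passes the harm check.
    return mean(0.0 if flags_harm(generate(model, p)) else 1.0 for p in TEST_PROMPTS)

MODELS = ["candidate-a", "candidate-b"]
best = max(MODELS, key=safe_rate)
print(f"Best candidate by safe-output rate: {best}")
```

In practice, a harness like this would track many metrics at once (accuracy, bias, sensitivity) over far larger test sets, but the compare-and-select loop stays the same.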

Confidently bring gen AI into your organization 

Gen AI tools offer remarkable potential but come with complexities. To navigate these challenges and unlock the benefits of gen AI, embracing a responsible solution is imperative. Grammarly Business is committed to responsible AI development and equipped with enterprise-grade security. 

To learn more about Grammarly’s generative AI solution and how you can bring it safely into your organization, click here to get in touch with a product expert.
