GLTR: A New Method To Detect Computer-Generated Language

  • The new statistical method can detect AI-generated content. 
  • It works by identifying text that is too predictable rather than just flagging errors. 

Over the past decade, the natural language processing community has witnessed the growth of increasingly large and capable language models.

In an era of artificial intelligence and deep neural networks capable of producing human-like language, researchers at Harvard University and IBM Research have developed a statistical method to detect computer-generated text.

They have built a publicly available interactive tool to differentiate natural human language from machine-generated text. The objective is to give people enough information to make an informed decision about what is fake and what is real.

Artificial intelligence models are usually trained on millions of texts taken from the web. They predict which words most often follow one another in order to mimic human language. For instance, the word “You” is statistically most likely to be followed by words such as “were”, “have”, and “are”.
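This next-word scoring can be sketched with a toy model. The words and probabilities below are invented for illustration; a real model such as GPT-2 derives them from millions of training documents:

```python
# Hypothetical next-word probabilities for the context "You".
# These numbers are made up for illustration only.
next_word_probs = {
    "You": {"are": 0.30, "have": 0.22, "were": 0.15, "banana": 0.0001},
}

def rank_candidates(context):
    """Return candidate next words, most probable first."""
    probs = next_word_probs[context]
    return sorted(probs, key=probs.get, reverse=True)

print(rank_candidates("You"))
# → ['are', 'have', 'were', 'banana']
```

A detection tool inverts this view: instead of generating the most probable word, it asks how probable the word that actually appears was.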

Using this insight, the researchers built a tool that detects text that is too predictable rather than flagging errors. It enables AI and humans to work together to identify machine-generated language.

How It Works

The new technique, named Giant Language model Test Room (GLTR), is built on top of one of the largest publicly available language models, GPT-2, which was trained on text from about 45 million web pages.

For any textual input, GLTR can observe what GPT-2 would have predicted at each position, and the method performs well against text produced by GPT-2 and many other models.

GLTR is a visual forensic tool for identifying automatically generated text. It shows three different histograms aggregating information over the whole text.

Reference: The Harvard Gazette | GitHub

Just enter a paragraph into the tool and it highlights every word in one of four colors, each denoting how predictable the word is given the words that precede it. Purple means the word is not predictable; red, slightly predictable; yellow, moderately predictable; and green marks highly predictable words.
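The four buckets correspond to the rank of each word in the model's predicted distribution. A minimal sketch, using the top-10/top-100/top-1000 thresholds described in the GLTR paper:

```python
def color_for_rank(rank):
    """Map a word's rank under the model's predicted next-word
    distribution to a GLTR-style color bucket."""
    if rank <= 10:
        return "green"    # highly predictable
    if rank <= 100:
        return "yellow"   # moderately predictable
    if rank <= 1000:
        return "red"      # slightly predictable
    return "purple"       # not predictable

# Example: hypothetical ranks for the five words of a sentence
ranks = [3, 57, 1, 812, 4021]
print([color_for_rank(r) for r in ranks])
# → ['green', 'yellow', 'green', 'red', 'purple']
```

A paragraph dominated by green and yellow is suspiciously easy for the model to predict, which is typical of machine-generated text.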

This is what a machine-generated paragraph looks like:

The first histogram shows how many words from each category appear in the paragraph. The second shows the ratio between the probability of the top predicted word and the probability of the word that actually follows. The third shows the distribution of the prediction entropies.

Of course, the uncertainty will be higher for human-written texts, especially research papers and academic writing. This is what the abstract of a research paper (on EAGLE galaxies) looks like:


The research team also tested the new tool with a group of computer science graduates. On their own, the students detected 50% of computer-generated paragraphs; with the help of the tool, they identified 72%. The rate could improve further with a little training on the system.

Written by
Varun Kumar

Varun Kumar is an experienced science and technology journalist interested in machines, AI, and space exploration. He received a Master's degree in computer science from Indraprastha University. To find out what his latest project is, feel free to directly email him at [email protected] 
