Q. Can I use ChatGPT or other AI for writing or researching blog posts, video scripts, or other content for my company’s website?

A. The better question is, “should you?” The answer is, probably not. Here’s why.


Writing

Use caution when using AI to draft your articles

One problem with AI-generated writing is that it tends to be simplistic and repetitive, like an essay by a fifth-grader desperately trying to expand a handful of facts into multiple pages. (“General Napoleon was the Emperor of France. When it comes to French emperors, Napoleon was one. Also he was a general.”)

An AI-generated article might sound pretty good on first read, but it’s not likely to deliver the depth and thoughtfulness that reflect your expertise, and it’s not likely to bring any new insights to your topic. Because generative AI produces text from patterns in the existing content it was trained on, what you get back will be a reflection of whatever is already out there.

For example, let’s say that you want to publish a post about the impact of Napoleon on European legal reform. Your AI-generated article might be an amalgam of all the widely accepted scholarship about the Napoleonic Code and how it was adopted by legal systems across Europe. That’s accurate, but not ground-breaking. Or it might include a lot of random facts about Napoleon, such as his shoe size and the names of Josephine’s dogs. Still accurate, but not relevant.

Or it might say that Napoleon died on Elba, which is widely believed, but incorrect (he died on the island of Saint Helena). Or the AI might have dredged up some outlier research by Professor Ivy Got-Tenure at Whatsamatta U, and you wind up publishing an article that says that Napoleon died on Maui with a coconut in one hand and a pineapple in the other. This would certainly raise eyebrows, but not your status as a thought leader.

The solution: If you do use AI-generated articles as a starting point, review them carefully. Edit and expand the text as needed. Make sure that you’re saying something fresh that is relevant to your audience, with language and style that reflect your personality, your knowledge, and your company’s brand.

Researching

Use the same caution when using AI for research — times ten.

In their current iteration, artificial intelligence systems like ChatGPT have a mental health problem: they hallucinate. Seriously. They imagine information that is not there, present it as true, and deliver it in ways that can be pretty convincing. This phenomenon is literally called “hallucinating” by the developers of ChatGPT.

The consequences can be career-killing. In response to a lawyer’s query, ChatGPT produced legal cases that did not exist and even generated PDFs of the “documents” filed in those cases. These citations and PDFs were submitted to a court by human attorneys. The fabricated documents used convincing legal language, correct formatting, and plausible case numbers, and even included “handwritten” margin notes of the kind commonly found in real filings. It was astonishing detail for documents that were completely fictional.

One reason the lawyers were found out is that the opposing counsel, experts in the relevant area of law, did not recognize the arguments ChatGPT had supplied and could not locate the cited cases. The other detail that gave away the game was that the AI-generated documents named real court districts and real judges, but the judges and the court districts did not match.

Unfortunately, the consequences for the hapless lawyers who submitted them are painfully real: a furious judge, sanctions, and international humiliation. For an excellent and very entertaining description of this case, and of how AI chatbots work, watch this video.

The liability can be huge. As of this writing, a multi-million-dollar defamation case is wending its way through the courts. It started when a reporter who was using ChatGPT for research got results that included an article about an individual’s lengthy criminal history, complete with the person’s name, age, where they lived, and links to press quotes about their long crime spree. There were only two problems: First, the information was unrelated to the story the reporter was researching. Second, it was all completely made up, from the criminal history to the “press quotes”. The reporter forwarded the results to the subject, and the subject is now suing OpenAI, the company behind ChatGPT.

According to OpenAI, the fake court cases and the made-up article about the crime spree are both examples of chatbot “hallucinations”: elaborate and convincing, but not real. OpenAI says that these hallucinations are not unusual. It has also argued to the court that they are not its fault, because they were generated by a computer system rather than by human beings, in response to questions asked by people who were most definitely not ChatGPT or anybody who worked at OpenAI. Yet, had the reporter included that information in a published story, the reporter, his editor, and his publication could all have been exposed to liability for damages.

The solution: If you use AI for research, treat the results like a Wikipedia article created by a well-meaning editor with a vivid imagination, a tenuous grip on reality, and no oversight.

  • Be diligent about framing your questions with specificity. Narrow inquiries will produce more useful results than broad conceptual questions, and those results will be easier to verify.
  • Check every result against a second source. DO NOT use the same AI to check the results that you used to generate them. (The lawyers made that mistake, by asking ChatGPT, “Are these cases real?”) Use other research resources instead — Google, another search engine, physical books, online publications, or even picking up the phone and calling the source or other experts to verify the information that the AI delivered.
  • Check attributions, too, especially if you are planning to use quotes from books or other works. In response to queries by scam artists, AI has generated whole ebooks – some for sale on Amazon — that are collections of copyrighted material stolen from human authors and subject experts. This is especially common for technical books and other works of non-fiction. Google the author to see if they have a body of work and a professional history (like a profile on LinkedIn, or a bio on a university website). Search some text from your quote to see if it shows up in more than one work with more than one author name.
  • If the chatbot’s responses to your research queries include links to articles or other resources, read the entire referenced article, at the publication’s source rather than within the chatbot results, to make sure that a) it’s accurate, and b) the quote that makes your point isn’t contradicted by something else in the same document. (For a quick mechanical first pass on quote-checking, see the short script after this list.) Those with legal or scientific training will recognize this excellent habit as a standard practice that prevents embarrassing mistakes.
  • If your results include conclusions from, excerpts from, or citations to research, scientific studies, or business case studies,
    • Read the original research reports or studies, in the original publication.
    • Include links to the original research in your article.
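
For that quick mechanical first pass mentioned above, a few lines of code can tell you whether a quoted passage actually appears on the page it’s attributed to. The sketch below is a minimal example, assuming Python 3 and only the standard library; the URL and quote are hypothetical placeholders, and passing this check is not a substitute for reading the full article at the source.

    # Minimal quote-verification sketch (Python 3, standard library only).
    # CITED_URL and QUOTE are hypothetical placeholders; replace them with the
    # link and passage you are trying to verify. Some sites block automated
    # requests, so if the download fails, read the page manually instead.
    import re
    import urllib.request

    CITED_URL = "https://example.com/original-article"
    QUOTE = "the exact passage the chatbot attributed to this source"

    def page_text(url: str) -> str:
        """Download the page and crudely strip HTML tags to get rough plain text."""
        with urllib.request.urlopen(url, timeout=30) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        text = re.sub(r"<[^>]+>", " ", html)      # drop tags
        return re.sub(r"\s+", " ", text).lower()  # collapse whitespace

    def quote_appears(url: str, quote: str) -> bool:
        """True if the quoted passage appears verbatim (case-insensitive) on the page."""
        return re.sub(r"\s+", " ", quote).lower() in page_text(url)

    if __name__ == "__main__":
        if quote_appears(CITED_URL, QUOTE):
            print("Quote found on the cited page; now read the whole article for context.")
        else:
            print("Quote NOT found verbatim; treat the citation as unverified.")

If the script can’t find the quote, that doesn’t prove the citation is fake, but it does mean you should treat it as unverified until you have checked it by hand.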

Links to the original research don’t always have to be included in your published post, but they must be included in your draft. This ensures that another set of eyes can look at them, and that your organization has the research on file if it’s ever needed to defend your work.

Learn more:

Lawyers blame ChatGPT for tricking them into citing bogus case law

The Next Big Thing?: The Legal Implications of ChatGPT

Copyright Guidelines for Content Creators

