The Gist on Editing AI-Generated Tech Content

By Chris Dole, Posted October 4, 2024 in articles

Reading time: 3 minutes

Intent of this article: I’ve learned a few things about editing tech copy that might be helpful to you. As you read, note the experience and savvy behind it. By the end, you may want to talk to me.

The value here is to demonstrate that I can write and edit clearly, drawing on my experience in a way that’s relatable and perhaps even a bit fun to read. Let’s see how that works.

The angle is “generative AI and tech content research”… let’s see what I can make of it…

It’s been interesting to “refactor” my long experience building technology systems into writing and editing content about tech products and services. This is especially true when the research for that content originates from generative AI. I’m familiar with AI’s shortcomings from building it into business systems, and it’s fascinating to see those shortcomings (and opportunities?) reflected in the tech content research process.

To illustrate, I recently attended a survey course sponsored by an AWS partner. They promised free food and drinks to anyone who would sit through the course and then tell people about it. That was a win/win in my book! I’m holding up my end of the bargain by writing this article.

During the course we compared various generative AI models to see what they could do. The differences between them were twofold: the data used to train them, and how each model’s algorithm specialized in a certain type of report format. These takeaways from the course stood out (a rough sketch of the comparison exercise follows the list):

  1. The variety of generative AI models available.
  2. The ability to compose content for various audiences and genres.
  3. The blind spots when training data did not include enough material to answer the prompt.
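
To make that comparison exercise concrete, here’s a minimal sketch of how you might send one prompt to several models and eyeball the differences. The model names and the `generate` callables are placeholders of my own, not anything from the course; swap in whichever SDK each model actually exposes.

```python
from typing import Callable

def compare_models(prompt: str, models: dict[str, Callable[[str], str]]) -> None:
    """Send the same prompt to each model and print the responses side by side."""
    for name, generate in models.items():
        print(f"=== {name} ===")
        print(generate(prompt))
        print()

# Stubbed-in "models" so the sketch runs on its own; replace with real client calls.
models = {
    "model-a": lambda p: "stubbed response A",
    "model-b": lambda p: "stubbed response B",
}

compare_models("Summarize our Q3 incident report for an executive audience.", models)
```

Even a crude harness like this makes the two differences above easy to spot: where the training data diverges, the answers diverge, and each model leans toward the report format it was tuned for.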

I expected the blind spots. Generative AI is fairly recent, but predictive AI has been around for a while.

In my time building business systems, I worked elbow-to-elbow with PhDs in statistics. The algorithms they applied to derive numerical results were only as good as the training data. If the production data was similar, then the predictions might be useful. But if the production data deviated too much, the output could be grossly misleading.
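
A toy illustration of that point (my own sketch, not my former colleagues’ models): fit a simple model on one range of data, then score it on “production” data that has drifted away from what it was trained on.

```python
import numpy as np

rng = np.random.default_rng(42)

# The true relationship is slightly curved; the model we fit is a straight line.
def true_process(x):
    return 2.0 * x + 0.05 * x**2

x_train = rng.uniform(0, 10, 200)                     # historical training data
y_train = true_process(x_train) + rng.normal(0, 1, 200)

slope, intercept = np.polyfit(x_train, y_train, 1)    # fit the line
predict = lambda x: slope * x + intercept

x_similar = rng.uniform(0, 10, 200)    # production data that resembles training
x_drifted = rng.uniform(40, 50, 200)   # production data that has drifted

def mean_abs_error(x):
    return np.mean(np.abs(predict(x) - true_process(x)))

print(f"error on similar data: {mean_abs_error(x_similar):.1f}")
print(f"error on drifted data: {mean_abs_error(x_drifted):.1f}")   # much larger
```

Nothing about the model changes between those two lines of output; only the data does.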

How does this manifest in generative AI output? At this point, we know that LLMs will “make stuff up” to fill out a response if they don’t have matching data.

This problem gets worse with technical topics. Unlike more common subjects, tech content on the internet presents two particular challenges to a researcher trying to use AI tools:

  1. There might be enough data for the generative AI models - but it might be obsolete.
  2. There might be very little data at all.

One “internet hack” I stumbled upon in my work gets past the “I just have to say SOMETHING” tendency of generative AI models. If you haven’t tried this yourself yet, you really should. Go to Google and ask it a series of specific tech questions.

Here’s what I found: if Google “has something to say,” you’ll get a generative AI response. If it punts and just gives you a list of blue links (after the paid-for results, of course), then you know there wasn’t enough data for an answer.
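
For what it’s worth, here’s a hedged sketch of how I keep track of those spot checks. The queries and the yes/no flags are things I record by hand after each search; nothing here calls Google programmatically.

```python
# True means the search returned a generative AI answer; False means just blue links.
spot_checks = {
    "how to rotate IAM access keys with the AWS CLI": True,
    "configure webhook retries in our in-house deployment tool": False,
}

thin_coverage = [query for query, got_ai_answer in spot_checks.items() if not got_ai_answer]

print("Topics that probably need primary-source research:")
for query in thin_coverage:
    print(f"  - {query}")
```

The topics that land in the second bucket are the ones where I budget extra time for primary sources instead of leaning on AI research.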

When I earned my master’s degree, I was trained in what the AI industry now (kindly) calls “Classical AI.” I felt a little better about my education after recently reading the book Rebooting AI, in which the authors lament that the industry will never achieve robust, trustworthy AI without revisiting some of those classical ideas.

One last distinction from my own research use cases: as a builder, the research was about how to get things done. As a writer and editor, the research is about assessing the coverage a given topic might already have.