We deserve to know if something was generated by AI

Monday, May 22, 2023

We're plunging into a world where AI-generated text surrounds us. But we don't know how far along we are. How much of the text you read each day was written by a human, and how much was generated fully or partially by an LLM? We don't know, and probably can't know, and that brings about some problems.

I'm not so naive as to think that because something should be done, it can or will be. Don't let that distract from the point of this post. If we know what we'd like to aim for in an ideal world, we can better observe the results of not getting there, and that can inform solutions to second-order (or first-order) problems.

LLMs haven't reached their saturation point yet, but there are already a lot of places where you expect to see them. Chat bots on websites? It would not be shocking to find them powered by an LLM. Emails from your sales rep? Probably written by ChatGPT. And recruiter emails? At the best of times those often felt robotic, so why would they still be written by humans?

I'm not alone in having fears about the future with these technologies. And this reaction is not at all new; it's probably the most boring take on any new technology. Breaking news: person is afraid that new technology will change things! But these fears are worth airing, because they come from somewhere; our emotions are grounded in something about reality that we've observed.

In this particular case, I'm afraid that by passing off AI-generated text as something written by humans, we'll break our ability to interact with systems effectively.

In general, knowing how something works is crucial to interacting with it well. If you gain mechanical sympathy, you know how to push it to optimal performance. But if you don't have an understanding of it, then you're painstakingly building a mental model of it over time, and that's a slow and error-prone process.

LLMs are very powerful, and also limited. If you're not used to interacting with them, they make mistakes in surprising ways, mistakes very different from those that humans make. Reviewing something an LLM generated requires a very different kind of scrutiny than reviewing something a human wrote, even though both require review. They have different failure modes. Sam from the ops team probably isn't making up fake facts when writing a design document, but ChatGPT sure is. Not disclosing the provenance of a text robs us of the agency to interact with that text properly, on our terms.

This problem isn't unique to the latest hotness, though. It's been around since we first put computers between customers and our support staff. Have you ever chatted with an "agent" to get support on a site and had the feeling that you were talking to a robot, not a person? I sure have, and I suspect that in many of those cases I was talking to a machine[1]. It really changes the tenor of the conversation.

Not disclosing this increases effort and emotional cost for people interacting with machines. If you think the other side of the chat box is a human, you have to put a lot more effort into writing your messages. But if you know it's a machine, you can interact with it as such and put in less effort for the same result. You can skip the pleasantries, say things in short ungrammatical phrases, and get good results while saving time and effort.

This goes deeper, too, I think. We're going to see systems-level effects of AI-generated content in ways that we cannot predict. Some fundamental parts of our systems have been altered overnight. A poignant example is the flood of AI-generated submissions to a sci-fi publication: the system for reviewing submissions wasn't designed for the vast increase in volume that generated content makes possible. That's a harbinger of what's to come.

Many of our systems are designed for human-scale inputs and outputs. But what happens to those systems when we generate inputs and consume outputs at the speed of machines, instead? I don't know. You don't know. But what we do know is that some things are going to break.

It sure would help if we knew clearly when AI-generated text is being used, so we could forecast the breakage more easily. Then we could adapt our systems and repair them before we see too many negative effects. Every technological change brings the bad with the good. I have hope that in the long term, this technology will also be applied in unambiguously good ways. The paths to get there are many; let's work to make it as painless and as ethical as possible.


[1] Sometimes support staff are required to follow a script strictly. In these cases they are being used as automatons following a decision tree. My years in tech support taught me some things about this, from both sides of the table. You can only get around the decision tree if you know the decision tree exists.


Post notes:

I'm experimenting with adding this section at the bottom with some reflections on the post I've written. I don't know if I'll keep doing it, but it's fun and it's an opportunity to let some of the subjective and meta things out.

This one makes me nervous to post, because anything that touches LLMs is very charged these days. And then when you toss in ethics, people can understandably grow defensive or touchy. I think this is an important topic, but I'm just nervous about how people will react; the comments on anything LLM-related can get out of hand easily.

I wanted to get deeper into the systems side of things in this post. But ultimately, I wasn't able to: I just don't know enough yet about how these tools will sit within and impact our systems, so I had to cut it!

