Does technology have a right to exist? (No.)

Monday, January 30, 2023

So often, people argue against restrictions on technology (or on tech companies) by claiming that those restrictions aren't possible given the scale, value, or some other property of the technology. For example, a common retort to arguments that Facebook and YouTube should have better moderation is that "human moderation is impossible at that scale!"

There's just one problem: this presupposes that their technology has a right to exist at scale. More broadly, the argument is that their technology has a right to exist.

This was manifest recently in Heather Meeker's article "Is Copyright Eating AI?". In it, she argues that we need clear legal rules that "neural networks, and the outputs they produce, are not presumed to be copies of the data used to train them" (emphasis mine) or else we'll kill the industry and stifle innovation. Specifically, she believes that generative AI in particular is at risk of being brought down by copyright lawsuits.

And let's be clear: she isn't just pointing out that this would be the consequence in the absence of such a legal rule. She's arguing that the rule would be a good thing, saying "let's hope this nascent field doesn't sink".

Among her arguments is that none of a variety of approaches which would let people opt in or out is feasible:

  • Having a robots.txt equivalent for opting out of ML training would have a gigantic backlog, so it won't work (see the sketch after this list).
  • Compensating everyone who contributed the original material is technically infeasible due to distribution costs[1].
  • It would be difficult to decide how much to pay to which people, because we can't tell which works have been used, or how heavily.
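
For concreteness, here's a minimal sketch of how a training crawler could honor a robots.txt-style opt-out, in Python. The "ML-Training" user-agent token is a hypothetical convention of my own for illustration, not an established standard; the assumption is that sites could target it in their robots.txt:

    # A site opting out of ML training would serve a robots.txt like:
    #   User-agent: ML-Training
    #   Disallow: /
    from urllib.parse import urlparse
    from urllib.robotparser import RobotFileParser

    ML_USER_AGENT = "ML-Training"  # hypothetical token, not a standard

    def may_train_on(url: str) -> bool:
        """True only if the site's robots.txt allows this crawler to use url."""
        parts = urlparse(url)
        parser = RobotFileParser()
        parser.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
        parser.read()  # fetch and parse the site's robots.txt
        return parser.can_fetch(ML_USER_AGENT, url)

    if may_train_on("https://example.com/blog/post"):
        print("ok to include in the training set")
    else:
        print("excluded at the creator's request")

Even this sketch doesn't touch her backlog objection, of course: the mountain of already-published works with no such entry in their robots.txt at all.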

These statements are true in that they do pose a barrier to those particular mechanisms working! But here's the thing: that's not an exhaustive list of solutions. It's possible that there's some other brilliant idea which could make model training both feasible and consensual, so that artists and programmers could choose whether their art and code is included or excluded.

But the bigger thing: It doesn't matter if it's exhaustive.

Her argument is this: It's impossible to let people opt in/out of ML training on their creative works, so we must allow ML training without such a mechanism. But that presupposes that the ML training should exist.

That doesn't hold water for me. It seems to me that if you can't introduce a technology without causing a harm, you have to either argue that the benefits of the new technology specifically outweigh that harm, or you have to not create the technology.

With generative AI, we're at a crossroads. We're going to decide soon[2], as a society, whether we value generative AI or the creative works of humans more. In one direction, we decide that generative AI should exist, and we set up legal rules such that it is protected and you can train models on just about anything. In the other direction, we decide that the copyright of creative works matters more than training a model, and we empower creators to decide how their works are used, or not used, in ML.

But let me be clear about this: generative AI isn't dead if we decide to protect creators. It will look different if we choose creators over models, but it will continue to develop[3].

And this is true of all technology: the greatest engineering is shaped by its constraints. We can, and must, place restrictions on technology to avoid harms. Trust human ingenuity to find ways to keep creating value in the face of those restrictions.

And there we come back to Facebook and YouTube, too. These sites have claimed they can't moderate content at scale, so we're told to tolerate the poor moderation, the hate speech and abuse, the promotion of terrorist training materials on YouTube, because they just can't do anything about it. The Fediverse shows that there's an alternate path: content moderation there is distributed across many small servers, and it feels potentially much more tractable. Hopefully this serves as an existence proof, and as a blueprint for future transformations.

You can add constraints and things will look different. But the technology will keep existing if it can be made in a way that isn't quite as harmful. If not, well... technology itself doesn't have a right to exist.


[1] Never mind the fact that this isn't even the point of open source. I'd be very upset if my code could literally be bought to remove the copyleft license restrictions I've put upon it.

[2] "Soon" on the scale of society: it may take a decade or more for us to get any clarity from the current cases, and even that may not resolve the matter. But we're hurtling toward the other side of this, one way or the other.

[3] It's naive to assume that this is the final form of generative AI, and that the field will surely die if we don't allow this approach. There are so many other approaches we can try! This field is just getting started, and we'll find new, better ways of doing it.

