And here we go!!!

This is going to be interesting to watch!

A collection of bestselling authors, including Game of Thrones author George R.R. Martin and legal drama writer John Grisham, has sued ChatGPT developer OpenAI for using their writing to train the artificial intelligence without paying.

The lawsuit, which was filed in Manhattan federal court on Wednesday by the writers’ advocacy group known as the Authors Guild, claims that the chatbot uses the works of famous authors to train its answers without providing sufficient compensation for the creators. It’s the latest action by creators worldwide to ensure proper compensation as more and more people adopt generative tools like ChatGPT.

Full article, HERE from the Washington Examiner!

When you add these to the suits against the ‘art’ AI people, the whole AI thing ‘could’ collapse if they win in court, because of the massive copyright violations on these systems used to ‘train’ the AIs.

I’ma need more popcorn…



And here we go!!! — 9 Comments

  1. If Frank Herbert were alive, we could really call it the Butlerian Jihad…

  2. This. I have no problem with machines computing, and providing data for us, as long as we make the decisions. But this is what we get – and deserve – for asking machines to think for us. It will either end … or not end well.

  3. Boy howdy, this distraction will really motivate George to write those last two books. *rolls eyes*

  4. Do Universities pay royalties when they use books to teach their students?

    One could argue that this is the same difference.

  5. This realm of law and intellectual rights has virtually no legal precedent for guidance. The courts will likely move very slowly so as to not set bad guidelines for future litigation.

  6. I’ve asked “AI” to write British Imperial SciFi.

    Utter rubbish.

    Obvs the Curzon algo needs tweaking.

  7. If you print a book, people are pretty free to do with it as they please. What is ChatGPT doing with the book that I don’t do with mine? Read, analyze, memorize, quote, and share.

  8. Unless the AI developer documents what sources were used for training, there is kinda no proof linking to specific content creators, unless they document that the specific configuration needs so much data that it could only be satisfied with such a high fraction of available text that it is impossible to have omitted this or that person’s copyrighted work.

    This is a fundamental challenge with respect to this lawsuit, because it is quite possible technically to make low-data-requirement neural nets of the specific sort, and there is quite a lot of public domain material that predates any material that would require compensation of living creators.

    Now, judges include a lot of crooks and people who have no clue how this stuff works, so the lawsuit may proceed on absurd merits.

    These things are absolutely not thinking, they are doing a bunch of matrix (2D ‘vector’) operations, and they can be understood as a really fancy set of adaptive digital filters. Lots of people, me included, don’t even understand simpler adaptive digital filters such as Kalman filters.

    Kalman filters are used in GPS, and in some other controls applications. In a GPS, the filters solve distances to satellites to get a high precision value for location. You could class that as thinking, or you could not class it that way. If it counts as thinking sufficient for personhood, then perhaps various seeker missiles are in fact persons, and even test firings are murder of the seeker person.
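    The commenter’s point that a Kalman filter is just arithmetic, not thinking, can be seen in a minimal one-dimensional sketch. All numbers here (process and measurement noise, the noisy readings) are illustrative choices, not anything from a real GPS receiver:

    ```python
    def kalman_1d(measurements, process_var=1e-3, meas_var=0.5):
        """Estimate a slowly varying value from noisy measurements.

        A scalar Kalman filter: predict, then blend the prediction with
        each new measurement, weighted by the Kalman gain. No 'thinking'
        anywhere -- just a few multiplications and additions per step.
        """
        est, est_var = measurements[0], 1.0  # initial guess and its uncertainty
        for z in measurements[1:]:
            est_var += process_var            # predict: uncertainty grows
            gain = est_var / (est_var + meas_var)  # how much to trust z
            est = est + gain * (z - est)      # update the estimate
            est_var = (1 - gain) * est_var    # updated uncertainty shrinks
        return est

    # Noisy readings of a quantity that is really about 10.0:
    noisy = [10.2, 9.7, 10.1, 9.9, 10.3, 9.8]
    print(round(kalman_1d(noisy), 1))  # → 10.0
    ```

    A real GPS receiver runs a multidimensional version of the same loop over satellite pseudoranges, but the character of the computation is identical.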

    Since you know aviation, you know this sort of thing can be applied to various tasks there. Some of them are good and sound applications of technology. Some of them are really stupid applications, and perhaps the managers and engineers should hang for the stupidity.

    There is perhaps an irreconcilable philosophical divide between some of the parties on this issue. On the one hand, many artists find explanations of technology to be merely a bunch of arbitrary monkey sounds. Techie claims that ‘it cannot be doing x’ are merely magical verbal rituals in such ears, and perhaps powerless. On the other hand, many techies find it impossible to believe what the artists claim that the tech is doing. However, try to convince very many electrical engineers that they can recover signal content at frequencies well above their maximum sampling frequency. Surely you will be able to find some so poorly trained, or lazy, that they can be caught making that assumption.

    The information content of an NN model is subject to a similar limit as that sampling-frequency rule of thumb.
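    The sampling-frequency point is easy to demonstrate numerically. The specific tones here (a 7 Hz signal sampled at 10 Hz) are my own illustrative choices: above the Nyquist limit of 5 Hz, the samples are literally indistinguishable from those of a lower-frequency alias:

    ```python
    import math

    fs = 10.0  # sampling rate in Hz; Nyquist limit is fs / 2 = 5 Hz
    n = 8

    # A 7 Hz tone sampled at 10 Hz: since 7 = 10 - 3, it "folds" down
    # and produces exactly the same samples as an inverted 3 Hz tone.
    tone_7hz = [math.sin(2 * math.pi * 7 * k / fs) for k in range(n)]
    alias_3hz = [-math.sin(2 * math.pi * 3 * k / fs) for k in range(n)]

    print(all(abs(a - b) < 1e-9 for a, b in zip(tone_7hz, alias_3hz)))  # → True
    ```

    No amount of cleverness recovers the 7 Hz content from those samples; the information simply is not there, which is the analogy being drawn to the finite information capacity of a trained model.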

    You can build your own image set, and train one of these smaller models, and verify for yourself that copyright is not strictly violated, by comparing the size of your training images to the size of your model. If I print off that picture of your face, and trace over it with paper, and make an oval from the outline of your head, is that a copyright violation, or a legitimate derived work?
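    The size comparison suggested above can be done on the back of an envelope. These figures (50,000 small RGB images, a one-million-parameter toy network) are hypothetical, chosen only to show the shape of the argument:

    ```python
    # Hypothetical numbers: a model far smaller than its training set
    # cannot be storing verbatim copies of everything it saw.
    images = 50_000
    bytes_per_image = 64 * 64 * 3           # 64x64 RGB training images
    training_bytes = images * bytes_per_image

    params = 1_000_000                      # a small toy network
    model_bytes = params * 4                # 32-bit floats

    print(training_bytes // model_bytes)    # → 153 (training set ~153x model size)
    ```

    Whatever the model retains, it is at best a heavily compressed, lossy residue of the training set, not a copy.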

    Obviously, tracing with many lines, and creating a recognizable image of your face, is not my own original work.

    Tracing a simple oval fit to that image is clearly even less of my own effort, but starts to get to a point that is obviously like the ‘bits with color’ essay, and to the point where I can argue that my technique is so fundamentally simple that I can redo it on any similar form of IP, /and/ that the elements of it cannot be protected as IP by any one person.

    I.e., someone cannot protect the information content of 600 Hz acoustic signals, and require everyone else in the world to pay a fee whenever their speech includes frequency content at 600 Hz.

    I can have someone take a picture of me in a similar posture to your author photo, pay them for their rights, and then do my oval tracing exercise on my photo print out. It would not matter that I got the idea from looking at your author photo.

    All of this is a way of sharing the foundation for arguing that we can know that what these AIs are doing is not high-fidelity copying, but more akin to the low-fidelity copying, or filtering and matching, that humans do when learning. If you read Tom Clancy once, and write a novel later, does that mean that your own act of writing a novel is necessarily a copyright violation of Clancy’s work?

    There is a point to the techie argument that criminalizing or restricting AI/neural-net training is effectively a complete rewrite of established case law, specifically in order to discriminate.

    There is also the image/artwork-specific argument, perhaps also true of text, that the creators licensed the work in ways that they did not entirely understand and choose, but which nonetheless means that they did effectively give permission. If you sign a publishing contract, and do not specify limits and formats of printings, and then later the publisher has printed 10k copies, you do not have a legal foundation for going ‘hey, wait, I did not explicitly tell you that you could print 10k, this is a violation, you didn’t have permission, you have to pay me damages’. The boilerplate on a lot of services for uploading and sharing images is going to say that you give permission to the service to store the image on their server, and to let people download the image. Artists who used those services in many cases failed to retain the rights that they now believe that they had retained.

    Now, on the tech side, there are absolutely a bunch of unethical people basically entirely ignoring any IP rights held by other people.

    AI researchers also include a lot of really crazy people who primarily want AI as an idol that they think they will finally be able to get Christians to bow down and worship.

    However, unless they can show that the developer used specific texts, this lawsuit is also really crazy.