submitted 11 months ago by sculd@beehaw.org to c/technology@beehaw.org

Apparently, stealing other people's work to create a product for money is now "fair use", according to OpenAI, because they are "innovating" (stealing). Yeah. Move fast and break things, huh?

"Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials," wrote OpenAI in the House of Lords submission.

OpenAI claimed that the authors in that lawsuit "misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence."

[-] Even_Adder@lemmy.dbzer0.com 5 points 11 months ago

When people say that the "model is learning from its training data", it means just that, not that it is human, and not that it learns exactly the way humans do. It doesn't make sense to judge boats on how well they simulate human swimming patterns, only on how well they perform their task.

Every human has the benefit of training, as a baby, on the things around them and of being trained by those around them, building a foundation for all later skills. Generative models rely on many text-and-image pairs to describe things to them because they lack the ability to poke, prod, rotate, and disassemble for themselves.

For example, when a model takes in a thousand images of circles, it doesn't "learn" a thousand circles. It learns what a circle GENERALLY is like, the concept of it. That representation, combined with random noise, is how you create images with these models. The same happens for every concept the model trains on, everything from "cat" to more complex things like color relationships, reflections, and lighting. Machines are not human, but they can learn despite that.
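As a toy illustration of that idea (a hypothetical sketch in plain Python, not how a real generative model is implemented): "train" on a thousand noisy circle examples by fitting a single general parameter, then generate new circles from that learned representation plus random noise. The names and the single-parameter "model" here are made up for illustration.

```python
import math
import random

random.seed(0)

# Hypothetical training set: each "image" is reduced to one noisy
# measurement of a circle's radius. The model does not memorize the
# thousand samples; it fits one parameter, a general notion of
# "circle radius".
TRUE_RADIUS = 5.0
samples = [TRUE_RADIUS + random.gauss(0, 0.5) for _ in range(1000)]

# The learned "concept": a single summary of all training examples.
learned_radius = sum(samples) / len(samples)

def generate_circle(noise_scale=0.1, n_points=8):
    """Generate a new circle from the learned concept plus fresh noise."""
    r = learned_radius + random.gauss(0, noise_scale)
    return [(r * math.cos(2 * math.pi * k / n_points),
             r * math.sin(2 * math.pi * k / n_points))
            for k in range(n_points)]

points = generate_circle()
```

Every generated circle is new (the noise makes each one slightly different), yet all of them reflect the general concept rather than any single training example, which is the distinction being made above.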

[-] Eccitaze@yiffit.net 3 points 11 months ago

It makes sense to judge how closely LLMs mimic human learning when people use that comparison as a defense of AI companies scraping copyrighted content, claiming that banning AI scraping is as nonsensical as banning human learning.

But when it's pointed out that LLMs don't learn very similarly to humans, and require scraping far more material than a human does, suddenly AIs shouldn't be judged by human standards? I don't know if it's intentional on your part, but that's a pretty classic example of a motte-and-bailey fallacy. You can't have it both ways.

[-] Even_Adder@lemmy.dbzer0.com 1 points 11 months ago

I don't understand what you mean, can you elaborate?

[-] ParsnipWitch@feddit.de 3 points 11 months ago* (last edited 11 months ago)

In general I agree with you, but AI doesn't learn the concept of what a circle is. AI reproduces the most fitting representation of what we call a circle, but there is no understanding of the concept of a circle. This may sound like nitpicking, but I think it's important to make the distinction.

That is why current models aren't regarded as actual intelligence, although people already call them that...

[-] Even_Adder@lemmy.dbzer0.com 1 points 11 months ago* (last edited 11 months ago)

I understand. I didn't mean to imply any sort of understanding with the language I used.

this post was submitted on 11 Jan 2024
254 points (100.0% liked)