>I think it's learning styles in a way that's at least partially analogous, because it comes out with things that are reasonably original and not in the training data.
I don't think that is evidence that what it is doing is "learning".
>I'm sure an LLM can write you an essay like that for any artist you want, but I'm not all that convinced those are meaningful even with humans.
Well, it wouldn't be reflective of what the LLM thinks, so what is your point? If you are of the belief that humans don't have thoughts, I guess it's not a surprise you view things this way.
>That's the thing, it's not a hypothetical, it's a past story from here on HN. Someone did that, asking for copies of a famous painting (Girl with a Pearl Earring), and got highly derivative items out of the model, and we had a debate over whether that even means anything, because that's both a simple description of the painting and the name of a famous work, so it can be ambiguous whether you asked for "Girl with a Pearl Earring" or a girl with a pearl earring in the prompting.
You say derivative but without any reference to what that actually means... what about it is derivative? That's the analysis that happens in court. The analysis isn't "what you asked the LLM," because that's not dispositive of whether or not something is a copy.
>I agree that it looks like copyright infringement whether it's done by a human or AI, though. I guess a lot of people missed the prior discussion on HN.
Sorry, I don't read every single thread about copyright on HN? This is the second posting I've seen on the RFC today. Give me a break!
> I don't think that is evidence that what it is doing is "learning".
When I say learning I mean something like "gaining new ability by studying how others did the same task, resulting in being able to produce novel output." I'm not quite sure what you are using the word to mean here, though I might agree that there are differences between what AIs do and what humans do, the question being what they are and whether they're important here.
I don't claim to know anything about the internal experience (if any) of an LLM writing such an essay, and I can't really reason about that because I've never been an LLM, whereas I can at least relate to human experience. Your assertion that it "wouldn't be reflective of what the LLM thinks" strikes me as a bit like saying you don't think submarines are actually "swimming," as the saying goes, though. It may not "think" in human terms as we do, but it's certainly doing some kind of calculation that produces an equivalent output, so I have real questions about whether we can rule that out on principle. We're well past passing the Turing test for a lot of things, in either the original or censored form, and these questions are getting less academic by the day.
> You say derivative but without any reference to what it actually means
We're talking about copyright law, so the meaning of "derivative" was borrowed from that, i.e. the AI model was producing works that could reasonably be thought to infringe the copyright of that painting when prompted for "a girl with a pearl earring." This was held up to mean that AIs are just regurgitating training data, that they are therefore implicitly missing something essential to being an artist, and that all their work should be considered derivative works of the training data as far as copyright law is concerned.
Meanwhile, I'm saying that I think the AI should be judged roughly as a human artist would be, to argue against the people who seem to want to say that the AI can't take input from copyrighted things without all of its output being tainted forever. We have no such requirement for humans, and I don't see why it makes sense to add this new restriction on AIs specifically.
> Sorry I don't read every single thread about copyright on HN?
I'm not faulting you for not knowing, I'm faulting myself for assuming too much context and just trying to explain what I had in my head when writing that so you could understand how I came to think that. Hopefully this lets you see where I'm coming from.
>When I say learning I mean something like "gaining new ability by studying how others did the same task, resulting in being able to produce novel output." I'm not quite sure what you are using the word to mean here, though I might agree that there are differences between what AIs do and what humans do, the question being what they are and whether they're important here.
I think the dictionary definition is more than sufficient: "the acquisition of knowledge or skills through experience, study, or by being taught." This is what I mean by running with your own made up definition.
>I don't claim to know anything about the internal experience (if any) of an LLM writing such an essay, and I can't really reason about that because I've never been an LLM, whereas I can at least relate to human experience. Your assertion that it "wouldn't be reflective of what the LLM thinks" strikes me as a bit like saying you don't think submarines are actually "swimming," as the saying goes, though. It may not "think" in human terms as we do, but it's certainly doing some kind of calculation that produces an equivalent output, so I have real questions about whether we can rule that out on principle. We're well past passing the Turing test for a lot of things, in either the original or censored form, and these questions are getting less academic by the day.
You are the one redefining words like "think" and "experience," not me. I'm not playing that game at all. After all, you are the one equating these processes between humans and AI by coming up with your own, much broader concoctions.
>We're talking about copyright law, so the meaning of "derivative" was borrowed from that, i.e. the AI model was producing works that could reasonably be thought to infringe the copyright of that painting when prompted for "a girl with a pearl earring." This was held up to mean that AIs are just regurgitating training data, that they are therefore implicitly missing something essential to being an artist, and that all their work should be considered derivative works of the training data as far as copyright law is concerned.
I'm familiar with copyright law; I'm not sure you are. A work can be derivative in a number of ways; some are legal, some aren't. It's not a new thing that some uses by a machine can be infringing and others non-infringing. Why must machines suddenly be analyzed the same as humans?
>Meanwhile, I'm saying that I think the AI should be judged roughly as a human artist would be, to argue against the people who seem to want to say that the AI can't take input from copyrighted things without all of its output being tainted forever. We have no such requirement for humans, and I don't see why it makes sense to add this new restriction on AIs specifically.
Yes, I understand that. But I asked why it should be judged as a human, and you are saying because it "learns." But that's only based on your redefining the concept of learning so that it no longer requires anything human. The only reasonable arguments I've seen that AI outputs should be copyrightable are based on them being a tool that an artist can use. What you are saying is just dressed-up anthropomorphization.
> I think the dictionary definition is more than sufficient: "the acquisition of knowledge or skills through experience, study, or by being taught." This is what I mean by running with your own made up definition.
I mean, if a human looked at a bunch of art, essays, etc. and then was able to produce similar works, we'd normally consider that "learning." What word would you use for being able to reproduce Picasso (or whomever) by looking at a bunch of examples?
Also, I don't think I have defined "think" or "experience" at all. But I'd point out that I don't see anything like a principled boundary around them, or anything we can point to that humans do that AIs don't or can't do. It seems to fall back on something like qualia or subjective internal experience, and philosophy hasn't resolved that even with respect to other humans... except by analogy: "I think other humans are like me, and I have subjective internal experience, so they probably have it too, rather than being p-zombies."
If you have a better answer to that, feel free to tell me, it'd be interesting.
> It's not a new thing that some uses by a machine can be infringing, and others, non-infringing. Why now must it be that machines should be analyzed the same as humans all of the sudden?
Sure, I'll agree that it's not even necessary to consider the works transformative or whatever.
FWIW, I don't think that AIs should be getting their own copyrights or anything like that, I'm just saying that the training data shouldn't forever taint the output no matter what's produced.
>I mean, if a human looked at a bunch of art, essays, etc. and then was able to produce similar works, we'd normally consider that "learning." What word would you use for being able to reproduce Picasso (or whomever) by looking at a bunch of examples?
Would we? What you described sounds a lot more like copying than learning. That's why I asked the question I originally did. Your whole perspective seems to be based on an ignorant and misanthropic view of the arts. That art students just go to school to look at things so they can then reproduce things that look like those things. It's a bit asinine and insulting.
>Also, I don't think I have defined "think" or "experience" at all. But I'd point out that I don't see anything like a principled boundary around them, or anything we can point to that humans do that AIs don't or can't do. It seems to fall back on something like qualia or subjective internal experience, and philosophy hasn't resolved that even with respect to other humans... except by analogy: "I think other humans are like me, and I have subjective internal experience, so they probably have it too, rather than being p-zombies."
That's your burden to demonstrate as the person equating AI with humanity. You couldn't do it with "learning" without redefining learning, and you can't do it with "experience" or "think" without redefining those words either. Who is seriously advocating that LLMs are thinking and experiencing? I haven't seen anyone make those arguments.
>Sure, I'll agree that it's not even necessary to consider the works transformative or whatever.
That wasn't my point. A transformative analysis is one of the most fundamental elements of determining if something is a copy or not in copyright law. So I don't really have any idea what you are talking about with this one.
>FWIW, I don't think that AIs should be getting their own copyrights or anything like that, I'm just saying that the training data shouldn't forever taint the output no matter what's produced.
Yeah but your only argument for that is to redefine learning to pretend it's the same thing that humans are doing when that's clearly not the case.
> Yeah but your only argument for that is to redefine learning to pretend it's the same thing that humans are doing when that's clearly not the case.
What test can I do to differentiate them, then?
At first, you said they couldn't write an essay... but AIs can absolutely do that. The internal experience of even other people is unknowable, something we guess by analogy, so if you want me to agree, you need some other actual test on measurable outputs to differentiate them.
Otherwise this is all about qualia and there's no way to come to rational agreement.
You are being obtusely literal, as I did not ask you if they could write an essay. I asked you if they could express their feelings. There's no point in us conversing if you are going to respond this way, as it's disingenuous. I'd think you are capable of understanding the difference between the two. And I don't care if you agree with me or not; it's your burden to elevate AI to humanity, not mine, and you haven't done it here. Your perspective here seems to come from a life devoid of art and of experiencing things. For that, I'm sorry for you.
> I asked you if they could express their feelings.
And I asked how we can test whether someone has actual feelings or any other kind of conscious internal experience. If it's "obvious" then why is there no consensus on the whole https://en.wikipedia.org/wiki/Philosophical_zombie thing?
I only said it was obvious that LLMs don't know anything about art past what you described, which you didn't dispute, and which was an obvious logical conclusion from your own explanation of what the AI "learned".
>I gave this conversation to an LLM to respond to.
I'm not surprised, I repeatedly characterized your responses as obtuse, disingenuous, or ignorant. I'm not sure what you think you proved.