
It's an interesting legal question.

If a human cannot publicly use a copyrighted image without a license, why and how can a non-human?

If some images are free to use with attribution, how can an ML model track and provide such attribution?



It’s not interesting or complicated in any way: because a non-human has no free will and is operated by a human, the human (or in this case OpenAI/Microsoft/etc.) is ultimately infringing.

If and when the non-human is granted human rights, this can be revisited.


> If some images are free to use with attribution, how can an ML model track and provide such attribution?

Easy: if they cannot provide attribution, they cannot use the image to train an ML model.


> If some images are free to use with attribution, how can an ML model track and provide such attribution?

That's a problem for the people who create the models to solve.

This is what's so frustrating about the ML/AI community: they think the onus is on everybody else to overcome problems created by their products.


Exactly. I'm interested in approaches to a solution.
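
One hypothetical approach is to attach provenance metadata (source URL, license, author) to each training sample and aggregate it into an attribution list shipped with the model. This is only a sketch of that idea; the `Provenance` record and function names are made up for illustration, not any real pipeline's API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Provenance:
    """License/attribution metadata attached to one training sample."""
    source_url: str
    license: str  # e.g. "CC BY 4.0"
    author: str

def attribution_line(p: Provenance) -> str:
    """Render a human-readable credit line for one sample."""
    return f'"{p.source_url}" by {p.author}, licensed under {p.license}'

def attribution_list(records: list[Provenance]) -> list[str]:
    """Deduplicate per-sample records into one sorted attribution list
    that could be published alongside the model weights."""
    return sorted({attribution_line(p) for p in records})
```

This only solves the bookkeeping side (crediting everything in the training set); it says nothing about tracing which samples influenced a particular generated output, which is the much harder open problem.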


Humans use copyrighted images without licenses all the time. Even trivial things like sharing a photo of a book cover can qualify.


Yes, but do you pass it off as your own original artwork that you created all by yourself, or do you present the book cover in a way that makes it obvious that you are taking a photo of somebody else's book?


So it's what you do with the output and not the output itself that matters, right?


No, it's the part where you misrepresent somebody else's creation as your own. If it's not alright for a person to do, then it's not alright to automate it. Stop trying to play dumb semantic games; it's not nearly as clever as you think it is.


I don't think it's being clever. If someone prompts an AI generator and acts like they created the output, that's on them, not on the AI generator.


It would be on them if the AI generator were forthcoming about how it "created" the image. If a company like MS advertises a program that creates new content, but that program actually has a known tendency to output content from its training set, then they bear responsibility, especially if that program is being run remotely on their servers.



