
Computers and machines have been capable of mass production for decades, and humans have used them as tools. In the past 170 years, these tools of mass production have already diminished many thousands of professions that were staffed by people who had to painstakingly craft things one at a time.

Why is art some special case that should be protected, when many other industries were not?

Why should we kill this technology to protect existing artistic business models, when many other technologies were allowed to bloom despite killing other existing business models?

Nobody can really answer these questions.



>Why is art some special case that should be protected, when many other industries were not?

Because in this case the art is still necessary for the machine to work. You don't need horse buggies to make a car, nor existing books to make a printing press. You DO need artists' art to make these generative AI tools work.

If these worked purely off of open source art or from true scratch, I wouldn't personally have an issue.

>Why should we kill this technology to protect existing artistic business models,

We don't need to kill it. Just pay your dang labor. But if we are treating proper compensation as stifling technology, I'm not surprised people are against it.

Maybe in the 2010s tech would have had the goodwill to pull this off in PR, but the 2020s have drained that goodwill and then some. Tech made so many promises to make lives easier, and now it has joined the very corporations it claimed to fight against.

>Nobody can really answer these questions.

Well, it's in the courts, so someone is going to answer them soon-ish.


> We don't need to kill it. Just pay your dang labor.

> But if we are treating proper compensation as stifling technology, I'm not surprised people are against it.

That's just it, nobody looking to get paid by OpenAI actually did any labor for OpenAI. They did labor for other reasons, and were happy with it.

OpenAI found a way to benefit by learning from these images. The same way that every artist on the planet benefits by learning from the images of their fellow artists. OpenAI just uses technology to do it much more efficiently.

This has never been considered labor in the past. We've never asked artists to "properly compensate" each other for learning/inspiration in the past. I don't know why it should be considered labor or proper compensation now.

But we shall see what the courts decide!


There are many ways an artist can compensate their influences. Some of them are monetary.

When discussing our work, we can name them.

When one of our influences comes out with a new body of work, we can gush about it to our own fans.

When we find ourselves in a position of authority, we can offer work to our influences. No animation studio is really complete without someone old enough to be a grandfather hanging out helping to teach the new kids the ropes in between doing an amazing job on their own scenes, and maybe putting together a few pitches, for instance.

We can draw fan art and send it to them.

None of these are mandatory, but artists tend to do this, because we are humans, and we recognize that we exist in a community of other artists, and these all just feel like normal human things to do for your community.

And if an artist suddenly starts wholesale swiping another artist's style without crediting them, their peers get angry. [1]

1: https://en.wikipedia.org/wiki/Keith_Giffen#Controversy

OpenAI isn't gonna tell you that it was going for a Cat & Girl kind of feel in this drawing. OpenAI isn't gonna offer Dorothy Gambrell a job. OpenAI isn't going to tell you that she just came out with a new collection and she's still at the top of her game, and that you should buy it. OpenAI's not going to send her a painting of Cat & Girl that it did for fun. OpenAI isn't going to do anything for her unless the courts force it to, because OpenAI is a corporation who has found a way to make money by strip-mining the stuff people post publicly on the Internet because they want other humans to be able to see it.


Most people know 20,000-40,000 words. Let's call it 30,000. You've learned 99.999% of those 30,000 words from other people. And don't get me started on phrases, cliches, sentence structures, etc.

How many of those words do you remember learning? How many can you confidently say you remember the person or the book that taught you the word? 5? 10? Maybe 100?

That's how brains work. We ingest vast amounts of information that other people put out into the world. We consume it, incorporate it, and start using it in our own work. And we forget where we even got it. My brain works this way. Your brain works this way. Artists' brains work this way. GPT-4 works this way.

The idea that a visual artist can somehow recall where they first saw many of the billions of images stored in their brain -- the photos, movies, architecture, paintings, and real-life scenes that play out every second of every day -- is laughable. Almost all of that goes uncredited, and always will.

This is what it is to learn.


I tend to fall more on the "training should be fair use" side than most, but your comment seems to be missing the point. Nobody is arguing that models are violating copyright or social norms around credit simply because they consume this information. Nobody ever argued/argues that the traditional text generation in markov models on your phone's keyboard runs afoul of these issues. The argument being made is that these particular models are now producing content that very clearly does run into these norms in a qualitatively different way. You cannot convincingly make the argument that the countless generated "X, but in the style of Y" images, text, and video going around the internet are exclusively the product of some unknowable mishmash of influences -- there is clearly some internalized structure of "this work has this name" and "these works are associated with this creator".
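For reference, the phone-keyboard markov models mentioned above really are this simple. A minimal word-level sketch in Python (the function names here are illustrative, not any real keyboard's code):

```python
import random
from collections import defaultdict

def build_markov(text, order=1):
    """Map each `order`-word context to the words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, seed, n_words=10, rng=None):
    """Walk the chain from `seed` (a tuple of `order` words)."""
    rng = rng or random.Random(0)
    out = list(seed)
    for _ in range(n_words):
        choices = model.get(tuple(out[-len(seed):]))
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)
```

A model like this only ever emits word transitions it literally saw in its training text, yet nobody argues it infringes; the dispute is about models whose outputs land somewhere between this and verbatim reproduction.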

To take it to an extreme, you obviously can't just use one of the available neural net lossless compression algorithms to circumvent copyright law or citation rules (e.g., distributing a local LLM that helpfully displays the entirety of some particular book when you ask it to), you can't just tweak it to make it a little lossy by changing one letter, or a little more lossy than that, etc., while on the other hand, any LLM that performs exactly the same as a markov model would presumably be fine, so there is a line somewhere.


A company hires an artist. That artist has observed a ton of other artists' work over the years. The company instructs that artist to draw, "X but in the style of Y", where Y is some copyrighted artwork. The company then prints the result and puts it on their packaging.

A company builds an AI tool. That AI tool is trained on a ton of artists' work over the years. The company opens up the AI tool and asks it to draw, "X but in the style of Y," where Y is some copyrighted artwork. The company then prints the result and puts it on their packaging.

What's the difference?

I'd argue there isn't one. The copyright infringement isn't the ability of the artist or the AI tool to make a copy. It's the act of actually using it to make a copy, and then putting that out into the world.


The artist has a claim for production of a derivative work and for passing off against the other artist.


> What's the difference?

Ultimately, only high courts in each jurisdiction can decide. I can imagine a case where some highly advanced nations decide different interpretations that cause conflict. Then, we need an amendment to the widely accepted international copyright rules, the Berne Convention. Ref: https://en.wikipedia.org/wiki/Berne_Convention


Okay, but then that's an argument subject to the critiques made upthread that you were initially trying to dismiss? You can't claim that AI doesn't need to worry about citing influences because it's just doing a thing humans wouldn't cite influences for, then proceed to cite an example where you would very much be expected to cite your influences, and AI wouldn't, as evidence.


I never argued that AI doesn't need to worry about citing influences. If I am a person using a tool to create a work, and the final product clearly resembles some copyrighted work that I need to reference and give credit to, what does it matter if my tool is a pencil, a graphics editing program, a GPT, or my own mind? I can cite the work.


Like I said, this is exactly what the comment you first replied to was explaining. It is very clearly not the same as a pencil or a graphics editing program, because those things do not have a notion of Cat & Girl by Willem de Kooning embedded in them that they can utilize without credit. It is clearly not the same as your mind, because your mind can and, assuming you want to stay in good standing, will provide credit for influence.

Again, take it back to basics: do you believe it is permissible to share a model itself (not the model output, the model), either directly or via API, that can trivially reproduce entire copyrighted works?


I'd say that a tool itself can't be guilty of copyright infringement, only the person using the tool can. So it doesn't matter if the GPT has some sort of "notion" of a copyrighted work in it or not. GPTs aren't sentient beings. They don't go around creating things on their own. Humans have to sit down and command them, and that point, whoever issued the command is responsible for the output. Copyright violation happens at the point of creation or distribution, not at the much earlier point of inspiration or learning.

So yeah, of course imo it should be permissible to share a model that can reproduce copyrighted works. Being "capable of being used" to violate a law is not the same thing as violating a law.

A ton of software on my computer can copy-paste others' work, both images and words. It can trivially break copyright. Hell, there are even programs out there that can auto-generate code for me, code that various companies have patent claims for. Do I think distributing any of this software should be illegal? No. But I think using that software to infringe on someone's copyright should be.

(Note: This is different than if the program distributed came with a folder that included bunch of copyrighted works. To me, sharing something like that would be a copyright violation.)


I'm not sure how to explain this any clearer. I am talking about neural net compression algorithms. As in, it is literally just a neural net encoding some copyrighted work, and nothing else. It is ultimately no more intelligent than a zip file, other than that the file and the program are the same. You can't seriously believe that these programs allow you to avoid copyright claims, can you? Movie studios, music producers, and book publishers should just pack it in, pirates just need to switch to compressing by training a NN and seeding those instead, and there's no legal precedent to stop them? If you do think that, do you at least understand why nobody is going to take your position seriously?
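To make the "no more intelligent than a zip file" point concrete, here's a toy stand-in (plain Python, no actual neural net; all names are illustrative): a predictor that has perfectly memorized one document is functionally a decompressor for that document.

```python
def train_memorizer(text, order=4):
    """'Train' a table mapping every length-`order` context in `text`
    to the single character that followed it. If every context occurs
    only once, the table plus the first `order` characters is a
    lossless encoding of the text -- a crude stand-in for a neural
    net overfit to a single work."""
    model = {}
    for i in range(len(text) - order):
        model[text[i:i + order]] = text[i + order]
    return model

def reproduce(model, prefix, length, order=4):
    """Greedily decode the memorized text from its opening characters."""
    out = prefix
    while len(out) < length and out[-order:] in model:
        out += model[out[-order:]]
    return out
```

Distributing such a "model" of a copyrighted book would just be distributing the book in a funny file format, which is the extreme end of the spectrum being argued about here.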


A neural net designed to do nothing other than compress and decompress a copyrighted work is completely different than GPT-4, unless I'm uninformed. To me that sounds like comparing a VCR to a brain. GPT-4's technology is clearly something that "learns" in order to be able to produce novel thoughts and ideas, rather than merely compressing. A judge or jury would easily understand that it wasn't designed just to reproduce copyrighted works.

> It is clearly not the same as your mind, because your mind can and, assuming you want to stay in good standing, will provide credit for influence

I forgot to respond to this, but it's not true. Your mind is incapable of providing credit for 99.9% of its influence and inspiration, even when you want it to. You simply don't remember where you've learned most of the things you've learned. And when you have a seemingly novel idea, you can't always be aware of every single influential example of another person's work/art that combined to generate that new idea.


> A neural net designed to do nothing other than compress and decompress a copyrighted work is completely different than GPT-4, unless I'm uninformed.

Compression and the output from LLMs are cousins. The model tries to predict what continuations are likely, given context. Indeed, it takes a lot of effort to make LLMs less willing to just output training data verbatim. And conversely, you can get compression algorithms to do things similar to what LLMs do (poorly).

Whether this also describes most of human cognitive process, is subject to debate.


Individual words aren't comparable to the things people are worried about getting copied. People are much more able to tell you where they learned about more sophisticated concepts and styles.


The same principle applies, though. They can tell you maybe a dozen, maybe a few dozen, concepts they've learned and use in their work. But what about the thousands of concepts they use in their work they can't tell you about? The patterns they've noticed, the concepts that don't even have names, but that came from seeing things in the world that were all created by other people?


For example, how many artists drawing street scenes credit the designer at Ford Motors for teaching them what a generic car looks like? How many even know which designers created their mental model of a car?


That is again a single word.

There is a strong correlation between how copyrightable a concept is and how well you can point to where you learned it.


> That's just it, nobody looking to get paid by OpenAI actually did any labor for OpenAI.

To me this is a strong point in favor of the idea that OpenAI has no business using their work. How can you even think it's ok for OpenAI to use work that was not done for them without paying some kind of license? They aren't entitled to the free labor of everyone on the internet!


> How can you even think it's ok for OpenAI to use work that was not done for them without paying some kind of license?

At the risk of answering a rhetorical question: because copyright covers four rights: copying, distribution, creation of derivative works, and public performance, and LLM training doesn't fit cleanly into any of these, which is why many think copying-for-the-purpose-of-training might be fair use (courts have yet to rule here).

I think the most sane outcome would be to find that:

- Training is fair use

- Direct, automated output of AI models cannot be copyrighted (I think this has already been ruled on[0] in the US).

- Use of genAI to create works that would otherwise be considered a "derivative work" under copyright law can still be challenged under copyright.

The end result here would be that AI can continue to be a useful tool, but artists still have legal teeth to come after folks using the tool to create infringing works.

Of course, determining whether a work is similar enough to be considered infringing remains a horribly difficult challenge, but that's nothing new[1], and will continue to hinge on how courts assess the four factors that govern fair use[2].

[0]: https://www.reuters.com/legal/ai-generated-art-cannot-receiv...

[1]: https://www.npr.org/2023/05/18/1176881182/supreme-court-side...

[2]: https://fairuse.stanford.edu/overview/fair-use/four-factors/


> They did labor for other reasons, and were happy with it.

They were happy until their copyright got stolen, I guess. Then got unhappy.


> We've never asked artists to "properly compensate" each other for learning/inspiration in the past.

LLMs are collections of GPUs crunching numbers. "Inspiration" doesn't really apply to them.

A better analogy is sampling, and musicians remixing music are very much required to pay for the samples they use.


Only if "use" means distribution.

If I sample a track and play it in my home I don't properly compensate anyone.

If I ask GPT to create a cool new comic based on the article and then delete it or use it privately, the same applies.


Assuming that's true, GPT is the one "distributing" in your second example, so that still applies to them (if not you).


> That's just it, nobody looking to get paid by OpenAI actually did any labor for OpenAI. They did labor for other reasons, and were happy with it

Nobody working on a new cancer drug actually did any work for me. They did labour for other reasons, and were happy with it.

Then it is okay for me to steal their recipe and sell their cancer drug.


Nope, but it’s ok for you to read their recipe if they place it on the internet (research paper), and use it to make your own drug.


The entire point of the patent system was to say inventors can put their design on the net without it being stolen; so future inventors can build on their work.


And that is a good thing we should all celebrate.


>They did labor for other reasons, and were happy with it.

True, sadly most of those copyrights are probably owned by other megacorps. So they either collude to suppress the entire industry or eat each other alive in legal clashes. The latter is happening as we speak (the writers for NYT are probably long retired, but NYT still owns the words), so I guess we'll see how that goes.

>OpenAI found a way to benefit by learning from these images. The same way that every artist on the planet benefits by learning from the images of their fellow artists.

If we treat AI like humans, art historically has an equally thin line between inspiration and plagiarism. There are simply more objective metrics to measure now because we can indeed go inside an AI's proverbial brain. So the metaphor is pretty apt, except with more scrutiny able to be applied.


> Why is art some special case that should be protected, when many other industries were not?

It shouldn't be.

As soon as someone makes an AI that can produce its own artwork without requiring ingesting every piece of stolen artwork it can, then I'm on board.

But as long as it needs to be trained on the work of humans it should not be allowed to displace those people it relied on to get to where it is. Simple as that.


Are there any humans that can produce artwork without ingesting inspiration from other art? Do you know any artists that lived in a box their whole life and never saw other art? Do you know any writers who'd never read a book?

Are there any human artists who can't, if requested, draw or write something that's a copy of some other person's drawings or writings?

Also, FYI, you can't steal digital artwork. You can only commit copyright infringement, which is not the same crime as theft, because theft requires depriving the owner of something in their possession.


> Are there any humans that can produce artwork without ingesting inspiration from other art? Do you know any artists that lived in a box their whole life and never saw other art? Do you know any writers who'd never read a book?

> Are they any human artists who can't, if requested, draw or write something that's a copy of some other person's drawings or writings?

This still is pretending that humans and AI models are equivalent actors and should have the same rights

Emphatically no they shouldn't. The capabilities are vastly different. Fair use should not apply to AI.


This isn't about giving "rights" to machines. Machines are just tools. The question is about what humans are allowed to do with those tools. Are humans using AI models and humans not using AI models equivalent actors that should have the same rights? I'd argue emphatically yes they should.


The thing is, we already have doctrine that starts to encompass some of these concepts with fair use.

The four pronged test in US case law:

- the purpose and character of use (is a machine doing this different in purpose and character? many would say yes. is "ripping-off-this-artist-as-a-service" different than an isolated work that builds upon another artist's art?)

- the nature of the copyrighted work

- the amount and substantiality of the portion taken (can this be substantially different with AI?)

- the effect of the use upon the potential market for the original work (might mechanization of reproducing a given style have a larger impact than an individual artist inspired by it?)

These are well balanced tests, allowing me as a classroom teacher to duplicate articles nearly freely but preventing me from duplicating books en masse for profit (different purpose; different portion taken; different impact on market).


The problem with this conversation is that it's being had by people like the top-level commenter here, who states that clothing is not copyrightable. It is. Clothing design is copyrightable. This was a huge recent case, Star Athletica. They know nothing about copyright law and they just build intuitions from the world around them, but the intuitions are completely nonsense because they are made in ignorance of the actual law, what the law does, and why the law does it. I find it exhausting.


Your sentiment is probably correct in that there are many aspects of copyright law that are not strictly aligned with the public's intuition. But your example is a bit of a reach. Star Athletica was a relatively novel holding that a specific piece of clothing, when properly argued, could qualify as copyrightable as a semi-sculptural work of art; however, this quality of a given piece is separate from its character as clothing. In fact, the USSC in Star Athletica explicitly held that a designer/manufacturer has "no right to prohibit any person from manufacturing [clothing] of identical shape, cut, and dimensions" to clothing which they design/manufacture. That quote is directly from a discussion of the ability to apply copyright protections to clothing design.

I think the end result is that trying to argue technical legal issues around a poorly implemented statutory regime is always fraught with errors. That really leaves moral and commercial arguments outstanding, and advocacy should focus on those when not fighting to effect change in the law these copyright determinations are based on.

And just to be clear, this post does not constitute legal advice.


You're dismissing my comment because of what someone else said upthread?

I hate the desire to meta-comment about the site rather than argue on the merits.

We obviously don't know so much about how courts will interpret copyright with LLMs. There's a lot of arguments on all sides, and we're only going to know in several years after a whole lot of case law solidifies. There are so many questions, (fair use, originality, can weights be copyrighted? when can model output be copyrighted? etc etc etc). Not to mention that the legislative branch may weigh in.

This discourse by citizens who are informed about technology is essential for technology to be regulated well, even if not all participants in the conversation are as legally informed as you'd wish. Today's well-meaning intuition about what deserves copyright and why inform tomorrow's case law and legislation.


> Emphatically no they shouldn't. The capabilities are vastly different. Fair use should not apply to AI.

Fair use applies even to use of traditional algorithms, like the thumbnailing/caching performed by search engines. If I make a spam detector network, why should it not be covered by fair use?


Fair use applies to humans and the things they do (including using AI). It is not something that applies to algorithms in themselves. AIs are not people; the people who use them are people, and fair use may or may not apply to the things they do depending on the circumstances of whatever it is they do. The agent is always the human, not the machine.


True; consider the "it" in my question ("If I make a spam detector network, why should it not be covered by fair use?") as "my making (and usage) of the network".


No idea on the legality, but common sense suggests that the difference would be that a spam detector doesn't replace the products that it was trained on, while AI-generated "art" is intended to replace human artists.


> common sense suggests that the difference would be that a spam detector doesn't replace the products that it was trained on

The extent to which it supplants the original work is one of the fair use considerations.

I think it'd make more sense to have a stance of "current LLMs and image generators should be judged by fair use factors and I believe they'd fail", though I'd still disagree, instead of having machine learning models subject to a different set of rules than humans and traditional algorithms.


That is indeed the most common stance. There isn't nearly as much outcry over, say, image classification by LLMs, as there is over AI "art" generation.


The question is "is it a derivative work of the original?" - not whether it is a generative work.

If that was the distinction to be made, using ChatGPT as a classifier would be acceptable while using it to write new spam (see the "I am sorry" amazon listings of the other day) would be unacceptable.

If two different uses of a tool allow for both infringing and non-infringing uses (are photocopiers allowed to make copies(!) of copyrighted works?) it has generally been the case that the tool is allowed and the person with agency to either use the copyrighted work in an infringing or a non-infringing way is the one to come under scrutiny.

I believe that if OpenAI is found to have committed copyright infringement in training the model, then an argument that training a model on spam is also copyright infringement could reasonably be constructed.

If, on the other hand, OpenAI is found to have been sufficiently transformative in its creation of the model and only some uses are infringing, then it is the person who did the infringing (as with a photocopier, or a printer printing off a copy of a comic from the web) who should face the legal consequences.


Yeah, I really think it should fall on the user as opposed to the tool.


> Are there any humans that can produce artwork without ingesting inspiration from other art?

Logically, the answer to this is (almost certainly) yes, so you’ll need to discount this argument.

If the answer were no, then either an infinite number of humans have lived (such that there was always a previous artist to learn from), or it was true in the past but false in the present, which seems unlikely given humans brains have generally become more and not less sophisticated over time.

I presume what you’re missing here is that the brain can be inspired from other sources than human art. For example: nature; life experience; conversation.

Not making any other comment about what machines can or can't do, just wanted to point out this argument is invalid, as it comes up a lot and is probably grounded in ignorance around the artistic process. It's such a strange idea to suggest that the artistic process is ingesting lots of art to make more art. That's such a weird world view. It's like insisting every artist is making art the way Quentin Tarantino makes films.

I’ve spent a lot of time with artists, I’ve worked with them, I’ve been in relationships with artists, and I can tell you the great ones see the world differently. There’s something about their brains that would cause them to create art even if born on a desert island without other human contact. Some of them don’t even take an interest in other art.

In fact, those artists that _do_ make art heavily based on other artists’ work as suggested are often derided as “derivative” and “unoriginal”.


> Are there any humans that can produce artwork without ingesting inspiration from other art?

This sounds so detached from human experience that I am tempted to ask if you are a human or just a disembodied spirit that haunts the internet.

When the first neanderthal drew a deer on the walls of a cave, where did they get inspiration?

When a little child draws a tree for the first time, where do they draw inspiration? Do you think they were reviewing works of Picasso?

When the first man made an axe, chopped a tree, made a bed, sewed some clothes, discovered fire, where did they draw inspiration?

Do you not have eyes, ears, do you not perceive and get inspiration from the natural world around you?


> When a little child draws a tree for the first time, where do they draw inspiration? Do you think they were reviewing works of Picasso?

Are we going to discount the hundreds to thousands of artistic pictures children are exposed to? Or how about the teacher sitting up front demonstrating to the class how to draw a tree?

> Do you not have eyes, ears, do you not perceive and get inspiration from the natural world around you?

Learning to see as an artist is a distinct skill. Being able to take the super-compressed, simplified world view that the mind sees and put something recognizable on paper is a specialized skill that has to be developed. That skill is developed by doing it over and over again, often by copying the style of an artist that someone enjoys.

Or to put it another way, go to any period in history prior to the mid 20th century and art in a given region starts to share the same style, dramatically so, because people were inspired by each other, almost to a comical extent. (Financial reasons also had something to do with it as well of course, Artists paint/carve/engrave/etc what sells!)


Yeah, but that’s not really your sole source of inspiration. My son has been ‘inspired’ by the art of all the other kids in his kindergarten. Certainly by the time he gets to the age where he does it professionally he’s been inspired by an uncountable number of people.


Being inspired isn't against the law; copying is. It'd be one thing if this conversation could be had with useful terminology that's actually on point. Instead we have you, insisting that there is no creative process, there is only experiencing other art and inevitably copying (because apparently you think that's the only thing humans can do!). It's all so telling. Yet it's tragic, because so many here don't even realize it. I'm sad for your inability to engage with creativity and creative acts.


I think a lot of the discussion is where the balance of the creativity lies when a human uses a model (trained on other artistic works) to create art.

Is the result a copy, or perhaps a derivative work of the art in the training set?

Does the person using the model have authorship of the result?

Was it even okay to use the art to train the model and then share the resulting weights?

Are the resultant weights protected by copyright themselves?

I suspect the actual answers we'll come to on these topics will be full of nuance.


What % is his independent inspiration? 30%? 90%? There are certainly people for whom it was 90%. For most we don’t know.

We do know one thing for sure - that for AI it’s 0%


We don't know what percentage is independent inspiration for a person using the AI to create art.

Once upon a time it was a contentious idea that humans had significant authorship in photographs, which merely mechanically captured the world. What % is the camera's independent inspiration?

Here, we have humans guiding what's often a quite involved process of synthesis of past human (and machine) creation.


> We don't know what percentage is independent inspiration for a person using the AI to create art

The person using the AI doesn't matter in the equation. They aren't an artist, they're a monkey with a typewriter.

We're talking about the AI here, because it can generate the same images no matter which monkey with a typewriter is typing the prompts.


> The person using the AI doesn't matter in the equation. They aren't an artist, they're a monkey with a typewriter.

That's an opinion.

Does your opinion hold in all circumstances? If I spend 20 hours with an AI, iterating prompts, erasing portions of output and asking it to repaint and blend, and combining scenes-- did I do anything creative?


Of course the person using the AI matters. It's literally the same as holding a brush. You can give it a prompt, get a result and be unhappy with it, modify it or remove it, and proceed doing that until you are happy with what you have.

No matter how great the AI is, a monkey with an AI will never generate anything useful.


> Are there any humans that can produce artwork without ingesting inspiration from other art?

Do you think art was there before humans? Or humans made art?

If you believe the 1st proposition… please tell me about your very unique religion!

If not… you've answered your own question.


> But as long as it needs to be trained on the work of humans it should not be allowed to displace those people it relied on to get to where it is. Simple as that.

Do you feel the same way about tools like Google Translate?


Tbh I'm not familiar enough with how Google Translate is built, but if it's ingesting tons of people's work without their permission so it can be used to replace them then yes I do.


For what it's worth: that's pretty much how Translate works.

Translate operates at a large-chunk resolution, and one of the insights in solving the problem was the idea that you can often get a pretty-good-enough translation by swapping a whole sentence for another whole sentence. So they ingest vast amounts of pre-translated content (the UN publications are a great source, because they have to be published in the language of every member nation), align it for sentence- and paragraph-match, and feed the translation engine at that level.

It's created an uncanny amount of accuracy in the result, and it's basically fed wholesale by the diligent work of translators who were not asked their consent to feed that beast. Almost nobody bats an eye about this because the value (letting people using different languages communicate with each other) grossly outstrips the opportunity cost of lost human translator work, and even the translators are, in general, in favor of it; they aren't going to be displaced because (a) it doesn't really work in realtime (yet), (b) it can't handle any of the deeper signal (body language, tone, nuance) of face-to-face negotiation, and (c) languages are living things that constantly evolve, and human translators handle novel constructs way better than the machines do (so in high-touch political environments, they matter; the machines have replaced translators in roles like "rewriting instruction manuals" that were always pretty under-served in the first place).
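The alignment step described above can be sketched in a few lines. This is a toy, position-based aligner over made-up data, a naive stand-in for real length-ratio methods; the function names and the sample sentences are illustrative assumptions, not Translate's actual pipeline.

```python
# Toy sketch of parallel-corpus sentence alignment, the preprocessing
# step described above: take pre-translated documents that are already
# aligned at the paragraph level, split them into sentences, and pair
# the sentences up. A real aligner would also use sentence-length
# ratios and dynamic programming rather than position alone.

def split_sentences(text):
    # Crude sentence splitter: break on '.', drop empty pieces.
    return [s.strip() for s in text.split(".") if s.strip()]

def align_by_position(src_text, tgt_text):
    """Pair source and target sentences one-to-one by position."""
    src = split_sentences(src_text)
    tgt = split_sentences(tgt_text)
    return list(zip(src, tgt))

# Hypothetical pre-translated documents (e.g. from a multilingual
# publication such as UN proceedings).
en = "The meeting is adjourned. The report was adopted."
fr = "La séance est levée. Le rapport a été adopté."

for s, t in align_by_position(en, fr):
    print(s, "=>", t)
```

The resulting sentence pairs are what a translation engine of this kind is trained on, which is why the original translators' work is baked into the system whether or not they were asked.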


I would argue that Translate being fed by paid UN translators, who likely agreed to the use of their transcriptions in a TOS or something, is not an equal comparison to unpaid artists having their art posted online to sites that became part of a training set for for-profit models such as OpenAI's, without their consent. OpenAI is a nonprofit parent company, but it spawned a for-profit child company, OpenAI LP, which most of its staff work for and which is meant to return many-fold returns to its shareholders, who are effectively profiting from the labor of all the artists and sources in their training data.


Google Translate is very basic and nowhere near good if you already know both languages. It's useful if you're translating into your own language (you do the correction as you read), but it can lead to confusion the other way.


Interesting distinction.

If you can do the correction when reading, it seems reasonable to assume the reader in the opposite direction has the same correction capability.

I would expect the chance of confusion to be identical. The only difference is a matter of perspective, where in one case you are the reader and in one case you are the author.


Yes, they are identical. But I believe the reader is better armed to deal with the confusion, or at least to recognize the error, because it doesn't fit the context. When producing, you don't know the target language, so there's a better chance for errors to slip in unnoticed.

It's better for me to receive a text in the original language and translate it myself than to try to decipher something translated automatically.


Vastly inappropriate comparison: there are millions of pages of text out of copyright, so you can get a good translation engine using only the public domain.

That is not the case for art; the vast majority of art used by Midjourney is not public domain.


> vast majority of art used by midjourney is not public domain

Is that true? How did you establish that?


It's unfortunately also not great for translation. Language changes fast enough that training on content that went out of copyright is old data.


OpenAI has basically admitted it. Is OpenAI even disputing that it ingested all the works it's being sued over? Not as far as I can tell.


Huh? You’re aware that Midjourney and OpenAI are different things, right?


What about code? Or what if we eventually have robot labourers that are trained by observing human labourers?


Code has licenses too. And we've had very high profile lawsuits based on "copying code".

>what if we eventually have robot labourers that are trained by observing human labourers?

Interesting point, but by that point in time I don't think generative art will even be in the top 10 ethical dilemmas to solve for "sentient" robots.

As it is now, robots aren't the ones at the helm grabbing data for themselves. Humans give orders (scripts) and provide data and what/where to obtain that data.


What if the AI was solely trained on this person's work, then from that churned out a similar replacement that was monetized?


Well, art predates other professions by thousands of years, so it rightfully earned its privileges.


Just the people in this discussion thread, devs and entrepreneurs, have probably automated a huge amount of work. But here we are, bickering about AI and copyright like it's a new thing.


Redacted.


I'm confused about your point. Are you saying we should ban $10 mass produced shirts so that more people can make a living hand-crafting $100 shirts?


>What would you buy? $10 H&M or $100 hand-made shirt? - (My guess, if you could afford the later.)

This is an interesting example because even in the $100 case you are still talking about machine-augmentation. You can have a seamstress or a tailor customize patterns, using off the shelf textiles, for that order of magnitude price - but if you want to use custom built, exotic materials or many kinds combined, the cost is on the orders of thousands not hundreds. Also there is a large industry of just printing designs on stock-shirts, that has a different point effort-scale equilibria.

Thinking about how automation disintermediates is very important. In animation, productions often have key-frame artists in the pipeline who define scenes, and then others who flesh out all the details of those scenes. GenAI can potentially automate that second step: you could still have an artist produce a keyframe, then render that into a video.

Another big factor is style. One hypothesized reason that impressionism, absurdism, and abstract art all became prominent styles is photography. Once cheap machine-produced photography became available, there was less need for portrait artists. Further, portraiture was no longer high-status, which pushed trends in alternative directions.

All the experimentation and innovation going on right now will definitely settle into a different set of roles for artists, and new trends that they will seek to satisfy. Art style itself will change as a result of both what is technically possible and what is _not_ easily automatable, as artists seek prestige in the latter.


Too much wall of text for nothing. Nobody is stopping you from buying hand crafted masterpiece. Just get out of the way of progress.


Mass production hasn't killed art and never will.

What's killing art is this idea by a vocal minority of "artists" that they need to mass produce their work, enter the market, and attempt to make millions of dollars by selling and distributing it to millions.

That's not art. That's capitalism. That's competing to produce something that customers will want to buy more than what your competitors offer.

If you want to compete on the capitalistic marketplace, then compete on the capitalistic marketplace. But if you want to be an artist, be an artist.

Art is still alive and well and always will be. Every day I see people singing because they love singing, making pottery because they love making pottery, writing because they love writing. Whether other people love or enjoy their art, the artist may or may not care. Whether they can profit from their art, the artist may or may not care. But many billions of artists will keep creating, crafting, and designing day after day, and they will never be stopped by AI or anything else.


People do whatever they want with their own property. You have no right to steal it just because they want to monetise it. What’s killing art is stealing it en masse using procedural generators.


Redacted.


Jobs have never been less soul crushing, or more creative, in the history of humanity. And that becomes increasingly true every decade.

Do you know what a job does? What a company does? It contributes to society! It produces something that someone else values. That they value so much they're willing to pay for it. Being part of this isn't a bad thing. It's what makes society work.

A job/company entertains. It keeps things clean. It transports people to where they need to go. It produces. It gives people things they want. It creates tools, and paints, and nails, and shirts. I look out my window, and I see people delivering furniture, chefs cooking food and selling it out of trucks, keepers maintaining grounds, people walking dogs.

Being useful to the fellow members of your society for 40 hours a week is not "soul crushing."


Hey. Thanks. Sorry about wasting your time. Shouldn't have started in the first place. It was my fault for trying to make a silly point.

Too mid to understand your point.


(This is a response to your comment before you edited it.)

Find the intersection of something that people increasingly value, that you enjoy, and that you can compete at.

The best proof that people value something is that they're spending money for it. If people aren't spending money, they don't value it, and you probably don't want to go into it. If people aren't spending more and more money on it every year, then it's not increasing in value, and you probably don't want to go into it.

The best proof that you enjoy something is that you enjoyed it in the past. Things you liked as a kid, activities that excited you as a young adult, etc., are often the best candidates.

Look for intersections of the two things above. Do some Googling, do some research.

Finally, you need to be able to compete at it. If you do something worse than everyone else does it, then no one will pick you, because you're probably not being helpful. The simple answer to this is to practice to make yourself better. But most people don't want to do that. A better answer to this is to be more unique, so you can avoid the competition. Don't do a job that has a title, a college major, and millions of talented applicants. It's not that helpful to society to do something a hundred million other people can already do, which is why there's more competition and lower wages.

When you find the intersection of what's valued and what you enjoy, call up some people in those fields and ask what's rare. What in their area is needed. What are they missing. What is no one else doing.

Or just start your own company. That's the easiest way to be unique. But it's hard.

Finally, if you feel you're too "mid," then make sure your standards aren't crazy. Don't let society tell you that you need to be a millionaire with a yacht and designer clothes to be happy. Get a normal 9 to 5 with some purpose in it, that you can be proud of, that others appreciate. Live within your means and don't stress yourself out financially. Spend your free time doing things you like. Take care of your health, find good relationships, and treasure them. That's a happy life at any income. I know a bunch of miserable depressed rich people who are very good at making money and very bad at health/relationships/etc., which is the real stuff that life is made out of.



