I don't think the real issue is LLM posts. The issue with low quality on the Internet has always been quantity. The problem has always been humans who post too much, humans who use software to post too much, and now humans who use LLMs to post too much.
The problem with a medium that is completely free and unrestricted is that whoever posts the most sort of wins. I could post this opinion 30-40 times in this thread, using bots and alternative accounts, and completely move the discussion to be only this.
Someone using an LLM to craft a reply is not a problem on its own. Using it to craft a low-effort reply in 3 seconds just to get it out is the problem.
> Someone using an LLM to craft a reply is not a problem on its own.
No, someone using an LLM to craft a reply is a problem on its own. I want to hear what a human has to say, not a human filtered through a computer program. No grammar editing, nothing. Give me your actual writing or I'm not interested.
Do you though? Like what real difference does it make to you? Can you even tell if this has been passed through an LLM or not? If you can't tell, why does it matter?
I don't want to be robo-slopped at en masse or be fed complete fabrications but neither of those actually require an LLM. If you're going to use an LLM to gather your thoughts, I don't see a problem with that.
The difference is that you get to see the unfiltered, unique perspective of a real human being. Just like I don't want to talk to anyone through an Instagram or TikTok beauty filter or accent remover. If your thoughts are unordered, that's okay; I'll take your unordered thoughts over some smoothed-over crap.
Do people really have such a low opinion of themselves that they have to push every single thing through some layer of artifice?
> The difference is that you get to see the unfiltered, unique perspective of a real human being.
The implicit, unfounded assumption is that it's actually worth more than a well-written, orderly response. Most comments are kind of crap.
Not everyone is good at writing. In some cases, it might even be a disability aid. And if their comments aren't good, we have a system in place to rank them accordingly. Again, I think the only problem is quantity. If we're overrun with low-effort posts, no amount of ranking will help that.
> The implicit, unfounded assumption is that it's actually worth more than a well-written, orderly response.
It's not implicit or unfounded. The parent comment is explicitly saying that's what they prefer. And, as an actual human, their preference is intrinsically valid for them.
If I like my kid's crappy cooking over a Michelin-star meal made by a robot... then I get to like my kid's crappy cooking more. I have that right. There is no social consensus when it comes to what I want. You can't argue whether my preference is correct or not, it's my preference.
As a software developer and human being, I know people often say they prefer one thing while actually preferring something else. That's human nature.
People have strong feelings about AI in general, and that can definitely cloud what they will say about it. Everybody hates AI but, like CGI in movies, they likely only hate the AI or CGI that they notice.
Believing that, say, the use of AI will primarily enrich billionaires that are already doing societal harm is not clouding one's view of AI. It is one's view of AI.
To say otherwise is to say that worrying about lung cancer is clouding one's view of smoking.
> they likely only hate the AI or CGI that they notice.
No, this is simply not true at all. I dislike the use of AI even more when I don't notice it. My goal in getting on the Internet is to connect with other actual people and their creativity. I want actual people to be more connected to each other, and AI makes that worse, especially when it's good enough that people don't even realize they are being intermediated by corporations pumping out simulated humanity.
> Believing that, say, the use of AI will primarily enrich billionaires that are already doing societal harm is not clouding one's view of AI. It is one's view of AI.
That's fine. Nobody is forcing you to use AI. I dislike it when people force their ideas onto others.
> My goal in getting on the Internet is to connect with other actual people and their creativity.
It's too bad your goal doesn't include interacting with people who don't speak your language and use AI to translate for them. Or people who struggle with writing in general. I don't think it's as black and white as you make it out to be.
> Nobody is forcing you to use AI. I dislike it when people force their ideas onto others.
I'm still being forced to live in a world filled with people who do use it and whose behavior affects me.
We had the President of the United States posting AI-manipulated propaganda on social media. Millions of voters saw that, regardless of whether or not I happen to personally use ChatGPT.
It doesn't matter if I light up a cigarette myself if I have to spend all day in a crowded bar where everyone else is smoking.
> I don't think it's as black and white as you make it out to be.
I'm not saying it's black and white. All I'm saying is that your description of someone's strong feelings about AI as "clouding" their stance is incorrect. You can be clear-headed about feeling something is a large net negative for the world.
> I'm still being forced to live in a world filled with people who do use it and whose behavior affects me.
My point... way at the top... is exactly that. People's behavior does have an effect but it always has.
The President of the United States posting manipulated propaganda is the problem; using AI now just makes it more obvious. It's actually better, right now, that it is so obvious. But anyone can, and has, done that with lesser tools to better effect.
People posting bullshit on the Internet has always been a problem. I'm not even sure how an AI ban is enforceable. While I don't think I have the solution, I think it makes more sense to look at this as a content problem instead of a tool problem. Both quality and quantity.
If you had the LLM write the comment, then it wasn't your thoughts.
I sometimes wonder if people aren't forgetting why we're on this platform.
The goal is to have interesting discourse and maybe grow as a human by broadening your horizons. The likelihood of that happening with LLMs talking for you is basically nil, hence... why even go through the motions at that point? It's not like you get anything for upvotes on HN.
> If you had the LLM write the comment, then it wasn't your thoughts.
But what if I provided the LLM my thoughts? That's actually how I use LLMs in my life -- I provide it with my thoughts and it generates things from those thoughts.
Now if I'm just giving it your comment and asking it to reply, then yes, those aren't my thoughts. Why would I do that? I think the answer goes back to my original point.
If I'm telling you my thoughts and then you go and tell a friend those thoughts, would you say those are still my thoughts even though I wasn't the one expressing them directly to your friend?
I like to think about it in terms of output-to-prompt ratio. For HN comments, I think an output ratio of 1 or less is _probably_ fine. Examples:
- Translating (relatively) literally from one language to another would be ~1:1.
- Automatic spelling/grammar correction is ~1:1.
- Using an LLM to help you find a concise way of expressing what you mean, i.e. giving it extra content to help it suggest a way of phrasing something that has the connotation you want, would be <1:1.
Expansion (output > prompt) is where it gets problematic, at least for HN comments: if you give it an 8 word prompt and it expands it to 50, you've just wasted the reader's time -- they could've read the prompt and gotten the same information.
(expansion is perfectly fine in a coding context -- it often takes way fewer words to express what you want the program to do than the generated code will contain.)
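The ratio heuristic above can be sketched in a few lines of code. This is just an illustration of the idea as I understand it, with hypothetical function names, and the 1.0 threshold is taken from the comment rather than from any established rule:

```python
def output_to_prompt_ratio(prompt: str, output: str) -> float:
    """Ratio of generated words to prompt words (word count as a rough proxy)."""
    prompt_words = len(prompt.split())
    output_words = len(output.split())
    # Guard against an empty prompt to avoid division by zero.
    return output_words / max(prompt_words, 1)

def looks_like_expansion(prompt: str, output: str, threshold: float = 1.0) -> bool:
    """True when the LLM produced more words than the author supplied."""
    return output_to_prompt_ratio(prompt, output) > threshold
```

So an 8-word prompt expanded into a 50-word comment scores 6.25 and gets flagged, while a translation or grammar pass of roughly equal length stays at ~1.0 and does not.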
As for expansion, that might just be the risk we take. I've been downvoted on Reddit for being "too verbose" in my replies, and I'm a human. And perhaps just reading the prompt in that case wouldn't give you more information; the LLM might actually have some insight that is relevant to the conversation. What's the difference between that and googling for something and pasting it in?
The linked rule does not make such a distinction, and I don't see how this rule could be enforced with such a caveat, either.
Hence no, none of these examples should be okay, even if pure translation and grammar checking are effectively impossible to detect and therefore probably pointless to talk about.
And the last one is often detectable and very clearly against the rule; I'm not sure how you can come to any other conclusion.
> I don't see how this rule could be enforced with such a caveat
I don't see how this rule is going to be enforced anyway. Many people posting with AI help won't get noticed at all, and about 100 times as many people are going to be accused of using AI because they use proper grammar.
Amusingly, your comment carries some of the tropes of AI authorship ("is not a problem... is the problem"), but the fact that it's not shaped like a profound insight being discovered in every line is what makes it human.
How much of AI writing will pass under the radar when the big companies aren't all maximizing to generate the most engagement hacking content in a chatbot UI? Maybe it'll still stand out for being low quality, but I'm not sure. There's lots of low quality human authored content.
Not sure where my comment is going, I just kinda rambled.