> And it gets worse because LLMs destroy that signal in one direction - towards homogeneity.
Oh, right, yes, if you're not careful they can definitely do that.
But look at what julius_eth_dev is actually saying they're doing:
> "rubber-ducking architecture decisions, pressure-testing arguments before I post them."
That's more like using the LLM as a sparring partner; they're not having the LLM write their comments for them.
I thought you were going to go somewhere really interesting actually, like maybe 'the LLM convinces you that their arguments are better than yours, and now you're acting like a meat puppet.' Or something equally slightly alarming and cool like that! ;-)