
IDEA: spotted(?) first enshittification of openai models: artificial follow up questions

Open rdslw opened this issue 9 months ago • 0 comments

[[ This issue tries to get feedback from the llm community. ]]

Call me paranoid ;) but in the last few days I've noticed that far more LLM responses from OpenAI end with follow-up questions enticing the user to ask for more. More importantly, those questions often carry unnatural parenthetical explanations that sound very marketing-speak to me, and I never saw them before.

I'm talking about OpenAI API usage, through llm.

It varies across models and prompts (i.e. it's not specific to either), and it appears in prompts that previously did not elicit such follow-up questions.

Hence I thought: has anybody else noticed this change of pattern? Can we run some experiments to assess it?

I call it enshittification because it strongly reminds me of Facebook/Meta's optimization of "time on site", and of social feeds showing more and more "relevant" posts in the interest of the org (here OpenAI), not the user.

If so, they must have made some changes to the training corpus :-o

Examples:

- Would you also want me to show how to replace a crutch successfully? (Could be helpful to know!)
- I can also break down exactly HOW you could generalize their steps for your own browser debugging! 🚀 (Want me to?)
- I can tailor the exact code for you! 🚀 Would you like a real example as well?
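As a starting point for an experiment, here is a minimal sketch of how one might measure this: a regex-based heuristic that flags responses ending with a follow-up-style question, so you can compare the flag rate for the same prompts across model versions or dates. The patterns and function names are my own invention, tuned only to the examples above, not any official detection method.

```python
import re

# Hypothetical heuristic patterns, derived from the example phrasings above
# ("Would you like...?", "Want me to?", "I can also break down...").
FOLLOW_UP = re.compile(
    r"(would you (also )?(like|want)|want me to|shall i|"
    r"i can (also )?(tailor|break down|show))[^?]*\?\s*$",
    re.IGNORECASE,
)

def has_follow_up(response: str) -> bool:
    """Return True if the response ends in a follow-up-style question."""
    return bool(FOLLOW_UP.search(response.strip()))

def follow_up_rate(responses: list[str]) -> float:
    """Fraction of responses flagged as ending with a follow-up question."""
    if not responses:
        return 0.0
    return sum(map(has_follow_up, responses)) / len(responses)
```

Running a fixed prompt set through llm at two points in time and comparing `follow_up_rate` on the outputs would give at least a rough, reproducible signal instead of anecdotes.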


rdslw avatar Apr 26 '25 14:04 rdslw