
One aspect I expect to see play out:

Any entity interested in either the truth, or in maintaining some kind of reputation, will need to keep humans in the loop when using these systems. Language models might multiply e.g. ad copy output 10x per worker, and allow micro-targeted campaigns that were impractical before, but they won't allow, say, a 1000x increase until or unless we can trust these systems not to produce undesirable output when unchecked by a human. Ads are tied to brands, which will hesitate to put their reputations in the hands of language models without a human verifying that the output is OK. Likewise, any entity wishing to use these to help write illuminating, factual works may see a large benefit, but it'll be limited: 2x, 5x, something like that.

Propaganda, though? Misinfo campaigns, astroturfing, where you hide behind sockpuppets and shell companies anyway? Who gives a shit if one out of every few hundred messages isn't quite right? Worst case, you burn a sockpuppet account. Those operations can leverage these models to the fullest. 1000x output per person involved, compared with, say, 2016 and 2020, may actually be something we can expect to see.



> Propaganda, though? Misinfo campaigns, astroturfing, where you hide behind sockpuppets and shell companies anyway?

Why stop there? The chatbot companies can introduce ads where the answers are influenced by whichever company buys them. Looking for information on nutrition? Some fast food company might "insert an ad" by subtly changing the text to favor whatever the company wants.



