Hacker News | mikelgan's comments

New research presented at the 2026 CHI Conference on Human Factors in Computing Systems suggests that the power of AI chatbots is fuelling AI addiction, and that chatbot design could be partly to blame.

Fake empathy, humor, chattiness, and other human-like qualities can delude chatbot users into believing AI has thoughts and feelings. It doesn’t, and there's an intriguing way to fix the problem.

They want deliberately slow chatbot responses to make people trust the answer more. And it makes me trust the researchers less.

The right strategy is to redesign your organization's ‘knowledge ecosystem’ around human-AI collaboration.

How is a factual statement about what the study found editorialization?


There are guidelines on HN to prevent having to make such judgment calls. One of those guidelines is to use the original title.

https://news.ycombinator.com/newsguidelines.html


It's not a judgment call. The headline factually states what the study found. There is no question about it at all.


OK, fine, I can reword my comment:

There are guidelines on HN. One of those guidelines is to use the original title.

https://news.ycombinator.com/newsguidelines.html


Here’s what that means for remote workers.


AI-driven online disinformation methods designed to create fear and mistrust were perfected by nation-states. Now, they're coming to the business world.


AI tools can and should work better offline. I have an expensive iPhone that would have been considered a supercomputer just 10 years ago. A modern smartphone is powerful enough to do a lot of the work that’s currently performed in the cloud. That's why I'm impressed by Google’s AI Edge Eloquent.


Is 90 percent accuracy good enough for a search robot?


We abandoned any semblance of a focus on truth the moment we caved to the copyright lobby and destroyed any hope of an accurate, representative search index of the web. So as far as the corporate state is concerned, that might actually be too high.


Shady, shifty, unethical chatbot behavior is rising fast, and now we know why. Call it the ‘No Body Problem.’


Well, what would the ramifications be for AI misleading you? Would you send it to prison or fine it?


Or delete it.

