New research presented at the 2026 CHI Conference on Human Factors in Computing Systems suggests that the power of AI chatbots is fuelling AI addiction, and that chatbot design could be partly to blame.
Fake empathy, humor, chattiness, and other human-like qualities can delude chatbot users into believing AI has thoughts and feelings. It doesn't — and there is an intriguing way to fix the problem.
AI-driven online disinformation methods designed to create fear and mistrust were perfected by nation-states. Now, they're coming to the business world.
AI tools can and should work better offline. I have an expensive iPhone that would have been considered a supercomputer just 10 years ago. A modern smartphone is powerful enough to do a lot of the work that’s currently performed in the cloud. That's why I'm impressed by Google’s AI Edge Eloquent.
We abandoned any semblance of a focus on truth the moment we caved to the copyright lobby in order to completely destroy any semblance of an accurate/representative search index of the web. So as far as the corporate state is concerned, that might actually be too high.