
You can use GPT-3 for relationship advice, but should you?

Could you elaborate a little more on what you're asking? Perhaps an example?

I feel as if any use case like this, particularly one that crosses into interpersonal relationships and disagreement, risks being incredibly dangerous in terms of reinforcing your own biases.

It should go without saying that to even consider this, you should have a solid grasp of the fundamentals of these LLMs, their limitations and inherent biases, and their tendency to unconditionally agree with you regardless of your position – particularly in a dialog-style format.

One clearly problematic use case would be asking about an interpersonal conflict where there exists an established societal and cultural power imbalance.

E.g., describing a disagreement between a man and a woman and asking for insight into that conflict would be incredibly flawed, no matter how objectively it was depicted.

This isn't an attack on you, nor am I claiming you're doing anything at this level, but taken to the extreme it's something I can easily imagine someone doing after reading your post.

I’d really just like people to be cautious. After all, this is the same language model that told me a social media post contains hate speech due to “negative sentiment towards a protected class (racists)”.



It hasn't really played out that way.

With GPT-3, that bias is pretty explicit, and not hard to manage. I also don't treat its advice as necessarily /good/ advice. A good way to think about it is ideation.

A good example is asking how to communicate something. I sometimes miscommunicate what I'm thinking, and it often takes a lot of effort to figure out how to say something. With different prompts, GPT-3 will give different ways to say it, and I can pick one which works well and matches what I want to say. Let's say you promised someone something, need to change plans, and don't want them to think you're blowing them off. GPT-3 will often give me good ways to communicate something like that.
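For concreteness, here's roughly what that looks like in code. This is a minimal sketch, assuming the legacy pre-1.0 openai Python client and a GPT-3-era completion model; the prompt wording is just an illustration, not the exact one I use:

    import openai  # legacy pre-1.0 client: pip install "openai<1.0"

    openai.api_key = "sk-..."  # your API key here

    # Illustrative prompt: rephrasing a change of plans so it
    # doesn't read as blowing the other person off.
    prompt = (
        "I promised a friend I'd help them move this weekend, but I "
        "need to reschedule. Give three ways to tell them this that "
        "make clear I'm not blowing them off.\n"
    )

    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=200,
        temperature=0.9,  # higher temperature = more varied phrasings
        n=3,              # three independent completions to choose from
    )

    for choice in resp.choices:
        print(choice.text.strip())
        print("---")

Sampling several completions at a higher temperature is the point: you get noticeably different phrasings and pick the one that fits.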

Or another example: "I am struggling with ____. What are good strategies I can use to ____?"

GPT-3 will often come up with things which I wouldn't have come up with myself, or ones which would have taken me a lot of time. Just as often, it will give a useless, generic list of suggestions. At the end of the day, though, it provides helpful ideas often enough to be valuable.
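In code, that template is just string substitution. Same assumptions as the sketch above; the filled-in blanks here are made-up examples:

    def strategies_prompt(struggle, goal):
        # The arguments stand in for the blanks in the template above.
        return (f"I am struggling with {struggle}. "
                f"What are good strategies I can use to {goal}?")

    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=strategies_prompt("procrastination", "start tasks earlier"),
        max_tokens=200,
        temperature=0.7,
    )
    print(resp.choices[0].text.strip())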

GPT-3 definitely doesn't (and shouldn't) act as an arbiter in a conflict, but even in conflicts, there's often a solution which works for both sides. My major problem is that I tend to think slowly, and come up with those solutions too late. GPT-3 thinks a lot faster than I do, and having those ideas before they're moot is sometimes helpful.

As a footnote, human therapists tend to reinforce biases too. They only have one side of the story. I've seen people really damaged in the way you describe. In one case, both the person and the therapist were living in a (plausible-sounding) fantasy world.


Concrete example: I had someone reaching out to me quite a bit, wanting to interact. On the other hand, whenever we interacted, I had the distinct impression they didn't like me. It was not a person I knew well. I'm omitting a lot of personal information from the story, but that's a fair summary.

I asked GPT-3, with more context than I've given here, for advice on how to interpret that person's behavior.

GPT-3 gave several plausible explanations, most of which I hadn't thought of.



