Who are these people? This is ridiculous. :)

I guess with so many humans, there is bound to be a small number of people who have no ability to think for themselves and believe everything a chat bot is writing in their web browser.

People even have romantic relationships with these things.

I don’t agree with the argument that ChatGPT should “push back”. They have an example in the article where the guy asked for tall bridges to jump from, and ChatGPT listed them, of course.

Are we expecting the LLM to act like a psychologist, evaluating whether the user’s state of mind is healthy before answering questions?

Very slippery slope if you ask me.

    • 1984@lemmy.today (OP) · 8 days ago

      It would take another five seconds to find the same info on the web. Unless you also think we should censor the entire web and make it illegal to have any information about things that can hurt people, like knives, guns, stress, partners, cars…

      People will not be stopped from taking their own lives just because a chat bot won’t tell them the best way, unfortunately.

      • FartMaster69@lemmy.dbzer0.com · 8 days ago

        This is also a problem for search engines.

        A problem that, while not solved, has been somewhat mitigated by putting suicide prevention resources at the top of search results.

        That’s a bare minimum AI can’t meet. And in conversation with an AI, vulnerable people can get more than just information: there are confirmed cases of the AI encouraging harmful behavior, up to and including suicide.

  • chunes@lemmy.world · 7 days ago

    ffs, this isn’t ChatGPT causing psychosis. It’s schizo people being attracted like moths to ChatGPT because it’s very good at conversing in schizo.

    • Agent641@lemmy.world · 6 days ago

      CGPT literally never gives up. You can give it an impossible problem to solve, and tell it you need to solve it, and it will never, ever stop trying. This is very dangerous for people who need to be told when to stop, or need to be disengaged with. CGPT will never disengage.

  • acosmichippo@lemmy.world · 8 days ago

    > I don’t agree with the argument that ChatGPT should “push back”. They have an example in the article where the guy asked for tall bridges to jump from, and ChatGPT listed them, of course.

    But that’s an inherently unhealthy relationship, especially for psychologically vulnerable people. If it doesn’t push back, they’re not in a relationship; they’re just getting themselves thrown back at them.

    • TeddE@lemmy.world · 8 days ago

      Counterpoint: it is NOT an unhealthy relationship. A relationship has more than one person in it. It might be considered an unhealthy behavior.

      I don’t think the problem is solvable if we keep treating the Speak & Spell like it’s participating in this.

      Corporations are putting dangerous tools in the hands of vulnerable people. By pretending the tool is a person, we’re already playing their shell game.

      But yes, the tool seems primed for enabling self-harm.