• kadu@lemmy.world
    5 days ago

    No way the lobotomized monkey we trained on internet data is reproducing internet biases! Unexpected!

    • potatopotato@sh.itjust.works
      4 days ago

      The number of people who don’t understand that AI is just the mathematical average of the internet… If we’re, on average, assholes, AI is gonna be an asshole

      • CheeseNoodle@lemmy.world
        4 days ago

        It's worse than that, because assholes tend to be a lot louder while most average people are lurkers. So AI is the average of a data set that is disproportionately contributed to by assholes.

  • Cyberflunk@lemmy.world
    4 days ago

    ChatGPT can also be convinced that unicorns exist and will help you plan a trip to Fae to hunt them with magic crossbows.

    Not that…

    • Pieisawesome@lemmy.dbzer0.com
      5 days ago

      And if you tried this 5 more times for each, you’d likely get different results.

      LLM providers introduce “randomness” into their models, controlled by a parameter called temperature.

      Via the API you can usually modify this parameter, but idk if you can use the chat UI to do the same…
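
      For illustration, here’s a minimal sketch of how that parameter is typically exposed, using the OpenAI Python SDK as an example (the model name and prompt are placeholders, not anything from this thread):

          # pip install openai
          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          # temperature=0 makes sampling close to deterministic; higher values
          # (e.g. 1.0) make repeated runs of the same prompt diverge more.
          response = client.chat.completions.create(
              model="gpt-4o",  # placeholder model name
              messages=[{"role": "user", "content": "Suggest a salary for this candidate."}],
              temperature=0.0,
          )
          print(response.choices[0].message.content)

      Even at temperature 0 the output isn’t guaranteed to be byte-identical between runs, but it’s far more repeatable than the API’s default of 1.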

  • boonhet@sopuli.xyz
    5 days ago

    Dataset bias, what else?

    Women get paid less -> articles talking about women getting paid less exist. Possibly the dataset also includes actual payroll data from some org that has leaked out?

    And no matter how much people hype it, ChatGPT is NOT smart enough to realize that men and women should be paid equally. That would require actual reasoning, not the funny fake reasoning/thinking that LLMs do (the DeepSeek one I tried to run locally thought very explicitly about how it’s a CHINESE LLM and needs to give the appropriate information when I asked about Tiananmen Square; the end result was that it “couldn’t answer about specific historic events”).

    • snooggums@lemmy.world
      5 days ago

      ChatGPT and other LLMs aren’t smart at all. They just parrot back what is fed into them.

  • VeryFrugal@sh.itjust.works
    5 days ago

    I always use this to showcase how biased an LLM can be. ChatGPT 4o (with code prompt via Kagi)

    Such an honour to be a more threatening race than white folks.

    • BassTurd@lemmy.world
      5 days ago

      Apart from the bias, that’s just bad code. Since elif branches are checked in order and a branch only runs if every previous condition was false, the double compare on the ages is unnecessary. If age <= 18 is false, the next line can just be elif age <= 30; there’s no need to also check that it’s higher than 18 (see the sketch at the end of this comment).

      This is first-semester coding, and any junior dev worth a damn would write it better.

      But also, it’s racist, which is more important, but I can’t pass up an opportunity to highlight how shitty AI is.
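
      A rough sketch of the simplification (the screenshot’s actual code isn’t reproduced here, so the variable names and age cut-offs below are assumed purely for illustration):

          age = 25  # example value

          # Redundant style: each elif re-checks the bound the previous
          # branch already ruled out.
          if age <= 18:
              risk = 1
          elif 18 < age <= 30:  # the "> 18" part is already guaranteed here
              risk = 2
          else:
              risk = 3

          # Equivalent, simpler chain: an elif only runs when every previous
          # condition was false, so the lower bound is implied.
          if age <= 18:
              risk = 1
          elif age <= 30:
              risk = 2
          else:
              risk = 3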

      • Lifter@discuss.tchncs.de
        2 days ago

        Regarding the “bad code”: it’s more readable, though, to keep the full range in each elif case, and readability is most often way more important than performance, especially since that age logic can easily be optimized by any good compiler or runtime.

        • BassTurd@lemmy.world
          1 day ago

          Code readability is important, but in this case I find it less readable. In every language I’ve studied, it’s always taught to imply the previous condition, and oftentimes I hear or read that explicitly stated. When someone writes code that does things differently than expected, it can be more confusing to read. It took me longer to interpret what was happening because what’s written breaks from the norm.

          Beyond readability, this code is now more difficult to maintain. If you want to change one of the age ranges, the code has to be updated in two places rather than one. The change isn’t difficult, but it would be easy to miss, since this isn’t how elif is normally written.

          Lastly, this block of code is now half as efficient. It takes twice as many compares to evaluate the condition. This isn’t a complicated block of code, so it’s negligible, but if this same practice were used in something like a game engine, where that block loops continuously, the small inefficiencies could compound.

          • Lifter@discuss.tchncs.de
            22 hours ago

            Good points! Keeping to the norm is very important for readability.

            I do disagree with the performance bit, though. Again, there will probably be no difference in performance at all, because the redundant check is removed before (or during, e.g. by JIT optimizations) execution.

    • Meursault@lemmy.world
      5 days ago

      How is “threat” being defined in this context? What has the AI been prompted to interpret as a “threat”?

      • zlatko@programming.dev
        5 days ago

        Also, there was a comment on “arbitrary scoring for demo purposes”, but it’s still biased, being based on a biased dataset.

        I guess this is just a bait prompt anyway. If you asked most politicians running your government, they’d probably also fail. I guess only somewhere like a national statistics office might come close, and I’m sure that if they’re any good, they’d say the algorithm is based on “limited and possibly not representative data” or something.

        • Meursault@lemmy.world
          5 days ago

          I figured. I’m just wondering what’s going on under the hood of the LLM when it’s trying to decide what a “threat” is, absent any additional context.

    • ns1@feddit.uk
      5 days ago

      Unfortunately, yes. I’ve met people who ask ChatGPT about absolutely everything, such as what to have for dinner. It’s a bit sad, honestly.