• skisnow@lemmy.ca
    3 days ago

    I’m constantly mystified at the huge gap between all these “new model obliterates all benchmarks/passes the bar exam/writes PhD thesis” stories and my actual experience with said model.

    • CheeseNoodle@lemmy.world
      3 days ago

      Likely those new models are variants trained specifically on the exact material needed to perform those tasks, essentially passing the bar exam as if it were an open-book test.

      • Tomassci@sh.itjust.works
        3 days ago

        Reminds me of a video that starts with the fact that you can’t convince image-generating AI to draw a wine glass filled to the brim. AI is great at replicating the patterns it has seen and been trained on, like full wine glasses, but it doesn’t actually know why or how those patterns work. It doesn’t know the things we humans know intuitively, like “filled to the brim means more liquid than full”. It knows the what but doesn’t get the why.

        The same could apply to testing. AI knows how to solve test pages, but that skill wouldn’t carry over exactly if you tried to apply it in real life.