Women get paid less -> articles about women getting paid less exist in the training data. Possibly the dataset also includes actual payroll data that leaked out of some org?
And no matter how much people hype it, ChatGPT is NOT smart enough to realize that men and women should be paid equally. That would require actual reasoning, not the funny fake reasoning/thinking that LLMs do. (The DeepSeek model I tried to run locally reasoned very explicitly, when I asked about Tiananmen Square, about how it's a CHINESE LLM and needs to give the appropriate information; the end result was that it "couldn't answer about specific historical events".)
Dataset bias, what else?
ChatGPT and other LLMs aren't smart at all. They just parrot back what was fed into them.
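To make the "parroting" point concrete, here's a minimal sketch: a toy bigram model trained on a made-up, skewed corpus. It's obviously nothing like ChatGPT's actual architecture (the corpus, sentences, and counts are invented for illustration), but it shows how a purely statistical text generator reproduces whatever associations sit in its data, with no notion that the outputs *should* be equal.

```python
# Toy bigram language model: a sketch of how statistical text generation
# regurgitates the skew in its training data. Corpus and numbers are made up.
import random
from collections import defaultdict

corpus = (
    "the engineer he earns a high salary . "
    "the engineer he earns a high salary . "
    "the engineer she earns a lower salary . "
)
tokens = corpus.split()

# Count how often each word follows the previous one.
bigrams = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams[prev][nxt] += 1

def next_token(prev):
    """Sample the next token in proportion to how often it followed `prev`."""
    candidates = bigrams[prev]
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights)[0]

# Generate a sentence: the model simply mirrors the 2:1 skew in the corpus --
# there is no reasoning step where it could decide the salaries should match.
random.seed(0)
word = "engineer"
out = [word]
for _ in range(6):
    word = next_token(word)
    out.append(word)
print(" ".join(out))
```

Scale that up by a few billion parameters and a web-sized corpus full of pay-gap articles (and possibly leaked payroll data), and you get outputs that reflect the gap rather than correct for it.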