I personally think of AI as a tool; what matters is how you use it. I like to think of it like a hammer. You could use a hammer to build a house, or you could smash someone’s skull in with it. But no one’s putting the hammer in jail.
A hammer doesn’t consume exorbitant amounts of power and water.
Do you think hammers grow out of the ground? Or that they magically spawn the building materials to work on?

Everything we do has a cost. We should definitely strive for efficiency and responsible use of resources. But to use this as an excuse, while you read this on a device made of metals mined by children, is pretty hypocritical.

No consumption is ethical under capitalism; take responsibility instead for what you do with that consumption.
Neither does an algorithm.
No, it actually does.
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
The algorithm is a bunch of math. It’s not until someone wants to run it that it needs any energy.
No shit, chumly. How many times a second do you think that math is “run”?
Yeah, except it’s a tool that most people don’t know how to use but everyone can use, leading to environmental harm, a rapid loss of media literacy, and a huge increase in wealth inequality due to turmoil in the job market.
So… It’s not a good tool for the average layperson to be using.
Stop drinking the Kool-Aid, bro. Think about these statements critically for a second. Environmental harm? Sure. I hope you’re a vegan as well.

Loss of media literacy: what does this even mean? People are doing things the easy way instead of the hard way? Yes, of course cutting corners is bad, but the problem is the conditions that lead to that person choosing to cut corners; the problem is the demand for maximum efficiency at any cost, for top numbers. AI is making a problem evident, not causing it. If you’re home on a Friday after your second shift of the day, fuck yeah you want to do things easy and fast. Literacy what? Just let me watch something funny.

Do you feel you’ve become more stupid? Do you think it’s possible? Why would other people, who are just like you, be these puppets to be brainwashed by the evil machine?

Ask yourself: how are people measuring intelligence? Creativity? How many people were in these studies, and who funded them? If we had the measuring instrument needed to actually make categorizations like “people are losing intelligence,” psychologists wouldn’t still be arguing over the exact definition of intelligence.

Stop thinking of AI as a bogeyman inside people’s heads. It is a machine. People use the machine to achieve mundane goals; that doesn’t mean the machine created the goal or is responsible for everything wrong with humanity.

Huge increase in inequality? What? Brother, AI is a machine. It is the robber barons that are exploiting you and all of the working class to get obscenely rich. AI is the tool they’re using. AI can’t be held accountable. AI has no will. AI is a tool. It is people that are increasing inequality. It is the system held in place by these people that rewards exploitation and encourages you to look at the evil machine instead. And don’t even use it; the less you know, the better. If you never engage with AI technology, you’ll believe everything I say about how evil it is.
Seriously, the AI hate gets old fast. Like you said, it’s a tool; get over it, people. 👁️👄👁️🤖 🏳️🌈
Edited. That’s what I get for trying to type fast while my dog is heading for the door after doing her business.
“Guns don’t kill people, people kill people”
Edit:
Controversial reply, apparently, but this is literally part of the script to a Philosophy Tube video (relevant part is 8:40 - 20:10):

We sometimes think that technology is essentially neutral. It can have good or bad effects, and it might be really important who controls it. But a tool, many people like to think, is just a tool. “Guns don’t kill people, people do.” But some philosophers have argued that technology can have values built into it that we may not realise.

…

The philosopher Don Ihde says tech can open or close possibilities. It’s not just about its function or who controls it. He says technology can provide a framework for action.

…

Martin Heidegger was a student of Husserl’s, and he wrote about the ways that we experience the world when we use a piece of technology. His most famous example was a hammer. He said when you use one you don’t even think about the hammer. You focus on the nail. The hammer almost disappears in your experience, and you just focus on the task that needs to be performed.

Another example might be a keyboard. Once you get proficient at typing, you almost stop experiencing the keyboard. Instead, your primary experience is just of the words that you’re typing on the screen. It’s only when it breaks, or doesn’t do what we want it to do, that it really becomes visible as a piece of technology. The rest of the time it’s just the medium through which we experience the world.

Heidegger talks about technology withdrawing from our attention. Others say that technology becomes transparent. We don’t experience it; we experience the world through it. Heidegger says that technology comes with its own way of seeing.

…

Now some of you are looking at me like “Bull sh*t. A person using a hammer is just a person using a hammer!” But there might actually be some evidence from neurology to support this.

If you give a monkey a rake that it has to use to reach a piece of food, the neurons in its brain that fire when there’s a visual stimulus near its hand start firing when there’s a stimulus near the end of the rake, too! The monkey’s brain extends its sense of the monkey body to include the tool!

And now here’s the final step. The philosopher Bruno Latour says that when this happens, when the technology becomes transparent enough to get incorporated into our sense of self and our experience of the world, a new compound entity is formed.

A person using a hammer is actually a new subject with its own way of seeing: ‘hammerman.’ That’s how technology provides a framework for action and being. Rake + monkey = rakemonkey. Makeup + girl = makeupgirl, and makeupgirl experiences the world differently, has a different kind of subjectivity, because the tech lends us its way of seeing.

You think guns don’t kill people, people do? Well, gun + man creates a new entity with new possibilities for experience and action: gunman!

So if we’re onto something here with this idea that tech can withdraw from our attention and in so doing create new subjects with new ways of seeing, then it makes sense to ask, when a new piece of technology comes along, what kind of people will this turn us into.
I thought that we were pretty solidly past the idea that anything is “just a tool” after seeing Twitler scramble Grok’s innards to advance his personal politics.
Like, if you still had any lingering belief that AI is “like a hammer”, that really should’ve extinguished it.
But I guess some people see that as an aberrant misuse of AI, and not an indication that all AI has an agenda baked into it, even if it’s more subtle.
Bad faith comparison.
The reason we can argue for banning guns and not hammers is specifically because guns are meant to hurt people. That’s literally their only use. Hammers have a variety of uses and hurting people is definitely not the primary one.
AI is a tool, not a weapon. This is kind of melodramatic.
GenAI is a bad tool that does bad things in bad ways.
then you have little understanding of how genai works… the social impact of genai is horrific, but to argue the tool is wholly bad conveys a complete or purposeful misunderstanding of context
I’m not an expert in AI systems, but here is my current thinking:

Insofar as ‘GenAI’ is defined as

AI systems that can generate new content, including text, images, audio, and video, in response to prompts or inputs
I think this is genuinely bad tech. In my analysis, there are no good use cases for automating this kind of creative activity in the way that the current technology works. I do not mean that all machine-assisted generation of content is bad, just the current tech we are calling GenAI, which is of the nature of “stochastic parrots”.
I do not think every application of ML is trash. E.g., AI systems like AlphaFold are clearly valuable and important, and in general the application of deep learning to solve particular problems in limited domains is valuable.

Also, if we one day have a genuinely sapient AI, then its creation would be of a different kind, and I think it would not be inherently degenerative. But that is not the technology under discussion. Applications of symbolic AI to assist in exploring problem spaces, or ML to solve classification problems, also seem genuinely useful.
But, indeed, all the current tech that falls under GenAI is genuinely bad, IMO.
GenAI is a great tool for devouring text and making practice questions, study guides, and summaries; it has been used as a marvelous tool for education and research. Hell, if set up properly, you can get it to give you the references and markers on your original data for where to find the answers to the questions on the study guide it made you.
It is also really good for translation and simplification of complex text. It has its uses.
But the oversimplification and the massively broad scope LLMs have taken on, plus the lack of proper training for the users, are part of the problem Capitalism is capitalizing on. They don’t care for the consumer’s best interest; they just care for a few extra pennies, even if those are coated in the blood of the innocent. But a lot of people just foam at the mouth when they hear “AI”.
Those are not valuable use cases. “Devouring text” and generating images are not things that benefit from automation. Nor is summarization of text. These do not add value to human life, and they don’t improve productivity. They are a complete red herring.
Who talked about image generation? That one is pretty much useless; for anything that needs to be generated on the fly like that, a stick figure would do.

Devouring text like that has been instrumental in learning for my students, especially for the ones who have English as a Second Language (ESL), so its usability in teaching would be interesting to discuss.

Do I think general open LLMs are the future? Fuck no. Do I think they are useless and unjustifiable? Neither. I think, at their current state, they are a brilliant beta test of the dangers and virtues of large language models: how they interact with the human psyche, and how they can help minorities, especially immigrants and other oppressed groups (hence why I advocated for providing a class on how to use it appropriately for my ESL students), bridge gaps in understanding, realize their potential, and have a better future.

However, we need to solve, or at least reduce, the grip Capitalism has on that technology. As long as it is fueled by Capitalism, enshittification, dark patterns, and many other evils will strip it of its virtues and sell them for parts.
“Video games are dangerous.”
So is rock music! And if you inject one Marijuana you can die!
Guns don’t kill people. People with guns kill people.
Ftfy
as an aussie, yeah, then you should stop people from having guns
i honestly wouldn’t be surprised if the total number of gun deaths in australia since we banned guns (1996) was less than the number of gun deaths in the US THIS WEEK
the reason is irrelevant: the cause is obvious… and id have bought the “to stop a tyrannical government” argument a few years ago, but ffs there’s all the kids dying in school and none of the stopping of tyrants, so maybe that’s a fucking awful argument and we have it right down under
I’ve never understood how a redneck prepper thinks he’s going to protect himself with a bunch of guns from a government that has millions of soldiers, tanks, machine guns, Sidewinder missiles, and nuclear weapons.
Secure your guns. Toddlers kill an insane number of people. https://www.snopes.com/fact-check/toddlers-killed-americans-terrorists/
Hey, that level of pedantry is my job
My skull-crushing hammer that is made to crush skulls and nothing else doesn’t crush skulls, people crush skulls
In fact, if more people had skull-crushing hammers in their homes, I’m sure that would lead to a reduction in the number of skull-crushings; the only thing that can stop a bad guy with a skull-crushing hammer is a good guy with a skull-crushing hammer
you’re absolutely right!
the ban on guns in australia has been disastrous! the number of good guys with guns has dropped dramatically and … well, so has the number of bad guys … but that’s a mirage! ignore our near 0 gun deaths… that’s a statistical anomaly!
Yet gun control works.
Same idea.
https://m.youtube.com/watch?v=xC03hmS1Brk