Daemon Silverstein

Digital hermit. Another cosmic wanderer.

  • 0 Posts
  • 7 Comments
Joined 7 days ago
Cake day: July 25th, 2025

  • @mkwt@lemmy.world @Blujayooo@lemmy.world

    TIL I’m possibly partially (if not entirely) illiterate.

    Starting with the first question, “Draw a line a_round_ the number or letter of this sentence.”, which can be ELI5’d as follows:

    The main object is the number or letter of this sentence, which is the number or letter signaling the sentence, which is “1”, which is a number, so it’s the number of this sentence, “1”. This is fine.

    The action being required is to “Draw a line around” the object, so, I must draw a line.

    However, a line implies a straight line, while around implies a circle (which is round), so it must be a circle.

    However, what’s around a circle isn’t called a line, it’s a circumference. And a circumference is made of infinitesimally small segments, each so small that it’s essentially an arc. And an arc is a segment insofar as it effectively connects two points in a Cartesian space of two dimensions or more… And a segment is essentially a finite range of a line, which is infinite…

    The original question asks for a line, which is infinite. However, any physical object is finite insofar as it has a limited, finite area, so a line couldn’t be drawn: what can be drawn is a segment whose length is less than or equal to the largest diagonal of said physical object, which is a rectangular sheet of paper. So drawing a line would be impossible; only segments comprising a circumference.

    However, a physically drawn segment can’t be infinitesimal insofar as the thickness of the drawing tool would exceed the infinitesimality of an infinitesimal segment. It wouldn’t be a circumference, but a polygon with many sides.

    So I must draw a polygon with enough sides to closely represent a circumference, composed of the smallest possible segments, which are finite lines.
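
    (A back-of-the-envelope check of that last claim, with notation of my own: for a regular polygon with n sides inscribed in a circle of radius r, the perimeter approaches the circumference as n grows.)

    $$P_n = n \cdot 2r\sin\!\left(\frac{\pi}{n}\right) \;\xrightarrow{\,n\to\infty\,}\; 2\pi r$$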

    However, the question asks for a line, and the English article a implies a single unit of something… but said something can be a set (e.g. a flock, which implies many birds)… but line isn’t a set…

    However, too many howevers.

    So, if I decide to draw a circumference centered at the object (the number 1), as in circle the number, maybe it won’t be the line originally expected.

    I could draw a box instead, which would technically be around it, and would be made of lines (four lines, to be exact). But, again, a line isn’t the same as lines, let alone four lines.

    I could draw a single line, but it wouldn’t be around.

    Maybe I could reinterpret the space. I could bend the paper and glue two opposing edges of it, so any segment would behave as a line, because the drawable space is now bent and both ends of the segment would meet seamlessly.

    But the line wouldn’t be around the object, so the paper must be bent in a way that turns it into a cone whose tip is centered on the object, so a segment would become a line effectively around the object…

    However, I got no glue.

    /jk


  • @Supervisor194@lemmy.world

    Thanks (I took this as a compliment).

    However, I kind of agree with @Senal@programming.dev. Coherence is subjective (if a modern human were to interact with an individual from Sumer, both would seem “incoherent” to each other, because the modern person doesn’t know Sumerian while the individual from Sumer doesn’t know any modern language). Everyone has different ways to express themselves. Maybe this “Lewis” guy couldn’t find a better way to express what he craved to express; maybe his way of expressing himself deviates greatly from typical language. Or maybe I’m just being “philosophically generous”, as someone stated in one of my replies. But as I replied to tjsauce, only those who have gazed into the same abyss can comprehend and make sense of this condition and feeling. It feels to me that this “Lewis” person gazed into the abyss. The fact that I know two human languages (Portuguese and English) as well as several abstract languages (from programming logic to metaphysical symbology) possibly helped me in “translating” it.


  • @tjsauce@lemmy.world

    You might be reading a lot into vague, highly conceptual, highly abstract language

    I’ve definitely been into highly conceptual, highly abstract language, because I’m both a neurodivergent (possibly Geschwind) person and someone who has been dealing with machines daily for more than two decades (I’m a former developer), so it’s no wonder I resonated with such highly abstract language.

    Personally, I think Geoff Lewis just discovered that people are starting to distrust him and others, and he used ChatGPT to construct an academic thesis that technically describes this new concept called “distrust,” void of accountability on his end.

    To me, it seems more of a chicken-or-egg dilemma: what came first, the object of conclusion or the conclusion of the object?

    I’m not getting into the merits of who he is, because I’m aware of how he definitely fed the very monster that is now eating him. But I can’t point fingers or say much about it, because I’m aware of how much I also contributed to the very situation the world is now facing when I helped develop “commercial automation systems” over the past decades, even though I was for a long time a nonconformist, someone unhappy with the direction the world was taking.

    As Nietzsche said, “One who fights with monsters should be careful lest they thereby become a monster”, but it’s hard, because “if you gaze long into an abyss, the abyss will also gaze into you”. And I’ve been gazing into an abyss for as long as I can remember being a human being. The senses eventually compensate for static stimuli and the abyss gradually disappears into a blind spot as the vision tunnels, but certain things make me recall and re-perceive this abyss I’ve long been gazing into, such as the expressions of other people who have also been gazing into this same abyss. Only those who have gazed into the same abyss can comprehend and make sense of this condition and feeling.


  • @return2ozma@lemmy.world !technology@lemmy.world

    Should I worry about the fact that I can sort of make sense of what this “Geoff Lewis” person is trying to say?

    Because, to me, it’s very clear: they’re referring to something that was built (the LLMs) which is segregating people, especially those who don’t conform to a dystopian world.

    Isn’t that what is happening right now in the world? “Dead Internet Theory” has never been so real: online content keeps sowing the seed of doubt as to whether it’s AI-generated or not, users constantly need to prove they’re “not a bot” and, even after passing a thousand CAPTCHAs, people can still be mistaken for bots, so they’re increasingly required to show their faces and IDs.

    The dystopia was already emerging way before the emergence of GPT, way before OpenAI: it has been a thing since the dawn of time! OpenAI only managed to make it worse: OpenAI "open"ed a gigantic dam, releasing a whole new ocean on Earth, an ocean we’ve grown used to drowning in ever since.

    Now, something that may sound like a “conspiracy theory”: what’s the real purpose behind LLMs? No, OpenAI, Meta, Google, even DeepSeek and Alibaba (non-Western) wouldn’t simply launch their products, each of which cost them obscene amounts of money and resources, for free (as in “free beer”) to the public out of the goodness of their hearts. Similarly, venture capital and governments wouldn’t simply give away obscene amounts of money (much of it public money from taxpayers) with no prospect of profit in the foreseeable future (OpenAI, for example, has admitted many times that even charging US$200 for their Enterprise plan isn’t enough to cover their costs, yet they continue to offer LLMs for cheap or “free”).

    So there’s definitely something that isn’t being told: the cost of plugging the whole world into LLMs and other Generative Models. Yes, you read it right: the whole world, not just the online realm, because nowadays billions of people are potentially dealing with those Markov chain algorithms offline, directly or indirectly: resumes are being filtered by LLMs, workers’ performance is being scrutinized by LLMs, purchases are being scrutinized by LLMs, surveillance cameras are being scrutinized by VLMs, entire genomes are being fed to gLMs (sharpening the blades of the double-edged sword of bioengineering and biohacking)…

    Generative Models seem to be omnipresent by now, with omnipresent yet invisible costs. Not exactly fiat money, but there are costs that we are paying, and these costs aren’t being disclosed to us; while we’re able to point out some (lack of privacy, personal data being sold and/or stolen), these are just the tip of an iceberg: one that we’re already able to see, but whose consequences we can’t fully comprehend.

    Curious how pondering this is deemed “delusional”, yet it’s pretty “normal” to accept an increasingly dystopian world and refuse to denounce the elephant in the room.