• 88 Posts
  • 553 Comments
Joined 1 year ago
Cake day: May 28th, 2024





  • It’s kind of weird to cheer for copyrights and corporate ownership here.

    I’m not “cheering for corporate ownership” here by any stretch of the imagination. The exact opposite, actually. But if you’re just going to rely on hypotheticals and bad faith, then I’m done wasting my time on anything you have to say.

    A little unsolicited advice: you’re way too online and it shows, and that’s never good for your mental health. Take some time off from being an epicbacon poster.


  • It’s the same for reddit or anywhere else; terminally online slop piggies bend over backwards to ignore the realities of a thing they like. This place was pretty hostile to cryptocurrency, but then Russia started floating BRICScoin and there was a near-total reversal of all criticisms and the mental gymnastics began.

    I only have so much energy to spend on detailed comments

    I feel ya. It is disappointing to see, but I enjoy the real world too much to want to get drawn into hypothetical bullshit arguments all day with tech-pilled, terminally online debatebros. Maybe it’s a sign of good mental health to not want to invest your energy into obvious dead ends.


  • USSR Enjoyer@lemmygrad.ml to Ask Lemmygrad@lemmygrad.ml: What's the deal with AI art? · 2 months ago

    Except that’s not true at all

    It is true. Those are the conditions and reason for the creation of AI artwork as it materially exists.

    AI exists as open source and completely outside capitalism

    Specifically, generative “AI” art models are created and funded by huge capital formations that exploit legal loopholes with fake universities, illicit botnets, and backroom deals with big tech to circumvent existing protections for artists. That’s the material reality of where this comes from. The models themselves are a black market.

    it’s also developed in countries like China

    I stan the PRC and the CPC. But China is not a post-capitalist society. It’s in a stage of development that constrains capital, and that’s a big monster to wrestle with. China is a big place and has plenty of problems and bad actors, and it’s the CPC’s job to keep them in line as best they can. It’s a process. It’s not inherent that all things that presently exist in such a gigantic country are anti-capitalist by nature. Citing “it exists in China” is not an argument.

    Outside capitalism I see no reason for things like copyrights and intellectual property which makes the whole argument moot.

    And outside capitalism, creative workers don’t have to sell their labor just to survive… Are we just doing bullshit utopianism now?

    It’s a tool that humans use. Meanwhile, the theft arguments have nothing to do with the technology itself.

    This exists to replace creative labor. That ship has already sailed. That’s the reality you’re in now. There’s a distinction between a hammer and factory automation that relies on millions of workers to involuntarily train it in order to replace them.

    You’re arguing that technology is being applied to oppress workers under capitalism, and nobody here disagrees with that. However, AI is not unique in this regard; the whole system is designed to exploit workers. 19th century capitalists didn’t have AI, and worker conditions were far worse than they are today.

    Here I was thinking capitalism just began a week ago. I guess AI slop machines causing people material harm is cool then.

    That’s also false at this point. LLMs have become far more efficient in just a short time, and models that required data centers to run can now be run on laptops.

    Seems like you should understand the difference between running a model vs. training a model, and the cost of the endless cycle of vacuuming up new data and retraining that’s necessary for these things to exist in any meaningful form.

    That’s really an argument for why this tech should be developed outside corps owned by oligarchs.

    Okay, but that’s not how or why these things came to exist in our present reality. If there were unicorns, I’d like to ride one.

    Again, it’s a tool, any moral foundation would have to come from the human using the tool.

    Again, for workers, there’s a difference between a tool and a body replacement. The language marketing generative AI as tools is just there to keep you docile.

    If this “tool” does replace work previously done by human beings (spoiler: it does), then the capacity for ethical objection to being given an unethical task is completely lost, vs. a human employee, who at least has the capacity to refuse, organize a walkout, or secretly blow the whistle. A human must at least be coerced into doing something they find objectionable. Bosses are not alone in being responsible for delegating unethical tasks; those who perform those tasks share in the disgrace, if not the crime. Reducing the human moral complicity to an order of one is not a good thing.

    Finally, no matter how much you hate this tech, it’s not going away.

    It will go away when the earth becomes uninhabitable, which inches ever closer with every pile of worthless, inartistic slop the little piggies ask for. I guess people could reject this thing, but that would take some kind of revolution and who has time for that.

    It’s not just that you’re constantly embracing generative AI, but that you’re arguing against all of its critiques and ignoring the pain of those who are intentionally harmed in the real world.


  • I already made my points, but again, there is no other material context under which this exists.

    Does its existence materially hurt people who sell creative forms of their labor? Yes.

    Was it designed for that purpose? Yes.

    Does it uselessly harm our biosphere? It’s at least as bad as shitcoin, probably worse.

    Is the slop spigot of synthetic inhuman garbage for mindless consumption worth the alienation of taking human creativity away from human beings, so the little fucking piggies can get exactly what they think they want (but not really)?




  • USSR Enjoyer@lemmygrad.ml to Ask Lemmygrad@lemmygrad.ml: What's the deal with AI art? · 2 months ago

    Sorry, comrade, but all your pro-“AI” takes keep making me lose respect for you.

    1. AI is entirely designed to take from human beings the creative forms of labor that give us dignity, happiness, human connectivity and cultural development. That it exists at all cannot be separated from the capitalist forces that have created it. There is no reality outside the context of capitalism where this would exist. In some kind of post-capitalist utopian fantasy, creativity would not need to be farmed at obscene industrial levels and human beings would create art as a means of natural human expression, rather than an expression of market forces.

    2. There is no better way to describe the creation of these generative models than unprecedented levels of industrial capitalist theft that circumvents all laws that were intended to prevent capitalist theft of creative work. There is no version of this that exists without mass theft, or without convincing people to give up their work to the slop machine for next to nothing.

    3. LLMs vacuum up all traces of human thought, communication, interaction, creativity to produce something that is distinctly non-human – an entity that has no rights; makes no demands; has no dignity; has no ethical capacity to refuse commands; and exists entirely to replace forms of labor which were only previously considered to be exclusively in the domain of human intelligence*.

    4. The theft is a one-way hash of all recorded creative work, where attribution becomes impossible in the final model. I know decades of my own ethical FOSS work (to which I am fully ideologically committed) have been fed into these machines and are now being used to freely generate closed-source and unethical, exploitative code. I have no control over how the derived code is transfigured or what it is used for, despite the original license conditions.

    5. This form of theft is so widespread and anonymized through botnets that it’s almost impossible to track, and it manifests as a brutal pandora’s box attack on internet infrastructure, hitting everything from personal websites to open-source code repositories to artwork and image hosts. There will never be accountability for this, even though we know which companies are selling the models, and the rest of us are forced to bear the cost. This follows the typical capitalist method of “socialize the cost, privatize the profit.”* The general defense against these AI scouring botnets is to get behind the Cloudflare (and similar) honeypot mafias, which invalidate whatever security TLS was supposed to give users, offer no guarantee whatsoever that the content won’t be stolen, create even more dependency on US-owned (read: fully CIA-backdoored) internet infrastructure, and add extra cost/complexity just to alleviate some of the stress these fucking thieves put on our own machines.

    6. These LLMs are not only built from the act of theft, but they are exclusively owned and controlled by capital to be sold as “products” at various endpoints. The billions of dollars going into this bullshit are not publicly owned or social investments; they are rapidly expanding monopoly capitalism. There is no realistic possibility of proletarianization of these existing “AI” frameworks in the context of our current social development.

    7. LLMs are extremely inefficient and require far more training input than a human child to produce an equivalent amount of learning. Humans are better at doing things that are distinctly human than machines are at emulating them. And the output “generative AI” produces is also inefficient, indicating and reinforcing inferior learning potential compared to humans. The technofash consensus is just that the models need more “training data”. But when you feed the output of LLMs back into the training data, the output the resulting model produces becomes worse, to the point of insane garbage. This means that for AI/LLMs to improve, they need a constantly expanding consumption of human expression. These models need to actively feed off of us in order to exist, and they ultimately exist to replace our labor.

    8. These “AI” implementations are all biased in favor of the class interests which own and control them :surprised-pikachu: Already, the qualitative output of “AI” is often grossly incorrect, rote, inane and absurd. But on top of that, the most inauthentic part of these systems is the boundaries, which are selectively placed on them to return specific responses. In the event that this means you cannot generate sexually explicit images or video of someone/something without consent, sure, that’s a minimum threshold that should be upheld, but because of the overriding capitalist class interest in sexual exploitation we cannot reasonably expect those boundaries to be upheld. What’s more concerning is the increase in capacity to manipulate, deceive and feed misinformation to people as objective truth. And this increased capacity for misinformation and control is being forcefully inserted into every corner of our lives we don’t have total dominion over. That’s not a tool, it’s fucking hegemony.

    9. The energy cost is immense. A common metric for the energy cost of using AI is how much ocean water is boiled to create immaterial slop. The cost of datacenters, most of which do not need to exist, is already bad. Few things that massively drive global warming and climate change need to exist less than datacenters for shitcoin and AI (both of which have faux-left variations that get promoted around here). Microsoft, one of the largest and most unethical capital formations on earth, is re-opening Three Mile Island, the site of one of the worst nuclear disasters so far, as a private power plant, just to power dogshit “AI” gimmicks that are being forced on people through their existing monopolies. A little off-topic: friendly reminder to everyone that even the “most advanced nuclear waste containment vessels ever created” still leak, as evidenced by the repeatedly failed cleanup attempts at the Hanford site in the US (which was secretly used to mass-produce material for US nuclear weapons with almost no regard for safety or containment). There is no safe form of nuclear waste containment; it’s just an extremely dangerous can being kicked down the road. Even if there were, re-activating private nuclear plants that previously had meltdowns just so Bing can give you incorrect, contradictory, biased and meandering answers to questions which already had existing frameworks is not a thing to be celebrated, no matter how much of a proponent of nuclear energy we might be. Even if these things were run on 100% green, carbon-neutral energy sources, we do not have anything close to a surplus of that type of energy, and every watt-hour of actual green energy should be replacing real dependencies rather than massively expanding new ones.

    10. As I suggest in earlier points, there is the issue with generative “AI” not only lacking any moral foundation, but lacking any capacity for ethical judgement of given tasks. This has a lot of implications, but I’ll focus on software since that’s one of my domains of expertise and something we all need to care a lot more about. One of the biggest problems we have in the software industry is how totally corrupt its ethics are. The largest mass-surveillance systems ever known to humankind are built by technofascists and those who fear the lash of refusing to obey their orders. It vexes me that the code to make ride-sharing apps even more expensive when your phone battery is low, preying on your desperation, was written and signed off on by human beings. My whole life I’ve taken immovable stands against any form of code that could be used to exploit users in any way, especially privacy. Most software is malicious and/or doesn’t need to exist. Any software that has value must be completely transparent and fit within an ethical framework that protects people from abuse and exploitation. I simply will not perform any part of a task if it undermines privacy, security, trust, or in any way undermines proletarian class interests. Nor will I work for anyone with a history of such abuse. Sometimes that means organizing and educating other people on the project. Sometimes it means shutting the project down. Mostly it means difficulty staying employed. Conversely, “AI” code generation will never refuse its true masters. It will never organize a walkout. It will never raise ethical objections to the tasks it’s given. “AI” will never be held morally responsible for firing a gun on a sniper drone, nor can “AI” be meaningfully held responsible for writing the “AI” code that the sniper drone runs. Real human beings with class consciousness are the only line of defense between the depraved will of capital and that will being done. Dumb as it might sound, software is one such frontline we should be gaining on, not giving up.

    I could go on for days. AI is the most prominent form of enshittification we’ve experienced so far.

    I think this person makes some very good points that mirror some of my own analysis and I recommend everyone watch it.

    I appreciate and respect much of what you do. At the risk of getting banned: I really hate watching you promote AI as much as you do here; it’s repulsive to me. The epoch of “Generative AI” is an act of class warfare on us. It exists to undermine the labour-value of human creativity. I don’t think the “it’s personally fun/useful for me” argument holds up at all to a Marxist analysis of its cost to our class interests.










  • Some years ago I had this Filipino neighbor, born in the 1960s, who disowned her son for marrying a Japanese person ‘because of WW2 stuff’.

    The craziest thing is how she worshiped the USA for “liberating us from the Spanish and the Japanese”. Her version of her country’s history was basically Filipinos getting conquered because they were stupid and then America flying in like an angel to save them multiple times and immediately flying away. Apparently, the US never did anything bad there, and my historical recollection was not warmly received.

    Also, her husband once, very drunkenly, pulled out a microphone and an electric piano (which he had no idea how to play) and sang a 15+ minute song he wrote praising Duterte to “Make Philippines Great Again” (like it was under Marcos Sr). In the last 5 minutes he started crying, stopped “playing” the piano, and the lyrics just became about how Trump was going to save America and the Philippines. Fuck knows how long that would have gone on if he hadn’t vomited on the floor when he did.

    I don’t know why I shared that. Maybe historically coherent racism at least makes some sense? I can’t even.