deleted by creator
You probably create AI slop and present it proudly to people.
AI should replace dumb monotonous shit, not creative arts.
deleted by creator
That’s what the OP is about, so…
Has AI made you unable to read?
The objections to AI image gens, training sets containing stolen data, etc. all apply to LLMs that provide coding help. AI web crawlers scrape git repositories, compiling massive training sets of code to train LLMs.
deleted by creator
But your opinion is off topic.
deleted by creator
Code and art are just different things.
Art is meant to be an expression of the self and a form of communication. It’s therapeutic, it’s liberating, it’s healthy and good. We make art to make ourselves, and keep ourselves, human. Chatbot art doesn’t help us, and in fact it makes us worse - less human. You’re depriving yourself of enrichment when you use a chatbot for art.
Code is mechanical and functional, not really artistic. I suppose you can make artistic code, but coders aren’t doing that (maybe they should, maybe code should become art, but for now it isn’t and I think that’s a different conversation). They’re just using tools to perform a task. It was always soulless, so nothing is lost.
Art is also functional. Specifically, paid opportunities for art perform some type of function. Not all art is museum-type contemplative work or the highest level of self-expression. Some art exists purely to serve as a banner on the side of a web page notifying people of a new feature. That isn’t really enriching to create. It’s a rather low form of self-expression, similar to code created to be functional.
I think you’re also underestimating AI image gens as a form of self expression. Obviously it’s more freeing to be able to draw or paint or create a thing yourself. But people often lack the prerequisite skills to create the thing they have in their mind. I often see career artists compare their work and style from years ago to their works today, showing off extreme improvement - meaning that even talented artists sometimes lack the skills necessary to create the “perfect” version of what they had in their mind.
With these models, you can get quite specific - not just “draw me in a Studio Ghibli style,” but meticulously describing a scene and visual style - and they will render it. There is still creative satisfaction in that process, like how a movie director tells the actors how to accomplish a scene but doesn’t actually play a role in the film themselves.
“i am fine with stolen labor because it wasn’t mine. My coworkers are falling behind because they have ethics and don’t suck corporate cock but instead understand the value in humanity and life itself.”
deleted by creator
Then most likely you will start falling behind… perhaps in two years, as it won’t be noticeable quickly, but there will be an effect in the long term.
deleted by creator
I know senior devs who fell behind just because they use too much google.
This is demonstrably much worse.
deleted by creator
I use GPT to prototype some Ansible code. I feel AI slop is just fine for that; and I can keep my brain freer of YAML and Ansible, which saves me from alcoholism and therapy later.
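For context, this is the sort of boilerplate-heavy Ansible YAML being prototyped - a minimal, hypothetical sketch (the host group and package name are illustrative, not from any real playbook):

```yaml
# Hypothetical prototype: install nginx on a "web" host group
# and make sure the service is running. Names are illustrative.
- name: Configure web servers
  hosts: web
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Boilerplate like this is verbose but low-stakes, which is why it’s a plausible fit for chatbot drafting - as long as the output is reviewed before it touches real hosts.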
AI isn’t magic. It isn’t inevitable.
Make it illegal and the funding will dry up and it will mostly die. At least, it wouldn’t threaten the livelihood of millions of people after stealing their labor.
Am I promoting a ban? No. AI has its use cases. But is the current LLM and image-generation AI BS good? No. Should it be banned? Probably.
Illegal globally? Unless there’s international cooperation, funding won’t dry up - it will just move.
That is such a disingenuous argument. “Making murder illegal? People will just kill each other anyway, so why bother?”
deleted by creator
This isn’t even close to what I was arguing. Like any major technology, all economically competitive countries are investing in its development. There are simply too many important applications to count. It’s a form of arms race. So the only way a country may see fit to ban its use in certain applications is if there are international agreements.
Wait, you don’t have to like it, but ethical reasons shouldn’t stop you?
You could say fascism is inevitable. Just look at the elections in Europe or the situation in the USA. Does that mean we can’t complain about it? Does that mean we can’t tell people fascism is bad?
deleted by creator
They said the same thing about cloning technology. Human clones all around by 2015, it’s inevitable. Nuclear power is the tech of the future, worldwide adoption is inevitable. You’d be surprised by how many things declared “inevitable” never came to pass.
deleted by creator
Every 3D TV fan said the same. VR enthusiasts have said it for two decades as well. Almost nothing, and most certainly no tech, is inevitable.
deleted by creator
Sir, this is a Wendy’s. Personally attacking me doesn’t change the fact that AI is still not inevitable. The bubble is already deflating; the public has started to grow indifferent, even annoyed by it. Some places are already banning AI for a myriad of reasons, one of them being how insecure it is to feed sensitive data to a black box. I used AI heavily and have read all the papers. LLMs are cool tech; machine learning is cool tech. They are not the brain-rotted marketing that capitalists have been spewing like madmen. My workplace experimented with LLMs, and management decided to ban them: they are insecure, they are awfully expensive and resource-intensive, and they were making people less efficient at their work. If it works for you, cool, keep doing your thing. But that doesn’t mean it works for everyone. No tech is inevitable.
deleted by creator
We had a custom-made model running in a data center behind a proxy and encrypted connections. It was atrocious: no one ever knew what it was going to do, it spewed hallucinations like crazy, it was awfully expensive, it didn’t produce anything of use, it refused to answer things it was trained to do, and it randomly leaked sensitive data to the wrong users. It was not going to assist, much less replace, any of us, not even in the next decade. Instead of falling for the sunk cost fallacy like most big corpos, we just had it shut down, told the vendor to erase the whole thing, wrote off the costs as R&D, and decided to keep doing our thing. Due to the nature of our sector, we are the biggest player, and no competitor, no matter how advanced the AI they use, will ever get close to touching us. But then again, due to our sector, it doesn’t matter. Turns out AI is a hindrance and not an asset to us; such is life.