• 275 Posts
  • 168 Comments
Joined 1 year ago
Cake day: March 17th, 2024



  • You have yet to refute the deduction-based argument:

    If you use the machine to think for you, you will stop thinking.

    Not thinking leads to a degradation of thinking skills.

    Therefore, using the machine to think for you will lead to a degradation of thinking skills.

    This is not inductive reasoning, like a study, where you look at data and induce a conclusion. This is pure reasoning. Refute it.
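
    In schematic form, this is a hypothetical syllogism (the letters below are shorthand I am introducing, not part of the original wording):

    ```latex
    % U: you use the machine to think for you
    % S: you stop thinking
    % D: your thinking skills degrade
    U \to S, \quad S \to D \;\vdash\; U \to D
    % hypothetical syllogism: implication is transitive
    ```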

    That’s a lot of non-scientific blogs to talk about the non-scientific study I pointed out. Still no objective evidence.

    They are a bunch of blogs of people sharing that, after utilizing AI for extended periods of time, their ability to solve problems degraded because they stopped thinking and sharpening their cognitive skills.

    So what would satisfy your need for objective evidence? What would I need to show you for you to change your mind? How would a satisfactory study be conducted?

    I didn’t say much about the “hominem” but I think you’re defining Microsoft?

    “Defining Microsoft”… I didn’t define Microsoft?

    Did you mean “Defend”? What do you mean by “defend”? Again, ad hominem. Instead of substantiating your claim that the document doesn’t count, you attack the ones who made it.


    All your dismissals and you have yet to refute the argument all these people make:

    If you use the machine to think for you, you will stop thinking.

    Not thinking leads to a degradation of thinking skills.

    Therefore, using the machine to think for you will lead to a degradation of thinking skills.

    All you have to do is refute this argument, and then it will be up to me to defend myself. Refute the argument. It’s deductive reasoning.




  • Microsoft did a study on this, and it found that those who made heavy use of AI tools said they felt dumber:

    “Such consternation is not unfounded. Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved. As Bainbridge [7] noted, a key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise.”

    Cognitive ability is like a muscle. If it is not used regularly, it will decay.

    It also said it made people less creative:

    “users with access to GenAI tools produce a less diverse set of outcomes for the same task, compared to those without. This tendency for convergence reflects a lack of personal, contextualised, critical and reflective judgement of AI output and thus can be interpreted as a deterioration of critical thinking.”

    LINK







  • If all it takes to be a “real artist” is drawing proficiently

    I think you are misunderstanding the argument.

    Pro-AI folk say that being anti-AI as a digital artist is hypocrisy, because you also used a computer. Here it is shown that, despite not using a computer, the artist is still able to create their art, because there is more to the visual arts than the tools used to make it. This puts to rest the idea that using digital art tools somehow makes opposing AIGen hypocritical.

    The arguer is not saying that an inability to draw proficiently excludes someone from being an artist. They are just saying that real artists do not need a computer program to create their art, much like the performance or installation artists you mentioned.


  • nor do I have the talent

    And why do you think you do not have “talent”? What is that “talent” you speak of? Is it something people are born with? What is the problem with what you make, if all you care about is what people put into art?

    Art is whatever people put into it

    “It” what? The pronoun “it” is referring to what? Art? Without this clarification I cannot accurately make sense of anything else in your response.

    Keep in mind that, when defining a term, you cannot use that term in its own definition.














  • This is a matter of coding a good enough neuron simulation, running it on a powerful enough computer, with a brain scan we would somehow have to get - and I feel like the brain scan is the part that is farthest off from reality.

    So… Sci-Fi technology that does not exist. You think the “Neurons” in the Neural Networks of today are actually neuron simulations? Not by a long shot! They are not even trying to be. “Neuron” in this context means “thing that holds a number from 0 to 1”. That is it. There is nothing else.
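
    To make that concrete, here is a minimal sketch in plain Python (the names are mine, not from any library) of everything a “neuron” is in these networks: a weighted sum squashed into the range (0, 1):

    ```python
    import math

    def sigmoid(x):
        # Squashes any real number into the range (0, 1).
        return 1.0 / (1.0 + math.exp(-x))

    def neuron(inputs, weights, bias):
        # A "neuron" is nothing more than the number this returns:
        # a weighted sum of its inputs passed through a squashing function.
        return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

    activation = neuron([0.5, 0.1], [0.8, -0.3], 0.05)
    print(activation)  # ~0.60 -- the "neuron" just holds this float
    ```

    There is no biology anywhere in that. A “Neural Network” is just a great many of these numbers wired together.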

    That’s an unnecessary insult - I’m not advocating for that, I’m stating it’s theoretically possible according to our knowledge, and would be an example of a computer surpassing a human in art creation. Whether the simulation is a person with rights or not would be a hell of a discussion indeed.

    Sorry about the insulting tone.

    I do also want to clarify that I’m not claiming the current model architectures will scale to that, or that it will happen within my lifetime. It just seems ridiculous for people to claim that “AI will never be better than a human”, because that’s a ridiculous claim to have about what is, to our current understanding, just a computation problem.

    That is the reason why I hate the term “AI”. You never know whether the person using it means “Machine Learning Technologies we have today” or “Potential technology which might exist in the future”.

    And if humans, with our evolved fleshy brains that do all kinds of other things can make art, it’s ridiculous to claim that a specially designed powerful computation unit cannot surpass that.

    Yeah… you do know that not every problem is computable, right? The classic example is the halting problem.
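
    A sketch of the classic diagonal argument (the function names are mine; `halts` is the hypothetical decider, which is exactly the thing that cannot exist):

    ```python
    # Suppose, for contradiction, someone hands us a function
    #   halts(program, input) -> bool
    # that always correctly decides whether program(input) terminates.

    def paradox(program):
        if halts(program, program):  # hypothetical oracle, cannot exist
            while True:              # if it would halt, loop forever
                pass
        else:
            return                   # if it would loop, halt immediately

    # paradox(paradox) halts if and only if it does not halt,
    # a contradiction, so no such halts() can ever be written.
    ```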

    Also, I’m not interested in discussing Sci-Fi future tech. At that point we might as well be talking about Unicorns, since it is theoretically possible for future us to genetically modify an equine and give it a horn on the forehead.


    Also, why would you want such a machine anyway?


  • It’s not a matter of if “AI” can outperform humans, it’s a matter of if humanity will survive to see that and how long it might take.

    You are not judging what is here. The tech you speak of, that will surpass humans, does not exist. You are making up a Sci-Fi fantasy and acting like it is real. You could say it may perhaps, at some point, exist. At that point we might as well start talking about all sorts of other technically possible Sci-Fi technology which does not exist beyond fictional media.

    Also, would simulating a human and then forcing them to work non-stop count as slavery? It would. You are advocating for the creation of synthetic slaves… But we should save moral judgement for when that technology is actually on the horizon.

    AI is a bad term because, when people hear it, they start imagining things that don’t exist and operating in the imaginary rather than in what is actually here. And what is here cannot go beyond what is already in the data, as is the nature of the minimization of the Loss Function.
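
    As a toy illustration of what “minimizing the Loss Function” actually means (a minimal sketch with made-up numbers, not any real model): the machine only ever nudges its parameters so its output moves closer to the data it was given.

    ```python
    # Toy "training": fit one parameter w so that the output w*x matches
    # the training targets as closely as possible (least-squares loss).
    data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # (input, target) pairs

    w = 0.0
    lr = 0.01  # learning rate
    for _ in range(1000):
        # d(loss)/dw for loss = sum((w*x - y)^2)
        grad = sum(2 * (w * x - y) * x for x, y in data)
        w -= lr * grad  # "learning" = stepping downhill on the loss

    print(w)  # converges to ~1.99: the best mimic of the data, nothing more
    ```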


  • I’m not sure what the rest of the message has to do with the fundamental assertion that ai will never, for the entire future of the human race, “outperform” a human artist. It seems like it’s mostly geared towards telling me I’m dumb.

    It is my attempt at an explanation of how the machine fundamentally works, which, as an obvious consequence of its nature, cannot but mimic. I’m pretty sure you do not know the inner workings of the “Learning”, so yes… I’m calling you incompetent… in the field of Machine Learning. I even gave you a link to a great in-depth explanation of how these machines work! Educate yourself, so that your ignorance (in this specific field) vanishes.

    Correct, humans are flesh bags. Prove me wrong?

    1. Human is "A member of the primate genus Homo, especially a member of the species Homo sapiens, distinguished from other apes by a large brain and the capacity for speech. "

    2. Flesh is "The soft tissue of the body of a vertebrate, covering the bones and consisting mainly of skeletal muscle and fat. "

    3. Flesh does not have brains or the capacity for speech

    4. Therefore, Humans are not flesh

    I suppose I should stop wasting my time talking to you then, as you see me as nothing more than an inanimate object with no consciousness or thoughts, as is flesh.


  • You have stated that AI will improve. Improvement implies being able to classify something as better than something else. You have then stated that art is subjective and therefore a given piece cannot be classified as better than another. This is a logical contradiction.

    I then questioned your standards for “good”. By what criteria are you measuring the pieces in order to determine which one is “better”, and thus be able to determine whether the AI’s output is improving or not? I then tried to, as simply and as briefly as I could, give a basic explanation of how the Learning process actually works. Admittedly I did not do a good job. Explanations of this could take up to two or three hours, depending on how much you already know.

    Then comes some philosophizing about what makes a piece “good”. First, questioning your focus on the output pieces being good. Then, inquiring what is the harm of a “bad” image, in the context of “Why not draw yourself? Too afraid of making something that is not «perfect»?” Then I asked why it is that you refuse, in your analysis of the “goodness” of an image, to go beyond “I like it.”/“I do not like it.”, “It looks professional.”/“It looks amateurish.”. Such statements are not meaningful critiques of a piece; they are reports of the feelings of the observer. The subjectivity of art we all speak of. However, it is indeed possible to create a more objective critique of a piece, one which goes beyond our tastes. To critique a piece, one must try to look at what one believes the piece is trying to accomplish, then evaluate whether or not the piece is succeeding at it. If it is, why? If it isn’t, why not?

    Then, as an addendum, I stated that these functions we call AI have diminishing returns. This is a consequence of the whole loss function thing which is at the heart of the Machine Learning process.

    Then some deceitful definitions. The words “Neuron” and “Learning”, in the context of Machine Learning, do not have the same meaning as they do colloquially. This is something which fools many people, and which marketing agencies abuse to market “AI”. Neuron does not mean “simulation of a biological neuron”, it means “number from 0 to 1”. That means that a Neural Network is actually just a network of numbers between 0 and 1, like 0.2031. Likewise, learning in Machine Learning is not the same as biological learning. Learning here is just shorthand for “minimizing the value of the Loss Function”.


    I could add that even the name AI is deceitful, as it has been used as a marketing buzzword since its creation. Arguably, one could say it was created to be one. It causes people to judge the Function not for what it is, as any reasonable actor would, but for what it isn’t: judged by what, maybe, it might become, if only we [AI companies] get more funding. This is nothing new. The same thing happened in the first AI craze last century. Eventually people realized the promised improvements were not coming, and the hype and funding subsided. Now the cycle repeats: they found something which can superficially be considered “intelligent” and are doing it again.


  • Just because a drawn picture won once means squat

    True, a sample of one means nothing, statistically speaking.

    AI can be used alongside drawing

    Why would I want a function drawing for me if I’m trying to draw myself? In what step of the process would it make sense to use?

    for references for instance

    AI is notorious for not giving the details someone would pick a reference image for. Linkie

    It’s a tool like any other

    No, they are not “a tool like any other”. I do not understand how you could see going from drawing on a piece of paper to drawing in much the same way on a screen as equivalent to an autocomplete function operated by typing words into one or two prompt boxes and adjusting a bunch of knobs.


    Also, just out of curiosity, do you know what “back propagation” is, in the context of Machine Learning? And “Neuron” and “Learning”?


  • improve to the point of being good.

    So… first you say that art is subjective, then you say that a given piece can be classified as “good” or “bad”. Which is it?

    Your whole shebang is that it [GenAI] will become better. But, if you believe art to be subjective, how could you say the output of a GenAI is improving? How could you objectively determine whether the function is getting better? The function’s definition of success is its loss function, which is nothing but a measure of how mismatched the output for a given description is to its corresponding image. In other words, how well it copies the database.

    Also, an image is “good” by what standards?

    Why are you so obsessed with the image looking “good”? There is a whole lot more to an image than just “does it look good”. Why are you so afraid of making something “bad”? Why can you not look at an image any deeper than “I like it.”/“I do not like it.”, “It looks professional.”/“It looks amateurish.”? These aren’t meaningful critiques of the piece; they’re just reports of your own feelings. To critique a piece, one must try to look at what one believes the piece is trying to accomplish, then evaluate whether or not the piece is succeeding at it. If it is, why? If it isn’t, why not?

    Also, these number networks suffer from diminishing returns.


    Also:

    In the context of Machine Learning “Neuron” means “Number from 0 to 1” and “Learning” means “Minimize the value of the Loss Function”.


  • We’re all flesh bags, what are you talking about?

    So, in your eyes, all humans are but flesh with no greater properties beyond the flesh that makes up part of them? In your eyes, people are just flesh?

    based on flesh bag neural nets

    That is false. Back propagation is not based on how brains work; it is simply a method to minimize the value of a loss function. That is what “Learning” means in AI. It does not mean “learn” in the traditional sense, it means minimize the value of the loss function. But what is the loss function? For Image Gen it is, quite literally, how different the output is from the database.

    The whole “it works like brains do” thing is nothing more than a loose analogy taken too far by people who know nothing about neurology. The source of that analogy is the phrase “Neurons that fire together wire together”, which comes with a million asterisks attached. Of course, those who know nothing about neurology don’t care.

    The machine is provided with billions of images with accompanying text descriptions (written by whom?). You then input the description of one of the images and figure out a way to change the network so that, when the description is inputted, its output will match, as closely as possible, the accompanying image. Repeat the process for every image and you have a GenAI function. The closer the output is to the provided data, the lower the loss function’s value.
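
    A heavily simplified sketch of that loop (pseudocode-style Python; `model`, `encode`, `difference` and `adjust_weights` are stand-ins I am inventing for illustration, not any real API):

    ```python
    # Training loop as described above, over (description, image) pairs.
    for description, image in dataset:
        output = model(encode(description))  # generate an image from the text
        loss = difference(output, image)     # how far from the database example?
        adjust_weights(model, loss)          # back propagation: nudge the network
                                             # so the loss shrinks next time
    # "Learning" here is only this: making the output match the provided data.
    ```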

    You probably don’t know what any of that is. Perhaps you should educate yourself on what it is you are advocating for. 3Blue1Brown made a great playlist explaining it all. Link here.