Maybe they think Cyberpunk refers to Cyberpunk 2077 rather than the genre as a whole
I think the confusion may have to do with how that list at the bottom is meant to be another quote rather than a summary, but since it's a code block that looks different from the other quotes, that might imply it isn't a quote. Now that I'm looking at it more, in hindsight I should have done it like this:
- list1
- list2
I just didn’t realize it mattered much and figured it cluttered the page less the first way
No, I think that gets conveyed in the second half. The argument isn't that AI as a whole isn't using a lot of electricity; it's that this electricity use is being misattributed to LLM chatbots, which are only a very small part of it.
How would I have used AI here? It's mostly quotes from the article. You're way off anyway; it actually took me a little while to try out different ways of formatting that list in Lemmy and to make the hyperlinks in the quotes display correctly, let alone to find this in the first place (it was the source of the graph I had seen somewhere and was initially thinking of posting), or to read the thing in order to pick out appropriate passages. To be clear, I have put way too much effort into writing internet comments over the years to be using LLMs for that now, and I promise you I do not do that.
imo the kind of tired where you've done a lot of stuff and need to lie down is better than the kind of tired where you haven't exercised at all in a long time and don't feel like doing anything, including non-physical activities
I found a blogpost that cites a Business Insider article that implies this claim as formulated is way off:
Reported energy use implies that ChatGPT consumes about as much energy as 20,000 American homes. An average US coal plant generates enough energy for 80,000 American homes every day. This means that even if OpenAI decided to power every one of its billion ChatGPT queries per day entirely on coal, all those queries together would only need one quarter of a single coal plant. ChatGPT is not the reason new coal plants are being opened to power AI data centers.
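As a quick sanity check of the arithmetic in that quote (using only the figures it gives; everything else below is just illustrative):

```python
# Figures from the quote: ChatGPT's reported energy use ~ 20,000 American homes,
# while an average US coal plant generates enough for ~ 80,000 homes per day.
chatgpt_homes_equivalent = 20_000
coal_plant_homes_equivalent = 80_000

fraction_of_one_plant = chatgpt_homes_equivalent / coal_plant_homes_equivalent
print(f"ChatGPT ~ {fraction_of_one_plant:.0%} of a single coal plant")  # 25%
```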
It goes on to argue that while it’s true that AI related electricity use is booming, it’s not because of LLM chatbots:
AI energy use is going to be a massive problem over the next 5 years. Projections say that by 2030 US data centers could use 9% of the country’s energy (they currently use 4%, mostly due to the internet rather than AI). Globally, data centers might rise from using 1% of the global energy grid to 21% of the grid by 2030. …
97% of the total energy used by AI as of late 2024 is not being used by ChatGPT or similar apps, it’s being used for other services. What are those services? The actual data on which services are using how much energy is fuzzy, but the activities using the most energy are roughly in this order:
* Recommender Systems - Content recommendation engines and personalization models used by streaming platforms, e-commerce sites, social media feeds, and online advertising networks.
* Enterprise Analytics & Predictive AI - AI used in business and enterprise settings for data analytics, forecasting, and decision support.
* Search & Ad Targeting - The machine learning algorithms behind web search engines and online advertising networks.
* Computer vision - AI tasks involving image and video analysis – often referred to as computer vision. It includes models for image classification, object detection, facial recognition, video content analysis, medical image diagnostics, and content moderation (automatically flagging inappropriate images/videos). Examples are the face recognition algorithms used in photo tagging and surveillance, the object detection in self-driving car systems (though inference for autonomous vehicles largely runs on-board, not in data centers, the training of those models is data-center based), and the vision models that power services like Google Lens or Amazon’s image-based product search.
* Voice and Audio AI - AI systems that process spoken language or audio signals. The most prominent examples are voice assistants and speech recognition systems – such as Amazon’s Alexa, Google Assistant, Apple’s Siri, and voice-to-text dictation services.
Is email really visible? I thought that wasn't displayed publicly. I don't see your email.
strong privacy laws regarding how that data can be used
In practice this just isn't going to work, because the whole infrastructure is aligned against effective privacy such that you can't just pass a simple law to ensure it. What I've heard from someone working in local government is that right now there is an overwhelming push to move all computer systems to the cloud (private company servers and software), and most of them are there already, which means that the actual people, practices, and physical hardware managing data are at multiple levels of remove from democratic scrutiny and influence. Also consider the recent high-profile events regarding collection and misuse of existing data by the US federal government, regardless of laws prohibiting it. None of the information collected and stored by the government (or corporations, for that matter) is safe, and the task of making it safe becomes more impractical all the time.
Of course these are also problems that would be good to address, but I don't think you can count on them being resolved, because they probably won't be. That isn't to say good laws about what data shouldn't be collected in the first place, or what decisions affecting people's lives shouldn't be made by computers, are likely either, but that at least seems like a more realistic approach to me than trying to build a Panopticon that somehow doesn't get abused.
If that was the case he could have deleted /pol/ and banned its users. 2016 would have been a great time for that.
Running water is a technology that tends to solve bigger problems than it causes. You can always count on politics to break sometimes, but when it happens with running water, even if people are getting sick because of lead pipes and sewage is backing up into people's homes because of organizational dysfunction (happened to me, the city just failed to connect the pipes from my apartment to the sewer and pretended they had), it's still better than the public health catastrophe that is an absence of running water.
On the other hand, for the entire class of technology where the benefit is more automation of law enforcement, I’d argue it’s completely the other way around; huge inherent political risk, minimal potential improvement.
Well, I dislike them mainly because they further enable scalable mass surveillance; there should be more barriers to having records of where everyone is. As for automated enforcement, the way it works is often a blatant scam. I once had a commute where I passed an intersection that ticketed people turning left. The amount of time the light allowed was noticeably shorter than normal, and you could see the flash indicating they were ticketing someone basically every time the light changed, often for multiple cars, because it activated if you were in the intersection at all after the light turned red. There was always a long line to turn left at that intersection. I mostly avoided getting ticketed, but I did get one once; it was through a private company, and I just ignored it and nothing happened. I really think most of those get set up because of corrupt relationships between people in government and the people running the companies that handle the tickets.
Electronic plate readers are an illegitimate anti-privacy technology and should be banned imo. License plates are already too hard to remember; I have a hard time remembering my own license plate number, let alone one I had a two-second glance at.
Hell yeah worm smoothie
That's not even necessarily them being nice; if your audience can suddenly afford less, that changes the optimal price for maximizing sales * price. The marginal cost of producing an electronic copy of a DLC is zero.
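To illustrate with a toy example (the demand curve and all numbers here are completely made up): with zero marginal cost, revenue is just price times units sold, so if the audience's willingness to pay drops, the revenue-maximizing price drops with it.

```python
# Toy model: linear demand and zero marginal cost (all numbers are made up).
def units_sold(price: float, max_affordable: float) -> float:
    # Demand falls linearly to zero at the audience's maximum affordable price.
    return max(0.0, 1000 * (1 - price / max_affordable))

def revenue_maximizing_price(max_affordable: float) -> float:
    # Brute-force search over candidate prices.
    candidates = [cents / 100 for cents in range(1, int(max_affordable * 100))]
    return max(candidates, key=lambda p: p * units_sold(p, max_affordable))

print(revenue_maximizing_price(30.0))  # audience can pay up to $30 -> ~$15 is optimal
print(revenue_maximizing_price(20.0))  # audience suddenly poorer   -> ~$10 is optimal
```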
This one hits different when you read it as an adult
No, I meant for prompting tool-supporting models to be aware of the functions you are making available to them. I've tried arbitrary prompts to tell them to do this and it sort of works, but yeah, the models I've tried don't seem very good at that. I was mainly wondering if using a specific format in the prompt would improve performance.
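For reference, here's a minimal sketch of one common convention for this: listing each function as a JSON schema in the system prompt and asking the model to reply with a JSON tool call. The function name and fields below are entirely hypothetical, and whether any particular model was trained on this exact layout is an open question.

```python
import json

# Hypothetical function we want the model to know it can call.
tools = [
    {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. 'Berlin'"}
            },
            "required": ["city"],
        },
    }
]

# One common convention: put the tool schemas in the system prompt and ask for a
# JSON object back. Tool-trained models may expect their own model-specific
# template instead, so treat this as a rough sketch rather than a standard.
system_prompt = (
    "You can call the following tools. To call one, reply with ONLY a JSON object "
    'of the form {"tool": "<name>", "arguments": {...}}.\n\n'
    "Available tools:\n" + json.dumps(tools, indent=2)
)

print(system_prompt)
```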
Open source code doesn't mean an open API, though. Bluesky seems to have made a whole thing out of their technical architecture, and I get the arguments that it's centralized in practice, but wouldn't it mean basically scrapping the whole thing to lock down third-party clients? Even if that didn't mean anything, I think multi-clients could be a good idea anyway; if people were using those and there was a Reddit situation, some portion of users would want to stay with the same clients rather than using whatever proprietary app they try to push.
I don’t think I’ve seen that specific font used in a webcomic except ones generated by ChatGPT, and ChatGPT always uses it. It’s a very clear tell.
What I'm wondering is: is there a standard format for instructing models to give outputs using the tool? They're specifically trained to be better at doing this, right?
Convenient indexed search was the only real improvement Windows made since XP and now they’ve ruined it. Windows XP is once again superior.