

Yeah, I think that boils down to: Zelensky is a better negotiator than Trump. He doesn’t give in before the real negotiations have even started, so he still has some concessions left to make…
A software developer and Linux nerd, living in Germany. I’m usually a chill dude but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt, I usually try to be nice and give good advice, though.
I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.
If you’re talking about normal video surveillance, I think most of today’s cameras use regular light. These sunglasses would be effective against a biometric scanner, the iPhone’s depth camera, a Windows Hello screen unlock or an Xbox Kinect. But the average camera on the street uses visible light and likely has an IR filter in place (at daytime), so it won’t even see infrared.
What works against those is a ski mask, a motorcycle helmet… or even a large hat or golf cap, depending on the camera’s perspective.
Sure. But we need to see pics, or it didn’t happen.
The abstract doesn’t mention them regaining their old capacity. It only says they shrink. And something about voltage. So I have my doubts. I mean, it’s nice if my spicy pillow shrinks a bit. But how does that help if it stays nearly dead? And an application in products would be hard to accomplish. At that temperature, all the plastic etc. is going to melt. Maybe the solder as well.
I was impressed by the demo and tried it. But I must say it’s not very straightforward to use. It can only handle about half a minute of audio, so the input needs to be split up and recombined, and as of yesterday there weren’t any examples for that. It also speaks very, very fast and chops things off once the parameters are wrong. Sometimes it did that for me with short input, too. And the voices weren’t a 100% consistent match, even with the audio snippet in front to guide it.
I’d say it definitely needs more polish before being usable for an application. It’s impressive, though.
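For what it’s worth, the splitting-and-recombining part can be sketched roughly like this (a toy example with made-up function names, not from the project’s docs; the ~30 s chunk length and sample rate are assumptions, and a real pipeline would rather split on silence so words don’t get cut in half):

```python
# Hypothetical sketch: split a long recording into chunks the model can
# handle, run the model on each chunk, concatenate the outputs.

def split_audio(samples, sample_rate=16000, max_seconds=30):
    """Split a sequence of samples into chunks of at most max_seconds each."""
    chunk_len = sample_rate * max_seconds
    return [samples[i:i + chunk_len] for i in range(0, len(samples), chunk_len)]

def process_long_audio(samples, model_fn, sample_rate=16000, max_seconds=30):
    """Run model_fn on each chunk and stitch the results back together."""
    out = []
    for chunk in split_audio(samples, sample_rate, max_seconds):
        out.extend(model_fn(chunk))
    return out
```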
I’m always a bit unsure about that. Sure, AI has a unique perspective on the world, since it has only “seen” it through words. But at the same time, these words conceptualize things; there is information, and models of the world, stored in them and in the way they are arranged. I believe I’ve seen some evidence that AI has access to the information behind language, when it applies knowledge or transfers concepts… But that’s kind of hard to judge. An obvious example is translation. It knows what a cat or a banana is. It picks the correct French word. At the same time it also maintains tone, deals with proverbs, figures of speech… That was next to impossible with the old machine translation services, which only looked at the words. And my impression with computer coding or creative writing is that it seems to have some understanding of what it’s doing: why we do things one way here and a different way there, and what I want it to do.
I’m not sure whether I’m being too philosophical for the current state of the technology. AI surely isn’t very intelligent. It certainly struggles with the harder concepts. Sometimes its ability to tell fact from fiction feels like that of a 5-year-old who is just practicing lying. With stories, it can’t really hint at things without giving them away openly. The pacing is off all the time. But I think it has conceptualized a lot of things as well. It’ll apply all the common story tropes. It loves sudden plot twists. And next to tying things up, it’ll also introduce random side stories, new characters and dynamics. Sometimes for a reason, sometimes it just gets off track. And I’ve definitely seen it attempt tension and release… not successfully, but I’d say it “knows” more than the words. That makes me think the concepts behind storytelling might actually be in there somewhere. It might just lack the intelligence needed to apply them properly, and to maintain the bigger picture of a story: background story, subplots, pacing… I’d say it “knows” (to a certain degree); it’s just utterly unable to juggle the complexity of it. And it hasn’t been trained on what makes a story a good one. I’d guess that might not be a fundamental limitation of AI, though, but more due to how we feed it award-winning novels next to lame Reddit stories without a clear distinction or preference. And I wouldn’t be surprised if that’s one of the reasons why it doesn’t really have a “feeling” for how to do a good job.
Concerning OP’s original question… I don’t think that’s part of it. The people doing the training have put in deliberate effort to make AI nice and helpful. As far as I know, there are always at least two main steps in creating large language models. The first one is feeding in large quantities of text. The result of that is called a “base model”, which will be biased in all the ways the training datasets are. It’ll do all the positivity, negativity and stereotypes, and be helpful or unhelpful roughly the way people on the internet, and the books and Wikipedia that went in, are. (And that’s already skewed toward positive.) The second step is to tune it for some application, like answering questions. That makes it usable. And it makes it abide by whatever the creators chose, which likely includes not being rude or negative to customers. That behaviour gets suppressed. If OP wants it a different way, they probably want a different model, maybe a base model, or maybe a community-made fine-tune that adds a third step on top to re-align the model with different goals.
That’s a very common issue with a lot of large language models. You can either pick one with a different personality (I liked Mistral-Nemo-Instruct for that, since it’s pretty open to just picking up on my tone and going with it), or you give it clear instructions on what you expect from it. What really helps is including example text or dialogue. Every model will pick up on that to some degree.
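In practice, the “include example dialogue” trick often looks like a few scripted turns before the real conversation starts, in the common chat-message format. All the content here is made up for illustration:

```python
# Hypothetical few-shot prompt: the system message sets the personality,
# and the first user/assistant pair demonstrates the desired tone before
# the actual conversation begins.
messages = [
    {"role": "system",
     "content": "You are blunt and sarcastic. Match the user's tone."},
    # Example turns demonstrating the desired style:
    {"role": "user", "content": "My code won't compile."},
    {"role": "assistant",
     "content": "Shocking. Did you try reading the error message?"},
    # The real conversation starts here:
    {"role": "user", "content": "Why is my Python script slow?"},
]
```

Most chat front-ends and APIs accept something along these lines, and the example turns tend to steer the style more reliably than instructions alone.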
But I feel you. I’ve always disliked ChatGPT due to its know-it-all and patronizing tone. Most other models are also deliberately biased. I’ve tried creative writing, and most refuse to be negative or push towards a happy ending. They won’t write you a murder mystery novel without constantly lecturing about how murder is wrong. And they can’t stand the tension and want to resolve the murder right away. I believe that’s how they’ve been trained, especially if some preference optimization has been done for chatbot applications.
Ultimately, it’s hard to overcome. People want chatbots to be both nice and helpful. That’s why they get deliberately biased toward that. Stories often include common tropes, like resolving the drama and a happy ending. And AI learns a bit from argumentative people on the internet, drama on Reddit, etc. But generally that “negativity” gets suppressed so the AI doesn’t turn on somebody’s customers or spew Nazi stuff like the early attempts did. And Gemma3 is probably aimed at such commercial applications; it’s instruct-tuned and has “built-in” safety. So I think all of that is opposed to what you want it to do.
I think it needs to work across instances, since we’re concerned with the Fediverse, and federation is one of the defining mechanics. Also, when I have a look at my subscriptions, they come from a variety of instances. So I don’t think a single-instance feature would be of any use to me.
Sure. And with the cosine similarity, you’d obviously need to suppress already-watched videos. Obviously I watched them and the algorithm knows that, but I’d like it to recommend new videos to me.
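The idea can be sketched in a few lines (a toy example; the vectors, video IDs and function names are all made up, and a real recommender would use proper feature embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user_vec, candidates, watched, top_k=5):
    """Rank candidate videos by similarity to the user profile,
    skipping anything the user has already watched.
    candidates: dict mapping video_id -> feature vector."""
    scored = [(cosine(user_vec, vec), vid)
              for vid, vec in candidates.items()
              if vid not in watched]  # suppress already-watched videos
    scored.sort(reverse=True)
    return [vid for _, vid in scored[:top_k]]
```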
Maybe your friend is right? Don’t we all regularly fail to empathize with people? I don’t get why people vote for certain candidates, I don’t know 100% how it feels to be a woman, or someone else… I don’t know how an illness feels if I’ve never had it… That’s perfectly normal. The thing that matters is whether we respect such people (despite not understanding them).
I’d say ignore your friend’s opinion; she isn’t your main concern. Talk to someone and get help. An adult, a relative, a friend… Or call one of the phone help lines, or maybe a trustworthy teacher. I think any of that would have a chance of changing the situation. And it seems this is already starting to drag down your relationships with other people. So you might want to do it.
So if you have someone in your life who sides with you, start with them and talk about your situation and how you feel. Otherwise… I’m not sure how legit they are, but there is https://www.childhelphotline.org/ if you’re in the USA. You could ask them what to do if you feel your upbringing is bordering on abuse.
What do you mean? The Chinese are known for government-coordinated megaprojects… They regularly build entire city districts pretty much overnight. And it’s not really quick… they’ve been at it for almost 20 years. And I believe in 2016 they released their national 15-year plan to ramp up power plants and AI in order to become the global AI leader by 2030.
We saw how, in the times before AI, they were able to build large Bitcoin farms quickly (before they banned it), and by today their AI industry publishes quite a few models and papers… So I don’t really see a reason to question this. Or is there anything I’ve missed?
I’m not sure whether the title is clickbait… because that claim isn’t in the text that follows. The article says they want to ban Nvidia from selling more hardware to them. It doesn’t say anything about limiting availability of the service or anything.
If they do, my best guess is they’ll do it like with TikTok: change their stance on everything several times and then not really enforce anything.
I’ve always backed up my SMS to my e-mail inbox, with something like SMS Gate or SMS Backup+. I think it’s nice to have all messages in my mail program. Of course that only works one way. To reply and get immediate notifications, I use KDEConnect (or GSConnect, which is the same thing for GNOME).
Well. I believe up to some age you could be drafted in an emergency.
I honestly don’t know whether there are meaningful studies or anything on this topic. I’d guess there are very many different opinions on it, both among “Europids” and among people who aren’t.
Wikipedia says, by the way, that this is an outdated term from racial theory. I’m not sure I’d use it. It would probably be better to simply say: “people who are affected by racism in Germany”?
Honestly, I also know few “ethnic Germans” from my filter bubble who joined the Bundeswehr to defend their fatherland. Most people from my class back then were conscientious objectors. And the few who did join the Bundeswehr mostly had other motives. So not even they are (were) in favor of conscription or compulsory service. And as for today’s 17-year-olds, I honestly don’t know what they think about it.
As for people with a migration background, I’m the wrong person to ask anyway. I’ve heard different things about whether one identifies with Germany in that way. It often seems hard to find one’s place anyway when you’re somehow caught between two stools, or when you’re made to feel that you don’t quite “belong to the group”. But there are very different backgrounds and life situations behind that. And I mainly know the better/well-integrated people.
What exactly is the question regarding compulsory military service? Whether there is racism in the Bundeswehr? Or how one individually relates to a country that has both positive and negative sides? I didn’t join that outfit; I did civilian service (Zivildienst) instead. There was a reasonable amount of cohesion there. But that was in a big city in the Ruhr area, where it’s quite common not to look 100% German… And we also did something useful during that time. I had no particular desire to defend Germany with a weapon.
I find the concept of a professional army quite acceptable; then everyone can decide for themselves whether they want to join, whatever their motivation.
Wasn’t “error-free” one of the undecidable problems in maths / computer science? (Rice’s theorem: no algorithm can decide a non-trivial semantic property of arbitrary programs.) But I like how they also pay attention to semantics and didn’t choose a clickbaity title. Maybe I should read the paper, see how they did it and whether it’s more than an AI agent at the same intelligence level guessing whether it’s correct. I mean, surprisingly enough, the current AI models usually do a good job generating syntactically correct code one-shot. My issues with AI coding usually start once it gets a bit more complex. Then it often feels like it’s poking at things and copy-pasting various stuff from StackOverflow without really knowing why it doesn’t deal with the real-world data, or fails entirely.
It’s not “common” as in a huge percentage of content gets removed… Most of it is fine. But we do moderate here on Lemmy and Beehaw. The perspective might be a bit skewed because some people are very vocal and complain a lot about stuff getting deleted, sometimes unwarranted… Most communities don’t tolerate derailing arguments or spreading misinformation… Sometimes the mods also make mistakes or are overly strict in their communities; that happens a lot in drama threads and to some extent in the news and political communities. Other than that, I wouldn’t say it’s a very common thing. But yeah, comments get deleted. It depends a lot on what you engage with, and in which corner of the Fediverse. And some people like to stir up drama, usually less so on Beehaw.
I’ve also had that. And I’m not even sure whether I want to hold it against them. For some reason it’s an industry-wide effort to muddy the waters and slap “open source” on their products. From the largest company, which chose to have “Open” in its name but opposes transparency with every fibre of its body, to Meta, the current pioneer(?) of “open sourcing” LLMs, to the smaller underdogs who pride themselves on publishing their models that way… They’ve all homed in on the term.
And lots of the journalists and bloggers pick up on it, too. I personally think terms should be well-defined, and open source had a well-defined meaning. I get that it’s complicated with the transformative nature of AI, copyright… But I don’t think reproducibility is in question here at all. Of course we need that; it’s core to something being open. And I don’t even understand why the OSI claims it doesn’t exist… Didn’t we have datasets available up until LLaMA1, along with an extensive scientific paper that enabled people to reproduce the model? And LLMs aside, we sometimes have that with other kinds of machine learning…
(And by the way, this is an old article, from the end of October last year.)
Well, if you use a Linux distribution, you generally get your software from some central package repository. That’s driven by maintainers who look at the software and the updates… They patch the software, make sure it runs smoothly on your system and that it ties into other things… They’ll also have a look at security vulnerabilities and security in general.
Other than that, there isn’t much really “stopping” people from writing malware. We have tons of it. Fake VLC versions, copycats on the iPhone App Store… MS Windows is full of advertisements and features that send data “home”. They introduce features that border on being malware all the time… We have trojans, viruses, etc. It’s all out there.
Generally, it’s a good idea to think before executing random code from the internet. Is it from a trustworthy source? Are other people using the software, so they’d have noticed if it deleted all their files?
Usually, we have more good people than bad. And people need some motivation. It’s unlikely someone invests 10 years of their life developing a shiny and polished office suite just so they can run some malware somewhere. There are easier ways to accomplish that. So it generally doesn’t happen that way. It’s theoretically possible, though.
And the old argument still holds: Windows, Android, etc. are way more popular. If someone wants to do something malicious, they likely won’t target the 1-2% using a different operating system. They’ll write malware for a more popular one. And on servers, where Linux dominates the market, admins execute less random code. They know they want MariaDB and where to get it. So it’s harder to attack that way.
And if I imagine being the attacker… What would be a reason to include malware in a FOSS project? Just to wreak havoc and mess with people? That sounds like a 16-year-old with too much time on their hands. But we have very few of those in the free software community. So that’s a bit unlikely… If someone wants a botnet, there might be easier ways to get one. And for a targeted attack, you wouldn’t hide your malware in a random project… So I generally don’t see many reasons for someone to combine malware with useful FOSS software.
:(){ :|:& };: