

Is the code available somewhere?
It is a full Linux stack, not Android, with its own set of apps written (mostly) in C++ with Qt and its own UI framework, Silica. It can run Android apps through a compatibility layer similar to Waydroid.
Well, when Roosevelt was elected 4 times, it was actually legal back then. And he’s the reason the two-term-limit amendment exists. But of course, that requires actually following the law, so…
Because of the porn or AI? 🙃
This is probably one of the best actual uses for something like generative AI. With enough data, they should be able to vectorize and translate dolphin language, assuming there is one.
Well if she is acquitted on appeal for example. But no idea how the sentencing works in cases like this. Maybe someone with knowledge of French law can chime in.
1 scenario tested is better than 0 tested.
This guy would fit in well at my previous job where the founder discouraged writing unit tests because “there are too many scenarios to test.”
Like, wtf…
That was entirely the point unfortunately.
What?
Lol, there are smaller versions of Deepseek-r1. These aren’t the “real” Deepseek model; they’re distillations of it onto other, smaller foundation models (Qwen2.5 and Llama3 in this case).
For the 671b parameter model, the medium-quality quantization weighs in at 404 GB. That means you need 404 GB of RAM/VRAM just to load the thing, and preferably ALL of it in VRAM (i.e. GPU memory) to get it to generate anything fast.
For comparison, I have 16 GB of VRAM and 64 GB of RAM on my desktop. If I run the 70b parameter version of Llama3 at a Q4 quant (medium quality-ish), it’s a 40 GB file. It’ll run, but mostly on the CPU, generating ~0.85 tokens per second. So a good response takes 10-30 minutes, which is fine if you have time to wait, but not if you want an immediate response. If I had two beefy GPUs with 24 GB of VRAM each, that’d be 48 GB total, and I could run the whole model in VRAM and it’d be very fast.
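If you want to sanity-check those numbers yourself, here’s a rough back-of-the-envelope sketch in Python. The bits-per-weight values and the “good response” token counts are my own assumptions for illustration, not exact figures for any particular quant:

```python
# Rough estimate of what it takes to run a quantized model locally.
# The bits-per-weight and token-count values are assumptions for illustration.

def model_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate size of the weights in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

def response_time_minutes(tokens: int, tokens_per_second: float) -> float:
    """How long a response of a given length takes at a given generation speed."""
    return tokens / tokens_per_second / 60

# Deepseek-r1 671b at a "medium" quant (~4.8 bits/weight assumed):
print(f"671b: ~{model_size_gb(671, 4.8):.0f} GB of RAM/VRAM just for the weights")

# Llama3 70b at Q4 (~4.5 bits/weight assumed), running mostly on CPU at ~0.85 tok/s:
print(f"70b:  ~{model_size_gb(70, 4.5):.0f} GB")
for tokens in (500, 1500):  # assumed length range of a "good response"
    minutes = response_time_minutes(tokens, 0.85)
    print(f"  {tokens} tokens at 0.85 tok/s ≈ {minutes:.0f} min")
```

That comes out to roughly the 404 GB and 10-30 minute figures above.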
They’re probably referring to the 671b parameter version of Deepseek. You can indeed self-host it. But unless you’ve got a server rack full of data-center-class GPUs, you’ll probably set your house on fire before it generates a single token.
If you want a fully open source model, I recommend Qwen 2.5 or maybe Deepseek v2. There’s also OLMo 2, but I haven’t really tested it.
Mistral small 24b also just came out and is Apache licensed. That is something I’m testing now.
Most open/local models require a fraction of the resources of ChatGPT, but they are usually not AS good in a general sense. They often are good enough, though, and can sometimes surpass ChatGPT in specific domains.
It’s enough to run quantized versions of the distilled r1 models based on Qwen and Llama 3. Don’t know how fast they’ll run, though.
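If anyone wants to try one of the distilled r1 models, here’s a minimal sketch using the ollama Python client (pip install ollama). It assumes the Ollama server is running locally and that you’ve already pulled one of the distilled tags; deepseek-r1:14b below is just an example, pick whatever size fits your VRAM.

```python
# Minimal sketch: chat with a locally hosted, distilled r1 model via Ollama.
# Assumes the Ollama server is running and the model tag has been pulled,
# e.g. with `ollama pull deepseek-r1:14b` (the tag is an example, not a recommendation).
from ollama import chat

MODEL = "deepseek-r1:14b"  # assumed tag; smaller and larger distills exist too

# Stream the reply so you can see how fast (or slow) it generates on your hardware.
for chunk in chat(
    model=MODEL,
    messages=[{"role": "user", "content": "Explain model quantization in one paragraph."}],
    stream=True,
):
    print(chunk["message"]["content"], end="", flush=True)
print()
```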
Don’t know about “always.” In recent years, like the past 10 years, definitely. But I remember a time when Nvidia was the only reasonable recommendation for a graphics card on Linux, because Radeon was so bad. This was before Wayland, and probably even before AMD bought ATI. And it was certainly long before the amdgpu drivers existed.
Please bring back the overflow menu!
Where is this? Somewhere in Europe?
I just need one more. I have two but one is old and has very little VRAM 🫤
Seems like something got messed up when copy and pasting. It’s fixed now.
Thanks for catching it!
The problem is that while LLMs can translate, it’s still machine translation and isn’t always accurate. It’s also not going to just be for that. It’ll be applying “AI” to everything that looks like it might vaguely fit, and it’ll stifle productivity.