

It sounds like you agree that they are right to manage the size of the userbase via defederation so that they can maintain their expectations then?
Defederating is fine.
Different instances have different rules, policies, and procedures. That’s a large part of the reason for having different instances. If your instance will not tolerate what is going on on a specific instance, then defederating is the correct tool for the job.
If users disagree with the change or feel they’re missing out on something important, they’re free to migrate to a space that is more right for them, including hosting their own instance with their own rules and decisions.
AMD GPUs are well supported by many LLM frameworks. I’d recommend ollama.
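Once it’s installed, getting a response out of it programmatically is only a few lines. A minimal sketch using the ollama Python client (`pip install ollama`); the model name is just an example and assumes you’ve pulled it first:

```python
# A minimal sketch using the ollama Python client.
# Assumes the ollama server is running locally and that the example
# model "llama3.2" has already been pulled with `ollama pull llama3.2`.
import ollama

response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```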
Your intuition about this is not accurate. 24GB is more than enough for running local image generation and training a LoRA. You also don’t need an insane amount of data; a LoRA is generally trained with less than 100 images, usually around 15-30 images.
To do deepfakes, you’re not training an entire brand-new image model from scratch (which is only within reach of big organizations); you’re just adapting an existing, publicly available model. You can do this for free with open source tools. It is within reach of anyone with high-end gaming hardware or anyone willing to pay for some cheap cloud compute.
Further, LoRAs for most celebrities and famous people have already been trained and can be found on the internet for free, so the training step is likely not even necessary in most cases.
And someone like JD Vance is almost always using the same expression in the same light.
If this is the case, then images generated with the same expression in the same light will not look out of place.
But you will still be able to generate images with other lighting and facial expressions, even without sample images for them, because the base image model being adapted already “understands” differing facial expressions and lighting and can apply them to the subject of the LoRA, in the same way that it can combine random concepts together to create something “new” that wasn’t present in the training images (e.g. a painting of a zombie unicorn in the style of a specific painter).
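To make that concrete, here’s a rough sketch of the “adapt an existing model” step using the Hugging Face diffusers library. The base model ID and LoRA filename below are placeholders for whatever public model and downloaded LoRA you’re using, and the prompt deliberately asks for lighting and an expression that needn’t appear in the LoRA’s training images:

```python
# A rough sketch with Hugging Face diffusers: load a public base model,
# apply a pre-trained LoRA on top of it, and prompt for lighting and an
# expression that weren't necessarily in the LoRA's training images.
# The model ID and "subject_lora.safetensors" are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The LoRA only adapts the frozen base model's weights; the base model
# still "understands" lighting and expressions on its own.
pipe.load_lora_weights("subject_lora.safetensors")

image = pipe(
    "photo of the subject laughing, dramatic side lighting",
    num_inference_steps=30,
).images[0]
image.save("out.png")
```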
I would disagree with MBFC’s opinion that RFA’s credibility is “High” or that its bias is left-leaning.
I’m not aware of specific bans or rules for this community beyond what is in the sidebar. It says that sources with a Low or Very Low credibility rating from MBFC will be removed. https://mediabiasfactcheck.com/drop-site-news-bias-and-credibility/ Drop Site News has a Mostly Factual rating, 2 notches above Low, so I’d assume it would be allowed? 🤷
Edit: I checked RFA ratings on MBFC and it’s “High” credibility with “Left-Center” bias. LOL!!!
Radio Free Asia
🤦
Calves are insanely strong, in general. They are constantly balancing your entire body’s weight with every step and movement you make. Also, as others have stated, squats and calf raises use different ranges of motion.
Pretty sure I’ve seen folks run a terminal emulator and ollama on Android.
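Once it’s up, you can talk to it like any other local ollama server. A minimal sketch against ollama’s default HTTP API on port 11434 (standard library only; the small model name is just an example):

```python
# A minimal sketch: query a local ollama server (such as one running
# under a terminal emulator on Android) via its default HTTP API on
# port 11434. The model name is an example; small models suit phones.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3.2:1b",
    "prompt": "Say hello from my phone.",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```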
Wheat, rye, and barley are the only grains that contain gluten, barring cross-contamination. So your list should be quite a bit longer than those 4 options. (Oats [these are often cross-contaminated, unfortunately], corn, millet, sorghum, tapioca, …, insert endless list of foods here)
A member of my household had to be on a gluten free diet for some time. I was initially very surprised by how easy it was to adjust.
These numbers coming from Tesla have never been actual sales but “deliveries”, which can occur multiple times per sale and also can occur without a sale. They only publicize “deliveries” instead of actual sales data because bigger number.
on an iPhone
Lol. No
Dunno why this is downvoted, because RAG is the correct answer. Fine-tuning/training is not the tool for this job. RAG is.
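For anyone unfamiliar: the core of RAG is just “retrieve the most relevant documents, then put them in the prompt.” A toy sketch, assuming the sentence-transformers library (the documents are placeholders and the final LLM call is left out, since any local model would do):

```python
# A toy RAG sketch (assumes sentence-transformers is installed). Embed
# the documents once, retrieve the best matches for a query, and build
# a prompt around them. The docs here are placeholders.
from sentence_transformers import SentenceTransformer, util

docs = [
    "The warranty covers parts and labor for two years.",
    "Returns are accepted within 30 days with a receipt.",
    "Support hours are 9am-5pm on weekdays.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = model.encode(docs, convert_to_tensor=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q_emb = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, doc_emb, top_k=k)[0]
    return [docs[hit["corpus_id"]] for hit in hits]

query = "How long do I have to return something?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this prompt then goes to whatever LLM you're using
```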
Fun fact: Pokémon GO was literally just a geolocation and navigation AI trainer disguised as a game. They plop Pokémon on the map at locations where they want camera/location data.
Well this is getting silly if you’re just going to keep repeating objectively wrong things and also misrepresent what I’ve said (‘anything your current hardware can’t do is a “marketing gimmick”’ 🙄🙄🙄)
Since we’ve left good-faith discussion and entered the realm of silly, fine! You’ve activated my trap card! Can your OLED do this??? *presses degauss button* You can see 85 Hz flicker, but I can bench press 1200 lbs and run a 60 second mile 🤪
Enjoy your OLED and I’m glad you’re finally getting to enjoy perfect color, true blacks, and high contrast again after all these years!
You might want to reread my comment, because you’re just repeating false claims about color and resolution that I already addressed (my CRT can display 2560x1920), or you’re only acknowledging low-quality CRTs. I’ll give you 4K resolution as a flat-panel win over tubes, and obviously size and aspect ratio, but I personally don’t see any value in them, as 4K resolution, ultra-wide aspect ratios, and extremely high framerates are simply marketing gimmicks. Obviously, others do see value in these and that’s fine.
Perceiving flicker at an 85 Hz rate is literally far beyond human capability. 72 Hz is the ultimate upper limit at which any flicker is perceptible to a human… and I run my monitor at 120 Hz for competitive games lol. It is not physically possible for a human to see flicker at 85 Hz. Backlight strobing of an LCD is not related to refresh rate, so it would likely be 60 Hz, matching AC wall power.
Anyway, there are some reasons that OLED is better, just unrelated to display quality. You can probably fit more on your desk than just a keyboard and don’t risk your back when moving your monitor.
Yeah that’s still normal. Unless we’re both just special. When looking at the center of a 60Hz CRT, the flickering is seen around the edges of the screen where I am not focusing. Or the whole screen if I look to the side of it. I also perceive LEDs flickering the same way you describe.
I’d guess the fact that we’re not seeing it in our focused vision probably has less to do with the physical attributes of the eye and more to do with the way our brains create our perception of vision. There’s a lot going on there: our eyes are constantly making rapid movements that we neither perceive nor are conscious of, there are two blind spots in our vision where our optic nerves connect to our retinas that we don’t perceive, and our brains invent the color we perceive in our peripheral vision, which cannot physically be detected by the eye. Vision is weird and complicated.
Ironically, gallons per 100 miles (GPM) is actually a more useful measure for high-fuel-efficiency vehicles because it directly represents fuel consumption. Differences in MPG become more misleading to intuition as MPG increases.
For example, upgrading from an 18 mpg car to a 28 mpg car (Option A) saves more fuel than upgrading from a 34 mpg car to a 50 mpg car (Option B). It is easy to be misled into thinking Option B would save more because it is an upgrade of 16 mpg rather than only 10 mpg.
GPM does not have this problem, and the differences can be compared directly: Option A in GPM is 5.6 -> 3.6 gallons per 100 miles, a difference of 2.0, while Option B is 2.9 -> 2.0, a difference of only 0.9, making it easy to see that Option A has the greater impact on fuel consumption.
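A quick sanity check of those numbers in Python:

```python
# Checking the numbers above: converting MPG to gallons per 100 miles
# makes the savings from each upgrade directly comparable.
def gallons_per_100mi(mpg: float) -> float:
    return 100 / mpg

# Option A: 18 mpg -> 28 mpg
save_a = gallons_per_100mi(18) - gallons_per_100mi(28)
# Option B: 34 mpg -> 50 mpg
save_b = gallons_per_100mi(34) - gallons_per_100mi(50)

print(f"Option A saves {save_a:.1f} gal per 100 miles")  # ~2.0
print(f"Option B saves {save_b:.1f} gal per 100 miles")  # ~0.9
```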