
Ah, yes, neoliberals. The love of young MAGA and young leftists right now.
MAGA Outrage Over Director Saying Superman Is an Immigrant
(We’re about a decade too late, and our engorged trolls are now our leaders, but still…)
No, you misunderstand; it’s a sound scheme. I wouldn’t be against it.
…Which just underscores how horrific a situation we are in. It’s akin to “okay, a meteor is coming; what about this plan to deflect it into the arctic?”
Fossil fuel companies are lobbying for the “everything is fine” propaganda, not geoengineering schemes that indirectly reinforce how dangerously unstable the planet could be.
There is a nugget of ‘truth’ here:
https://csl.noaa.gov/news/2023/390_1107.html
I can’t find my good source on this, but there are very real proposals to seed the arctic or antarctic with aerosols to stem a runaway greenhouse effect.
It’s horrific. It would basically rain down sulfuric acid onto the terrain; even worse than it sounds. But it would only cost billions, not the trillions of other geoengineering schemes I’ve seen.
…And the worst part is it’s arctic/climate researchers proposing this. They intimately know exactly how awful it would be, which shows how desperate they are to even publish such a thing.
But I can totally understand how a layman (maybe vaguely familiar with chemtrail conspiracies) would come across this and be appalled, and how conservative influencers pounce on it because they can’t help themselves.
Thanks to people like MTG, geoengineering efforts will never even be considered. :(
TL;DR Scientists really are proposing truly horrific geoengineering schemes “injecting chemicals into the atmosphere” out of airplanes. But it’s because of how desperate they are to head off something apocalyptic, and it’s not even close to being implemented. They’re just theories and plans.
I’m sure a foreign government will leak more about Trump’s involvement with Epstein.
So?
Imagine the entire UN (bar the US) put their stamp of approval on video evidence. What difference would it make?
Both can be true: that the Democrats suck and that one should still vote strategically, especially if you’re gonna skip primaries (as most, statistically, do). The analogy holds, even if it’s fruity and won’t win anyone over.
The Republicans won because they have no problem swallowing bile; apparently that’s the game now.
OK, while in principle this looks bad…
This is (looking it up) like an experienced engineer’s salary in Peru, in line with some other professions.
It’s reasonable to compensate a president, so the expectation isn’t that they come in rich/connected enough not to need a salary, nor that they broker power for personal wealth, all as long as other offices are reasonably compensated too.
It avoids perverse incentives, doesn’t seem excessive and TBH is probably a drop in the Peruvian govt’s budget.
Aren’t fighters dead?
Look, I like cool planes, but military scenarios where 5-500 drones are worse than a single mega-expensive jet, and that aren’t already covered by existing planes/missiles, seem… very rare.
Look at Ukraine’s drone ops. I mean, hell, imagine if the DoD put their budget into that.
Well, exactly. Trump apparently has a line to Apple and could probably get Tim to take it down.
How does this make any sense?
Shouldn’t they be suing Apple to take it down if they don’t like it? I know they just want to weaken the press, but it feels like an especially weak excuse.
And I shit you not, latinos will still vote MAGA in droves in 2026, and once again, analysts and Democrats will be left scratching their heads wondering why while literally everyone they pass on the street is glued to their phone.
Maybe if they keep campaigning like it’s 1950, it’ll eventually work?
Yeah, I figured.
What @mierdabird@lemmy.dbzer0.com said, but the adapters aren’t cheap. You’re going to end up spending more than the 1060 is worth.
A used desktop to slap it in, that you turn on as needed, might make sense? Doubly so if you can find one with an RTX 3060, which would open up 32B models with TabbyAPI instead of ollama. Some configure them to wake on LAN and boot an LLM server.
Yes! Fission power is great, with the biggest caveat being the huge upfront investment and slow construction.
I honestly thought Trump would consider it ‘woke’ as opposed to ‘clean coal,’ a term he used.
…Well, the pro nuclear angle is a tiny silver lining?
Yeah, just paying for LLM APIs is dirt cheap, and they (supposedly) don’t scrape data. Again I’d recommend Openrouter and Cerebras! And you get your pick of models to try from them.
Even a Framework 16 is not good for LLMs TBH. The Framework Desktop is (as it uses a special AMD chip), but it’s very expensive. Honestly the whole hardware market is so screwed up, hence most ‘local LLM enthusiasts’ buy a used RTX 3090 and stick it in a desktop or server, as no one wants to produce something affordable apparently :/
I was a bit mistaken, these are the models you should consider:
https://huggingface.co/mlx-community/Qwen3-4B-4bit-DWQ
https://huggingface.co/AnteriorAI/gemma-3-4b-it-qat-q4_0-gguf
https://huggingface.co/unsloth/Jan-nano-GGUF (specifically the UD-Q4 or UD-Q5 file)
they are state-of-the-art at this size, as far as I know.
8GB?
You might be able to run Qwen3 4B: https://huggingface.co/mlx-community/Qwen3-4B-4bit-DWQ/tree/main
But honestly you don’t have enough RAM to spare, and even a small model might bog things down. I’d run Open Web UI or LM Studio with a free LLM API, like Gemini Flash, or pay a few bucks for something off openrouter. Or maybe Cerebras API.
…Unfortunately, LLMs are very RAM intensive, and <4GB free (more realistically like 2GB) is not going to be a good experience :(
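To make the API route concrete, here’s a minimal sketch of calling OpenRouter’s OpenAI-compatible chat endpoint with just the standard library. The model slug is an assumption for illustration; pick a real one (and your key) from openrouter.ai.

```python
# Hedged sketch: OpenRouter exposes an OpenAI-compatible chat endpoint,
# so any OpenAI-style request works against their base URL.
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-style chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# To actually send it (needs a real key; "qwen/qwen3-4b" is a guessed slug):
# req = build_request("sk-or-...", "qwen/qwen3-4b", "Hello!")
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Open Web UI and LM Studio speak the same OpenAI-style protocol, so the same shape of request works whether the ‘engine’ is a paid API or something local.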
Actually, to go ahead and answer, the “fastest” path would be LM Studio (which supports MLX quants natively and is not time intensive to install), and a DWQ quantization (which is a newer, higher quality variant of MLX models).
Hopefully one of these models, depending on how much RAM you have:
https://huggingface.co/mlx-community/Qwen3-14B-4bit-DWQ-053125
https://huggingface.co/mlx-community/Magistral-Small-2506-4bit-DWQ
https://huggingface.co/mlx-community/Qwen3-30B-A3B-4bit-DWQ-0508
https://huggingface.co/mlx-community/GLM-4-32B-0414-4bit-DWQ
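A rough way to guess which of those fits your RAM, assuming ~4 bits per weight plus a guessed 1-2 GB of headroom for KV cache, activations, and the OS (an approximation, not exact; parameter counts below are from the model names):

```python
# Ballpark resident memory for a 4-bit quantized model, in GB.
# A Q4/4-bit quant needs roughly params * 0.5 bytes for weights alone.

def est_ram_gb(params_billions: float, bits: int = 4, overhead_gb: float = 1.5) -> float:
    """Weights at `bits` per parameter, plus fixed overhead (a rough guess)."""
    weights_gb = params_billions * 1e9 * (bits / 8) / 1e9
    return weights_gb + overhead_gb

for name, b in [("Qwen3-14B", 14), ("Magistral-Small", 24),
                ("Qwen3-30B-A3B", 30), ("GLM-4-32B", 32)]:
    print(f"{name}: ~{est_ram_gb(b):.1f} GB")
```

So the 14B fits comfortably on a 16GB machine, while the 30B+ models really want 24GB or more.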
With a bit more time invested, you could try to set up Open Web UI as an alternative interface (which has its own built-in web search, like Gemini): https://openwebui.com/
And then use LM Studio (or some other MLX backend, or even free online API models) as the ‘engine’
Alternatively, especially if you have a small RAM pool, Gemma 12B QAT Q4_0 is quite good, and you can run it with LM Studio or anything else that supports a GGUF. Not sure about 12B-ish thinking models off the top of my head, I’d have to look around.
Training data is curated and continuous.
In other words, one (for example, Musk) can finetune the big language model on a small pattern of data (for example, antisemitic content) to ‘steer’ the LLM’s outputs towards that.
You could bias it towards fluffy bunny discussions, then turn around and send it the other direction.
Each round of finetuning does “lobotomize” the model to some extent though, making it forget stuff, reducing its ability to generalize, ‘erasing’ careful anti-repetition tuning and stuff like that. In other words, if Elon is telling his engineers “I don’t like these responses. Make the AI less woke, right now,” he’s basically sabotaging their work. They’d have to start over with the pretrain and sprinkle that data into months(?) of retraining to keep it from dumbing down or going off the rails.
There are ways around this, but Big Tech is kinda dumb and ‘lazy’ since they’re so flush with cash, so they don’t use them. Shrug.
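The steer-and-forget tradeoff above can be sketched with a toy (this is not how real finetuning works mechanically, just an illustration of the dynamic): treat the “model” as a probability distribution over topics, and each finetune round as mixing in a skewed dataset.

```python
# Toy illustration, NOT a real LLM: a distribution over topics stands in
# for model behavior. Each "finetune" round pulls it toward the new data;
# repeated rounds erode whatever the base training established.

def finetune(model, data, lr=0.5):
    """Move each probability a fraction `lr` of the way toward `data`."""
    return {k: (1 - lr) * model[k] + lr * data.get(k, 0.0) for k in model}

base = {"news": 0.4, "science": 0.4, "conspiracy": 0.2}
skewed = {"conspiracy": 1.0}  # the steering dataset

m = dict(base)
for _ in range(3):
    m = finetune(m, skewed)

print(m)  # heavily skewed toward "conspiracy"; base topics mostly erased
```

After three rounds the model is ~90% “conspiracy” and the base topics have shrunk to ~5% each: the steering worked, but everything else got flattened along with it, which is the “lobotomize” effect in miniature.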