You should try watching the live action series next - I bet you’d love it.
The one I grabbed to test was the ROG Azoth.
I also checked my Iris and Moonlander - both cap out at 6, but I believe I can update that to be higher with QMK or add a config key via Oryx on the Moonlander to turn it on.
Per this thread from 2009, the limit was conditional upon using a particular keyboard descriptor documented elsewhere in the spec, but keyboards are not required to use that descriptor.
I tested just now on one of my mechanical keyboards, on macOS, connected via USB-C, using the Online Key Rollover Test, and was able to get 44 keys registered at the same time.
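If you'd rather run the same check locally instead of via a website, here's a minimal sketch in Python, assuming the third-party pynput package (`pip install pynput`). Note that the OS may coalesce or cap key events differently than a raw HID test, so treat it as a rough check:

```python
# Minimal local key-rollover check: mash as many keys as you can and
# watch the max count. Assumes the third-party pynput package.
from pynput import keyboard

pressed = set()
max_held = 0

def on_press(key):
    global max_held
    pressed.add(key)
    max_held = max(max_held, len(pressed))
    print(f"held: {len(pressed):3d}   max: {max_held:3d}", end="\r")

def on_release(key):
    pressed.discard(key)
    if key == keyboard.Key.esc:  # Esc quits
        print()
        return False

with keyboard.Listener(on_press=on_press, on_release=on_release) as listener:
    listener.join()
```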
- hedgehog@ttrpg.network to Fuck AI@lemmy.world • AI Set To Consume Electricity Equivalent To 22% of US Homes By 2028, New Analysis Says • 2 · 3 days ago
From the Slashdot comments, by Rei:
Or, you can, you know, not fall for clickbait. This is one of those…

> Ultimately, we found that the common understanding of AI’s energy consumption is full of holes.

…“Everyone Else Is Wrong And I Am Right” articles, which starts out with…

> The latest reports show that 4.4% of all the energy in the US now goes toward data centers.

…without bothering to mention that AI is only a small percentage of data centre power consumption (Bitcoin alone is an order of magnitude higher), and…

> In 2017, AI began to change everything. Data centers started getting built with energy-intensive hardware designed for AI, which led them to double their electricity consumption by 2023.

What a retcon. AI was *nothing* until the early 2020s. Yet datacentre power consumption did start skyrocketing in 2017 - having nothing whatsoever to do with AI. Bitcoin was the big driver.

> At that point, AI alone could consume as much electricity annually as 22% of all US households.
Let’s convert this from meaningless hype numbers to actual numbers. First off, notice the fast one they just pulled: comparing global AI usage to just the US, and just households. US households use about 1500 TWh of the world’s 24400 TWh/yr, or about 6%. 22% of 6% is ~1.3% of electricity (330 TWh/yr). Electricity is about 20% of global energy, so in this scenario AI would be 0.3% of global energy (the arithmetic is sketched in code after the list below). We’re just taking their extreme numbers at face value for now (they predict an order of magnitude of growth from today’s AI consumption), and ignoring that even a single AI application alone could entirely offset the emissions of all AI combined. Let’s look first at the premises behind what they’re arguing for this 0.3% of global energy usage (oh, I’m sorry, let’s revert to scary numbers: “22% OF US HOUSEHOLDS!”):
- It’s almost all inference, so that simplifies everything to usage growth
- But usage growth is offset by the fact that AI efficiency is simultaneously improving faster than Moore’s Law on three separate axes, which are multiplicative with each other (hardware, inference, and models). You can get what used to take insanely expensive, server-and-power-hungry GPT-4 performance (1.5T parameters) on a model small enough to run on a cell phone that, run on efficient modern servers, finishes its output in a flash. So you have to assume not just one order of magnitude of inference growth (due to more people using AI), but many orders of magnitude of inference growth.
- You can try to Jevons at least part of that away by assuming that people will always want the latest, greatest, most powerful models for their tasks, rather than putting the efficiency gains toward lower costs. But will they? I mean, to some extent, sure. LRMs deal with a lot more tokens than non-LRMs, AI video is just starting to take off, etc. But at the same time, for example, today LRMs work in token space, but in the future they’ll probably just work in latent space, which is vastly more efficient. To be clear, I’m sure Jevons will eat a lot of the gains - but all of them? I’m not so sure about that.
- You need the hardware to actually consume this power. They’re predicting that - three years from now - there will be an order of magnitude more hardware out there than all the AI servers combined to this point. Is the production capacity for that huge level of increase in AI silicon actually in the works? I don’t see it.
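To make the quoted back-of-envelope conversion easy to check, here is the same arithmetic as a quick Python sketch (the 1500 and 24400 TWh/yr figures are the comment's own approximations):

```python
# Back-of-envelope check of the "22% of US households" framing,
# using the approximate figures from the comment above.
us_household_twh = 1500       # approx. US residential electricity use, TWh/yr
world_twh = 24400             # approx. world electricity use, TWh/yr

ai_twh = 0.22 * us_household_twh            # the article's 2028 AI scenario
share_of_electricity = ai_twh / world_twh   # fraction of world electricity
share_of_energy = share_of_electricity * 0.20  # electricity is ~20% of all energy

print(f"AI scenario: {ai_twh:.0f} TWh/yr")                    # 330 TWh/yr
print(f"= {share_of_electricity:.1%} of world electricity")   # ~1.3-1.4%
print(f"= {share_of_energy:.2%} of world energy")             # ~0.3%
```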
- hedgehog@ttrpg.network to Fuck AI@lemmy.world • Duolingo CEO says AI is a better teacher than humans—but schools will exist ‘because you still need childcare’ • 2 · 4 days ago
There’s a difference between a tool being available to you and a tool being misused by your students.
That said, I wouldn’t trust AI assessments of students to determine if they’re on track right now, either. Whatever means the AI would use needs to be better than grading quizzes, homework, etc., and while I’m not a teacher, I would be very surprised if it were better than any halfway competent teacher’s assessments (thinking in terms of high school and younger, at least - in university IME the expectation is that you self assess during the term and it’s up to you to seek out learning opportunities outside class if you need them, like going to office hours for your prof or TA).
AI isn’t useless, though! It’s just being used wrong. For example, AI can improve OCR, making it more feasible for students to hand in submissions that can be automatically graded, or improving accessibility for graders. But for that to actually be helpful, we need better options on the hardware front and better integration of those options into grading systems - like affordable batch scanners that you can drop a stack of 50 assignments into, each with a variable number of pages, with software that automatically sorts the results by assignment and submitter and files them in the same place as the digital submissions.
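As a toy sketch of that sorting step - not any existing product - here's what it could look like in Python, assuming the pytesseract OCR library (plus a local Tesseract install) and a made-up "Name: … Assignment: …" convention on the first line of each scan:

```python
# Toy sketch: OCR a folder of scanned pages and file each one by the
# name/assignment written on its first line. Assumes pytesseract plus a
# local Tesseract install; the header format is a made-up convention.
import re
import shutil
from pathlib import Path

from PIL import Image
import pytesseract

HEADER = re.compile(r"Name:\s*(?P<name>.+?)\s+Assignment:\s*(?P<hw>\S+)")

def sort_scans(inbox: Path, outbox: Path) -> None:
    for scan in sorted(inbox.glob("*.png")):
        text = pytesseract.image_to_string(Image.open(scan))
        match = HEADER.search(text)
        if match is None:
            print(f"couldn't read header on {scan.name}, skipping")
            continue
        dest = outbox / match["hw"] / match["name"].strip()
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy(scan, dest / scan.name)

sort_scans(Path("scans"), Path("graded"))
```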
- hedgehog@ttrpg.network to Xbox@lemmy.world • Take-Two Boss Says Video Game Prices Have Been Coming Down For The Past 20 Years • English • 4 · 5 days ago
They also had much higher distribution costs and a much smaller audience back then, so even if prices have gone down since the late 70s, profits haven’t.
Pac-Man was the best selling Atari 2600 game and it sold 8 million copies. Mario Kart on the Switch, by contrast, has sold over 60 million copies. A mere 1% of PC game sales are physical and a mere 16% of console game sales are physical.
- hedgehog@ttrpg.network to Fuck AI@lemmy.world • Why we're unlikely to get "AI" (aka artificial general intelligence) anytime soon • 1 · 6 days ago
Though… If a computer has a real biological brain in it doing the thinking, is it artificial intelligence?
The person who came up with the Chinese Room Argument argued that if a brain was completely synthetic, even if it were a perfect simulation of a real brain, it would not think - it would not have a genuine understanding of anything, only a simulation of an understanding. I don’t agree (though I would still say it’s “artificial”), but I’ll let you draw your own conclusions.
From section 4.3:
> Consider a computer that operates in quite a different manner than an AI program with scripts and operations on sentence-like strings of symbols. The Brain Simulator reply asks us to suppose instead the program parallels the actual sequence of nerve firings that occur in the brain of a native Chinese language speaker when that person understands Chinese – every nerve, every firing. Since the computer then works the very same way as the brain of a native Chinese speaker, processing information in just the same way, it will understand Chinese. Paul and Patricia Churchland have set out a reply along these lines, discussed below.
>
> In response to this, Searle argues that it makes no difference. He suggests a variation on the brain simulator scenario: suppose that in the room the man has a huge set of valves and water pipes, in the same arrangement as the neurons in a native Chinese speaker’s brain. The program now tells the man which valves to open in response to input. Searle claims that it is obvious that there would be no understanding of Chinese. (Note however that the basis for this claim is no longer simply that Searle himself wouldn’t understand Chinese – it seems clear that now he is just facilitating the causal operation of the system and so we rely on our Leibnizian intuition that water-works don’t understand (see also Maudlin 1989).) Searle concludes that a simulation of brain activity is not the real thing.
>
> However, following Pylyshyn 1980, Cole and Foelber 1984, and Chalmers 1996, we might wonder about gradually transitioning cyborg systems. Pylyshyn writes:
>
> > If more and more of the cells in your brain were to be replaced by integrated circuit chips, programmed in such a way as to keep the input-output function of each unit identical to that of the unit being replaced, you would in all likelihood just keep right on speaking exactly as you are doing now except that you would eventually stop meaning anything by it. What we outside observers might take to be words would become for you just certain noises that circuits caused you to make.
>
> These cyborgization thought experiments can be linked to the Chinese Room. Suppose Otto has a neural disease that causes one of the neurons in his brain to fail, but surgeons install a tiny remotely controlled artificial neuron, a synron, alongside his disabled neuron. The control of Otto’s artificial neuron is by John Searle in the Chinese Room, unbeknownst to both Searle and Otto. Tiny wires connect the artificial neuron to the synapses on the cell-body of his disabled neuron. When his artificial neuron is stimulated by neurons that synapse on his disabled neuron, a light goes on in the Chinese Room. Searle then manipulates some valves and switches in accord with a program. That, via the radio link, causes Otto’s artificial neuron to release neuro-transmitters from its tiny artificial vesicles. If Searle’s programmed activity causes Otto’s artificial neuron to behave just as his disabled natural neuron once did, the behavior of the rest of his nervous system will be unchanged. Alas, Otto’s disease progresses; more neurons are replaced by synrons controlled by Searle. Ex hypothesi the rest of the world will not notice the difference; will Otto? If so, when? And why?
>
> Under the rubric “The Combination Reply”, Searle also considers a system with the features of all three of the preceding: a robot with a digital brain simulating computer in its aluminum cranium, such that the system as a whole behaves indistinguishably from a human. Since the normal input to the brain is from sense organs, it is natural to suppose that most advocates of the Brain Simulator Reply have in mind such a combination of brain simulation, Robot, and Systems or Virtual Mind Reply. Some (e.g. Rey 1986) argue it is reasonable to attribute intentionality to such a system as a whole. Searle agrees that it would indeed be reasonable to attribute understanding to such an android system – but only as long as you don’t know how it works. As soon as you know the truth – it is a computer, uncomprehendingly manipulating symbols on the basis of syntax, not meaning – you would cease to attribute intentionality to it.
- hedgehog@ttrpg.network to Fuck AI@lemmy.world • An Aggravating New Barrier Is Making It Harder Than Ever to Land a Job. Candidates Are Pissed. • 13 · 6 days ago
> after the interview, the robot — and the company — then ghosted them with no future contact.
For fuck’s sake, if you’re going to use a GenAI powered interview process to filter through resumes, interview and eliminate candidates, spend the ten extra seconds it takes to set up an automated follow-up email to the candidates you ruled out.
You can run a NAS with any Linux distro - your limiting factor is having enough drive storage. You might want to consider something that’s great at using virtual machines (e.g., Proxmox) if you don’t like Docker, but I have almost everything I want running in Docker and haven’t needed to spin up a single virtual machine.
This is an interesting parallel, but I feel like I missed some key part of it.
In the US, at least, we historically killed off a lot of deer’s natural predators - mostly wolves - and as a result, the deer population can get out of control, causing serious problems to the ecosystem. Hunters help to remedy that. The relatively small violences that they perform on an individual basis add up to improving the overall ecosystem.
That isn’t the same as being a bigot, or a sexist, or a fascist… and I don’t know why anyone would assume that a person holds those views because they’re mean and petty. They hold those views for a variety of reasons - sometimes because they’re a child or barely an adult and that’s just what they learned, and they either don’t know any better or haven’t cared enough to think it through; sometimes because they’ve been conditioned to think that way; sometimes because they’re sociopaths who recognize that it’s easier to oppress that particular group.
It doesn’t really matter what their reason is. Either way, they’re a worse person because of it, and often they’re overall a bad person, regardless of the rest of their views, actions, and contributions.
Being a hunter, by contrast, is neutral leaning positive.
It makes sense that a rational person who loves being in nature, who loves animals, who wants their local ecosystem to be successful, would as a result want to help out in some small way, even if that means they have to kill an animal to do so. It doesn’t make sense that a rational person who loves all people, who wants their local communities to be successful, would as a result want to oppress and harm the people in already marginalized groups.
I don’t think equating bigotry with holding unjustifiable opinions does it justice. The way we use the word “opinion” generally applies to things that are trivial or unimportant, that don’t ultimately matter - likes and dislikes, for example. Being a bigot is a viewpoint; it shapes you. For many bigots, their entire perspective is warped and wrong. And there’s a common misunderstanding that you can’t argue with someone’s opinions because they’re just how that person “feels.” But being a bigot, whether you’re sexist, racist, transphobic, queerphobic, homophobic, biphobic, etc., is a belief, and it’s one that, in most cases, the bigot chooses (consciously or not) to keep believing.
If an adult with functioning cognitive abilities refuses to question their bigoted beliefs, then they’ve made a choice to be a bigot.
- hedgehog@ttrpg.network to LocalLLaMA@sh.itjust.works • What model to grade practice test? • English • 3 · 14 days ago
Assuming you’re using ollama (is there another reason to use ollama.com?), you can use compatible files from huggingface directly in ollama. The model page will give you the instructions for the command to run; I always change `ollama run` to `ollama pull`, though. Instructions: https://huggingface.co/docs/hub/ollama

You should be able to fit Qwen3 32B at `Q4_K_M` with an acceptable context, and it did very well on math benchmarks (with thinking enabled). You can disable thinking by including `/no_think` at the end of your prompt to speed up responses, but I’m not sure how well it handles math under those circumstances. I wouldn’t even consider disabling thinking unless you were grading one question per prompt.

The ollama Qwen3 page is https://ollama.com/library/qwen3:32b and the default 32B quant is `Q4_K_M`. I personally am using the `Q6_K` quant by unsloth, and their quants have been great (when supported by ollama), often being the first to fix bugs impacting other quantizations.

I’m not sure if `Q4_K_M` is the optimal quant style for Intel Arc, but as far as I know the others that might be better aren’t supported by ollama anyway.

Qwen3’s real-world knowledge is bad, so if there are questions that rely on it, you may need to include the relevant facts as part of the prompt or use an ollama frontend that supports web searches.
Other options: This does seem like something Gemma3 27B would be good at, so it’s too bad you can’t use it. Older Gemmas may be good, but I’m not sure. Llama3.3 70B is also out, unless you have a decent amount of system RAM and are okay with offloading less than half to GPU. I could see it outperforming my recommendation above, but I would be very surprised if the 8B version outperformed it. Older Qwen2.5 is decent at math but doesn’t include thinking unless you grab QwQ.
- hedgehog@ttrpg.network to Fuck AI@lemmy.world • Doctors personally liable if mandatory NHS AI transcriber gets it wrong • 5 · 20 days ago
If you’re not indemnified, you might be found liable, but you’re not necessarily liable. It depends on the circumstances.
- hedgehog@ttrpg.network to Fuck AI@lemmy.world • Doctors personally liable if mandatory NHS AI transcriber gets it wrong • 6 · 21 days ago
Headline is clickbait and is incorrect per the text of the article. It should read “Doctors not indemnified if AI transcriber mandated by NHS gets it wrong.”
- hedgehog@ttrpg.network to Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ@lemmy.dbzer0.com • Meta mocked for raising “Bob Dylan defense” of torrenting in AI copyright fight - Ars Technica • English • 1 · 1 month ago
> You don’t have to finish the file to share it though, that’s a major part of bittorrent. Each peer shares parts of the files that they’ve partially downloaded already. So Meta didn’t need to finish and share the whole file to have technically shared some parts of copyrighted works. Unless they just had uploading completely disabled,
The argument was not that it didn’t matter if a user didn’t download the entirety of a work from Meta, but that it didn’t matter whether a user downloaded anything from Meta, regardless of whether Meta was a peer or seed at the time.
Theoretically, Meta could have disabled uploading but not blocked their client from signaling that it could upload. According to that argument, this would still count as reproducing the works, under the logic that signaling availability is the same as “making it available.”
> but they still “reproduced” those works by vectorizing them into an LLM. If Gemini can reproduce a copyrighted work “from memory” then that still counts.
That’s irrelevant to the plaintiff’s argument. And beyond that, it would need to be proven on its own merits. This argument about torrenting wouldn’t be relevant if LLAMA were obviously a derivative creation that wasn’t subject to fair use protections.
It’s also irrelevant if Gemini can reproduce a work, as Meta did not create Gemini.
Does any Llama model reproduce the entirety of The Bedwetter by Sarah Silverman if you provide the first paragraph? Does it even get the first chapter? I highly doubt it.
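That claim is straightforward to test, for anyone curious - here's a rough sketch with the ollama Python client. The excerpt strings are placeholders (I'm not pasting the book), and the model tag is just an example:

```python
# Rough sketch: check whether a local Llama continues a copyrighted
# passage verbatim. The excerpt/continuation strings are placeholders.
import difflib
import ollama

opening = "<first paragraph of the book goes here>"
actual_continuation = "<the real next paragraph, for comparison>"

response = ollama.chat(
    model="llama3.1:8b",  # example tag; any local Llama works
    messages=[{"role": "user", "content": f"Continue this text:\n\n{opening}"}],
)
generated = response["message"]["content"]

similarity = difflib.SequenceMatcher(None, generated, actual_continuation).ratio()
print(f"similarity: {similarity:.0%}")  # near 100% would suggest memorization
```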
> By the same logic, almost any computer on the internet is guilty of copyright infringement. Proxy servers, VPNs, basically any compute that routed those packets temporarily had (or still has for caches, logs, etc) copies of that protected data.
There have been lawsuits against both ISPs and VPNs in recent years for being complicit in copyright infringement, but that’s a bit different. Generally speaking, there are laws, like the DMCA, that specifically limit the liability of network providers and network services, so long as they respect things like takedown notices.
- hedgehog@ttrpg.network to linuxmemes@lemmy.world • I'd just like to interject for a moment • 18 · 1 month ago
I’d just like to interject for a moment. What you’re referring to as Alpine Linux Alpine Linux is in fact Pine’s fork, Alpine / Alpine Linux Pine Linux, or as I’ve taken to calling it, Pine’s Alpine plus Alpine Linux Pine Linux. Alpine Linux Pine Linux is an operating system unto itself, and Pine’s Alpine fork is another free component of a fully functioning Alpine Linux Pine Linux system.
The energy consumption of a single AI exchange is roughly on par with a single Google search back in 2009. Source. Was using Google search in 2009 unethical?
- hedgehog@ttrpg.network to Stable Diffusion@lemmy.dbzer0.com • HiDream - a new 17B parameters open-weights image generative foundation model | Civitai • English • 5 · 1 month ago
According to https://www.nextdiffusion.ai/blogs/hidream-the-new-top-open-source-image-generator it’s an uncensored image generation model developed by Vivago. In the benchmarks they highlighted - DPG-Bench, GenEval, and HPSv2.1 - it was ranked number 1. It’s said to be very good at following complex prompts.
Most anti-car people are in favor of improving public transit options.
I had never heard of this show before today but what you just described makes it sound cool as fuck, I’m gonna check it out now