

Pocketpal is what I run. It works well on Android at least.
https://play.google.com/store/apps/details?id=com.pocketpalai
Not exactly. Digits still uses a Blackwell GPU, only it uses unified RAM as virtual VRAM instead of dedicated VRAM. The GPU is probably a down-clocked Blackwell. Speculation I’ve seen is that these are defective and repurposed Blackwells, which is good for us. By defective I mean they can’t run at full speed, are projected to have the cracking-die problem, etc.
The new $3000 NVidia Digits has 128 GB of fast RAM in an Apple-M4-like unified-memory configuration, reportedly. NVidia claims it is twice as fast as an Apple stack, at least at inference. Four of these stacked can run a 405B model, again according to NVidia.
In my case I want the graphics power of a GPU and VRAM for other purposes as well, so I’d rather buy a graphics card. But regarding a 90B model, I do wonder if it is possible with two A6000s at 64 GB and a 3-bit quant.
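For a back-of-envelope check (my own rough math, not a benchmark): quantized weights take roughly parameters × bits per weight ÷ 8 bytes, and KV cache plus runtime overhead come on top. A quick sketch:

```python
# Back-of-envelope VRAM estimate for quantized LLM weights.
# Rough rule of thumb only: real usage also needs KV cache, activations,
# and runtime overhead, so pad the result generously.

def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the quantized weights in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# 90B model at a 3-bit quant vs. the 64 GB mentioned above
print(f"90B @ 3 bpw  ~ {weight_gb(90, 3):.1f} GB of weights")   # ~33.8 GB

# 405B at a 4-bit quant vs. four 128 GB Digits boxes (512 GB total)
print(f"405B @ 4 bpw ~ {weight_gb(405, 4):.1f} GB of weights")  # ~202.5 GB
```

By that math the 3-bit weights alone are around 34 GB, so the real question is how much headroom is left for context; the 405B-on-four-Digits claim pencils out the same way at a 4-bit quant.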
I tried Mistral Nemo 12B Instruct this morning. It’s actually quite good. I’d say it’s close to Dolphin Mixtral 8x7B, which is a monster in size and very smart, about 45 or 50 GB. So I’d say Arli is a good deal: Mistral Nemo 12B for $4 or $5 per month, with privacy, so they claim.
If you don’t mind logging for some questions, you can get access to very good, if not the best, models at lmsys.org without monetary cost. Just go to the “Arena”. This is where you contribute your blind evaluation by voting on which of two answers is better. I often get models like GPT-4o, Claude 3.5 Sonnet by Anthropic, Google’s best, etc., and at other times many good 70B models. You see two answers at once and vote for your favorite of the two. In return, you get “free” access.
Be careful with AMD GPUs, as they are not as well supported for local AI. Support is gaining ground, however. Some people are doing it, but from what I’ve read it takes effort and hassle.
I know that people are using P40 and P100 GPUs. These are outdated but still work with some software stacks and applications. The P40, once very cheap for the amount of VRAM, is no longer as cheap as it was, probably because folks have been picking them up for inference.
I’m getting a lot done with an NVidia GTX 1080, which only has 8 GB of VRAM. I can run a quant of Dolphin Mixtral 8x7B and it works well enough. It takes minutes to load, almost too long for me, but after that I get 3-5 TPS with an acceptable delay between questions.
I can even run Miqu quants at 2 or 3 bits. It’s super smart even at these low quant levels.
Llama 3.1 8B runs great with this 1080 8GB GPU at Q4_K_M, and also at Q5_K_M or Q6_K_M. I believe I can even run Gemma 9B at 8 bpw.
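For anyone with a similarly small card, this is roughly how I set up partial offload with llama-cpp-python. It’s only a sketch: the GGUF filename is a placeholder, and the layer count is something you tune until VRAM is nearly full.

```python
# Sketch of partial GPU offload with llama-cpp-python on an 8 GB card.
# The model path and n_gpu_layers value are placeholders; pick a layer
# count that fits your quant in VRAM and let the rest stay in system RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3.1-8b-instruct.Q4_K_M.gguf",  # hypothetical filename
    n_gpu_layers=24,  # raise or lower until VRAM is nearly full, not over
    n_ctx=4096,       # context length; bigger contexts cost more memory
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me one tip for small-VRAM inference."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Whatever doesn’t fit on the GPU stays in system RAM; offloading as many layers as possible is what keeps generation in that 3-5 TPS range on an older card.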
I installed it on Linux and it’s headed for a live environment.
Starling looks good so far.
One improvement I’d recommend is to make links visible. They are currently the same color as general text in the chat, black by default. I’d recommend blue.
Checking out [email protected], I saw very few posts by bots. Mainly I saw posts by you. I saw one post coming from alien.top.
What’s interesting is that only posts by bots have any comments. So maybe this could be a good way to get communities started.
Therefore, if it’s okay with the admins at the following community, I’d nominate [email protected].
There’s almost nothing happening there.
Did you check with the admins on lemmy first or are you bot posting without permission?
Let’s say I have a favorite sport and there exists a sub named r/.
Let’s also say there already exists a Lemmy community, and that community is struggling to get off the ground: [email protected]
I can see a value add if your project directly helps [email protected] get started, but I don’t see how it does. If anything, wouldn’t your project compete with [email protected] and therefore hinder it?
It might be different if your project directly tied r/ to [email protected], but it doesn’t.
If downvotes are the issue, beehaw.org doesn’t allow downvotes. Those folks are automatically eliminated from that. You can then just ignore the comments you don’t like and it’s all good. 👍
Testing feedback:
The links in this post don’t load: https://feddit.nl/post/3654890
Testing feedback: There appears to be a missing feature. When a post includes a gallery of images, no thumbnails or images are shown as a preview; only an empty gray box is displayed.
Something has to be done to bring back the onside kick. It’s not dangerous from the standpoint of concussions.
A 4th-and-25 (or 4th-and-15) alternative is not as interesting and favors passing teams.
N.B. This flying squid article is behind a paywall.
“What hump?”
This Piped link did not lead me to a video.
I like the sky graphic.