@[email protected] to [email protected] • 10 months ago

**you don't need more 4GB of RAM** (lemmy.dbzer0.com) — 1.48K upvotes, 203 comments
Just running a local LLaMA model takes 32GB of RAM.
Depends on the quantization — a 4-bit 7B model fits in around 4GB.
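Rough math behind that reply, as a minimal sketch (the weights-only assumption and the helper name are mine, not from the thread): weight memory ≈ parameter count × bits per weight ÷ 8, so quantizing from fp16 down to 4-bit cuts the footprint roughly 4×.

```python
# Back-of-the-envelope estimate of LLaMA weight memory by quantization.
# Assumption (mine): weights dominate RAM; KV cache, activations, and
# runtime overhead are ignored.

def weight_memory_gib(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GiB for a model of the given size."""
    total_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 2**30

for params in (7, 13, 70):
    for bits in (16, 8, 4):
        print(f"{params}B @ {bits}-bit: ~{weight_memory_gib(params, bits):.1f} GiB")
```

By this estimate a 7B model needs ~13 GiB at fp16 but only ~3.3 GiB at 4-bit, while a 70B model at 4-bit lands near 33 GiB — which is roughly where the 32GB figure above comes from.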