@[email protected] to [email protected] • 8 months agoyou don't need more 4GB of RAMlemmy.dbzer0.commessage-square204fedilinkarrow-up11.47K
Just running LLaMA locally takes 32GB of RAM
Depends on the quantization
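A rough back-of-the-envelope sketch of why quantization changes the answer, assuming weight memory dominates (KV cache, activations, and runtime overhead are ignored, so real usage runs higher; the bits-per-weight figures for q8_0/q4_0 are nominal, and actual llama.cpp formats add a little per-block scale overhead):

```python
# Approximate RAM needed just to hold LLaMA-style model weights.
# Assumption: memory ~= parameter count * bits per weight.
# Ignores KV cache and runtime overhead; quant formats also store
# per-block scales, so real files are slightly larger.

BITS_PER_WEIGHT = {"fp16": 16, "q8_0": 8, "q4_0": 4}

def weight_gib(params_billion: float, quant: str) -> float:
    """GiB of weight memory for a model with the given parameter count."""
    bits = BITS_PER_WEIGHT[quant]
    return params_billion * 1e9 * bits / 8 / 2**30

for size in (7, 13, 70):
    row = ", ".join(f"{q}: {weight_gib(size, q):.1f} GiB" for q in BITS_PER_WEIGHT)
    print(f"{size}B -> {row}")
```

This prints roughly 13 GiB for a 7B model at fp16 but only ~3.3 GiB at 4-bit, so both comments can be right: a full-precision 13B model plus overhead can eat 32GB, while a 4-bit 7B model squeezes into about 4GB.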