@[email protected] to [email protected] • 2 months ago
you don't need more than 4GB of RAM (lemmy.dbzer0.com)
Just running a local LLaMA model takes 32GB of RAM.
Depends on the quantization.
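To put rough numbers on that: weight memory scales with parameter count times bits per weight, so quantizing from FP16 down to ~4–5 bits cuts RAM by roughly 3x. A minimal sketch, using approximate bits-per-weight figures for common llama.cpp quantization formats (the exact values vary by format version, and real usage adds KV cache and runtime overhead on top):

```python
# Approximate RAM needed just to hold model weights at a given quantization.
# Bits-per-weight values for Q8_0 / Q4_K_M are approximations, not exact.
def weight_ram_gib(params_billion: float, bits_per_weight: float) -> float:
    """Weight memory in GiB: params * bits / 8 bytes, converted to GiB."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

# An 8B-parameter model at three quantization levels:
for label, bits in [("FP16", 16.0), ("Q8_0", 8.5), ("Q4_K_M", 4.85)]:
    print(f"8B @ {label}: ~{weight_ram_gib(8, bits):.1f} GiB")
```

By this estimate an 8B model needs roughly 15 GiB of weight memory at FP16 but under 5 GiB at a ~4.85-bit quantization, which is why the same model can fit on very different machines depending on the quant chosen.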