@[email protected] to [email protected] • 7 months ago
**you don't need more 4GB of RAM** (lemmy.dbzer0.com)
Just running a local Llama model takes 32GB of RAM.
Depends on the quantization.
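
To put rough numbers on that: weight memory scales with parameter count times bits per weight, so the same model can need wildly different amounts of RAM depending on how it's quantized. A minimal back-of-the-envelope sketch (illustrative only; it counts weights alone and ignores KV cache, activations, and runtime overhead):

```python
# Rough memory estimate for model weights alone.
# Real-world usage will be higher (KV cache, context, overhead).

def weight_memory_gib(params_billions: float, bits_per_weight: float) -> float:
    """GiB needed to hold the weights at a given quantization level."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1024**3

for size in (7, 13, 70):
    for bits in (16, 8, 4):
        gib = weight_memory_gib(size, bits)
        print(f"{size}B model @ {bits}-bit: ~{gib:.1f} GiB")
```

By this estimate a 7B model goes from ~13 GiB at fp16 down to ~3.3 GiB at 4-bit, which is roughly where the "you don't need more than 4GB" claim comes from, while an unquantized 13B+ model is what pushes you toward 32GB.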