Jesus to Political [email protected] • 3 days ago
"What could possibly go wrong" (lemmy.world) • 78 comments • 558 upvotes
minus-square@[email protected]linkfedilinkEnglish58•3 days agoIt’s open source. Apparently folks have already made mods of it that add CCP-sensitive info back in. Disclaimer: I have yet to see this for myself.
minus-square@[email protected]linkfedilinkEnglish56•3 days agoThe answer I got out of DeepSeek-R1-Distill-Llama-8B-abliterate.i1-Q4_K_S
minus-square@[email protected]linkfedilink29•3 days agoSo a real answer, basically. Too bad your average person isn’t going to bother with that. Still nice it’s open source.
minus-square@[email protected]linkfedilink11•3 days agoSeems like the model you mentioned is more like a fine tuned Llama? Specifically, these are fine-tuned versions of Qwen and Llama, on a dataset of 800k samples generated by DeepSeek R1. https://github.com/Emericen/deepseek-r1-distilled
minus-square@[email protected]linkfedilinkEnglish8•edit-23 days agoYeah, it’s distilled from deepseek and abliterated. The non-abliterated ones give you the same responses as Deepseek R1.
minus-square@[email protected]linkfedilink4•edit-21 day agojust running it locally, apparently. The output of this model is being filtered by another AI, but only on the public-hosted copy.
stebo • 3 points • 3 days ago
If it’s open source, can we also see what words/topics are being blocked?