In light of the recent CrowdStrike crash revealing how weak points in IT infrastructure can have wide-ranging effects, I figured this might be an interesting one.
The entirety of Wikipedia is periodically uploaded here, along with many other useful wikis and how-to websites (e.g. iFixit tutorials and WikiHow): https://download.kiwix.org/zim
You select the archive you want, then the language and archive version (for example, you can get an archive with no pictures to save on space). For the totality of the English Wikipedia you’d select “wikipedia_en_all_maxi_2024-01.zim”.
The archives are packed as .zim files, which can be read with the Kiwix app completely offline.
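If you’d rather script the download than click through the listing, something like this works (a rough sketch using Python’s requests library; the directory path and filename are assumptions based on the listing above, so adjust to whatever archive you actually pick):

```python
import requests

# Assumed path/filename -- swap in the archive you actually want
# from the https://download.kiwix.org/zim listing.
URL = "https://download.kiwix.org/zim/wikipedia/wikipedia_en_all_maxi_2024-01.zim"
OUT = "wikipedia_en_all_maxi_2024-01.zim"

# Stream to disk so the ~100GB file never has to fit in RAM.
with requests.get(URL, stream=True, timeout=30) as resp:
    resp.raise_for_status()
    with open(OUT, "wb") as f:
        for chunk in resp.iter_content(chunk_size=1024 * 1024):
            f.write(chunk)

print(f"Saved {OUT}")
```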
I keep several USB drives with some of these archives along with the app installer. In the event of some major catastrophe I’d at least be able to access some potentially useful information. I have no stake in Kiwix, and don’t know if there are other alternative apps and schemes; just thought it was neat.
The text version of Wikipedia*
The images and other media are a hell of a lot more.
It’s 102GB with images, 53GB without.
I presume this is images directly hosted on English Wikipedia and not the entirety of Commons where the vast majority of images are kept, right?
Wikimedia Commons is 373TB of images. https://commons.m.wikimedia.org/wiki/Special:MediaStatistics
So I have to upgrade my NAS again, ay?
You’re not already running petabyte NAS???
Kinda interesting at a broad level … that there’s still something to the efficiency of language.
Sure, storage is cheap now, but so much of the calculation of the utility of data in modern tech presumes an internet connection and retrieval of information over the network.
With the internet going to shit in various ways, local or decentralised computing is making more sense, at least depending on your priorities and perspective. And so all of a sudden, storage tradeoffs become a bit more meaningful. Do I need all of the pictures and media … or would a simple textual description suffice for most instances with high res media available at a more centralised archive if I’m really interested? A picture is worth 1000 words, but takes a hell of a lot more digital storage space!
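To put rough numbers on that last point (assumed figures for illustration, not measurements from the actual dumps):

```python
# Rough storage comparison: 1000 words of text vs. one photo.
avg_word_bytes = 6                        # ~5 letters + a space in UTF-8
thousand_words = 1000 * avg_word_bytes    # ~6 KB
typical_jpeg = 2 * 1024**2                # ~2 MB for a decent-quality photo

print(f"1000 words ≈ {thousand_words / 1024:.0f} KB")
print(f"one photo  ≈ {typical_jpeg / 1024**2:.0f} MB "
      f"(~{typical_jpeg // thousand_words}x more)")
```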
So many home instructions are so much easier with a photograph or two, or better yet a video.
The ~100GB version mentioned above does only have thumbnails/low-res pictures, yeah. Better than nothing for some types of articles, but not everything. The true text-only version is actually only ~53GB though.
Some of the high res photos are ridiculous.
Like an 8000×9000 uncompressed image of someone’s hand that weighs about 22MB.
I know that because I use a lot of royalty free images.
Is there an index of the images or something like that?
https://commons.wikimedia.org/
The images are categorised and there’s a search function.
Thank you very much!
Without images Wikipedia is a “mere” 22.14GB.
https://en.m.wikipedia.org/wiki/Wikipedia:Size_of_Wikipedia#:~:text=The total number of pages,about 22.14 GB without media.
I’ve installed game patches that were larger than this.
They should put it in a popular game patch.
So something akin to this joke image I saw the other day is actually feasible for Wikipedia?
ChatGPT is also probably around 50-100GB at most.
Probably a lot less; keep in mind that whenever it answers a question the whole model is traversed multiple times, and going through that many GB in the few seconds the model takes to answer isn’t possible.
I’d be surprised if it was significantly less. A comparable 70-billion-parameter model from Llama requires about 120GB to store. Supposedly the largest current ChatGPT goes up to 170 billion parameters, which would take a couple hundred GB to store. There are ways to trade off some accuracy to save a bunch of space, but you’re not going to get it under tens of GB.
These models really are going through that many GB of parameters once for every word in the output. GPUs and tensor processors are crazy fast. For comparison, think about how much data a GPU generates for 4K60 video display: it’s like 1GB per second. And the recommended memory speed required to generate that image is like 400GB per second. Crazy fast.
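For a rough sense of the sizes being thrown around here (the parameter counts are the guesses from this thread, not official figures):

```python
# Back-of-the-envelope model sizes: parameters x bytes per parameter.
def size_gb(params_billions: float, bytes_per_param: float) -> float:
    # billions of parameters * bytes/parameter = gigabytes (1e9 bytes)
    return params_billions * bytes_per_param

for params in (70, 170):
    for label, width in (("fp16", 2), ("int8", 1), ("int4", 0.5)):
        print(f"{params}B params @ {label}: ~{size_gb(params, width):.0f} GB")

# Each generated token reads the whole model once, so a ~140 GB fp16 model on
# ~400 GB/s of memory bandwidth tops out around 2-3 tokens per second.
```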
Plus input data?
No, but it’s the model after the input that you need.
So it would fit on a Blu-ray disc.
I mean, you can self-host your own local LLMs using something like Ollama. The performance will be bound by the disk space you have (i.e. the complexity of the model you’re able to store) and the CPU or GPU you’re running it on, but it does work just fine. Probably as good results as ChatGPT for most use cases.
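For anyone curious what that looks like in practice, here’s a minimal sketch using the ollama Python client against a locally running Ollama server (the model name is just an example; pick whatever fits your hardware):

```python
import ollama  # pip install ollama; assumes the Ollama server is running locally

# Pull a model first with `ollama pull llama3` (example model, pick your own),
# then chat with it entirely offline.
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Summarize how ZIM archives work."}],
)
print(response["message"]["content"])
```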
We do this at work (lots of sensitive data that we don’t want OpenAI to capitalize on) and it works pretty well. Hosted locally, set up by a data-security- and privacy-conscious admin who specifically configures it not to save any queries, even on the server. A bit slower than ChatGPT, but not by much.
https://m.youtube.com/watch?v=1lRI35gKSPA
Aside from the text clarification, this is also only the English-language version of Wikipedia.
What worries me though is that most videos linked on Wikipedia are hosted on YouTube. That’s a pretty dangerous choke point.
Videos aren’t an essential part of an encyclopedia.
I never even noticed any videos on Wikipedia. Maybe for some cinema articles.
My brain immediately thought archive.org, but after the last incident I kinda feel like archive.org is going to get sued into oblivion.
I tried searching but found nothing. What incident?
This saved my ass at my engineering chemistry exam (still a requirement, even for software engineers) where only offline tools were allowed. Love Kiwix!
LOL… Malicious compliance at its best…
DYK that Kiwix was actually created by Wikipedia? Back in the late 2000s there was this gigantic effort to select and improve a ton of articles to make an offline “Wikipedia 1.0” release. The only remains of that effort are Kiwix, periodic backups, and an incredibly useful article-rating system.
Can you write more about the rating system you mentioned?
And you should donate to Wikipedia if you’re gonna do that.
I couldn’t afford to donate for a long time, but I used it near daily. So now I make a monthly, probably larger-than-average, contribution to make up for sibs from other cribs who can’t afford it. Pay it forward is indeed a golden rule.
Do you wear a cape? Or are you one of those who doesn’t wear one?
No cape. I’m brown so I’m on the radar bad enough as it is as soon as I leave major cities lol.
The benefit of text not taking up much space.
Is there a git repo for it or do I have to redownload the whole thing to do an update?
I remember a time when it was only 2GB for all of Wikipedia. Usain Bolt had just burst onto the world stage at the time.
And by now he’s exited the solar system at incomprehensible speeds.
I did! I do! Also all public-domain books as part of Project Gutenberg.
I know there are a few companies working on DNA storage. From the comment below about the entirety of Wikipedia and Wikimedia Commons, I’d say that’d be a pretty practical thing to store.
Here’s the wiki article about it.
Imagine downloading it just after some troll changed critical information lmao
I imagine you could also download with all the history of every article
I am currently reading up on terrorists while in the States, but something tells me my IP will get banned. I have read a shitton, and I highly doubt it’s just 100GB. Otherwise you’d see it more on piracy sites.
But it’s freely and easily available to download, why would it be on piracy sites?
China is making a copy. For… reasons.
How high were you when you wrote this?
Where I’m currently working I can’t get high, but I can get drunk. I was neither when I wrote that, though. My ISP is very brutal about looking up stuff or downloading shit.
What on Earth do you mean? Piracy sites share things which aren’t available easily for free otherwise.
https://en.m.wikipedia.org/wiki/Wikipedia:Database_download
And the text-only version of Wikipedia is just 22.14GB.
https://en.m.wikipedia.org/wiki/Wikipedia:Size_of_Wikipedia#:~:text=The total number of pages,about 22.14 GB without media.
I tried to download it but couldn’t get it to work :(
Download the Kiwix app for whatever OS you’re using, then go into Kiwix, click the folder icon in the app, and navigate to where the .zim file you downloaded is located. If you click it, it should automatically pop up and be viewable.
If you did that and it’s still failing, is it giving you a specific error or anything?
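If the app keeps misbehaving, you can also sanity-check the .zim file itself with the python-libzim bindings (a rough sketch; the article path is purely illustrative and varies by archive):

```python
from libzim.reader import Archive  # pip install libzim

zim = Archive("wikipedia_en_all_maxi_2024-01.zim")
main = zim.main_entry.get_item()
print(f"Main entry: {main.path} ({main.size} bytes)")

# Look up a specific article -- the exact path layout varies by archive,
# so "A/Earth" here is just an example and may not exist in your file.
entry = zim.get_entry_by_path("A/Earth")
print(bytes(entry.get_item().content).decode("utf-8")[:500])
```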
It’s already been done: https://m.youtube.com/watch?v=1lRI35gKSPA
What’s already been done…?
Sorry, I meant to reply to the commenter with the ChatGPT-on-a-DVD pic, saying that it’s actually feasible for Wikipedia.