One of the great joys of working on a search engine is that you get to reverse-engineer SEO spam and study how it evolves over time.
I’ve been noticing the search engine spam strategy of adding ‘reddit’ to page titles for a few years now, but it feels like it’s been growing a lot recently. I don’t think it’s actually working, but it’s so cute that they are trying.
As useful as Mozilla/5.0; AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.3
Mine is Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0. The joke is, this is already the trimmed version (via the about:config Xorigin and trimming settings), and some pages already have problems with it. If you strip out the OS part, pages like google.com stop working entirely. And that's despite the fact that you're not supposed to parse the UA string…
What browser agent is that?
Trick is I took out the actually useful parts like Chrome, Firefox, Edge, etc. And the OS. All the agents these days have AppleWebKit and Mozilla just so old websites that look for it don’t downgrade the experience.
Yeah, make your user agent absolutely unique. Too much entropy will surely confuse the shit out of server-side HTTP header tracking. 😬
Having a non-unique user agent probably doesn’t make you not unique.
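As a rough sketch of why that's true (the header values and the choice of hashed headers here are my own illustration, not any particular tracker's method): a server can derive a stable ID from request headers alone, no JavaScript involved, and a hand-trimmed UA only makes that ID rarer and therefore more identifying:

```python
import hashlib

def fingerprint(headers: dict) -> str:
    """Hash a stable subset of request headers into a server-side ID.
    The server sees these headers on every request, JavaScript or not."""
    keys = ("User-Agent", "Accept", "Accept-Language", "Accept-Encoding")
    raw = "|".join(headers.get(k, "") for k in keys)
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

# Two visitors sending the stock Firefox UA look identical at this layer...
common = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0",
    "Accept-Language": "en-US,en;q=0.5",
}
# ...while a hand-trimmed UA produces a fingerprint few others share.
trimmed = {
    "User-Agent": "Mozilla/5.0; AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.3",
    "Accept-Language": "en-US,en;q=0.5",
}

print(fingerprint(common) == fingerprint(dict(common)))  # same headers, same ID
print(fingerprint(common) == fingerprint(trimmed))       # different UA, different ID
```

The point being: the fingerprint only loses its value when many people share it, which is exactly why blending in beats trimming.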
Oh gee, I wasn’t aware there was more to it than the UA. Thanks for opening my eyes.
Edit: I checked your link; most of the parameters in that test require client-side execution. That (client-side tracking) is entirely separate from what I was talking about (server-side tracking), and it's something you can control (by not allowing JavaScript, for example). Please don't confuse the two. There is literally nothing you can do against server-side tracking.
Yeah this isn’t my UA but I’m just saying these parts are what’s considered the supported featureset rather than information about what software the device is running.
Yes, I get that point, but I also think that it’s tempting for the privacy-minded novice to think “the less information I provide, the better!”, while in actuality, it is better to provide “more” information: the most common UA, even if it means lying about your featureset. In this case, truly, more is less.
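The "more is less" point can be put in numbers. The identifying information a trait contributes is its Shannon surprisal, -log2(p); the browser-share figures below are made up purely for illustration, but the shape of the result holds: a UA shared by a third of visitors leaks only a couple of bits, while a one-in-a-million trimmed UA nearly identifies you on its own:

```python
import math

def surprisal_bits(share: float) -> float:
    """Bits of identifying information contributed by a trait
    observed in the given fraction of visitors (Shannon surprisal)."""
    return -math.log2(share)

# Hypothetical visitor shares, for illustration only.
print(round(surprisal_bits(0.30), 2))      # common stock UA: ~1.74 bits
print(round(surprisal_bits(0.000001), 2))  # one-in-a-million UA: ~19.93 bits
```

Around 33 bits is enough to single out one person among everyone on Earth, so a rare UA spends a large chunk of that budget in a single header.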
Firefox doesn’t pretend to use AppleWebKit. It’s actually the only one which identifies itself correctly… mostly, at least:
…while about:support says "Window Protocol: wayland". But that's OK; websites shouldn't care anyway.
It's the other browsers that send things like "like Gecko" to sneak past old browser-detection code.
Probably Netscape