- cross-posted to:
- [email protected]
cross-posted from: https://lemmy.zip/post/446751
Alternative title: Musk sues data scrapers, blames them for Twitter’s “impaired user experience”
Maybe if Twitter provided a reasonably priced API, people wouldn’t have to resort to scrapers.
Scrapers aren’t ever going to use an API, but Twitter could have just blocked unrecognized traffic if they wanted. I’m sure it’s covered by the TOS.
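For illustration, here’s a minimal sketch of what “blocking unrecognized traffic” could look like at the application layer. Everything here is hypothetical: the header checks and the client allowlist are made up for the example, not Twitter’s actual rules.

```python
# Toy sketch: reject requests that don't look like a known client.
# KNOWN_CLIENTS and the header checks are hypothetical, for illustration only.
KNOWN_CLIENTS = ("TwitterAndroid", "TwitterWeb")

def is_recognized(headers: dict) -> bool:
    agent = headers.get("User-Agent", "")
    has_session = bool(headers.get("Authorization"))
    return has_session and any(c in agent for c in KNOWN_CLIENTS)

def handle_request(headers: dict) -> int:
    # Return an HTTP status: 200 for recognized traffic, 403 otherwise.
    return 200 if is_recognized(headers) else 403

print(handle_request({"User-Agent": "curl/8.0"}))  # 403
print(handle_request({"User-Agent": "TwitterWeb",
                      "Authorization": "Bearer x"}))  # 200
```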
I think what the previous dude said is that people wouldn’t have resorted to scraping if there were a good API. One of the major reasons people and organisations choose scraping is that it’s better than paying insane amounts of money for an insanely low number of API calls.
There was a good API before Elon essentially shut it down. But that is irrelevant.
The reason mass data scrapers like OpenAI wouldn’t rely on an API is that they are getting data from the entire Internet (and other sources that aren’t online). They want raw data, and they want as much of it, as varied as possible. It’s much easier, cheaper, and more practical to build tools that scrape websites generically than to integrate with thousands of completely independent and different APIs.
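To make the contrast concrete, here’s a rough sketch (standard-library Python) of why generic scraping scales: the same few lines work on any public HTML page, whereas every API is its own integration project.

```python
# Minimal generic text scraper: one function works on any page,
# which is why large-scale scrapers don't bother with per-site APIs.
import urllib.request
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []
        self.skip = False  # ignore script/style contents

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = False

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.chunks.append(data.strip())

def scrape_text(url: str) -> str:
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)

# The same call works on any site -- no per-site API client needed.
print(scrape_text("https://example.com")[:200])
```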
It’s the same reason that Reddit complaining about “AI taking all of our data” is bullshit. “AI” is just a convenient excuse and the most recent tech buzzword.
They are largely mad because of how effective the AI is. If the data were being used just to improve Swype for texting, people would care less. I care more about artists’ complaints about being replaced than about big tech companies complaining that content they didn’t create is being used to create things.
Also, I decided to read OpenAI’s GPT-2 paper, and they were pretty clear about the dataset they created:
"Instead, we created a new web scrape which emphasizes document quality. To do this we only scraped web pages which have been curated/filtered by humans. Manually filtering a full web scrape would be exceptionally expensive so as a starting point, we scraped all outbound links from Reddit, a social media platform, which received at least 3 karma. This can be thought of as a heuristic indicator for whether other users found the link interesting, educational, or just funny.
The resulting dataset, WebText, contains the text subset of these 45 million links."
That’s a nice sized dataset from real people that’s already somewhat filtered by quality. They were totally scraping Reddit very specifically and now that people see it’s effective, anyone else who wants to make their own chatgpt or wants to improve their models will do the same.
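The heuristic itself is easy to picture. Here’s a toy version of that filter; the sample submissions are invented for illustration, and the real pipeline obviously ran over a full dump of Reddit posts rather than a hardcoded list.

```python
# Toy reproduction of the WebText filtering heuristic: keep outbound
# links from Reddit submissions that earned at least 3 karma.
# The sample data below is invented for illustration.
submissions = [
    {"url": "https://example.com/good-article", "karma": 57},
    {"url": "https://example.com/spam", "karma": 1},
    {"url": "https://example.com/funny", "karma": 3},
]

MIN_KARMA = 3  # the threshold described in the GPT-2 paper

kept = [s["url"] for s in submissions if s["karma"] >= MIN_KARMA]
print(kept)  # ['https://example.com/good-article', 'https://example.com/funny']
```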
Very interesting, thanks for sharing!
How exactly do you expect them to do that? It’s not a trivial problem.
Twitter can remove their servers from the public facing internet if they don’t want the traffic.
Best solution imho
Twitter very much does not want that to happen. Remember two weeks ago when Musk reversed his decision to block anyone but registered users from seeing tweets, because Google had started removing links to Twitter since those links were effectively dead?
Right-wingers don’t just want to be bigoted assholes with megaphones, they want to make sure decent people have to hear them too.
Yeah, people think this is like trying to stop drive-by bots that are looking for PHP vulnerabilities. It isn’t.
Usually you’re trying to stop someone who is spending their entire day trying to scrape your site. It’s a full-time job trying to stop them, and even then it’s a cat-and-mouse game at best.
Still don’t think Elon is going to get anywhere with this though.
The same way any large web service would identify a sudden increase in traffic, whether malicious or not. For the servers I manage, we end up dealing with more unintentionally out-of-control bots than we do genuine hacking or DDoS attempts.
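As a concrete example, a crude version of that detection is just counting requests per client over a sliding window and flagging outliers. The window size and threshold below are invented for illustration.

```python
# Crude sketch of spotting a sudden traffic increase: count requests
# per client IP in a sliding time window and flag anything far above
# the norm. Window size and threshold are invented for illustration.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 300  # hypothetical limit

hits = defaultdict(deque)  # ip -> timestamps of recent requests

def record_and_check(ip, now=None):
    """Record one request; return True if this IP looks out of control."""
    now = time.time() if now is None else now
    q = hits[ip]
    q.append(now)
    while q and q[0] < now - WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_REQUESTS_PER_WINDOW

# Simulate a scraper hammering the server within one window.
flagged = any(record_and_check("203.0.113.7", now=1000.0 + i * 0.1)
              for i in range(400))
print(flagged)  # True
```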