Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned soo many 'esoteric' right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged 'culture critics' who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
what kind of semen retention scheme is this
No Nut Neuravember
I want my kids to be science experiments, as there is no other way an ethics board would approve this kind of thing.
I'm pretty sure there are some other factors he's gonna need to sort out before having kids is even an actual question. For example, finding a woman who wants to have his kids and let him fuck with their infant brains.
Also, given how we see the brain develop in cases of traumatic injury, I would expect to see that neuroplasticity route around any kind of implant under most circumstances. Nerves aren't wires and you can't just plug 'em in and wait for a software patch.
Turns out some Silicon Valley folk are unhappy that a whole load of waymos got torched, fantasised that the cars could just gun down the protesters, and used genai video to bring their fantasies to some vague approximation of "life"
https://xcancel.com/venturetwins/status/1931929828732907882
The author, Justine Moore, is an investment partner at a16z. May her future ventures be incendiary and uninsurable.
(via garbageday.email)
I wonder if some of those chuds think those waymos might be conscious
What is it with every fucking veo3 video being someone talking to the camera?! Artificial slop model tuned on humanmade slop.
Seeing shit like this alongside the discussions of the use of image recognition and automatic targeting in the recent Ukrainian drone attacks on Russian bombers is not great.
Also something something sanitized violence something something. These people love to fantasize about the thrill of defending themselves and their ideology with physical force but even in their propaganda are (rightly) disgusted and terrified by the consequences that such violence has on actual people.
At first I was like:
But then I was like:
In other news, reports of AI images turning everything yellow have made it to Know Your Meme.
"Cursor YOLO deleted everything in my computer":
Hi everyone - as a previous context I'm an AI Program Manager at J&J and have been using Cursor for personal projects since March.
Yesterday I was migrating some of my back-end configuration from Express.js to Next.js and Cursor bugged hard after the migration - it tried to delete some old files, didn't work at the first time and it decided to end up deleting everything on my computer, including itself. I had to use EaseUS to try to recover the data, but didn't work very well also. Lucky I always have everything on my Google Drive and Github, but it still scared the hell out of me.
Now I'm allergic to YOLO mode and won't try it anytime soon again. Does anyone had any issue similar than this or am I the first one to have everything deleted by AI?
The response:
Hi, this happens quite rarely but some users do report it occasionally.
My T-shirt is raising questions already answered, etc.
(via)
I looked this up because I thought it was a nickname for something, but no, Cursor seems to have a setting that's officially called YOLO mode. As per their docs:
With Yolo Mode, the agent can auto-run terminal commands
So this guy explicitly ticked the box that allowed the bullshit generator to execute arbitrary code on his machine. Why would you ever use that? What's someone's rationale for enabling a setting like that? They even name it YOLO mode. It's like the fucking red button in the movie that says, don't push the red button, and promptfans are still like, yes, that sounds like a good idea!
Can you imagine selling something like a firewall appliance with a setting called "Yolo Mode", or even a tax software or a photo organizer or anything that handles any data, even if only of middling importance, and then still expect to be taken seriously at all?
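For what it's worth, the entire difference a toggle like that makes boils down to whether a human approval step sits between the model's output and your shell. A hypothetical sketch of the general shape of an agent's command-execution step (this is not Cursor's actual code; every name here is made up):

```python
# Hypothetical sketch of an agent's command-execution step - not Cursor's code.
# The only thing a "YOLO"-style toggle changes is whether a human gets to veto
# the command before it hits a shell.
import subprocess

def run_agent_step(suggested_command: str, yolo_mode: bool) -> None:
    print(f"agent wants to run: {suggested_command}")
    if not yolo_mode:
        # Normal mode: every command needs explicit human approval.
        if input("run it? [y/N] ").strip().lower() != "y":
            print("skipped")
            return
    # YOLO mode: whatever text the model produced goes straight to a shell.
    subprocess.run(suggested_command, shell=True)
```

With yolo_mode=True there is nothing in that loop - no allowlist, no sandbox in this sketch - between "the autocomplete emitted a destructive command" and "the shell ran it", which is exactly the failure mode in the post above.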
Setting my oven to YOLO Mode and dying in a fire 7 seconds later
I set my car to YOLO mode by pointing the vehicle roughly the direction I wish to travel and then dropping a brick on the accelerator.
We already have this, it's just a Tesla with "FSD" on
I thought FSD only works if there are kids around for the car to aim at.
Yolo charging mode on a phone, disable the battery overheating sensor and the current limiter.
I suspect that they added yolo mode because without it this thing is too useless.
My tax prep software definitely has a mode called "give me Deus Ex"
There is an implicit claim in the red button that it was worth including.
It is like Google's AI overviews. There cannot be a sufficient disclaimer, because the overview being on the top of Google search implies a level of usefulness which it does not meet, not even in the "evil plan to make more money briefly" way.
Edit: my analogy to AI disclaimers is using "this device uses nuclei known to the state of California to..." in place of "drop and run".
Well, they can't fully outsource thinking to the autocomplete if they get asked whether some actions are okay.
deserved tbh
I was reading a post by someone trying to make shell scripts with an llm, and at one point the system suggested making a directory called
~
(which is a shorthand for your home directory in a bunch of unix-alikes). When the user pointed out this was bad, the llm recommended remediation using rm -r ~
which would of course delete all your stuff. So, yeah, don't let the approximately-correct machine do things by itself, when a single character substitution can destroy all your stuff.
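For anyone who hasn't been bitten by this before, here's a harmless little demo of why that one character is so load-bearing - Python on a unix-alike, and it only echoes the dangerous command instead of running it:

```python
# Harmless demo: whether "~" means "a junk directory in the current folder"
# or "your entire home directory" depends on who interprets it.
import os
import subprocess
import tempfile

workdir = tempfile.mkdtemp()   # scratch area so nothing real gets touched
os.chdir(workdir)

os.mkdir("~")                  # Python passes the name through literally...
print(os.path.isdir("./~"))    # True: a directory actually named "~"

# ...but a shell expands an unquoted ~ to $HOME before the command even runs:
print(subprocess.run("echo rm -r ~", shell=True,
                     capture_output=True, text=True).stdout.strip())
# -> "rm -r /home/you"   (the whole home directory)

# Quoted, it stays literal and only names the junk directory:
print(subprocess.run("echo rm -r '~'", shell=True,
                     capture_output=True, text=True).stdout.strip())
# -> "rm -r ~"
```

Same handful of characters, wildly different blast radius, and the approximately-correct machine has no idea which one it meant.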
And JFC, being surprised that something called "YOLO" might be bad? What were people expecting?
--all-the-red-flags
The basilisk we have at home
And back on the subject of builder.ai, there's a suggestion that it might not have been A Guy Instead, and the whole 700 human engineers thing was a misunderstanding.
https://blog.pragmaticengineer.com/builder-ai-did-not-fake-ai/
I'm not wholly sure I buy the argument, which is roughly:
- people from the company are worried that this sort of news will affect their future careers.
- humans in the loop would have exhibited far too high latency, and getting an llm to do it would have been much faster and easier than having humans try to fake it at speed and scale.
- there were over a thousand "external contractors" who were writing loads of code, but that's not the same as being Guys Instead.
I guess the question then is: if they did have a good genai tool for software dev... where is it? Why wasn't Microsoft interested in it?
Bringing over aio's comment from the end of last week's stubsack:
This week the WikiMedia Foundation tried to gather support for adding LLM summaries to the top of every Wikipedia article. The proposal was overwhelmingly rejected by the community, but the WMF hasn't gotten the message, saying that the project has been "paused". It sounds like they plan to push it through regardless.
Way down in the linked wall o' text, there's a comment by "Chaotic Enby" that struck me:
Another summary I just checked, which caused me a lot more worries than simple inaccuracies: Cambrian. The last sentence of that summary is "The Cambrian ended with creatures like myriapods and arachnids starting to live on land, along with early plants.", which already sounds weird: we don't have any fossils of land arthropods in the Cambrian, and, while there has been a hypothesis that myriapods might have emerged in the Late Cambrian, I haven't heard anything similar being proposed about arachnids. But that's not the worrying part.
No, the issue is that nowhere in the entire Cambrian article are myriapods or arachnids mentioned at all. Only one sentence in the entire article relates to that hypothesis: "Molecular clock estimates have also led some authors to suggest that arthropods colonised land during the Cambrian, but again the earliest physical evidence of this is during the following Ordovician". This might indicate that the model is relying on its own internal knowledge, and not just on the contents of the article itself, to generate an "AI overview" of the topic instead.
Further down the thread, there's a comment by "Gnomingstuff" that looks worth saving:
There was an 8-person community feedback study done before this (a UI/UX test using the original Dopamine summary), and the results are depressing as hell. The reason this was being pushed to prod sure seems to be the cheerleading coming from 7 out of those 8 people: "Humans can lie but AI is unbiased," "I trust AI 100%," etc.
Perhaps the most depressing is this quote - "This also suggests that people who are technically and linguistically hyper-literate like most of our editors, internet pundits, and WMF staff will like the feature the least. The feature isn't really 'for' them" - since it seems very much like an invitation to ignore all of us, and to dismiss any negative media coverage that may ensue (the demeaning "internet pundits").
Sorry for all the bricks of text here, this is just so astonishingly awful on all levels and everything that I find seems to be worse than the last.
Another comment by "CMD" evaluates the summary of the dopamine article mentioned there:
The first sentence is in the article. However, the second sentence mentions "emotion", a word that while in a couple of reference titles isn't in the article at all. The third sentence says "creating a sense of pleasure", but the article says "In popular culture and media, dopamine is often portrayed as the main chemical of pleasure, but the current opinion in pharmacology is that dopamine instead confers motivational salience", a contradiction. "This neurotransmitter also helps us focus and stay motivated by influencing our behavior and thoughts". Where is this even from? Focus isn't mentioned in the article at all, nor is influencing thoughts. As for the final sentence, depression is mentioned a single time in the article in what is almost an extended aside, and any summary would surely have picked some of the examples of disorders prominent enough to be actually in the lead.
So that's one of five sentences supported by the article. Perhaps the AI is hallucinating, or perhaps it's drawing from other sources like any widespread llm. What it definitely doesn't seem to be doing is taking existing article text and simplifying it.
but the WMF hasn't gotten the message, saying that the project has been "paused". It sounds like they plan to push it through regardless.
Classic "Yes" / "ask me later". You hate to see it.
The thing that galls me here even more than other slop is that there isn't even some kind of horrible capitalist logic underneath it. Like, what value is this supposed to create? Replacing the leads written by actual editors, who work for free? You already have free labor doing a better job than this, why would you compromise the product for the opportunity to spend money on compute for these LLM not-even-actually-summaries? Pure brainrot.
Some AI company waving a big donation outside of the spotlight? Dorks trying to burnish their resumes?
Ya gotta think it's going to lead to a rebellion.
Maybe someone has put into their heads that they have to "go with the times", because AI is "inevitable" and "here to stay". And if they don't adapt, AI would obsolete them. That Wikipedia would become irrelevant because their leadership was hostile to "progress" and rejected "emerging technology", just like Wikipedia obsoleted most of the old print encyclopedia vendors. And one day they would be blamed for it, because they were stuck in the past at a crucial moment. But if they adopt AI now, they might imagine, one day they will be praised as the visionaries who carried Wikipedia over to the next golden age of technology.
Of course all of that is complete bullshit. But instilling those fears ("use it now, or you will be left behind!") is a big part of the AI marketing messaging which is blasted everywhere non-stop. So I wouldn't be surprised if those are the brainworms in their heads.
That's probably true, but it also speaks to Ed Zitron's latest piece about the rise of the Business Idiot. You can explain why Wikipedia disrupted previous encyclopedia providers in very specific terms: crowdsourced production to volunteer editors cuts costs massively and allows the product to be delivered free (which also increases the pool of possible editors and improves quality), and the strict* adherence to community standards and sourcing guidelines prevents the worse loss of truth and credibility that you might expect.
But there is no such story that I can find for how Wikipedia gets disrupted by Gen AI. At worst it becomes a tool in the editor's belt, but the fundamental economics and structure just aren't impacted. But if you're a business idiot then you can't actually explain it either way, and so of course it seems plausible.
Example #"I've lost count" of LLMs ignoring instructions and operating like the bullshit spewing machines they are.
A comparison springs to mind: inviting the most pedantic nerds on Earth to critique your chatbot slop is a level of begging to be pwned that's on par with claiming the female orgasm is a myth.
So, I've been spending too much time on subreddits with heavy promptfondler presence, such as /r/singularity, and the reddit algorithm keeps recommending me subreddits with even more unhinged LLM hype. One annoying trend I've noted is that people constantly conflate LLM-hybrid approaches, such as AlphaGeometry or AlphaEvolve (or even approaches that don't involve LLMs at all, such as AlphaFold), with LLMs themselves. From there they act like of course LLMs can [insert things LLMs can't do: invent drugs, optimize networks, reliably solve geometry exercises, etc.].
Like, I saw multiple instances of commenters questioning/mocking/criticizing the recent Apple paper using AlphaGeometry as a counterexample. AlphaGeometry can actually solve most of the problems without an LLM at all: the LLM component replaces a set of heuristics that make suggestions on proof approaches, while the majority of the proof work is done by a symbolic AI working with a rigid formal proof system.
I don't really have anywhere I'm going with this, just something I noted that I don't want to waste the energy repeatedly re-explaining on reddit, so I'm letting a primal scream out here to get it out of my system.
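To make the division of labour concrete, the general shape of that kind of hybrid looks roughly like the sketch below. This is a schematic toy with made-up function names, not DeepMind's actual code: the symbolic deduction engine does the real proof work, and the suggestion source - which can be an LLM, but can just as well be a pile of hand-written heuristics - only gets consulted when the symbolic side runs out of moves.

```python
# Schematic sketch of an AlphaGeometry-style loop (toy code, invented names).
# The suggester never proves anything; it only proposes auxiliary constructions
# when the symbolic engine is stuck.
from typing import Callable, Optional, Set

def solve(premises: Set[str], goal: str,
          deduce: Callable[[Set[str]], Set[str]],
          suggest_construction: Callable[[Set[str], str], Optional[str]],
          max_suggestions: int = 10) -> bool:
    facts = set(premises)
    for _ in range(max_suggestions):
        # 1. Symbolic part: apply formal inference rules until nothing new
        #    appears. This is where the actual proving happens.
        while True:
            new = deduce(facts) - facts
            if not new:
                break
            facts |= new
        if goal in facts:
            return True          # proved; no language model involved at all
        # 2. Stuck: ask the suggestion source (heuristics or an LLM) for one
        #    auxiliary construction, e.g. "add the midpoint of A and B".
        hint = suggest_construction(facts, goal)
        if hint is None:
            return False         # nothing left to try
        facts.add(hint)          # 3. hand it back to the symbolic engine
    return False
```

Swap suggest_construction between a lookup table and a call to a model and the skeleton doesn't change at all, which is why "AlphaGeometry solved olympiad problems, therefore LLMs can do maths" doesn't follow.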
Relatedly, the gathering of (useful, actually works in real life, can be used to make products that turn a profit or that people actually want, and sometimes even all of the above at the same time) computer vision and machine learning and LLMs under the umbrella of "AI" is something I find particularly galling.
The eventual collapse of the AI bubble and the subsequent second AI winter is going to take a lot of useful technology with it that had the misfortune to be standing a bit too close to LLMs.
Yes, thank you, I'm also annoyed about this. Even classic "AI" approaches for simple pattern detection (what used to be called "ML" a few hype waves ago, although it's much older than that even) are now conflated with capabilities of LLMs. People are led to believe that ChatGPT is the latest and best and greatest evolution of "AI" in general, with all capabilities that have ever been in anything. And it's difficult to explain how wrong this is without getting too technical.
Related, this fun article: ChatGPT "Absolutely Wrecked" at Chess by Atari 2600 Console From 1977
Did you know there's a new fork of xorg, called x11libre? I didn't! I guess not everyone is happy with wayland, so this seems like a reasonable
It's explicitly free of any "DEI" or similar discriminatory policies... [snip]
Together we'll make X great again!
Oh dear. Project members are of course being entirely normal about the whole thing.
Metux, one of the founding contributors, is Enrico Weigelt, who has reasonable opinions like "everyone except the nazis were the real nazis in WW2", and also had an anti-vax (and possibly eugenicist) rant on the linux kernel mailing list, as you do.
I'm sure it'll be fine though. He's a great coder.
(links were unashamedly pillaged from this mastodon thread: https://nondeterministic.computer/@mjg59/114664107545048173)
Ok, maybe someone can help me here figure something out.
I've wondered for a long time about a strange adjacency which I sometimes observe between what I call (due to lack of a better term) "unix conservativism" and fascism. It's the strange phenomenon where ideas about "classic" and "pure" unix systems coincide with the worst politics. For example the "suckless" stuff. Or the ramblings of people like ESR. Criticism of systemd is sometimes infused with it (yes, there is plenty of valid criticism as well. But there's this other kind of criticism I've often seen, which is icky and weirdly personal). And I've also seen traces of this in discussions of programming languages newer than C, especially when topics like memory safety come up.
This is distinguished from retro computing and nostalgia and such, those are unrelated. If someone e.g. just likes old unix stuff, that's not what I mean.
You may already notice, I struggle a bit to come up with a clear definition and whether there really is a connection or just a loose set of examples that are not part of a definable set. So, is there really something there or am I seeing a connection that doesn't exist?
I've also so far not figured out what might create the connection. Ideas I have come up with are: appeal to times that are gone (going back to an idealized computing past that never existed), elitism (computers must not become user friendly), ideas of purity (an imaginary pure "unix philosophy").
Anyway, now with this new xlibre project, there's another one that fits into it...
I think the common ground is a fear of loss of authority to which they feel entitled. They learned the "old" ways of SysV RC, X11, etc. etc. and that is their domain of expertise, in which they fear being surpassed or obsoleted. From there, it's easy to combine that fear with the fears stoked by adjacent white/male supremacist identity politics and queerphobia, plus the resentment already present from stupid baby slapfights like vi vs emacs or systemd vs everything else, and generate a new asshole identity in which they feel temporarily secure. Fear of loss of status drives all of this.
Except my feeling is it's mostly people who have grown up with Linux as a settled fact of computing life, not Unix greybeards.
Absolutely. Take the reverence for "SysV" init* to the point where the init system has all but eclipsed the AT&T Unix release as the primary meaning of "System V". The BSDs (at least the Net/Open branch, not sure about FreeBSD) adopted a simplified BSD init/rc model ages ago and Solaris switched to systemd-esque SMF with little uproar. Personally I even prefer SMF over its Linux equivalents, despite the cumbersome XML configuration.
I somewhat understand the terminalchud mindset, a longing for a supposed simpler time where a nerd could keep a holistic grasp of one's computing system in their head. Combine that with the tech industry's pervasive male chauvinism and dogmatic adherence to a law of "simplify and reduce weight" (usually a useful rule of thumb) and you end up with terrible social circles making bad software believing they're great on both fronts.
* Rather, the Linux implementation of the concept
Nostalgia has a lowkey reactionary impulse to it (see also why those right wing reactionary gamer streamers who do ten-hour react-and-criticize-a-movie streams have their backgrounds filled with consumer nerd media toys (and almost never books)), and fear of change is also a part of conservatism. "Engineering minds" who think they can solve things, and have a bit more rigid thinking, also tend to be attracted to more extremist ideologies (which usually seem to have more rigid rules and fewer exceptions), which also leads back to the problem where people like this are bad at realizing their minds are not typical (I can easily use a console, so everyone else can and should). So it makes sense to me. Not sure if the UI thing is elitism or just a strong desire to create and patrol the borders of an ingroup. (But isn't that just what elitism is?)
I sometimes feel that I, as someone who also likes retro computing and even deliberately uses old software because it feels familiar and cozy to me, and because it's often easier to hack and tweak (in the same way that someone would prefer a vintage car they can maintain themselves, I guess), get thrown in with these people - and yes, I also find it super hard to put a finger on it.
I also feel they're very prominent in the Vim community for the exact same reasons you mentioned. I like Vim, I use it daily and it's my favorite editor because it's what I am used to and I know how to tweak it, and I can't be bothered to use anything else (except Emacs, but only with evil-mode), but fuck me if Vim evangelists aren't some of the most obnoxious people online.
Don't have much to add, other than I first became aware of this connection when Freenode imploded. I wrote in a short essay that
[the] dominant ideology of new Freenode is free speech, anti-LGBT, and adherence to fringe Unix shibboleths such as anti-systemd, anti-Codes of Conduct, and anti anti-RMS.
(src)
Maybe it's connected to the phenomenon of old counter-cultural activists becoming massive racists.
The whole Linux userbase loves x11libre, an initiative to preserve X11 alive as an alternative to Wayland! 5 seconds later We regret to inform you x11libre guy is a Nazi apologist
not even just an apologist, a literal hardcore german neonazi. matthew garrett surfaced a 2018 mail to a devuan mailing list chock full of nazi historical revisionism that you don't get into casually.
Holy shit, yup, that's a literal neo-nazi.
milkshakeLibre
@rook
It seems to be so libre that it's liberating itself of people wanting to use/contribute to it!
@BlueMonday1984 (this probably deserves its own post because it seems destined to be a shitshow full of the worst people, but I know nothing about the project or the people currently involved)
New Zitron dropped, and, fuck, I feel this one in my bones.
What does the "better" version of ChatGPT look like, exactly? What's cool about ChatGPT? [...] Because the actual answer is "a ChatGPT that actually works." [...] A better ChatGPT would quite literally be a different product.
This is the heart of recognizing so much of the bullshit in the tech field. I also want to make sure that our friends in the Ratsphere get theirs for their role in enabling everyone to pretend there's a coherent path between the current state of LLMs and that hypothetical future where they can actually do things.
But the Ratspace doesn't just expect them to actually do things, but also to self-improve. Which is another step above just human-level intelligence; it also means that self-improvement is possible (and, on the highest level of nuttiness, unbounded), a thing we have not even seen to be possible. And it certainly doesn't seem to be, as the gaps between newer, better versions of ChatGPT seem to be increasing (an interface around it doesn't count). So imho, given ChatGPT/LLMs and the lack of fast improvements we have seen recently (some even say performance has decreased, so we are not even getting incremental innovations), the "could lead to AGI-foom" possibility space has actually shrunk, as LLMs will not take us there. And everything including the kitchen sink has been thrown at the idea. To use some AI-weirdo lingo: with the decels not in play(*), why are the accels not delivering?
*: And let's face it, on the fronts that matter, we have lost the battle so far.
E: full disclosure, I have not read Zitron's article; they are a bit long at times. Look at it this way: you could read 1/4th of an SSC article in the same time.
Can confirm that about Zitron's writing. He even leaves you with a sense of righteous fury instead of smug self-satisfaction.
And I think that the whole bullshit "foom" argument is part of the problem. For the most prominent "thinkers" in spaces related to or overlapping with where these LLM products are coming from, the narrative was never about whether or not these models were actually capable of what they were being advertised for. Even the stochastic parrot argument, arguably the strongest and most well-formulated anti-AI argument when the actual data was arguably still coming in, was dismissed basically out of hand. "Something something emergent something." Meanwhile they just keep throwing more money and energy into this goddamn pit and the real material harms keep stacking up.
hacker news is illiterate
https://news.ycombinator.com/item?id=44245053
I question whether or not some of these commenters have a theory of mind. The product under discussion is a horror show of reified solipsism. For the commenters, books are merely the written form of the mouth noises they use to get other meat robots to do things and which are sometimes entertaining when piled up in certain ways.
"Words or bodies?" you might ask. Yes.
PS: channeling the spiritu drilum
https://news.ycombinator.com/item?id=44246874
You cannot stop people from making the world worse or better. The best you can do is focus on your own life.
In time many will say we are lucky to live in a world with so much content, where anything you want to see or read can be spun up in an instant, without labor.
And though most will no longer make a living doing some of these content creation activities by hand and brain, you can still rejoice knowing that those who do it anyway are doing it purely for their love of the art, not for any kind of money. A human who writes or produces art for monetary reasons is only just as bad as AI.
so much content
The choice of, or instinctive reaching for, the word content speaks volumes.
where anything you want to see or read can be spun up in an instant, without labor.
"Without labor," sure.
Gross and heartbreaking
If we do so much shit for "monetary reasons" then why do I give so much of my money to a landlord every month? Or a fucking grocery store?
I don't have the headspace to sneer it properly at this moment, but this article fucking goes places - might even be worthy of its own techtakes post
Shawn Schneider, a 22-year-old who dropped out of his Christian high school, briefly attended community college, dropped out again, and earlier this year founded a marketing platform for generative AI, tells me college is outdated. Skipping it, for him, is as efficient as it is ideological. "It signals DEI," he says. "It signals, basically, woke and compromised institutions. At least in the circles I run in, the sentiment is like they should die."
Schneider says the women from his high school in Idaho were "so much better at doing what the teacher asks, and that was just not what I was good at or what the other masculine guys I knew were good at." He's married with two children, a girl and a boy, which has made him realize that schools should be separated by gender to "make men more manly, and women more feminine."
Least fascist programmer
Nothing in the article suggests he is a programmer, or that being a programmer is inherently fascist.
You're both incorrect. I am the least fascist programmer and I'm here to tell you programming is inherently fascist.
They say that you can't destroy the master's house with the master's tools, but what about hammers?
Yea, what if the master owns a wrecking ball, a bulldozer, a heavy duty excavator and a bunch of dynamite?
Yes, this is a metaphor for C programming, how did you know?
Does master own a Sawzall?
Funny thing, the sawzall vanished right before his catalytic converter went missing
That was one wild read, even worse than I was expecting. Holy sexism Batman, the incel to tech pipeline is real.
"In college, you don't learn the building skills that you need for a startup," Tan says of his decision. "You're learning computer science theory and stuff like that. It's just not as helpful if you want to go into the workforce."
I remember when a large part of the university experience was about meeting people, experiencing freedom from home for the first time before being forced into the 9-5 world, and broadening your horizons in general. But maybe that's just the European perspective.
In any case, these people are so fucking startup-brained that it hurts to think about.
Now 25, Guild dropped out of high school in the 10th grade to continue building a Minecraft server he says generated hundreds of thousands of dollars in profit.
Serious question: how? Isn't Minecraft free to play and you can just host servers yourself on your computer? I tried to search up "how to make money off a Minecraft server" and was (of course) met with an endless list of results of LLM slop I could not bear to read more than one paragraph of.
Amid political upheaval and global conflict, Palantir applicants are questioning whether college still serves the democratic values it claims to champion, York says. "The success of Western civilization," she argues, "does not seem to be what our educational institutions are tuned towards right now."
Yes, because Palantir is such a beacon of defending democratic values and not a techfash shithouse at all.
how? Isn't Minecraft free to play and you can just host servers yourself on your computer?
For years now, custom plugins have made public Minecraft servers much less "block building game" than "robust engine for MMOs that every kid with a computer already has the client for," and even though it's mostly against Mojang's TOS, all the kinds of monetization you'd expect have followed. When you hear "Minecraft server that generated hundreds of thousands of dollars in profit," imagine "freemium PC game that generated hundreds of thousands of dollars in profit" and you'll get roughly the right picture. Peer pressure-driven cosmetics, technically-TOS-violating-but-who-cares lootboxes, $500 "micro"transaction packages, anything they can get away with. It puts into perspective why you hear so much about Minecraft YouTubers running their own servers.
Uni is also a good place to learn to fail. A uni-run startup-imitation place can both provide the problems (guided by profs if needed) and teach people how to do better, without being in the pockets of VCs. Also: better hours, and parties.
In the Year of Our Lord 2025 how does anyone, much less a published journalist, not recognize "Western Civilization" as a dog whistle for white (or at least European) supremacy rather than having anything to do with representative government or universal human rights or whatever people like to pretend.
Re: minecraft - kids/people who aren't very good at technology can't or are unwilling to learn how to host their own servers, so that's your potentially paying audience. Or people who want to play with a ton of other people, not just their family/friends. And you can do some interesting things with custom scripts and so on on a server, I remember briefly playing on a server which had its own custom in-game currency (earned by selling certain materials) and you could buy potions, equipment and various random perks for it (and of course there are ways to connect that to real money, although you might get banned for it).
Got a hilarious story for today: ChatGPT Lost a Chess Game to an Atari 2600
Got curious and wanted to see if I could beat the Atari 2600. Found an online emulator here.
"Easiest" difficulty appears to be 8, followed by 1, then increasing in difficulty up to 7. I can beat 8, and the controls and visuals are too painful for me to try anything more than this.
At the same time, we have a "Heartbreaking: The Worst Person You Know" etc. in the article itself:
"What does a human slowly going insane look like to a corporation?" Mr. Yudkowsky asked in an interview. "It looks like an additional monthly user."
Couple months ago I saw a flurry of posts from far-right accounts going "Jeffrey Epstein Innocent (he didn't do it)." Now it's morphing into "Jeffrey Epstein Innocent (he DID do it, but ackshually it's ephebophilia and if ONLY someone would do something about those pesky Age of Consent laws...)"
PS: AT deleted her post, fortunately someone saved it to Internet Archive
Yeah, it's the BAP crew. The last few years saw an inrush of far right lolicon fans into that space.
ran across this, just quickly wanted to scream infinitely
(as an aside, I've also recently (finally) joined the ACM, and clicking around in that has so far been... quite the experience. I actually want to make a bigger post about it later on, because it is worth more than a single-comment sneer)
- You will understand how to use AI tools for real-time employee engagement analysis
- You will create personalized employee development plans using AI-driven analytics
- You will learn to enhance employee well-being programs with AI-driven insights and recommendations
You will learn to create the torment nexus
- You will prepare your career for your future work in a world with robots and AI
You will learn to live in the torment nexus
- You will gain expertise in ethical considerations when implementing AI in HR practices
I assume it's a single slide that says "LOL who cares"