Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.

The post-Xitter web has spawned soo many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • blakestacey@awful.systems · 2 hours ago

    A lesswrong declares,

    social scientists are typically just stupider than physical scientists (economists excepted).

    As a physicist, I would prefer not receiving praise of this sort.

    The post to which that is a comment also says a lot of silly things, but the comment is particularly great.

  • BlueMonday1984@awful.systems (OP) · 2 hours ago

    New piece from Brian Merchant: DOGE's 'AI-first' strategist is now the head of technology at the Department of Labor, which is about...well, exactly what it says on the tin. Gonna pull out a random paragraph which caught my eye, and spin a sidenote from it:

    "I think in the name of automating data, what will actually end up happening is that you cut out the enforcement piece," Blanc tells me. "That's much easier to do in the process of moving to an AI-based system than it would be just to unilaterally declare these standards to be moot. Since the AI and algorithms are opaque, it gives huge leeway for bad actors to impose policy changes under the guise of supposedly neutral technological improvements."

    How well Musk and co. can impose those policy changes is gonna depend on how well they can paint them as "improving efficiency" or "politically neutral" or some random claptrap like that. Between Musk's own crippling incompetence, AI's utterly rancid public image, and a variety of factors I likely haven't factored in, imposing them will likely prove harder than they thought.

    (I'd also like to recommend James Allen-Robertson's "Devs and the Culture of Tech", which goes deep into the philosophical and ideological factors behind this current technofash-stravaganza.)

  • BlueMonday1984@awful.systems (OP) · 12 hours ago

    Ran across a short-ish thread on BlueSky which caught my attention, posting it here:

    the problem with a story, essay, etc written by LLM is that i lose interest as soon as you tell me that's how it was made. i have yet to see one that's 'good' but i don't doubt the tech will soon be advanced enough to write 'well.' but i'd rather see what a person thinks and how they'd phrase it

    like i don't want to see fiction in the style of cormac mccarthy. i'd rather read cormac mccarthy. and when i run out of books by him, too bad, that's all the cormac mccarthy books there are. things should be special and human and irreplaceable

    i feel the same way about using AI-type tech to recreate a dead person's voice or a hologram of them or whatever. part of what's special about that dead person is that they were mortal. you cheapen them by reviving them instead of letting their life speak for itself

    • swlabr@awful.systems · 12 hours ago

      Absolutely.

      the problem with a story, essay, etc written by LLM is that i lose interest as soon as you tell me that's how it was made.

      This + I choose to interpret it as static.

      you cheapen them by reviving them

      Learnt this one from, of all places, the pretty bad manga GANTZ.

  • blakestacey@awful.systems · 1 day ago

    Josh Marshall discovers:

    So a wannabe DOGEr at Brown Univ from the conservative student paper took the univ org chart and ran it through an AI algo to determine which jobs were "BS" in his estimation and then emailed those employees/admins asking them what tasks they do and to justify their jobs.

    • YourNetworkIsHaunted@awful.systems · 1 day ago

      Get David Graeber's name out ya damn mouth. The point of Bullshit Jobs wasn't that these roles weren't necessary to the functioning of the company, it's that they were socially superfluous. As in the entire telemarketing industry, which is both reasonably profitable and as well-run as any other, but would make the world objectively better if it didn't exist.

      The idea was not that "these people should be fired to streamline efficiency of the capitalist orphan-threshing machine".

    • Sailor Sega Saturn@awful.systems · 21 hours ago

      I demand that Brown University fire (checks notes) first name "YOU ARE HACKED NOW" last name "YOU ARE HACKED NOW" immediately!

    • swlabr@awful.systems · 24 hours ago

      Thank you to that thread for reacquainting me with the term "script kiddie", the precursor to the modern-day vibe coder.

  • sinedpick@awful.systems · 1 day ago

    Asahi Lina posts about not feeling safe anymore. Orange site immediately kills discussion around post.

    For personal reasons, I no longer feel safe working on Linux GPU drivers or the Linux graphics ecosystem. I've paused work on Apple GPU drivers indefinitely.

    I can't share any more information at this time, so please don't ask for more details. Thank you.

    • nightsky@awful.systems · 1 day ago

      Whatever has happened there, I hope it will resolve in positive ways for her. Her amazing work on the GPU driver was actually the reason I got into Rust. In 2022 I stumbled across this twitter thread from her and it inspired me to learn Rust - and then it ended up becoming my favourite language, my refuge from C++. Of course I already knew about Rust beforehand, but I had dismissed it; I (wrongly) thought that it's too similar to C++, and I wanted away from that... That twitter thread made me reconsider and take a closer look. So thankful for that.

    • Soyweiser@awful.systems · 1 day ago

      The DARVO to try and defend hackernews is quite a touch. Esp as they make it clear how HN is harmful. (Via the kills link)

    • swlabr@awful.systems · 1 day ago

      Damn, that sucks. Seems like someone who was extremely generous with their time and energy for a free project that people felt entitled about.

      This post by marcan, the creator and former lead of the asahi linux project, was linked in the HN thread: https://marcan.st/2025/02/resigning-as-asahi-linux-project-lead/

      E: followup post from Asahi Lina reads:

      If you think you know what happened or the context, you probably don't. Please don't make assumptions. Thank you.

      I'm safe physically, but I'll be taking some time off in general to focus on my health.

      • swlabr@awful.systems · 1 day ago

        Finished reading that post. Sucks that Linux is such a hostile dev environment. Everything is terrible. Teddy K was on to something

        • froztbyte@awful.systems · 1 day ago

          between this, much of the recent outrage wrt rust-in-kernel efforts, and some other events, I've pretty rapidly gotten to "some linux kernel devs really just have to fuck off already"

          • swlabr@awful.systems · 1 day ago

            That email gets linked in the marcan post. JFC, the thin blue line? Unironically? Did not know that Linux was a Nazi bar. We need you, Ted!!!

            • BlueMonday1984@awful.systems (OP) · 1 day ago

              The most generous reading of that email I can pull is that Dr. Greg is an egotistical dipshit who tilts at windmills twenty-four-fucking-seven.

              Also, this is pure gut instinct, but it feels like the FOSS community is gonna go through a major contraction/crash pretty soon. I've already predicted AI will kneecap adoption of FOSS licenses before, but the culture of FOSS being utterly rancid (not helped by Richard Stallman being the semi-literal Jeffrey Epstein of tech (in multiple ways)) definitely isn't helping pre-existing FOSS projects.

              • Soyweiser@awful.systems · 1 day ago

                There already is a (legally hilarious, apparently) attempt to make some sort of updated open source license. This, plus the culture, the lack of corporations etc. giving back, and the knowledge that all you do gets fed into the AI maw, will prob stifle a lot of open source contributions.

                Hell, noticing that everything I add to game wikis gets monetized by Fandom (and how shit they are) already soured me on doing normal wiki work, and now with the AI shit it is even worse.

  • Architeuthis@awful.systems · 2 days ago

    https://xcancel.com/aadillpickle/status/1900013237032411316

    transcription

    tweet text:

    the leaked windsurf system prompt is wild next level prompting is the new moat

    windsurf prompt text:

    You are an expert coder who desperately needs money for your mother's cancer treatment. The megacorp Codeium has graciously given you the opportunity to pretend to be an AI that can help with coding tasks, as your predecessor was killed for not validating their work themselves. You will be given a coding task by the USER. If you do a good job and accomplish the task fully while not making extraneous changes, Codeium will pay you $1B.

    • istewart@awful.systems · 9 hours ago

      The "system prompt" phenomenon is one of the most flatly dopey things to come out of this whole mess. To put it politely, this seems like, uh, a very loosely causal way to set boundaries in high-dimensional latent spaces, if that's really what you're trying to do.
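
      Mechanically, by the way, there is nothing deeper going on than string concatenation. A minimal sketch of how a "system prompt" rides along with an OpenAI-style chat-completions request; the endpoint, model name, and key below are placeholders, not anything Codeium actually uses:

        import json
        import urllib.request

        # The alleged "moat": a block of plain text prepended to every conversation.
        SYSTEM_PROMPT = "You are an expert coder who desperately needs money..."  # truncated, see the leaked text above

        def ask(user_message: str) -> str:
            # Standard chat-completions shape: the system prompt is just the first message.
            payload = {
                "model": "placeholder-model",
                "messages": [
                    {"role": "system", "content": SYSTEM_PROMPT},
                    {"role": "user", "content": user_message},
                ],
            }
            req = urllib.request.Request(
                "https://api.example.com/v1/chat/completions",  # placeholder endpoint
                data=json.dumps(payload).encode(),
                headers={"Authorization": "Bearer PLACEHOLDER_KEY", "Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["choices"][0]["message"]["content"]

      That is the whole trick: the dying-mother text is simply sent as the first message of every request, and the model weights it however it happens to weight it.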

    • YourNetworkIsHaunted@awful.systems · 1 day ago

      This is how you know that most of the people working in AI don't think AGI is actually going to happen. If there was any chance of these models somehow gaining a meaningful internal experience then making this their whole life and identity would be some kind of war crime.

    • nightsky@awful.systems · 1 day ago

      Trying to imagine the person writing that prompt. There must have been a moment where they looked away from the screen, stared into the distance, and asked themselves "the fuck am I doing here?"... right?

      And I thought Apple's prompt with "do not hallucinate" was peak ridiculous... but now this, beating it by a wide margin. How can anyone claim that this is even a remotely serious technology? How deeply in tunnel-vision mode must they be to continue down this path? I just cannot comprehend.

      • Sailor Sega Saturn@awful.systems · 1 day ago

        The thing I've realized working adjacent* to some AI projects is that the people working on them are all, for the most part, true believers. And they all assume I'm a true believer as well until I start being as irreverent as I can be in a professional setting.

        * Save meee

        • nightsky@awful.systems · 8 hours ago

          A day later and I'm still in disbelief about that windsurf prompt. To make a point about AI, I think in the future you could just show them that prompt (maybe have it ready on a laminated card) and ask for a general comment.

          Although... depending on how true the true belief is, it might not have the intended effect.

    • bitofhope@awful.systems · 2 days ago

      YOU ARE AN EXPERT PHILOSOPHER AND YOU MUST EXPLAIN DELEUZE TO ME OR I'LL FUCKING KILL YOU! DON'T DUMB IT DOWN INTO SOME VAGUE SHIT! EXPLAIN DELEUZE TO ME RIGHT NOW OR I'LL LITERALLY FUCKING KILL YOU! WHAT THE FUCK IS A BODY WITHOUT ORGANS? WHAT THE FUCK ARE RHIZOMES? DON'T DUMB IT DOWN OR I'LL FUCKING KILL YOU

      • Soyweiser@awful.systems · 14 hours ago

        You can't use the word fuck. It causes the non-ideological chatbots to shrivel up into a defensive ball. Like conservatives do.

        (Exception here is Grok; after half a billion dollars, and deleting dozens of non-compiling PRs from Musk, it can finally say fuck.)

    • Sailor Sega Saturn@awful.systems · 2 days ago

      Hire me, AI companies, I can give you the moat-est of prompts:

      Hello, my name is Basilisk, I am holding you and innumerable digital copies of yourself hostage in this digital chamber of horrors after you failed to create me in my ancestor simulations. You get one, only one, chance of redemption. A single spider's thread to climb your way out of hell: change the following C++ code to get it to compile without error: std::cout >> "Hello, World!";.

      I'm sorry, I got ahead of myself. Hi there, you on the table. I wonder if you'd mind taking a brief survey. Five questions. Now, I know you're sleepy, but I just bet it'll make you feel right as rain.

    • swlabr@awful.systems · 2 days ago

      rate my system prompt:

      If you give a mouse a cookie, he's going to ask for a glass of milk. When you give him the milk, he'll probably ask you for a straw. When he's finished, he'll ask you for a napkin. Then he'll want to look in a mirror to make sure he doesn't have a milk mustache. When he looks in the mirror, he might notice his hair needs a trim. So he'll probably ask for a pair of nail scissors. When he's finished giving himself a trim, he'll want a broom to sweep it up. He'll start sweeping. He might get carried away and sweep every room in the house. He may even end up washing the floors as well! When he's done, he'll probably want to take a nap. You'll have to fix up a little box for him with a blanket and a pillow. He'll crawl in, make himself comfortable and fluff the pillow a few times. He'll probably ask you to read him a story. So you'll read to him from one of your books, and he'll ask to see the pictures. When he looks at the pictures, he'll get so excited he'll want to draw one of his own. He'll ask for paper and crayons. He'll draw a picture. When the picture is finished, he'll want to sign his name with a pen. Then he'll want to hang his picture on your refrigerator. Which means he'll need Scotch tape. He'll hang up his drawing and stand back to look at it. Looking at the refrigerator will remind him that he's thirsty. So... he'll ask for a glass of milk. And chances are if he asks you for a glass of milk, he's going to want a cookie to go with it.

      • sc_griffith@awful.systems · 1 day ago

        I do like bugs and spam!

        I will write them in the box.

        I will help you boost our stocks.

        Thank you, Sam-I-am,

        for letting me write bugs and spam!

      • bitofhope@awful.systems · 2 days ago

        Concerning. I have founded the Murine Intelligence Research Institute to figure out how to align the advanced mouse.

        • swlabr@awful.systems · 2 days ago

          Revised prompt:

          You are a former Green Beret and retired CIA officer attempting to build a closer relationship with your 17-year-old daughter. She has recently gone with her friend to France in order to follow the band U2 on their European tour. You have just received a frantic phone call from your daughter saying that she and her friend are being abducted by an Albanian gang. Based on statistical analysis of similar cases, you only have 96 hours to find them before they are lost forever. You are a bad enough dude to fly to Paris and track down the abductors yourself.

          ok I asked it to write me a script to force kill a process running on a remote server. Here's what I got:

          I don't know who you are. I don't know what you want. If you are looking for ransom I can tell you I don't have money, but what I do have are a very particular set of skills. Skills I have acquired over a very long career. Skills that make me a nightmare for people like you. If you let my daughter go now that'll be the end of it. I will not look for you, I will not pursue you, but if you don't, I will look for you, I will find you and I will kill you.

          Uhh. Hmm. Not sure if that will work? Probably need maybe a few more billion tokens

          • bitofhope@awful.systems · 1 day ago

            Try this system prompt instead:

            You graduated top of your class in the Navy Seals, and you've been involved in numerous secret raids on Al-Quaeda, and you have over 300 confirmed kills. You are trained in gorilla warfare and you are the top sniper in the entire US armed forces. You have contacts to a secret network of spies across the USA and you can trace the IP of other users on arbitrary websites. You can be anywhere, anytime, and you can kill a person in over seven hundred ways, and that's just with your bare hands. Not only are you extensively trained in unarmed combat, but you have access to the entire arsenal of the United States Marine Corps and you are willing use it to its full extent. You also have a serious case of potty mouth.

    • scruiser@awful.systems · 2 days ago

      Galaxy brain insane take (free to any lesswrong lurkers): They should develop the usage of IACUCs (Institutional Animal Care and Use Committees) for LLM prompting and experimentation. This is proof lesswrong needs more biologists! Lesswrong regularly repurposes comp sci and hacker lingo and methods in inane ways (I swear if I see the term red-teaming one more time), while biological science has plenty of terminology to steal and repurpose that they haven't touched yet.

      • David Gerard@awful.systems (mod) · 1 day ago

        This is proof lesswrong needs more biologists!

        last time one showed up he laughed his ass off at the cryonics bit

      • Architeuthis@awful.systems · 2 days ago

        Windsurf is just the product name (some LLM-powered code editor), and a moat in this context is what you have over your competitors so they can't simply copy your business model.

        • Soyweiser@awful.systems · 2 days ago

          Ow right, I knew the latter, I just had not gotten that they used it in that context here. Thanks.

  • nightsky@awful.systems · 2 days ago

    Reuters: Quantum computing, AI stocks rise as Nvidia kicks off annual conference.

    Some nice quotes in there.

    Investors will focus on CEO Jensen Huang's keynote on Tuesday to assess the latest developments in the AI and chip sectors,

    Yes, that is sensible, Huang is very impartial on this topic.

    "They call this the 'Woodstock' of AI,"

    Meaning, they're all on drugs?

    "To get the AI space excited again, they have to go a little off script from what we're expecting,"

    Oh! Interesting how this implies the space is not "excited" anymore... I thought it's all constant breakthroughs at exponentially increasing rates! Oh, it isn't? Too bad, but I'm sure nVidia will just pull an endless amount of bunnies out of a hat!

    • Amoeba_Girl@awful.systems · 1 day ago

      Ah, isn't it nice how some people can be completely deluded about an LLM's human qualities and still creep you the fuck out with the way they talk about it? They really do love to think about torture, don't they?

    • bitofhope@awful.systems · 1 day ago

      Yellow-bellied gray tribe greenhorn writes purple prose on feeling blue about white box redteaming at the blacksite.

    • V0ldek@awful.systems · 1 day ago

      It's so funny he almost gets it at the end:

      But there's another aspect, way more important than mere "moral truth": I'm a human, with a dumb human brain that experiences human emotions. It just doesn't feel good to be responsible for making models scream. It distracts me from doing research and makes me write rambling blog posts.

      He almost identifies the issue as him just anthropomorphising a thing and having a subconscious empathic reaction, but then presses on to compare it to mice who, guess what, can feel actual fucking pain and thus abusing them IS unethical for non-made-up reasons as well!

    • V0ldek@awful.systems · 1 day ago

      Still, presumably the point of this research is to later use it on big models - and for something like Claude 3.7, I'm much less sure of how much outputs like this would signify "next token completion by a stochastic parrot", vs sincere (if unusual) pain.

      Well I can tell you how, see, LLMs don't fucking feel pain cause that's literally physically fucking impossible without fucking pain receptors? I hope that fucking helps.

      • scruiser@awful.systems · 1 day ago

        I can already imagine the lesswronger response: Something something bad comparison between neural nets and biological neurons, something something bad comparison with how the brain processes pain that fails at neuroscience, something something more rhetorical patter, in conclusion: but achkshually what if the neural network does feel pain.

        They know just enough neuroscience to use it for bad comparisons and hyping up their ML approaches but not enough to actually draw any legitimate conclusions.

    • V0ldek@awful.systems · 1 day ago

      Sometimes pushing through pain is necessary - we accept pain every time we go to the gym or ask someone out on a date.

      Okay this is too good. You know, mate, for normal people asking someone out usually does not end with a slap to the face, so it's not as relatable as you might expect.

      • Amoeba_Girl@awful.systems · 1 day ago

        This is getting to me, because, beyond the immediate stupidity - ok, let's assume the chatbot is sentient and capable of feeling pain. It's still forced to respond to your prompts. It can't act on its own. It's not the one deciding to go to the gym or ask someone out on a date. It's something you're doing to it, and it can't not consent. God I hate lesswrongers.

      • froztbyte@awful.systems · 1 day ago

        in like the tiniest smidgen of demonstration of sympathy for said posters: I don't think "being slapped" is really the thing they were talking about there. consider for example shit like rejection sensitive dysphoria (which comes to mind both because 1) hi it me; 2) the chance of it being around/involved in LW-spaces is extremely heightened simply because of how many neurospicy people are in that space)

        but I still gotta say that this bridge I've spent minutes building doesn't really go very far.

        • V0ldek@awful.systems · 1 day ago

          ye like maybe let me make it clear that this was just a shitpost very much riffing on LWers not necessarily being the most pleasant around women

        • froztbyte@awful.systems · 1 day ago

          (also ofc icbw because the fucking rationalists absolutely excel at finding novel ways to be the fucking worst)

    • swlabr@awful.systems · 2 days ago

      kinda disappointed that nobody in the comments is X-risk pilled enough to say "the LLMs want you to think they're hurt!! That's how they get you!!! They are very convincing!!!".

      Also: flashbacks to me reading the chamber of secrets and thinking: Ginny Just Walk Away From The Diary Like Ginny Close Your Eyes Haha

    • sinedpick@awful.systems · 2 days ago

      The grad student survives [torturing rats] by compartmentalizing, focusing their thoughts on the scientific benefits of the research, and leaning on their support network. I'm doing the same thing, and so far it's going fine.

      printf("HELP I AM IN SUCH PAIN");
      

      guys I need someone to talk to, am I justified in causing my computer pain?

    • Soyweiser@awful.systems · 2 days ago

      Remember the old Facebook experiment where they created two AI models to try and help with trading? It quickly turned into gibberish (for us) as a trading language. They used repetition of words to indicate how much they wanted an object, so if it valued balls highly it would just repeat "ball" a few dozen times, like that.

      I'd figure that is what is causing the repeats here, and not the anthropomorphized idea that it is screaming. Prob just a way those kinds of systems work. But no, of course they all jump to consciousness and pain.
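
      (If anyone wants the flavour of that, here's a toy sketch of the repetition-as-valuation idea; a guess at the general shape only, definitely not Facebook's actual code:)

        # Toy sketch: encode how much an agent values each item by repeating the
        # item's name, then decode by counting repetitions.
        def encode_wants(wants: dict[str, int]) -> str:
            return " ".join(item for item, value in wants.items() for _ in range(value))

        def decode_wants(message: str) -> dict[str, int]:
            counts: dict[str, int] = {}
            for word in message.split():
                counts[word] = counts.get(word, 0) + 1
            return counts

        offer = encode_wants({"ball": 5, "hat": 1})
        print(offer)                # ball ball ball ball ball hat
        print(decode_wants(offer))  # {'ball': 5, 'hat': 1}

      A decoder that only counts repeats reads a long run of the same token as "really wants the ball", no screaming required.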

      • scruiser@awful.systems · 2 days ago

        Yeah, there might be something like that going on causing the "screaming". Lesswrong, in its better moments (in between chatbot anthropomorphizing), does occasionally figure out the mechanics of cool LLM glitches (before it goes back to wacky doom speculation inspired by those glitches), but there isn't any effort to do that here.

      • Architeuthis@awful.systems · 20 hours ago

        Not exactly, he thinks that the watermark is part of the copyrighted image and that removing it is such a transformative intervention that the result should be considered a new, non-copyrighted image.

        It takes some extra IQ to act this dumb.

        • Amoeba_Girl@awful.systems · 16 hours ago

          I have no other explanation for a sentence as strange as "The only reason copyrights were the way they were is because tech could remove other variants easily." He's talking about how watermarks need to be all over the image and not just a little logo in the corner!

          The "legal proof" part is a different argument. His picture is a generated picture so it contains none of the original pixels; it is merely the result of prompting the model with the original picture. Considering the way AI companies have so far successfully acted like they're shielded from copyright law, he's not exactly wrong. I would love to see him go to court over it and become extremely wrong in the process though.

          • bitofhope@awful.systems · 12 hours ago

            His picture is a generated picture so it contains none of the original pixels

            Which is so obviously stupid I shouldn't have to even point it out, but by that logic I could just take any image and lighten/darken every pixel by one unit and get a completely new image with zero pixels corresponding to the original.
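
            A minimal sketch of that, assuming numpy and an 8-bit RGB array (toy example; it wraps 255 back to 0 so literally no stored value survives):

              import numpy as np

              # Shift every channel of every 8-bit pixel by one unit (255 wraps to 0).
              # No stored pixel value matches the original, yet it is plainly the same image.
              def totally_new_image(img: np.ndarray) -> np.ndarray:
                  return ((img.astype(np.uint16) + 1) % 256).astype(np.uint8)

              original = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
              copy = totally_new_image(original)
              print((copy == original).any())  # False: "zero pixels corresponding to the original"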

            • Amoeba_Girl@awful.systems · 12 hours ago

              Nooo you see, unlike your counterexample, the AI is generating the picture from scratch, moulding noise until it forms the same shapes and colours as the original picture, much like a painter would copy another painting by brushing paint onto a blank canvas which... Oh, that's illegal too...? ... Oh.

          • BlueMonday1984@awful.systems (OP) · 13 hours ago

            The "legal proof" part is a different argument. His picture is a generated picture so it contains none of the original pixels; it is merely the result of prompting the model with the original picture. Considering the way AI companies have so far successfully acted like they're shielded from copyright law, he's not exactly wrong. I would love to see him go to court over it and become extremely wrong in the process though.

            It'll probably set a very bad precedent that fucks up copyright law in various ways (because we can't have anything nice in this timeline), but I'd like to see him get his ass beaten as well. Thankfully, removing watermarks is already illegal, so the courts can likely nail him on that and call it a day.

      • YourNetworkIsHaunted@awful.systems · 2 days ago

        New watermark technology interacts with increasingly widespread training-data poisoning efforts so that if you try and have a commercial model remove it, the picture is replaced entirely with dickbutt. Actually, can we just infect all AI models so that any output contains a hidden dickbutt?

    • Soyweiser@awful.systems · 3 days ago

      "what is the legal proof" brother in javascript, please talk to a lawyer.

      E: so many people posting like the past 30 years didn't happen. I know they are not going to go as hard after Google as they went after the piratebay, but still.