As soon as Apple announced its plans to inject generative AI into the iPhone, it was as good as official: The technology is now all but unavoidable. Large language models will soon lurk on most of the world’s smartphones, generating images and text in messaging and email apps. AI has already colonized web search, appearing in Google and Bing. OpenAI, the $80 billion start-up that has partnered with Apple and Microsoft, feels ubiquitous; the auto-generated products of its ChatGPT and DALL-E are everywhere. And for a growing number of consumers, that’s a problem.

Rarely has a technology risen—or been forced—into prominence amid such controversy and consumer anxiety. Certainly, some Americans are excited about AI, but in one recent survey a majority said they are concerned it will increase unemployment, and in another, three out of four said they believe it will be abused to interfere with the upcoming presidential election. And many AI products have failed to impress. The launch of Google’s “AI Overview” was a disaster; the search giant’s new bot cheerfully told users to add glue to pizza and that potentially poisonous mushrooms were safe to eat. Meanwhile, OpenAI has been mired in scandal, incensing former employees with a controversial nondisclosure agreement and allegedly ripping off one of the world’s most famous actors for a voice-assistant product. Thus far, much of the resistance to the spread of AI has come from watchdog groups, concerned citizens, and creators worried about their livelihoods. Now a consumer backlash to the technology has begun to unfold as well—so much so that a market has sprung up to capitalize on it.

Obligatory “fuck 99.9999% of all AI use-cases, the people who make them, and the techbros that push them.”

  • @[email protected]

    No, both of those examples involve both design and selection, which is reminiscent of the AI art process. They’re not just typing in “make me a pretty image” and then refreshing a lot.

    • @[email protected]

      > They’re not just typing in “make me a pretty image” and then refreshing a lot.

      The only explanation I’ve received so far sounded exactly like this, just with more steps to disguise the underlying process.

      • @[email protected]

        It isn’t. People design a scene and then change and refine the prompt to add elements. Some part of it could be refreshing the same prompt, but that’s just like a photographer taking multiple photos of a scene they’ve directed to catch the right flutter of hair or a dress, or a creative director saying “give me three versions of X”.

        Ready to get back to my original questions?