• @LostWanderer
    link
    English
    40 • 18 days ago

    I think Apple is handling their foray into the LLM space better by making Apple Intelligence opt-in instead of opt-out. I took umbrage with Microsoft and Google because users couldn’t even opt out and remove the ‘features’ from their respective OSes.

    Apple setting a better example is a good thing to see.

    • @Drewelite
      link
      English
      17 • 18 days ago

      Obviously I have no idea what your opinion is beyond this comment. But from my own view of Lemmy, it’s so funny to open the thread about Windows and see people going:

      “I don’t care if I can disable it. There’s absolutely no reason an operating system should collect that data, except for their own toxic capitalist greed. I want a tool to rip every line of this code out, or I’m installing Arch and never looking back.”

      To the thread on Apple doing it:

      “Apple setting a better example is a good thing to see.” 😂

      • @LostWanderer
        link
        English
        21 • 18 days ago

        No, I purely meant that Apple making AI an opt-in feature is an appropriate choice. Users should have full control over their data and how a company can or cannot access it. My opinion on AI (LLMs in disguise) is that it’s very much a project that is not ready for general use beyond autocorrection and grammar checking.

        I am no Apple fanboy, but a decision like this, making Apple Intelligence opt-in, is a better move than what Microsoft and Google have done. I sure as shit will be keeping an eye on Apple, as I don’t trust them enough to give them the keys to my data readily. They are the better option at the moment, until Linux phones are good enough to abandon iOS for.

    • @[email protected]
      link
      fedilink
      English
      3 • 18 days ago

      Opt-in should be mandatory for all services and data sharing. I would start my transition to Linux today if this were opt-out, though the way Apple handles this for other services makes me believe opt-in will be temporary.

      Currently, when you set up any device as new, even as an offline/local user on macOS, the moment you log into iCloud it opts almost every app and service into iCloud, even ones you have never used and have always disabled on every device. There’s seemingly no way to prevent this behavior on any device, let alone at an account level.

      Currently, even though my iPhone and language support offline (on-device) Siri, and I’ve disabled all analytics sharing options, I must still agree to Apple’s data sharing and privacy policy to use Siri. Why would I need to agree to a privacy policy if I only want to use Siri offline, locally on my device, and prevent it from accessing Apple’s servers or anything external to the content on my phone? Likely because enabling Siri auto-enables (opts in) every app and service on your device. Again, there’s no way to disable this behavior.

      I understand the majority of users do not care about privacy or surveillance capitalism, but for me to trust and use a personal AI assistant baked into my device’s OS, I need the ability to make it 100% offline, plus fine-grained network control for individual apps and processes, including all of the OS’s processes. It would not be difficult to add a toggle at login to “enable iCloud/Siri for all apps/services” or “let me choose which apps/services to use with iCloud/Siri, individually”. Apple needs stronger and clearer offline controls in all its software, period.

      • @LostWanderer
        link
        English
        2 • 18 days ago

        I 100% agree. LLMs are a security threat at the moment and need far more work before I would consider them remotely safe! Users who aren’t technically savvy should not be forced to harbor LLMs on their systems, as the risk of a malicious user breaching them and siphoning that data off is ever present. There have to be huge guardrails in place that allow users precise control over their data and where it goes.

        In regards to iCloud, users should always have a choice as to which apps are opted in to iCloud at setup. I know they think iCloud is the best shit, however, letting the user decide is king. The same could be said for all the data harvesting enabled by default on iOS/macOS (I vindictively turned that shit off while making a WTF face).

        As for Apple making Apple Intelligence only temporarily opt-in, I’m not sure they would do that. Since they’ve seen the outrage caused by LLMs, I think Apple might make an exception and keep it opt-in. Though this is only an opinion and could be proven wrong in the near future.

        As for Linux, I did switch almost a week and a half ago to Ubuntu because Microsoft pissed me off! I experienced the pain points of reacquainting myself with the OS, found out several tools I loved and used back in the 16.04 days do not play nicely with 24.04, and borked Ubuntu three times before getting it right. ROFL Now it works just fine, since Canonical pushed patches that solved underlying issues in their code. I was able to customize and play games; it’s just the proprietary software for iPhone management that’s missing. I’ll have to get a Mac Mini for that purpose.

        • @[email protected]
          link
          fedilink
          2 • 18 days ago

          The privacy and security issues with LLMs are mitigated by the majority of it being on-device. Anything on-device, in my opinion, has zero privacy or security issues. Anything taking place on a server has the potential to be a privacy issue, but Apple seems to be taking extraordinary measures to ensure privacy with their own systems, and ChatGPT, which doesn’t have the same protections, will be strictly opt-in, separately from Apple’s service. I see this as basically the best of all options, maximizing privacy while retaining more complex functionality.

          • @LostWanderer
            link
            English
            1 • 18 days ago

            ChatGPT is a disaster in my opinion; it really soured my opinion on LLMs. Despite your educated take on the matter of Apple Intelligence, I have a deep-seated mistrust of LLMs. Hopefully it does turn out fine in the case of Apple’s implementation, but I’m hesitant to be as optimistic about it. Only once this is out in the wild and has been rigorously tested and prodded like ChatGPT might my opinion on Apple Intelligence change.

            • @[email protected]
              link
              fedilink
              2 • 18 days ago

              Is the distrust in the quality of the output? If so, I think the main thing Apple has going for it is that they use many fine-tuned models for context-constrained tasks. ChatGPT can be arbitrarily prompted and is expected to give good output for everything, sometimes long output. Being able to do that is… hard. However, most of Apple’s applications are much, much narrower. Take the writing assistant, which will rephrase at most a few paragraphs: the output is relatively short, and the model has to do exactly one task. Or Siri: the model has to take a command and then select one or more intents to call. It’s likely that choosing which intents to call, and what kinds of arguments to provide, are handled by separate models optimized for each case. Errors can still occur, but there are fewer chances for them to occur.

              I think part of Apple’s motivation for partnering with OpenAI specifically for certain complex Siri questions is that this is an area they aren’t comfortable putting Apple branding on, due to output-quality concerns; by routing it through a partner, they can pass blame onto the partner. Someday, if LLMs are better understood and their output can be better controlled and verified for open-ended questions, Apple might dump OpenAI and advertise their in-house replacement as being accurate and reliable in a way ChatGPT isn’t.
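              The "many narrow models" idea above can be sketched in a few lines: a small router picks exactly one intent from a closed set, then a dedicated per-intent handler extracts arguments. This is a toy illustration only; the intent names and keyword matching are hypothetical stand-ins (real systems would use small classifier models, not string matching), not Apple APIs.

```python
def route_intent(command: str) -> str:
    """Pick one intent from a fixed, closed set.

    Keyword matching stands in for a small classifier model; the
    point is that the output space is tiny and fully enumerated.
    """
    keywords = {
        "timer": "SetTimerIntent",
        "message": "SendMessageIntent",
        "rephrase": "RewriteTextIntent",
    }
    for word, intent in keywords.items():
        if word in command.lower():
            return intent
    # Anything outside the closed set is handed off elsewhere
    # (e.g. a general-purpose model like ChatGPT).
    return "FallbackIntent"


def handle(command: str) -> dict:
    """Dispatch to a narrow, per-intent argument extractor."""
    intent = route_intent(command)
    if intent == "SetTimerIntent":
        # A dedicated extractor only ever sees timer requests,
        # so it has exactly one constrained task.
        numbers = [int(tok) for tok in command.split() if tok.isdigit()]
        return {"intent": intent, "minutes": numbers[0] if numbers else None}
    return {"intent": intent}


print(handle("set a timer for 10 minutes"))
# → {'intent': 'SetTimerIntent', 'minutes': 10}
```

              Because each stage's output space is constrained, a wrong answer tends to be a wrong choice from a known list rather than arbitrary hallucinated text, which is much easier to validate.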

              • @LostWanderer
                link
                English
                1 • 18 days ago

                I think it’s due to a combination of the tech still being relatively young (though it’s made leaps and bounds) and its thoughtless hallucinations that pass as valid answers. If the training data is poisoned by disinformation or misinformation, any output is potentially useless at best and harmful at worst. The quality of LLM results depends purely on the people in charge of creating them and the source of the data. After writing that out, I realize I mistrust the people in control of LLM development, because it’s so easy to implement this tech incorrectly and for the people in charge to be completely irresponsible. Since the techbros behind this latest push for making LLMs into AI are so gung-ho about it, the guardrails have been pushed aside. That makes it all the easier for my fears to become manifest.

                Once again, what Apple is likely trying to do with their implementation of LLMs sounds all well and good. However, I can’t help but wonder how terribly wrong it could all go.