• Trigg · 76 points · 3 months ago

    Man updating packages by compiling them is so stupid

    Oh look 15 updated packages from AUR

    • funkajunk · 28 points · 3 months ago

      I always go with the binary version if it’s available in the AUR, ain’t nobody got time for that.

    • @[email protected]
      link
      fedilink
      83 months ago

      I mean yes, if time is an issue, but code compiled on your own hardware is specifically tuned to your machine; some people want that little extra bit of performance and stability.

      • Trigg · 12 points · 3 months ago

        The point being most AUR packages are compiled on each update

        • @[email protected]
          link
          fedilink
          13 months ago

          But compiled on some other machine. Compiling on your own hardware optimizes it for that specific hardware and what that chip supports etc.
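
          A quick, non-authoritative way to see what your own chip offers beyond the generic x86-64 baseline a prebuilt binary has to assume (the feature list below is just an example):

          ```c
          /* Sketch: ask GCC/Clang's __builtin_cpu_supports() which SIMD
           * extensions the running CPU reports. A build targeting generic
           * x86_64 can only assume the SSE2 baseline; a local -march=native
           * build may use anything reported as "yes" below. */
          #include <stdio.h>

          int main(void)
          {
              printf("sse4.2  %s\n", __builtin_cpu_supports("sse4.2")  ? "yes" : "no");
              printf("avx     %s\n", __builtin_cpu_supports("avx")     ? "yes" : "no");
              printf("avx2    %s\n", __builtin_cpu_supports("avx2")    ? "yes" : "no");
              printf("fma     %s\n", __builtin_cpu_supports("fma")     ? "yes" : "no");
              printf("avx512f %s\n", __builtin_cpu_supports("avx512f") ? "yes" : "no");
              return 0;
          }
          ```

          Build it with something like gcc -O2 cpu_features.c (the file name is made up). Whether a given build actually uses those features still depends on the -march flags it was compiled with.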

          • exu · 14 points · 3 months ago

            No, AUR packages are compiled on your machine.

            • @[email protected]
              link
              fedilink
              1
              edit-2
              3 months ago

              Ah, I thought you meant in the AUR. I’m used to OBS, where you have both binaries and source available (OBS meaning Open Build Service, not the screen recorder).

      • @[email protected]
        link
        fedilink
        23 months ago

        I use both for different purposes. Gentoo’s USE flags are the reason I wait for compiles, but only for computers I touch the keyboard with. Everything else gets Arch.

      • adONis · 1 point · 3 months ago

        would you mind elaborating on the benefits? like what does one actually gain in a real-world scenario by having the software tuned to a specific machine?

        disk space aside, given the sheer amount of packages that come with a distro, are we talking about 30% less CPU and RAM usage (give or take), or is it more like squeezing out the last 5% of possible optimization?

        • @[email protected]
          link
          fedilink
          43 months ago

          Closer to the 5%. Between the intermediate code and the final code generation there is an optimization stage. The compiler can reduce redundant code and adjust to the machine, i.e. my understanding is that an old 4700 can have different instruction sets available than the latest-gen Intel chips. Rather than compiling for generic x86, the optimization phase can tailor the code to the machine’s hardware. The benefits are like car tuning: at some point you only get marginal gains. But if squeezing out every drop of performance and shaving off bytes is your thing, then the compile time may not be seen as a waste.
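
          A minimal sketch of that tailoring (the file name and exact instructions are illustrative, not a benchmark): the same loop compiled for the generic baseline versus the local CPU can come out using different SIMD instructions.

          ```c
          /* Toy loop the auto-vectorizer can tune to the local CPU.
           *   gcc -O3 -march=x86-64 sum.c   -> baseline SSE2 in the hot loop
           *   gcc -O3 -march=native sum.c   -> wider AVX/AVX2 on CPUs that have it
           * Exact output depends on compiler version and hardware. */
          #include <stddef.h>
          #include <stdint.h>
          #include <stdio.h>

          int32_t sum(const int32_t *v, size_t n)
          {
              int32_t total = 0;
              for (size_t i = 0; i < n; i++)
                  total += v[i];      /* candidate for SIMD auto-vectorization */
              return total;
          }

          int main(void)
          {
              int32_t data[1024];
              for (size_t i = 0; i < 1024; i++)
                  data[i] = (int32_t)i;
              printf("%d\n", sum(data, 1024));
              return 0;
          }
          ```

          Diffing the two assembly listings (gcc -S) shows the tailoring; whether it translates into a noticeable speedup is very workload-dependent, which is the marginal-gains point above.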

      • Trigg · 8 points · 3 months ago

        Clearly I shouldn’t have missed the /s