I see a lot about source code being leaked and I’m wondering how it is that you could make something like an exact replica of Super Mario Bros without the source code, or why you can’t take the finished product and run it back through the compilation software?

  • @[email protected]OP

    Thank you, sorry to push further, but my understanding is that computers deal with binary, so every language is compiled to machine code, which I took to mean binary.

    So if the language has elements that get removed because the machine doesn’t need them, shouldn’t you get back out exactly what is needed to do the task? Like, if you compiled some code and then uncompiled it, you would get the most efficient version of it, because the computer took what it needed, discarded the rest, and gave it back to you?

    • @[email protected]

      It depends on the specifics of how the language is compiled. I’ll use C# as an example since that’s what I’m currently working with, but the process is different between all of them.

      C#, when compiled, actually gets compiled down to what is known as an intermediate language (MSIL for C# specifically). This intermediate file is basically a set of generic instructions that are not tied to any specific CPU. This is useful because different CPUs require different instructions.

      Then, when the program is run, a second compiler known as the JIT (just-in-time) compiler takes the intermediate commands and translates them into something directly relevant to the CPU being used.

      When we decompile a C# dll, we’re really taking the intermediate language (those generic, CPU-agnostic instructions) and translating it back into source code.
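
      To make that concrete, here’s a tiny C# method and, roughly, the IL it compiles to. The IL shown is illustrative; the exact output depends on the compiler version and settings:

      ```csharp
      static class ILDemo
      {
          // A tiny C# method:
          static int Add(int a, int b)
          {
              return a + b;
          }

          // Roughly the MSIL the compiler emits for Add (shown as comments):
          //   ldarg.0   // push argument 'a' onto the evaluation stack
          //   ldarg.1   // push argument 'b'
          //   add       // pop both, push their sum
          //   ret       // return the value on top of the stack
      }
      ```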

      To your second point, you are correct that the decompiled version will be more efficient from a processing perspective, but that efficiency comes at the direct cost of being able to easily understand what is happening at a human level. :)

      • @[email protected]OP

        Could I trouble you to go deeper? I think I’m getting it, but if we were to, say, uncompile GTA V or Super Mario Bros, could we make changes and figure it out from there, or would it be complete nonsense with no waypoints to jump in at and get a grip on what is being done?

        On a side note, I was told once that everything is 1s and 0s, and that as a result someone could type out a picture of you if they got the order right. This could be why I’m so wrong in my understanding, given I’m now assuming this was bullshit.

        • folkrav

          At a very low level, yes, everything is 1s and 0s. However, virtually nobody deals with binary directly anymore. Programming languages are abstractions over abstractions over abstractions, precisely so we don’t have to type binary.

          The point of programming languages is for humans to be able to read them and make sense of them. They’re a kind of intermediate representation, halfway between something humans can read and something computers can interpret.

          Say the game’s programmer wants to move your character right when you press the right arrow key. They might write some function called “handleRightArrow()”, which does whatever. The compiler then turns this into concrete instructions – read the data in RAM at address XYZ, copy it over, etc. The original code, with its readable names, comments, documentation, and proper organization, is gone. Once you decompile, you get random function and variable names, the compiler might have rewritten parts of the implementation as automatic optimizations, inlined some functions, and so on. The human-readable meaning of the code is lost. The result does the same thing as the original code, but it isn’t the original code either.
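
          As a sketch (all names here are hypothetical, not from any real game), that round trip can look like this:

          ```csharp
          using System;

          class Player { public int X; }

          class Game
          {
              const int LevelWidth = 64;      // hypothetical level size
              Player player = new Player();

              // What the programmer might have written:
              void HandleRightArrow()
              {
                  // Move the player one tile right, clamped to the level edge.
                  player.X = Math.Min(player.X + 1, LevelWidth - 1);
              }

              // Roughly how a decompiler might present the same method: the name
              // and comment are gone, and the constant has been folded, so nothing
              // says that 63 means "level width minus one". (In real output the
              // field would be renamed too.)
              void sub_4021B0()
              {
                  int num = player.X + 1;
                  player.X = num < 63 ? num : 63;
              }
          }
          ```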

    • @[email protected]

      The main issue is that to make code human-readable, we include a lot of conventions that computers don’t need. We use specific formatting, naming conventions, code structure, comments, etc. to help someone look at the code and understand its function.

      Let’s say I write code, and I have a function named ‘findUserName’ that takes a variable ‘text’ and checks it against a global variable ‘userName’ to see if the user name is contained in the text, returning ‘true’ if so. If I compile and decompile that, the result will be (for example) a function named ‘function_002’ that takes a variable ‘var_local_000’ and checks it against ‘var_global_115’. Also, my comments will be gone, and finding where the function was called from will be difficult.

      Yes, you could look at that code and figure out that it’s comparing the contents of two variables, but you wouldn’t know that var_global_115 is a username, so you’d have to go find where that variable was set, try to puzzle out where it was coming from, and follow that rabbit hole backwards until you eventually find a request for user input, whose purpose you’d have to determine from context clues. You also wouldn’t have the context around what ‘var_local_000’ represented unless you found where the function was called and followed a similar line backwards to find the origin of that variable.
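
      In code form, that example looks something like this (a sketch; real decompiler output varies by tool):

      ```csharp
      using System;

      class Demo
      {
          // Original source, with intent-revealing names and a comment:
          static string userName = "alice";

          static bool findUserName(string text)
          {
              // True if the current user's name appears in the given text.
              return text.Contains(userName);
          }

          // Roughly what a decompiler hands back: identical logic,
          // generated names, no comments.
          static string var_global_115 = "alice";

          static bool function_002(string var_local_000)
          {
              return var_local_000.Contains(var_global_115);
          }
      }
      ```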

      It’s not that the code you get back from a decompiler is incorrect or inefficient, it’s that it’s very much not human-readable without a lot of extra investigatory work.

      • @[email protected]

        This might change relatively fast now that large language models can process code: you could give a function to an LLM and have it rename it, then iterate over the code, renaming all the functions and variables.

        This of course won’t reproduce the exact code, but it makes one really heavy part of reconstructing something human-readable much lighter.

    • @[email protected]

      The implicit assumption with decompiling code is that the goal is either to inspect how the code works, or to try compiling for a different machine. I’ll try to explain why the latter is quite difficult.

      As you said, compilation to machine code only keeps the details needed for the CPU to accomplish what was instructed. And indeed, that is supposed to run efficiently on that CPU, precisely because it was targeted at that CPU. But when decompiling, the resulting code will reflect that same CPU-specific tailoring. If you then try to compile that code for a different CPU, it will likely work, but it will likely be inefficient because the second CPU’s unique advantages won’t be leveraged.

      To use an example, consider how someone might divide two large numbers. Person A learned long division in school, and so takes each number and breaks it down into a series of smaller multiplications and subtractions. Person B learned to do division using a calculator, which just involves entering the two numbers and requesting that they be divided.

      Trying to do division by blindly giving Person B that series of multiplications and subtractions to do on the calculator is extremely inefficient because Person B knows how to do division easily. But Person B is following Person A’s methods, without knowing that the whole point of this exercise is to just divide the two original numbers. Compilation loses context and intent, which cannot be recovered from decompilation, for non-trivial programs.

      Here is an example of why source code is useful when it provides context: https://en.m.wikipedia.org/wiki/Fast_inverse_square_root#Overview_of_the_code . Very few people would be able to figure out how this works from just the machine code.
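
      For a taste, here’s a rough C# transliteration of that routine (assuming a .NET runtime where BitConverter.SingleToInt32Bits is available). Stripped of the original comments and the constant’s derivation, nothing in it hints at why it works:

      ```csharp
      using System;

      static class Quake
      {
          // Approximates 1/sqrt(number); transliterated from the famous C code.
          static float QRsqrt(float number)
          {
              float x2 = number * 0.5f;
              int i = BitConverter.SingleToInt32Bits(number); // reinterpret float bits as int
              i = 0x5f3759df - (i >> 1);                      // the "magic" initial guess
              float y = BitConverter.Int32BitsToSingle(i);
              y = y * (1.5f - x2 * y * y);                    // one Newton-Raphson refinement
              return y;
          }
      }
      ```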

      • @[email protected]

        Follow-up: would it be easier to read this context-less source code, or to stay at the assembly level? If, for example, you’d like to modify a closed-source app.

        • @[email protected]

          Like many things, it’s very fact-intensive, varying in different circumstances. As others have noted, the abilities of the person undertaking the decompilation will influence the decision. But so will strategy: the overall goal can drive how decompilation is approached.

          For example, suppose you’re working for an airline company and need to rewrite some software that runs on an ancient IBM System/360 machine and was written in COBOL, for which no source code is available and you cannot find many people who even know COBOL. Here, since the task is to rewrite the code, decompilation just tells you how it works, and then you’ll want to write the new program in a modern language. It may be useful to decompile to a different language if such a decompiler is available – say, to the C language, which you understand better.

          Sure, it may be that C isn’t what the new program will be written in, but if your C reading skills are sufficient, then this is a valid strategy.

          The skill of a decompiling engineer – or any engineer, really – is leveraging your skills and your tools to tractably attack the difficult problem at hand. Many equally skilled engineers can plausibly approach the same problem differently.

          • @[email protected]

            oh wow, I now respect pirates even more. No wonder there are only like 3 guys who can and will do this.

            If you decompile, you need such a deep understanding of the language. I could see someone looking at this and going “oh yeah, that compares cases”, but then dying of old age before finishing the sentence.

            And if you don’t decompile, you’re coding in assembly.

    • jayrhacker

      if you compiled some code and then uncompiled it you would get the most efficient version of it … ?

      Sorta. An optimizing compiler will always trim dead code that isn’t needed, but it will also do things that are more efficient yet make the code harder to understand, like unrolling loops. E.g., you might have some code that says “for numbers 1-100, call some function”; the compiler can look at this and say “let’s just go ahead and insert 100 calls to that function with the specific number”, so instead of a small loop you’ll see a big block of nearly identical function calls, as sketched below.
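
      For instance (a small sketch, using 4 iterations instead of 100 for brevity):

      ```csharp
      using System;

      class UnrollDemo
      {
          static void SomeFunction(int n) => Console.WriteLine(n);

          static void Main()
          {
              // What the programmer wrote:
              for (int i = 1; i <= 4; i++)
                  SomeFunction(i);

              // What the unrolled version can look like after decompilation:
              SomeFunction(1);
              SomeFunction(2);
              SomeFunction(3);
              SomeFunction(4);
          }
      }
      ```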

      Other optimizations will similarly obfuscate the original programmer’s intent, and things like assertions are meant to be optimized out in production code, so those won’t appear in the decompiled version of the source.