Okay, I need to know. What does GCC have to do with anything here?
LLVM has LLVM IR, and QEMU has TCG ops. Those are two, possibly very different, approaches to an intermediate representation, but they have absolutely nothing to do with gcc, llvm-gcc, or any other gcc. gcc is a C compiler frontend and backend; neither LLVM nor TCG uses either.
EDIT: llvm-gcc is just gcc's C frontend, but using LLVM as its backend for code generation rather than GNU's own
*sigh* You really must think I'm an idiot or something.
QEMU didn't always use TCG. Back when llvm-qemu was done it used a very different approach. The intermediate language was defined as a set of functions that performed operations on the intermediate data set. The intermediate language was never stored in a discrete form; instead, target instructions were converted to the intermediate representation and then to host code on the fly. The host conversion was accomplished by a set of C functions that implemented the intermediate language. QEMU would have a GCC-compiled object file of these functions, extract their bodies (skipping the prologue/epilogue code), and paste the contents into the output recompiled blocks.
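For anyone who hasn't seen that old (dyngen-style) scheme, here's a rough C sketch of what those micro-op functions looked like conceptually. The names and the CPUState struct are made up for illustration, not actual QEMU code, and the real thing stitched the compiled machine-code bodies together rather than calling the functions:

```c
/* Illustrative sketch only: hypothetical micro-op functions in the
   spirit of the old QEMU approach described above. */
#include <stdint.h>
#include <stdio.h>

/* Simplified guest CPU state acting as the "intermediate data set". */
struct CPUState {
    uint32_t t0, t1;      /* temporaries */
    uint32_t regs[8];     /* guest registers */
};

static struct CPUState env;

/* Micro-ops: one tiny C function per IR operation, compiled by GCC. */
static void op_movl_T0_r0(void) { env.t0 = env.regs[0]; }
static void op_movl_T1_r1(void) { env.t1 = env.regs[1]; }
static void op_addl_T0_T1(void) { env.t0 += env.t1; }
static void op_movl_r2_T0(void)  { env.regs[2] = env.t0; }

int main(void) {
    env.regs[0] = 40;
    env.regs[1] = 2;

    /* In the real scheme, the compiled machine-code *bodies* of these
       functions were copied into a buffer and executed directly;
       calling them in sequence here just models the semantics. */
    op_movl_T0_r0();
    op_movl_T1_r1();
    op_addl_T0_T1();
    op_movl_r2_T0();

    printf("r2 = %u\n", (unsigned)env.regs[2]);  /* expect 42 */
    return 0;
}
```

The point is that the "IR" only ever existed as a particular sequence of these calls; the emitted code was literally the compiler's output for each function, concatenated.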
In llvm-qemu, those compiled function bodies were generated by llvm-gcc, for obvious reasons. In the normal version of QEMU, plain GCC was used. Obviously the quality of GCC's output played a role in how well QEMU's translation performed. Do you understand what I was saying now?
In theory this copy-and-paste object-file approach seemed like a good idea because it was more portable. In practice, the rules for parsing the function bodies required almost as much platform-specific information as writing code generators would have. The approach also had far less optimization potential, because every IR instruction was translated in total isolation. So they moved to TCG, eventually.
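To make the optimization point concrete: the concatenated bodies forced every op to go through the in-memory state, while a code generator that sees the whole block can fold that away. This is just a hand-written illustration reusing the same made-up CPUState, not output from either system:

```c
/* Illustration only: why per-op isolation costs performance. The
   pasted-body approach could not keep values in host registers across
   micro-ops, because each precompiled body loaded from and stored back
   to the in-memory CPU state. */
#include <stdint.h>

struct CPUState { uint32_t t0, t1, regs[8]; };

/* Roughly what the concatenated micro-op bodies amounted to:
   every step goes through memory. */
static void add_isolated(struct CPUState *env) {
    env->t0 = env->regs[0];        /* op_movl_T0_r0 */
    env->t1 = env->regs[1];        /* op_movl_T1_r1 */
    env->t0 = env->t0 + env->t1;   /* op_addl_T0_T1 */
    env->regs[2] = env->t0;        /* op_movl_r2_T0 */
}

/* What a code generator that sees the whole block can emit instead:
   the temporaries never need to exist in memory at all. */
static void add_fused(struct CPUState *env) {
    env->regs[2] = env->regs[0] + env->regs[1];
}

int main(void) {
    struct CPUState a = { .regs = { 40, 2 } };
    struct CPUState b = a;
    add_isolated(&a);
    add_fused(&b);
    return !(a.regs[2] == 42 && b.regs[2] == 42);
}
```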