mozz wrote:
I think gcc and g++ use the same back end, so the question is: if you use a certain C++ construct, will the front end and/or the optimizer be able to turn it into something equivalent to what the C compiler would have produced on morally equivalent C code? When you inline a method, that is as good as using a macro in C.
g++ and gcc are essentially identical. g++ just invokes gcc with a few implicit options: treat the source as C++ and link in the C++ standard library. Everything you get with gcc you get with g++.
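To make the inlining point concrete, here's a minimal sketch (the names are mine, not from any real codebase). With optimization on, a trivial inlined accessor and the equivalent C macro typically boil down to the same single load:

    /* the C way */
    #define ENTITY_X(e) ((e)->x)

    // the C++ way
    struct Entity {
        float x;
        float getX() const { return x; }  // trivially inlined at -O2
    };

    float probe(const Entity* e) {
        return e->getX();  // same load the macro would have produced
    }

Build it with -O2 and -S and compare; the listings should be effectively identical.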
mozz wrote:
Most uses of templates are probably safe. I'm not sure how well it handles things like partial specialization--you're making the compiler work at it a bit, but it probably comes up with good code.
The concepts of partial specialization and template code that is deemed "difficult" for the compiler to digest are really a step or two removed from the mechanics of code optimization. The idea behind template metaprogramming and similar techniques is to shift work from the run-time phase to the compile-time phase. Ten years ago, when C was the staple, we would shift work from run time to the preprocessing phase. Ten years before that, we'd build a lookup table, which was effectively shifting program A's run time onto some infrequently run program B.
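The hoary textbook sketch of that shift is the compile-time factorial (purely illustrative; the numbers don't matter):

    // Work moved from run time to compile time via template recursion.
    template <unsigned N>
    struct Factorial {
        static const unsigned value = N * Factorial<N - 1>::value;
    };

    template <>
    struct Factorial<0> {  // the specialization that terminates the recursion
        static const unsigned value = 1;
    };

    // Factorial<6>::value is the constant 720, baked in by the compiler;
    // no multiplies happen at run time.
    static const unsigned kTableSize = Factorial<6>::value;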
mozz wrote:
RTTI and exceptions I would just avoid.
Exceptions are difficult to work with, and not very well supported in C++. They are hard to use effectively in real-time systems, but are often very useful in data-processing tools, offline converters, and such. They're hard to retrofit into a system; it's similar to taking a legacy codebase and attempting to make it const-correct. If it's done from the start, things fall into place, but once the architecture is laid down, there's a cascade of irritating changes that need to be made throughout the entire system. Best to just rewrite it (or stay away from exceptions altogether).
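A hypothetical sketch of why the retrofit cascades (Buffer and transform are made-up names): code written without exceptions in mind starts leaking the moment anything below it throws.

    struct Buffer {
        explicit Buffer(int n) : data(new char[n]) {}
        ~Buffer() { delete[] data; }
        char* data;
    };

    void transform(Buffer*) { /* imagine this later grows a throw */ }

    void process() {
        Buffer* buf = new Buffer(1024);
        transform(buf);   // if this ever throws, the delete is skipped: leak
        delete buf;
    }

Every function shaped like this needs reworking (RAII wrappers, try/catch, or the like) before exceptions can be switched on safely, and that's the cascade.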
The last handheld game project I worked on shipped with RTTI enabled. It wasn't used heavily, and what it was used for could have been done in other ways, but judicious use of it can be extremely helpful. It's one of those spooky, hand-wavy features that people stay away from in general, and its negatives get amplified in the absence of firsthand use; sort of like your first experience with a source control tool automatically merging check-in conflicts.
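For what it's worth, "judicious" looked roughly like this sort of thing (GameObject and Player are hypothetical stand-ins, not code from that project): an occasional downcast where threading type information through every interface wasn't worth it.

    struct GameObject { virtual ~GameObject() {} };
    struct Player : GameObject { void respawn() {} };

    void onCheatCode(GameObject* obj) {
        // dynamic_cast requires RTTI; it yields a null pointer
        // when obj isn't actually a Player.
        if (Player* p = dynamic_cast<Player*>(obj))
            p->respawn();
    }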
mozz wrote:
Traditionally gcc is not the best at optimizing on x86 (where the Microsoft compiler might produce 10% faster code, and the Intel compiler might produce slightly faster--though larger--code than Microsoft's). They've been improving the optimizer in gcc/g++ over the last few years, but optimizing well for x86 is challenging because x86 has so few general registers, and modern x86 CPUs are pretty complicated to compensate for this. For non-x86 chips (especially popular ones like PPC and Sparc), I would guess that gcc is probably as good as anything else.
gcc has been inferior in my tests (admittedly a few years ago) on x86 architectures; the last rev I worked with was 2.9x. I'm not sure how the newer 3.x and 4.x codelines fare, but I would expect them to be somewhat better. The problem does seem to be the excruciatingly small register file on x86.
In my experience, gcc really shines on RISC architectures. Our first PSP title was built with a relatively expensive commercial compiler. After we shipped, I spent a couple of days getting it building under the run-of-the-mill port of gcc 3.3 that comes with the SDK. To my amazement, I got an instant 15-20% performance improvement, just from a recompile. This project was pretty standard C++ with heavy use of the STL; gcc's code generation was *much* better than the other guy's. My faith in gcc was restored, tenfold.
mozz wrote:
If you are worried about a certain feature, keep in mind you can get the compiler to spit out an assembly listing and then look at it and see if there's any obvious suckitude in it.
This is easiest if you can isolate the thing you want to check into a tiny program by itself---then you can write an equivalent tiny program in C and compare the two listings and see if the C++ compiler did a comparatively bad job or not.
Of course, this gets harder as time goes on and current-gen architectures become less dependent on CPU instruction counts and more dependent on cache behavior, bus bandwidth, and pipelining. Reading the asm can be misleading sometimes.
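In practice the drill is something like the following (the file and function names are hypothetical): isolate the construct, emit assembly from both compilers with -S, and diff the listings.

    // tiny.cpp -- build with: g++ -O2 -S tiny.cpp   (emits tiny.s)
    // Write the morally equivalent tiny.c, build it with gcc -O2 -S,
    // and compare the two listings side by side.
    struct Vec { float x, y; };

    inline float dot(const Vec& a, const Vec& b) {
        return a.x * b.x + a.y * b.y;
    }

    float test(const Vec& a, const Vec& b) {
        return dot(a, b);
    }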
Tangentially, does anyone know of any good, free cache profiling tools for Windows/x86, other than VTune? Sadly, valgrind is only available on Linux.
mozz wrote:
As far as this bit goes -- my wild-ass guess is that g++ does a good enough job at optimizing templates. You can probably use them with impunity and not get penalized. At least as good as any other C++ compiler I can think of---the Microsoft compiler was weak in that area a few years ago (it would choke on certain complex template things in the STL, for example). Maybe they have improved it, but I don't know. I haven't used Microsoft's compiler for a few years.
.NET 2003 seems much better than VC6, and gcc's standards compliance is excellent at this point. Templates are a great tool for your C++ toolbox; as with any well-honed tool, improper use will result in disaster. You can easily bloat your codespace by using them without discipline, and you probably want to stay away from reading your ROM images in byte by byte and jamming them into a std::vector...
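That is, avoid something like the commented-out loop below in favor of a single block read (loadRom and the stdio usage are just an illustrative sketch):

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Bad: one push_back per byte -- repeated growth, copying, and a
    // function call for every byte of the image.
    //     int c;
    //     while ((c = std::fgetc(f)) != EOF)
    //         image.push_back((unsigned char)c);

    // Better: size the vector once, then pull the image in as one block.
    void loadRom(std::FILE* f, std::size_t romSize,
                 std::vector<unsigned char>& image)
    {
        image.resize(romSize);
        if (romSize)
            std::fread(&image[0], 1, romSize, f);
    }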