The GHz rating is how many cycles per second the processor is running at. This is usually a fixed number derived from a crystal oscillator, either directly (in the old days) or multiplied up to the target frequency by an on-chip PLL in modern processors.
MIPS, as I'm sure you know, is Millions of Instructions Per Second. The problem is, each instruction takes a different number of CPU cycles to complete. Some instructions can complete in a single cycle, and some can take dozens of cycles. The mix of instructions can make the actual MIPS number vary quite a bit. A text editor just moving ASCII bytes will be completing more instructions than a graphics program that's doing floating point calculations and manipulating colors and pixels.
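To see how much the instruction mix matters, here's a back-of-the-envelope sketch. The clock speed and cycles-per-instruction (CPI) numbers are made-up illustrative values, not measurements of any real CPU:

```python
def effective_mips(clock_hz, mix):
    """Estimate effective MIPS for a given instruction mix.

    mix is a list of (fraction_of_instructions, cycles_per_instruction)
    pairs; the fractions should sum to 1.0.
    """
    avg_cpi = sum(frac * cpi for frac, cpi in mix)   # weighted average CPI
    instructions_per_sec = clock_hz / avg_cpi
    return instructions_per_sec / 1e6

clock = 3.0e9  # hypothetical 3 GHz processor

# "Text editor" style mix: mostly simple single-cycle moves and compares.
editor_mix = [(0.9, 1), (0.1, 3)]

# "Graphics" style mix: lots of multi-cycle floating-point instructions.
graphics_mix = [(0.4, 1), (0.6, 10)]

print(effective_mips(clock, editor_mix))    # → 2500.0 MIPS
print(effective_mips(clock, graphics_mix))  # → 468.75 MIPS
```

Same chip, same clock, very different MIPS numbers — which is exactly why the rating is slippery.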
On top of that, the real effective MIPS rating can vary based on a fast processor having to wait for the next instruction to arrive from memory. This is why there are caches (L1, L2, etc.) and prefetch units: to try to eliminate the wait for the next instruction.
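The standard way to model those waits is to add average stall cycles on top of the base CPI. A minimal sketch, again with purely illustrative numbers (the miss rate and penalty are assumptions, not figures for any real cache):

```python
def mips_with_stalls(clock_hz, base_cpi, miss_rate, miss_penalty_cycles):
    """Effective MIPS once memory stalls are folded in.

    Each instruction pays the base CPI plus, on average,
    miss_rate * miss_penalty_cycles extra cycles waiting on memory.
    """
    effective_cpi = base_cpi + miss_rate * miss_penalty_cycles
    return clock_hz / effective_cpi / 1e6

clock = 3.0e9  # hypothetical 3 GHz processor

print(mips_with_stalls(clock, 1.0, 0.00, 100))  # perfect cache → 3000.0 MIPS
print(mips_with_stalls(clock, 1.0, 0.02, 100))  # 2% misses    → 1000.0 MIPS
```

A mere 2% miss rate with a 100-cycle penalty cuts the effective rating to a third, which is why the cache hierarchy matters so much.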
RISC chips were designed so that each instruction takes fewer CPU cycles, which pushes the effective MIPS rating higher. Keep in mind, though, that each RISC instruction also does less work, so comparing MIPS numbers across different instruction sets can be misleading.
Anyway, I don't really see MIPS used much any more to rate chips. In general, the clock speed of the processor is usually a pretty good indication of how much performance you'll be getting. That is, for the same program running on the same operating system, with the same chip family, it will run faster on a 3.2 GHz processor than on a 2.7 GHz processor.
If you really do need some kind of MIPS rating, there are benchmark programs available that you can use. Just make sure you keep things equal (OS, memory, etc) when you run them. Just hit Google. You should find quite a few options.
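If you just want a rough apples-to-apples comparison between two machines, you can even roll your own crude benchmark: time an identical fixed workload on each. This sketch measures the whole stack (interpreter, OS, everything), not raw CPU MIPS, so treat the numbers as relative only:

```python
import time

def workload(n=1_000_000):
    # A fixed, deterministic chunk of work to time on each machine.
    total = 0
    for i in range(n):
        total += i * i
    return total

start = time.perf_counter()
result = workload()
elapsed = time.perf_counter() - start
print(f"workload took {elapsed:.3f} s (checksum={result})")
```

Run the exact same script on each machine, keeping the OS, Python version, and background load as equal as you can, and compare the elapsed times.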
Also make sure you check out what used to be the gold standard of CPU benchmarks, SPEC. http://www.spec.org/benchmarks.html