It's not just that they reduce the number of comparisons. It's that, in a perfect world, without collisions, a hash function should enable you to find something with NO comparisons.
As a simple example, consider that you want to store numbers that range from 1 to 1,000,000 in a table. Each number can be in the table either zero or one times.
If you kept a regular array of the numbers in the table, you'd have to do a linear search each time you wanted to check whether a number is in the table. That wouldn't be very fast.
A hash table to solve this problem would be an array of size 1,000,000. Each element would be used as a boolean value. With this implementation, if you wanted to add 7,309 to the hash table, you'd go directly to table[7309] and change it to nonzero. If you want to check whether or not 232,531 is in the table, you'd go directly to table[232531] and see if the value contained there is nonzero.
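The direct-address scheme above can be sketched in a few lines of Python (the numbers here just echo the example; nothing about the technique depends on them):

```python
# A minimal sketch of a direct-address table for integers in 1..1,000,000.
TABLE_SIZE = 1_000_000

table = [False] * (TABLE_SIZE + 1)  # one boolean slot per possible number

def add(n):
    table[n] = True      # jump straight to the slot; no searching at all

def contains(n):
    return table[n]      # one array access, zero comparisons against other keys

add(7309)
print(contains(7309))    # True
print(contains(232531))  # False
```

Membership checks cost one array access regardless of how many numbers are stored, which is exactly the "no comparisons" ideal described above.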
Real hash tables are based on this principle. You find some way to turn an item into an index into an array. With integers, it's easy because they're already valid indexes. With strings, it's a bit harder. You can't make a table big enough to hold every possible string, so you use a hash function to generate a semi-unique index for each one.
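As a sketch of what such a function might look like, here's a simple polynomial rolling hash, one common family of string hash functions (the multiplier 31 is a conventional choice, not the only one):

```python
def string_hash(s, table_size):
    """Turn a string into an index in [0, table_size).

    Polynomial rolling hash: combine character codes with a small
    odd multiplier, reducing modulo the table size as we go.
    """
    h = 0
    for ch in s:
        h = (h * 31 + ord(ch)) % table_size
    return h

idx = string_hash("hello", 1000)   # always the same index for "hello"
```

Different strings usually land on different indexes, but with far more possible strings than table slots, some pairs inevitably share one.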
Semi-unique. That means sometimes you'll hash two different strings to the same value. That's a collision, and it's where hash tables stop being perfect. You now have to compare the item at that index against the one you're looking for. If it doesn't match, the item might still be stored somewhere else, and you'll have to check other locations (depending on the table's collision-handling policy) to find it or satisfy yourself it's not in the table. Generally, this lookup is still a lot faster than a plain linear search, since the hash gives you a very good clue about where to start.
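One common collision-handling policy is separate chaining: each slot holds a small list of every item that hashed there, and a lookup only scans that one short chain. A minimal sketch (class and method names are illustrative):

```python
class ChainedHashTable:
    """A toy string set using separate chaining for collisions."""

    def __init__(self, size=1024):
        self.size = size
        self.buckets = [[] for _ in range(size)]  # one chain per slot

    def _index(self, key):
        # Same polynomial rolling hash idea as above.
        h = 0
        for ch in key:
            h = (h * 31 + ord(ch)) % self.size
        return h

    def add(self, key):
        bucket = self.buckets[self._index(key)]
        if key not in bucket:      # linear scan, but only over this short chain
            bucket.append(key)

    def contains(self, key):
        # The hash narrows the search to one bucket; only that
        # chain is scanned, not the whole table.
        return key in self.buckets[self._index(key)]
```

Even when two keys collide, the search is confined to one bucket's chain, which stays short as long as the table is reasonably sized relative to the number of items.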