Here's an idea: for the case where you do not (have to) expose what hash you use, start out with a simple, fast hash (could even be CRC32). When you discover that you have a lot of hash collisions (e.g. because a hash chain becomes X items long), rebuild your hash table using a good hash function.
That would mean you only pay the price of a robust hash when you are actually under attack (apart from the 'am I under attack' check, but that can be a single 'if' in code that normally rarely runs). Your data structure would also have a 'hiccup' when switching hash algorithms, but that isn't much different from what happens when you resize your hash table as it fills up.
Does that make sense, or am I overlooking something?
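In code, the proposal might look roughly like the sketch below. Everything here is illustrative: the chain-length threshold, the bucket count, and the CRC32 / keyed-BLAKE2b pairing are assumptions I picked to make the idea concrete, not anything prescribed above.

```python
# Minimal sketch of "start cheap, switch to a robust keyed hash when a chain
# gets suspiciously long". All names and constants are illustrative.
import os
import zlib
import hashlib


class AdaptiveHashTable:
    MAX_CHAIN = 8  # the "am I under attack?" threshold (arbitrary choice)

    def __init__(self, n_buckets=64):
        self.buckets = [[] for _ in range(n_buckets)]
        self.robust = False            # start with the cheap hash
        self.key = os.urandom(16)      # only used after the switch

    def _hash(self, key: bytes) -> int:
        if self.robust:
            # Keyed BLAKE2b: without the key, an attacker cannot precompute collisions.
            digest = hashlib.blake2b(key, digest_size=8, key=self.key).digest()
            return int.from_bytes(digest, "little")
        return zlib.crc32(key)         # fast but trivially collidable

    def insert(self, key: bytes, value):
        chain = self.buckets[self._hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)
                return
        chain.append((key, value))
        # The single 'if' that normally never fires.
        if not self.robust and len(chain) > self.MAX_CHAIN:
            self._rebuild_with_robust_hash()

    def get(self, key: bytes):
        chain = self.buckets[self._hash(key) % len(self.buckets)]
        for k, v in chain:
            if k == key:
                return v
        raise KeyError(key)

    def _rebuild_with_robust_hash(self):
        # The 'hiccup': rehash everything once, much like an ordinary resize.
        items = [item for chain in self.buckets for item in chain]
        self.robust = True
        self.buckets = [[] for _ in range(len(self.buckets))]
        for k, v in items:
            self.buckets[self._hash(k) % len(self.buckets)].append((k, v))
```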
Hmm, that's an interesting idea. Although I could imagine that this approach ultimately leads to something like Universal Hashing. For example, if the functions you would use in your example are not randomized in some way, an attacker could still predict the outputs and when the switches would be made, and could adapt appropriately. So you would randomize your functions. From there, it is probably not too far until we end up with Universal Hashing. Still, very interesting thought!
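For concreteness, the classic Carter-Wegman universal family draws the hash function's parameters at random per table, which is the kind of randomization meant here. A rough sketch (the prime, names, and integer-keys-only restriction are my simplifications):

```python
# One member of the universal family h(x) = ((a*x + b) mod p) mod m,
# with a and b chosen at random so collisions cannot be predicted in advance.
import random


class UniversalHash:
    P = (1 << 61) - 1  # a Mersenne prime larger than any key we expect to hash

    def __init__(self, n_buckets: int):
        self.m = n_buckets
        self.a = random.randrange(1, self.P)
        self.b = random.randrange(0, self.P)

    def __call__(self, x: int) -> int:
        return ((self.a * x + self.b) % self.P) % self.m
```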
What a delightful breath of fresh air. I get awfully tired of the hash-tables-are-constant-time and worst-case-behavior-is-too-rare-to-worry-about nonsense that floods the web.
And here we have the problem and proposed solutions clearly explained, and presented in a pleasant, readable design.