This *is* commonly a problem. Look up "TCP incast".
The scenario is exactly as you describe: a distributed database sends queries over the same switch to K other nodes in order to verify the integrity of the answer. The data is served from memory, so response times are roughly the same on all K nodes. If the responses are sizable, they all converge on the same switch output port at nearly the same moment, the port's buffer overflows, and packets get dropped. At that point TCP's congestion control (and its retransmission timeouts) comes into play, and goodput collapses.
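To make the traffic pattern concrete, here is a minimal sketch of the synchronized scatter-gather that triggers incast. The node addresses and the wire protocol (node closes the connection after sending its response) are purely hypothetical; the point is that all K requests go out at once, so all K responses fan in to one switch port at once.

```python
import socket
import concurrent.futures

# Hypothetical set of K replica nodes sitting behind the same switch.
NODES = [("10.0.0.%d" % i, 9000) for i in range(2, 10)]

def fetch(addr, key):
    """Query a single node and read its full response."""
    with socket.create_connection(addr, timeout=1.0) as s:
        s.sendall(key)
        chunks = []
        while True:
            data = s.recv(65536)
            if not data:          # node closes the connection when done
                break
            chunks.append(data)
        return b"".join(chunks)

def scatter_gather(key):
    # All K requests leave at essentially the same instant, so all K
    # (memory-served, equally fast) responses converge on the client's
    # switch port simultaneously -- the synchronized fan-in behind incast.
    with concurrent.futures.ThreadPoolExecutor(len(NODES)) as pool:
        return list(pool.map(lambda addr: fetch(addr, key), NODES))
```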
It is almost like resonance in engineering: drive the bridge/switch at the wrong "frequency" and it resonates, and everything goes haywire.