>Putting overheated bits of gear in the freezer is a novel idea, but I do wonder about the cumulative effects of heating and rapid cooling on circuit board paths, solder, sockets, etc.
Generally, semiconductors and passive components are spec'd (by respectable suppliers, anyway) for thermal cycles within a given range over some period, say years. This goes to the severity of the thermal cycles and their rate of change, and is based on calculated statistical failure rates. From these, the probability of failure can be calculated for a composite of parts operating in situ. In a modern telephone exchange, for example, it's nice to know what the probability of failures might be into the future. A lot of this depends on how hard the components are pushed, though: switching speeds, currents and voltages applied, max temperatures, and so on.
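To give a feel for the composite-failure-rate idea: a minimal sketch, assuming independent parts with constant failure rates (a simple series reliability model with made-up FIT numbers; real figures come from supplier reliability data).

```python
import math

# Hypothetical part counts and FIT rates (failures per 1e9 device-hours).
# All numbers are illustrative, not from any real datasheet.
parts = {
    "logic_ic":  (50, 20.0),   # (count, FIT per part)
    "capacitor": (200, 2.0),
    "resistor":  (500, 0.5),
}

# Series model with independent, constant failure rates:
# the board's total rate is just the sum of the parts' rates.
total_fit = sum(count * fit for count, fit in parts.values())
rate_per_hour = total_fit / 1e9

# Exponential model: P(no failure in t hours) = exp(-lambda * t)
hours = 10 * 365 * 24  # ten years of continuous operation
p_survive = math.exp(-rate_per_hour * hours)

print(f"Total FIT: {total_fit:.0f}")
print(f"P(no failure over 10 years): {p_survive:.3f}")
```

The exchange operator's question ("how many failures should we expect next year?") falls straight out of the same model, scaled by the number of boards in service.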
Getting adequate cooling is often a problem as equipment becomes more compact and operates in close proximity to other equipment that drives up the ambient temperature. The derating that'd apply to something at 25-35 degrees is not the same as at 45-55 degrees.
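One common way to quantify why those two temperature bands aren't equivalent is an Arrhenius acceleration factor on the failure rate. A rough sketch, assuming a 0.7 eV activation energy (a typical textbook value; the real figure depends on the failure mechanism):

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K
EA_EV = 0.7                # assumed activation energy, eV (illustrative)

def arrhenius_af(t_use_c: float, t_stress_c: float) -> float:
    """Acceleration factor: how much faster failures occur at
    t_stress_c than at t_use_c, per the Arrhenius model."""
    t1 = t_use_c + 273.15  # convert to kelvin
    t2 = t_stress_c + 273.15
    return math.exp((EA_EV / K_BOLTZMANN_EV) * (1.0 / t1 - 1.0 / t2))

# Failure rate relative to a 25 C baseline at successive 10 C steps:
for temp in (35, 45, 55):
    print(f"{temp} C vs 25 C: {arrhenius_af(25, temp):.1f}x")
```

With these assumptions each 10-degree step multiplies the failure rate by roughly 2-2.5x, so 45-55 degrees sits an order of magnitude above the 25 C baseline; hence the different derating.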
Laptops are a good example of the compactness and limits of cooling; mine regularly overheats, but I think it folds back the amount of work the CPU does to try to avert that. The graphics hardware falls over regularly too (possibly a software bug, but I think it's temperature related).
For a lot of stuff, convection cooling is hardly enough, but then the sound of fans can be maddening, and fans themselves add a component that can fail, and they do.
A lot of the off-the-shelf stuff for regular consumers probably only just works reliably at the suggested temp range, if you took samples from multiple batches off the production line and tested them. Contrast this with what is required of satellite systems and all the design and reliability testing that goes into that; I happened to see a documentary about it a way back, quite an education.