Imagine there are two algorithms that solve the same problem but have different performance characteristics. One has worse asymptotic behaviour but a smaller constant factor, making it more suitable for short inputs. The other has better asymptotic behaviour but a larger constant factor, making it more suitable for long inputs.
What would you call the input size at which both algorithms take the same time? That is, for smaller inputs one should use the first algorithm, and for larger ones the second. Tipping point? Flipping point? Switching point? Breaking point?
Bonus question: what would you call a function that benchmarks both algorithms and finds this point?
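For concreteness, here is a minimal sketch of what such a benchmarking function might look like. All names (`find_crossover`, `make_input`, etc.) are made up for illustration; it simply times both algorithms at increasing input sizes until the asymptotically better one wins:

```python
import random
import timeit

def find_crossover(algo_small, algo_large, make_input, max_exp=12, reps=50):
    """Return the smallest tested size (powers of two) at which algo_large
    beats algo_small, or None if no crossover occurs in the tested range.
    All parameter names here are illustrative, not a standard API."""
    for exp in range(max_exp):
        n = 2 ** exp
        data = make_input(n)
        t_small = timeit.timeit(lambda: algo_small(data), number=reps)
        t_large = timeit.timeit(lambda: algo_large(data), number=reps)
        if t_large < t_small:
            return n  # first size where the better-scaling algorithm wins
    return None

# Toy demo: insertion sort (O(n^2), tiny overhead) vs. Python's built-in
# Timsort (O(n log n), more machinery per call).
def insertion_sort(xs):
    xs = list(xs)
    for i in range(1, len(xs)):
        key, j = xs[i], i - 1
        while j >= 0 and xs[j] > key:
            xs[j + 1] = xs[j]
            j -= 1
        xs[j + 1] = key
    return xs

crossover = find_crossover(insertion_sort,
                           lambda xs: sorted(xs),
                           lambda n: [random.random() for _ in range(n)])
print(crossover)
```

A real version would repeat measurements to reduce noise and could bisect between the last two sizes for a tighter estimate, but the skeleton above captures the idea.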