Imagine two algorithms that solve the same problem but have different performance. One has worse asymptotic behaviour but a smaller constant factor, making it more suitable for short inputs. The other has better asymptotic behaviour but a larger constant factor, making it more suitable for long inputs.
What would you call the input size at which both algorithms finish simultaneously? Below it one should use the first algorithm, and above it the second. Tipping point? Flipping point? Switching point? Breaking point?
Bonus question: what would you call a function that benchmarks both algorithms and finds this point?
Rather than comparing f(x) and g(x), it's easier to describe (f-g)(x). If you frame the comparison as their difference, the terminology is simply "root", and the function that finds it is a root-finding algorithm.
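To make the root-finding framing concrete, here is a minimal sketch in Python. The two algorithms (insertion sort vs. merge sort) and the bisection bounds are my own hypothetical choices, not from the thread; it also assumes the timing difference changes sign only once in the searched range, which real benchmark noise can violate.

```python
import random
import timeit

def insertion_sort(a):
    """Small constant factor, O(n^2): the 'short inputs' algorithm."""
    a = list(a)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    """Larger constant factor, O(n log n): the 'long inputs' algorithm."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def time_diff(n, repeats=20):
    """(f - g)(n): runtime of algorithm 1 minus runtime of algorithm 2
    on a random input of size n. Negative means algorithm 1 is faster."""
    data = [random.random() for _ in range(n)]
    t1 = timeit.timeit(lambda: insertion_sort(data), number=repeats)
    t2 = timeit.timeit(lambda: merge_sort(data), number=repeats)
    return t1 - t2

def find_crossover(lo=1, hi=4096):
    """Bisect on input size for the root of the timing difference,
    assuming the sign flips exactly once between lo and hi."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if time_diff(mid) < 0:   # algorithm 1 still faster here
            lo = mid
        else:
            hi = mid
    return hi
```

Bisection works here precisely because "find where (f-g)(n) crosses zero" is the classic root-finding problem; any standard root-finder over the benchmarked difference would do.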
…hmm - two members of academia who just can't seem to agree on these "new-fangled" numbers: yeah, that always helps with adoption of something new!
I can only presume that similar debates were happening during the development of IEEE 754… but at least we actually have something to show for it, something that can actually be used right now! But if that IEEE standard is still so repugnant… maybe something can be improvised with 128-bit integers.
As for this topic's name: if it were about floating-point numbers and their problems:
It occurs to me that in thermodynamics people call this kind of thing the transition point. I'm not sure it's perfect, but it's at least what came to mind.