Going to try another way of explaining why I think ML product dev is broken, this time with a clear software analogy: (1/6) https://twitter.com/sh_reya/status/1326313829534330880
Problem: some ML API product offerings are tiered, and it feels like how the model is constructed and trained dictates the tier, not necessarily a quantifiable value add to the customer at live inference time. (2/6)
The software analogy is if a company released many versions of their API at different prices, and the differences between tiers are the compiler, hardware instruction set, programming language, REST or GraphQL, etc. (3/6)
I want to caveat that occasionally these API differences exist, but not as tiers — usually as phasing out the old version (REST vs GraphQL for example). But that is not what I am talking about here. (4/6)
Some of the software APIs would be “kinda faster” or “more memory safe” or who knows honestly. Unless these differences are clearly quantifiable in terms of business value, I don’t know who would release all these versions. (5/6)
For ML, an end user literally could not care less whether one API uses more training data than another if both deliver the same business value. Same with # of model params. So quantify the tiers in terms of business value. Hopefully this argument makes sense, lol. (6/6)