Machine Learning: The High Interest Credit Card of Technical Debt is a great non-technical paper from this past NIPS conference that has been making the rounds, online and off, among people in my network. For me, the paper centers on the claim that machine learning systems are fundamentally different from traditional software in how they are developed and used:
Arguably the most important reason for using a machine learning system is precisely that the desired behavior cannot be effectively implemented in software logic without dependency on external data. (p. 2, italics theirs)
Machine learning models are useful because they can encode complex world knowledge that would be near-impossible to capture in a deterministic, rule-based setting. But the very property that makes them powerful is also what makes them enormously brittle in the face of change. Repeated throughout the paper is the principle of “Changing Anything Changes Everything” (CACE):
To make this concrete, imagine we have a system that uses features x1, … xn in a model. If we change the input distribution of values in x1, the importance, weights, or use of the remaining n - 1 features may all change … No inputs are ever really independent. (p. 2)
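To see CACE for yourself, here is a minimal sketch of my own (not from the paper): two features are noisy proxies for the same latent signal, and merely adding noise to x1's inputs shifts the learned weight onto x2, even though the true relationship to the target never changed.

```python
# Minimal CACE demo (illustrative, not from the paper): changing only
# x1's input distribution changes the learned weight on x2.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 50_000

z = rng.normal(size=n)                    # latent signal the model never sees
y = 3.0 * z + rng.normal(scale=0.1, size=n)
x2 = z + rng.normal(scale=0.5, size=n)    # x2: a fixed, noisy proxy for z

for x1_noise in (0.1, 2.0):               # change only x1's distribution
    x1 = z + rng.normal(scale=x1_noise, size=n)
    coef = LinearRegression().fit(np.column_stack([x1, x2]), y).coef_
    print(f"x1 noise={x1_noise}: w1={coef[0]:.2f}, w2={coef[1]:.2f}")
```

With clean inputs the model leans almost entirely on x1 (roughly w1 ≈ 2.9, w2 ≈ 0.1); degrade x1 and the weight migrates to x2 (roughly w1 ≈ 0.1, w2 ≈ 2.3). Nothing about x2 or y changed, yet x2's role in the model did.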
CACE can be a problem for all sorts of reasons. If you’ve gotten this far in my post, I strongly suggest you read the full paper to understand the implications of building, and depending on, software that is often fundamentally opaque and unstable by design.
As a researcher, I find this useful for understanding the divide between research and practice, and for recognizing which elements of my methods might be incompatible with a practitioner’s.