AI is generating code faster than ever, yet we are busy recreating exactly what lean thinking seeks to eliminate: a human bottleneck at the heart of the flow.
Toyota calls this “muri”: overburdening one link in the chain until the link itself becomes the problem. Goldratt formalised the same insight in the Theory of Constraints: “a constraint that is ignored does not disappear; it moves somewhere less visible, until it becomes a failure”.
The temptation is strong to remove the human from the loop entirely: why not speed things up further and automate the validation itself? This is where Taleb would step in: “the higher the throughput, the more dependence on a system lacking judgement becomes a source of systemic fragility”. The gains are visible and linear. The risks are subtle… until they are not.
And what of responsibility? Luciano Floridi describes this phenomenon as a dilution of moral responsibility in distributed systems: “when each actor (human or machine) bears only a fraction of it, no one really takes it on.” Applied to software engineering, it looks like this: code approved mechanically by an overloaded human who can no longer really read what they are signing off on, produced by a system whose responsibility has evaporated somewhere down the chain.

Having a human in the loop is not an obstacle to efficiency. But burying that human under an avalanche of undifferentiated requests turns them into a sort of ‘automatic approver’, which is perhaps worse than having no human at all.
The question is not “how to approve things faster”. It is: what should actually be subject to human judgement, under what conditions, and who is willing to take responsibility for it?
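To make that question a little more concrete, here is a minimal sketch of what “differentiated requests” could look like in practice: a triage policy that decides which changes actually reach a human reviewer and which can ride on automated gates. Everything in it (the risk signals, the path prefixes, the thresholds) is an assumption chosen for illustration, not something the original post prescribes.

```python
# Illustrative sketch: triage AI-generated changes by risk instead of
# pushing every one into the same undifferentiated approval queue.
# All names, paths, and thresholds are assumptions, not a prescription.

from dataclasses import dataclass

# Assumed examples of areas where mistakes are costly enough to
# warrant a named human owner.
SENSITIVE_PREFIXES = ("auth/", "billing/", "infra/", "migrations/")

@dataclass
class Change:
    paths: list[str]          # files touched by the change
    lines_changed: int        # total diff size
    touches_public_api: bool  # does it alter a public contract?
    has_passing_tests: bool   # did automated tests pass?

def review_tier(change: Change) -> str:
    """Route a change to a review tier rather than one generic queue."""
    # Sensitive areas and public contracts get a named human owner who
    # explicitly takes responsibility for the approval.
    if change.touches_public_api or any(
        p.startswith(SENSITIVE_PREFIXES) for p in change.paths
    ):
        return "human-owner-review"
    # Large diffs are unreadable in bulk: split them before review
    # rather than letting a reviewer rubber-stamp what they cannot read.
    if change.lines_changed > 400:
        return "split-before-review"
    # Small, low-risk, well-tested changes can rely on automated gates,
    # keeping scarce human judgement for where it actually matters.
    if change.has_passing_tests and change.lines_changed <= 50:
        return "automated-gates-only"
    return "standard-human-review"

if __name__ == "__main__":
    change = Change(paths=["auth/session.py"], lines_changed=30,
                    touches_public_api=False, has_passing_tests=True)
    print(review_tier(change))  # -> "human-owner-review"
```

The point of the “human-owner-review” tier is that it attaches an approval to a named person, which is one possible answer to the dilution Floridi describes: responsibility stops evaporating down the chain the moment someone specific has to carry it.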
Originally written in French and published on LinkedIn. Translated with DeepL.