There isn’t a NEW method, model, or process for developing software that doesn’t seem to have a ‘DD’ suffix: MDD, TDD, DDD, BDD… I’m not certain whether the suffix is supposed to confer instant credibility or if it’s just in vogue.
So I hereby declare that FSMDD or FDD, for short, has arrived.
FDD is the implementation of a finite state machine whose nodes contain state-specific information used to operate/compute against an event context. Individual FSMs can operate like subroutines, with fanOut/fanIn, call, or fork-like process control to/from other FSMs.
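A minimal sketch of that idea, under the assumption that the event context is a plain dict and that each state node carries one conditional-free compute fragment which names its successor. The `State`, `Machine`, `load`, and `double` names are illustrative, not part of the post:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class State:
    name: str
    # State-specific compute fragment: operates on the event context
    # and returns the name of the next state.
    compute: Callable[[dict], str]

@dataclass
class Machine:
    states: Dict[str, State]
    start: str

    def run(self, ctx: dict) -> dict:
        """Drive the machine from start until a state names 'done'."""
        current = self.start
        while current != "done":
            current = self.states[current].compute(ctx)
        return ctx

# Two conditional-free fragments: each does one thing and names its successor.
def load(ctx):
    ctx["value"] = ctx.get("raw", 0)
    return "double"

def double(ctx):
    ctx["value"] *= 2
    return "done"

machine = Machine(
    states={"load": State("load", load), "double": State("double", double)},
    start="load",
)
result = machine.run({"raw": 21})
```

Note that all control flow lives in the machine's `run` loop and the transition names; the fragments themselves stay branch-free, which is the property the rest of the post leans on.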
What makes this particularly strong is that (a) reducing the number of conditionals in the compute portion of a state node simplifies testing, (b) representing the state machine as data makes it trivial to graph, providing self-documentation, and (c) once the inventory of compute operations is complete, assembling the individual FSMs is fairly routine.
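Point (b) can be sketched in a few lines: because the machine is plain data, a Graphviz DOT rendering of it falls out of a small loop over the transition table. The `transitions` table and `to_dot` helper here are hypothetical, not from the post:

```python
# The machine as data: state name -> list of successor state names.
transitions = {
    "load":   ["double"],
    "double": ["done"],
}

def to_dot(transitions, name="fdd"):
    """Emit a Graphviz DOT directed graph from the transition table."""
    lines = [f"digraph {name} {{"]
    for src, dests in transitions.items():
        for dst in dests:
            lines.append(f'  "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

dot = to_dot(transitions)
print(dot)
```

Feeding the output to `dot -Tpng` yields the self-documenting directed graph the post mentions, with zero extra bookkeeping in the application code.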
As a side note, the order matters: (1) the inventory is constructed; (2) the FSMs are assembled.
Another interesting side effect is that capturing the before and after states constitutes “change”. That change can be captured in the FSM machinery instead of in user space, making the code more auditable.
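One way that capture might look: the framework's driver loop, not the user's fragments, snapshots the context around every step, so an audit trail accumulates for free. `run_audited` and the fragment names are assumptions for the sketch:

```python
import copy

def run_audited(states, start, ctx):
    """Drive the machine while recording a before/after snapshot per step."""
    audit = []
    current = start
    while current != "done":
        before = copy.deepcopy(ctx)          # framework-side capture
        nxt = states[current](ctx)           # user fragment runs untouched
        audit.append({"state": current,
                      "before": before,
                      "after": copy.deepcopy(ctx)})
        current = nxt
    return ctx, audit

# Conditional-free fragments; the fragments know nothing about auditing.
def load(ctx):
    ctx["value"] = ctx.get("raw", 0)
    return "double"

def double(ctx):
    ctx["value"] *= 2
    return "done"

ctx, audit = run_audited({"load": load, "double": double}, "load", {"raw": 21})
```

The fragments stay oblivious to the audit concern, which is the point: “change” is a property the machinery observes, not something user code must remember to log.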
This is just the beginning.
UPDATE: compare two implementations of the same system, one using this approach and one using a traditional approach. The input/output should be the same, and the code they share can be considered constant in time. The FDD approach will have a slightly higher cost, since a framework directs the code; however, the benefits (readable code, readable logic, a visual directed graph, more reusable code, multiple code generators and snippet languages, easier testing) may outweigh that cost.
The FDD implementation is easier to test because the executable fragments contain no conditionals. Consider that a 10K LOC function might need 10K test cases, and one has to hope for 100% code coverage. Now consider the FDD approach, where there are 10K small functions with 10K test cases. That the FDD approach will provide better coverage should be intuitive.
Before you object that one needs to consider the interaction tests in the FDD approach, consider that it is going to take far more than 10K tests to validate a single 10K LOC function.
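The testing claim above can be made concrete: a branch-free fragment is exercised by a couple of bare assertions, with no combinations of conditions to enumerate. The `double` fragment here is a hypothetical example of such a fragment, not code from the post:

```python
# A conditional-free compute fragment: one behavior, one path through the code.
def double(ctx):
    ctx["value"] *= 2
    return "done"

# Its entire test: assert the successor name and the context mutation.
ctx = {"value": 5}
next_state = double(ctx)
```

A fragment with N independent branches would need on the order of 2^N path tests; pushing the branching into the machine's transition table keeps each fragment at exactly one path.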