Mitigating Incomplete Dependencies
09 September 2014
As a consultant, I frequently encounter projects where assumed dependencies are not in place before the project begins. Sometimes, these items might not even be available until months after I am expected to finish and leave. This is especially true of anything related to data (e.g. data services, data access libraries, databases, and sample data), but it can also apply to non-functional requirements such as authentication and logging frameworks. Even when estimates were based on these items being ready and the customer knowingly signed off on those assumptions, customers have a hard time understanding the impact on the project schedule and will often argue against moving the target release date. I am not advocating that your team give in to these unreasonable demands; in fact, I would argue the opposite. However, whether you are waiting on dependencies that should have been completed or had planned to work in parallel from the start, I do recommend architecting your application in such a way as to minimize the damage caused by incomplete dependencies.
Non-Functional Requirements and Dependencies
Non-functional requirements are usually pretty easy to address by defining interfaces and models, creating fake/temporary implementations, and then injecting these dependencies into your application code via a DI framework. Later, you can simply change the configuration in one place (i.e. the DI framework's bootstrapper) and magically you'll be using the real dependency (assuming the dependency actually works).
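As a minimal sketch of that one-place swap, here is what a bootstrapper might look like. The container API shown is Autofac-style, and `ILogger`, `FakeLogger`, and `Log4NetLogger` are placeholder names for this illustration, not types from any real framework:

```csharp
using Autofac;

// Hypothetical bootstrapper: the single place where fake implementations
// are later swapped for the real thing.
public static class Bootstrapper
{
    public static void Configure(ContainerBuilder builder)
    {
        // Today: the fake keeps development moving.
        builder.RegisterType<FakeLogger>().As<ILogger>();

        // Later: a one-line change wires in the real dependency.
        // builder.RegisterType<Log4NetLogger>().As<ILogger>();
    }
}
```

The application code only ever asks for `ILogger`, so it never knows (or cares) which registration is active.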
Consider logging functionality. Your customer or company might not be able to decide between Microsoft's logging, Log4Net, or something home grown. (Yes, these debates do happen in the corporate world, and they do linger on, wasting time and money.) But that doesn't mean the project has to grind to a halt while they hold up their side of the arrangement. Look at almost any logging framework and you'll find roughly the same methods and message levels (e.g. trace, debug, info, warning, error, and fatal). The same is true for other non-functional dependencies such as authentication and authorization. Why not define your own interfaces based on the way you want to consume them? Then, give your interfaces to the customer or another team to create the final implementation. Treat these interfaces as a contract between you and whomever is developing your dependency. Many open source projects such as NancyFx, ServiceStack, and Rhino Service Bus follow this pattern to remain flexible in any environment, and your project should probably do the same whether you think you need the flexibility or not.
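For logging, such a consumer-owned contract might look something like the following. The interface and member names are illustrative; the point is that the shape reflects how *you* want to log, not any particular framework's API:

```csharp
using System;

// A hypothetical consumer-owned logging contract. Any framework
// (Microsoft's, Log4Net, or home grown) can be adapted to fulfill it.
public interface ILogger
{
    void Trace(string message);
    void Debug(string message);
    void Info(string message);
    void Warning(string message);
    void Error(string message, Exception exception = null);
    void Fatal(string message, Exception exception = null);
}
```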
If you can, specify the expectations for your "contract" via tests. However, keep in mind that it may be difficult to do so for non-functional dependencies. Tests for something like logging will almost certainly imply an implementation, and these implementation details should be left to the developer fulfilling that contract. Tests for user management, on the other hand, where there is an external cause and effect (e.g. if I lock an account, that user should not be allowed to authenticate) can and should be explained through tests. The point is to communicate your needs and expectations, and not to be a "control freak." Therefore, if you cannot communicate through tests, be sure to heavily comment your expectations in your "contract" interface. Just be sure to remove these comments after the new dependency is integrated. These types of comments become stale very quickly and may add confusion later in the product's lifetime. If you need to refer back to them, you can always reference the file history in your source control system.
Faking Incomplete Dependencies
While you are waiting on someone else to implement your "contracts," you can create a lightweight implementation that simply wraps your favorite framework, or write a fake implementation. The important thing is to make it your own: do not tie yourself to frameworks that you may not end up using. You will likely end up with an adapter around some common framework, but this approach spares you lengthy integration work or backfilling functionality late in the project (i.e. monkey work), both of which are unpleasant and error prone. Additionally, it loosens the coupling in your code and makes change easier even when your customer is not the problem.
For fake implementations of dependencies in .NET, Linq-to-Objects makes things extremely easy. For example, a simple fake authentication/authorization provider might look something like the following:
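This sketch assumes a hypothetical `ISecurityProvider` contract and `User` model (the names are illustrative, not from any real library):

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical consumer-owned contract and model.
public interface ISecurityProvider
{
    bool Authenticate(string name, string password);
    bool IsInRole(string name, string role);
    void LockAccount(string name);
}

public class User
{
    public string Name { get; set; }
    public string Password { get; set; }
    public string[] Roles { get; set; }
    public bool IsLocked { get; set; }
}

// A fake provider backed by an in-memory list and Linq-to-Objects.
public class FakeSecurityProvider : ISecurityProvider
{
    private readonly List<User> _users = new List<User>
    {
        new User { Name = "admin", Password = "secret", Roles = new[] { "Admin" } },
        new User { Name = "alice", Password = "password", Roles = new[] { "User" } }
    };

    public bool Authenticate(string name, string password)
    {
        // Locked accounts must never authenticate.
        return _users.Any(u => u.Name == name && u.Password == password && !u.IsLocked);
    }

    public bool IsInRole(string name, string role)
    {
        return _users.Any(u => u.Name == name && u.Roles.Contains(role));
    }

    public void LockAccount(string name)
    {
        var user = _users.SingleOrDefault(u => u.Name == name);
        if (user != null) user.IsLocked = true;
    }
}
```

Notice that the fake is trivial to write, yet it is enough to exercise the cause-and-effect behavior (lock an account, authentication fails) described earlier.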
Application Specific Dependencies
While logging, authentication, and authorization have pretty straightforward expected behaviors, application specific dependencies such as data access layers can be less clear-cut and need additional consideration. But this does not mean that you cannot use the same approach. For these situations, models and interfaces should be based on the expected user experience and not necessarily your preferences. It is also vital that you produce acceptance tests around these models and interfaces and provide them as an additional part of the contract between you and your customer. Without acceptance tests, the models and interfaces are likely to be more vague than you realize, causing the developer(s) doing the implementation to make assumptions that are probably wrong. By providing tests, you can lead the implementers to produce exactly what you expect and push responsibility to them when things go wrong during integration. More importantly, it also gives you additional benefits:
- reduced integration time for the project
- minimized miscommunication
- the ability to demo (with fakes) early and often
- the potential to identify problems early
Faking Data Dependencies
Just like non-functional requirements, fake implementations of application specific dependencies are easy in .NET when using Linq-to-Objects.
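For example, a fake data dependency might look something like the following. `ICustomerRepository` and `Customer` are hypothetical names standing in for whatever data contract the real service will eventually fulfill:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical data contract and model.
public interface ICustomerRepository
{
    Customer GetById(int id);
    IEnumerable<Customer> FindByLastName(string lastName);
    void Save(Customer customer);
}

public class Customer
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// A fake repository backed by an in-memory list and Linq-to-Objects.
public class FakeCustomerRepository : ICustomerRepository
{
    private readonly List<Customer> _customers = new List<Customer>();

    public Customer GetById(int id)
    {
        return _customers.SingleOrDefault(c => c.Id == id);
    }

    public IEnumerable<Customer> FindByLastName(string lastName)
    {
        return _customers.Where(c => c.LastName == lastName).ToList();
    }

    public void Save(Customer customer)
    {
        // Assign a new id on first save; replace any existing record after that.
        if (customer.Id == 0)
            customer.Id = _customers.Count == 0 ? 1 : _customers.Max(c => c.Id) + 1;
        _customers.RemoveAll(c => c.Id == customer.Id);
        _customers.Add(customer);
    }
}
```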
Specifying Dependency Behavior
To produce acceptance tests around these models / interfaces and make them understandable to the customer, I lean heavily on BDD frameworks such as SpecFlow. For the data dependency above, the specification might look something like the following:
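Assuming a hypothetical customer-repository contract, a SpecFlow feature might read:

```gherkin
Feature: Customer Repository
    As the application developer
    I want customers to be saved and retrieved consistently
    So that the real implementation behaves as the application expects

Scenario: Retrieve a saved customer
    Given a customer named "John Smith" has been saved
    When I look up customers with the last name "Smith"
    Then the results should include "John Smith"
```

Because the scenario is plain English, the customer can read, correct, and sign off on it without knowing anything about the code.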
The step definitions for these specifications might look something like the following:
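This sketch binds the scenario to a hypothetical `ICustomerRepository`/`Customer` contract and a fake implementation (NUnit-style assertion shown); the same steps run unchanged against the real implementation later:

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class CustomerRepositorySteps
{
    // Swap in the real implementation here once it exists.
    private readonly ICustomerRepository _repository = new FakeCustomerRepository();
    private IEnumerable<Customer> _results;

    [Given(@"a customer named ""(.*) (.*)"" has been saved")]
    public void GivenACustomerHasBeenSaved(string firstName, string lastName)
    {
        _repository.Save(new Customer { FirstName = firstName, LastName = lastName });
    }

    [When(@"I look up customers with the last name ""(.*)""")]
    public void WhenILookUpCustomersByLastName(string lastName)
    {
        _results = _repository.FindByLastName(lastName);
    }

    [Then(@"the results should include ""(.*) (.*)""")]
    public void ThenTheResultsShouldInclude(string firstName, string lastName)
    {
        Assert.IsTrue(_results.Any(c => c.FirstName == firstName && c.LastName == lastName));
    }
}
```

A few practical lessons from using this approach: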
- Keep fake implementations separated from your interfaces and specifications by putting them into a separate assembly.
- Keep dependency interfaces and models in a separate assembly from both your application code and the dependency implementation. This is especially true for WCF data contracts.
- When you need to update your "contracts" and specifications, talk to the person developing that piece and publish the changes to them as soon as possible.
- If you end up not needing something in your "contract," remove it. There is no good reason to have someone build something for you when you know you will end up not needing it. This is true even when the work is already started.
- Don't throw your fake implementations away. Use your fakes during demos! Demos are notorious for going wrong in front of an audience even after being practiced repeatedly. Why gamble that you'll have connectivity to a database or a REST service during your demo?
As developers (and humans for that matter), we cannot always act alone. We inevitably need to depend on others for at least some of the things we need. Because of this, much of what we do as developers involves communication, and the biggest obstacle to communication is assumptions. By explicitly defining your needs (i.e. dependencies), you not only help others to help you, but you also avoid late integration problems and maintain your ability to demo early and often. This type of abstraction is valuable not only for fast-tracking your release, but also for leaving you open to change in the future.
This work is licensed under a Creative Commons Attribution 4.0 International License.