Tuesday, May 06, 2008

Consultingware model fraught with inefficiencies

A large proportion of my career has been spent writing consultingware, and I've come to the conclusion that consultingware is an inherently unstable model, full of inefficiencies and roadblocks that can lead to an unproductive environment.

Let me explain the consultingware model. Consultingware is where a software house (let's call them the supplier) creates a package that satisfies 80% of a generic market niche, and then goes about selling to customers in that market the base package plus consulting services to customise the remaining 20%, delivering the final product that the client requires.

This sounds fair enough, especially considering that there are many systems that seem very similar but, because of the way each business is run, just require some tweaking at the end-user level to fit straight into their current business processes. It is also attractive from the supplier's point of view, because although it's not the ideal model of selling bits of paper with license numbers on them (i.e. shrink-wrapped software), if you find the right niche you can charge big bucks for the initial package to each of your clients, and then continue to charge fairly lucrative consulting rates to customise the application, because the client is essentially locked into buying consulting services from the one supplier. However, there are a number of inefficiencies and false economies inherent in this model if either the supplier or the client is not careful.

Let me start with the ongoing issue of code maintenance and improvement. Usually the supplier has spent a long time producing this base package with little or no financial support from any actual sales, and unless you have an absolutely brilliant development team, there are always going to be code maintainability issues, performance issues, architectural problems, scalability concerns and so on. The shrink-wrapped and internal development models have fairly well-known processes for continuous improvement of code. The issue with consultingware is twofold.

Firstly, the client is never interested in things they can't see. They are only ever interested in high-level features that they can use in their day-to-day business. Even if they start to complain about things like performance, they are likely to be reluctant to fully fund the necessary development time to fix these issues, considering they are only one of potentially many clients (current and future) who will see the benefit of the improvements. The same logic applies to bug fixes: why should client A pay consulting rates for a bug to be fixed just because they were the first to see it and demand a fix? This leaves the supplier with the responsibility for these kinds of structural issues, which creates a bit of a dilemma.

Inevitably, if these kinds of improvements are done without direct funding from a client, developers who could normally be charged out at the lucrative consulting rate become non-chargeable for a time, losing the supplier money. Not only that, but if the changes actually improve developer productivity (i.e. make adding features more efficient), the supplier will start to see their profit per feature fall, because developers finish chargeable features quicker and so bill fewer hours for each one.

Admittedly, this is a false way to look at the problem. Ideally, the supplier should see it this way: improving their own efficiency makes their clients more profitable, more likely to expand their business, and hopefully more likely to need more of the supplier's consulting services as they grow. But this is a very difficult argument to put to the bean counters when it is full of ifs and maybes and things that can't be measured easily.

The next problem is the way in which suppliers attempt to grow their product. The plan is to use their clients' consulting dollars to add generic features to the product so that they can offer a more out-of-the-box solution to the next client (i.e. turn that 80% functionality into 85% or even 90%) and win more sales. This is problematic because, again, the client is really only willing to pay for what they actually benefit from, and implementing a feature generically so that all clients can use it will inevitably cost significantly more than implementing it specifically the way one client wants it. For example, if Client A has an SOE that dictates everyone uses IE7, they aren't going to be interested in paying the supplier to ensure that the software also works with Firefox, Safari, Opera and so on.

Another issue that compounds the problem is the upgrade cycle. As more clients come on board and more features are added, clients who upgrade take on risks associated with features they never asked for. This calls for a very modular approach to adding features, but even so, new features often require changes to the core software, and those changes will impact existing features.

This feeds into the final problem I want to discuss: who is responsible for testing? I've seen a number of wrong approaches to this, including "the client will test it". I would suggest that any client that takes on a piece of consultingware needs their head read if they don't do some form of user acceptance testing. However, again, the client will really only be interested in testing the features that matter to them, and not being a software company, they won't really appreciate proper regression test cycles. Ultimately, a proper regression testing and defect-fixing cycle is going to be left to the supplier, but if no client is willing to put money towards that cycle, it will very quickly eat into the "lucrative" consulting rate the supplier charges for additional features.

So I guess I've really just stated the problem here and offered no real solutions, and guess what... I don't intend to, at least not in this post. I would like to hear how other people have seen these issues tackled from both sides of the fence (client and supplier), and in a few weeks' time I'll put together another post with what I think are some potential solutions to the problem.

2 comments:

  1. Hi Scott,
    I've just read your post, and I think what you are talking about is very interesting.
    I've been involved in these types of projects myself several times, and I think there is no easy solution to the problem.
    I can say that you have inspired me, and I've proposed several ideas on how to reduce the management risk on my blog at http://rmencia.blogspot.com/2008/05/style-definitions-table.html if you are interested.

    Have a good trip to Argentina, my friend.

  2. Anonymous (12:44 am)

    Scott, nice article.
    I have also been interested in consultingware (http://www.consultingware.org), mainly because I feel it is such a big issue for so many developers yet it really doesn't get any mainstream attention in the developer community (we get a bit too technology focused, maybe? :-) ). My particular interest stems from working in a pure software development shop (rather than a consulting company) but developing software that is then customizable. I think many of the issues are faced by both groups, just that as a dedicated developer you probably have more time and a budget to implement improvements (i.e. you are not working on a billable-hours basis). So my goal is really to look at tips, tricks and techniques as well as the issues faced (why do we keep having to rediscover the wheel, right?). Also, great to see you are in Melbourne (I have just moved there). Regards, Alan.
