
Procurement vs. Quality of output


Today the age-old question of quality of delivery, and the response to (public) failures in execution, is yet again being bounced around in the media.


The response to high-profile IT failures in the public sector appears to be mandating a procurement model [Government response to data breach] to fix what are delivery failings. In other industries the response is to mandate checks on cloud/service providers' accreditations [UK FCA]. Either way, there's a large assumption that these responses will paper over the cracks in delivery and implementation.


Globally there are a lot of issues with public cloud services, internet websites and systems administration allowing unintended access to personally identifiable data. Looking for root causes, we see a combination of: old technologies, poor operational maintenance procedures, open exploits, disgruntled employees and, in rare cases, incompetence - in the definition of requirements or in the execution of delivery.


Operational maturity and professionalism are not endemic in IT, it is sad to say, despite many, many attempts to establish standards (ISO), frameworks (ITIL) and common models. The same failings crop up in recurring themes and cycles, ranging from failures in problem definition, understanding of requirements, design or implementation. More often, the obvious failings are in hand-over and operational running.


Issues in execution commonly stem from a lack of checks and controls. It is extremely rare to see issues stemming from the procurement methodology itself without a correlated inability to define scope and milestones, or gaps in the ability to measure implementation outcomes or delivery.


Artificial markets and industry agreements

An artificial market and contract structure that does not require suppliers to support, maintain and manage certification and accreditation will not offer safeguards for managing delivery quality. Neither will one that does not build in reviews and audits by credible third parties during and after delivery.


Artificially engineering a market typically attracts either suppliers who can afford to keep people idle for extended periods while they certify and maintain accreditations, or captive suppliers who will try to game the system because they need to maintain profitability. People sat on the bench is not a profitable model.


Either way, you have deliberately removed the benefits of market-force competition - so expect to pay a premium.

Additionally, you have the problem of managing bleeding-edge requirements, i.e. how do you certify and establish accreditation for services, technologies and models which are not yet mainstream?

Even worse are the problems that arise when there is a (false?) perception that:

  1. the project is delivering a "commodity" outcome or template execution,

  2. a cloud software/service is the same as the desired business outcome.

Good delivery comes from clear intent and definition, managed execution and credible review.

Peer-review, checks and audits

How do we tell whether what is being delivered is any good? We need to know that what we get is:

  • what we asked for, and

  • executed to a sufficiently acceptable standard.

To do this you need some way to review and validate the output. Typically this is achieved using a process of assessment and comparison. That usually means:

  • we have clearly defined what it is we were after,

  • we have some way to assess what is delivered.

Comparing your output against something which is deemed "acceptable" is typically the purview of business analysis and risk management for the input (definition), and project management and operations for the output (solution).


With cloud services, the output often does not marry with classic pay-as-you-go procurement models or "collective" agreements, because these are based on generic criteria-definition models *or* poorly defined outcome models.

The fly in the soup

Even if we have defined our requirements well enough that our supplier delivers what we asked for, to a sufficiently high standard, we can still fail - and this is the scary bit which causes "knee-jerk" reactions.


The deliverable can fail to meet our needs because:

  • we asked for the wrong thing

  • we didn't have controls to manage the project properly

  • we didn't have proper measures and processes

  • the supplier didn't know enough to correct our perceptions

  • the supplier didn't ask the right questions of the customer

  • what we asked for doesn't exist yet, in the way we expect

  • ...

...and the list goes on!


Of course, if you are tied to an artificial market that doesn't have what you need, you can only pick from what is available - buyer beware. A 'forced' purchasing paradigm is then both friend and foe. Due diligence, and understanding your needs clearly enough to go a different way when appropriate, is down to the customer.


In some cases industries have moved to artificial markets and supplier exchanges - which works for macro-scale purchasing power in manufacturing, engineering, etc. It doesn't work so well for consulting services which focus on establishing actual need, evolving outcomes, or setting up business-specific information processes.


Artificial markets work well where you have tangibles, or a product of some description. Unless you have very specific packages and outputs in ICT markets, e.g. accounting software or a website publishing platform, it is easier to deal with the primary vendor - such as Microsoft or Xero - but implementation is often left dangling. In fact, with cloud services, key implementation capabilities now seem to be considered superfluous to requirements, but this couldn't be further from the truth. Supplying the skills and knowledge to deliver business solutions on cloud services is a much harder game than anticipated.


Commoditisation of servers into the cloud removes classic tin, floor-space and electricity management, but increases the need for people who span multiple platforms with ever-changing technical and functional capabilities - enter service architecture, solution design, and solution configuration and extension consulting.


These offerings don't fit into software definitions, e.g. email management, project planning, online forms, etc., or services like accounting and CRM; nor are they a developer, a project manager, an operations and capacity management process... or a raft of other things we can roll up.


These high-level solution-concept services are critical in determining whether you get it right or disastrously wrong - and only at a point in time. Cloud-service change is so rapid that valid, appropriate design and build considerations one month can be massively out of step with security and risk management the next. Just as critical are the required "cloud configuration" skills and experience. So, with a snap of the fingers, classic project checks and balances are gone and time-to-disaster is rapidly compressed.


This makes artificial markets, like those in manufacturing or the public sector, challenging, because they don't offer software and these new types of services. Neither can the skills required be labelled "developer" or "system administrator" under classic definitions. So customers are backed into a corner: either selecting inappropriate offerings which are then abused to deliver an outcome, or not getting the correct skills and experience from the people they can engage. Going outside the market brings a punitive response from the organising body, because you are not getting "economies of scale".


Unfortunately, the speed-to-delivery of simple capabilities, e.g. email, file storage, document/content creation, etc., is so fast that businesses can nip ahead of the ability of ventures like industry marketplaces to respond - and we are once again back to classic procurement with small, iterative, rapid deliverables - and little or no governance controls.


Summary

So the issue isn't how to buy skills, capabilities, and services - it's how to manage, validate and maintain an acceptable level of delivery (aka manage the Quality of the output).


Now quality is subjective - based on, and measured against, other things of a similar kind. For that you need clarity in definition and something against which to measure.


Key questions to consider for improving your outcome:

  • have you really understood what it is you want vs. what you have asked for?

  • do you have a baseline to measure against? (does one even exist?)

  • have you checked that whoever you are engaging has the knowledge, and strong processes they will actually use?

  • don't confuse this with the ability to fill in forms for a panel

  • do you have a process for, and a competent party to, review outcomes?

  • are you willing to course-correct, or invest, if things are not quite as expected?

If you can't answer the above, you're probably running a much higher risk than you think of your project not meeting expectations - and of your name being in the news.


So anyone who tries to tell you that a standard purchasing process ensures a level of "quality" in the output is smoking funny things. Don't expect a payment plan to fix your issues with implementation.


If you want to talk about solution governance and quality-management on SharePoint Online and Office 365, drop us a line: hi@timewespoke.com

