CSI puts the ‘taste’ back in Service Management

Francois Biccard

This article has been contributed by Francois Biccard, Support Manager.

We have probably all heard the slogan “Common sense is like deodorant – the people who need it most never use it”. In my observation that rings true for many organisations when it comes to a Continual Service Improvement (CSI) plan or strategy.

The more an organisation grows, the more essential a CSI plan becomes to its success.

We can all bang the drum of “the customer is king”, “the customer is central”, “the customer is <fill in the blank>”, or whatever slogan the next pundit tries to sell us. If I were to put myself in the customer’s shoes, I would have to fill in the blank with: the customer is well and truly over it. Over the lip service.

You can only have so many mantras, visions, slogans, goals, values – whatever. When all the customer gets sold is marketing guff with no substance, it is like going to your favourite restaurant, ordering the T-bone steak, and getting one of those fragrance pull-outs from a magazine with a note from the chef saying that he can sell you the smell, but the fusion of smell and taste is just an illusion. Would you accept that? Should your customer accept the same from you?

CSI is the substance – it’s what happens in the background that the customer cannot see – but can taste. It provides substance to your mantra, vision, goals – for your staff, and for your customers.

What is more, it forms the backbone of your strategic plan, and feeds your operational plan.

Without it you are lost  – like a boat without a rudder. You will still go ‘somewhere’, if only by the effort of competent staff tirelessly rowing and steering the organisation through their own little ‘swim lane’ as part of the broader process. However, you won’t have much control – not enough to make sure you set your own destination. Yes, by chance you might end up on a beautiful island, but there are a lot of icebergs and reefs out there as well, and Murphy will probably have the last say.

Misconceptions

There are other misconceptions that may get you stranded too:

“…but all our staff do Continual Improvement every day”.

The problem with this statement is that if you don’t provide staff with a framework and a channel or register where they can document current improvements, or propose new ones, you are:

  1. Not creating awareness or fostering a culture that is all about continual improvement.
  2. Not leveraging the power of collective thought – which is extremely important in continual improvement, especially since your staff are probably the foot soldiers most aware of the customers’ most intense frustrations and struggles.

You might not act on every suggestion added to the register, but at the very least it will give you greater insight – whether for planning, resourcing or anything else.

Another example: “…but we just don’t have the resources or time to act on these suggested improvements”.

Not all improvements will require the same level of resourcing.

Order ideas by least effort and maximum value – then pick one you can afford. Even if you start with the smallest, it’s not always about what is being done, but about actually starting somewhere.

Remember, Continual Improvement is first and foremost about creating a culture. Create the culture and the rest will be much easier.
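The “least effort, maximum value” ordering can be sketched in a few lines. This is a minimal illustration, not a tool recommendation; the ideas and the 1–5 scoring scale are invented for the example.

```python
# A hypothetical improvement register: each idea scored 1-5 for value and effort.
ideas = [
    {"idea": "Replace the ITSM toolset",                "value": 5, "effort": 5},
    {"idea": "Template the top ten knowledge articles", "value": 4, "effort": 1},
    {"idea": "Automate password resets",                "value": 5, "effort": 3},
]

# Highest value per unit of effort first: the cheap wins float to the top.
ranked = sorted(ideas, key=lambda i: i["value"] / i["effort"], reverse=True)

for i in ranked:
    print(f"{i['idea']}: value {i['value']}, effort {i['effort']}")
```

A spreadsheet does the same job; the point is that an ordered register turns “pick one you can afford” into a five-minute decision.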

Golden Rule: Start Now!

So, you haven’t done anything and you now feel like you’re four again, and the plate of vegetables placed in front of you has a mountain of peas, each the size of a small boulder (insert your own nightmarish vegetable of choice).

All is not lost though – you can turn things around, but there’s one golden rule: There is no better time to start than right now! Every moment you delay, opportunities for improvement are lost.

  • Go forth and research!
  • Provide a register where others can contribute ideas or suggestions.
  • Review and decide, as a group, what you can afford that will provide most benefit.
  • Ask some practical questions to get the thinking started:
  1. Where are you now?
  2. Where do you want to be?
  3. How will you get there – what would you have to do?
  4. Where do you need to mature?
  5. What do you have to do to achieve that maturity?
  6. Where are the gaps in your services, organisational skills/training etc.?
  7. What would you have to do to fill or complete those gaps?

If you want to be passionate about Service Management, you have to be passionate about constantly improving and evolving. The nature of Service Management is evolution – if you stop you’ll stagnate.

About Francois:

Francois specialises in continual improvement and applying practical ITSM solutions and strategies in the real world. His career started in Systems Management and IT Operations, and for the last six years he has focused on implementing and improving Service Management principles in the Application/Product Development industry. He is passionate about practical ITSM and how to leverage real value for Customer and Business alike.

BAU Improvements

In my last article on service improvement, I laid out four premises that underlie how I think we should approach CSI:

Process improvements evolve with time on railroads
  • Everything we change is service improvement.
  • Improvement planning comes first.
  • We don’t have enough resource to execute all desired improvements.
  • We choose the wrong unit of work for improvements.

What are the desired business outcomes?

We must focus on what is needed.  To understand the word ‘needed’ we go back to the desired business outcomes.  Then we can make a list of the improvement outputs that will deliver those outcomes, and hence the pieces of work we need to do.

Even then we will find that the list can be daunting, and some sort of ruthless expediency will have to be applied to choose what does and doesn’t get done.

How will you resource the improvements?

The other challenge will be resourcing the improvements, no matter how ruthlessly we cut down the list. Almost all of us work in an environment of shrinking budgets and desperate shortages of every resource: time, people and money. One way to address this is to do some of the work as part of BAU.

These are all aspects of my public-domain improvement planning method, Tipu:

  • Alignment to business outcomes
  • Ruthless decision making
  • Doing much of the work as part of our day jobs

Let me give you two more premises that build on the first four and take us to the heart of how I approached service improvement with Tipu.

Fifth premise: Improvement is part of a professional’s day job

Railroads work this way.  Process improvements evolve over time on the job.    The only time they have a formal process improvement project is for a major review: e.g. a safety campaign with experts checking the practices for safety risks; or a cost-cutting drive with time-and-motion analysts squeezing out efficiencies (we call it Lean these days).  Most of the time, middle managers and line workers talk and decide a better way as part of their day jobs, often locally and often passed on as unwritten lore.  Nobody in head office knows how each industrial track is switched (the wagons shuffled around: loads in, empties out).  The old hands teach it to the newcomers.

Most improvement is not a project.   Improvement is normal behaviour for professionals: to devote a certain percentage of our time to improving the systems we work with.  We should all expect that things will be better next year.   We should all expect that we will make a difference and leave systems better than we found them.   Improvement is part of business as usual.

As a culture, IT doesn’t take kindly to ad-hoc, local grass-roots, unmanaged improvements.  We need to get over that – we don’t have good alternatives if we are going to make progress.

Sixth premise: Software and hardware have to be near-perfect.  Practices and processes don’t.

The tolerances for the gap between wheels or rails are specified in fractions of a millimetre on high-speed track. Even slow freight lines must be correct to a few millimetres, over the thousands of kilometres of a line. And no, the standard 4’8.5” gauge has nothing to do with Roman chariots. It was one of many gauges in use for mine carts when George Stephenson started building railways, but his first employer happened to use 4’8”. Sorry to spoil a good story about horses’ butts and space shuttles.

Contrast the accuracy of the technology with the practices used to operate a railroad.  In the USA, freight train arrival times cannot be predicted to the nearest half-day. (Let’s not get into a cultural debate by contrasting this with say Japanese railroads.  To some, the USA looks sloppy.  They say it is flexible.)   Often US railroads need to drive out a new crew to take over a train because the current crew have done their legally-limited 12 hours.  Train watchers will tell you that two different crews may well switch a location (shuffle the wagons about) differently.  Compared to their technology, railroads’ practices are loose.  Just like us.

In recent years railroad practices have been tightened for greater efficiency (the New Zealand Railways carry more freight now with about 11,000 staff than they once did with 55,000) and especially for greater human safety.  But practices are still not “to the nearest millimetre” by any means.

Perfection is impossible

We operate with limited resources and information in an imperfect world. It is impossible for an organisation to improve all practices to an excellent level in a useful time. Therefore it is essential to make the hard decisions about which ones we address. Equally it is impossible – or at least not practical – to produce the perfect solution for each one. In the real world we do what we can and move on. Good enough is near enough, except in clearly identified situations where best is essential for business reasons. Best Practice frameworks are not a blueprint: they are a comparison reference or benchmark to show what would be achieved with unlimited resources in unlimited time – they are aspirational.

Some progress is better than nothing.  If we try to take a formalised project-managed approach to service improvement, the outcome for the few aspects addressed by the projects will be a good complete solution… eventually, when the projects end, if the money holds.  Unfortunately, the outcome for the many aspects of service delivery not included in the projects’ scope is likely to be nothing.   Most organisations don’t have enough funds, people or time to do a formal project-based improvement of every aspect of service management.  Aim to address a wider scope than projects can – done less formally, less completely, and less perfectly than a project would.

We can do this by making improvements as we go, at our day jobs in BAU.  We will discuss this ‘relaxed’ approach more fully in future.

We need an improvement programme to manage the improvements we choose to make.   That programme should encompass both projects and BAU improvements.

Project management is a mature discipline

The management of projects is a mature discipline: see PRINCE2, Managing Successful Programmes, Management of Portfolios, and Portfolio, Programme and Project Offices, to name just the four bodies of knowledge from the UK Cabinet Office.

What we are not so mature about is managing improvements as part of BAU.

The public-domain Tipu method focuses on improving the creation and operation of services, not the actual service systems themselves. The former is what BAU improvements should focus on: i.e. Tipu improves the way services are delivered, not the functionality of the service (although it could conceivably be used for that too).

Service owners need to take responsibility for improvements

The improvement of the actual services themselves – their quality and functionality – is the domain of the owners of the services: our IT customers.   They make those decisions to improve and they should fund them, generally as projects.

On the other hand, decisions about improving the practices we use to acquire/build and operate the IT machinery of services can be taken within IT: they are practices under our control, our authority, our accountability.  They are areas that we are expected to improve as part of our day jobs, as part of business as usual.

We’ll get into the nitty-gritty of how to do that next time.

image credit – © Tomas Sereda – Fotolia.com

Everything is improvement

Traditionally Continual Service Improvement (CSI) is too often thought of as the last bit we put in place when formalising ITSM.  In fact, we need to start with CSI, and we need to plan a whole portfolio of improvements encompassing formal projects, planned changes, and improvements done as part of business-as-usual (BAU) operations.  And the ITIL ‘process’ is the wrong unit of work for those improvements, despite what The Books tell you. Work with me here as I take you through a series of premises to reach these conclusions and see where it takes us.

In my last article, I said service portfolio management is a superset of organisational change management.  Service portfolio decisions are decisions about what new services go ahead and what changes are allowed to update existing services, often balancing them off against each other and against the demands of keeping the production services running.  Everything we change is service improvement. Why else would we do it?  If we define improvement as increasing value or reducing risk, then everything we change should be to improve the services to our customers, either directly or indirectly.
Therefore our improvement programme should manage and prioritise all change.  Change management and service improvement planning are one and the same.

Everything is improvement

First premise: Everything we change is service improvement

Look at a recent Union Pacific Railroad quarterly earnings report.  (The other US mega-railroad, BNSF, is now the personal train-set of Warren Buffett – that’s a real man’s toy – but luckily UP is still publicly listed and tells us what it is up to.)

I don’t think UP management let one group decide to get into the fracking materials business and allowed another to decide to double track the Sunset Route.  Governors and executive management have an overall figure in mind for capital spend.   They allocate that money across both new services and infrastructure upgrades.

They manage the new and existing services as a portfolio.  If the new fracking sand traffic requires purchase of a thousand new covered hoppers then the El Paso Intermodal Yard expansion may have to wait.  Or maybe they borrow the money for the hoppers against the expected revenues because the rail-yard expansion can’t wait.  Or they squeeze operational budgets.  Either way the decisions are taken holistically: offsetting new services against BAU and balancing each change against the others.

Our improvement programme should manage and prioritise all change, including changes to introduce or upgrade (or retire) services, and changes to improve BAU operations.  Change management and service portfolio management are both aspects of the same improvement planning activity.  Service portfolio management makes the decisions; change management works out the details and puts them into effect.

It is all one portfolio

Second premise: Improvement planning comes first

Our CSI plan is the FIRST thing we put together, not some afterthought we put in place after an ‘improvement’ project or – shudder – ‘ITIL Implementation’ project.
UP don’t rush off and do $3.6 billion in capital improvements then start planning the minor improvements later.  Nor do they allow their regular track maintenance teams to spend any more than essential on the parts of the Sunset Route that are going to be torn up and double tracked in the next few years.  They run down infrastructure that they know is going to be replaced.  So the BAU improvements have to be planned in conjunction with major improvement projects.  It is all one portfolio, even if separate teams manage the sub-portfolios.  Sure miscommunications happen in the real world, but the intent is to prevent waste, duplication, shortages and conflicts.

Welcome to the real world

Third premise: we don’t have enough resource to execute all desired improvements

In the perfect world all trains would be flawlessly controlled by automated systems, eliminating human error, running trains so close they were within sight of each other for maximum track utilisation, and never ever crashing or derailing a train.  Every few years governments legislate towards this, because political correctness says it is not enough to be one of the safest modes of transport around: not even one person may be allowed to die, ever.  The airlines can tell a similar story.  This irrational decision-making forces railroads to spend billions that otherwise would be allocated to better trackwork, new lines, or upgraded rolling stock and locos.  The analogy with – say – CMDB is a strong one: never mind all the other clearly more important projects, IT people can’t bear the idea of imperfect data or uncertain answers.
Even if our portfolio decision-making were rational, we can’t do everything we’d like to, in any organisation.  Look at a picture of all the practices involved in running IT.

You can’t do everything

The meaning of most of these labels should be self-evident.  You can find out more here.  Ask yourself which of those activities (practices, functions, processes… whatever you want to call them) could use some improvement in your organisation.  I’m betting most of them.
So even without available funds being gobbled up by projects inspired by political correctness, a barmy new boss, or a genuine need in the business, what would be the probability of you getting approval and money for projects to improve all of them?  Even if you work at Google and money is no problem, assuming a mad boss signed off on all of them what chance would you have of actually getting them all done?  Hellooooo!!!

What are we doing wrong?

Fourth premise: there is something very wrong with the way we approach ITSM improvement projects, which causes them to become overly big and complex and disruptive.  This is because we choose the wrong unit of work for improvements.

How to cover everything that needs to be looked at?  The key word there is ‘needs’.  We should understand what are our business goals for service, and derive from those goals what are the required outcomes from service delivery, then focus on improvements that deliver those required outcomes … and nothing else.

One way to improve focus is to work on smaller units than a whole practice.  A major shortcoming of many IT service management projects is that they take the ITIL ‘processes’ as the building blocks of the programme.  ‘We will do Incident first’.  ‘We can’t do Change until we have done Configuration’.  Even some of the official ITIL books promote this thinking.

Put another way, you don’t eat an elephant one leg at a time: you eat it one steak at a time… and one mouthful at a time within the meal.  Especially when the elephant has about 80 legs.

Don’t eat the whole elephant

We must decompose the service management practices into smaller, more achievable units of work, which we assemble Lego-style into a solution to the current need.  The objective is not to eat the elephant, it is to get some good meals out of it.
Or to get back to railroads: the Sunset Route is identified as a critical bottleneck that needs to be improved, so they look at trackwork, yards, dispatching practices, traffic flows, alternate routes, partner and customer agreements…. Every practice of that one part of the business is considered.  Then a programme of improvements is put in place that includes a big capital project like double-tracking as much of it as is essential; but also includes lots of local minor improvements across all practices – not improvements for their own sake, not improvements to every aspect of every practice, just a collection of improvements assembled to relieve the congestion on Sunset.

Make improvement real

So take these four premises and consider the conclusions we can draw from them:

  1. Everything we change is service improvement.
  2. Improvement planning comes first.
  3. We don’t have enough resource to execute all desired improvements.
  4. We choose the wrong unit of work for improvements.

We should begin our strategic planning of operations by putting in place a service improvement programme.  That programme should encompass all change and BAU: i.e. it manages the service portfolio.

The task of “eating 80-plus elephant’s legs” is overwhelming. We can’t improve everything about every aspect of doing IT.   Some sort of expediency and pragmatism is required to make it manageable.  A first step down that road is to stop trying to fix things practice-by-practice, one ITIL “process” at a time.

Focus on needs

We must focus on what is needed.  To understand the word ‘needed’ we go back to the desired business outcomes.  Then we can make a list of the improvement outputs that will deliver those outcomes, and hence the pieces of work we need to do.

Even then we will find that the list can be daunting, and some sort of ruthless expediency will have to be applied to choose what does and doesn’t get done.

The other challenge will be resourcing the improvements, no matter how ruthlessly we cut down the list.  Almost all of us work in an environment of shrinking budgets and desperate shortages of every resource: time, people and money.  One way to address this – as I’ve already hinted – is to do some of the work as part of BAU.

These are all aspects of my public-domain improvement planning method, Tipu:

  • Alignment to business outcomes
  • Ruthless decision making
  • Doing much of the work as part of our day jobs

More of this in my next article when we look closer at the Tipu approach.

Service Improvement at Cherry Valley

Problem, risk, change, CSI, service portfolio, projects: they all make changes to services.  How they inter-relate is not well defined or understood.  We will try to make the model clearer and simpler.

Problem and Risk and Improvement

The crew was not warned of the severe weather ahead

In this series of articles, we have been talking about an ethanol train derailment in the USA as a case study for our discussions of service management.  The US National Transportation Safety Board (NTSB) wrote a huge report about the disaster, trying to identify every single factor that contributed and to recommend improvements.  The NTSB were not doing Problem Management at Cherry Valley.  The crews cleaning up the mess and rebuilding the track were doing problem management.  The local authorities repairing the water reservoir that burst were doing problem management.  The NTSB was doing risk management and driving service improvement.

Arguably, fixing procedures which were broken was also problem management.   The local dispatcher failed to tell the train crew of a severe weather warning as he was supposed to do, which would have required the crew to slow down and watch out.  So training and prompts could be considered problem management.

But somewhere there is a line where problem management ends and improvement begins, in particular what ITIL calls continual service improvement or CSI.

In the Cherry Valley incident, the police and railroad could have communicated better with each other.  Was the procedure broken?  No, it was just not as effective as it could be.  The tank cars approved for ethanol transportation were not required to have double bulkheads on the ends to reduce the chance of them getting punctured.  Fixing that is not problem management, it is improving the safety of the tank cars.  I don’t think improving that communications procedure or the tank car design is problem management; if you follow that thinking to its logical conclusion, then every improvement is problem management.

A distinction between risks and problems

But wait: unreliable communications procedure and the single-skinned tank cars are also risks.  A number of thinkers, including Jan van Bon, argue that risk and problem management are the same thing.  I think there is a useful distinction: a problem is something that is known to be broken, that will definitely cause service interruptions if not fixed; a “clear and present danger”.  Risk management is something much broader, of which problems are a subset.  The existence of a distinct problem management practice gives that practice the focus it needs to address the immediate and certain risks.

(Risk is an essential practice that ITIL – strangely – does not even recognise as a distinct practice; the 2011 edition of ITIL’s Continual Service Improvement book attempts to plug this hole.  COBIT does include risk management, big time.  USMBOK does too, though in its own distinctive way it lumps risk management under Customer services; I disagree: there are risks to our business that don’t affect the customer.)

So risk management and problem management aren’t the same thing.  Risk management and improvement aren’t the same thing either.  CSI is about improving the value (quality) as well as reducing the risks.

To summarise all that: problem management is part of risk management which is part of service improvement.

Service Portfolio and Change

Now for another piece of the puzzle.  Service Portfolio practice is about deciding on new services, improvements to services, and retirement of services.  Portfolio decisions are – or should be – driven by business strategy: where we want to get to, how we want to approach getting there, what bounds we put on doing that.

Portfolio decisions should be made by balancing value and risk.  Value is benefits minus costs.  There is a negative benefit and a set of risks associated with the impact on existing services of building a new service: there is the impact of the project dragging people and resources away from production, and the ongoing impact of increased complexity, the draining of shared resources, and so on.  So portfolio decisions need to be made holistically, in the context of both the planned and live services.  And in the context of retired services too: “tell me again why we are planning to build a new service that looks remarkably like the one we killed off last year?”.  A lot of improvement is about capturing the learnings of the past.
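The “value is benefits minus costs” rule, weighed against risk, can be shown in a few lines. The services, figures and 1–5 risk scores below are invented purely to illustrate the arithmetic, and the balancing rule is one crude assumption among many possible ones.

```python
# Hypothetical portfolio entries with invented benefit/cost figures and 1-5 risk scores.
portfolio = [
    {"service": "New payments system",     "benefit": 900, "cost": 400, "risk": 4},
    {"service": "Windows upgrade",         "benefit": 300, "cost": 250, "risk": 2},
    {"service": "Rebuild retired service", "benefit": 200, "cost": 350, "risk": 3},
]

# Value is benefits minus costs.
for p in portfolio:
    p["value"] = p["benefit"] - p["cost"]

# A crude (assumed) balancing rule: only positive-value work is a candidate,
# considered in descending value-for-risk order.
candidates = sorted(
    (p for p in portfolio if p["value"] > 0),
    key=lambda p: p["value"] / p["risk"],
    reverse=True,
)
```

Note how the rebuilt-retired-service entry falls out on value alone: the holistic view, including what was killed off before, is what stops it even reaching the risk discussion.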

Portfolio management is a powerful technique that is applied at multiple levels.  Project and Programme Portfolio Management is all the rage right now, but it only tells part of the story.  Managing projects in programmes and programmes in portfolios only manages the changes that we have committed to make; it doesn’t look at those changes in the context of existing live services as well.  When we allocate resources across projects in PPM we are not looking at the impact on business-as-usual (BAU); we are not doling out resources across projects and BAU from a single pool.  That is what a service portfolio gives us: the truly holistic picture of all the effort in our organisation across change and BAU.

A balancing act

Service portfolio management is a superset of organisational change management.  Portfolio decisions are – or should be – decisions about what changes go ahead for new services and what changes are allowed to update existing services, often balancing them off against each other and against the demands of keeping the production services running.  “Sure the new service is strategic, but the risk of not patching this production server is more urgent and we can’t do both at once because they conflict, so this new service must wait until the next change window”.  “Yes, the upgrade to Windows 13 is overdue, but we don’t have enough people or money to do it right now because the new payments system must go live”.  “No, we simply cannot take on another programme of work right now: BAU will crumble if we try to build this new service before we finish some of these other major works”.

Or in railroad terms: “The upgrade to the aging track through Cherry Valley must wait another year because all available funds are ear-marked for a new container terminal on the West Coast to increase the China trade”.  “The NTSB will lynch us if we don’t do something about Cherry Valley quickly.  Halve the order for the new double-stack container cars”.

Change is service improvement

Everything we change is service improvement. Why else would we do it?  If we define improvement as increasing value or reducing risk, then everything we change should be to improve the services to our customers, either directly or indirectly.

Therefore our improvement programme should manage and prioritise all change.  Change management and service improvement planning are one and the same.

So organisational change management is CSI. They are looking at the beast from different angles, but it is the same animal.  In generally accepted thinking, organisational change practice tends to be concerned with the big chunky changes and CSI tends to be focused more on the incremental changes.  But try to find the demarcation between the two.   You can’t decide on major change without understanding the total workload of changes large and small.  You can’t plan a programme of improvement work for only minor improvements without considering what major projects are planned or happening.

In summary, change/CSI  is  one part of service portfolio management which also considers delivery of BAU live services.  A railroad will stop doing minor sleeper (tie) replacements and other track maintenance when they know they are going to completely re-lay or re-locate the track in the near future.  After decades of retreat, railroads in the USA are investing in infrastructure to meet a coming boom (China trade, ethanol madness, looming shortage of truckers); but they better beware not to draw too much money away from delivering on existing commitments, and not to disrupt traffic too much with major works.

Simplifying service change

ITIL as it is today seems to have a messy complicated story about change.  We have a whole bunch of different practices all changing our services, from  Service Portfolio to Change Management to Problem Management to CSI.  How they relate to each other is not entirely clear, and how they interact with risk management or project management is undefined.

There are common misconceptions about these practices.  CSI is often thought of as “twiddling the knobs”, fine-tuning services after they go live.  Portfolio management is often thought of as being limited to deciding what new services we need.  Risk management is seen as just auditing and keeping a list.  Change Management can mean anything from production change control to organisational transformation depending on who you talk to.

It is confusing to many.  If you agree with the arguments in this article then we can start to simplify and clarify the model:

Rob England: ITSM Model
I have added in Availability, Capacity, Continuity, Incident and Service Level Management practices as sources of requirements for improvement.  These are the feedback mechanisms from operations.  In addition the strategy, portfolio and request practices are sources of new improvements.   I’ve also placed the operational change and release practices in context as well.

These are merely  the thoughts of this author.  I can’t map them directly to any model I recall, but I am old and forgetful.  If readers can make the connection, please comment below.

Next time we will look at the author’s approach to CSI, known as Tipu.

Image credit: © tycoon101 – Fotolia.com

Getting started with social IT (Part 2 of 2)

Following on from Matthew Selheimer’s first installment on social IT, we are pleased to bring you the second and final part of his guide to getting started with social IT.

Level 3 Maturity: Social Embedding

The saying “Context is King!” has never been truer, and it is the foundational characteristic for attaining Level 3 social IT maturity: Social Embedding.
This level of social IT maturity is achieved by establishing relevant context for social collaboration through three specific actions:

  1. The creation of a social object model
  2. The construction of a social knowledge management system that is both role-based and user-specific
  3. The enhancement of established IT processes with social collaboration functionality to improve process efficiency and effectiveness

The goal at Level 3 maturity is to leverage social embedding to improve IT key performance indicators (KPIs) such as mean-time-to-restore (MTTR) service or change success rate (additional examples are provided below). It is important that you select KPIs that are most meaningful to your organisation; KPIs that you have already baselined and can use to track progress as you increase your social IT maturity.

While the value of Level 2 maturity can be significant in improving the perception of IT’s responsiveness to users, Level 3 social IT maturity is where the big breakthroughs in IT efficiency and quantifiable business value are created.

Focus on key performance indicators

Focus on the KPIs associated with the processes you are enhancing with social collaboration. For incident management, for example, you could multiply your current mean-time-to-restore (MTTR) service by your cost per hour of downtime, or by your cost of degraded service per application. This will give you a starting point for benefit projections and value measurement over time.


For change management, you might use the number of outages or service degradations caused by changes and multiply that by your cost per hour of downtime and MTTR to arrive at a true dollars and cents measure that you can use to benchmark social IT impact over time. You might also consider other IT process metrics such as first call resolution rate, percentage of time incidents correctly assigned, change success rates, the percentage of outages caused by changes, the reduced backlog of problems, etc.
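The downtime-cost arithmetic above can be sketched in a few lines of Python. All the figures below are hypothetical placeholders, not benchmarks – substitute your own baselined measurements:

```python
# Hypothetical baseline figures -- replace with your own measurements.
mttr_hours = 4.0              # mean time to restore service, in hours
cost_per_hour = 25_000.0      # cost of downtime per hour, in dollars
outages_from_changes = 6      # change-caused outages in the period

# Incident management baseline: average cost of a single outage.
cost_per_outage = mttr_hours * cost_per_hour

# Change management baseline: total cost of change-caused outages.
change_failure_cost = outages_from_changes * cost_per_outage

print(f"Cost per outage: ${cost_per_outage:,.0f}")            # $100,000
print(f"Change-caused outage cost: ${change_failure_cost:,.0f}")  # $600,000
```

Re-running the same calculation after each improvement cycle gives you the quantifiable before-and-after comparison the next paragraph calls for.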

The point is to select IT process metrics that are meaningful for your organization and enable you to calculate a quantifiable impact or benefit. Decision makers may be skeptical about the value of social IT, so you will need to make the case that there is a real, quantifiable benefit to justify the investment required to achieve Level 3 maturity.

Relevant Context and Three Required Actions

Let’s now more fully consider the establishment of relevant context and the three actions characteristic of Level 3 maturity previously described: 1) creation of a social object model, 2) construction of a social knowledge management system, and 3) the enhancement of IT processes with social capabilities. We noted earlier that context is defined in terms of relevance to a specific audience. That audience could be a group of individuals, a role, or even a single individual. The most important thing is that context ensures your audience cares about the information being communicated.

How do you go about ensuring the right context? What is needed is a social foundation that can handle a wide variety of different perspectives based on the roles in IT and their experience. The most effective way to do this is to treat everything managed by IT as a social object.

What is meant by a social object? Consider, for example, a Wikipedia entry and how that is kept up-to-date and becomes more complete over time through crowd sourcing of knowledge on the subject. The entry is a page on the Wikipedia website. Now imagine if everything that IT is managing—whether it’s a router, a server, an application, a user, a policy, an incident, a change, etc.—was treated along the same lines as a Wikipedia page. Take that further to assume that all the relationships which existed between those entries—such as the fact that this database runs on this physical server and is used by this application—were also social objects that could be created, modified, and crowd-sourced. In this manner, organizational knowledge about each object and its relationships with other objects can be enriched over time—just like a Wikipedia entry.
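To make the idea concrete, here is a minimal, hypothetical sketch of a social object in Python – the class name, fields, and example objects are this sketch's own inventions, not any vendor's data model:

```python
from dataclasses import dataclass, field

@dataclass
class SocialObject:
    """Anything IT manages -- a server, an application, an incident --
    treated like a wiki page: editable, relatable, and followable."""
    name: str
    kind: str
    relationships: dict = field(default_factory=dict)  # e.g. {"runs_on": [...]}
    followers: set = field(default_factory=set)        # who is watching it
    history: list = field(default_factory=list)        # crowd-sourced edits

    def relate(self, rel: str, other: "SocialObject") -> None:
        """Relationships are themselves crowd-sourced knowledge."""
        self.relationships.setdefault(rel, []).append(other)

# "This database runs on this physical server and is used by this application."
db = SocialObject("orders-db", "database")
server = SocialObject("srv-42", "server")
app = SocialObject("webshop", "application")
db.relate("runs_on", server)
db.relate("used_by", app)
```

Because relationships are first-class objects too, the knowledge graph can be enriched edit by edit, just like a Wikipedia page.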

FIGURE 2: A Social Object Model as delivered in ITinvolve for Service Management™

Leveraging social collaboration principles

Define a taxonomy for your social objects

Knowledge comes from multiple sources. Existing IT knowledge may be scattered in different places such as Excel spreadsheets, Visio diagrams, SharePoint sites, wikis, CMDBs, automated discovery tools, etc., but it also resides in the minds of everyone working in IT, and even among your end users. To effectively capture this knowledge, you will need to define a taxonomy for your social objects. You can then begin to source or federate existing knowledge and associate it with your objects in order to accelerate the creation of your social knowledge management system.

With an initial foundation of knowledge objects in place, your next task is to make the system easy to use and relevant to your IT teams by defining perspectives on the objects. Establishing perspectives is critical to a well-functioning social knowledge management system; otherwise, you will fall into pitfall #2 discussed earlier. For example, you might define a Network Engineer’s perspective that includes network devices and the relationships they have to other objects like servers and policies. You might define a Security Administrator’s perspective that focuses on the policies that are defined and the objects they govern, like network devices and servers. Without this perspective-based view, your teams will not have the relevant context necessary to efficiently and effectively leverage the knowledge management system in support of their day-to-day roles.

Enrich your knowledge and keep it current

Once you have initially populated your social objects and defined perspectives, you need to keep knowledge current and enrich it over time to ensure your IT staff finds it valuable. This is why defining your objects as social objects is so critical. Just like you might follow someone on Twitter or “friend” someone on Facebook, your teams can do the same thing with your objects. In fact, when you created your perspectives, you were establishing the initial baseline of what objects your teams would follow. In this manner, whenever anyone updates an object or its relationships, those who are following it will automatically be notified along with a dedicated “news feed” or activity stream for the object.
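The follow-and-notify mechanism described here is essentially the observer pattern. A minimal sketch, with hypothetical names throughout:

```python
class KnowledgeObject:
    """A followable object: updates land in its activity stream
    (its "news feed") and are pushed to every follower."""
    def __init__(self, name):
        self.name = name
        self.followers = []
        self.activity_stream = []

    def follow(self, person):
        self.followers.append(person)

    def update(self, author, note):
        entry = f"{author}: {note}"
        self.activity_stream.append(entry)   # the object's own news feed
        for person in self.followers:
            person.notify(self.name, entry)  # push to everyone following

class Engineer:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def notify(self, obj_name, entry):
        self.inbox.append((obj_name, entry))

# Alice follows the switch, so Bob's update reaches her automatically.
switch = KnowledgeObject("core-switch-01")
alice = Engineer("Alice")
switch.follow(alice)
switch.update("Bob", "Replaced failed PSU")
```

Your perspectives seed the initial follower lists; after that, notification is automatic rather than something anyone has to remember to do.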


This does two important things. First, it keeps those who “need to know” current on the knowledge about your environment so that everyone has up-to-date information whenever there is an incident, change, or other activity related to the object. Instead of waiting until a crisis occurs and teams are interacting with out-of-date information, wasting valuable time trying to get each other up to speed, you can start to work on the issue immediately with the right information in the right context.

Provide a point of engagement for subject matter experts

Second, it provides a point of engagement for subject matter experts to collaborate around the object when they see that others are making updates or changes to the object and its relationships. This second point should not be underestimated because it taps into a basic human instinct to engage on things that matter to them and directly contributes to the crowd-sourcing motivation and improvement of knowledge accuracy over time.

Your third action is to embed your social knowledge management system into your core IT processes in order to enhance them. This is not simply an add-on, as described in Level 2 social IT maturity, but rather it is deep embedding of the social knowledge management system into your processes as the most trusted source of information about your environment. For example, imagine creating an incident record or change record, initially associating it with one or more impacted social objects, and then being able to automatically and immediately notify relevant stakeholders who are following any of those objects and engage them in triaging the incident or planning the change. This is the power of social collaboration and why it can deliver new levels of efficiency and value for your IT organization.

Create new knowledge objects

As an incident or change is worked using social IT, collaboration in activity streams creates a permanent and ongoing record of information, which at any point can be promoted to become a new knowledge object associated with any other object. For example, let’s say that a change record was created for a network switch replacement. Each of the individuals responsible for the switch and related objects like the server rack is immediately brought into a collaboration process to provide input on the change and contribute their expertise prior to the change going to the Change Advisory Board (CAB) for formal approval.

FIGURE 4: In-context collaboration and promotion to knowledge as supported by ITinvolve for Service Management™

This is just one example of the power of in-context collaboration. The same principles apply to incidents, problems, releases and other IT management processes.

To exit Level 3 and start to move to Level 4 on the maturity scale, you need to be able to provide your IT staff with in-context collaboration that is grounded in a social object model, utilizes a social knowledge management system that is easy to maintain and provides an up-to-date view of your objects and relationships, and enhances your existing IT management processes. But more importantly, you need to be able to show the quantifiable impact on one or more KPIs that matter to your organization.

Level 4 Maturity: Social-Driven

The final stage of social IT maturity is Level 4, the Social-Driven IT organization. The goal at this level is to leverage social collaboration for Continual Service Improvement (CSI).

The value of Level 4 social IT maturity comes in two forms. First, as your organization becomes more adept at leveraging social collaboration, you should benchmark your IT process KPIs against those of other organizations. Industry groups such as itSMF and the Help Desk Institute (HDI), as well as leading industry analyst firms, provide data that you can use. Getting involved in peer-to-peer networking activities with other organizations via industry groups is a great way to assess how you are doing in comparison to others. At this stage, you should be striving to outperform any organization that is not leveraging social collaboration principles across your KPIs, and you should be performing at or above the level of those organizations that have adopted social collaboration principles.

Measure the size of your community

Second, you should measure value in terms of behavioral change in your organization. At maturity Level 4, you should have established a self-sustaining community that is actively leveraging the social knowledge management system as part of its day-to-day work. Measure the size of your community and set goals for increasing it. Metcalfe’s law applies directly to social collaboration: the “network effect” increases the value of the social knowledge management system exponentially as you add users to the community.
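Metcalfe’s law values a network by its potential pairwise connections, n(n−1)/2, so value grows roughly with the square of community size. A quick illustration:

```python
def potential_connections(n: int) -> int:
    """Metcalfe's law: a network of n users has n*(n-1)/2 possible links."""
    return n * (n - 1) // 2

# Doubling the community far more than doubles its potential connections.
for users in (10, 50, 100):
    print(f"{users:>4} users -> {potential_connections(users):>5} connections")
```

This is why community-size goals matter: each new active user adds a link to every existing user, not just one more node.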


One way to foster a larger and more active community is through recognition and rewards. For example, you might choose to publicly recognize and provide spot bonuses for the top contributors who have added the most to the social knowledge management system. Or, you may reward service desk personnel who consult the social knowledge management system before assigning the incident to level 2 personnel. You might also choose to acknowledge your staff with “levels” of social IT expertise, classifying those who participate occasionally as “junior contributors”, those who participate regularly as “influencers”, and those who are most active as “experts” or “highly involved.”
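A tiering scheme like this is easy to automate. The tier names below come from the article; the activity thresholds are hypothetical and would need tuning to your community:

```python
def contributor_tier(contributions_per_month: int) -> str:
    """Map monthly activity to a recognition tier.
    Thresholds are illustrative placeholders, not recommendations."""
    if contributions_per_month >= 20:
        return "expert"
    if contributions_per_month >= 5:
        return "influencer"
    if contributions_per_month >= 1:
        return "junior contributor"
    return "observer"

print(contributor_tier(25))  # expert
print(contributor_tier(7))   # influencer
print(contributor_tier(2))   # junior contributor
```

Publishing the tier definitions openly keeps the scheme feeling like recognition rather than surveillance.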

What’s Beyond Level 4 Social IT Maturity?

One of the most exciting things about being engaged in advancing your social IT maturity is that we are all, as an industry, learning about and exploring its potential. In the future, we are likely to see new product enhancements from vendors that employ gamification principles that encourage even greater growth of our social collaboration communities.

We may see the integration of information from biometric devices that help us to more quickly assess end user frustration and initiate collaboration to resolve issues prior to the user even contacting the service desk. There are certainly going to be even more use cases for social collaboration than we can imagine today.

Review: itSMF Continual Service Improvement SIG

Like many who work in ITSM, I am of course aware of the need for, and the importance of, Continual Service Improvement throughout the Service Management Lifecycle.

But what does it entail in real terms, and not just what I read on the ITIL course/in the books?

I came along to the itSMF CSI SIG, held in London, to find out.

CSI in a nutshell

The purpose of CSI is to constantly look at ways of improving service, process and cost effectiveness.

It is simply not enough to drop in an ITSM tool to “fix” business issues (backed up, of course, with reasonable processes) and then walk away thinking: “Job well done.”

Business needs and IT services constantly evolve and change. CSI supports the lifecycle and is iterative – the review and analysis process should be a continual focus.

Reality

CSI is often aspired to, and talked about in initial workshops, but all too often it gets swallowed up in the push to configure and roll out a tool and to tweak and force in processes, ending up relegated to almost “nice to have” status.

A common question one sees in LinkedIn groups is:

“Why do ITIL Implementations fail?”

A lack of commitment to CSI is often the reason, and this session set out to identify why that might be.

Interactive

I have never been to a SIG before, and it was very clear from the outset that we were not going to be talked at, nor would we quite be doing the speed-dating networking element from my last regional venture.

SIG chair Jane Humphries started us off by introducing the concept of a wall with inhibitors.  The idea was that we would each write down two or three things on post-it notes for use in the “Speakers Corner” segment later in the day.

What I liked about this, though, was that Jane has used this approach before, showing us a wall-graphic with inhibitors captured and written on little bricks, to be tackled and knocked down in projects.

Simple but powerful, and worth remembering for workshops, and it is always worth seeing what people in the community do in practice.

Advocates, Assassins, Cynics and Supporters

The majority of the sessions focussed on the characteristics of these types of potential stakeholders – how to recognise them, how to work with them, and how to prioritise project elements accordingly.

The first two breakout sessions split the room into four groups, to discuss these roles and the types of people we probably all have had to deal with in projects.

There was, of course, the predictable amusement around the characteristics of Cynics – they have been there and seen it all before, as indeed a lot of us had, around the room.

But what surprised me was a common factor in terms of managing these characteristics: What’s in it for me? (WIIFM)

Even for Supporters and Advocates, who are typically your champions, there is a delicate balancing act to stop them from going over to the “dark side” and becoming cynics or, worse, assassins of your initiative.

The exercises that looked at the characteristics, and how to work with them, proved to be the easiest.

Areas to improve

What didn’t work so well was a prioritisation and point-scoring exercise which just seemed to confuse everyone.

Our group struggled to understand whether the aim was to deliver quick wins for lower gains, or to go for more complex outcomes with more complex stakeholder management.

Things made a little more sense when we were guided along in the resulting wash-up session.

The final element to the day was a take on the concept of “Speakers’ Corner” – the idea being that two or three of the Post-It inhibitors would be discussed.  The room was re-arranged with a single chair in the middle and whoever had written the chosen topic would start the debate.

To add to the debate, a new speaker would have to take the chair in the centre.

While starting the debate topics was not an issue, the hopping in and out of the chair proved hard to maintain, but the facilitators were happy to be flexible and let people add to the debate from where they sat.

Does Interactive work?

Yes and no.

I imagined that most people would come along and attend a Special Interest Group because they are just that – Interested!

But participating in group sessions and possibly presenting to the room at large may not be to everyone’s liking.

I have to admit, I find presenting daunting enough in projects where I am established.  So to have to act as scribe, and then bite the bullet and present to a huge room of people is not a comfortable experience for me, even after twenty years in the industry.

But you get out of these sessions what you put in, so I took my turn to scribe and present.  And given the difficulties we had, as a group, understanding the objectives of the third breakout session, I was pleased I had my turn.

The irony is that Continual Service Improvement needs people to challenge, and to constantly manage expectations and characters, in order to be successful. It is not a discipline that lends itself to shy, retiring wallflowers.

If people are going to spend a day away from work to attend a SIG, then I think it makes sense for them to try and get as much out of it as they can.

Perhaps my message to the shyer members of the room, who hardly contributed at all, is to remember that everyone is there to help each other learn from collective experience. No one is there to judge or to act as an Assassin or Cynic, so make the most of the event and participate.

In Speakers’ Corner, for example, the debate flowed and people engaged with each other, even if the chair-hopping didn’t quite work. Acknowledgement also needs to go to the SIG team, who facilitated the day’s activities very well.

I have attended three events now – a UK event, a Regional Seminar and a SIG – and this was by far the most enjoyable and informative so far.

A side note: Am I the only one that hears CSI and thinks of crime labs doing imaginative things to solve murders in Las Vegas, Miami, and New York?  No?  Just me then.

Moving Beyond ITSM Maturity Assessments

Maturity assessments are popular for kick-starting ITSM initiatives. They allow an organization to spot gaps and prioritize areas for improvement.

However, the half-life of a maturity assessment is remarkably short and the impact of the glossy report can quickly fade. The key messages and compelling recommendations can soon be lost in the noise of other projects and new fires to fight.

What stops the shiny benchmark report from collecting dust on the shelf?

Michael Nyhuis, Managing Director of Australian firm Solisma, claims the answer to keeping assessments alive is to transform them into continual service improvement projects.

Their solution, Service Improvement Manager (SIM), provides a workspace for teams to baseline their maturity against various standards or frameworks, identify areas for improvement, document risks, and then assign tasks to ensure progress.

Built-in assessments include ITIL, ISO 14001, ISO/IEC 27001, ISO 9001, COBIT, and ISO/IEC 20000.

Service Improvement Manager (SIM)

The hosted solution has four main areas:

  1. Assessments – Compliance and Maturity, Baseline Reporting, Benchmarking, Prioritized Improvements
  2. Registers – Improvements and Risks Registers
  3. Initiatives – Activity Planning, Define Costs and Savings, Benefits Realization, Initiative Scoring
  4. Explorer – Management System, Policies and Procedures, Roles and Functions, KPIs and Metrics

Elevator Pitch Video (<2 min):

I like this collaborative way of working; spreadsheets and email ping-pong are replaced with progress (assuming the team jumps on board with the idea). No great ideas are allowed to slip through the cracks, and an audit trail of improvements and staff suggestions is kept in one place. SIM also allows users to track improvement projects according to weighted scores and ROI.

This is a good presentation framework for benchmarking against standards and ensuring good ideas and opportunities for improvement are put into action. It would be good to see the team behind SIM put more depth into the assessment libraries; the current questioning format is open to subjective opinion and the individual rigor of the auditor. Since it is a cloud-based offering, surely there is an opportunity for shared intelligence and the ability to benchmark organizations against each other as well as against standards? For example, a company could benchmark itself against companies of a similar size in a similar vertical sector, as well as against a standard.

Further info at http://www.Service-Improvement.com

If you have experience with SIM or a similar offering, I would be pleased to hear about it – please leave a message in the comments below.