Service Improvement at Cherry Valley

Problem, risk, change, CSI, service portfolio, projects: they all make changes to services.  How they inter-relate is not well defined or understood.  We will try to make the model clearer and simpler.

Problem and Risk and Improvement

The crew was not warned of the severe weather ahead

In this series of articles, we have been talking about an ethanol train derailment in the USA as a case study for our discussions of service management.  The US National Transportation Safety Board (NTSB) wrote a huge report about the disaster, trying to identify every factor that contributed and to recommend improvements.  The NTSB was not doing problem management at Cherry Valley.  The crews cleaning up the mess and rebuilding the track were doing problem management.  The local authorities repairing the water reservoir that burst were doing problem management.  The NTSB was doing risk management and driving service improvement.

Arguably, fixing procedures which were broken was also problem management.  The local dispatcher failed to pass a severe weather warning to the train crew, as he was supposed to; the warning would have required the crew to slow down and keep a lookout.  So fixing the training and prompts behind that failure could also be considered problem management.

But somewhere there is a line where problem management ends and improvement begins, in particular what ITIL calls continual service improvement or CSI.

In the Cherry Valley incident, the police and railroad could have communicated better with each other.  Was the procedure broken?  No, it was just not as effective as it could be.  The type of tank car approved for ethanol transportation was not required to have double bulkheads on the ends to reduce the chance of puncture.  Fixing that is not problem management; it is improving the safety of the tank cars.  I don’t think improving that communications procedure or the tank car design is problem management: follow that thinking to its logical conclusion and every improvement becomes problem management.

A distinction between risks and problems

But wait: unreliable communications procedure and the single-skinned tank cars are also risks.  A number of thinkers, including Jan van Bon, argue that risk and problem management are the same thing.  I think there is a useful distinction: a problem is something that is known to be broken, that will definitely cause service interruptions if not fixed; a “clear and present danger”.  Risk management is something much broader, of which problems are a subset.  The existence of a distinct problem management practice gives that practice the focus it needs to address the immediate and certain risks.

(Risk management is an essential practice that ITIL – strangely – does not even recognise as a distinct practice; the 2011 edition of ITIL’s Continual Service Improvement book attempts to plug this hole.  COBIT does include risk management, big time.  USMBOK does too, though in its own distinctive way it lumps risk management under Customer services; I disagree: there are risks to our business that don’t affect the customer.)

So risk management and problem management aren’t the same thing.  Risk management and improvement aren’t the same thing either.  CSI is about improving the value (quality) as well as reducing the risks.

To summarise all that: problem management is part of risk management which is part of service improvement.

Service Portfolio and Change

Now for another piece of the puzzle.  Service Portfolio practice is about deciding on new services, improvements to services, and retirement of services.  Portfolio decisions are – or should be – driven by business strategy: where we want to get to, how we want to approach getting there, what bounds we put on doing that.

Portfolio decisions should be made by balancing value and risk.  Value is benefits minus costs.  There is a negative benefit and a set of risks associated with the impact of building a new service on existing services: the project drags people and resources away from production, and there is the ongoing impact of increased complexity, the draining of shared resources, and so on.  So portfolio decisions need to be made holistically, in the context of both the planned and live services.  And in the context of retired services too: “tell me again why we are planning to build a new service that looks remarkably like the one we killed off last year?”.  A lot of improvement is about capturing the lessons of the past.

Portfolio management is a powerful technique that is applied at multiple levels.  Project and Programme Portfolio Management (PPM) is all the rage right now, but it only tells part of the story.  Managing projects in programmes and programmes in portfolios only manages the changes that we have committed to make; it doesn’t look at those changes in the context of existing live services as well.  When we allocate resources across projects in PPM we are not looking at the impact on business-as-usual (BAU); we are not doling out resources across projects and BAU from a single pool.  That is what a service portfolio gives us: the truly holistic picture of all the effort in our organisation, across change and BAU.

A balancing act

Service portfolio management is a superset of organisational change management.  Portfolio decisions are – or should be – decisions about what changes go ahead for new services and what changes are allowed to update existing services, often balancing them off against each other and against the demands of keeping the production services running.  “Sure the new service is strategic, but the risk of not patching this production server is more urgent and we can’t do both at once because they conflict, so this new service must wait until the next change window”.  “Yes, the upgrade to Windows 13 is overdue, but we don’t have enough people or money to do it right now because the new payments system must go live”.  “No, we simply cannot take on another programme of work right now: BAU will crumble if we try to build this new service before we finish some of these other major works”.

Or in railroad terms: “The upgrade to the aging track through Cherry Valley must wait another year because all available funds are ear-marked for a new container terminal on the West Coast to increase the China trade”.  “The NTSB will lynch us if we don’t do something about Cherry Valley quickly.  Halve the order for the new double-stack container cars”.

Change is service improvement

Everything we change is service improvement. Why else would we do it?  If we define improvement as increasing value or reducing risk, then everything we change should be to improve the services to our customers, either directly or indirectly.

Therefore our improvement programme should manage and prioritise all change.  Change management and service improvement planning are one and the same.

So organisational change management is CSI. They are looking at the beast from different angles, but it is the same animal.  In generally accepted thinking, organisational change practice tends to be concerned with the big chunky changes and CSI tends to be focused more on the incremental changes.  But try to find the demarcation between the two.   You can’t decide on major change without understanding the total workload of changes large and small.  You can’t plan a programme of improvement work for only minor improvements without considering what major projects are planned or happening.

In summary, change/CSI is one part of service portfolio management, which also considers delivery of BAU live services.  A railroad will stop doing minor sleeper (tie) replacements and other track maintenance when it knows it is going to completely re-lay or relocate the track in the near future.  After decades of retreat, railroads in the USA are investing in infrastructure to meet a coming boom (China trade, ethanol madness, a looming shortage of truckers); but they had better beware of drawing too much money away from delivering on existing commitments, and of disrupting traffic too much with major works.

Simplifying service change

ITIL as it stands today has a messy, complicated story about change.  We have a whole bunch of different practices all changing our services, from Service Portfolio to Change Management to Problem Management to CSI.  How they relate to each other is not entirely clear, and how they interact with risk management or project management is undefined.

There are common misconceptions about these practices.  CSI is often thought of as “twiddling the knobs”, fine-tuning services after they go live.  Portfolio management is often thought of as being limited to deciding what new services we need.  Risk management is seen as just auditing and keeping a list.  Change Management can mean anything from production change control to organisational transformation depending on who you talk to.

It is confusing to many.  If you agree with the arguments in this article then we can start to simplify and clarify the model:

Rob England: ITSM Model
I have added in the Availability, Capacity, Continuity, Incident and Service Level Management practices as sources of requirements for improvement.  These are the feedback mechanisms from operations.  In addition, the strategy, portfolio and request practices are sources of new improvements.  I’ve also placed the operational change and release practices in context.

These are merely  the thoughts of this author.  I can’t map them directly to any model I recall, but I am old and forgetful.  If readers can make the connection, please comment below.

Next time we will look at the author’s approach to CSI, known as Tipu.

Image credit: © tycoon101 – Fotolia.com

Bring me problems not solutions!

Albert Einstein is often quoted as having said ‘If I had one hour to save the world, I would spend fifty-five minutes defining the problem and only five minutes finding the solution’. Since Mr. Einstein was, undoubtedly, a clever man, I’d like to believe that those are his words.

Understand the problem first

Where I work no one seems to care about problems. All I ever hear about are solutions and what we should do differently, in the hope of making things better or right. People even make a point of the fact that they don’t like problems and therefore don’t care about them much.

Now, one could of course argue that finding solutions and communicating these is better and more productive than finding problems and whining about them. But you should not do one without the other.

We are not after your solutions

Several times a week a general manager, a colleague or just a plain random person walks into the room where I and my fellow process managers hang out, and gives us solutions.

And we try to embrace these solutions kindly and gently ask ‘what problem does it solve? How will we know that this change will fix anything that’s broken?’

Now, it is appropriate to mention that we are not particularly good at goals and objectives at my company either. If you have clearly defined goals it is, of course, easier to see the problems that keep you from reaching them, and so it takes less effort to define those problems.

In many cases problems are about value. At least they are if you, like me, work in the banking industry, where all that counts is value. It is probably different if you work in, for instance, healthcare or the military.  If a problem prevents you from gaining value, or wastes money, you will want to make sure it gets removed.

PSG (Problem – Solution – Goal)

As one step in the quest to be more effective I decided to write down a few pieces of advice to my coworkers, something to have in mind when talking about problem solving and making improvements. So I put together a one-slide presentation, accompanied by a brief document with my thoughts on the need for a problem definition before starting to think about the solutions. We called it the ‘PSG-model’.

There is, of course, nothing amazing about this. I didn’t invent anything, didn’t have any new bright ideas and I didn’t produce anything original apart from the slide itself. It is merely a simple, common sense method.

But it worked!

The key message was that the solutions are merely the path: the journey from the problem to the goal. If we don’t have a clear goal, we can use the problem definition (and a sense of ‘right’, or experience, if you will) to define and agree on goals.

We started to turn the solutions presented to us, or the ones we came up with ourselves, into problem definitions. Once that was done it became notably easier to agree on goals. Solutions whose underlying problems turned out to have no value were discarded, and we managed to swing the mindset at meetings, and among our spontaneous walk-ins, towards problem definitions instead of nifty solutions.

This method became so well accepted that some people apologised when they mentioned solutions during discussions, even though it was perfectly legitimate to do so.

Activity observations

The next step was to create a bit more structure around how we recorded and documented all the problems and goals (former solutions) so that we could act on them and put them to use. A document template was used to guide whoever wrote the problem and goal down, and we purposely left anything regarding solution out of the document. We called them ‘activity observations’.

The few minutes we actually sat down with each person who came by with a solution also made a big difference to how we were approached about all those small enhancements, or the plain whining of a more personal nature.

I’m certain there are some ITIL abbreviations for what we did and what came out of it. For now I couldn’t care less: it works, and no one can tell me we did it wrong, because we did it our way despite all the prerequisites that form part of this organization.

All in all, we managed to turn the wide plethora of solutions, random ideas and general whining into problem definitions and commonly agreed goals. It has taken us a little further towards valuable IT Service Management.

Two-speed ITIL – what next?

My recent blog Is it time for a two-speed ITIL? seems to have generated a lot of interest. As well as a large number of replies on The ITSM Review site, there were many tweets and Facebook posts where a wide range of people offered their thoughts and opinions.

A variety of approaches

The overall consensus seems to be that we need a fast-moving online repository of up-to-date IT service management guidance. This repository must be moderated, to ensure the quality of the content, but the moderation should allow for a wide variety of different approaches to be published even if they are not yet considered to be best practice, and even if they contradict generally accepted best practice.

Can we get this off the starting blocks?

This should be part of ITIL

Some people agreed with me that it would be best if this repository were managed as part of the ITIL brand, but others seemed to think it would be better if it were completely separate. There are a number of reasons why I think we should first try to do this as part of ITIL:

  • ITIL has a worldwide reputation as a trusted source of best practice. People may be more likely to contribute content, and to find and use the content contributed by others, if it is seen to be related to ITIL
  • If a small group of ITSM people set up the repository then other people may be less inclined to contribute, and may choose to set up alternative sites of their own. This could lead to a situation where, instead of working together to create value, we compete for attention, distracting us from the more important things we should be doing
  • If the repository is part of ITIL then it will be able to provide valuable input to future publications, either as updates to the ITIL core publications or as new complementary publications. This will provide a means of progressing ideas from concept through wider publication to accepted best practice.

I will discuss this idea with the Cabinet Office to see if I can persuade them to make it happen. If they are willing to try this then I will do what I can to help it succeed, but if they don’t want to then I will look around for alternative ways we can make this happen.

A moderated community

I have been thinking about how this repository might work, and I think we should consider some of the following:

  • We must have a transparent governance process, with clear criteria for why contributions will or won’t be accepted
  • We need a fair approach to intellectual property rights, encouraging people to contribute material but making sure that others can reuse it without fear of copyright issues
  • Each contribution should have an associated discussion thread, so that people can help improve the content – either by making improvement suggestions or by reporting the results of their attempts to implement the ideas.
  • We need to decide how maintenance of each contribution will take place. Will new versions of a contribution require approval from the original author, or will there be a process for others to create and edit new versions?

What do you think?

What other features and governance principles do you think we should consider?

I have a daytime job, providing strategic ITSM consulting to HP customers, so I can’t arrange a meeting with the Cabinet Office for a few weeks. Once I have spoken to them I’ll let you all know the outcome.

Image credit: © mezzotint_fotolia – Fotolia.com

Assessment Criteria for Incident and Problem Management

We will soon begin our review of Incident and Problem Management offerings in the ITSM marketplace. As with our previous comparison of Request Fulfilment, our goal is to highlight the key strengths, competitive differentiators and innovation in the industry.

During the Request Fulfilment review our original aim was to look at how the tools supported the process, but, refreshingly, the vendors who participated also shared their experiences and some insight into their consulting approaches.

When assessing the bread-and-butter elements of Incident Management, the challenge will be to identify the true differentiators in a discipline that is quite rigid.

We would like to encourage the same philosophy of identifying how deployment experiences have shaped the evolution of tools.

Incorporating Incident & Problem Management for a review

In my experience, deployments often implement Request Fulfilment and Incident Management in the early phases of projects, but Problem Management is often left until later phases.

Yet the two processes, in tool terms, are often linked together: quite often the record functionality and layout are the same for Incident, Problem (and Request, for that matter).

My assessment criteria for the Incident and Problem Management review are below; if you have any comments or recommendations please contact us.


Suggested Criteria for Incident & Problem Management

Overall Alignment

  • Have our target vendors aligned to ITIL and if so, to which version?
  • How do they set up roles and users to perform functions?
  • What demo capabilities can they offer potential customers?

Logging & Categorisation

These can either be kept simple, to great effect, or made so complex that they become irrelevant, as the Service Desk totally ignores them and picks the first thing on the list!

  • What information is made mandatory on the incident and problem record?
  • What categories and/or sub-categories are provided out-of-the-box?
  • How easy is it to customise these fields and values?
  • Show us how incident/problem matching and linkage to known errors are presented to users and/or service staff to expedite the process.
  • How much administration is needed to do bespoke changes?

Tracking

“Oh come on now,” I hear you cry, “What tool cannot track incidents and problems?”

But there can be a lot more to tracking these records than meets the eye:

  • What statuses are included out-of-the-box, and how easy is it to add/modify status definitions to suit customer requirements?
  • Can your tool show how many “hops” a record may face if wrongly assigned?

Lifecycle Tracking

Perhaps the best way of allowing vendors to show off their tool’s capabilities is for them to really go to town in terms of playing out scenarios.

The aim of this assessment is to look at how tools can help keep communication going during the lifecycle of an incident/problem and its linkage to other processes.

  • First time Fix from the Service Desk
  • Resolved via support group(s)
  • Demonstrating visibility of the incident/problem through its lifecycle, from  end-user, Service Desk and support group(s) points of view
  • Linkage to other processes

Prioritisation

  • How are priorities determined and managed (out-of-the-box)?
  • What happens when the priority is adjusted during the lifecycle of the incident/problem?
  • We would like to also give vendors an opportunity to show us how they link SLAs to Incidents.

Escalations

  • Demonstrate routing to multiple groups
  • Show the tool’s capability for handling SLA breaches – in terms of notifications and the effect on the Incident record during that time
  • Show us what happens when an incident/problem has NOT been resolved satisfactorily
  • Demonstrate integration between incident and other processes

Major Incidents and Problems

Much like categorisation, this can be kept very simple, or made so complex that more time is spent negotiating the process than fixing the issue in the first place.

  • Provide an end-to-end scenario to demonstrate how the tool handles the management and co-ordination across multiple groups for a Major Incident & Problem

Incident & Problem Models

All of the above criteria are what I consider the basics of an ITSM tool.

But I am keen to delve deeper into what vendors understand by the concept of Models.

In turn, how can their tools add significant value in this area?

There are several ways of looking at this concept (there will be no points for throwing it over the fence to Problem Management and focussing on Known Errors).

There are assessment criteria around the handling of Models, and we want to see how tools help in this aspect.

  • Demonstrate how your tool facilitates the use of Models (include if/where links to other relevant processes/support groups as part of the demonstration).

Incident & Problem Closure

It makes sense to end our list of assessment criteria by examining how tools resolve and/or close incidents and problems by default.

  • Show how an incident/problem is routed for closure.

This assessment will be quite scenario heavy, and we want to give participating vendors the freedom to develop their scenarios without limiting them to defined parameters (for example, specifying which service has failed, or which groups to use).

A key part of the assessment will also include how flexible the tool is with regards to customisation.

Incident Management can sometimes be taken for granted, so we would like participating vendors to really take a look at how Incident Management can be made “everyone’s” business.

But more importantly, Problem Management is often left to later phases, while organisations focus on processes like Request Fulfilment, Incident and Change – perhaps there is a case to make for implementing them hand in hand?


What is your view? What have we missed?

Please leave a comment below or contact us. Similarly if you are a vendor and would like to be included in our review, please contact us.

Getting started with social IT (Part 2 of 2)

Following on from Matthew Selheimer’s first installment on social IT, we are pleased to bring you the second and final part of his guide to getting started with social IT.

Level 3 Maturity: Social Embedding

The saying “Context is King!” has never been truer, and it is the foundational characteristic of Level 3 social IT maturity: Social Embedding.
This level of social IT maturity is achieved by establishing relevant context for social collaboration through three specific actions:

  1. The creation of a social object model
  2. The construction of a social knowledge management system that is both role-based and user-specific
  3. The enhancement of established IT processes with social collaboration functionality to improve process efficiency and effectiveness

The goal at Level 3 maturity is to leverage social embedding to improve IT key performance indicators (KPIs) such as mean-time-to-restore (MTTR) service or change success rate (additional examples are provided below). It is important that you select KPIs that are most meaningful to your organisation; KPIs that you have already baselined and can use to track progress as you increase your social IT maturity.

While the value of Level 2 maturity can be significant in improving the perception of IT’s responsiveness to users, Level 3 social IT maturity is where the big breakthroughs in IT efficiency and quantifiable business value are created.

Focus on key performance indicators

Focus on the KPIs associated with the processes you are enhancing with social collaboration. An incident management KPI measurement, for example, could be to multiply your current mean-time-to-restore (MTTR) service by your cost per hour of downtime or cost of degraded service per application. This will give you a starting point for benefit projections and value measurement over time.


For change management, you might use the number of outages or service degradations caused by changes and multiply that by your cost per hour of downtime and MTTR to arrive at a true dollars and cents measure that you can use to benchmark social IT impact over time. You might also consider other IT process metrics such as first call resolution rate, percentage of time incidents correctly assigned, change success rates, the percentage of outages caused by changes, the reduced backlog of problems, etc.
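To make that arithmetic concrete, here is a minimal Python sketch. All of the figures and variable names (cost_per_hour_of_downtime, mttr_hours, and so on) are hypothetical placeholders to be replaced with your own baselines, not benchmark data:

```python
# Hypothetical figures for illustration only -- substitute your own baselines.
cost_per_hour_of_downtime = 25_000.0    # dollars per hour of lost service
mttr_hours = 4.5                        # current mean time to restore service
incidents_per_month = 12                # significant incidents per month

# Incident management baseline: what MTTR currently costs us each month.
incident_cost_per_month = mttr_hours * cost_per_hour_of_downtime * incidents_per_month

# Change management baseline: outages caused by failed changes.
change_caused_outages_per_month = 3
change_outage_cost_per_month = (
    change_caused_outages_per_month * mttr_hours * cost_per_hour_of_downtime
)

print(f"Incident downtime cost per month: ${incident_cost_per_month:,.0f}")
print(f"Change-caused outage cost per month: ${change_outage_cost_per_month:,.0f}")

# A projected 10% MTTR improvement from social collaboration would be worth:
improvement = 0.10
print(f"Projected monthly benefit: ${incident_cost_per_month * improvement:,.0f}")
```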

The point is to select IT process metrics that are meaningful for your organization and enable you to calculate a quantifiable impact or benefit. Decision makers may be skeptical about the value of social IT, so you will need to make the case that there is real quantifiable benefit to justify the investment in achieving Level 3 maturity.

Relevant Context and Three Required Actions

Let’s now more fully consider the establishment of relevant context and the three actions characteristic of Level 3 maturity previously described: 1) creation of a social object model, 2) construction of a social knowledge management system, and 3) the enhancement of IT processes with social capabilities. We noted earlier that context is defined in terms of relevance to a specific audience. That audience could be a group of individuals, a role, or even a single individual. The most important thing is that context ensures your audience cares about the information being communicated.

How do you go about ensuring the right context? What is needed is a social foundation that can handle a wide variety of different perspectives based on the roles in IT and their experience. The most effective way to do this is to treat everything managed by IT as a social object.

What is meant by a social object? Consider, for example, a Wikipedia entry and how that is kept up-to-date and becomes more complete over time through crowd sourcing of knowledge on the subject. The entry is a page on the Wikipedia website. Now imagine if everything that IT is managing—whether it’s a router, a server, an application, a user, a policy, an incident, a change, etc.—was treated along the same lines as a Wikipedia page. Take that further to assume that all the relationships which existed between those entries—such as the fact that this database runs on this physical server and is used by this application—were also social objects that could be created, modified, and crowd-sourced. In this manner, organizational knowledge about each object and its relationships with other objects can be enriched over time—just like a Wikipedia entry.

FIGURE 2: A Social Object Model as delivered in ITinvolve for Service Management™, leveraging social collaboration principles
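To make the idea of a social object model more concrete, here is a minimal sketch in Python. The class and field names are invented for illustration; this is not the ITinvolve data model, just one way the concept could be expressed:

```python
from dataclasses import dataclass, field

@dataclass
class SocialObject:
    """Anything IT manages: a router, a server, an application, a policy, an incident..."""
    name: str
    object_type: str                               # e.g. "server", "application", "policy"
    description: str = ""
    followers: set = field(default_factory=set)    # people who want updates about this object
    history: list = field(default_factory=list)    # crowd-sourced edits over time

    def update(self, editor: str, note: str) -> None:
        """Record a crowd-sourced edit, Wikipedia-style."""
        self.history.append((editor, note))

@dataclass
class Relationship(SocialObject):
    """Relationships between objects are social objects too, so they can be edited and followed."""
    source: SocialObject = None
    target: SocialObject = None
    relation: str = ""                             # e.g. "runs on", "is used by"

# "This database runs on this physical server and is used by this application."
db  = SocialObject("orders-db", "database")
srv = SocialObject("blade-07", "server")
app = SocialObject("payments", "application")
runs_on = Relationship("orders-db runs on blade-07", "relationship",
                       source=db, target=srv, relation="runs on")
used_by = Relationship("orders-db is used by payments", "relationship",
                       source=db, target=app, relation="is used by")
runs_on.update("dba.alice", "Confirmed after last weekend's migration")
```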

Define a taxonomy for your social objects

Knowledge comes from multiple sources. Existing IT knowledge may be scattered across different places such as Excel spreadsheets, Visio diagrams, SharePoint sites, wikis, CMDBs, automated discovery tools, etc., but it also resides in the minds of everyone working in IT, and even among your end users. To effectively capture this knowledge, you will need to define a taxonomy for your social objects. You can then begin to source or federate existing knowledge and associate it with your objects in order to accelerate the creation of your social knowledge management system.

With an initial foundation of knowledge objects in place, your next task is to make the system easy to use and relevant to your IT teams by defining perspectives on the objects. Establishing perspectives is critical to a well-functioning social knowledge management system; otherwise, you will fall into pitfall #2 discussed earlier. For example, you might define a Network Engineer’s perspective that includes network devices and the relationships they have to other objects like servers and policies. You might define a Security Administrator’s perspective that focuses on the policies that are defined and the objects they govern, like network devices and servers. Without this perspective-based view, your teams will not have the relevant context necessary to efficiently and effectively leverage the knowledge management system in support of their day-to-day roles.
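Continuing the same illustrative (non-product) sketch, a taxonomy and role-based perspectives might be expressed like this; the role names and type lists are assumptions for the example:

```python
# A simple taxonomy: the object types the social knowledge system will recognise.
TAXONOMY = {"network device", "server", "application", "database",
            "policy", "incident", "change", "knowledge article"}

# Perspectives: which object types each role cares about by default.
PERSPECTIVES = {
    "network engineer":       {"network device", "server", "policy"},
    "security administrator": {"policy", "network device", "server"},
    "service desk analyst":   {"incident", "application", "knowledge article"},
}

def perspective_view(role: str, objects: list) -> list:
    """Return only the objects relevant to a given role's perspective."""
    relevant_types = PERSPECTIVES.get(role, set())
    return [obj for obj in objects if obj.object_type in relevant_types]

# e.g. perspective_view("network engineer", [db, srv, app]) -> [srv]
```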

Enrich your knowledge and keep it current

Once you have initially populated your social objects and defined perspectives, you need to keep knowledge current and enrich it over time to ensure your IT staff finds it valuable. This is why defining your objects as social objects is so critical. Just as you might follow someone on Twitter or “friend” someone on Facebook, your teams can do the same thing with your objects. In fact, when you created your perspectives, you were establishing the initial baseline of which objects your teams would follow. In this manner, whenever anyone updates an object or its relationships, those who are following it will automatically be notified, and the update will appear in a dedicated “news feed” or activity stream for the object.
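Staying with the illustrative sketch, following, notification and a per-object activity stream might look something like this (the function names are invented for the example, not a real API):

```python
from collections import defaultdict

# One activity stream ("news feed") per object, keyed by object name.
activity_streams = defaultdict(list)

def follow(person: str, obj) -> None:
    """Add a person to an object's followers, e.g. seeded from their perspective."""
    obj.followers.add(person)

def notify(person: str, message: str) -> None:
    """Stand-in for an email, chat or in-app notification."""
    print(f"[to {person}] {message}")

def post_update(obj, editor: str, note: str) -> None:
    """Record the edit, append it to the object's activity stream,
    and notify everyone who follows the object."""
    obj.update(editor, note)
    activity_streams[obj.name].append(f"{editor}: {note}")
    for person in obj.followers:
        notify(person, f"{obj.name} updated by {editor}: {note}")

# e.g.
# follow("net.bob", srv)
# post_update(srv, "dc.ops", "Replaced failed power supply in blade-07")
```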


This does two important things. First, it keeps those who “need to know” current on the knowledge about your environment so that everyone has up-to-date information whenever there is an incident, change, or other activity related to the object. Instead of waiting until a crisis occurs and teams are interacting with out-of-date information, wasting valuable time trying to get each other up to speed, you can start to work on the issue immediately with the right information in the right context.

Provide a point of engagement for subject matter experts

Second, it provides a point of engagement for subject matter experts to collaborate around the object when they see that others are making updates or changes to the object and its relationships. This second point should not be underestimated because it taps into a basic human instinct to engage on things that matter to them and directly contributes to the crowd-sourcing motivation and improvement of knowledge accuracy over time.

Your third action is to embed your social knowledge management system into your core IT processes in order to enhance them. This is not simply an add-on, as described in Level 2 social IT maturity, but rather a deep embedding of the social knowledge management system into your processes as the most trusted source of information about your environment. For example, imagine creating an incident record or change record, initially associating it with one or more impacted social objects, automatically and immediately notifying relevant stakeholders who are following any of those objects, and then engaging them in triaging the incident or planning the change. This is the power of social collaboration and why it can deliver new levels of efficiency and value for your IT organization.

Create new knowledge objects

As an incident or change is worked using social IT, collaboration in activity streams creates a permanent and ongoing record of information, which at any point can be promoted to become a new knowledge object associated with any other object. For example, let’s say that a change record was created for a network switch replacement. Each of the individuals responsible for the switch and related objects like the server rack is immediately brought into a collaboration process to provide input on the change and contribute their expertise prior to the change going to the Change Advisory Board (CAB) for formal approval.
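One way the promote-to-knowledge step could be sketched, still within the same hypothetical model rather than any product's actual implementation:

```python
def promote_to_knowledge(obj, stream_entries: list, title: str):
    """Turn a slice of an object's activity stream into a new knowledge object,
    related back to the object the collaboration was about."""
    article = SocialObject(title, "knowledge article",
                           description="\n".join(stream_entries))
    link = Relationship(f"{title} documents {obj.name}", "relationship",
                        source=article, target=obj, relation="documents")
    return article, link

# e.g. after the switch replacement change has been worked:
# switch = SocialObject("core-switch-02", "network device")
# post_update(switch, "net.bob", "Replacement switch racked and cabled")
# article, link = promote_to_knowledge(
#     switch, activity_streams[switch.name], "Core switch replacement notes")
```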

FIGURE 4: In-context collaboration and promotion to knowledge as supported by ITinvolve for Service Management™

This is just one example of the power of in-context collaboration. The same principles apply to incidents, problems, releases and other IT management processes.

To exit Level 3 and start to move to Level 4 on the maturity scale, you need to be able to provide your IT staff with in-context collaboration that is grounded in a social object model, utilizes a social knowledge management system that is easy to maintain and provides an up-to-date view of your objects and relationships, and enhances your existing IT management processes. But more importantly, you need to be able to show the quantifiable impact on one or more KPIs that matter to your organization.

Level 4 Maturity: Social-Driven

The final stage of social IT maturity is Level 4, the Social-Driven IT organization. The goal at this level is to leverage social collaboration for Continual Service Improvement (CSI).

The value of Level 4 social IT maturity comes in two forms. First, as your organization becomes more adept at leveraging social collaboration, you should benchmark your IT process KPIs against those of other organizations. Industry groups such as itSMF and the Help Desk Institute (HDI), as well as leading industry analyst firms, provide data that you can use. Getting involved in peer-to-peer networking activities with other organizations via industry groups is a great way to assess how you are doing in comparison to others. At this stage, you should be striving to outperform any organization that is not leveraging social collaboration principles across your KPIs, and you should be performing at or above the level of those organizations that have adopted social collaboration principles.

Measure the size of your community

Second, you should measure value in terms of behavioral change in your organization. At maturity Level 4, you should have established a self-sustaining community that is actively leveraging the social knowledge management system as part of its day-to-day work. Measure the size of your community and set goals for increasing it. Metcalfe’s law applies directly to social collaboration: the “network effect” increases the value of the social knowledge management system dramatically (roughly with the square of the number of users) as you add users to the community.


One way to foster a larger and more active community is through recognition and rewards. For example, you might choose to publicly recognize and provide spot bonuses for the top contributors who have added the most to the social knowledge management system. Or, you may reward service desk personnel who consult the social knowledge management system before assigning the incident to level 2 personnel. You might also choose to acknowledge your staff with “levels” of social IT expertise, classifying those who participate occasionally as “junior contributors”, those who participate regularly as “influencers”, and those who are most active as “experts” or “highly involved.”

What’s Beyond Level 4 Social IT Maturity?

One of the most exciting things about being engaged in advancing your social IT maturity is that we are all, as an industry, learning about and exploring its potential. In the future, we are likely to see new product enhancements from vendors that employ gamification principles that encourage even greater growth of our social collaboration communities.

We may see the integration of information from biometric devices that help us to more quickly assess end user frustration and initiate collaboration to resolve issues prior to the user even contacting the service desk. There are certainly going to be even more use cases for social collaboration than we can imagine today.

Dancybot V0.1

Hot on the heels of the first 24-hour digital ITSM conference in December… The ITSM Review is pleased to announce the launch of the first digital ITSM celebrity.

It is only a matter of time before speech recognition, bots, virtual service desk operators and ‘ubiquitous intelligent systems’ are commonplace in IT support – so we need to get used to the idea of mashing up real personalities with their digital counterparts.

“We need to admit right now that the digital version of you is augmented in such a way that it has perverted your digital ego and more importantly confused your physical self.” EDICT 10

Announcing the Dancybot Prototype V0.1

Who needs to pay for costly keynote speakers and travel expenses at ITSM Conferences when you can ‘go virtual!’

DANCYBOT V0.1 – Benefits at a Glance:

  • Multi-Instance, Multi-Lingual – No more calendar conflicts, can appear at multiple events and conferences worldwide at the same time.
  • Integrates with every ITSM product known to man, including those not yet invented (*)
  • Fully ITIL 2011, COBIT, USMBOK Compliant (**)
  • Fully SaaS enabled multi-tenant Cloud architecture (***)

Disclaimers

  • *untested
  • ** if you believe this you’ll believe anything
  • *** We couldn’t launch a product in the ITSM space without cloud-washing it could we? 🙂

I admit our Chucky-esque Chris Dancy simulation is a little primitive – but he has a bright future! Maybe one day he’ll be autonomous and self-sufficient from the original ‘meat’? We just need to find our first conference organizer who is brave enough to place him on the wall next to the Twitter feed…

Crowdsourced Favourites: Top 20 Most Read from 2012

Happy New Year to all ITSM Review readers. Thank you for your continued support. Visitors to the site from July to December 2012 were up a whopping 679% on the same period in 2011; in other words, traffic grew nearly eight-fold. We now regularly serve around 10,000 – 12,000 visitors a month.

As always, please let me know if you would like to see any particular topics covered or guides written. If you would like to contribute please give me a shout.

Of the articles published in 2012, below are the top 20 most popular:

  1. 7 Benefits of Using a Known Error Database (KEDB)
  2. 12 IT Helpdesks for under $1,000
  3. Gartner MQ ITSSM 
  4. Winners and Losers in the ITSM Premier League
  5. Free Access 10 Gartner ITSM Papers 
  6. Rob England: What is a Service Catalogue?
  7. Planning for Major Incidents 
  8. Request Fulfilment Group Test Results
  9. Is it time for a two-speed ITIL?
  10. Fifty Shades of BMC
  11. Review: ServiceNow Request Fulfilment
  12. Top 25 ITSM Pundits by Klout [August 2012 Update]
  13. A structured approach to problem solving
  14. Free ITIL Training
  15. Review: Cherwell Request Fulfilment
  16. A Great Free ITSM & ITAM Process Tool via #Back2ITSM
  17. Review: Marval Request Fulfilment
  18. How to provide support for VIPS
  19. “May you live in interesting times” – The impact of cloud computing
  20. “I’m not saying my opinion is better than yours, but I do have a klout score over 60”