ITSM Back To Basics – The Service Catalogue

Introduction

So here’s the thing. I’ve worked in IT forever and in ITSM for over 15 years, and it never fails to amaze me how many failed or unused Service Catalogues are kicking about in industry. As a consultant I’ve seen and heard horror stories of clients paying upwards of £60,000 for a Service Catalogue they were told would solve all their problems, only to be presented with a two-page spreadsheet listing a few business services at the end of the engagement. As an Irish person who remembers the halcyon days of the Celtic Tiger, I’m calling this the ITSM industry’s very own “ah here” moment.

So what is the Service Catalogue and does it deserve all the hype? ITIL defines the Service Catalogue as a database or structured document with information about all live IT services, including those available for deployment. The Service Catalogue is part of the service portfolio and contains information about two types of IT service: customer-facing services that are visible to the business; and supporting services required by the service provider to deliver customer-facing services. In other words, the Service Catalogue is a menu of all services available to the business. It also provides the real link between the business and IT: it maps business processes to the IT systems that enable them, so IT can focus on ensuring those services perform well. Not too scary so far, right?

Purpose:

The Service Catalogue has two main purposes:

  1. To provide and maintain a single source of consistent information on all operational services and those being prepared to be run operationally; essentially acting as a menu for the business to order IT services from. An ex-colleague of mine (waves to Pink Elephant UK) used to say that the first rule of ITSM is “always make it easy for people to give you money” aka the Hubbard – Murphy law of ITSM. How can we make it easy for customers to give us lots of lovely money? By giving them a sparkly menu of course.
  2. To ensure that it is widely available to those who are authorised to access it; in order to be effective the Service Catalogue needs to be front and centre of your IT operation so that it’s used consistently. Let’s think about it logically for a moment: if it’s not being used by the business, then what value is it adding? Exactly.

Scope:

The scope of Service Catalogue Management is to provide and maintain accurate information on all services that are being transitioned or have been transitioned to the production environment, i.e. anything that’s live or about to be very shortly.

Value to the business

  • Provides a central source of information about the IT services delivered by the service provider organisation.
  • The Service Catalogue maintained by this process contains:
    • A customer-facing view of the IT services in use
    • A description of how they are intended to be used, in clear, business-centric language; there’s a time and a place for technical jargon and the Service Catalogue isn’t one of them. Let’s not frighten the horses here.
    • A list of the business processes they enable (this should be front and centre – remember – make it easy for people to give you money, right?)
    • A description of the levels and quality the customer can expect for each service, preferably one that links to the appropriate SLA, OLA or contract.

Different Views

  • The Business Service Catalogue – This contains details of all IT Services delivered to the Business (in Business language and available to the Business if required). The Business Service Catalogue should contain the relationships with business units and business processes that are supported by each IT Service. Typically these are in the form of Service Level Agreements (SLAs).
  • The Technical Service Catalogue – This expands on the Business Service Catalogue with relationships to supporting services, shared services, components and Configuration Items necessary to support the provision of services to the Business (typically this is an internal document so it’s not available to the Business). The Technical Service Catalogue focuses internally on defining and documenting support agreements and contracts (Operational Level Agreements and contracts with external providers or third parties).
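To make the two views concrete, here’s a rough Python sketch of how a single catalogue entry could carry both: the business view strips away the technical detail, while the technical view keeps the supporting services, CIs and OLAs. All field names and values here are invented for illustration; real tools will have their own schemas.

```python
# Illustrative sketch: one catalogue record serving both views.
# Field names and values are hypothetical, not from ITIL or any tool.

BUSINESS_FIELDS = ["name", "description", "business_processes", "sla"]

catalogue = [
    {
        "name": "Payroll",
        "description": "Pays staff accurately and on time each month",
        "business_processes": ["HR monthly payroll run"],
        "sla": "SLA-0042",
        # Technical view only: supporting services, CIs, OLAs/contracts
        "supporting_services": ["Payroll DB cluster", "Batch scheduler"],
        "configuration_items": ["srv-payroll-01", "db-payroll-01"],
        "olas_and_contracts": ["OLA-DBA-07", "Contract-PrintVendor-03"],
    },
]

def business_view(entry):
    """Strip the technical detail so the business sees only the menu."""
    return {k: entry[k] for k in BUSINESS_FIELDS}

def technical_view(entry):
    """The full record, including supporting services and CIs."""
    return dict(entry)

print(business_view(catalogue[0]))
```

The design point: one underlying record, two filtered presentations, so the business menu and the internal dependency map can never drift apart.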

 

OK, so that’s the basics covered, come back soon for our top tips on implementing a Service Catalogue successfully.

 

Image Credit

How to Establish an ITAM Program in Less Than 30 Days

Guest post by our friends Tom Bosch & Cathy Won at BDNA. We met Tom & Cathy at Knowledge 16 earlier this year (Vegas baby!) and got chatting about how we need to be more agile in getting both ITSM and ITAM established.



Today’s IT departments are grappling with the sea change brought by cloud architectures, virtualization, mergers and acquisitions, software audits, compliance, security, upgrades and a host of other initiatives, such as BYOD (bring your own device). These new challenges only exacerbate current complexities in managing company laptops, desktops, servers, operating systems, network hardware, and now, mobile devices and tablets.

As the IT landscape continues to evolve, it is vital that enterprises gain greater control over the various components within their IT infrastructure. Not only do ITAM solutions help companies detect and prevent IT and regulatory risks, they also maximize a company’s productivity.

The challenge is understanding where and how to begin deploying a new IT Asset Management (ITAM) program, or how to leverage existing ITAM solutions in a way that keeps up with these changes. Below are suggestions for how an effective ITAM program can be established in less than 30 days.

 

Deploy Sooner than Later

Software vendors frequently — and without warning — audit customers to ensure they are in compliance with license contracting terms. An audit can be triggered out of nowhere, and failing one audit could trigger more. For software vendors, it’s simple business logic: Make sure customers pay for any software usage above and beyond what they are entitled to. But for customers, providing proof that the organization is using only properly licensed software can be cumbersome and complicated.

According to a recent survey of several hundred IT professionals conducted by Information Week and BDNA, more than 61 percent of companies were audited within the last 18 months, and more than 17 percent of them were audited more than three times within that same 18-month period.

And as anyone who has been through one is aware, software audits carry hefty fines. In addition to the financial burden of paying for the settlement, true-ups and additional IT, legal and PR resources, organizations also find their productivity, credibility, opportunity and reputation impacted post-audit.

Given the high consequences of non-compliance, ITAM can no longer be taken lightly as an optional discipline. The sooner an ITAM program is put into place, the sooner a company is protected from a costly audit.

 

Know the Answers

More than 85 percent of the BDNA survey’s respondents admitted that they were “accidental” software pirates, either deploying software for which they had never paid or exceeding their number of acquired licenses.

When senior executives ask how to make ITAM projects as simple as they can be, they really want a process that answers these three questions:

  • What do I have?
  • Are we using what we’ve purchased?
  • Are we entitled to all we are using?

By achieving greater visibility, an enterprise achieves some key benefits:

  • Stronger negotiation position with suppliers
  • Better security and system integrity
  • Reduced risk and improved governance

 

4 Steps to Greater Visibility

The fundamental goal of greater visibility can be achieved in less than 30 days with these four steps:

1) Discovery: The first step to greater visibility into the enterprise is to discover what assets the enterprise holds. Many enterprises already possess the tools to do part of this, but are not properly integrating the tools into their overall IT management processes. Existing tools such as System Center Configuration Manager (SCCM) can often be used to capture raw data about IT assets. Most companies have at least six tools installed and can leverage that data – and ensuring the right processes are in place to do that is key.

2) Reconcile the data: Eliminate unnecessary, irrelevant and duplicate data discovered across multiple tools. That duplicate data can be safely discarded, ridding the enterprise of distracting clutter.

3) Remove redundant and unauthorized applications by identifying overlaps and keeping only those that are being fully utilized.

4) Pair inventory with your procurement process: This improves compliance in preparation for an audit as well as identifying unused resources and licenses, giving you additional leverage to negotiate with your suppliers.
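As a rough illustration of steps 2 and 3, here’s a minimal Python sketch of reconciling raw discovery data from two hypothetical sources. The tool names and record shapes are invented, but normalising records so that duplicates reported by different tools compare equal is the heart of the reconcile step.

```python
# Sketch of the reconcile step across multiple discovery tools.
# Source names and record shapes are made up for illustration.

sccm = [
    {"host": "LT-001", "software": "Office", "version": "2016"},
    {"host": "LT-002", "software": "Visio", "version": "2013"},
]
network_scan = [
    {"host": "lt-001", "software": "office", "version": "2016"},  # duplicate
    {"host": "SRV-009", "software": "SQL Server", "version": "2014"},
]

def normalise(record):
    """Canonical key so the same asset seen by two tools compares equal."""
    return (record["host"].upper(), record["software"].lower(), record["version"])

def reconcile(*sources):
    """Merge sources and drop duplicates, keeping the first sighting."""
    seen, merged = set(), []
    for source in sources:
        for record in source:
            key = normalise(record)
            if key not in seen:
                seen.add(key)
                merged.append(record)
    return merged

inventory = reconcile(sccm, network_scan)
print(len(inventory))  # 3 unique assets from 4 raw records
```

In practice the normalisation rules (hostname casing, product name aliases, version formats) carry most of the work, which is why commercial catalogues of product names exist.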

By following these guidelines, enterprises can significantly minimize the risk from an audit and disruption to business as usual in less than a month.

 

About the Authors

Tom Bosch is a Certified Software Asset Manager who operates as director of Sales at BDNA Corporation. He has spent the last five years working with dozens of corporations solving ITAM issues. With a diverse 30-year background in sales, operations and finance management, Tom has been involved in numerous re-engineering projects in which the focal point remains process simplification.

Cathy Won is director of product marketing at BDNA.  Cathy has extensive product marketing and product management experience, including time at NetApp, Juniper Networks, EMC, VERITAS, Legato and Brocade.

 

Featured image credit.

Guest Post: How To Staff Your Windows 10 Migration Team Properly

Guest post from Juriba:

Your business has been relying on Windows 7 or Windows 8.1 for some time now, but it is becoming increasingly apparent that you will soon need to have things ready for a transition to Microsoft’s latest offering — Windows 10.

With Windows 10 comes a set of changes that your IT team may not yet be entirely familiar with, such as new security configurations, how to handle application management or creating policies around Windows 10 branching.

As your enterprise prepares to migrate to Windows 10, you have thousands of users to consider. This is no time to be understaffed, as there will be many moving parts for you to keep track of and under control. This naturally raises a question among IT professionals: how are we to staff our Windows 10 migration team correctly?

Whether you are planning on having a third party help you as a service integrator, considering a hybrid approach, or thinking about running it internally or via your Business As Usual (BAU) teams, it is time to start thinking about what roles to fill for a successful migration. Of course, when you are heading up the migration effort for tens of thousands of employees and a similar number of devices including desktop and mobile assets, you’ll need to choose the best migration command and control tools to assist the new hires for the team and help them work more efficiently.

This is a project that calls for team stability, and you would be best served by recruiting resource (internal or external) that will be with you for the long haul. Preparation time for a Windows 10 migration in an enterprise could take as long as 9 to 12 months, according to a recent article at Computerworld. During that period, your new or contracted staff will be collecting pertinent information, building images and testing how well they deploy, before moving on to readiness tracking and migration scheduling.

You’ll want to define the roles that must be included in the migration team as well as which ones are optional, so you can deploy your resources as efficiently as possible. One of the more important tasks is to determine which skill sets will be required in your team.

Which Roles Should You Include?

A number of roles must be staffed for your successful Windows 10 migration. While every enterprise is different and will naturally have unique circumstances and requirements for staffing, the following is a list of roles that you will most likely need to include to ensure that the migration will go as smoothly as possible.

  • Key Stakeholders. Whilst these resources might not be involved daily with your project, their sponsorship and accountability for escalation is a critical element for project success. They should be the equivalent of a high-level steering committee, ensuring that the project is delivering to the requirements laid out in the business case and agreed by this same group.
  • Program Manager. You need someone to be accountable for the direction and success of your Windows 10 migration. They should manage the overall budget, ensure the project is moving forward and take the key decisions in conjunction with the businesses that are in scope.
  • Team Managers. A number of manager specialties will also need to be staffed, such as Application Managers, Project Managers, Risk Managers, Technical Project Managers and Deployment Managers. These people will be responsible for keeping their activities on track and reporting status to the Programme Manager.
  • Process & Infrastructure Experts. Expertise in process and infrastructure will be invaluable for your team, so make room for Process Experts and Infrastructure Experts before you begin the migration. They are responsible for your end to end solution design.  Get it right here and the teams below will be much more efficient.
  • Engineering & Software Development. It is difficult to imagine a Windows 10 migration effort that does not make use of Software Developers and Engineers. You will need to utilize some Build Engineers to ready your gold OS image(s) and software re-development experts to help ensure that all applications are made compatible with Windows 10 (especially in-house apps).
  • Application Discoverers and Application Packagers. When it comes to applications, make sure to hire both Application Discoverers and Application Packagers to support your readiness efforts.  It is likely that many applications will need to go through the factory for compatibility and testing.
  • Business Liaison Officers. In order to coordinate your activities with the various business teams involved, you will want to include Business Liaison Officers and Programme Officers. This role will ensure buy-in for your project within the user base and create the important link between your project goals and their business impacts/priorities.
  • Logistics Coordinators, Communications Experts, and Migration Schedulers. A team of logistics coordinators, communications experts and schedulers focused on keeping things organized is important, especially when you are migrating many thousands of end users, to drive migration numbers and ensure deployment capacity is filled to maximum.
  • Deployment Engineers. Whilst many Windows 10 projects will be delivering multiple zero-touch in-place upgrades, there are still the hardware replacements and rebuilds to manage on-site. This is where a dedicated deployment team can help to drive the migration success and be the ‘feet on the street’ to help with those first-day migration issues.
  • Procurement.  Not a dedicated project role, but an important function in the logistics chain. Ensuring a point of contact and liaison with the hardware and software vendors will guarantee that this function will not become a bottleneck.

Which Roles Are Optional?

Your organization’s definition of “optional” may be very different from similar businesses in the industry. However, it is good to keep in mind the roles that are of lower priority so you can deploy the most needed human resources first. Circumstances may change where what was once optional is now needed to keep the project on track.

For example, you may find that you can do without communications experts if you find that end users can do just fine with the available literature and training materials. Likewise, user acceptance testers who check on how Windows 10 works with the rank and file may be optional, depending on the knowledge and experience of the majority of your end users.

What Skill Sets Are Needed?

While organizing a list of various roles that you’ll need to staff before the Windows 10 migration is necessary, it is also worth noting the skill sets required to make your project a success.

Critical skill sets needed for the migration include asset management and role-based management, noted a recent report from BetaNews. Experience in centralized system packaging is also needed (making an image or through a bundled deployment).

You’ll also want someone on your team who is skilled at managing reboots and ensuring that all implementations are kept track of during installation.

Conclusion

It is not always going to be easy to fill in holes in your workforce, especially when talent is in high demand and your competitors are in the same position regarding staffing up for a Windows 10 migration. Keep in mind that your road to Windows 10 could be smoother if you use a dedicated IT project management tool. For more information about Juriba’s Dashworks IT project management tool click here.

Image Credit

Integration, Bow Ties & Service Catalogues; Cherwell SIAM Survey Results

Andy White, Cherwell

 

 

Our friends over at Cherwell recently conducted a survey on all things SIAM and I was lucky enough to catch up with Andy White, Vice president & General Manager for EMEA to talk through some of the findings.

 

 


Andy’s take on SIAM:

“SIAM provides a performance regime to govern and control so organisations only pay for things they can use and access. It delivers explicit service integration parameters that govern performance, availability, quality but more from a user’s perspective rather than a commercial or vendor perspective. It supports the skills and capabilities required to manage third-party suppliers in a commodity-based environment. SIAM’s really delivering an open view, an open standards view, to delivering workflow, reporting, financial metrics in the entire service delivery to the ultimate end customer.”

In other words, SIAM is a way of delivering value to customers via multiple suppliers in a seamless way that ensures performance, availability and quality requirements are taken care of. As Andy put it, the bow tie is getting bigger: on one side you have customer perception, and on the other you have the technology available, with IT in the middle. Drones, the Internet of Things, AI – as technology becomes more and more accessible, customer expectations will increase, meaning IT departments have to deliver in order to stay relevant.

Here are some of the highlights of the Cherwell study:

  • SIAM isn’t going anywhere. 45% of the survey respondents managed between 21 and 100+ suppliers and nearly a third of all respondents had already implemented SIAM.
  • Those at the sharp end of IT operational issues better understand the benefits of SIAM. The research found that more senior IT professionals (38%) have implemented SIAM processes compared with director level respondents (21%).
  • Whilst obtaining reports and metrics is deemed easy, managing risk is harder. An enormous 93% of those surveyed reported being able to access management information easily. Managing risk effectively in a SIAM environment is a tougher prospect, with over 24% of respondents admitting to finding it hard or very hard to assign tangible risks in a multi-vendor environment.
  • Service Management is maturing; 76% of respondents had an integrated Service Catalogue in place to enable end users to select business services.

The top 3 takeaway findings from this survey:

  1. Everyone knows SIAM.
  2. We need to be having the right conversations with C level and above so that SIAM gets on the business agenda.
  3. We need the right tools to be able to visualise performance. Dashboards and reports will supply C level intelligence and help to drive performance.

 

You can check out the survey in full here. What do you think? Let us know in the comments!

Supercharging Your Service Delivery Using SIAM

The lovely people at Cherwell invited me to join one of their webinars to talk about SIAM so here is the link to the recording!

My session was all about how ITIL and SIAM can be used to take your service provision from good to “let’s rock this!” The world and its mum knows that ITIL is the globally recognised framework for ITSM best practice, but in a world where outsourcing, co-sourcing and multi-sourcing models are becoming more and more common, ITIL alone can’t cope. Enter SIAM; the framework that enables an organisation to manage their service providers in a consistent and efficient way, making sure that performance across a portfolio of multi-sourced goods and services meets user needs. In other words, SIAM is a flavour of ITIL that supports organisations in managing multiple suppliers whilst keeping the user experience seamless for the rest of the business.

We all know that ITIL is inherently all about continual service improvement and a SIAM environment is no different. That said, here are some of the things to look at when looking at SIAM:

Span Of Control

How many vendors can you manage without things being missed, balls being dropped or angry mobs turning up at your door? Look at your span of control and, if your number of suppliers, partners and vendors is increasing faster than you know what to do with, look at introducing the lead vendor concept to regain some control.

Understand Vendor Dependencies

However many suppliers or partners you use, the buck will always stop with you from the perspective of the business if something goes wrong. Map out your IT services, how they support business outcomes and which supplier delivers each IT service. The CMDB or a technical view of the Service Catalogue will help you to do this and will enable you to spot any areas of vulnerability so you can plan accordingly.
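As a rough sketch of that mapping exercise, here are a few lines of Python over an invented service-to-supplier map; in practice the data would come from your CMDB or the technical view of the Service Catalogue. Services with only one supplier behind them jump out as candidates for contingency planning.

```python
# Hypothetical service-to-supplier map; a CMDB or technical Service
# Catalogue export would be the real source of this data.

service_suppliers = {
    "Email": ["CloudCo"],
    "Payroll": ["PayrollCorp", "PrintVendor"],
    "Online ordering": ["HostingLtd"],
}

# Invert the map: which services does each supplier underpin?
supplier_services = {}
for service, suppliers in service_suppliers.items():
    for supplier in suppliers:
        supplier_services.setdefault(supplier, []).append(service)

# Flag services that depend on a single supplier - areas of vulnerability
single_supplier_services = sorted(
    s for s, deps in service_suppliers.items() if len(deps) == 1
)
print(single_supplier_services)
```

The inverted view is just as useful the other way round: it shows the blast radius if one supplier has a bad day.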

Organising For SIAM

When preparing for a SIAM environment; having a strategy is key to ensure that you have a holistic view of your end to end service, making sure nothing is lost or missed. This strategy should be used as the basis for policies, procedures and work instructions to ensure that there are consistent ways of working across the board. Each vendor should have a catalogue of service offerings to ensure dependencies are identified and documented.

Relationship Management

An effective SIAM environment is all about the relationships between the customer organisation and its partners. Moving to a SIAM model will require a culture change, so building a collaborative culture is vital. One way of nixing a blame culture is to use the practice of unconditional positive regard (UPR). UPR is a term credited to humanistic psychologist Carl Rogers and means accepting and respecting others as they are without judgment or evaluation, i.e. treating everyone with the best of intentions. If all else fails, there might be another term that works – TCB, or tea, coffee and biscuits! The overall aim? There’s no more “them” and “us”, there’s simply one team.

Managing Performance

As with the span of control, the more SLA, OLA and contract documentation you introduce into your process, the more admin is required and the more difficult it is to keep things under control. One solution is to have a shared service level agreement across all vendors to cover the end to end service, which will encourage collaboration and reduce any potential blame culture. Measurements should be based on this single SLA to communicate how processes are performing, identify any improvement areas and demonstrate that improvement is happening.
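To illustrate the idea of measuring against one shared SLA, here’s a small Python sketch over some invented incident records. The point is that attainment is calculated for the end-to-end service, across every vendor in the chain, rather than per vendor.

```python
from datetime import timedelta

# Hypothetical incidents spanning several vendors in the delivery chain.
incidents = [
    {"id": "INC-1", "vendors": ["NetCo", "AppCo"], "resolved_in": timedelta(hours=3)},
    {"id": "INC-2", "vendors": ["AppCo"], "resolved_in": timedelta(hours=7)},
    {"id": "INC-3", "vendors": ["NetCo", "HostCo"], "resolved_in": timedelta(hours=2)},
]

SHARED_SLA = timedelta(hours=4)  # one target for the end-to-end service

def sla_attainment(incidents, target):
    """Share of incidents resolved within the single end-to-end target,
    measured across all vendors rather than vendor by vendor."""
    met = sum(1 for i in incidents if i["resolved_in"] <= target)
    return met / len(incidents)

print(f"{sla_attainment(incidents, SHARED_SLA):.0%} within the shared SLA")
```

Because no vendor can hit the target alone when an incident crosses suppliers, the measure itself nudges everyone towards collaboration rather than blame.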

Tools

The ITSM tool industry is moving to support SIAM environments. Out-of-the-box integrations, codeless functionality and add-ons are all available in the market, giving customers options.

Benefits

  • A single point of contact, ownership & control for IT Services
  • Clearly defined roles & responsibilities
  • Optimised cost of services
  • Streamlined management of IT services
  • Consistently applied processes
  • A more transparent IT landscape
  • Increased Customer Satisfaction!

Did you listen to the webinar? Let me know what you thought in the comments!

Image Credit

Governance 101: The role of effective Service Management governance in an IT services organisation and the key features of a governance framework

Delivering consistent, quality IT services for customers is not easy, and it can be even more challenging if they are not governed effectively. For example, how can an IT organisation look to improve if it doesn’t measure the number of service-impacting incidents properly?

Take the high-profile service outages of several major banks in recent years, for example. Their customers were unable to make transactions or access services for periods of time. Even in such a highly regulated environment as financial services, where IT governance is generally tighter, there are no guarantees that the outages could’ve been prevented by governance alone.

Equally, too much governance could be seen as overly bureaucratic. A complicated – and lengthy – change control process could drive the wrong behaviour from some members of the IT organisation in that they may simply bypass the process.

By order of the management, doesn’t always mean effective governance!

In any case, a business is often dependent on its IT services, and as such, there need to be controls in place to not only protect – but gain value for – its customers. This of course needs to be appropriate, as not all businesses are financial service providers needing tight control.

What is governance and why is it important?

Before implementing any type of governance, it is worth understanding what it actually is. According to Wikipedia, “governance refers to all processes of governing undertaken…and relates to the interaction and decision-making among the actors involved in a collective problem”.

The Harvard Business School describes IT governance as “specifying the decision rights and the decision-making mechanics to foster the desired behaviour in the use of IT”.

A key thing to note is that governance is not the same as management. Ultimately, ITSM governance is concerned with control, compliance and performance.

It is important that ITSM governance has effective decision-making in place; drives the right behaviours (and, by implication, discourages the wrong ones); and has policies and processes in place so that issues are easier to discover and quicker to remedy.

Going back to our banking example earlier, HSBC had an issue with ATMs and Online Banking in 2011 but were able to pinpoint it and restore service within 2-3 hours. If they didn’t have good governance in place, it feasibly could have taken considerably longer to obtain information and decisions.

What are the different aspects of ITSM governance?

In order to understand, design and communicate effective ITSM governance, Harvard Business School suggests a “decision rights and accountability framework” should be created that covers aspects like:

  • What decisions should be made and what information should be considered
  • Who can make decisions and who is accountable for them
  • How can decisions and governance be measured?

You might also want to consider different aspects like those in the table below:

  1. People – Communicating guiding principles that inform and involve all relevant staff; leveraging their expertise; and ensuring strong input from senior management
  2. Process – Governance should be controlled and executed through policy, process, ownership and performance
  3. Technology – What technology and tools are required to support the process?
  4. Information – What data, such as measurements and metrics, are required to inform decision-making?
  5. Services – What are they; how much do they cost; and how do they add value to the business?
  6. Suppliers – What are their processes and metrics, and how are they involved in your governance?
  7. Customers – Who are your customers and how do they benefit from your governance? How can you evidence that your governance improves service costs, their perception and value delivery?
  8. Corporate Governance – How does your governance align to corporate governance, strategic objectives and architecture; and is IT involved at the right level within the organisation in this regard?

How is ITSM governance executed?

After considering what aspects to include in ITSM governance, it is equally important to consider how to design and execute it in practice. The following are some suggestions you might want to consider when implementing ITSM governance.

Firstly, identify the types of frameworks and methods to be used – particularly if you are starting from scratch. Whilst not exhaustive, the following are some common methods and how they can be applied:

  • COBIT is an IT governance framework that focuses on what should be covered in processes and procedures and how they can be directed and controlled.
  • ISO/IEC standards like 20000 (Service Management), 27000 (Security) and 38500 (IT Governance) are international standards that provide specific advice and controls IT can be audited against to gain industry-recognised certification
  • TOGAF is a framework for enterprise architecture that provides an approach for designing, planning, implementing, and governing an enterprise and service orientated architecture
  • Other specific best practices for governance, such as PRINCE2 for projects; USMBOK and ITIL for service management; MoR for risk management; and CMMI for benchmarking and maturity.

Secondly, ITSM needs to be involved with – or even own – certain internal governing bodies like:

  • IT Pipeline and Portfolio Board to understand the upcoming projects and be ready to design, transition and operate the services being delivered as necessary
  • Architecture Governance Board to influence and ratify all architecture designs and decisions
  • Change Advisory Board to review/approve changes – particularly to the live production environment
  • Other Governance or Steering Groups involving the business to ensure IT is represented appropriately

Thirdly, ITSM governance needs to ensure key policies, processes and metrics are in place. This may vary depending on the needs of the organisation, but incident, change and release policies should be created to ensure service-related issues or changes are controlled, evaluated, measured and resolved in an appropriate way, minimising risk and impact to the business.

Finally, and arguably most importantly, build an improvement culture that involves the support of the whole IT organisation. By establishing quick wins; involving staff in policy development; empowering them to take ownership as appropriate; and using improvement techniques such as Deming’s Plan-Do-Check-Act cycle, ITSM governance is more likely to be established, accepted and acted upon by the IT organisation.

Summing Up

The key things to remember when implementing ITSM governance are to:

  • Ensure it is appropriate for your organisation and limit bureaucracy where possible
  • Remember that governance is not management and is primarily about driving effective decision-making and ensuring control and performance of services
  • Make sure it aligns to the strategic and corporate governance and objectives of your organisation
  • Control, improve and mature governance through policy, process, benchmarks and measurements using industry best practice if practicable to do so.
  • Develop and maintain an improvement culture within the IT organisation so that staff understand the value of – and contribute to the success of – ITSM governance


Image Credit


 

This article was contributed by Jon Morley – Vice-Chair of the itSMF UK Service Transition Special Interest Group and IT Service Transition Manager at the University of Nottingham.

 

 

How predictive analytics have turned Incident Management on its head


Predictive analytics is set to turn the world of IT service management, and in particular Incident Management, on its head. After all, it has already done this for IT Capacity Planning, where it is now possible to predict and avoid future incidents at a workload level.

Within IT capacity planning, forecasting (predicting, if you like) has always been a key feature of the discipline. It was used to ensure that large chunks of demand, either through growth or change, could be met, focusing on the strategic horizon rather than the day-to-day operation. If there are capacity issues, the Service Operation process of Incident Management informs the Service Design process of Capacity Management so they can be dealt with as part of future Service Design activity.

Incident Management should inform IT Capacity Planning about incidents logged due to capacity or performance issues, so that this intelligence can be used to assist in the diagnosis and resolution of incidents. The idea that Capacity Management informs Incident Management of future and avoidable incidents – or indeed how to deal with them – is a relatively new concept.

Playing the tactical game


Technological advances have opened many new areas of innovation and opportunity in this space. Virtualization, automation, big data and predictive analytics have empowered IT capacity planning to extend into day-to-day management at a more granular and forensic level, rather than focusing solely on strategic activity. The following are the four major drivers that have spurred on this evolution:

  • Virtualization – or more importantly – the hypervisor

While allowing multiple virtual workloads to operate on a single physical machine might be expected to make life more difficult, the hypervisor actually simplifies things by reducing the number of information sources that need to be interrogated.

  • Data Automation

When dealing with different system management tools, vendors and formats, consider the number of data points generated. Take a 10,000-server estate over a single 24-hour period, capturing data at 5-minute intervals – this would generate almost 3 million data points. For the information to be useful for predictive analysis, we would recommend at least 30 days’ worth of monitoring data in order to gain worthwhile insight. Without automation it would take an army to schedule the retrieval, aggregation, cleansing, loading and transformation of the data from a number of bespoke sources in a meaningful timeframe.
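To make the scale concrete, the arithmetic behind those figures can be checked in a few lines (the estate size and sampling interval are simply the illustrative numbers used above, and this assumes one data point per server per sample):

```python
# Illustrative arithmetic for the monitoring-data volumes described above.
SERVERS = 10_000
SAMPLE_INTERVAL_MIN = 5
SAMPLES_PER_DAY = (24 * 60) // SAMPLE_INTERVAL_MIN   # 288 samples per server per day

points_per_day = SERVERS * SAMPLES_PER_DAY           # "almost 3 million"
points_30_days = points_per_day * 30                 # the recommended history window

print(f"Data points per day:   {points_per_day:,}")   # 2,880,000
print(f"Data points (30 days): {points_30_days:,}")   # 86,400,000
```

Over the recommended 30-day window the 3 million per day becomes more than 86 million data points – the reason automation, rather than an army, is required.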

  • Big Data

Big Data delivers the ability to store massive amounts of data in a way that makes sense and allows for further manipulation. With associated hardware advances – falling storage costs, greater scalability and more powerful compute – Big Data has become a reality.

  • Predictive Analytics

And finally, analytics provides the ability to churn data in a multitude of ways, using pattern matching and algorithms to surface insight into an organisation’s IT operation that would otherwise go unnoticed – whether that be over-utilisation of resources or an impending shortfall. The analytics available today are essential if IT managers want to keep on top of the complexity and scale of their IT estate. In today’s IT environment, managers need to be confident in their knowledge of their infrastructure, and the various changing demands placed on it, in order to see what’s around the corner and avoid potential incidents.

Zooming in

For IT capacity planning, the unit of currency has reduced from the physical machine to the individual workload, and the timeframe has shortened to provide short-term tactical information while improving our ability to understand and model long-term strategic actions. Changing the relationship between Incident Management and IT Capacity Planning allows you to identify shortfalls in advance, sidestep the avoidable and turn your Incident Management process on its head.


 

This article was contributed by Stuart Higgins, Technical Evangelist at Sumerian.

Analytics Image Credit

Tactics Image Credit

 

Guest Post from TOPdesk: Shift Left (Left) and KCS – Working Towards Better Services

Joost Wapenaar is a Technical Product Consultant at TOPdesk, as well as the Project Manager for the implementation of KCS within the Support department. This is his take on using the shift left principle to empower others.


Background

Our Support department receives around 5,000 calls every month. Our 40-strong group, who catch and resolve these calls as efficiently as they can, were given a rating of 8 out of 10 by our customers. That’s not too bad – still, we’re always looking into how we can make our services smarter, quicker and more scalable. And we think we’ve found an answer.

Providing answers

We asked ourselves how we could improve the availability of existing information to our customers. Every day we find ourselves discussing a solution with a customer that was discussed with a different customer just the day before, or spending time researching a problem that a colleague has already figured out. There had to be a better way.

In the search for a smarter way to share our knowledge, we discovered the principles of ‘Shift Left’ and ‘Shift Left Left’. Behind these two principles is the idea that you can give customers answers to their questions more proactively. ‘Shift Left’ means that skilled technicians make their answers available to less experienced colleagues, so they in turn can help customers using previously recorded solutions. ‘Shift Left Left’ is the next logical step: provide your customers with access to these solutions, and they are able to find the answers to their questions themselves.

Shift Left at TOPdesk Support

We didn’t have to change our methods much in order to start exchanging information between colleagues: being helpful to others is integral to our ideology. And the more knowledge you have, the better you can help someone; for example, we hold knowledge days to facilitate knowledge sharing from the second line to the first line. These days include sessions organised by specialists in which they might share knowledge about authentication, performance or a specific module.

With Shift Left under control, we wanted to take the next step towards Shift Left Left: making our knowledge available to our customers.

Shift Left Left at TOPdesk Support

We answer many of our requests over the phone or by email. Customer satisfaction reports show that this method works well, but it’s not scalable when we’re only sharing knowledge one-on-one. We also have a website with manuals: help.topdesk.com. Although this platform can serve many customers at the same time, it mostly contains generic information about TOPdesk and little about customer-specific situations such as error messages, workarounds, etc. In order to start implementing Shift Left Left, we had to figure out a way to share this type of knowledge with our customers as well.

Knowledge Centred Support

We soon encountered the concept of Knowledge Centred Support (KCS)*. This is a best practice for publishing and managing knowledge – a sort of ITIL for knowledge management – and assumes that the support department fills and manages a knowledge base with items that can be shared with end users. This changes knowledge management from a task done by specific people to a task for every person in the Support department – as a part of solving calls.

Getting started with KCS at TOPdesk

We first created a project plan for the implementation of the KCS method at TOPdesk, and set up a pilot to examine whether the method would help the Support department work more efficiently. Ten of our forty support employees took part. The introduction of the KCS method changed the way the pilot group worked, so we held weekly evaluations to make sure the change was successful, discussing the challenges of KCS and how we could overcome them. The method was continuously adjusted and optimised.

By working as a group and taking on individual challenges together, we successfully managed this change. We also discussed the successes during our evaluations: what is the added value for us as the Support department? What gives us satisfaction and makes us happy? Because we were experiencing the challenges and successes as a team, the pilot was a success not only in numbers but also in process change.

We were also giving weekly updates to our department about the changes within the pilot group and the effect on our work. Sharing the success of KCS with the entire department was essential to give KCS a positive image – and for it to remain so. These weekly updates made the people who were not taking part in the pilot very enthusiastic, and many wanted to join in with the implementation.

How does TOPdesk work with KCS?

When applying the KCS method, we used TOPdesk’s Knowledge Base module. We made a separate branch in the knowledge base to store items created for KCS in a fixed and recognisable place. The knowledge base at TOPdesk Extranet features hundreds of KCS items.

The moment a customer asks our Support department a question, a call is logged. Based on this call, a Support employee can search for relevant items in the knowledge base. When we find an item that answers the question, we add it to the call, and TOPdesk creates a link between the call and the knowledge item. This makes it possible to create selections and reports that provide insight into the way the items are used: which items are used to resolve calls, and which are used most frequently? If an item describes the answer for the most part but is still missing some essential information, we can add this before we share it with the customer; thus items are continuously updated. Are there no items in the knowledge base that can answer the question? Then a new item can be created immediately while processing the customer’s question.
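The reporting described above boils down to counting reuse across call-to-item links. This toy sketch shows the idea; the call and item identifiers are invented, and TOPdesk’s own selections and reports work differently:

```python
# Hypothetical sketch of reuse reporting over call -> knowledge-item links.
# Identifiers and data are invented for illustration only.
from collections import Counter

call_item_links = [
    ("CALL-1001", "KI-042"),   # (call id, linked knowledge item)
    ("CALL-1002", "KI-042"),
    ("CALL-1003", "KI-007"),
    ("CALL-1004", "KI-042"),
]

reuse = Counter(item for _, item in call_item_links)
for item, uses in reuse.most_common():
    print(f"{item}: used to resolve {uses} call(s)")
```

An item that repeatedly resolves calls is exactly the kind of content worth polishing and, under Shift Left Left, publishing for customers to find themselves.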

The results

Since the introduction of the KCS method at TOPdesk Support, we’ve written thousands of items with answers to customer questions. A large number of these items have been re-used many times to answer the same question. Using KCS has also shown us that we are getting more to grips with the Shift Left principle. Knowledge is now centrally stored in our knowledge base, making it available to both first- and second-line operators. Operators with less experience are now able to find answers to the more difficult questions in no time, helping them develop their knowledge more quickly. We have also seen that the average lead time of a call has reduced and fewer calls are escalated to the second line. What’s more, the operators working with the method – in our case support staff – enjoy higher job satisfaction. When they answer a question, they’re not only helping the customer in one go, but they’re also sharing their knowledge with colleagues.

The future

Writing a large number of items shouldn’t be a goal in itself. The final goal is being able to process calls more quickly and give end users the opportunity to find their own answers. During the pilot we saw that the number of new items decreased while the number of calls linked to existing items increased. The availability of our department’s knowledge is now better than ever before.

The pilot results were very positive. Using existing items helped operators process calls more easily and quickly: at the start of the pilot we were resolving 10-15% of calls with information from an existing knowledge source; by the end this was closer to 40-50%. Because of this success, we decided to get the entire Support department working with the KCS method.

If you then want to start working according to Shift Left Left, you need to give your end users access to your knowledge – the ability to search the knowledge base themselves. In this way, customers will increasingly be able to find answers to their questions, and will no longer always have to contact the Support department.

 

* Knowledge Centred Support is a methodology developed by the Consortium for Service Innovation. Everything in this article is an interpretation of this methodology and is in no way presented as the definitive one. All rights belong to the Consortium for Service Innovation, and the methodology can be found on www.serviceinnovation.org.

 

Image Credit

The Holy Trinity of IT Service Management


People, technology and process are the components that make up the IT Service Management triumvirate. Having already identified the technology trends – and in particular how predictive analytics will impact Incident Management – what can we say about the other two members of this very exclusive club?

While process tends to lead the way, it needs people to champion it, and technology to support it. Technology, in the grand scheme of things, tends to be the easiest part to implement as long as it exists and is fit for purpose.

Low level detection

The ability to detect and avoid incidents isn’t something covered in the ITIL manual. We could spin it as something to do with Continual Service Improvement, but activities in this area tend to be run on a project basis – in effect, they are more likely to be elements of a change programme.

So what can be done when dealing with information relating to the future at such a granular level on a daily basis? The simplest thing would be to treat predictive events as actual incidents, pop them into a team’s queue and let them deal with them alongside everything else.

But what priority should they be given? A predicted incident can’t be high priority: nothing is broken, and nobody is screaming. On the other hand, if predicted incidents are treated as low priority, the issue may never be dealt with in a timeframe that permits it to be avoided. Medium, then? Perhaps not: if the resolution requires additional spend, you need to conform to a purchasing timeframe, and once again the benefit of being able to avoid a failure may be lost.

The answer, unsurprisingly, is that it depends. It will depend on the organisation and how mature its processes are, how stable its services are, and its attitude to risk.
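One way to make “it depends” concrete is to compare the predicted time until impact with the lead time needed to fix it, purchasing delays included. The function below is a purely illustrative policy sketch – the function name and thresholds are invented, and any real values would be tuned to your organisation’s risk appetite and process maturity:

```python
# A toy prioritisation policy for predicted incidents, for illustration only.
# Thresholds are invented; tune them to your organisation's risk appetite.

def predicted_incident_priority(days_to_impact: float,
                                fix_lead_time_days: float) -> str:
    """Priority based on slack between predicted impact and time to fix."""
    slack = days_to_impact - fix_lead_time_days
    if slack <= 0:
        return "high"      # cannot be fixed in time at the normal pace
    if slack <= 5:
        return "medium"    # avoidable only if work starts promptly
    return "low"           # plenty of runway; schedule the work

# Failure predicted in 21 days, but procurement alone takes 30:
print(predicted_incident_priority(days_to_impact=21, fix_lead_time_days=30))  # high
# Same prediction with an 18-day fix:
print(predicted_incident_priority(days_to_impact=21, fix_lead_time_days=18))  # medium
```

The point of the sketch is that a predicted incident’s urgency comes from the shrinking window to act, not from current user pain – which is exactly why the standard broken/not-broken priority matrix doesn’t fit.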

A stitch in time?

How many organisations will zealously fund proactive remedial work? Securing the budget to keep things current and supported is difficult and at times impossible. I’m sure every organisation has a server somewhere that has effectively been shrink-wrapped: it is no longer supportable and needs to be protected from change as much as possible, because the service it supports provides good, perhaps even essential, value to the business.

It is unlikely that an IT department will be given a blank cheque book to respond to predicted events. Does this mean that things will knowingly be left to fail?

Therein lies another people aspect. How are IT Service Management staff rewarded?

Fire fighter or keeper of the peace?

If services operate without issue the IT department becomes the focus of cost cutting.

If, on the other hand, systems fail, all thanks are given to those who worked tirelessly through the night, surviving only on pizza and vending-machine coffee. Like it or lump it, the reality is that in these scenarios, those who are seen to be doing are those who progress.

IT has a very real culture of martyrdom embedded within it that will be difficult to change.

Of course there will still be unexpected incidents that can’t be predicted, but in a world where we can now identify and avoid incidents there needs to be a balance that encourages and rewards the proactive as much as the reactive.

Different thinking is needed together with a different reward structure. Pavlov discovered long ago that you have to reward the behaviours you want.

Are your service team keeping the peace or fighting fires? I’d suggest you want people calmly going about their business to let business go about business. Avoiding the avoidable helps them to do just that.


 

This article was contributed by Stuart Higgins, Technical Evangelist at Sumerian

 

Image credit

ITIL Practitioner: Thoughts on the experience so far

Does ITIL Practitioner live up to the hype? Having followed the teasers, blog posts, promotional videos and (some of the) discussions during 2015, I find the question difficult to answer, even after reading the book and taking the certification exam.

In fact, it might be stretching it to call it a hype, even though the press release stated that it was the “most significant evolution in the ITIL best practice framework since the launch of AXELOS”, and The IT Sceptic stated that even he “might even consider doing it”. A Google search on “ITIL Practitioner” today gives me slightly more than 90,000 hits, which is significantly fewer than “ITIL Foundation” (580,000 hits) and even “ITIL Expert” (350,000 hits). Compared to obvious IT hypes like “Big data” (54 million hits), ITIL Practitioner appears to be hardly noticeable. Nevertheless, my expectations were pretty high by the time I got my hands on some actual reading material.

In my experience, the syllabus is usually a good starting point for familiarizing oneself with new certifications, and this was no exception. The six learning objectives all began with “Be able to [do something]”, which did nothing to lower my expectations, and the assessment criteria filled them out very nicely. Time, then, to dig into the book.

Reading the book

Instant gratification!

Being impatient, I skipped the foreword and the presentation of the (quite impressive) team, and went straight for the good stuff. The introduction left me yearning for more. I loved the simple language, the examples and the down-to-earth approach. The description of a service strikes me as better than any I have seen in other ITIL books. If I could hand out the first chapter to ITIL Foundation course participants, I would. It really sums up the essence of ITSM in an easily understandable and well-formulated way.

If I should put my finger on anything in the introduction, it would be the presentation of efficiency and effectiveness as concepts. Personally, I would have preferred a little more focus and weight on effectiveness. Efficiency is running fast, effectiveness is choosing the smartest and quickest route. “Doing the right thing” should come before “doing the thing right”, or else you very quickly end up doing the wrong things very efficiently.

As I read on, the book continued to impress me. The easy language, the good examples, the references to other frameworks and methods, it all contributed to the overall great impression.

The guiding principles were very good, easy to follow and to agree with, and I especially liked the emphasis that they are not unique to ITIL or ITSM. I would have liked to see more on the interfaces between them, and how they interact with one another, but then again, they are guiding principles, not directing processes.

The many references to the Toolkit left me in an ambiguous state of mind. On the one hand, it was great to get tips on templates and tools, especially because they were placed close to descriptions of the activities they are meant to support. On the other hand, they had a tendency to break my concentration and flow, because I felt I had to look them up immediately. I guess I’ll be less distracted by this the next time I read the book. The Toolkit itself was a great resource, with ample information and references.

In fact, references to other frameworks and methods such as Lean, Kanban, Scrum and agile were abundant throughout the book. Several pages at the end of chapter 7 were dedicated to describing these and others. I found it very refreshing and appropriate, very much in line with my expectations. The only thing that gave me pause is that I found no mention of Kepner-Tregoe, which, in my opinion, would be a very relevant and useful tool for several topics.

Overall, I was very satisfied with the book, as I am sure most others will be too.

Passing the certification test

As with other comparable certifications, my exam preparations consisted mainly of working with the two sample exams I had at hand. The format was recognizable, with scenarios and multiple-choice questions. Having taken a fair share of such exams, I entered into the task with my usual enthusiasm and optimism, both of which were soon put to the test.

ITIL Practitioner operates at Bloom’s levels 3 and 4, the same as the nine intermediate ITIL exams. Thus, the questions should test the candidates’ ability to effectively apply concepts, principles, methods and new information to concrete situations (level 3), and to analyze situations, identify reasons and causes, and reach conclusions (level 4).

In my experience, both the mock exams and the actual certification test fall somewhat short of achieving this. I like scenario-based tests; they feel more realistic and appropriate, but you need a certain amount of detail to make them work. The intermediate exams handle this by limiting the number of questions, giving the candidate more time per question to handle the amount of information given, as well as by using gradient-style multiple choice.

The Practitioner exam is sort of a blend of the Foundation and Intermediate types of exam, and ends up being a hybrid: more than Foundation, but not quite Intermediate. While this fits well with its announced placement in the ITIL hierarchy, I still feel that the test uses Bloom’s levels 2 and 3 types of question to test levels 3 and 4 types of knowledge.

In summary, I think some of the questions are too open to interpretation, thus leaving the rationale open to doubt.

In fact, while I can agree with most of the answers and explanations in the rationale, I flat out disagree with a few of them. In my opinion, the rationales are the weakest part of the Practitioner experience so far, and I hope to see revised versions soon. Disagreeing with the rationale does not instill confidence before taking the actual certification test.

As for the test itself, the usual advice applies: read and understand all the text, use the book actively, and answer all questions. I am also looking forward to seeing some statistics on the pass rate.

In conclusion

So, does ITIL Practitioner live up to the hype? As mentioned, I don’t really think it is a hype yet, so I’ll leave that particular question unanswered.

Does it meet my expectations so far? I’m inclined to say yes. The important part, the book, is definitely worth the read, and that really is what matters most.


 

This article was contributed by Kristian Spilhaug of Sopra Steria. Kristian is a Norwegian instructor and senior consultant, delivering ITIL, PRINCE2 and Kepner-Tregoe courses and advice. He usually denies the “senior” part, as there is still tons of stuff to learn. He really enjoys delivering courses, though. You can connect with him on LinkedIn.

 

Image credit