Self-service should not be an afterthought in cloud projects; it is the beginning of the journey. Self-service drives the standardization of offerings and reduces the labor costs that arise from designing, specifying, ordering, procuring, and configuring computing, storage, and network resources on a custom basis.
This standardization and automation also applies to application components, security and network services such as LDAP, DNS, load balancers, etc. I say "drives" because the moment an organization decides to provide infrastructure on demand, three questions arise that are at the heart of the beginning of the cloud journey:
- Who are my customers?
- What are my offerings?
- What are my existing delivery processes?
Like any other business, the questions of whom we serve, what they want to buy, and how we deliver lead to many other questions. These questions are the beginning of the journey to the cloud operating model.
But, haven't we answered these questions before when we built our ITSM catalog?
Answer: NOT with the rigor required by a self-service and automation program.
Once we decide to offer a cloud service, these questions need to be answered with BRUTAL and RIGOROUS specificity so the answers can be codified into self-service choices. Until the decision to deliver cloud services, these "customer" catalogs are often vague, warmed-over recapitulations of some well-intentioned static "service catalog."
In my experience, very little of that prior work is usable when confronted with the needs of the cloud project. I'm loath to say it's useless; sometimes the team that built the prior catalog is in the best position to understand its shortcomings and can be a good contributor to the cloud project.
Getting back on point: once the decision is made to offer infrastructure as a service to the user community, the cloud team faces three tasks.
First, define the customer as a consumer of a set of offerings that address some activity the customer needs to carry out. For example, "test a patch to the scheduling system." The team needs to figure out the proper metaphors, abstractions, rules for configuration, rules for consumption, and the choices allowed to the customer. And all of this needs to be in the language and domain of competence of the customer.
This is hard for us technical people. We like lots of choices, and we understand things our users won't. The exercise of understanding the customer requires a lot of trial and error.
Sometimes I see "cloud interfaces" that are really admin consoles for configuring compute, network, and storage resources. These UIs are targeted at a unicorn customer who doesn't exist: developers, the real target of cloud, are usually not network experts, and beyond disk space they don't know much about storage.
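To make the first task concrete, here is a minimal sketch of what "choices in the language of the customer" might look like. All names here are hypothetical, not from any real catalog: the customer picks a purpose, a t-shirt size, and a lifetime; VLANs, LUNs, and IOPS never appear in the interface because policy decides them.

```python
from dataclasses import dataclass

# Hypothetical sketch: a standardized offering expressed in the
# customer's language, not the admin's. The menu is deliberately short.
SIZES = {"small": (2, 4), "medium": (4, 8)}  # name -> (vCPUs, GB RAM)

@dataclass(frozen=True)
class EnvironmentRequest:
    purpose: str        # e.g. "test a patch to the scheduling system"
    size: str           # one of SIZES; no raw CPU/RAM/storage knobs
    lifetime_days: int  # environment is auto-reclaimed afterwards

    def __post_init__(self):
        # Constrained choices are what make self-service automatable.
        if self.size not in SIZES:
            raise ValueError(f"size must be one of {sorted(SIZES)}")
        if not 1 <= self.lifetime_days <= 30:
            raise ValueError("lifetime_days must be between 1 and 30")

req = EnvironmentRequest("test scheduling patch", "small", 7)
```

The point of the sketch is the validation: every choice not offered is a decision the delivery process no longer has to make.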
Second, the cloud team needs to break down the end-to-end service delivery process into its component processes, specifying all the hand-offs and activities, the tools and systems used, and the data required both to execute activities and to make decisions.
This is where standardized offerings become the difference between success and abject failure as they simplify decisions and data gathering.
If every car is a Model T, then manufacturing, supply chain, capacity management, procurement, and planning are all much easier. If you have a large variety of options, it's much harder. Start small. Amazon Web Services' first compute offering was a small Linux instance. That's it. The original "m1.small" is the Model T of the cloud era.
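The decomposition in the second task can be sketched as a plain inventory of steps, each with its owner, tool, and required data. The step names and tools below are hypothetical, chosen to show how manual hand-offs surface once you write the process down:

```python
# Hypothetical sketch: inventory each step of the delivery process.
# Gaps become visible when the "tool" is an inbox or a spreadsheet.
steps = [
    # (activity,          owner,            tool,             data required)
    ("validate request",  "portal",         "catalog API",    "offering id, quota"),
    ("allocate IP",       "network",        "spreadsheet",    "subnet, hostname"),
    ("provision VM",      "virtualization", "hypervisor API", "size, image"),
    ("register in CMDB",  "ops",            "email",          "owner, cost center"),
]

# Flag every step that still depends on a manual tool.
manual_gaps = [activity for activity, _, tool, _ in steps
               if tool in ("spreadsheet", "email")]
print(manual_gaps)  # ['allocate IP', 'register in CMDB']
```

Each flagged step is a candidate for either automation or elimination through standardization.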
Third, a good gap analysis and coverage plan is needed. What we tend to find at this stage of cloud design is a litany of gaps: rules in people's heads, governance rules that get in the way (hello, CAB!), and existing systems built to respond in days and weeks rather than minutes and seconds.
There are also missing systems. Sources of record are unstructured (a Word document or a wiki) rather than a database or a structured data model. The few tools that do exist lack APIs, were built for a single tenant, do not enforce role-based access control, or were not designed for consumer use.
Good process design needs to inventory these system and data gaps.
For example, take the task "Assign IP address." Today, Jason the network admin gets an e-mail, opens his spreadsheet, and sends an address to someone who then assigns it to the virtual machine. Now we need to enable the user to get an IP address on a self-service basis. So no Jason, no spreadsheet, no manual steps. But we do need to say yes to an IP address manager, a portlet, a lifecycle manager, and a consumption-and-reclaim process.
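What replaces Jason's spreadsheet can be as small as this sketch: an in-memory IP address manager that allocates on demand and reclaims on release. This is illustrative only; a production IPAM would also need persistence, multi-tenancy, and role-based access control, as noted above.

```python
import ipaddress

# Hypothetical sketch: the spreadsheet replaced by an allocate/release API.
class IPAddressManager:
    def __init__(self, cidr: str):
        net = ipaddress.ip_network(cidr)
        self._free = [str(ip) for ip in net.hosts()]  # usable host addresses
        self._assigned = {}                           # ip -> vm_id

    def allocate(self, vm_id: str) -> str:
        """Hand out the next free address; no e-mail, no manual step."""
        if not self._free:
            raise RuntimeError("subnet exhausted")
        ip = self._free.pop(0)
        self._assigned[ip] = vm_id
        return ip

    def release(self, ip: str) -> None:
        """Reclaim the address so it can be consumed again."""
        if self._assigned.pop(ip, None) is not None:
            self._free.append(ip)

ipam = IPAddressManager("10.0.0.0/29")
ip = ipam.allocate("vm-42")
print(ip)  # 10.0.0.1
```

Note that the reclaim path is part of the design from the start; without it, self-service consumption exhausts the pool.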
And this is one example. If you don't set simple standards upfront, the number of rules and interactions reaches a complexity where the system just doesn't work and becomes a maintenance nightmare.
Sounds daunting, doesn't it? Well, it is, if the cloud team doesn't do the proper gap analysis upfront. When it does, it's just a project like any other.