You Debate. We’ll Innovate.

In his Government Computer News column, Mike Daconta passionately argues that the NIST definition of cloud computing, with its complexity, is cause for government IT managers to stay the course with business as usual; intimating that you should buy hardware, software, people, property, plant and repeat every 36 months. Your budget cycles and key performance indicators are aligned with this legacy, entrenched model. It worked for a few decades and is well cared for with $80B+ worth of budget dollars, but it is not without its own complex challenges. The legacy model gave us second-order effects like misplacing 1,000 data centers, a weakened cyber posture and application constructs designed to encourage vendor lock-in. I agree it would be easy to stay the course, but I strongly disagree that the definition is a compelling enough cause not to innovate.

The good people at NIST, GSA, Leaf, McClure, et al. have been leading the charge on this definition work while driving innovation in their agencies and in the sector writ large. It is absolutely possible, and absolutely needed, to continue to refine definitions, but not at the expense of deploying immediately needed innovation. These folks should be lauded, not harassed.

Had they waited for the debate-society work to culminate, innovative projects like Army Private Cloud and Apps.Gov never would have been started. APC is going to save millions in costs, improve the operating picture for warfighters and do it in a more secure fashion. Apps.Gov offers untold levels of transparency and easier procurement models.

It is the promise of Cloud Computing which is being realized today by these innovators and projects.

Don’t we owe it to our constituencies:  warfighters, authorizers, civilian services, taxpayers … to refine and innovate?

You know what’s really cool? An Exabyte.

One of the more interesting (and challenging) parts of working in the cloud storage sector is the sheer volume of data that organizations are attempting to manage. From the first RAMAC in the late 1950s to contemporary 4TB spindles, there has been outrageous growth both at the individual drive level and in aggregate counts across your organizations. Just a few short years ago it was uncommon for all but the most data-intensive companies and government services to exceed 1PB of capacity under management. Today that is about one chassis of capacity in the densest of configurations. This week alone I spoke with six organizations that each have 100+ petabytes of capacity under management with a YoY growth rate approaching 50%. Back of the napkin, that means in roughly 24 business quarters these very well run organizations will each have roughly an exabyte under management.
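The back-of-the-napkin math above is easy to check; the sketch below uses the post's own figures (100 PB today, 50% YoY growth, target of 1 EB = 1,000 PB):

```python
import math

start_pb = 100.0    # petabytes under management today
target_pb = 1000.0  # one exabyte, expressed in petabytes
yoy_growth = 0.50   # 50% year-over-year growth rate

# convert the annual growth rate to a per-quarter multiplier
quarterly_factor = (1 + yoy_growth) ** (1 / 4)

# quarters needed for capacity to grow 10x (100 PB -> 1 EB)
quarters = math.ceil(math.log(target_pb / start_pb) / math.log(quarterly_factor))
print(quarters)  # 23 quarters, i.e. roughly six years
```

Which lands right on the "roughly 24 business quarters" estimate.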

Where is the data growth and volume coming from?

Unstructured content: files, blobs, rich media and consumer-generated content (digital images).

In rough approximation, across the organizations I saw this week, the split looks something like this:


Unstructured tiers have eclipsed the combined quantity of DB, Messaging and Backup.

This offers an amazing opportunity for you to control costs and decouple the administrative burden from the growth curve.

Cloud-based platforms (on- and off-premise) are purposefully designed for these data scales and offer a cost model better suited for unstructured content. When evaluating platforms or applications that benefit from the approach, focus on four key themes: Metadata, Multi-Tenancy, Metering and Multi-Site. If your application stack passes through these screens, you should be evaluating cloud storage architectures (and business models) to help you on the path to an exabyte under management.
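A minimal sketch of how the four screens might be applied as a yes/no checklist; the question wording is illustrative, not a formal framework:

```python
# The four themes named above, each phrased as a screening question.
# Criteria wording is a hypothetical illustration.
SCREENS = {
    "metadata":      "Does the app attach rich, queryable metadata to each object?",
    "multi_tenancy": "Can tenants be isolated with separate namespaces and policies?",
    "metering":      "Can capacity and requests be measured per tenant for chargeback?",
    "multi_site":    "Can content replicate across sites for locality and DR?",
}

def passes_screens(answers: dict) -> bool:
    """An application qualifies for a cloud storage architecture
    only if it clears all four screens."""
    return all(answers.get(theme, False) for theme in SCREENS)

example = {"metadata": True, "multi_tenancy": True,
           "metering": True, "multi_site": True}
print(passes_screens(example))  # True
```

An application that fails even one screen is probably better served by a conventional array until its stack catches up.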

Persistent Reservations

Some standing reservations have immeasurable value. Date night with your spouse. Game day with your kids. If you travel frequently you may have a favorite airplane seat or favorite car rental type that you consistently try to reserve. Maybe even a standing lunch or dinner reservation. These types of reservations are beneficial for family, social and business relationships.


Persistent Reservations in the cloud computing context offer limited utility. They go by a handful of names: Reserved Instances, Reserved Pools, Standby Capacity, etc. The names all describe a portion of your aggregate resource pool that sits in a warm-idle state, essentially waiting for you to have a defined use. It could be for burst/surge capacity, for COOP/DR, for a periodic batch job or even as part of a QA cycle. We all have these types of workloads, but if you are leveraging resource reservations to tackle them, consider the alternative: building your application to scale and sizing the workload. Given the austere condition of budgets, resources without a defined purpose need to go.

Supply-side vendors love, love this model because it allows them to monetize a portion of their infrastructure that would otherwise be without commitment. Their expense line to manage the reservations is some small number over zero, and the likelihood of you calling on their use is some other number slightly over zero. I think of it almost like one of those insurance premiums you do not really need but continue buying on a yearly basis because it was always included in your binder, even though you never knew what it was.

Consider having an open dialogue with your procurement partners about why you have the resource reservations and how often they have been leveraged in the previous calendar year. My guess is not very often. The inspection, I think, will lead you down one of two paths: either the application that leverages the persistent reservation was not sized correctly, or there was a rogue IT project that claimed use of the resource.

Claim it back. Work on sizing the application properly. Tools to help in this regard are plentiful.
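The inspection itself is simple enough to sketch: given utilization samples for a reserved pool over the last year, how often was it actually busy? The 10% busy threshold and the sample data below are illustrative assumptions:

```python
# A minimal sketch of the reservation inspection described above;
# threshold and sample data are hypothetical.
def reservation_usage(samples, busy_threshold=0.10):
    """Fraction of sampled hours in which the reserved pool saw
    utilization above `busy_threshold` (10% by default)."""
    busy = sum(1 for u in samples if u > busy_threshold)
    return busy / len(samples)

# e.g. a year of hourly samples where the pool sat idle 95% of the time
samples = [0.0] * 8322 + [0.6] * 438   # 8760 hours total
rate = reservation_usage(samples)
print(f"{rate:.0%}")  # 5% -> a strong candidate to reclaim
```

A pool that clears the threshold only a few percent of the year is exactly the kind of resource without a defined purpose that needs to go.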

That is Anthony Bourdain saying "No Reservations"... No, I do not know why he is covered in mud looking like a zombie. I just happen to like the show and tried to get a reservation at his place in NYC recently.

Current Top Ten FAQs


I’ve been traveling a bit recently and wanted to share the top 10 questions I’m asked most frequently right now by IT leadership and executive professionals evaluating cloud investments in 2012.

1. Should I build my own or rent one from someone else?

Your teams are already doing it without your knowledge. Formalize an initiative and do both by building a hybrid model. And do not penalize the early movers; figure out how to empower them while moving your data sets back inside your governance model. Inspect your T&E reports for consistent spend on Amex cards to cloud providers if you have any doubt about the usage.

2. It's hype, right? I lived through time-sharing, the xSP blowup and invested in an IPO for tall-building "riser" cross-connects. This is the same thing, right?

You can't be faulted for being cautious, but it's different this time. The tools are better. The access is better. And the return can be measured.

3. My equipment refresh cycle doesn't start for 24 months, and I do not have capital dollars to commit. What should I do?

Scrape together enough to fund a pilot, hire a new numbers person, tell your board you need a plus-up, or update your resume and consider a consulting gig. Someone else will figure out how to fund it.

4. Risk, compliance and audit haven't approved the model.

Build one internally, get your accreditation and certification, and teach them why it's advantageous.

5. I can’t measure my risk profile today.

Yep, I get it. See #4. Consolidate, build a higher wall and defend deeper. You think your cyber posture is scary now? You don't even know about the folks who have been thinking about offensive cyber in the cloud for almost five years now.

6. It doesn't make sense for my Tier 1 applications, where I would get the biggest return. What should I do?

It does, but don't start there. Grab a business partner who has a Tier 2/3 app and make a deal. Try an approach like this: "Let's implement the new model, and if you don't like it we'll go old school. But when it works, I want you on YouTube and at the leadership meeting touting the advantages." No one will say no.

7. Cloud doesn't work with my existing authorization/authentication, two-factor, CAC, etc.

It does. You are asking the wrong questions; you should be considering how it can improve upon your audit trail and offer a better concept of operations with continuous monitoring.

8. My boss, our authorizers, my team and I all have iPads and mobile devices, and I would like to make our data sets mobile-enabled, but we have a very well-respected consultant who says it will take two years and another 24 months of retainers and studies to implement.

If I billed hourly, I would tell you the same. Check out my iPad: 60 days flat from whiteboard idea to rollout, securely and privately.

9. What's the method to measure return?

Nothing clever, just a bit of elbow work with your own numbers. Cloud investments are typically accretive inside of your existing hurdle rates and thresholds.
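One back-of-the-envelope way to do that elbow work is a simple NPV check against your hurdle rate; the 12% rate and the cash flows below are illustrative assumptions, not guidance:

```python
# Discount yearly cash flows at the hurdle rate; figures are hypothetical.
def npv(rate, cashflows):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

hurdle = 0.12
# year 0: migration cost; years 1-3: hosting savings vs. a refresh
flows = [-250_000, 120_000, 140_000, 160_000]
value = npv(hurdle, flows)
print(round(value))  # positive -> accretive at the hurdle rate
```

If the NPV is positive at your existing hurdle rate, the investment clears the same bar as any other capital project, with no cloud-specific math required.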

10. I believe you, but after exhaustive review there is no capital and no people. I still want to do this, and I want to offer it as a service to my constituency.

No sweat. If you are really interested in doing it, we can rent you the balance sheet, gear and people. It takes about 90 days start to finish to do it the right way.


Who will supply your cloud?

One pipe. Many uses.

One of the questions that CIOs, IT leadership and Accreditation Authorities are dealing with is how to blend the economic benefits of cloud infrastructures with the real and perceived security challenges presented by this new service delivery method.

When faced with inflection points like this, I've often found that companies will reach out to their top two or three strategic technology suppliers to kick around ideas, learn from best practices and look for creative ways to fund pilot projects.

If your networking service provider, hosting partner and wireline carrier are not on the list of initial partners you consult for advice, it's a position I think you may want to reconsider.

Here's why:

As it was then, is now and ever shall be: the last mile continues to matter.

But not for the reason you might suspect. I offer that the security implications of the last mile, more than the contemporary performance arguments, are a significant motivating factor for you to consider. And who better to help on your cloud journey than the service provider community you already draw on for network support?

You see, for many of the large end-user-facing clouds, data is transported outside of your firewall and across the public network. Depending upon your mission type, this transport method might be a complete non-starter. Cleartext and public transport are not things your IA folks like to hear.

Enter: the power of the cross-connect.

Your networking service provider, hosting partner and wireline carriers, with whom you already have a trusted relationship for ping, power and pipe, can easily extend your private network into a cloud infrastructure in a low-friction, highly secure manner.


It's good for you because it can deliver results immediately; good for your organization because procurement can leverage the volume of many different contracting vehicles; and good for the provider because it allows them to monetize additional traffic on their (very expensive to build) network plants.

I've shared this previously in a brief post last year and helped to build a handy list of providers you'll want to consider speaking with here.

Which of your networks would benefit from this approach?

The Cloud is Dead.

The Cloud is dead. Well, version 1 is.

Long Live the Cloud!

I was asked to give some remarks this week about innovation and ideation, to characterize the past four years spent working with cloud infrastructures, and to share some thoughts on the next development cycle.

Travel back in time for a minute with me; May 2007.

The DOW was at 14,000, final preparations were being made for the VMW IPO, your house was probably worth a bit more and there were only a handful of funded teams working on cloud projects in the Americas.  The first public cloud was about a year old and no one knew what a private cloud was. 800-146 was in its first draft for comments; Merrill Lynch (still an independent company) wouldn’t release its report calling cloud a $100B opportunity for another year. The first Federal CIO appointment was still 2 years away.

The industry was a few dozen folks on the West Coast and a few dozen folks on the East Coast with fundamentally different views on how the burgeoning technologies centric to disk, compute and network were going to be adopted. Reflection 1: both teams were right. Metadata matters, hardware abstraction through virtualization was to be a key enabler, and end users needed to be able to self-provision resource pools.

As so often happens, competitive positioning and marketeering were, for a brief period, more mature than the initial platform deliverables from the nascent development community. Large development organizations got serious about adding to their talent pool, and unique IP was created along with multi-year roadmaps to justify advancing the spend columns. Reflection 2: limit version 1 deliverables. Pay attention when well-managed open source projects start competing with your big idea.

Early adopter customers began lining up to bring technologies into their wet labs to prove out enterprise capabilities and to help start delivering Infrastructure as a Service. Reflection 3: early adopters are an absolute key to success. Nurture them. Walk over coals for them. Take them to lunch, the bar, the ballgame. Send their spouses flowers. Own the failures and let them take positive credit for favorable outcomes.

Usage cases are a funny thing. One builds a technology for a discrete, validated market, and another set of end users (if you are lucky) will adopt it for challenges you never considered. Some will appear crazy, some visionary. Reflection 4: listen to the crazies. They're smarter than you and really only want to help you out.

What are the crazies asking for right now? Version 2 and 3 capabilities.

  1. DevOps-level instrumentation and reporting.
  2. Platform portability.
  3. More robust rule engines for provisioning, ILM and distribution.
  4. Commercial management tools that work with open source platform derivatives.


People who write software write bugs. Anyone who has been a producer or consumer of technology knows this maxim to be true. It's how you deal with these exceptions, through redundancy, fault vectors, recovery scenarios and managed customer experience, that is important.

Today, there are many organizations impacted by the outage of a contemporary public cloud provider. Marketing teams around the globe are gearing up with subtle barbs, whispered talking points and placed opinion pieces with disparaging tones. I find the approach (and the outage) a bit unfortunate, because I believe it has the potential to pause the pace of innovation by organizations who leverage cloud infrastructure to serve their ultimate end users.

With any radical paradigm shift there are going to be bumps along the way. In my experience, it's how you deal with them and engender customer support that people remember. What I do hope comes of this is a spirited debate about the merits of cloud architecture, which applications can benefit from its use, and what risk profile and governance are required for continued operation.
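One of those recovery scenarios can be sketched in a few lines: retry a flaky provider call with exponential backoff, then fail over to a secondary. The function names are hypothetical, not any provider's API:

```python
import time

# A minimal sketch of retry-with-backoff plus failover; names and
# parameters are illustrative assumptions.
def call_with_failover(primary, secondary, retries=3, base_delay=0.1):
    """Try `primary` with exponential backoff; fall back to
    `secondary` once retries are exhausted."""
    for attempt in range(retries):
        try:
            return primary()
        except Exception:
            time.sleep(base_delay * (2 ** attempt))
    return secondary()

# usage: a primary that always fails falls through to the secondary
def flaky():
    raise RuntimeError("region outage")

print(call_with_failover(flaky, lambda: "served from secondary",
                         base_delay=0.01))  # -> served from secondary
```

No amount of client-side cleverness replaces a provider's own redundancy, but patterns like this keep one region's bad day from becoming your end users' bad day.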

