The Cloud: A View from Above – Private Cloud and the Hybrid Evolution

As if the term “Cloud Computing” weren’t already severely overloaded, the terms beneath its umbrella are even more so, and many are widely misunderstood.

In this post (and in others in this series), I’m going to try to clarify a few of these definitions and the subtle differences in how they are used. I’ll cover the Private Cloud and the Hybrid Cloud (links go to Wikipedia, but my thoughts follow):

Private Cloud: In short, a Private Cloud is a cloud where data access is restricted to specific users, typically within the same organization (or company) and behind a corporate firewall. Beyond the basic advantages of Cloud Computing (reduced IT infrastructure cost and management, “always up” availability, increased business and IT agility), there are several business reasons for keeping data in a Private Cloud:

  • Your applications store customer data containing Personally-Identifiable Information (PII), which could incur legal or financial risks if compromised.
  • Your application manages e-commerce transactions, credit card numbers, shipping addresses, etc.
  • You store corporate-owned, sensitive, mission-critical or proprietary data.

In these (and in many other) cases, the knee-jerk reaction of IT and Business is to keep these applications and data on-premises, safe behind the corporate firewall. In some organizations it may be difficult to argue against this mindset, but there are alternatives that enable businesses to enjoy the basic benefits of Cloud Computing in a secure manner.

A Private Cloud typically begins life as an application or set of services deployed in an on-premises data center. Access to the data (Authentication, Authorization and Accounting, commonly known as AAA) is clearly defined and controlled by local IT resources. On-premises users reach the application over the LAN; external users access it securely from outside over IPsec-based VPN connections.

Now, with proper security (AAA over secure IP protocols, as noted above), a Private Cloud can exist in a vendor data center, provided the organization applies the same security protocols and IT controls it would use for an on-premises deployment. The rub? Well, read the news (link to a Bing search for the latest .. there’s always more). Suffice it to say: many enterprises want absolute assurances that data held away from their premises will be secure.

That said, it’s not that simple. Beyond advanced and highly-controlled access security, there are a few other bits and pieces that a hosted Private Cloud (one that is hosted at a vendor data center) would need to navigate:

  • Privacy: Monitoring, monitoring, monitoring. No, not performance monitoring. The monitoring to which I refer applies to communications in and out of a Private Cloud, in the spirit of the widely-discussed “NSA has massive database of Americans’ Phone Calls” story (link to USA Today) that broke a while back. Maybe the data itself isn’t directly accessible, but inferences about how the data is being used can still be captured. This isn’t just a Cloud issue, by the way; vendors and enterprises will face these challenges whether hosted or on-premises.
  • Compliance: contractual and financial assurances (read: protections and remedies) that can be activated should a vendor fail to protect the data using recognized practices and protocols. Note: this requirement brings with it a handy-dandy audit cycle that a vendor must also navigate.
  • Legal durability: last I checked, a subpoena is durable (a court order for information that stands up nicely in the courtroom) should a governing body (State or Federal) “request” (quotes are mine) data from a non-enterprise-owned data center. A vendor would surrender the data without many questions; an enterprise would consider its options and consult in-house counsel before releasing data.

This is why enterprises tend to run scared of deploying content in a non-enterprise-owned data center. Can you blame them? Before we find ourselves in the courtroom, let’s discuss for a bit. The logical evolution is not necessarily to avoid hosted Private Clouds, but to evaluate the content stored in on- and off-premises data centers. In this exercise, an Enterprise will identify the types of data it holds, including sensitive data (this is a short list):

  • Static public content (easily hosted in CDNs worldwide .. icons, static “about” pages, legal pages, etc.).
  • Some dynamic content that needs to be available to the public (and therefore, will need to scale, or be redirected to public, scalable resources) .. calendar- or location-based query results, catalogs or pricing data (updated via business rules), and so on.
  • Other dynamic content that needs to be held securely, and exposed only when there is a relevant need. This can include PII, credit card numbers, customer status, and much more. In fact, some of these data need not be exposed at all; rather, secure queries to an internal system can yield responses that let the application get what it needs without viewing the actual data (querying whether a token tied to a credit card account has sufficient balance, or confirming a shipping address via an encrypted form post); see the sketch after this list.
  • Mission-critical data that has explicitly-defined audiences and uses.
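
To make that token-query idea concrete, here is a minimal sketch; the internal “vault” and its contents are hypothetical stand-ins for whatever secure on-premises system actually holds the card data:

```python
# Minimal sketch: ask an internal vault about a token instead of exposing card data.
# The vault and its contents are hypothetical; in practice this would be a secured
# call (VPN / IPsec, mutual TLS) into an on-premises system.

INTERNAL_VAULT = {
    "tok_7f3a9c": {"available_credit": 250.00, "shipping_verified": True},
}

def can_authorize(token: str, amount: float) -> bool:
    """Return True if the card behind `token` can cover `amount`.
    The calling application never sees the card number itself."""
    record = INTERNAL_VAULT.get(token)
    return record is not None and record["available_credit"] >= amount

if __name__ == "__main__":
    print(can_authorize("tok_7f3a9c", 99.95))   # True
    print(can_authorize("tok_7f3a9c", 999.95))  # False
```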

Setting aside the publicly-available data (the first two bullets, above), several questions arise for Enterprises regarding sensitive data:

  • Are there ways that an Enterprise can protect sensitive data in an Internet paradigm?
  • How should an Enterprise control access to sensitive data by authorized entities?
  • How can an Enterprise protect sensitive and mission-critical data?

In this post, I am not proposing the answers. Not yet, anyway. I am, however, suggesting steps an Enterprise should take. For starters:

  • Perform an analysis and inventory of systems, audiences and security requirements.
  • Prioritize systems based on business need and expected life; consider replacing, rewriting or redirecting system assets based on audiences, expected life and other factors.
  • Create a project plan with clear (and widely-publicized) milestones so the enterprise is aware of progress and potential impacts to system availability.

In this exercise, you will discover that your enterprise is describing an evolution: establishing secure access to assets residing in a local data center or in a Private Cloud. The analysis will further suggest that certain assets be addressed in another logical paradigm: the Hybrid Cloud. So, let’s talk about the Hybrid Cloud. My thoughts follow:

Hybrid Cloud: Loosely stated, a Hybrid Cloud consists of data and services held in on- and off-premises facilities, with access to sensitive data secured by VPN and IPsec protocols. Consider a company that stores customer address data in its local data center, under the physical control of Enterprise IT. IT enables access FROM public resources (catalog and shipping sites) via secure protocols.
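
A rough sketch of that pattern follows; the endpoint, secret and field names are hypothetical, and in practice the call would ride the corporate VPN / IPsec tunnel rather than the open Internet:

```python
import hashlib
import hmac
import json
import urllib.request

# Hypothetical on-premises endpoint, reachable only over the corporate VPN / IPsec tunnel.
ONPREM_ENDPOINT = "https://addresses.corp.example.com/api/verify"
SHARED_SECRET = b"replace-with-a-real-secret"

def verify_shipping_address(customer_id: str, postal_code: str) -> bool:
    """Ask the on-premises system whether an address matches; the public catalog
    site never receives (or stores) the customer's full address record."""
    body = json.dumps({"customer": customer_id, "postal_code": postal_code}).encode()
    signature = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    request = urllib.request.Request(
        ONPREM_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json", "X-Signature": signature},
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return json.load(response).get("match", False)
```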

Here lies the objective of this post: in considering the evolution from Private to Hybrid, the Enterprise will arrive at the fact that some data must reside under the control of on-premises IT .. and control over those bits will include the questions above. That said, I am not suggesting (any suggestions you extract are at the risk and responsibility of the affected parties) that Enterprises expose their data to the world at large without adequate (and tested) protections.

Solutions? Yah. Lots:

  • Windows Azure offers the AppFabric Service Bus, a component that provides endpoint security .. a paradigm where secure connectivity is maintained by connecting applications through single points of access to other components. Disparate applications can connect to a single endpoint, simplifying and securing Hybrid Cloud components.
  • Amazon Web Services offers the Amazon Virtual Private Cloud (VPC), which enables an enterprise to launch a private and isolated section of AWS in a user-defined virtual network.
  • VMware offers its vCloud product, which enables enterprises to deploy workloads on shared infrastructure with built-in security and role-based access controls.

In these three cases (and there are others), Out-of-Cloud access can be enabled via IPsec and VPN. Your mileage may vary widely, depending on the analysis of your infrastructure and how that analysis maps against your requirements.

I do not intend this to be a pitch for deploying a Hybrid Cloud. However, I do suggest that enterprises consider and weigh their options when identifying the types of data that should be hosted on-premises, versus a trusted vendor.

Want to know more? Please read my collection of Cloud Computing posts, or reach out to me for more detail.

Cloud Computing: How-To eBook for Office 365

This 337-page gem comes to us free, courtesy of Microsoft Press and the Office 365 team.

The book shows you how businesses can use the online versions of Microsoft productivity applications to collaborate and work more flexibly than ever before. It also covers creating and administering Office 365 accounts, team sites, online meeting sites, and ways to work online and offline.

Office 365 includes a wide range of services, including Exchange Online (email, contacts and calendar), Lync Online (communications and meetings), SharePoint Online (document and workflow collaboration) and Office Web Apps (online versions of Excel, Word, OneNote and PowerPoint, plus online storage).

If you’re a business or IT department seeking to plan a deployment or learn more about the product, this is a terrific starting point for you.

Office 365: Connect and Collaborate virtually anywhere, anytime.

Want Some Windows Azure AppFabric Goodness?

I just finished a project producing scripts and demos for (count ‘em) lucky 13 videos for Windows Azure AppFabric. I managed to spend more time in Visual Studio than I have in years .. and it was actually quite fun.

These videos target the breadth developer and get you off on the right foot with “An Introduction to Windows Azure AppFabric”. From there, we introduce several new services, highlighted below.

Windows Azure AppFabric Cache
The AppFabric Cache is a distributed, in-memory application caching service for Windows Azure and SQL Azure applications. The cache gives applications high-speed, scalable and highly-available access to application data. The service requires no installation or instance management, and can be resized dynamically to meet your needs.
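
The usage pattern is the familiar cache-aside approach: check the cache, fall back to the data store on a miss, then populate the cache. Here is a conceptual stand-in in Python (the real AppFabric Cache exposes a .NET client, so this shows the idea rather than the actual API):

```python
import time

# In-process stand-in for a distributed cache, illustrating the cache-aside pattern.
_cache = {}  # key -> (value, expires_at)

def get_with_cache(key, load_from_store, ttl_seconds=60):
    """Return a cached value if it is still fresh; otherwise load it and cache it."""
    entry = _cache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]                              # cache hit
    value = load_from_store(key)                     # cache miss: go to the data store
    _cache[key] = (value, time.time() + ttl_seconds)
    return value

# Usage: the loader only runs when the cache is cold or the entry has expired.
catalog = get_with_cache("catalog:spring", lambda key: {"items": ["widget", "gadget"]})
```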

Windows Azure Service Bus Topics
Topics enable one-to-many message delivery, with filter rules that ensure delivery to the relevant systems in a publish / subscribe model. Topics are provisioned in code, requiring no installation or instance management.
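
Conceptually, a topic fans each message out to every subscription whose filter rule matches. This little Python sketch mimics that behavior (it is not the Service Bus API, just the publish / subscribe idea):

```python
# Conceptual publish / subscribe sketch: each subscription carries a filter rule,
# and a published message is delivered only to the subscriptions whose rule matches.
class Topic:
    def __init__(self):
        self.subscriptions = []  # list of (filter_fn, mailbox) pairs

    def subscribe(self, filter_fn):
        mailbox = []
        self.subscriptions.append((filter_fn, mailbox))
        return mailbox

    def publish(self, message):
        for filter_fn, mailbox in self.subscriptions:
            if filter_fn(message):
                mailbox.append(message)

orders = Topic()
us_orders = orders.subscribe(lambda m: m["region"] == "US")
big_orders = orders.subscribe(lambda m: m["total"] > 1000)

orders.publish({"region": "US", "total": 250})    # lands in us_orders only
orders.publish({"region": "EU", "total": 5000})   # lands in big_orders only
```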

Windows Azure Service Bus Queues
Queues make your application more resilient by providing an always-present receiving location for messages, even if the receiving listener is offline. Queues can also load-level your application when traffic spikes occur, or load-balance the application to improve processing performance.
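
The load-leveling idea in miniature: senders enqueue at whatever rate traffic arrives, and the receiver drains the backlog at its own pace, even if it comes online later. A plain-Python stand-in (not the Service Bus client) follows:

```python
from collections import deque

# Stand-in for a durable queue: senders enqueue during a traffic spike, and the
# receiver (which may have been offline) drains the backlog at its own pace.
work_queue = deque()

def send(message):
    work_queue.append(message)

def receive_batch(max_messages=10):
    batch = []
    while work_queue and len(batch) < max_messages:
        batch.append(work_queue.popleft())
    return batch

for order_id in range(25):          # burst of incoming orders
    send({"order_id": order_id})

while work_queue:                   # the listener processes later, at a steady rate
    for order in receive_batch():
        pass  # handle the order here
```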

Windows Azure Service Bus Relay
The Service Bus Relay supports Service Remoting, a way to expose on-premises services so they can be called by Cloud components, and One-Way messaging, a means to send messages to one or many recipients.
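
The relay idea in miniature: the on-premises service connects outbound to a rendezvous point and registers an endpoint (so no inbound firewall holes are needed), and cloud callers invoke it by name through that point. The sketch below is purely conceptual; the endpoint name is made up and this is not the actual relay binding API:

```python
# Conceptual relay sketch: the on-premises listener registers outbound with the
# relay; cloud-side clients reach it by endpoint name through the relay.
class Relay:
    def __init__(self):
        self.endpoints = {}

    def register(self, name, handler):   # done by the on-premises listener
        self.endpoints[name] = handler

    def invoke(self, name, payload):     # done by the cloud-side client
        return self.endpoints[name](payload)

relay = Relay()

# On-premises side: expose an inventory lookup without opening inbound ports.
relay.register("sb://contoso/inventory", lambda sku: {"sku": sku, "on_hand": 42})

# Cloud side: call the on-premises service through the relayed endpoint.
print(relay.invoke("sb://contoso/inventory", "WIDGET-01"))
```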

The video series includes both “what is” and “how to” pivots, complete with code samples so you can try these new services out for yourself. Visit the AppFabric Demos Channel on CodePlex.

“Anything” as a Service (XaaS) .. you knew this was coming ..

In the Cloud Computing world, where so many things are in flux, it should come as no surprise that virtually “anything” can (and will) be provided “as a Service”. For starters, we had (I go into more detail on this in my “Cloud Computing: IT as a Service” post):

  • IaaS (Infrastructure as a Service): Virtual, but logically-physical hardware. Servers in the sky that you can connect to remotely as though they were actual hardware. You expend a lot of effort managing servers (imaging, patching, load-balancing, etc.), but have high flexibility, as they support most types of existing applications, which you can deploy without rewriting or porting code.
  • PaaS (Platform as a Service): A virtual platform, where applications are deployed as services. You have next to no server management, and automagic scalability is built in, but existing application code must be written for or ported to the new environment.
  • SaaS (Software as a Service): An application you customize / configure atop a base application owned by the service provider. Some allow only configuration (tailoring organization- and user-specific information), while others offer higher levels of UI customization; think adding applications to Facebook or customizing your iGoogle or MSN home pages.

Then, realization and logical extension brought us (in no particular order):

  • IT as a Service (ITaaS): Standard IT applications, like email, online file sharing, online backup, online meetings, online workspace collaboration and more. The keyword here is ‘online’, of course, but these are commodity applications (common, and available from multiple providers) that every organization utilizes to one extent or another. ITaaS creates the opportunity for an organization to make a ‘rent’ versus ‘buy and maintain’ decision that can help it preserve capital.
  • Extended ITaaS applications grew out of the above and include online disaster recovery (backup plus online storage), online synchronization (synchronization plus online storage), online content sharing (photo uploads, players for slide shows plus online storage) and more. Thanks to Cloud Computing, any size company can offer value-added services to enable these functions, acquire customers and pocket the difference between what they collect from their users and their monthly Cloud Computing fees.

For background, let me discuss some earlier methodologies and new technologies:

  • Application programming interfaces (APIs): Command and content structures exposed by a software program to allow access by another program. APIs allow the second program to control and obtain data from the first. We’ve had APIs for years and years.
  • Web Services: APIs exposed to the Internet and accessible by web sites, web applications and other web services. Web services are used to provide data to most client applications .. odds are, your mobile phone gets weather data from one web service, bus information from another, and so on (a tiny example follows this list). I discuss this at length in “Composite Applications: Do You Use Them?”. The answer is: “yes”, though you may not realize it.
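
As a tiny illustration of consuming a web service, the snippet below fetches JSON over HTTP and reads a couple of fields; the weather endpoint and response shape are hypothetical, since every real provider has its own URL, key scheme and payload:

```python
import json
import urllib.request

# Hypothetical weather web service; real providers differ in URL, auth and response shape.
WEATHER_URL = "https://api.weather.example.com/v1/current?city=Seattle"

def current_conditions():
    with urllib.request.urlopen(WEATHER_URL, timeout=5) as response:
        data = json.load(response)
    return data["temperature_f"], data["summary"]
```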

With these, evolution brings us to:

  • Content as a Service: Previously known as “web pages” (I’m kidding .. a little). Once a standard connection methodology (Web Services) allowed programmatic access to applications (via APIs), the sky became the limit. Content contained in web pages and enjoyed by end users could now be mashed into other applications on other sites. The new content enhances the host application, making it a more valuable resource. Zillow is a good example of a site that does this: publicly-available data like maps and real estate tax records are mashed together with local multiple-listing service data (which may or may not be available at no cost), resulting in a site that displays maps with home plats, taxes, prices and realtor references (and more) that the user can use for research; a toy version of this kind of join appears after this list.
  • Data as a Service: Lots of companies have lots of accumulated data. Some accept the data in the form of online customer backups from their products, like Intuit (Quicken). Intuit could (I do not know if they do this) create an anonymous data warehouse with this end user data, assembling income and spending patterns by geography, and providing this data as a service to companies making business decisions about branch locations based on these criteria.
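
Here is that toy version of a content mash-up; the data is entirely synthetic, and a real site would pull each feed from a separate web service rather than local dictionaries:

```python
# Toy mash-up: two independent data sources joined on a shared key (the address).
tax_records = {
    "123 Main St": {"assessed_value": 410_000, "annual_tax": 4_350},
}
listings = {
    "123 Main St": {"asking_price": 475_000, "realtor": "A. Agent"},
}

def mashup(address):
    combined = {"address": address}
    combined.update(tax_records.get(address, {}))   # public tax data
    combined.update(listings.get(address, {}))      # multiple-listing data
    return combined

print(mashup("123 Main St"))
# {'address': '123 Main St', 'assessed_value': 410000, 'annual_tax': 4350,
#  'asking_price': 475000, 'realtor': 'A. Agent'}
```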

While assembling these data is only an API call away, creating and validating business use cases for these assemblies is the real magic in this cauldron. Many companies provide programmatic access to composite content and data as a service, for a price. Aggregators of these data (Microsoft’s Windows Azure Marketplace, for one) broker the transactions and collect fees for data access. Virtually any company can buy and sell data through this kind of marketplace, enabling even more interesting business models.
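
In practice, consuming a brokered dataset usually comes down to an authenticated REST call. The sketch below assumes a hypothetical marketplace endpoint and account key; the real Windows Azure Marketplace exposes its own feeds and authentication, so treat this purely as an illustration:

```python
import base64
import json
import urllib.request

# Hypothetical brokered-data feed and account key; purely illustrative.
FEED_URL = "https://marketplace.example.com/data/demographics?zip=98052"
ACCOUNT_KEY = "your-account-key"

def fetch_dataset():
    credentials = base64.b64encode(f"account:{ACCOUNT_KEY}".encode()).decode()
    request = urllib.request.Request(
        FEED_URL, headers={"Authorization": f"Basic {credentials}"}
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.load(response)
```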

There’s more (there always is). Applications as a service is a paradigm that has been around for a long time, but it is now expanding into more fee- and transaction-based models, including those with API access.

Back to the original topic: Are we now in the world of ‘anything-as-a-service’? Does XaaS exist?

It does, and you’re using it .. even though you might not realize it.

How Well Does Your Architecture Accommodate Change?

Designing and maintaining a flexible architecture is the grail of the IT, application development and business triumvirate in a company. A flexible architecture allows:

  • Rapid addition of new application features, and extension of existing ones.
  • Enhancing applications by including data from internal and external sources.
  • Exposing application data to a wider variety of devices and audiences.

With this flexibility, the business can pursue opportunities with minimal impact to baseline infrastructure.

The sad part: many architectures evolve (or mutate) over time, with new, function-enhancing components bolted onto existing components as afterthoughts. In short, this is not an ideal scenario. The company may have achieved short-term business goals, but in an inflexible (and risk-ridden) way.

Without a flexible architecture, we see more than technical challenges: we see friction between company units:

  • IT and development can be seen as blockers, while the business is viewed as making unreasonable demands.
  • Time-to-market, and therefore potential competitive advantage, is lost.
  • Development cycles can be disrupted, adding significant expense to projects and products.

How do flexibility constraints manifest in the enterprise?

  • Physical and IT: not enough servers and / or not enough time / resources to deploy them.
  • Development of new features takes too long, or the application / IT infrastructure won’t support them without changes to underlying components.
  • Access to internal and external data is restricted by policy, especially if business requirements require enhanced security levels in light of the modern online world.

If poorly-supported changes are implemented, success can become the company’s worst enemy. Launching a product or feature atop an architecture that isn’t ready creates a new set of issues:

  • End user impact: users have a less-than-positive experience with your product.
  • Competitive risk: your great and game-changing ideas are exposed to the world before your application is ready for prime time.
  • Unanticipated downtime / impact on other systems: ‘bolt-on’ additions to an existing architecture can pose risks to the original components.

Avoiding these issues and achieving success requires planning, execution and resources. The first two are wholly dependent on an organization’s ability to complete IT and development projects. The last is a hardware and resource issue that extending components into a solution that includes cloud computing (even if only on an interim basis) can help manage. Identifying your business goals and performing an inventory of your current state is an excellent place to start; a skilled architect can help you describe the future state and a migration path to your grail.

Cloud: Oh EC2 .. Say it ain’t so!

Seen today when starting a single large EC2 instance:

[screenshot]

Gives another new meaning to the term ‘unlimited’.

Cloud Computing: Migrating to Microsoft BPOS

To make the search engines happy: “Migrating Microsoft Small Business Server Exchange Server to BPOS”. Note that I did a full migration, not an email co-existence (where some users are on BPOS and others are local).

This turned out to be pretty much a no-brainer .. although in the process, I made it harder than it needed to be (not on purpose, of course).  Here’s the short story:

Original State:

  • Microsoft Small Business Server 2000 (with Exchange 2000)
  • Microsoft POP3 Connector collecting mail from an internet email server
  • A mix of Windows XP and Windows 7 clients
  • A mix of Microsoft Office 2003, 2007 and 2010

Final State:

  • Exchange Online
  • A mix of Windows XP and Windows 7 clients (upgrade work in-progress)
  • (Mostly) Microsoft Office 2010

    My biggest concerns:

    • Avoiding mail interruption (MX record delays, profile confusion, etc.)
    • Preventing loss of email content
    • Migrating existing email to the online services

    As I’d migrated my own (much smaller) domain to the cloud a few months ago, I figured I had it cooked. However, I was wrong. My steps were:

    • Create BPOS users and assign email services
    • Cut the MX record away from my domain to BPOS
    • Set up a second profile (pointing to BPOS) on each Outlook 2010 client
    • Drag and drop old email between profiles in Outlook.

    While this worked nicely (I have only five users on my domain), it was not optimal. I avoided email interruption and loss of email content, but went to a much greater level of effort in the process. In my defense, Microsoft had not yet released the Exchange Email Migration Process document .. it did not yet exist.

    Sometimes it’s hell to be an early adopter. That said: Kudos to the team at MS who built the migration tool.

    The correct (and easier) way? Well, start with a plan that includes:

    • Users: User list, size of each mailbox, user (or workstation) availability during / after the migration. Depending on the size of your domain, be prepared to create logical groups you can migrate in succession. Suggestion: prepare good communications and confirm user rights; to keep your hands off the workstations, your users must be able to install software on their local machines.
    • Clients: I upgraded to Outlook 2010 as part of the migration; some before the migration, some after. Outlook 2003 and 2007 work just fine if you choose to remain on older versions. There appears to be no client-side difference as to when you do the upgrade.
    • Schedule: Migration is simple, but it does take time. If executed properly, users will receive new emails immediately, but may not have access to older emails without incurring complexity on the workstation (i.e., accessing multiple profiles).
    • Your availability: depending on the technical prowess of your users and the clarity of your instructions, you will have questions to address. Make sure you have time to work with your users and watch your migration process.

    I’m serious: make a plan. To get you started (these steps can occur in parallel):

    • With your LiveID, create your Microsoft Online Services account, identifying the services (and quantity of each) you want. You have an option of a 30-day trial before signing up for a one-year commitment. The full-meal deal is $10 / user / month. You’ll be asked to create an internal domain (something like yourdomain.microsoftonline.com) and an administrator account.
    • In the Microsoft Online Services Administration Center, create your users and assign desired services to each. Have the creation process email you their initial (system-assigned) password.
    • Ask your users to install the Microsoft Online Services Sign-in Client. This is a fairly simple process and a fast install. I asked my users to do the install but not the login, as I wanted to make sure everyone could get installed before wrapping myself up in loops that included system troubleshooting.
    • Validate your MX record. While you’re at it, confirm you can access your domain registrar to make changes to it. The validation process is harmless; that is, it won’t make any changes to your email domain. However, you must confirm with BPOS that you can redirect the record when the time comes; it takes about a day to confirm. Microsoft has registrar-specific instructions on the Migration pane in the Microsoft Online Services Administration Center. (A quick way to see where your record currently points is sketched after this list.)
    • Install the Microsoft Online Services Migration Tools on a capable workstation (mine was pretty low-end). See the minimum requirements at Migration Tools Prerequisites. As the tool only migrates one email account at a time, you might try installing on more than one machine. The link to install is in the Microsoft Online Services Administration Center, under the ‘Migration’ tab.
    • Confirm your Migration Tool can connect to your local Exchange Server.
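
Before and after the cutover, it helps to see where your MX record actually points. A quick check from any workstation (the domain name below is a placeholder) shells out to nslookup:

```python
import subprocess

# Show where the MX record for a domain currently points (placeholder domain).
# `nslookup -type=MX` works from a standard Windows or *nix command prompt.
result = subprocess.run(
    ["nslookup", "-type=MX", "yourdomain.com"],
    capture_output=True,
    text=True,
)
print(result.stdout)
```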

    When your MX record is validated AND your users have the Microsoft Online Services Sign-in Client installed AND you can connect to your local Exchange Server with the Migration Tools (the following steps can also occur in parallel):

    • Edit each user in the Microsoft Online Services Administration Center, changing the domain from yourdomain.microsoftonline.com to yourdomain.com.
    • Forward the system-assigned password instructions to your users (note: Microsoft could do a little better job of automating this, starting with a subject line that includes the user name; a small mail-merge sketch follows this list), changing the domain name to yourdomain.com. Your users will sign in with the system-assigned password, changing it on their first login. When they do, the login client will set up a new Outlook profile with their full email address. Caveat: I don’t know what will happen if the current profile is already named with the full email address; mine were all ‘Outlook’. Possibility: if you create the users after you validate the MX record, you might be able to avoid the ‘changing the domain name’ step above. I didn’t.
    • Warn your users: they will see an empty Inbox after the previous step. Have them close Outlook and select their old profile until their migration is initiated. Once their individual mailbox migration is initiated, they should use the new profile instead of the old. Microsoft did a nice job of email forwarding for in-process mailbox migrations; mail is delivered to both profiles. Possibility: you may want to run migrations after COB, advising users to log onto their new profiles in the morning. This depends on the number of mailboxes you need to migrate, and the size (i.e., time required) of each.
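
Since the system-generated password mails don’t put the user’s name in the subject, a small mail-merge loop can produce friendlier per-user instructions. Everything below (names, addresses, passwords, domain) is a placeholder, and print stands in for actually sending mail:

```python
# Generate per-user migration instructions with the user name in the subject line.
# Users, passwords and the domain are placeholders; swap `print` for real mail delivery.
users = [
    {"name": "Alice", "email": "alice@yourdomain.com", "temp_password": "Xy7!temp"},
    {"name": "Bob",   "email": "bob@yourdomain.com",   "temp_password": "Qr2!temp"},
]

for user in users:
    subject = f"Action required ({user['name']}): sign in to Microsoft Online Services"
    body = (
        f"Hi {user['name']},\n\n"
        f"Sign in as {user['email']} with the temporary password {user['temp_password']}.\n"
        "You will be prompted to change it on first login; the sign-in client will then\n"
        "create your new Outlook profile.\n"
    )
    print(subject)
    print(body)
```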

    You’re now ready to go. The rest is actually simple:

    • Update your MX record (instructions for this on the Migration tab in the Microsoft Online Services Administration Center). Email will go to the old Exchange location and be forwarded to the new until the record is propagated. Once email stops showing up in the old profiles, the MX record is moved.
    • Run the Migration Tool, identifying the users you want to migrate.
    • Advise the users to start using their new profile.
    • Wait (bandwidth, mailbox size).
    • Watch for errors; I had a few users with corrupt records. Start another user while dealing with these.

    Once migration is complete (confirm this by email delivery to only the new profiles):

    • Write your users with instructions on how to change their default profile to the new one. You can avoid deleting the old one (depending on your retention policies); the OST will persist until you remove it.
    • Users will need to set up their email signatures; they can pull existing from their Sent Items folder.
    • Shut down your local Exchange Server services, archiving per your local policies.

    I managed to get through all but one of the migrations without issue (the record corruption errors I referenced above). However, this was just in time for a service outage. Service outages happen, usually for only a few minutes. During this time, the Outlook icon in the status bar will show a tiny hourglass, and your Outlook windows may freeze. Release this choked thread: mouse over the status bar icon, right-click and click ‘cancel server request’. Your mail windows will refresh so you can save drafts and documents.

    You can check the status of the service in your region with the links below .. HOWEVER, it requires a login (dumb). If the login service is hosed, so is the ability to get service status:

      Here’s what I saw tonight .. argh:

    [screenshot]

    That said, the service is quite reliable, and I don’t need to manage my Exchange Servers anymore. Add to this the improved access to mobile and web clients, and you have a terrific commodity-based service you can share with your customers.

    Cloud Computing: BPOS Fun Facts

    I rummaged around the Microsoft Business Productivity Online Suite (BPOS) site on behalf of a customer over the weekend. The customer currently has an (aging) Exchange Server on-premises, and we’re all keen to eliminate the silicon (and the associated pains) therein.

    Just what do you get from BPOS, you might ask? Besides getting out of managing your Exchange and SharePoint servers, it is suited for organizations that:

    • View messaging and collaboration as mission-critical.
    • Require security, reliability and access flexibility.
    • Want to avoid managing commodity IT functions in favor of focusing on strategic IT initiatives.

    BPOS is sold on a per-user, monthly charge basis; $10 / user / month for the suite (present-day pricing; see current pricing on the home page). Exchange Online is $5 / user / month by itself; other à la carte pricing can be found on the BPOS Home Page.

    Looking to do your own BPOS implementation? Microsoft has been most helpful in providing reams of documents to confirm you can use the service successfully.

    The simple answer? If your organization is hosting, or could host, these servers on-premises, you can enjoy (celebrate) the advantages of hardware-free management and reliable uptime.

    We’re sold; will capture and share our migration experience.

    Test Driving Office Web Apps: First Blush

    Had some spare time over the Thanksgiving holiday and thought I’d take the online versions of Microsoft Office applications for a little test drive.

    As I’ve been spending a lot of time in PowerPoint lately, I started here first. Most notable:

    • Lack of transitions
    • Lack of animations

    Not fatal, in themselves. However, most everything else I typically use is intact. I was able to create the short deck below in under ten minutes. Especially nice is support for SmartArt, those handy little art-lets to which you can add text in outline format (you’ll see flow slides in the presentation below):

    Not bad for ten minutes’ work; an additional minute to figure out how to share it across the web, and voila, instant availability.

    Excel is no slouch, either. Shortcuts like cell ranges (type ‘January’ in the first cell, then grab-and-drag the lower-right corner to create the rest of the month labels), drag-and-copy and drag-and-select work well. Formulas are a bit tricky at first (I expected to find them on the ‘Insert’ menu), but press the ‘=’ sign and let cell context-sensitivity do the rest .. I was able to total up rows and columns with ease. Control-key formatting (B, U and I) works with single cells and selected ranges. Graphing was a nice surprise; I built out a quick calendar and column range, and the graph updated in near-real time as I changed data. Very nice.

    The best of the three (IMHO) is Word. Tables, spell-check, style-based headings and a bunch of other goodies are supported, along with bullets, numbered paragraphs, text alignment and quotations. I created the following document in less than five minutes with the help of my friends at Lorem Ipsum:

    http://cid-ecddcf497d93928b.office.live.com/embedicon.aspx/TestDocument.docx

    (update: no document preview because no ’embed’ option for the link).

    Absent was support for table of contents, though .. will discuss that (and a few other bits) later.

    What about saving? With the Word WebApp, you have a save icon that stores the document in your SkyDrive. The PowerPoint and Excel WebApps have a “Where’s the save button” button that returns a response of “saved automatically” (also to SkyDrive). This does impede performance a bit, as there’s a fair bit of round-trip traffic going between your PC and the server. The “Save As” button will save a local copy, and the “Open in xyzzy” button works nicely with local copies of the software (if xyzzy is installed).

    All in all, a very nice set of features to create, store and access documents from virtually anywhere. Unlike Office Live, you can work with Microsoft Office documents directly from http://office.live.com even on the Google Chrome browser.

    Oh: did the above paragraph confuse you? You’re not alone:

    • http://office.live.com gives you access to the online version of Microsoft Office applications and saves these documents to SkyDrive.
    • http://officelive.com is free web hosting and document storage .. however, you can also edit documents with the same online versions of Microsoft Office applications (as above) .. and they’re still stored in SkyDrive.

    .. someone should call the branding police ;) This may be why Microsoft is rebranding the whole online document thing under the umbrella of Office 365: now called Office Professional Plus, these WebApps join Exchange Online, SharePoint Online and Lync Online (formerly Microsoft Office Communicator).

    In my next pass at this, I’ll upload some locally-created documents with advanced features and see how the online versions deal with them.

    Cloud Computing: IT as a Service

    At PDC last week, I heard Microsoft utter the umbrella term: “IT as a Service” to describe their Cloud Computing direction. In short, ITaaS encompasses the three commonly-known ‘as a Service’ offerings:

    • IaaS (Infrastructure as a Service): Virtual, but logically-physical hardware. Servers in the sky that you can connect to remotely as though they were actual hardware. You expend a lot of effort managing servers (imaging, patching, load-balancing, etc.), but have high flexibility, as they support most types of existing applications, which you can deploy without rewriting or porting code. Amazon has capability in this space, as does VMware.
    • PaaS (Platform as a Service): A virtual platform, where applications are deployed as services. You have next to no server management, and automagic scalability is built in, but existing application code must be ported to the new environment. Google AppEngine fits here, as does the application platform side of the Windows Azure Platform.
    • SaaS (Software as a Service): An application built atop a base application. Some allow only configuration (tailoring organization- and user-specific information), while others offer higher levels of UI customization; think adding applications to Facebook or customizing your iGoogle or MSN home pages. On the business side, Force.com is the big player in this space with its CRM application.

    Looking at the features now available in the Windows Azure Platform update, Microsoft looks to have the broadest offering in Cloud Computing; while light on packaged SaaS platforms, they’re heavy in the application development space.

    But, there’s more (there always is): The Microsoft offering has also taken familiar server-based applications into the cloud. Exchange Online, SharePoint Online, Office Communications Online and Office LiveMeeting are available as subscription services, wrapped under the umbrella brand: the Business Productivity Online Suite (BPOS).

    Not to confuse the market with Darwinian brand evolution, Microsoft also announced Office 365, adding Office Professional Plus (web versions of Word, Excel, etc.) and Lync Online (unified communications) to the mix.

    Like the rest of the cloud, the business model is pay-as-you-go, and your IT Department doesn’t need to muck about with hardware for near-commodity application functionality.

    Google is in the fray as well; Google Docs and GMail are Cloud alternatives for productivity applications and are mostly free for smaller organizations, although I found their documentation a bit daunting the last time I explored moving my domain there.

    There are several TCO analyses out there; grab one (or ping me and I’ll assist). Suffice it to say: for the energy (not) expended managing servers, cloud computing should be on the radar of all organizations.
