Geographic Replication in Windows Azure

Windows Azure had some downtime this past February 29th. The news made the rounds quickly; the facts are covered on the Windows Azure Blog.

I did some tinkering on a few late nights this past week, recognizing that a reliable solution that does not include dedicated hardware (who wants to buy hardware that you might never need?) would be to locate a copy of a customer site in a data center geographically distant from another. While this is not perfect, there are always risks:

  • Some higher-level controller goes out (in the case of Azure, this could be authentication in the Fabric Controller).
  • CNAME or DNS update delays.
  • Loss of the physical connection to a data center or region.

What to do, what to do?

Enter the Windows Azure Traffic Manager. Way back when (in fact, I once administered a test question on this), load balancing in Windows Azure was completely automatic. With the Windows Azure Traffic Manager, you have control over how traffic is routed to your hosted services:
  • Performance
  • Round Robin
  • Failover

The Windows Azure Traffic Manager lets you manage traffic across multiple instances of your Web Roles for scalability and uptime, based on the criteria above. Further, you can create routing policies that geo-route incoming user requests to the instance closest to the user. Combining these policies enables geographical failover in short order.
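
Conceptually, the three methods boil down to different endpoint-selection rules. Here’s a minimal Python sketch of that logic .. the endpoint names, health flags and latency figures are hypothetical, and this illustrates the routing behavior only, not the Traffic Manager API:

    # Conceptual sketch of the three load-balancing methods. Endpoints,
    # health flags and latencies below are invented for illustration.
    import itertools

    endpoints = [
        {"name": "myapp-north.cloudapp.net", "healthy": True, "latency_ms": 40},
        {"name": "myapp-south.cloudapp.net", "healthy": True, "latency_ms": 95},
    ]

    _rr = itertools.cycle(endpoints)

    def pick_performance():
        """Performance: the healthy endpoint with the lowest latency
        (the real service judges latency relative to the caller's location)."""
        return min((e for e in endpoints if e["healthy"]),
                   key=lambda e: e["latency_ms"])

    def pick_round_robin():
        """Round Robin: rotate through endpoints, skipping unhealthy ones."""
        for _ in range(len(endpoints)):
            e = next(_rr)
            if e["healthy"]:
                return e
        raise RuntimeError("no healthy endpoints")

    def pick_failover():
        """Failover: always prefer the primary; fall through in order
        only when it stops answering."""
        for e in endpoints:
            if e["healthy"]:
                return e
        raise RuntimeError("no healthy endpoints")

    endpoints[0]["healthy"] = False        # simulate the primary going dark
    print(pick_failover()["name"])         # -> myapp-south.cloudapp.net

The Failover method is the interesting one for this post: traffic shifts to the secondary deployment only when the primary stops responding.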

  • The first thing you need is a deployed hosted service. Please see the deployment lab in the WATK for details on how to set this up. Select the primary geographical data center that suits your needs.
  • Then, you need multiple instances of your roles .. note that you need at least two instances to qualify for the uptime SLA anyway. You can set the instance count in your Service Configuration when you deploy your project, or in the Windows Azure Portal at runtime.
  • With your application deployed in one data center, repeat the steps above by setting up a hosted service in a different data center and deploying your code to it. Note that these will have different URL prefixes, but don’t let that bug you.

Verify your deployments in the Windows Azure Portal. You should see both deployments in different data centers. Then:

  • Run the Windows Azure Traffic Manager from the Windows Azure Portal.
  • Create a Traffic Manager Policy, selecting your subscription and the ‘Performance’ option under Load Balancing.
    • Select the DNS names of your two disparate services.

With this, your solution is complete. The Traffic Manager will talk with the Fabric Controller, and when performance or access is degraded on one set of instances, it will redirect traffic to the other instances. This should increase your stability in the Cloud!

I’ll see you there.


Building a Windows Azure Development Environment

Happily, this post has become irrelevant thanks to the New and Improved Microsoft Web Platform Installer (WebPI). That spiffy little kit interrogates your system for the proper dependencies and installs the Microsoft Web Stack, tools, SDKs and the like. To get started, click on the link.

Microsoft updated the Windows Azure Training Kit to June 2012 as well; you’ll find plenty of information therein .. especially labs using the new bits and highlighting the new features of the platform.

The content below is preserved for archive only:

As I’ve been working with some of the best and the brightest the WAISG has to offer, I think it’s time to provide a link to assist others in some Windows Azure 101 (a/k/a “Getting Started”) bits and pieces. In this post, I’ll cover setting up your development environment on a Windows 7 or Windows Server 2008 R2 system.

  • Make sure you have the current Service Pack for your operating system. The easiest way to do this is to click on the Orb (or Start, in WS2008R2) and type ‘Windows Update’. Windows Update will detect if your system is patched to current levels.
  • Install Visual Studio 2010 or Visual Studio 2010 Express (the free version). Be sure to check Windows Update again after installation (and rebooting) to ensure you have the current Service Pack (SP1) installed.
  • Using the Web Platform Installer, install the Windows Azure SDK for .NET.
  • Visual Studio 2010 Express installs SQL Express by default, but your development environment may include a full (or developer) edition of SQL Server. In either case, check Windows Update for the current Service Pack for your version. The SQL Express management UI is a separate download: SQL Server Management Studio Express. If you have full SQL Server, the UI is included. For help running the UI, please see “Using SQL Server Management Studio” on MSDN.
  • Install the Windows Azure Training Kit. This kit is chock-full of information, tutorials and source code. The current version is “January 2012”. As this is updated frequently, I suggest you do the full install (about 500 MB) into a separate folder on your hard drive.
  • Get the additional bits you need for the labs. To do this, click on “Prerequisites.htm” in the folder you installed the Windows Azure Training Kit. This will start an application that will interrogate your system and advise any components you need for the labs you want to run.

:: whew ::  Only a few more steps (I promise). In order to use the local emulators:

  • Compute: You MUST run Visual Studio as an administrator (link to a Windows 7 forum, but it works for both Windows 7 and Windows Server 2008).
  • Storage: Your logged-in user MUST be a member of the SQL Server sysadmin role (link to David Browne, who provides a script that does this for a local user .. as long as that local user is an administrator of the local system .. otherwise, contact your IT). This is required because the local user must be able to create the databases used to store blobs, tables and queues during development.

With these bits installed, you should be able to conquer any of the labs in the Windows Azure Training Kit with ease .. and speed your way into the Cloud!

I’ll see you there.

Integration Architecture and the Baggage (mis-) Handlers

This is a bit of a stream of consciousness post. No agenda; it’s just something I observed and extrapolated into (near-) relevancy.

There’s the plane.

Then, there’s the conveyor belt with a guy at each end; one who loads the incoming bags on the belt, and the other who grabs them from the belt and swings them onto the waiting trailer.

When the trailer fills (or the plane empties), the truck comes over and hauls it off to Baggage Claim.

It’s a dance. Can be amusing to watch, though. Beneath the grins, it’s a system rife with opportunity for error. Consider:

  • There’s only one door, so the physical world requires the guy on the plane to move further away from the door to fetch more bags.
  • If the belt gets ahead of the guy on the ground, the bags get to the ground, too. Sure, he can stop the belt, but if he does, the guy in the plane gets held up.
  • If the trailer is full, the belt must stop.
  • If the truck is busy, the trailer sits.

Once unloaded, it all happens in reverse.

The airline needs to turn (unload, reload and go) a plane quickly. You may assume outgoing bags are loaded onto a trailer at the terminal and delivered to the plane, even while incoming bags are still being handled. Consider:

  • What if you’re short handlers? Bags don’t move on / off planes, onto belts or into / out of trailers.
  • What if you’re short belts, or a belt is out of service? Bags don’t move on / off planes or in / out of trailers.
  • What if you’re short trailers? Bags get handled twice, and are left on the ground, either coming or going.
  • What if you’re short trucks? Bags sit on loaded trailers. Empty trailers sit where they don’t need to be.

Then, there’s the endpoints:

  • The plane is early: resources (handlers, trailers and trucks) are redirected, putting other arrivals / departures at risk.
  • The plane is late: resources (handlers, trailers and trucks) wait, or are redirected. Outgoing bags wait somewhere, along with the handlers, trailers and trucks.
  • The guys in Baggage Claim are behind; the loaded trailer sits, which holds up loading bags that need to be on an outgoing flight.

Each connection represents a potential choke point: a place where the process runs the risk of coming to a halt. Failure at any point will impact other parts of the system, causing delays.

I’m just so glad I pack light enough to carry on.

I didn’t write this to complain about baggage handlers. Rather, how about we imagine modeling this as a data workflow? What would you do to protect against delays in the system?

Note there are components in place in this system to mitigate some of the risks of delaying the process:

  • Belts reduce the distance a handler must travel, increasing capacity and saving time. Time = money.
  • Using trailers rather than trucks (trailers can be left at endpoints) creates a buffer in which bags can be stored for short intervals.
  • Using trailers also reduces the number of trucks and drivers while increasing truck utilization. The accountants will enjoy maximum utilization of a capital expense.
  • Proper staffing of handlers at both the ends of belts and in Baggage Claim keeps things moving.

Some rough equivalents for a workflow architecture:

  • Bags: data, packaged in a mostly standardized form (the real world just sucks sometimes).
  • The plane baggage compartment: data / application silo in which users can create, manipulate and store data. The plane baggage compartment has a finite capacity.
  • Belts: a FIFO (first-in, first-out) queue with finite capacity and a known duration for offloading data packets from the data / application silo.
  • Trailers: a LIFO (last-in, first-out) stack with finite capacity, accepting data packets from the belt queue .. rather inefficiently, since the last bag in is the first bag out by default. There can be exceptions, though: see Handlers, next.
  • Handlers: processes with finite capacity and the power to evaluate data packages (i.e., reading certain baggage tags for expedited handling). Premium-tagged bags are placed in the trailer where the handlers in Baggage Claim can retrieve them first, rather than getting the standard LIFO treatment. A sketch of these components follows the list.
  • Baggage Claim: data / application silo that serves processed data back to users (or to other systems).
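
Here’s a minimal Python sketch of those components .. a bounded FIFO belt, a bounded LIFO trailer, and the premium-tag exception. The class names, capacities and bag tags are all invented for illustration:

    # Bounded FIFO belt + bounded LIFO trailer, with premium bags jumping
    # the LIFO line. Hitting capacity raises, just as a full trailer stops
    # the belt in the real system.
    from collections import deque

    class Belt:                       # FIFO queue with finite capacity
        def __init__(self, capacity):
            self.capacity, self.items = capacity, deque()

        def put(self, bag):
            if len(self.items) >= self.capacity:
                raise OverflowError("belt full .. upstream handler waits")
            self.items.append(bag)

        def take(self):
            return self.items.popleft() if self.items else None

    class Trailer:                    # LIFO stack with finite capacity
        def __init__(self, capacity):
            self.capacity, self.items = capacity, []

        def put(self, bag):
            if len(self.items) >= self.capacity:
                raise OverflowError("trailer full .. the belt must stop")
            self.items.append(bag)

        def take(self):
            # The handler exception: premium tags come off first.
            for i in range(len(self.items) - 1, -1, -1):
                if self.items[i]["premium"]:
                    return self.items.pop(i)
            return self.items.pop() if self.items else None

    belt, trailer = Belt(capacity=5), Trailer(capacity=10)
    for tag in ["A", "B", "C"]:
        belt.put({"tag": tag, "premium": tag == "B"})
    while (bag := belt.take()) is not None:    # handler: belt -> trailer
        trailer.put(bag)
    print(trailer.take()["tag"])               # -> B (premium jumps the line)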

Use of components like queues and stacks can enable your workflow architecture to scale to Internet capacity. You should establish service-level agreements (SLAs) at each touchpoint to ensure you’re sending / receiving data in an acceptable timeframe and in the proper formats.

In past lives, I worked extensively as an Integration Architect. This was in the days before Windows Workflow Foundation and BizTalk. BizTalk 2004 was a godsend: in fact, I still have a current BizTalk 2006 implementation on a VPC with which I tinker when I feel the need to code. It’s not as sexy as a hot web application built in Silverlight or WPF, but it keeps me thinking of ways to add business value by integrating data trapped within cranky silos with other applications and end users.

A data model, application architecture or process workflow works best when it’s modeled as closely as possible to the real-life process it represents. Once modeled, you can look for ways to improve it in an iterative fashion.

Original Post: October 2007

Want Some Windows Azure AppFabric Goodness?

I just finished a project producing scripts and demos for .. count ‘em .. lucky 13 videos for Windows Azure AppFabric. I managed to spend more time in Visual Studio than I have in years .. and it was actually quite fun.

These videos target the breadth developer and get you off on the right foot with “An Introduction to Windows Azure AppFabric”. From there, we introduce several new services, highlighted below.

Windows Azure AppFabric Cache
The AppFabric Cache is a distributed, in-memory application caching service for Windows Azure and SQL Azure applications. The cache provides applications with high-speed access to application data, along with scale and high availability. The service requires no installation or instance management, and can be resized dynamically to meet your needs.

Windows Azure Service Bus Topics
Topics enable one-to-many message delivery, with filter rules that ensure delivery to the relevant systems in a publish / subscribe model. Topics are provisioned in code, requiring no installation or instance management.

Windows Azure Service Bus Queues
Queues make your application more resilient by providing an always-present location where messages can be received, even if the receiving listener is offline. Queues can also load-level your application when traffic spikes occur, or load-balance the application to improve processing performance.
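
The load-leveling idea is easy to see in miniature. Here’s a toy Python sketch of the pattern .. it uses an in-process queue and a worker thread as stand-ins (not the Service Bus API), and the message names are invented:

    # Load-leveling: producers burst, the worker drains at its own pace,
    # and the queue absorbs the spike instead of the worker.
    import queue
    import threading
    import time

    work = queue.Queue()              # stand-in for a durable queue

    def worker():
        while True:
            msg = work.get()
            time.sleep(0.05)          # steady, rate-limited processing
            print("processed", msg)
            work.task_done()

    threading.Thread(target=worker, daemon=True).start()

    for i in range(10):               # a traffic spike: 10 messages at once
        work.put(f"order-{i}")

    work.join()                       # the worker levels the spike over time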

Windows Azure Service Bus Relay
The Service Bus Relay supports Service Remoting, a way to expose on-premises services so they can be called by Cloud components, and One-Way messaging, a means to send messages to one or many recipients.

The video series includes both “what is” and “how to” pivots, complete with code samples so you can try these new services out for yourself. Visit the AppFabric Demos Channel on CodePlex.

How to store and access (a lot of) protected content

Just read an article about iCloud on the Datamation site: “How Apple’s iCloud Will Rain On Google’s Parade”.

On rain? Everything about iCloud is a secret at the moment, except the (assumed) name. That said, Mr. Jobs may have a trick or two up his turtleneck when he announces specifics next week at the Apple Worldwide Developers Conference.

However, I come here today not to bury Apple or Google, nor to praise them. I just got to thinking about how I’d design a system that could store a massive amount of DRM-protected media (media bound for playback on a specific device or by a specific user through a single-use token). Then, I got to thinking other things, and magic happened.

Critical Mass: Bigger is Better
If you consider the huge number of users who purchased “I’m Yours” by Jason Mraz, and then consider what it would be like to store that ONE song plus DRM deltas for millions of users .. you’re looking at a lot of disk space.

iCloud and iTunes share a happy technical relationship with their users: they know who you are and what you bought. As a result, they can (theoretically) store each song in the library only once .. and then simply issue a rights token for the song’s use when it is requested for download or playback. Other providers may find themselves storing multiple copies of the same media (especially if it is unprotected), and while data deduplication is a great thing, it is only as good as its algorithm; a 1% loss of efficiency translates into the need for many, many disks when applied at massive scale.
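
For fun, here’s a back-of-the-napkin Python sketch of that store-once idea: one content-addressed copy of each track, with per-user rights tokens issued on demand. Purely hypothetical .. this is not how iTunes is actually built:

    # Store each track once (keyed by content hash); issue single-use
    # rights tokens instead of per-user copies. All names are invented.
    import hashlib
    import secrets

    library = {}      # content hash -> the single stored copy of the media
    rights = {}       # token -> (user_id, content_hash)

    def ingest(media_bytes):
        """Store a track once, no matter how many users buy it."""
        key = hashlib.sha256(media_bytes).hexdigest()
        library.setdefault(key, media_bytes)   # a second ingest stores nothing
        return key

    def issue_token(user_id, content_key):
        """Grant access with a token rather than another copy of the media."""
        token = secrets.token_urlsafe(16)
        rights[token] = (user_id, content_key)
        return token

    def fetch(token):
        user_id, key = rights.pop(token)       # pop() makes the token single-use
        return library[key]

    song = b"...I'm Yours audio bytes..."
    assert ingest(song) == ingest(song) and len(library) == 1
    token = issue_token("alice", ingest(song))
    print(len(fetch(token)), "bytes served via a rights token")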

Connections: One Size Fits These
From my limited experience with an iPhone, an iPod and an iPad (relegated to setting them up for friends and getting them connected to Wi-Fi networks), I see the joy of having an AppleID for a consumer. Like the Windows LiveID and the Google Account, a single identity for many services is significant; especially for today’s consumers who are confronted with too many passwords for too many sites.

Ditto for the architecture: a single identity (via single sign-on) means only one token needs to be passed while acquiring content from the system, for more efficient operation. It also makes it easier to limit playback or download by device, through rapid revocation whenever a device contacts the DRM provider.

On connecting to, and presenting, content? A secure and homogeneous API layer for these devices selects the content in the proper form factor for the device, verifies the AppleID and makes the user happy.
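
Such a layer might look something like this Python sketch .. the device names, renditions and verification stub are all invented for illustration:

    # Verify the identity token, then pick the rendition that fits the device.
    RENDITIONS = {"iPhone": "480p", "iPad": "1080p", "iPod": "audio-only"}

    def verify(token):
        return token.startswith("valid:")       # stand-in for real validation

    def serve_content(token, device, content_id):
        if not verify(token):
            raise PermissionError("identity check failed")
        rendition = RENDITIONS.get(device, "480p")   # sensible default
        return f"{content_id}/{rendition}"           # e.g. URL of the right encode

    print(serve_content("valid:abc123", "iPad", "imyours"))  # -> imyours/1080p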

Developers, Developers, Developers
Back when silicon dinosaurs roamed the Earth (and I was coding in Visual Basic), Microsoft used a highly-successful strategy of driving adoption by luring developers to the platform. Developers deployed applications rapidly with easy-to-use tools, showed them to their employers in the enterprise, and enterprises followed suit.

Apple has done something even more interesting: they’ve achieved massive penetration into the consumer market, and developers have followed that breadth market instead. With over 350,000 applications in the App Store, developers are following the money.

There’s more (there always is).

“Anything” as a Service (XaaS) .. you knew this was coming ..

In the Cloud Computing world, where so many things are in flux, it should come as no surprise that virtually “anything” can (and will) be provided “as a Service”. For starters, we had (I go into more detail on this in my “Cloud Computing: IT as a Service” post):

  • IaaS (Infrastructure as a Service): Virtual, but logically-physical hardware. Servers in the sky that you can connect to remotely as though they were actual hardware. You expend a lot of effort managing servers (imaging, patching, load-balancing, etc.), but have high flexibility, as they support most types of existing applications, which you can deploy without rewriting or porting code.
  • PaaS (Platform as a Service): A virtual platform, where applications are deployed as services. You have next to no server management, and automagic scalability is built in, but existing application code must be written for or ported to the new environment.
  • SaaS (Software as a Service): An application you customize / configure atop a base application owned by the service provider. Some allow only configuration (tailoring organization- and user-specific information), while others offer higher levels of UI customization; think adding applications to Facebook or customizing your iGoogle or MSN home pages.

Then, realization and logical extension brought us (in no particular order):

  • IT as a Service (ITaaS): Standard IT applications, like email, online file sharing, online backup, online meetings, online workspace collaboration and more. The keyword here is ‘online’, of course, but these are commodity (common, and available from multiple providers) applications that every organization utilizes to one extent or another. ITaaS creates the opportunity for an organization to make a ‘rent’ versus ‘buy and maintain’ decision that can help preserve capital.
  • Extended ITaaS applications grew out of the above and include online disaster recovery (backup plus online storage), online synchronization (synchronization plus online storage), online content sharing (photo uploads, players for slide shows, plus online storage) and more. Thanks to Cloud Computing, a company of any size can offer value-added services to enable these functions, acquire customers, and pocket the difference between what they collect from their users and their monthly Cloud Computing fees.

For background, let me discuss some earlier methodologies and new technologies:

  • Application programming interfaces (APIs): Command and content structures exposed by a software program to allow access by another program. APIs allow the second program to control and obtain data from the first. We’ve had APIs for years and years.
  • Web Services: APIs exposed to the Internet and accessible by web sites, web applications and other web services. Web services are used to provide data to most client applications .. odds are, your mobile phone gets weather data from one web service, bus information from another, and so on. I discuss this at length in “Composite Applications: Do You Use Them?”. The answer is: “yes”, though you may not realize it.

With these, evolution brings us to:

  • Content as a Service: Previously known as “web pages” (I’m kidding .. a little). Once a standard connection methodology (Web Services) allowed programmatic access to applications (via APIs), the sky became the limit. Content contained in web pages and enjoyed by end users could now be mashed into other applications on other sites. The new content enhances the host application, making it a more valuable resource. Zillow is a good example of a site that does this: publicly-available data like maps and real estate tax records are mashed together with local multiple-listing service data (which may or may not be available at no cost), resulting in a site that displays a map with home plats, taxes, prices, realtor references and more that the user can use for research.
  • Data as a Service: Lots of companies have lots of accumulated data. Some accept the data in the form of online customer backups from their products, like Intuit (Quicken). Intuit could (I do not know if they do this) create an anonymized data warehouse from this end-user data, assembling income and spending patterns by geography, and provide the result as a service to companies making business decisions about branch locations based on these criteria. A sketch of that kind of aggregation follows the list.
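
Here’s the sketch: a hypothetical Python rollup of the Data-as-a-Service idea above .. strip identities, aggregate spending by geography, and expose only groups large enough to stay anonymous. The records and fields are invented:

    # Anonymized aggregation: user ids are dropped, small groups suppressed.
    from collections import defaultdict
    from statistics import mean

    backups = [   # raw per-customer records a provider might hold
        {"user": "u1", "zip": "98101", "income": 72000, "spend": 2100},
        {"user": "u2", "zip": "98101", "income": 88000, "spend": 2600},
        {"user": "u3", "zip": "60601", "income": 65000, "spend": 1900},
    ]

    def anonymized_rollup(records, min_group=2):
        by_zip = defaultdict(list)
        for r in records:
            by_zip[r["zip"]].append(r)      # the user id is never carried over
        return {
            z: {"avg_income": mean(r["income"] for r in rows),
                "avg_spend": mean(r["spend"] for r in rows),
                "households": len(rows)}
            for z, rows in by_zip.items() if len(rows) >= min_group
        }

    print(anonymized_rollup(backups))       # 60601 suppressed: only one record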

While assembling these data is only an API call away, creating and validating business use cases for these assemblies is the real magic in this cauldron. Many companies now provide programmatic access to composite content and data as a service .. for a price. Aggregators of these data (Microsoft’s Windows Azure Marketplace, for one) broker transactions and collect fees for data access. Virtually any company can buy and sell data through this kind of marketplace, making for even more interesting business models.

There’s more (there always is). Applications as a service is a paradigm that has been around for a long time, but it is now expanding into more fee- and transaction-based models, including those with API access.

Back to the original topic: Are we now in the world of ‘anything-as-a-service’? Does XaaS exist?

It does, and you’re using it .. even though you might not realize it.

How Well Does Your Architecture Accommodate Change?

Designing and maintaining a flexible architecture is the grail of the IT, application development and business triumvirate in a company. A flexible architecture allows:

  • Rapid addition of new application features, and extension of existing ones.
  • Enhancing applications by including data from internal and external sources.
  • Exposing application data to a wider variety of devices and audiences.

With this flexibility, the business can pursue opportunities with minimal impact to baseline infrastructure.

The sad part: many architectures evolve (or mutate) over time, adding new, function-enhancing components to existing components as afterthoughts. In short, this is not an ideal scenario. The company may have achieved short-term business goals, but in an inflexible (and risk-ridden) way.

Without a flexible architecture, we see more than technical challenges: we see friction in company units:

  • IT and development can be seen as blockers, while the business is viewed as making unreasonable demands.
  • Time-to-market, and therefore potential competitive advantage, is lost.
  • Development cycles can be disrupted, adding significant expense to projects and products.

How do flexibility constraints manifest in the enterprise?

  • Physical and IT: not enough servers and / or not enough time / resources to deploy them.
  • Development of new features takes too long to code, or application / IT infrastructure won’t support enablement without changes to underlying components.
  • Access to internal and external data is restricted by policy, especially when business requirements demand enhanced security levels in light of the modern online world.

If poorly-supported changes are implemented, success can become a company’s worst enemy. Launching a product or feature atop an architecture that isn’t ready creates a new set of issues:

  • End user impact: users have a less-than-positive experience with your product.
  • Competitive risk: your great and game-changing ideas are exposed to the world before your application is ready for prime time.
  • Unanticipated downtime / impact on other systems: ‘bolt-on’ additions to an existing architecture can pose risks to the original components.

Avoiding issues and achieving success requires planning, execution and resources. The first two are wholly dependent on an organization’s ability to complete IT and development projects. The last is a hardware and resource issue that extending components into a solution that includes cloud computing (even if only on an interim basis) can help manage. Identifying your business goals and performing an inventory of your current state is an excellent place to start; a skilled architect can help you describe the future state and a migration path to your grail.

Composite Applications: Do You Use Them?

You probably do.

Simply put, composite applications assemble data from disparate sources and present the data in a single interface. An application that displays the system time is technically a composite application (although not a particularly interesting one).

You’ll find composite applications in consumer and business settings. They include:

  • Business process / supply chain management
  • Medical diagnostics
  • Financial systems
  • Location-based services

The most valuable use case for a composite application is presenting multiple sources of data to a user in an appropriate context.

  • A BPM / SC dashboard shows real-time inventory levels against real-time production demands, culled from disparate systems. This dashboard can alert the user to the risk of production delays due to stock levels.
  • Medical diagnostic software shows bodily statistics (heart, lung, oxygen levels, etc.) in response to outside stimuli (exertion or adding oxygen).
  • Financial software shows the response of a stock price due to news, and then reflects price changes in portfolio valuation.
  • LBS-enabled solutions create massive business opportunities simply by knowing where you are .. and what you might be able to buy / do while you are there.

In all cases, the ultimate recipient of the data is the user; we are the ultimate aggregators and consumers of the data that matters to us. A well-designed composite application will address our needs and use cases in context when gathering data to present to us.

I liken a composite application to a smart phone; in fact, I would argue that a smart phone is a composite application. If the smart phone has a robust enough operating system to permit user customizations (loading the content and the applications we deem most relevant), AND includes pillars like location and search, our aggregation and consumption of the data is second nature to us.

For example, a GPS-enabled phone can provide:

  • The weather in your current location, and as a result, what to wear.
  • The store that’s close by where you can buy something you need (possibly even the clothes you need because you didn’t check the weather first).
  • Directions to the store.
  • Your bank account balance to ensure you can buy what you need.
  • The method of payment for a treat along the way (I use the Starbucks Card Widget for my Android Aria to pay for my coffee these daze).
  • .. and so on (a toy sketch of this aggregation follows).
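
Pulling those threads together: a composite view is just an aggregation over many sources, shaped for one user. A toy Python sketch, where every “service” is a hypothetical stub rather than a real API:

    # One interface, many sources: the essence of a composite application.
    def weather(lat, lon):      return {"temp_f": 52, "raining": True}
    def nearby_store(lat, lon): return {"name": "Rain Gear Co.", "blocks": 2}
    def directions(store):      return f"walk {store['blocks']} blocks north"
    def bank_balance(user):     return 143.75

    def composite_view(user, lat, lon):
        w = weather(lat, lon)
        store = nearby_store(lat, lon)
        return {
            "wear": "raincoat" if w["raining"] else "t-shirt",
            "store": store["name"],
            "route": directions(store),
            "coffee_fund_ok": bank_balance(user) > 5.00,
        }

    print(composite_view("me", 47.61, -122.33))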

If you build a composite application (correctly), it will get used. Further, if you watch how it is used, you’ll learn how to improve your design to deliver what your customers need.

Some Bits on Infrastructure

Over the weekend, I got to thinking about infrastructure .. or, more specifically, about enabling technologies that might benefit an infrastructure by being always available in the context of a workload.

As always, I’m looking from the business perspective. Here are a few analogies:

  • When you build tracks, you can run trains.
  • When you build airports, you can land planes.

Both of the above are physical infrastructures. They enable business opportunities: income derived from the transport of people or goods.

Infrastructure matters.

Infrastructure isn’t just physical. Some more analogies:

  • An income tax referendum was put before the voters in Washington State this past year. Proponents said the tax would affect only the wealthy; roughly the top 2% income bracket.
  • Your city installs camera-equipped speed detectors in a school zone, complete with automatic citation delivery via US Post. They enforce the 20 mph speed limit during posted school hours (7am-9am and 3pm-5pm).

These cases represent infrastructure as well: in the former, the apparatus to levy taxes; in the latter, the apparatus to increase safety with the threat of corrective action. Here’s the rub:

  • With the taxation infrastructure in place, the government could apply the tax to the top 3%, 10% or everyone, for that matter.
  • Once the cameras, radar and mailing infrastructure is in place, the government can change the speed at which a driver gets nabbed during non-school hours. The regular speed limit is 30 mph; setting the infrastructure to 32 mph would raise funds for the city.

Despite the downsides above, once an infrastructure is built, it can (and frankly, should) be leveraged.

When building integration or cloud architectures, I examine the current infrastructure for opportunities, and I add other services (including ephemeral services) that I can reuse throughout the application portfolio. If something we need doesn’t exist, I look at deploying it as a service the rest of the system can consume.

How about this: a time service, whose primary job is to keep all the system clocks in sync. Knowing that it exists allows me to leverage this service to collect multi-machine activities as transactions in a single log file for auditing.
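
A quick Python sketch of that auditing idea .. because the (assumed) time service keeps the clocks comparable, per-machine logs can simply be merged by timestamp. The machine names and events are invented:

    # Merge per-machine logs into one ordered audit trail. This only works
    # because the time service keeps timestamps comparable across machines.
    import heapq
    from datetime import datetime

    logs = {
        "web01": [("2012-06-01T10:00:01", "request received"),
                  ("2012-06-01T10:00:04", "response sent")],
        "db01":  [("2012-06-01T10:00:02", "query started"),
                  ("2012-06-01T10:00:03", "query finished")],
    }

    def merged_audit_log(machine_logs):
        streams = [
            [(datetime.fromisoformat(ts), host, event) for ts, event in entries]
            for host, entries in machine_logs.items()
        ]
        return [f"{ts.isoformat()} [{host}] {event}"
                for ts, host, event in heapq.merge(*streams)]

    for line in merged_audit_log(logs):
        print(line)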

While time is important, so is location (enables mapping and directions), associated entities (enables B2C and B2B), shared storage (enables collaboration) and so on.

The key point: consider all the services available to you when evaluating an architecture extension or application extension project.

Cloud Computing: IT as a Service

At PDC last week, I heard Microsoft utter the umbrella term: “IT as a Service” to describe their Cloud Computing direction. In short, ITaaS encompasses the three commonly-known ‘as a Service’ offerings:

  • IaaS (Infrastructure as a Service): Virtual, but logically-physical hardware. Servers in the sky that you can connect to remotely as though they were actual hardware. You expend a lot of effort managing servers (imaging, patching, load-balancing, etc.), but have high flexibility, as they support most types of existing applications, which you can deploy without rewriting or porting code. Amazon has capability in this space, as does VMware.
  • PaaS (Platform as a Service): A virtual platform, where applications are deployed as services. You have next to no server management, and automagic scalability is built in, but existing application code must be ported to the new environment. Google App Engine fits here, as does the application platform side of the Windows Azure Platform.
  • SaaS (Software as a Service): An application built atop a base application. Some allow only configuration (tailoring organization- and user-specific information), while others offer higher levels of UI customization; think adding applications to Facebook or customizing your iGoogle or MSN home pages. On the business side, Force.com is the big player in this space with their CRM application.

Looking at the features now available in the Windows Azure Platform update, Microsoft looks to have the broadest offering in Cloud Computing; while light on packaged SaaS platforms, they’re heavy in the application development space.

But, there’s more (there always is): the Microsoft offering has also taken familiar server-based applications into the cloud. Exchange Online, SharePoint Online, Office Communications Online and Office Live Meeting are available as subscription services, wrapped under the umbrella brand: the Business Productivity Online Suite (BPOS).

Not to confuse the market with Darwinian brand evolution, Microsoft also announced Office 365, adding Office Professional Plus (Word, Excel, etc.) and Lync Online (unified communications) to the mix.

Like the rest of the cloud, the business model is pay-as-you-go, and your IT department doesn’t need to muck about with hardware for near-commodity application functionality.

Google is in the fray as well; Google Docs and Gmail are Cloud alternatives for productivity applications, and are mostly free for smaller organizations, although I found their documentation a bit daunting the last time I explored moving my domain there.

There are several TCO analyses out there; grab one (or ping me and I’ll assist). Suffice it to say: given the energy (not) expended managing servers, cloud computing should be on the radar of all organizations.
