Memo to @Clear: Amazing @ 0115

This seems to be a “what can I say?” moment for me.

After three calls to Clear today, I actually got some useful information:

  • My tower is at 24th and 15th in North City
  • Both my devices are connecting to it
  • The tower IS overwhelmed during peak (most) hours.

However, here in the wee hours .. wowza!

Test Results

  • Initial test with the home modem, the Clear Modem – Series G, in its current run state.
  • Reboot the home modem, reconnect and test with the same server. (didn’t bother)
  • Disable the Wi-Fi adapter, test with the Clear 4G Mobile USB.
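For what it’s worth, the timing part of a speed test is easy to script yourself. Here’s a minimal Python sketch .. the URL in the comment is a placeholder, not a real test server .. that times a partial download and estimates throughput in mbps:

```python
import time
import urllib.request

def measure_download_mbps(url, max_bytes=1_000_000):
    """Download up to max_bytes from url and estimate throughput in mbps."""
    start = time.monotonic()
    received = 0
    with urllib.request.urlopen(url) as response:
        while received < max_bytes:
            chunk = response.read(64 * 1024)
            if not chunk:
                break
            received += len(chunk)
    # Guard against a zero elapsed time on coarse clocks.
    elapsed = max(time.monotonic() - start, 1e-6)
    # bits transferred / seconds, scaled to megabits per second
    return (received * 8) / elapsed / 1_000_000

# Example (placeholder URL .. substitute any large, reachable test file):
# print(round(measure_download_mbps("http://example.com/testfile.bin"), 2))
```

A dedicated site like speedtest.net controls for server proximity and warm-up, so treat a sketch like this as a rough cross-check, not gospel.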

It’s capacity, capacity, capacity .. as in, there ain’t none during the peak hours. There’s an augment (capacity upgrade) planned ‘in the Spring’ .. last I checked, Spring started March 21st, a few weeks ago. As I just got billed for this month’s (lack of) service, I’m feeling a little miffed.

Time for another call to customer service. It will be fun.

April Fool’s Web Fun 2011

From LinkedIn, folks you may just want to know:

image

From Bing, we get a playful seal:

image

Google gives us body-motion-driven-GMail:

image

Chrome gives us Chromercise, a set of exercise routines for your fingers (I’m not kidding .. but they are):

image

CompUSA was attacked by aliens:

image

Hulu did a Wayback Machine view of how the site may have looked in the early days of the web, complete with blue links and scrolling text (yuck) .. fortunately, the text takes you back to the modern version of the site:

image

Speaking of that, here’s my original web site from June 2001:

image

But: whither Facebook? What did I miss? I would have expected, at the least .. a funny hat:

image

Yahoo was plain, but recommended the Yahoo version of Internet Explorer 9:

image

MSN was plain too, but recommended the MSN version of Internet Explorer 9:

image

Microsoft (corporate) didn’t join the fray.

What are your favorites?

Memo to @Clear: Where you’re good ..

.. you are VERY good.

This, from downtown Kirkland:

Please make this happen at home.

“Anything” as a Service (XaaS) .. you knew this was coming ..

In the Cloud Computing world, where so many things are in flux, it should come as no surprise that virtually “anything” can (and will) be provided “as a Service”. For starters, we had (I go into more detail on these in my “Cloud Computing: IT as a Service” post):

  • IaaS (Infrastructure as a Service): Virtual, but logically-physical hardware. Servers in the sky that you can connect to remotely as though they were actual hardware. You expend a lot of effort managing servers (imaging, patching, load-balancing, etc.), but have high flexibility, as they support most types of existing applications, which you can deploy without rewriting or porting code.
  • PaaS (Platform as a Service): A virtual platform, where applications are deployed as services. You have next to no server management, and automagic scalability is built in, but existing application code must be rewritten or ported into the new environment.
  • SaaS (Software as a Service): An application you customize / configure atop a base application owned by the service provider. Some allow only configuration (tailoring organization- and user-specific information), while others offer higher levels of UI customization; think adding applications to Facebook or customizing your iGoogle or MSN home pages.

Then, realization and logical extension brought us (in no particular order):

  • IT as a Service (ITaaS): Standard IT applications, like email, online file sharing, online backup, online meetings, online workspace collaboration and more. The keyword here is ‘online’, of course, but these are commodity (common, and available from multiple providers) applications that every organization utilizes to one extent or another. ITaaS creates the opportunity for an organization to make a ‘rent’ versus ‘buy and maintain’ decision that can help it preserve capital.
  • Extended ITaaS applications grew out of the above and include online disaster recovery (backup plus online storage), online synchronization (synchronization plus online storage), online content sharing (photo uploads, players for slide shows plus online storage) and more. Thanks to Cloud Computing, any size company can offer value-added services to enable these functions, acquire customers and pocket the difference between what they collect from their users and their monthly Cloud Computing fees.

For background, let me discuss some earlier methodologies and new technologies:

  • Application programming interfaces (APIs): Command and content structures exposed by a software program to allow access by another program. APIs allow the second program to control and obtain data from the first. We’ve had APIs for years and years.
  • Web Services: APIs exposed to the Internet and accessible by web sites, web applications and other web services. Web services are used to provide data to most client applications .. odds are, your mobile phone gets weather data from one web service, bus information from another, and so on. I discuss this at length in “Composite Applications: Do You Use Them?”. The answer is: “yes”, though you may not realize it.
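At the code level, consuming a web service usually boils down to an HTTP request plus JSON parsing. A minimal Python sketch .. the weather endpoint and field names here are hypothetical, purely for illustration:

```python
import json
import urllib.request

def fetch_json(url):
    """Call a web service endpoint and parse its JSON response."""
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

# Hypothetical weather service .. the URL and "temp_f" field are invented:
# weather = fetch_json("https://api.example.com/weather?zip=98033")
# print(weather["temp_f"])
```

Your phone’s weather widget is doing essentially this, on a timer, against a real provider.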

With these, evolution brings us to:

  • Content as a Service: Previously known as “web pages” (I’m kidding .. a little). Once a standard connection methodology (Web Services) allowed programmatic access to applications (via APIs), the sky became the limit. Content contained in web pages and enjoyed by end users could now be mashed into other applications on other sites. The new content enhances the host application, making it a more valuable resource. Zillow is a good example of a site that does this: publicly-available data like maps and real estate tax records are mashed together with local multiple-listing service data (which may or may not be available at no cost), resulting in a site that displays maps with home plats, taxes, prices and realtor references (and more) that the user can use to do research.
  • Data as a Service: Lots of companies have lots of accumulated data. Some accept the data in the form of online customer backups from their products, like Intuit (Quicken). Intuit could (I do not know if they do this) create an anonymous data warehouse with this end user data, assembling income and spending patterns by geography, and providing this data as a service to companies making business decisions about branch locations based on these criteria.
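The Zillow-style mashup described above reduces to joining records from independent sources on a shared key. A toy Python sketch .. all data here is invented for illustration:

```python
# Toy mashup: join public tax records with listing data on a shared parcel ID.
# Both datasets are invented for illustration.
tax_records = {
    "P-1001": {"assessed": 310_000, "annual_tax": 3_400},
    "P-1002": {"assessed": 455_000, "annual_tax": 5_100},
}
listings = [
    {"parcel": "P-1001", "price": 349_950, "realtor": "Example Realty"},
    {"parcel": "P-1002", "price": 489_000, "realtor": "Sample Homes"},
]

def mash(listings, tax_records):
    """Combine each listing with its matching tax record into one view."""
    combined = []
    for listing in listings:
        taxes = tax_records.get(listing["parcel"], {})
        combined.append({**listing, **taxes})
    return combined

for home in mash(listings, tax_records):
    print(home["parcel"], home["price"], home.get("annual_tax"))
```

In real life each dict would come back from a different provider’s API; the join logic stays this simple, which is exactly why the business case, not the code, is the hard part.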

While assembling these data is only an API access away, creating and validating business use cases for these assemblies is the real magic in this cauldron. Many companies provide programmatic access to composite content and data as a service, for a price. Aggregators of these data (Microsoft’s Windows Azure Marketplace, for one) broker transactions and collect fees for data access. Virtually any company can buy and sell data through this kind of marketplace, making for even more interesting business models.

There’s more (there always is). Applications as a service is a paradigm that has been around for a long time, but it is now expanding into more fee- and transaction-based models, including those with API access.

Back to the original topic: Are we now in the world of ‘anything-as-a-service’? Does XaaS exist?

It does, and you’re using it .. even though you might not realize it.

My Coffee Card (aka I knew I was on to something)

Drama ensues: My Starbucks Card application was suddenly absent on my Android phone.

A few months back, I was cheering the application that enabled me to pay for my Starbucks drink (of which, I order many) with my Android phone instead of my Starbucks Card. Not that my life depended on this, of course .. even I can do without caffeine. But .. still fun .. a bit of a cool factor .. and many baristas approved (there were cheers in the Kirkland Parkplace Starbucks .. I’m serious).

So, imagine my surprise while ordering my signature: triple-tall, vanilla soy latte (remind me to post the Haiku for same on this new blog) .. I couldn’t find the application on my phone. Embarrassed? A little .. but I still had my card, so I wasn’t carted off to the Starbucks prison.

The Internet rumor: Starbucks served the My Coffee Card application developer (Birbeck) a cease-and-desist order for offering the widget. It may be a rumor, but it’s founded on the fact that an international corporation will try to protect its brand against an application that:

  • Displays a bar code to be scanned by the Starbucks POS system.
  • Presents current card balance and purchase history.
  • Presents “stars” balance (when you get 15 stars, you get a snail-mail card for a free drink. Starbucks: this sucks, btw .. the free drink should be credited to the account to ensure the bonus is not lost).
  • Locates the closest Starbucks (on Google Maps) .. including directions from my current location, courtesy of the GPS on my phone.

To me (as an end user) .. EVERYTHING above is goodness. So: Why would Starbucks object? Why did my “Starbucks Card” application disappear? Well:

  • Alphabetization. The word ‘My’ comes before ‘Starbucks’. I wasn’t looking for ‘my’ after the most recent application update.
  • Branding. Not the fun hot-iron-on-your-arm kind of stuff. Instead it’s the icky legal-that-lands-you-in-court stuff. You know court: horrible rooms with wooden benches where you get to listen to old white men drone on and on (no, no: not church).
  • Liability: Starbucks doesn’t want to be responsible for an application that has access to personal and financial data .. they might consider it if they owned the application, but common sense dictates the company protect itself regarding assets it doesn’t own.

So .. hardly a mystery, but great fun to research and collect content.

Oh: the new logo is kind of cool (and may be the next thing for which Birbeck receives mail).

Plug: get the My Coffee Card Pro application for your Android .. only $1.99 (support your developer) from the Android Market.

How Well Does Your Architecture Accommodate Change?

Designing and maintaining a flexible architecture is the grail of the IT, application development and business triumvirate in a company. A flexible architecture allows:

  • Rapid addition of new application features, and extension of existing ones.
  • Enhancing applications by including data from internal and external sources.
  • Exposing application data to a wider variety of devices and audiences.

With this flexibility, the business can pursue opportunities with minimal impact to baseline infrastructure.

The sad part: many architectures evolve (or mutate) over time, adding new, function-enhancing components to existing components as afterthoughts. In short, this is not an ideal scenario. The company may have achieved short-term business goals, but in an inflexible (and risk-ridden) way.

Without a flexible architecture, we see more than technical challenges: we see friction in company units:

  • IT and development can be seen as blockers, while the business is viewed as making unreasonable demands.
  • Time-to-market, and therefore, potential competitive advantage is lost.
  • Development cycles can be disrupted, adding significant expense to projects and products.

How do flexibility constraints manifest in the enterprise?

  • Physical and IT: not enough servers and / or not enough time / resources to deploy them.
  • Development of new features takes too long to code, or application / IT infrastructure won’t support enablement without changes to underlying components.
  • Access to internal and external data is restricted by policy, especially if business requirements require enhanced security levels in light of the modern online world.

If poorly-supported changes are implemented, success can become a company’s worst enemy. Launching a product or feature atop an architecture that isn’t ready creates a new set of issues:

  • End user impact: users have a less-than-positive experience with your product.
  • Competitive risk: your great and game-changing ideas are exposed to the world before your application is ready for prime time.
  • Unanticipated downtime / impact on other systems: ‘bolt-on’ additions to an existing architecture can pose risks to the original components.

Avoiding issues and achieving success requires planning, execution and resources. The first two are wholly dependent on an organization’s ability to complete IT and development projects. The last item is a hardware and resource issue that extending components into a solution that includes cloud computing (even if only on an interim basis) can help manage. Identifying your business goals and performing an inventory of your current state is an excellent place to start; a skilled architect can help you describe the future state and a migration path to your grail.

Cloud: Oh EC2 .. Say it ain’t so!

Seen today when starting a single large EC2 instance:

Untitled

Gives another new meaning to the term ‘unlimited’.

Memo to @Clear: a kudos .. but why so un-clear?

I like Clear / Clearwire .. I like the concept, flexibility and pricing. Further:

  • I’m happy they’re trying to be friends with me .. and I’d like us to be friends. The company has a great product, right time and right place.
  • After going round with them on support chats and calls, I’d like to give the gang a kudos for trying to resolve my connection challenges .. and for their interim solutions.
  • However, after multiple contacts with their tech and customer support departments, unanswered questions remain.

FWIW: I do not mean for this to be a negative post. I am not trashing them. I am staying as close to the facts as possible. I do not intend for this to sound like a rant; please comment liberally if I do.

For starters, I’m going to cover some history in an effort to set the stage.

Clear / Clearwire is a wireless internet service provider, connecting computers and devices wirelessly across far greater than home Wi-Fi distances. I first contracted with the service in August 2010, and enjoyed speeds up to 6 mbps down, 1 mbps up. While not setting any speed records, this level of service provides enough for my general purposes at home. The cost? $45 / month for the 6/1 ‘unlimited’ (quotes mine .. see below) package.

A month or so into the contract, download speeds became inconsistent and very, very slow, with no real day-part pattern: we had great speeds during the peak hours of one night, and dismal speeds during the peak hours of the next. Late night / early morning speeds were typically better, but nowhere near the speeds we enjoyed the initial few weeks.

During my first IM chat with them, I went through several steps, including:

  • Checking the speed on http://speedtest.net.
  • Confirming there wasn’t “something on my PC”, i.e., a virus or other malware, followed by clearing my internet cache. Memo to Clear: get real.
  • Confirmed the three Rs of the wireless modem: reboot, relocate and refresh (they initiated a firmware patch / refresh on the last item .. and it disconnected me from the chat session).
  • Reconnected and re-tested.
  • Initiated another chat session (even though I connected to a different support professional, Clear keeps track of the chat records, so little time was lost).
  • I connected directly to the wireless modem (I use a firewall / router to protect my internal network and wanted to take that variable out of the loop).
  • Refreshing and reloading my address resolution protocol (ARP) entries. Memo to Clear: this isn’t for the novice user; I’m an experienced IT professional .. I only went along because I know how to fix it if it goes awry.

After all these steps, I was advised that “there is an issue with your account” and asked to call their toll-free line and speak directly to another support professional. Ouch moment here; if an account issue, why did we jump through all the previous hoops? “What is the issue?”, I ask. The reply: “All I can tell you is there’s an issue with your account and that you need to call in”. Clear: this is broken. Please give your level 1 support better information. Keep reading, and you’ll see why.

While on a short hold, I perused the Clear support forums (mentioned prominently in the hold recording). Wow! There are a LOT of folks complaining about bandwidth. I found a few threads that mirrored my own, and recognized that the support staff exercises reasonable due diligence in ensuring no issues on the local side. I saw lots of references to ‘managed’ accounts .. translating to ‘managed bandwidth’ accounts .. further translating to a few possibilities:

  • You’re using too much bandwidth, so Clear is ‘managing’ (throttling) your download speed.
  • The tower to which you are connected is overwhelmed (too many users, or broken in some way), so Clear is ‘managing’ download speed on your tower so everyone gets some bandwidth.
  • A public safety / security issue exists, so Clear is ‘managing’ everyone in the affected area (likely well beyond your tower). This can be region-wide, and beyond.

When the friendly Clear support person got on the line, my first question was about the “issue with my account”. In (what I now recognize as) typical Clear response, I got a question in return: “Do you download a lot of movies, or torrents?” Umm .. no, no I don’t. I reiterated my question, trying to discover if the “issue with my account” is because of my usage, or a technical / tower issue. No Clear answer (pun intended) from the service professional. It was late, so I let it go for the night.

Over the next few months, speeds were up and down, and I made a few more calls. It all came to a head for me around February 1, when download speeds dropped to 0.30 mbps while upload speeds maintained 1.0 mbps. Something else was wrong.

Another series of calls over a few days. The following bits represent an amalgam of these calls, with my speaking with the next level of technical support and supervisors as a matter of course (I now have a history with Clear, it seems):

  • I assured the agent I had IT skills and a reliable network (some resistance, as they wanted to run through the gamut of local tests again). I pushed back on the local tests, assuring the agent that I had connected a Wireless Ethernet Bridge to my neighbor’s Comcast-powered network (with their permission, of course; I had set up their network, so they were willing to grant me the favor) and measured 10 mbps download through my router and network (wired and wireless, connected to different switches, to boot).
  • Another objection and question: “Do you download a lot of movies, or torrents?” Umm .. no, no I don’t. My account usage (available in the account management tools) demonstrates this as well.
  • Another objection and I assured the agent I could supply speed test logs for a wide range of day parts, demonstrating my network can accept their data as fast as they can push it down .. which, at times is pretty fast.
  • Last objection, assuring the agent my next call would be to customer service to cancel my Clear account, and the call after that would be to the Better Business Bureau. Aggravated? A bit, a bit.

My main issue: why can’t you tell me if I am being ‘managed’ because of usage, or something else? If it’s me, please tell me:

  • Why doesn’t ‘unlimited’ mean ‘unlimited’? My contract says ‘unlimited’.
  • If ‘unlimited’ doesn’t really mean ‘unlimited’, what does it really mean? At what usage level will I be ‘managed’? I can monitor my bandwidth, so at the least, I will understand.

If it’s NOT my usage or my modem, please give me some information I can use.

I stayed on it .. more calls, all with level 2 tech support and supervisors. I collected lots of fun facts:
  • The modem will grab the tower with the strongest signal .. regardless of the capacity / usage of the tower.
  • The user does not have any way to tell to which tower they are connected. However, the account management tools (currently in beta) allow the user to see the locations of towers in their area and Clear coverage for their location (nice, actually).
  • The user does not have any way to change the tower to which they are connected, save by moving the modem to another place in the house, rebooting and hoping another tower will step up.
  • The agent CAN disconnect the modem from the current tower, but on other events (including a modem reboot), the modem will again grab the strongest signal.
  • Second-to-last: the tower with the strongest signal is overloaded.
  • Last: the affected tower is scheduled for an augment (increased capacity), ‘sometime in the Spring’.

Ouch, that was painful .. and a shame I had to drag it out of them. I am guessing I have more tenacity than many other users; it would have been far easier to cancel my account and select another provider. Memo to Clear: please be more clear.

My resolution? Still a work-in-progress, however, Clear is working with me:

  • They provided me with a USB modem to test in my location (you’ll love the results .. watch for my next post).
  • They covered one month’s usage charges (I will revisit this in the next month or two with them if service is still poor).
  • They forward me to Level 2 support and supervisors before I even ask (I think they’re protecting their staff .. noble).

For that, I’ll keep working with them.

Composite Applications: Do You Use Them?

You probably do.

Simply put, composite applications assemble data from disparate sources and present the data in a single interface. An application that displays the system time is technically a composite application (although not a particularly interesting one).

You’ll find composite applications in consumer and business settings. They include:

  • Business process / supply chain management
  • Medical diagnostics
  • Financial systems
  • Location-based services

The most valuable use case for a composite application is presenting multiple sources of data to a user in an appropriate context:

  • A BPM / SC dashboard shows real-time inventory levels against real-time production demands, culled from disparate systems. This dashboard can alert the user to the risk of production delays due to stock levels.
  • Medical diagnostic software shows bodily statistics (heart, lung, oxygen levels, etc.) in response to outside stimuli (exertion or adding oxygen).
  • Financial software shows the response of a stock price due to news, and then reflects price changes in portfolio valuation.
  • LBS-enabled solutions create massive business opportunities simply by knowing where you are .. and what you might be able to buy / do while you are there.

In all cases, the ultimate recipient of the data is the user; we are the ultimate aggregators and consumers of the data that matters to us. A well-designed composite application will address our needs and use cases in context when gathering data to present to us.
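In code terms, a composite application is just an aggregation layer over independent sources. A minimal Python sketch .. the two “sources” here are invented stand-ins for real systems like inventory and production demand:

```python
# Minimal composite-application sketch: each "source" stands in for an
# independent system (inventory, production, news feed, etc.).
def inventory_source():
    return {"widgets_on_hand": 120}

def demand_source():
    return {"widgets_needed_today": 150}

def compose_dashboard(sources):
    """Pull from every source and merge the results into one view."""
    view = {}
    for source in sources:
        view.update(source())
    return view

dashboard = compose_dashboard([inventory_source, demand_source])
if dashboard["widgets_needed_today"] > dashboard["widgets_on_hand"]:
    print("ALERT: production at risk .. stock below demand")
```

The value isn’t in the merge; it’s in putting the right sources side by side so the alert (the context) becomes obvious to the user.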

I liken a composite application to a smart phone; in fact, I would argue that a smart phone is a composite application. If the smart phone has a robust enough operating system to permit user customizations (loading the content and the applications we deem most relevant), AND includes pillars like location and search, our aggregation and consumption of the data is second nature to us.

For example, a GPS-enabled phone can provide:

  • The weather in your current location, and as a result, what to wear.
  • The store to buy something you need that’s close by (possibly even the clothes you need because you didn’t check the weather first).
  • Directions to the store.
  • Your bank account balance to ensure you can buy what you need.
  • The method of payment for a treat along the way (I use the Starbucks Card Widget for my Android Aria to pay for my coffee these daze).
  • .. and so on.

If you build a composite application (correctly), it will get used. Further, if you watch how it is used, you’ll learn how to improve your design to deliver what your customers need.

Cloud Computing: Migrating to Microsoft BPOS

To make the search engines happy: “Migrating Microsoft Small Business Server Exchange Server to BPOS”. Note that I did a full migration, not an email co-existence (where some users are on BPOS and others are local).

This turned out to be pretty much a no-brainer .. although in the process, I made it harder than it needed to be (not on purpose, of course).  Here’s the short story:

Original State:

  • Microsoft Small Business Server 2000 (with Exchange 2000)
  • Microsoft POP3 Connector collecting mail from an internet email server
  • A mix of Windows XP and Windows 7 clients
  • A mix of Microsoft Office 2003, 2007 and 2010

Final State:

  • Exchange Online
  • A mix of Windows XP and Windows 7 clients (upgrade work in-progress)
  • (Mostly) Microsoft Office 2010

My biggest concerns:

  • Avoiding mail interruption (MX record delays, profile confusion, etc.)
  • Preventing loss of email content
  • Migrating existing email to the online services

As I’d migrated my own (much smaller) domain to the cloud a few months ago, I figured I had it cooked. However, I was wrong. My steps were:

  • Create BPOS users and assign email services
  • Cut the MX record away from my domain to BPOS
  • Set up a second profile (pointing to BPOS) on each Outlook 2010 client
  • Drag and drop old email between profiles in Outlook

While this worked nicely (I have only five users on my domain), it was not optimal. I avoided email interruption and loss of email content, but went to a much greater level of effort in the process. In my defense, Microsoft had not yet released the Exchange Email Migration Process document .. as it did not exist.

Sometimes it’s hell to be an early adopter. That said: kudos to the team at MS who built the migration tool.

The correct (and easier) way? Well, start with a plan that includes:

  • Users: User list, size of each mailbox, user (or workstation) availability during / after the migration. Depending on the size of your domain, be prepared to create logical groups you can migrate in succession. Suggestion: prepare good communication and ensure user rights: to keep your hands off the workstations, your users must be able to install software on their local machines.
  • Clients: I upgraded to Outlook 2010 as part of the migration; some before the migration, some after. Outlook 2003 and 2007 work just fine if you choose to remain on older versions. There appears to be no client-side difference as to when you do the upgrade.
  • Schedule: Migration is simple, but it does take time. If executed properly, users will receive new emails immediately, but may not have access to older emails without incurring complexity on the workstation (i.e., accessing multiple profiles).
  • Your availability: depending on the technical prowess of your users and the clarity of your instructions, you will have questions to address. Make sure you have time to work with your users and watch your migration process.

I’m serious: make a plan. To get you started (these steps can occur in parallel):

  • With your LiveID, create your Microsoft Online Services account, identifying the services (and quantity of each) you want. You have the option of a 30-day trial before signing up for a one-year commitment. The full-meal deal is $10 / user / month. You’ll be asked to create an internal domain (something like yourdomain.microsoftonline.com) and an administrator account.
  • In the Microsoft Online Services Administration Center, create your users and assign desired services to each. Have the creation process email you their initial (system-assigned) passwords.
  • Ask your users to install the Microsoft Online Services Sign-in Client. This is a fairly simple process and a fast install. I asked my users to do the install but not the login, as I wanted to make sure everyone could get installed before wrapping myself up in loops that included system troubleshooting.
  • Validate your MX record. While you’re at it, confirm you can access your domain registrar to make changes to it. The validation process is harmless; that is, it won’t make any changes to your email domain. However, you must confirm with BPOS that you can redirect the record when the time comes. It takes about a day to confirm. Microsoft has registrar-specific instructions on the Migration pane in the Microsoft Online Services Administration Center.
  • Install the Microsoft Online Services Migration Tools on a capable workstation (mine was pretty low-end). See the minimum requirements at Migration Tools Prerequisites. As the tool only migrates one email account at a time, you might try installing on more than one machine. The link to install is in the Microsoft Online Services Administration Center, under the ‘Migration’ tab.
  • Confirm your Migration Tools can connect to your local Exchange Server.

When your MX record is validated AND your users have the Microsoft Online Services Sign-in Client installed AND you can connect to your local Exchange Server with the Migration Tools (the following steps can also occur in parallel):

  • Edit each user in the Microsoft Online Services Administration Center, changing the domain from yourdomain.microsoftonline.com to yourdomain.com.
  • Forward the system-assigned password instructions to your users (note: Microsoft could do a little better job of automating this, starting with a subject line that includes the user name), changing the domain name to yourdomain.com. Your users will sign in with the system-assigned password, changing it on their first login. When they do, the login client will set up a new Outlook profile with their full email address. Caveat: I don’t know what will happen if the current profile is named with the full email address; mine were all ‘Outlook’. Possibility: if you create the users after you validate the MX record, you might be able to avoid the ‘changing the domain name’ step above. I didn’t.
  • Warn your users: they will see an empty Inbox after the previous step. Have them close Outlook and select their old profile until their migration is initiated. Once their individual mailbox migration is initiated, they should use the new profile instead of the old. Microsoft did a nice job of email forwarding for in-process mailbox migrations; mail is delivered to both profiles. Possibility: you may want to run migrations after COB, advising users to log onto their new profiles in the morning. This depends on the number of mailboxes you need to migrate, and the size (i.e., time required) of each.

You’re now ready to go. The rest is actually simple:

  • Update your MX record (instructions for this are on the Migration tab in the Microsoft Online Services Administration Center). Email will go to the old Exchange location and be forwarded to the new until the record is propagated. Once email stops showing up in the old profiles, the MX record is moved.
  • Run the Migration Tool, identifying the users you want to migrate.
  • Advise the users to start using their new profile.
  • Wait (bandwidth, mailbox size).
  • Watch for errors; I had a few users with corrupt records. Start another user while dealing with these.

Once migration is complete (confirm this by email delivery to only the new profiles):

  • Write your users with instructions on how to change their default profile to the new one. You can avoid deleting the old (depending on your retention policies); the OST will persist until you remove it.
  • Users will need to set up their email signatures; they can pull existing ones from their Sent Items folder.
  • Shut down your local Exchange Server services, archiving per your local policies.

I managed to get through all but one of the migrations without issue (the record-corruption errors I referenced above). However, this was just in time for a service outage. Service outages happen, usually for only a few minutes. During this time, the Outlook icon in the status bar will show a tiny hourglass, and your Outlook windows may freeze. Release this choked thread: mouse over the status bar icon, right-click and click ‘cancel server request’. Your mail windows will refresh so you can save drafts and documents.

You can check the status of the service in your region with the links below .. HOWEVER, it requires a login (dumb). If the login service is hosed, so is the ability to get service status:

Here’s what I saw tonight .. argh:

image

That said, the service is quite reliable, and I don’t need to manage my Exchange Servers anymore. Add to this the improved access to mobile and web clients, and you have a terrific commodity-based service you can share with your customers.