Unified Boxes: The Sum of All Fears

Correction: I mistakenly included SQL in the supportability statement; I was actually speaking about the stack as a whole, including backup. Sorry for that.

Hi there. Earlier this week, fellow MVP Michel de Rooij published a blog post http://eightwone.com/2014/07/02/exchange-and-nfs-a-rollup/ about NFS/Exchange support (again), and it motivated me to jump into the pool and add my own experience.

There was some hesitation in the MVP community about whether we should blog or speak about it. Michel was brave enough to jump in and cover the topic, and after exchanging some emails, we (including fellow MVP Dave Stork) agreed that this post was important, so we wrote it.

If you want to read more, check Tony Redmond’s article: http://windowsitpro.com/blog/raging-debate-around-lack-nfs-support-exchange

So, where does the story begin?

I currently work for a major data center provider. In my role, we try to innovate and find new technologies that will save us time, effort and money, and my team was investigating the unified-box option.

But before delving into the technical part, let me give you a brief background on where I am coming from. My position as an architect at a service provider is an awkward one: I am a customer, a partner and a service provider all at once. I don’t only innovate, design, implement, support or operate; I do all of that, and that makes me keen to investigate how every new piece of technology will be designed, implemented, supported and operated.

Now, speaking about unified boxes: I was blown away by their capabilities. The potential savings in space, time and effort with these boxes are massive, but there is a catch: they use NFS, the source of all evil.

NFS has been used for years by VMware as a “cost-effective” shared-storage option. A lot of customers adopted NFS over FC because of the claimed savings in money and complexity, but NFS has its own issues (as we will see later).

I was a fan of the technology and created a suggestion on ideascale.com to bring the issue to the product group’s attention. We did our best, but Microsoft came back and informed us that NFS won’t be supported. They have their own justifications, and we are not here to judge Microsoft, but the bottom line is that NFS is not supported as a storage connectivity protocol for Exchange.

The reason for this post is to highlight two things to the community:

  • NFS is not supported by Microsoft for Exchange (any version), and there is no workaround for this.
  • Choosing a unified box as a solution has its own ramifications that you must be aware of.

I am not here to say that Nutanix, SimpliVity, VMware VSAN, etc. are good or bad. I am highlighting the issues associated with them, and the final decision will be yours, totally yours.

I was fortunate enough to try all of the above: I got some boxes to play with and tested them to the bone. The testing revealed some issues; they might not be issues to you, but from my point of view they are:

  • Supportability: Microsoft doesn’t support placing Exchange on NFS. With the recent concerns about the value of Exchange virtualization (see this blog post from fellow MVP Devin Ganger: http://www.devinonearth.com/2014/07/virtualization-still-isnt-mature/), these boxes and this set of technologies might not be the best fit for these specific products. You might want to go with physical servers or other options for Exchange/SQL rather than an unsupported configuration, even though vendors may push you toward their boxes and blind you with how great and shiny their solutions are. The bottom line: they are not supported by Microsoft, and they won’t be in the near future.
  • Some of the above use thin-provisioned disks. Exchange only supports fixed, fully pre-allocated virtual hard disks, while thin-provisioned disks are expanded dynamically on the fly as storage is consumed, which is another unsupported configuration (see the sketch after this list for one way to check and correct this).
  • The above boxes have no FC extensibility, and you are limited to a maximum of 2 x 10 GbE connections (I don’t know if some have 4, but I don’t think so). This means you have no option to do FC backups; all backups have to go through the Ethernet network. We can spend years discussing which is faster or slower, but in my environment I run terabytes if not petabytes of backups, and they were always slow on GbE networks; all of our backups have to be done over FC.
  • This also means you will run backup, operations, production and management traffic on a single team over shared networks (maybe two teams, or over 1 GbE). That might be fine for you, but for larger environments it is not.
  • These limitations cap the number of network connections. A single team with 2 NICs might be sufficient for your requirements, maybe two teams, but some of my customers have different networking requirements and this will not fit them.
  • Some of the above boxes do read/write caching. Some of my customers ran into issues when running Exchange Jetstress and other high-IO applications, and the only solution provided by the vendor’s support was to restart the servers to flush the cache drives.
  • Some of the vendors run compression/deduplication in software, which requires a virtual machine with 32 GB of memory or more just to start using deduplication.
  • All of the above use NFS, which affects VAAI. VAAI is critical, as it accelerates storage operations by offloading those tasks directly to the array; on NFS you can’t use VAAI cloning for virtual machines that have snapshots or are powered on, meaning you either rely on the cache or must shut down the virtual machines to benefit from it. VAAI is a very important and critical element, so you must understand the effects of losing it.
  • These boxes don’t provide tiering. Tiering is another important capability if you are running your own private cloud, allowing you to provision different storage grades for different workloads; it is also important if you want to move hot data to faster tiers and cold data to slower tiers. Tiering touches the heart and soul of every cloud (private or public), and you must understand how this will affect your business, operations, charging and business model.
  • From a support, operations and compliance point of view, you are still running an unsupported configuration in terms of disk provisioning and storage backend; again, it is your call.
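
To make the disk-provisioning point above more concrete, here is a minimal PowerCLI sketch of how you could report thin-provisioned disks and add a fixed (eager-zeroed thick) disk of the kind Exchange expects. The vCenter and VM names are placeholders, and this is an illustration of the idea rather than a turnkey script:

    # Connect to vCenter (hypothetical server name)
    Connect-VIServer -Server vcenter.contoso.local

    # Report which virtual disks are thin provisioned
    Get-VM | Get-HardDisk |
        Where-Object { $_.StorageFormat -eq 'Thin' } |
        Select-Object Parent, Name, CapacityGB, StorageFormat

    # Add a fixed-size (eager-zeroed thick) disk to an Exchange VM
    New-HardDisk -VM (Get-VM -Name 'EXCH01') -CapacityGB 500 -StorageFormat EagerZeroedThick

Note that on plain NFS datastores the thick formats may not even be available unless the array supports the VAAI-NAS reserve-space primitive, which only reinforces the point above.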

I am not saying that unified boxes are bad. They are a great solution for VDI, big data, branch offices, web servers, application servers and maybe databases that support this sort of configuration, but certainly not for Exchange.

We can spend ages discussing whether the above is correct, valid or logical, but these are concerns that might ring some bells at your end. What is certain is that the above configurations are not supported by Microsoft, and unless Microsoft changes its stance, we can do nothing about it.

We, as MVPs, have done our duty and raised this as a suggestion to Microsoft, but the decision was made not to support it. It is up to you to decide whether you want to abide by this or not; we can’t force you, but it is our duty to highlight the risk and bring it to your attention. And as MVPs and independent experts, we are not attracted to the light like butterflies; it is our duty to look deeper, beyond the flash of the brightest and greatest, and to understand and explain the implications and consequences of going this route so you can come up with the best technical architecture for your company.

Optimizing WAN Traffic Using Riverbed Steelhead – Part 2: Optimizing Exchange and MAPI Traffic


In part one (http://www.sureskillz.com/2014/01/02/enhancing-wan-performance-using-riverbed-steelheadpart1-file-share-improvements/) we explored how to optimize SMB/CIFS traffic using Steelhead appliances; in part 2 we will explore how to optimize MAPI connections.

WARNING: Devin Ganger, a fellow Microsoft Exchange MVP, warned me that MAPI traffic optimization only works in very specific scenarios, so test it before you rely on it. I checked the documentation and tried it in my lab and it worked, but of course my lab doesn’t reflect real-life scenarios.

Joining the Steelhead to the Active Directory Domain:

In order to optimize MAPI traffic, you must join the Steelheads to the Active Directory domain. If you don’t, you will see the MAPI traffic, but the Steelheads won’t be able to optimize it because it is encrypted. To allow the Steelhead to decrypt the traffic, you need to join it to Active Directory and configure delegation.

image

As you can see above, the Steelhead compressed the traffic but had no visibility into its contents and couldn’t optimize it further. Now let us see what to do.

To join the Steelhead to Active Directory, go to Configuration/Windows Domain and add the Steelhead in RODC mode, or as a Workstation if you prefer:

image

(You need to do this on the Steelheads on both sides.)

Once done, you will see the Steelhead appear in AD as an RODC:

image
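
If you prefer to confirm this from the command line rather than from the screenshot, a quick check with the ActiveDirectory PowerShell module might look like the sketch below (the account name STEELHEAD1 is just an example; use whatever name your Steelhead registered with):

    # Minimal sketch: confirm the Steelhead's machine account exists in AD
    Import-Module ActiveDirectory

    Get-ADComputer -Identity 'STEELHEAD1' -Properties PrimaryGroup, OperatingSystem |
        Select-Object Name, Enabled, PrimaryGroup, OperatingSystem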

Now you need to configure account delegation. Create a normal AD account with a mailbox; I will call this account MAPI. Once created, add the SPN to it as follows:

setspn.exe -A mapi/delegate MAPI
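
As a quick sanity check (following the example account above), you can query the SPN back before moving on:

    # List the SPNs registered on the MAPI delegation account
    setspn.exe -L MAPI

    # Search the forest for the specific SPN to make sure there is no duplicate
    setspn.exe -Q mapi/delegate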

Once done, add delegation to the Exchange MDB service on the account’s Delegation tab:

image
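
If you prefer scripting over the GUI, the same delegation can also be configured with the ActiveDirectory module. This is a rough sketch only; the mailbox server FQDN below is hypothetical, and the service entry should match whatever you selected on the Delegation tab:

    # Allow the MAPI account to delegate to the Exchange MDB service (constrained delegation)
    Import-Module ActiveDirectory
    Set-ADUser -Identity 'MAPI' -Add @{ 'msDS-AllowedToDelegateTo' = 'exchangeMDB/mbx01.contoso.local' }

    # Enable protocol transition ("use any authentication protocol")
    Set-ADAccountControl -Identity 'MAPI' -TrustedToAuthForDelegation $true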

Once added, go to Optimization/Windows Domain Auth and add the account:

image

Test the delegation and make sure it works fine:

image

Now go to Optimization/MAPI and enable Outlook Anywhere optimization and MAPI delegated optimization:

image

Then restart the optimization service and configure the other Steelhead with the same settings.

Now let us test the configuration and see whether the Steelhead works.

 

While checking the real-time monitoring, the first thing you will notice is that the appliance now detects the traffic as Encrypted MAPI:

image

I will send a 5 MB attachment from my client, which resides at the remote branch, to myself (so I am both sending and receiving). Let us look at the report statistics:

image

image

You can now see some traffic flowing. Since the traffic is decrypted, it has been compressed and reduced in size: the LAN traffic is about 3 MB while the WAN traffic is about 1.8 MB. Then, when receiving the email, the client got it as 5 MB, but look at the WAN traffic: it is only 145 KB, because the attachment wasn’t sent over the WAN; the client received it from the local Steelhead.

Now let us send the same attachment again and see how the numbers move this time.

image

Can you see the numbers? The WAN traffic was around 150 KB (the email header, etc.), but the attachment didn’t travel over the WAN. It is clear the attachment traveled over the LAN when sending and receiving but never traversed the WAN, so the WAN traffic was massively reduced. Impressive, huh?
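
To put rough numbers on that second transfer (using the approximate figures above): the attachment is about 5 MB, or roughly 5,120 KB, and the WAN carried only around 150 KB, so 150 / 5,120 ≈ 3% of the payload crossed the WAN, a reduction in the neighborhood of 97% for this warm transfer.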

Installing Symantec Encryption Server & Exchange 2010 Configuration – Part 3: Sending Encrypted Emails

In part 1 and part 2 we explored the basics of installing the SES and configuring and managing encryption keys. In this part we will glue parts 1 and 2 together and send encrypted emails.

Understanding Email Policies:

Email policies are the foundation for handling email: they determine how messages from specific senders, sent to specific recipients with specific content, will be handled.

A set of default policies is created out of the box:

image

They determine how outbound and inbound emails will be handled. The default policy has the following settings:

image

The outbound client policy has the following settings:

image

which tells the SES that, if the source client is SMTP/MAPI, the email should be passed to the outbound chain, which performs the encryption actions:

image

If we explore the outbound chain, we will find the following settings:

image

which instructs the SES how to handle specific emails that match specific conditions. I edited this rule and added a “confidential” rule, which encrypts emails sent internally or externally with the word “confidential” in the subject line. You can add your own set of rules to meet your business needs and enforce certain delivery types, such as web delivery or protected PDF:

image

Once you have set the rules, you can send encrypted emails. Let us see how:

From the Outlook client, I will send a normal email to user@domain.com (a fictional domain). The client will detect the policy set on the server and will send the email out of the normal message stream to the SES:

image

Because no key can be found for user@domain.com, the email is sent to the SES server, and the SES sends the user an email notifying him that there is a message waiting for him:

image

In the above email, I am opening the EML file in Notepad (I only have an SMTP server on the recipient side). The message contains the link to open the email (take a look at how the email flowed from the client to keys “the SES server” to Exchange to the recipient server).

When opening the link, the user is prompted to register in the SES portal with a passphrase; then the user can log in:

image

Once the user logs in, he can see the email through the portal. The user can reply and interact securely with the internal user, or ask for email delivery via secure PDF:

image

image

We have reached the end of this series; we can now send and exchange emails securely with Symantec Encryption Server. I hope you liked this series.