Upgrading Unity Connection 9.1.2SU2 to 11.5.1SU2

Prologue: This post is intended as an informative prep document for those planning to upgrade Unity Connection (in an HA setup) from 9.1.2SU2 to 11.5.1SU2 using the OS platform upgrade and not PCD (because reasons…) – your upgrade mileage may vary and likely will. These are notes from my upgrade experience, and while many of these steps are universal to the upgrade process, each upgrade is a unique snowflake. Do not use this post as a step-by-step how-to guide. You will be sorry, and I promise TAC won’t be impressed when you tell them that you followed a process from a blog, even one as awesome as mine.

So let’s get started on this upgrade from Unity Connection 9.1.2SU2 to 11.5.1SU2 – the best document to begin with is the Unity Connection upgrade guide found here. Your best bet is to read it through several times and to not be surprised when you can quote any particular paragraph and page number in your sleep.

Also, reviewing the release notes found here is a must.

Once you’ve read the docs and are starting to put together a plan, here’s some guidance on what that plan should entail:

Licensing. You will need some. Be sure to order your upgrade through the Product Upgrade Tool (PUT) and be sure that your licensing server can handle the version of Unity Connection you are going to. You may need to upgrade your license server or add license definitions to it to pull this off.

Installing a .cop file. Installing a .cop file for RSAv3 keys is listed in the upgrade guide. Digging deeper into this, if you are at version 9.1.2SU2 (AKA 9.1.2.12901-3), that build already contains the same fix as the .cop file.

To quote the release notes for this .cop file:

If you are already running CUCM release 10.x or higher you already have this fix and do not need to install this Cisco Options Package (COP) file. If you are running an Engineering Special (ES) or Service Update (SU) that already contains CSCua88701 you do not need to install this COP file. CSCua88701 is included in: 8.5.1.17123-1 and higher, 8.6.2.24122-1 and higher, 9.1.2.11018-1 and higher.

I skipped the installation of this .cop file per this information, and also because the .cop check in the “run cuc preupgrade test” output, run before the upgrade, returned no errors. For more details, check out this document.
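For reference, both of those checks happen at the CLI of each server. A minimal sketch – the exact output sections vary by version, so treat this as illustrative:

show version active
run cuc preupgrade test

The first command confirms the build you are actually on (9.1.2.12901-3 in this scenario); the .cop file results show up as one of the sections in the preupgrade test output.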

Confirm ESXi support. You will want to be sure the ESXi version your virtual machine is running on is supported for the version of Unity Connection you want to be on, and that information lives here.

Determine OVA changes. This step requires checking here and comparing these requirements to your current virtual machine. You will likely need to change your virtual machine’s vNIC to VMXNET 3, as that adapter type is required. You may need to engage a little VMware PowerCLI magic to pull this off.* Note: if you need more RAM or hard drive space, TAC says a rebuild of the server is required. I’ve heard rumors of engineers changing the memory, but the official stance of support is not to touch that stuff.

Determine your disaster rebuild strategy (or strategies). I recommend downloading and installing COBRAS, getting it running, and exporting all your data from your voicemail servers. With this data, you can import into a fresh build if it comes to that. You should also be sure to have a good DRS backup, which, when restored, will have all your settings, versus COBRAS, which will require you to recreate some settings/features.
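As a side note, if you want to sanity-check that DRS backup from the CLI instead of the DRS web pages, the platform offers a couple of read-only commands – a sketch, assuming your backup device is already configured, and the exact output varies by version:

utils disaster_recovery status backup
utils disaster_recovery history backup

The first shows the status of any backup currently running; the second lists recent backups and whether they completed successfully.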

For me, the basic flow of the upgrade went something like:

Exported data using COBRAS
Confirmed available space on the common partition – you need at least 25 GB free. I did not have to change the high and low water marks; refer to the guide if you need to.
Confirmed nightly backup
Verified voicemail ports registered pre-upgrade
Confirmed HA and replication status pre-upgrade (see the CLI sketch after this list)
Ran “run cuc preupgrade test”
Chose not to use the utils iothrottle disable command since this is a 24/7 production environment – the command requires a reboot after entering it, and re-enabling it post-upgrade would mean another reboot
Installed upgrade to publisher, automatic switch version required
Waited for services to establish
Installed upgrade to subscriber, automatic switch version required
Waited for services to reestablish and replication to be good
Powered off pub, ran VMware PowerCLI commands to change vNIC
Powered on pub, waited for services to reestablish
Powered off sub, ran VMware PowerCLI commands to change vNIC
Powered on sub, waited for services to reestablish
Confirmed replication and tested voicemail boxes, call handlers, and MWI functionality
Confirmed all voicemail ports registered to CUCM cluster
Checked RTMT tool for any system error messages
Emailed licensing@cisco.com with the generated license request from the license server, the active SWSS/UCSS contract number, and the PUT order number, and requested that only the Unity Connection licenses be migrated.
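For the replication and HA checks called out in the list above, the platform CLI does the heavy lifting. A minimal sketch – column names and states shift a bit between versions, so treat the output with appropriate suspicion:

show cuc cluster status
utils dbreplication runtimestate

The first shows which server currently holds the Primary and Secondary roles in the HA pair; the second shows database replication status – you are looking for a replication setup state of (2) Setup Completed on all servers.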

While this upgrade completed successfully without any downtime, I did run into several issues whose resolutions aren’t exactly satisfying. If you’ve seen these or have insights into them, please leave a comment – I am sure I’m not the only one who has encountered these, and I would love to hear how others saw and dealt with them.

After the publisher rebooted when the install completed, the secondary’s web interface returned a 404 and would not come back even after a reboot of the secondary. This forced the upgrade on the secondary to be initiated via the CLI, but the issue went away in the new version.

After reboots, both servers established voicemail services (voicemail boxes available, call handlers working, etc.) in a matter of 10–15 minutes, but watching the RTMT tool, the CM platform services went into a CRITICAL DOWN state for about 30 minutes before eventually settling into an up state. Cisco Unified Serviceability was unreachable during this time, but voicemail services were working. No answer from TAC or the forums as to whether anyone has seen this before and whether it will cause long-term issues.

A red DNS warning shows up in OS Administration and the CLI for the servers even though DNS is working fine. Some forums suggest that I may have two entries in DNS forwarding, but that hasn’t panned out to be true. TAC suggests it’s a “cosmetic bug” and that since the functionality is clearly working, not to worry about it – yay?

Had to manually stop and restart replication after the power down/up for the vNIC changes. I’m hoping this was a post-upgrade fluke, but time will tell.
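For anyone who hasn’t had the pleasure, the stop/restart amounts to the standard replication reset sequence – a sketch only, so check the CLI reference for your version before running it:

utils dbreplication stop
utils dbreplication reset all
utils dbreplication runtimestate

Run the stop on the subscriber first and then the publisher, kick off the reset from the publisher, and then watch runtimestate until everything reports (2) Setup Completed.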

*If you are looking for some VMware PowerCLI magic for those vNIC changes, I offer this – without support or guarantee, only that these commands did the trick for me.

# Connect to the host (vCenter or a standalone ESXi host)
Connect-VIServer [esxi IP address]

# Confirm the VM is visible and check its current network adapter type
Get-VM "servername"
Get-VM "servername" | Get-NetworkAdapter

# Change the adapter type to VMXNET 3 (do this with the VM powered off)
Get-VM "servername" | Get-NetworkAdapter | Set-NetworkAdapter -Type "vmxnet3"

# Reload the VM's configuration so the inventory reflects the change
Get-View -ViewType VirtualMachine -Filter @{"Name" = "servername"} | %{$_.reload()}

 

Published 1/8/2018

Changing your Unity Connection SMTP domain

Changing the SMTP domain on a Unity Connection server really isn’t that big of a deal, but as with all things voice, no change works as initially advertised. Previously, I had never had cause to mess with the SMTP domain address, but recently one of the major cell providers quit delivering our voicemail message notifications to devices and, not surprisingly, users were none too happy about it.

My guess was that the carrier in question didn’t much like the format of the sender address, since it included a subdomain: unityconnection@myservername.mydomainname.com. I quickly decided that changing the SMTP domain on the server would easily test that theory, and it seemed far less painful than opening a ticket with a large service provider*. I did open a TAC case just to see if there were any caveats in making this change I might want to be aware of. That’s a whole lot of voice experience talking…err, writing. The voice paranoia runs deep for a reason.

TAC indicated that not only was this as simple a change as thought, but that only one service would have to be restarted – the Connection Conversation Manager – and that it wasn’t a big deal. Well, finding that hard to believe**, I proceeded to make said change and found that there’s a little more to the story.

First – there are *three* services that have to be restarted, and since two of them are critical services, you experience a failover if you are running in HA. The system does warn you this will happen, and for what it’s worth I did not experience a loss of service doing this. Certainly don’t blame me, though, if you do have an outage and aren’t in a maintenance period when you attempt this change.

[Screenshot: the Unity Connection warning listing the services that restart when the SMTP domain is changed]
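If you’d rather bounce the services from the CLI than click through Serviceability, that works too – a sketch, and note that the two services beyond Connection Conversation Manager are whatever your version’s warning prompt lists:

utils service list
utils service restart Connection Conversation Manager

The first command shows you the exact service names your version uses, which you will want before typing the restart.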

 

Of course, you get to rinse and repeat on the secondary server in an HA environment. Personally, nothing about the warning prompt I got on the secondary server indicates this is not a big deal, but hey, maybe that’s just me…

[Screenshot: the same Connection Conversation Manager restart warning on the secondary server]

That being said, once the services were restarted and my blood pressure returned to normal, I expected to see the SMTP domain updated and a happy dance to ensue. Alas, that was not the case. After consulting with TAC, and without the least bit of surprise whatsoever, I found that a reboot was “sometimes” – infer: always – required.
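And if you land in the “sometimes” bucket like I did, the reboot is just the standard platform restart – schedule it accordingly, since it takes the server down:

utils system restart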

This did fix my problem and the delayed happy dance was epic. And thankfully not recorded for the sake of my remaining pride.

 

Published 1/26/2015

*Almost all levels of hell are more pleasant than opening a case with a carrier. I suspect Satan actually admires the ingenuity of carriers and any future levels of hell are modeled on their expertise and innovation in human suffering.

**There is seemingly far more proof of the existence of the Loch Ness Monster, Big Foot, and the Abominable Snowman than of something involving voice being “easy”.

Upgrades 101

I find the upgrade research process far more entertaining if you approach it like a less than ideal scavenger hunt – a poorly planned, slightly brutal game of talent and luck, mixed with just a pinch of despair.

In this example, which will not begin to cover all the ways/paths/routes you can go for an upgrade, we will take a common scenario and walk through the basic research process. This is more to outline the process and things you should be checking rather than trying to be a comprehensive list of links for upgrades.

I apologize in advance for all the links in this post that will inevitably become dead after a matter of time, but there’s only so much in the universe I can control. Also, I shouldn’t have to say this, but ALWAYS check the latest versions of the documentation – things change – but it’s highly unlikely I’ll modify this post to reflect it.  I’m just lazy that way.

Let’s say in our example you have 5 unified communications servers in your cluster and are looking to upgrade to the latest & greatest versions of each product:
2 Call Managers – MCS7825-H3 with 2×160 GB drives and 2 GB of memory, running 7.1.3.20000-2
1 Unity server – MCS7825-H3 with 2×160 GB drives and 2 GB of memory, running Unity 8.0
2 UCCX servers – MCS7825-H3 with 2×160 GB drives and 2 GB of memory, running 7.0(1)SR05_Build504

Let’s start with determining the upgrade path for your Unity server:
In case you have been living under a rock, the reign of tyranny Unity has enjoyed over voice engineers has been given an official end date, so if you are looking to upgrade your voicemail server, Unity Connection is the way to go. The breeze with which this product installs in comparison to its predecessor will make you want to kiss your mother-in-law and hug your neighbor’s yappy little dog.

Step one in the Unity Connection research process is to determine your hardware compatibility. In this case, you get to play the find-your-server-model on the Big Long List ‘o Compatibility for the version you want to go to – in this case I pulled up the compatibility list for 8.x of Unity Connection: http://www.cisco.com/en/US/docs/voice_ip_comm/connection/8x/supported_platforms/8xcucspl.html

In our current example, you can see that this model of server is supported for versions of Unity Connection 8.x. Yay, you! Specifically, it’s supported for Platform Overlay 1, which requires 4 GB of RAM and two 250 GB drives.

We will want to confirm that our proposed Unity Connection version is compatible with both our current version of CUCM and our proposed upgrade version of CUCM – this typically isn’t an issue since Unity Connection plays very nicely with almost all versions of CUCM, but definitely a check you want to make: http://www.cisco.com/en/US/docs/voice_ip_comm/connection/compatibility/matrix/cucsccpmtx.html

In the case of Unity Connection upgrades, you are going to want to look closely at the DiRT tool and the COBRAS tool to make the transition from the old to the new – there are excellent tutorials/instructions/information on this web page: http://ciscounitytools.com/

Moving on, let’s tackle the process of determining the upgrade path for CUCM:

Step one is once again to determine your hardware compatibility.  Find your model of server on this chart clearly constructed by evil forces seeking to wreak havoc on the universe: http://www.cisco.com/en/US/prod/collateral/voicesw/ps6790/ps5748/ps378/prod_brochure0900aecd8062a4f9.html

Note that if you actually want to see your server model AND the headers at the same time, you are just out of luck unless you have a really, really large monitor or can read really, really tiny font. I’ve been known to take a screenshot of the header row and a screenshot of my server model row and line the two up, the entire time chanting curses upon the chart creators.*

As you can see, our example server does support CUCM 8.6, but there is both an X and a (2). If you look at the sub-notations on this already freaking fabulous chart (note sarcasm font), you will see indicators that while this server is supported, there are certain memory and hard drive upgrade considerations.

In this case, our sample server meets the hard drive specs (160 GB drives), but it is going to need an extra 2 GB of RAM to make the leap into hyperspace. One note here: be sure to watch out for servers that are supported only for bridged upgrades – this means you can take the server up to the latest version, but the only thing it’ll be good for is taking a backup that can be restored onto a supported server. That, and I’m sure it makes a really useful, if somewhat noisy, door stop.

Once you’ve confirmed hardware compatibility of your CUCM servers, you will need to check your upgrade path.  Even though ideally you would like to go directly to the latest and greatest CUCM version, unfortunately you may run across a you-can’t-get-there-from-here scenario – especially if you are looking at upgrading from versions of 6.x.  In our sample case we are looking to go from 7.1.3.20000-2 to 8.6.2.  Time to pull out the magic eight ball.  Nah, actually, just use this document:  http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/compat/ccmcompmatr.html#wp373165

Note that you will likely need to know what your full version number translates to in SU-speak; in this case 7.1.3.20000-2 is actually 7.1(3a). What you are looking for is your release, in SU-speak, under the Direct Upgrade column of the release you want to go to. If it’s not there, you are going to get to do a step upgrade – intermediate upgrades FTW! (sarcasm font again)

Be sure if you find yourself in this situation to pick a stable intermediate version.
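Side note: if you aren’t sure what your full build number even is, the platform CLI on the appliance-model versions will tell you – a sketch:

show version active
show version inactive

The active version is what you are currently running; the inactive version is whatever is sitting on the other partition from a previous upgrade or fresh install.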

The last piece of our CUCM cluster upgrade is UCCX (formerly IPCC):
Step one is, of course, to check the hardware compatibility, and there’s a doc for that: http://www.cisco.com/en/US/docs/voice_ip_comm/cust_contact/contact_center/crs/express_compatibility/matrix/crscomtx.pdf

This document may make your head spin a little, but first look for the version of UCCX you want to go with, in our case 8.5(1). Then check and see if your server makes the cut.

In our example case, we do make the cut – our version of UCCX, 7.0(1)SR05_Build504, is even listed under supported upgrade paths – BUT, looking at the document closely, UCCX 8.5(1) isn’t supported with our current CUCM 7.1. Welcome back, step upgrade. You will need to keep your UCCX and CUCM versions compatible, so upgrading UCCX to 8.0(2), then upgrading CUCM to 8.6, then upgrading UCCX to 8.5(1) will do the trick. And lead to excessive drinking.

Note, however, that our example CUCM version is not explicitly listed in the chart – only 7.1(3) and 7.1(3b) are – so if it were me, I would confirm with Cisco support that 7.1(3a) did indeed support UCCX 8.0(2). Make no assumptions when it comes to the documentation – ever.

After you’ve survived all of this, I’d say you’re about halfway through the research process. Among other things, you will still need to review the release notes for each application, check for COP file requirements, check the prerequisites for each proposed version, and confirm phone firmware compatibility. You will also need to check third-party application support for all your extra voice applications**, determine the order in which the upgrades need to happen, and review every detail all over again***. You’ll also need to determine down times and back-out options should upgrades not go as planned. Of course upgrades always go as planned, right? (sarcasm font at its finest)

This will get you started on your hardware check, help determine if you need to invest in new servers, and give you an idea of what version you should be targeting and what it’s going to take to get your hardware there.

*To say I hate this chart with the passion of a thousand suns is a woefully pathetic understatement. If it burned in the fires of Hades for all eternity, that wouldn’t be long enough.
**Be sure not to forget to check the compatibility for your Presence servers, your CER servers, your recording servers, your paging servers, your fax servers, your voice gateways, your legacy PBX, and anything else that integrates with CUCM
***Note that in some cases you may actually lose some functionality your users are dependent on, specifically I would mention the loss of built in Attendant Console when making the jump to CUCM 8 and above. Double check everything for gotchas and caveats, it’ll save your arse and your upgrade.

Follow that call!

In this entertaining episode of stump-the-voice-engineer, users report a directory number is being routed through the Bermuda Triangle and landing in no-man’s voicemail land. In fact, the mysterious voicemail greeting being heard by callers sounds like 3 minutes of someone pocket-dialing their neighbor’s cat.

Proceeding to the standard fact collection, I find that the path of the call goes something like this:

PSTN -> gateway -> Call Manager -> directory number 1000 -> call forwarded to CTI route point 1001 -> 1001 call forwarded to Unity Connection voicemail -> voicemail greeting of unknown origin

If you’ve ever had to troubleshoot a similar issue, you probably already know where this is headed.

First test call: dial 1001, listen to greeting. Confirm correct greeting heard. Check.

Second test call: unforward 1000, let it go to voicemail. Surprise users by finding their cryptic voicemail greeting in record time. Yep, the users of this voicemail box had unintentionally recorded 3 minutes of scratchy noises and dead air, somehow managed to enable it as their standard voicemail greeting, and all without being the slightest bit conscious of the process. Amazing.

While this of course boggles the mind, it doesn’t actually fix the real issue here – the fact that users want callers to hear the voicemail greeting for 1001 when 1000 is called and 1000 is forwarded to 1001.

For those not familiar, Unity Connection – in this particular case 8.x – has a couple of options on how to handle forwarded calls like the ones we are dealing with here.

The default behavior of Unity Connection is to use the first redirecting number when dealing with incoming calls that have been forwarded. So in our case, even though 1001 is the directory number that forwarded to voicemail, the greeting for 1000 is what’s played for the caller. You can change this behavior globally by going to System Settings -> Advanced -> Conversations, scrolling down toward the bottom, and checking the Use Last (Rather than First) Redirecting Number for Routing Incoming Call box.

It’ll look something like this:

[Screenshot: the Conversations settings page with the Use Last (Rather than First) Redirecting Number for Routing Incoming Call box checked]

The problem with this solution is that some people, including the users in this case, actually like routing to voicemail boxes based on the first number called, so changing the behavior globally isn’t the best way to make friends and influence people.

Instead, you can create a Forwarded Routing Rule in Unity Connection that changes the behavior just for a single directory number – and only when that directory number is a redirecting number.

You will want to navigate to Call Routing -> Forwarded Routing Rules -> Add New. The rule looks something like this:

[Screenshot: a new Forwarded Routing Rule]

You will then set where you want the call to go. You can direct the call to a Call Handler, a Directory Handler, a Conversation, or to a User with a Voicemail Box.

Lastly, you will click Add New under Routing Rule Conditions to create the match statement.

Your match statement should look something like this:

[Screenshot: a Routing Rule Condition matching on the forwarding extension]

Be sure to save your condition and your routing rule, and voilà! Incoming calls to voicemail containing the forwarding number you specified will now follow the rule you created. And there will be much rejoicing. If only in your own head.

Published 5/29/2012

Runt post: When voicemail is a dirty word

New deployments often require configuring a direct transfer to voicemail. Not too long ago @ifoam wrote this great piece on the steps involved in setting this up: Transfer to Voicemail, which I recently referred to when I found that my configuration wasn’t working.

The article, however, confirmed my suspicions that I hadn’t missed any steps in the system configuration process, but when calls were transferred straight to the voicemail server, the user extensions weren’t coming along for the ride.

Fast forward several research minutes later to this obscurity, in particular the third entry by Randall White: https://supportforums.cisco.com/thread/2053902

Upon first reading, I found the fix too absurd to be likely, which I’m sure is why Mr. White added the “no, I’m not joking” part. The solution being proposed was the removal of the word “voicemail” from the alerting name of the CTI route point.

For those of us in voice, we’re rather familiar with what the alerting name controls, and no, it doesn’t usually have anything to do with this.  Alerting name shows up on phone displays and it’s generally one of those put-whatever-you-want-here-the-system-doesn’t-care fields. Except in this case it did. It cared a lot.

So instead of calling my CTI route point Direct To Voicemail – I changed it to Direct To VM.   Yep, that was it. I removed the offending vocabulary, quit infringing on the voicemail server’s sensitivities, and all was set right with the world.

And this is why voice engineers drink.

 

Published 11/28/2011