Contents tagged with SharePoint 2010

  • SharePoint: Specifying Content Database for new Site Collections when using Host Named Site Collections

    Tags: SharePoint 2013, SharePoint 2010

    Over the last few months I've been asked numerous times, and I've seen quite a few e-mail conversations, about how to work with Host Named Site Collections (HNSC) and Content Databases. In this post I will show you how I have solved the problem using the native API hooks in SharePoint.

    Background

    Host Named Site Collections are not a new thing in SharePoint – they have been with us for quite some time, but have not been used extensively due to previous limitations (and there still are some). With SharePoint 2013 one strong recommendation is to consider using HNSC instead of the traditional path based site collections. It gives you a couple of benefits in management and performance, and it is required for Apps to work properly. On the other hand it also has a couple of downsides, such as not being able to create new Site Collections in the UI.

    If you are using HNSC with a single content Web Application, and you also use the same Web Application for Personal Sites (My Sites), then you might have stumbled upon the problem that your personal sites end up mixed into the same content databases as the "normal" site collections. This is in many cases not a desirable solution, since you might have different SLA's on personal sites and "normal" sites. Personal sites are created automatically when someone hits the My Site Host, and you have no option to select the content database, as opposed to when you pre-create sites and use the –ContentDatabase parameter of the New-SPSite cmdlet. Fortunately there is a solution to this problem!

    A custom Site Creation Provider

    As you already might know, SharePoint by default uses a very simple (stupid) algorithm to select the content database in which new Site Collections should be created – it takes the database with the least number of sites (somewhat simplified), not the one with the least amount of data. This can, and possibly should, be overridden with a custom algorithm. Such a custom algorithm can be implemented in something called a Site Creation Provider.

    A custom Site Creation Provider inherits from the abstract class SPSiteCreationProvider and implements its logic in the SelectContentDatabases() method. The SelectContentDatabases method returns a list of possible content database candidates – if it returns only one, that one will be used; if it returns many, the default algorithm will be applied to that list of databases; and if it returns zero databases or null, the site will of course not be created anywhere.

    The problem with the Site Creation Provider and its implementation is the limited number of options we have when selecting the content database. All we have is the set of content databases attached to the Web Application and a number of properties, defined in the SPSiteCreationParameters object passed into the method. We can get the URL for the site to be created, but not the template used, so we have to be a bit clever when implementing it.

    What we can do is make sure that all our content databases follow a strict naming schema – for instance, databases that should contain Personal Sites always have a name like FarmX_WSS_Content_MySite_nnn. Secondly, we can get the URL to the My Site host from the User Profile Service. Using this we can create an algorithm that says: any site (remember, we don't know which template is used) that is created on or under the My Site Host URL will end up in a database containing the text "MySite".

    This is how a custom Site Creation Provider using this algorithm can be implemented:

    public sealed class WictorsSiteCreationProvider : SPSiteCreationProvider
    {
        public override IEnumerable<SPContentDatabase> SelectContentDatabases(
            SPSiteCreationParameters creationParameters, 
            IEnumerable<SPContentDatabase> contentDatabases)
        {
            // Get the service context for the Web Application so that we can reach the User Profile Service
            SPServiceContext context = SPServiceContext.GetContext(
                creationParameters.WebApplication.ServiceApplicationProxyGroup,
                creationParameters.SiteSubscription == null ?
                    new SPSiteSubscriptionIdentifier(Guid.Empty) :
                    creationParameters.SiteSubscription.Id);
    
            UserProfileManager upManager = new UserProfileManager(context);
    
            List<SPContentDatabase> databases = new List<SPContentDatabase>();
            if (new Uri(upManager.MySiteHostUrl).DnsSafeHost == creationParameters.Uri.DnsSafeHost)
            {
                // The site is created on or under the My Site host - find the smallest My Sites database
                SPContentDatabase smallestContentDb =
                    contentDatabases.
                        Where(contentDb => contentDb.Status == SPObjectStatus.Online).
                        Where(contentDb => contentDb.Name.Contains("MySite")).
                        OrderBy(contentDb => contentDb.DiskSizeRequired).FirstOrDefault();
                if (smallestContentDb != null)
                {
                    databases.Add(smallestContentDb);
                }
            }
            if (databases.Count == 0)
            {
                // Not a personal site (or no My Site database found) - choose the smallest "normal" database
                SPContentDatabase smallestContentDb =
                    contentDatabases.
                        Where(contentDb => contentDb.Status == SPObjectStatus.Online).
                        Where(contentDb => !contentDb.Name.Contains("MySite")).
                        OrderBy(contentDb => contentDb.DiskSizeRequired).FirstOrDefault();
                if (smallestContentDb != null)
                {
                    databases.Add(smallestContentDb);
                }
            }
    
            if (databases.Count == 0)
            {
                // Log an error - returning an empty list means the site will not be created anywhere!
            }
    
            return databases.AsEnumerable();
        }
    }

    As you can see, we use the site creation parameters to locate the UserProfileManager, which can give us the My Site Host URL. In this case we expect the My Site host to be an HNSC and all Personal Sites to be path based underneath that HNSC, so all we need to do is compare the DnsSafeHost. If they match we retrieve the smallest of the databases containing the word "MySite" and add it to the list returned by the method. If no such database is found, or if the URLs don't match, we instead pick the smallest of the databases without "MySite" in their names.

    The sharp-eyed will notice that in this implementation we do not return a list of content databases, but always a single one – and we select the one with the least amount of used space. Very useful if you want to keep all your content databases at a similar size – if not, just remove the OrderBy()/FirstOrDefault() part and return the whole filtered list.

    Registering the Site Provider

    Of course you need to build this as a full trust solution – any other way just won't work. To register it with your farm you can either use PowerShell or a Farm scoped feature with a Feature Receiver. The registration could look something like this:

    SPWebService contentService = SPWebService.ContentService;
    contentService.SiteCreationProvider = new WictorsSiteCreationProvider();
    contentService.Update();
    

    To unregister it you just set the value to null instead.
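
    If you prefer doing the registration from PowerShell instead of code, a minimal sketch could look like the following. Note that the assembly and namespace names below are just examples – use the ones from your own solution:

    # Sketch only - assumes the full trust assembly containing the provider is deployed to the GAC.
    # "Wictor.SiteCreation" (assembly and namespace) are example names, replace with your own.
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    [System.Reflection.Assembly]::LoadWithPartialName("Wictor.SiteCreation") | Out-Null
    
    $contentService = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
    $contentService.SiteCreationProvider = New-Object Wictor.SiteCreation.WictorsSiteCreationProvider
    $contentService.Update()
    
    # To unregister, set the property back to $null and call Update() again:
    # $contentService.SiteCreationProvider = $null
    # $contentService.Update()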

    Summary

    There you have it! It wasn't that hard! Just a few lines of code and you have much more control over your site collections and content databases in the Host Named Site Collection scenario. Just remember to have proper error handling and logging in your Site Creation Provider – any unhandled error or exception will guarantee that you don't get any site collections at all.

  • SharePoint 2013 Managed Metadata field and CSOM issues in 2010-mode sites

    Tags: SharePoint 2013, SharePoint 2010, CSOM, JSOM

    Introduction

    SharePoint 2013 introduces a new model on how Site Collections are upgraded using the new Deferred Site Collection Upgrade. To keep it simple it means that SharePoint 2013 can run a Site Collection in either 2010 mode or 2013 mode, and SharePoint 2013 contains a lot of the SharePoint 2010 artifacts (JS files, Features, Site Definitions) to handle this. When you’re doing a content database attach to upgrade from 2010 to 2013, only the database schema is upgraded and not the actual sites (by default). The actual Site Collection upgrade is done by the Site Collection administrator when they feel that they are ready to do that and have verified the functionality of SharePoint 2013 (or you force them to upgrade anyways). But, the Site Collection admin might have to upgrade sooner than expected for some sites.

    Upgrade troubles

    This post is all about what happened to us when our Office 365 SharePoint Online (SPO) tenant was upgraded (and right now the Site Collection upgrade is disabled for us, so we can't do anything about it). The customization options in the 2010 version of SPO were very limited (to be kind), so we embraced JavaScript (which people have despised for a decade and now suddenly think is manna from heaven). We also leveraged Managed Metadata in a lot of lists, libraries and sites. We built Web Parts using the JavaScript CSOM to render information, and .NET CSOM stuff running in Windows Azure doing background work. Once our tenant was upgraded to 2013, all of the customizations using Managed Metadata stopped working…

    JSOM sample with Managed Metadata

    I will show you one example of what works in SharePoint 2010 but does not work in SharePoint 2013 when your site is running in 2010 compatibility mode.

    Assume we have a simple list with two columns; Title and Color, where Color is a Managed Metadata Field. To render this list using JSOM we could use code like this in a Web Part or Content Editor or whatever.

    var products;
    function loadProducts() {
    	document.getElementById('area').innerHTML = 'Loading...';
    	var context = new SP.ClientContext.get_current();
    	var web = context.get_web();
    	var list = web.get_lists().getByTitle('TestList');
    	products = list.getItems('');
    	context.load(products, 'Include (Title, Color)');
    	
    	context.executeQueryAsync(function() {
    		var collection = products.getEnumerator();
    		var html = '<table>'
    		while(collection.moveNext()) {
    			var product = collection.get_current();
    			html +='<tr><td>'
    			html += product.get_item('Title');
    			html += '</td><td>'
    			html += product.get_item('Color').split('|')[0];
    			html += '</td></tr>'			
    		}
    		html += '</table>'
    		document.getElementById('area').innerHTML = html;
    	}, function() {
    		document.getElementById('area').innerHTML = 'An error occurred';
    	});
    }
    ExecuteOrDelayUntilScriptLoaded(loadProducts, 'sp.js');

    As you can see a very simple approach. We’re reading all the items from the list and rendering them as a table, and this html table is finally inserted into a div (with id = area in the example above). This should look something like this when rendered:

    JSOM Rendering in SharePoint 2010

    The key here is that Managed Metadata in the 2010 JSOM is returned as a string object (the 2010 .NET CSOM does that as well). This string object is a concatenation of the Term, the pipe character (|) and the term id. So in the code sample above I just split on the pipe character and take the first part. There was no other decent way to do this in SharePoint 2010 and I've seen a lot of similar approaches.

    Same code running in SharePoint 2013 on a SharePoint 2010 mode site

    If we now take this site collection and move it to SharePoint 2013, or recreate the solution on a 2010 mode site in SharePoint 2013, and then run the same script, this is what we'll see – something goes wrong…

    Failed JSOM Rendering

    You might also see a JavaScript error, depending on your web browser configuration. Of course proper error handling could show something even more meaningful!

    JSOM Runtime exception

    Something is not working here anymore!

    What really happens is that the CSOM client service does not return a string object for Managed Metadata, but instead a properly typed TaxonomyFieldValue. But that type (SP.Taxonomy.TaxonomyFieldValue) does not exist in the 2010 JSOM. Remember I said that SharePoint 2013 uses the old 2010 JavaScript files when running in 2010 compatibility mode. Unfortunately there is no workaround, unless we roll our own SP.Taxonomy.TaxonomyFieldValue class (but that's for another JS whizkid to fix – just a quick tip to save you the trouble: you cannot just add the 2013 SP.Taxonomy.js to your solution).

    So why is this so then?

    If we take a closer look at what is transferred over the wire we can see that when running on SharePoint 2010 the managed metadata is transferred as strings:

    Fiddler trace on SharePoint 2010

    But on SharePoint 2013 it is typed as a TaxonomyFieldValue object:

    Fiddler trace on SharePoint 2013

    It's a bit of a shame, since the server is actually aware that we're running the 2010 (14) mode client components! (SchemaVersion is what we send from the CSOM and LibraryVersion is the library used on the server side.)

    Fiddler trace on SharePoint 2013

    I do really hope that the SharePoint team think about this for future releases – respect the actual schema used/sent by the Client Object Model!

    Of course, this JavaScript solution will not work as-is when upgrading the Site Collection to 2013 mode. That is expected and that’s what the Evaluation sites are for.

    What about .NET CSOM?

    We have a similar issue in .NET CSOM, even though we don't get a SharePoint CSOM runtime exception. Instead of returning a string object you get back a Dictionary with objects as values – but if your code is expecting a string you will still get an exception. So in 99% of the cases it will fail here as well.

    Nothing exciting here, move along...

    Summary

    Deferred Site Collection upgrade might be a good idea, and you might think that your customizations will work pretty well even after an upgrade to SharePoint 2013, as long as you don't upgrade your actual Site Collections to 2013 mode. But you've just seen that this is not the case.

    Happy easter!

  • SharePoint Mythbusting: The response header contains the current SharePoint version

    Tags: SharePoint 2010, SharePoint

    I thought it was about time to bust one quite common myth in the SharePoint world (and there are lots of them!). This one in particular is interesting because it can cause you some real trouble, or at least some embarrassment. The myth is that you can determine the current SharePoint [2010] version by checking the HTTP response header called MicrosoftSharePointTeamServices. So let's bust that myth, or at least try!

    Confirmed, plausible or busted!?

    Background

    On a standard installed SharePoint farm, when you request a page you will get a set of response headers back. One of these response headers is named MicrosoftSharePointTeamServices and it has a version number as value, like this:

    The response headers

    As you can see in the image the version number is 14.0.0.6114, which corresponds to SharePoint 2010 (14) and December 2011 Cumulative Update (6114).
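
    If you want to check the header remotely yourself, a quick PowerShell sketch (the URL is of course just an example) could look like this:

    # Request a page and read the MicrosoftSharePointTeamServices response header
    # (the URL is an example - point it at your own web application)
    $request = [System.Net.WebRequest]::Create("http://intranet.contoso.com/")
    $request.UseDefaultCredentials = $true
    $response = $request.GetResponse()
    $response.Headers["MicrosoftSharePointTeamServices"]
    $response.Close()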

    HTTP Response Headers in IIS

    This header is not added by SharePoint (in the pipeline) but rather by IIS, even though it is SharePoint that adds it to IIS.

    If you fire up the IIS manager and go to a SharePoint web site and check the HTTP Response Headers Feature, you can see that it is configured there.

    If you check the Central Administration web site you'll see that it also has the same exact version value.

    You can even change the headers in the IIS Manager (what good that now would do). So we basically already busted the myth! But let’s assume that you don’t fiddle with this and continue our research…

    Applying a cumulative update

    Let's take this 6114 farm, which was a slipstreamed 6114 install, and apply the April 2012 CU (6120) and see what happens. After applying the April CU, the response header on our Central Administration shows the correct value – 14.0.0.6120.

    The build is shown

    Now, take a look at the header on the content web application that we looked at before the B2B upgrade:

    Upgraded web app shows 6115

    6115!!! What!? After checking both Todd and Todd's version lists we have no idea what version this is?!? It is something in-between December 2011 and February 2012! Ok, let's apply the June 2012 CU, which is build number 6123, on top of this. After running the Configuration Wizard again, this is the response header we're receiving:

    Upgraded web app shows 6122

    6122! Close but no cigar! Central Admin on the now patched farm still shows the correct build (6123).

    Adding a new Web Application

    Now let’s do another experiment – add a new Web Application to this farm. After that we’ll see that it actually contains the correct version header (6123).

    New web app shows 6123

    Adding a new server to the farm

    One final experiment: let's add a new server, a slipstreamed June 2012 machine, to this farm that was upgraded from December 2011 to April 2012 and then to the June 2012 CU. This is how it looks if we browse to the original web application (that we updated twice).

    New farm shows 6123

    The newly created one shows the "correct" build. A check in IIS also verifies that both web sites have the "correct" build number in the Response Headers feature.

    This is interesting, this means that in a load balanced scenario you could actually get different build values from the response header!

    Digging deeper…

    Let’s do this experiment once again but slightly different. This time instead of running the Configuration Wizard after applying the patch, let’s do it the hard core way by using psconfig.exe:

    psconfig.exe -cmd upgrade -inplace b2b -wait

    The key here is that I use the –wait flag, which makes sure that the upgrade process is executed as the user running the command, instead of in the owstimer.exe process (which is running as the farm account).

    Lo and behold, now the response header shows the correct build version (6123) both in Central Admin and on the content web application!

    Why these strange build numbers?

    The local Administrators group

    To make a long story short, it all comes down to how the farm is updated, specifically which account you're doing the upgrade with. We can also say that if you're running the Configuration Wizard and get the "correct" build number in the response headers, your farm is misconfigured :-). When I ran the psconfig.exe command with the –wait switch I used an account that was a member of the local Administrators group, whereas my farm account (running the owstimer.exe process) is not.

    Deep down between the zeroes and ones in the SharePoint code there is actually a code snippet that checks if the account running the upgrade is a member of the local Administrators group. If the account is a member, it will nicely and quietly update the IIS metabase with the (file) build number taken from the current assembly, which is Microsoft.SharePoint.dll. But if the user is not a local Administrator (and cannot edit the metabase), the operation is passed on to the WSSAdmin component (running as Local System), which uses another assembly. This assembly is called Microsoft.SharePoint.AdministrationOperation.dll, and the metabase is updated with the (file) build number of that assembly. The table below shows the version info for the two CU's we've been looking at.

    Cumulative Update              Microsoft.SharePoint.dll    Microsoft.SharePoint.AdministrationOperation.dll
    April 2012 Cumulative Update   14.0.6120.5000              14.0.6115.5000
    June 2012 Cumulative Update    14.0.6123.5002              14.0.6122.5000

    Fortunately, in these cases the AdministrationOperation dll had changed between the released CU's, but what happens the day there is no need to patch that assembly? Well, in that case the build number will stay at the previous version/build/CU.
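
    If you want to compare these two file versions on one of your own SharePoint 2010 servers, a small PowerShell sketch like this should do – it loads both assemblies from the GAC and prints their file versions:

    # Load the two assemblies and print their file versions
    $sp = [System.Reflection.Assembly]::Load("Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c")
    $admin = [System.Reflection.Assembly]::Load("Microsoft.SharePoint.AdministrationOperation, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c")
    [System.Diagnostics.FileVersionInfo]::GetVersionInfo($sp.Location).FileVersion
    [System.Diagnostics.FileVersionInfo]::GetVersionInfo($admin.Location).FileVersion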

    So there you have it – you cannot trust the MicrosoftSharePointTeamServices response header as a single source of truth when you need to remotely find out your SharePoint build and version. Even if you keep track of all the specific builds of the administration assembly it might not have been updated, and you cannot be sure that everyone runs psconfig with the –wait switch.

    The myth is busted!

    Busted

    SharePoint Designer shows the correct version!?

    Let’s take a look at how you can retrieve the correct version number remotely. It hurts me to tell you – but the best source for this is to use SharePoint Designer! Yes, SharePoint Designer shows you the current build in site information panel.

    Build number in SharePoint Designer

    Note: SharePoint Designer uses a call to /_vti_bin/shtml.dll/_vti_rpc to find out the version number

    “I read somewhere that the Client Object Model has the correct version number!”

    This is a side note, but I’ve seen references to using the JavaScript Object Model (JSOM) to get the current version and build. Let’s do this quickly by using Internet Explorer Developer Toolbar and run this little script:

    SP.ClientContext.get_current().executeQueryAsync(null,null) 

    When we take a look at the response from this request we’ll see this:

    JSOM result

    6108! That is somewhere in-between the June 2011 CU and the August 2011 CU – year old stuff! It is basically the same reason as before: this build number is fetched from the Microsoft.SharePoint.Client.ServerRuntime.dll assembly, and that one has not been/needed to be updated since last year (someone give that PM a nice cake!).

    How about removing the header?

    Some of you might think that it is a good idea to remove this header, since it can tell you (or your enemies) something about your farm. Ok, for a public facing WCM site (or public facing blog sites :-) it might make sense, but if you do, you must know that it will cause trouble when crawling, people will not be able to work with documents, etc. So just don't do it without a proper cause and testing.

    Note: Disabling Client Integration on the web application will remove the response header.

    Summary

    That was fun, right? We did bust a myth about how you can find out the current SharePoint version just by looking at the HTTP response headers. The MicrosoftSharePointTeamServices response header really doesn’t say anything (except some kind of minimum build level) since you don’t know how they built/slipstreamed the install or upgraded the farm.

  • Visual guide to upgrading a SharePoint 2010 Shared Services farm to SharePoint 2013

    Tags: SharePoint 2013, SharePoint 2010

    Introduction

    SharePoint 2010 introduced the Service Application concept, and that architecture also includes the possibility to publish and consume service applications between farms. For instance, you could have the Managed Metadata service application in one of your farms and use it in another farm. There are several interesting and valid scenarios for this, and some of them involve having dedicated Shared Services farms, that is, farms that only host service applications and no content applications. If you have one of these farms, or farms that publish or consume service applications, you are facing an interesting upgrade scenario when looking at SharePoint 2013. In this Visual Guide I'll try to go through all the required steps for a successful upgrade to SharePoint 2013.

    Note: this is written for SharePoint 2013 Preview. Things might change over the next few months up until RTM. If I remember I’ll go back and revisit the post when RTM is due, if not, remind me.

    SharePoint upgrade

    As you perhaps know by now, there is no in-place upgrade in SharePoint 2013, which means that you have to use the database attach approach on a completely new SharePoint 2013 farm. If you're still reading I assume that you have farms and service applications that are published and consumed, and you might start to get a slight headache when you realize that you might have to create new farms and move all your databases at the same time. Well, don't be sad – I have good news for you.

    SharePoint 2013 allows its published service applications to be consumed by a SharePoint 2010 farm. This means that we can upgrade the farm that is hosting the service applications first! In the services farm case, it means that the services farm can be upgraded to SharePoint 2013 and the consuming farms can be upgraded later, one at a time. This will save you lots of administration and testing and is just plain awesome! Of course this is not a recommended long-term scenario – you want to get all your farms up on SharePoint 2013 – but taking one farm at a time allows for better resource management and most likely makes it an easier decision for the ones owning the budget. So let's get started…

    Note: I’m not going to cover content application upgrade or upgrade of all service applications in this article, I’ll save that for later or for someone else…

    Prerequisites and setup

    In this guide I have a two farm setup; one farm with content applications and one farm with service applications. I will only use one service application – the Managed Metadata Service Application (which is the SA that is the best candidate for SA publishing). Beware that not all service applications can be published, and you cannot take advantage of the new 2013 Translation Service (which can be published) from a 2010 farm. Also, you cannot take advantage of all the new features in the new service applications until you have fully upgraded the consuming farms to 2013.

    In this guide I'm running SharePoint 2010 Service Pack 1 (December 2011 CU) in both my 2010 farms. I've not seen any specific requirements on version or CU level – if someone knows, please chime in. The farms in this case aren't that spectacular, they are single machines with Windows Server 2008 R2 and SQL Server 2008 R2 installed, but it sounds better saying farm. They are all in the same domain, and the DC is located on its own dedicated box.

    The consuming farm machine is called SP2010A and has one content web application, with a Site Collection that has a Managed Metadata site column. The Managed Metadata is consumed from the other farm (machine name SP2010B).

    Site Column with remote MMS

    Step 1: Set up a new Services Farm using SharePoint 2013

    In order to start our upgrade path, the first thing we need to do (except all the backups you normally do anyway, no?) is to set up a new parallel SharePoint 2013 farm. This farm is set up on a machine called SP2013A, with Windows Server 2008 R2 and SQL Server 2012 SP1 CTP (going a little bit crazy here…). It's a vanilla install, no farm configuration wizard or other voodoo. The 2013 farm has all the prerequisites installed as well as the three KB articles KB2708075, KB2554876 and KB2472264. The installation of the 2013 farm was also done using the same accounts as used in the 2010 services farm.

    Before proceeding I need to setup the publishing trust between the 2010 content farm and the 2013 services farm. This is done by exchanging certificates. Let’s start with exporting the certificates from the 2010 consuming farm:

    asnp microsoft.sharepoint.powershell
    $rootCert = (Get-SPCertificateAuthority).RootCertificate
    $rootCert.Export("Cert") | Set-Content "C:\ConsumingFarmRoot.cer” -Encoding byte
    $stsCert = (Get-SPSecurityTokenServiceConfig).LocalLoginProvider.SigningCertificate
    $stsCert.Export("Cert") | Set-Content "C:\ConsumingFarmSTS.cer" -Encoding byte

    Certificates

    This creates two certificate files. Copy those to the new 2013 services farm and import them like this:

    asnp microsoft.sharepoint.powershell
    $trustCert = Get-PfxCertificate C:\ConsumingFarmRoot.cer
    New-SPTrustedRootAuthority "SP2010A" -Certificate $trustCert
    $stsCert = Get-PfxCertificate c:\ConsumingFarmSTS.cer
    New-SPTrustedServiceTokenIssuer "SP2010A-STS" -Certificate $stsCert

    To verify that it worked fine, open up Central Administration on the 2013 farm and go to Security > Manage Trust, it should look something like this:

    Manage Trust on 2013 farm

    Next is to export the root certificate from the 2013 farm:

    $rootCert = (Get-SPCertificateAuthority).RootCertificate
    $rootCert.Export("Cert") | Set-Content "C:\PublishingFarmRoot.cer" -Encoding byte

    Copy the file to the 2010 farm and import it like this:

    $trustCert = Get-PfxCertificate C:\PublishingFarmRoot.cer
    New-SPTrustedRootAuthority "SP2013A" -Certificate $trustCert

    And verify it on the 2010 consuming farm:

    2010 farm trusts

    Now we need to make sure that the consuming farm can query the Application Discovery and Load Balancing Service in the 2013 farm. This is done by using the Id of the consuming farm as identity. This is a Guid that you get like this (run the PowerShell on the 2010 farm!):

    (Get-SPFarm).Id

    In the 2013 farm go to Central Administration > Application Management > Service Applications and select the Application Discovery and Load Balancing Service. Then click on the Permissions button in the ribbon. Enter the Guid into the textbox and click Add, then give the remote farm Guid Full Control permissions and click OK, as in the picture below.

    Application Discovery and Load Balancing Service permissions

    Now we are all set and done to proceed to the next step.

    Note: as you just saw, the steps are exactly the same as on 2010.
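
    If you'd rather script the permission part instead of clicking in Central Administration, a sketch along the lines of the standard cross-farm publishing scripts could look like this – run it on the 2013 publishing farm, and note that the Guid below is just an example value for the one you got from (Get-SPFarm).Id on the consuming farm:

    # $consumingFarmId is the Guid from (Get-SPFarm).Id on the 2010 consuming farm (example value below)
    $consumingFarmId = "11111111-2222-3333-4444-555555555555"
    
    $security = Get-SPTopologyServiceApplication | Get-SPServiceApplicationSecurity
    $claimProvider = (Get-SPClaimProvider System).ClaimProvider
    $principal = New-SPClaimsPrincipal -ClaimType "http://schemas.microsoft.com/sharepoint/2009/08/claims/farmid" `
        -ClaimProvider $claimProvider -ClaimValue $consumingFarmId
    Grant-SPObjectSecurity -Identity $security -Principal $principal -Rights "Full Control"
    Get-SPTopologyServiceApplication | Set-SPServiceApplicationSecurity -ObjectSecurity $security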

    Step 1b: The firewall (optional, but required)

    If you have the/a firewall enabled you need to open up the required ports for Service Applications between the farms. This is a script that I find handy for that:

    netsh advfirewall firewall add rule name="SharePoint 2010 Service Apps HTTP (TCP 32843)" dir=in action=allow protocol=TCP localport=32843 profile=domain
    netsh advfirewall firewall add rule name="SharePoint 2010 Service Apps HTTPS (TCP 32844)" dir=in action=allow protocol=TCP localport=32844 profile=domain

    Step 2: Set your Service Applications into read-only mode

    Once you have your (now) three farms ready you have a couple of options, depending on which service applications your services farm hosts. For instance, if you have the Search Service Application published you are a bit lucky. This is a service application where users do not edit/write data – they just query the index. So you can have your 2013 services farm and your 2010 services farm running in parallel while testing the new one. With service applications such as Managed Metadata it's another story. In this case end-users might tag information, they might add new terms etc. In order to not lose any data you need to make sure that the end-users can't edit data in that service application.

    In the case of Managed Metadata I do three things – first of all I remove all permissions for the end-users, so that they cannot update/add any data. Secondly, I set the database used by the Managed Metadata service application to read-only – to make sure that no edits are done. And third – I put up a notification about this service update on the intranet a couple of weeks in advance :).

    Note: The MMS service application has some issues with having its database in read-only mode and it logs warnings and errors to the trace log. In this case we don't care about that – we just want to protect the data.

    I set the database on the 2010 services farm to read-only by editing the properties of the database and, under Options, setting Database Read-Only to True.

    MMS Db in Read-only mode
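
    If you prefer to script the read-only switch instead of using the SSMS UI, something like the following should work. The instance name SP2010B is from this guide and the database name is just an example; Invoke-Sqlcmd requires the SQL Server PowerShell tools:

    # Set the MMS database to read-only (instance and database names are examples)
    Invoke-Sqlcmd -ServerInstance "SP2010B" -Query "ALTER DATABASE [MMS_DB] SET READ_ONLY WITH ROLLBACK IMMEDIATE"
    
    # ...and back to read-write when you are done
    # Invoke-Sqlcmd -ServerInstance "SP2010B" -Query "ALTER DATABASE [MMS_DB] SET READ_WRITE"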

    Step 3: Backup and restore Service Applications databases

    From this point on we need to care about timing, since the users cannot add new terms and such to the MMS database. For your production farms you should test and time this properly so you can inform your users about how long it's going to take.

    To move the database to the SQL Server in the new 2013 services farm I just do a backup and restore of the MMS database, and then, since we backed it up in read-only mode, I flip the switch back to normal mode after the restore.

    The restored MMS Db

    As you can see in the picture, I restored it to a new name. This is my personal preference and a recommendation. In a few minutes it will be a 2013 database and no longer a 2010 one.

    Step 4: Create new Service Application

    The next step is to configure the 2013 service farm and to create a new MMS service application and do a database attach upgrade of the database we just copied.

    Note: This part can (and probably should) be fully automated using PowerShell, but in this post it kinda contradicts the idea of a Visual Guide…

    Service account configuration

    First off, I configure a service account to be used for the service application pools on the new services farm. In this case I use the exact same service accounts that I did in the 2010 services farm, to minimize the likelihood of errors. Once that is done I'm ready to create the new MMS service application.

    In Central Administration under Service Applications choose to create a new Managed Metadata Service.

    New Managed Metadata Service

    In the dialog that appears, fill in the name of your new service application and, in Database Name, enter the exact name of the database you just restored. That is, we're pointing the new service application to an existing database. When we click OK, SharePoint 2013 is going to take care of the upgrade of that database.

    A new MMS SA

    Next, choose to create a new application pool using the managed account you previously registered.

    A new app pool

    Once that is done, just click OK and wait for the service application to be created and the database to be upgraded. When all is done, the Service Applications page in the 2013 services farm should look like this:

    Nice!

    Now, go ahead and start the service instance on the servers in your new services farm.

    MMS Service instance

    The service should now be up and running and all you have to do is to give permissions to the service application. In this case a good starting point is to configure the same permissions on the new 2013 MMS SA as you had on the 2010 one.
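
    As mentioned in the note above, this step can be automated with PowerShell. A rough sketch – the application pool, account and database names below are just examples – could look like this:

    # Names below are examples - use your own application pool, account and database names
    $appPool = New-SPServiceApplicationPool -Name "MMS App Pool" -Account "DOMAIN\sp_services"
    
    # Pointing -DatabaseName to the restored 2010 database makes SharePoint 2013 upgrade it
    $mms = New-SPMetadataServiceApplication -Name "Managed Metadata Service" `
        -ApplicationPool $appPool -DatabaseName "SP2013_MMS_DB"
    
    # Start the service instance on the server(s) in the services farm
    Get-SPServiceInstance | Where-Object { $_.TypeName -eq "Managed Metadata Web Service" } | Start-SPServiceInstance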

    Step 5: Publish and Update the Service Application Connection

    Now we are ready to start the publishing process for the new 2013 MMS SA. First of all we need to give the consuming farm permissions to use the service connection. This is done exactly the same way as we did for the Application Discovery and Load Balancer Service Application. Use the Farm Id of the remote farm and give it permissions on the new MMS SA:

    Service application permissions

    Time to hit Publish! Publishing a service application hasn't changed, so the steps are the same as for your old 2010 services farm. First select the service application, then click Publish. Copy the hideously long Published URL into Notepad (or remember it).

    Service Application Publishing

    Click OK and wait for it to finalize its configuration.
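
    The publishing itself can also be scripted if you prefer; a minimal sketch (the service application name is an example, and you still grab the Published URL from the UI as described above):

    # Publish the Managed Metadata service application (name is an example)
    $mms = Get-SPServiceApplication -Name "Managed Metadata Service"
    Publish-SPServiceApplication -Identity $mms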

    Now it’s time to go back to the 2010 content farm. Navigate to Service Applications in Central Administration.

    This is probably the most critical part of this whole upgrade. We are going to remove the old connection to the 2010 services farm. And once we click Delete, the end-users will be limited in what Managed Metadata features they can use. Don’t be worried though, it will only take a minute to get the new 2013 connection up and running.

    So select the old 2010 connection and click Delete.

    Remove ye olde MMS SA

    Now, let’s create a new Connection. Select Connect > Managed Metadata Service Connection

    New MMS connection

    In the dialog paste (or enter) the Published URL that you got from the 2013 farm when you published the MMS SA.

    Configure the connection

    Click OK and wait for the 2010 farm to discover the remote 2013 MMS SA. It will take a couple of seconds. When it's done, you are welcomed by a fantastically intuitive UI. You need to click on the Managed Metadata Service Connection so that its background turns yellow. Then click OK.

    Confirm the connection

    When prompted for the name of the Service Connection, either enter a vanity name or accept the suggested one. Click OK to finalize the connection.

    Done!

    You should now be able to see your new Service Application Connection as well as the connection to the new remote Application Discovery SA.

    New Service Application Connections
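
    If you would rather script the consuming side as well, a hedged sketch to run on the 2010 content farm could look like this (paste the real Published URL from the 2013 farm into the -Uri parameter):

    # The Uri value is a placeholder - use the Published URL copied from the 2013 farm
    New-SPMetadataServiceApplicationProxy -Name "Managed Metadata Service Connection" `
        -Uri "<published URL copied from the 2013 farm>"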

    Step 6: Test

    All that is left now is to test if we got what we wanted.

    First; in the new 2013 MMS SA enter a couple of new terms and let’s see if that is reflected on our 2010 farm.

    A new term in the 2013 service

    Fire up the MMS management interface on the 2010 farm and see our newly created term. Looks like it is working!

    It's there...

    In this view, note that we do not have all the new 2013 features, but you can clearly see the Term Groups from the 2013 MMS SA that are created for the new Search Service.

    Then, in the content web application, where I had a document library with an MMS column, let's edit the properties of a document there…

    And here...

    Yup, the new term is there as well.

    Mission accomplished!

    Step 7: Kill the old services farm

    The next step is to decommission/kill/retire the old services farm and return the metal to the metal gods! Well, you need to make sure that you have upgraded all shared service applications before doing that. The procedure for the rest of the service applications is almost the same, but I'll let you figure out the differences :)

    Step 8: Upgrade the consuming farm(s)

    Even though this works fine, it's not a long lasting situation – you want to benefit from all the new 2013 features. So the final step in this upgrade scenario is to upgrade the content farm. Details for that are for another post.

    Summary

    I hope this bedtime story of a post gave you some insight into how to upgrade a shared services farm from SharePoint 2010 to SharePoint 2013. As you can see it's a very smooth transition and upgrade. This whole scenario should also show you one of the benefits of having separate services farms – it makes an upgrade easier and faster.

  • Introducing the SharePoint 2010 Get-SPClaimTypeEncoding and New-SPClaimTypeEncoding cmdlets

    Tags: SharePoint 2010

    A couple of months back, when the weather was grey and it was cold (well, it still is here in Sweden – glad I did a tour to the Riviera last week), I wrote a post about how Claims encoding works in SharePoint 2010, simply called "How Claims encoding works in SharePoint 2010". In that post I discussed how SharePoint encodes Claims from relatively long, descriptive claim types containing URN's into a smarter and shorter format – smaller to store, faster to compare, etc. While there are tons of defined claim types, only a selected few are "pre-encoded" in SharePoint. Here are a few examples:

    Claim Type                                                              Encoded value
    http://schemas.microsoft.com/sharepoint/2009/08/claims/userlogonname   #
    http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier   ?
    http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress     5

    Once you start adding providers and Claim Types to SharePoint 2010 you might start using Claim Types that are not pre-defined by SharePoint. In this case SharePoint automagically assigns the encoded Claim Type a Unicode character, starting with 0x01F5 (501, or ǵ) and then incrementing that value by 1 for each and every new Claim Type. (Note that it's not just incremental; it can't be uppercase characters or whitespace.) This is really important to know, since if you are ever going to move information from one farm to another you need to configure the farms in exactly the same order, otherwise your Claim Type encodings, or SharePoint for that matter, won't work.

    Until the June 2012 Cumulative Update there was no (good) way to find out which encodings had been added, and there was no way to specify the encoding for a Claim Type. But from now on we have two sleek cmdlets: Get-SPClaimTypeEncoding and New-SPClaimTypeEncoding.

    Get-SPClaimTypeEncoding

    The Get-SPClaimTypeEncoding is a lightweight cmdlet that allows you to list all the defined Claim Type encodings in your farm - both the pre-defined ones and the ones you've added yourself. Just run the cmdlet like this to list all the Claim Type encodings:

    Get-SPClaimTypeEncoding

    This will give you the following output:

    Get-SPClaimTypeEncoding

    Note that I have two custom Claim Types, both listed with a question mark (PowerShell can't print the Unicode characters – eeek!). Both of these have been added while I've fiddled with Claims and they have both automatically been assigned an encoding character. To get a better idea of the encoded character, just run the following PowerShell:

    Get-SPClaimTypeEncoding | ft @{Label="Character";Expression={[Convert]::ToInt32($_.EncodingCharacter)}}, ClaimType

    This will show us exactly which double-byte character has been used for the encoding:

    No more ?

    New-SPClaimTypeEncoding

    Using the New-SPClaimTypeEncoding cmdlet we can also add our own encodings – not that we can use any fancy vanity encodings, but at least they can be specified. The cmdlet takes two arguments: -EncodingCharacter, which is the encoded value, and -ClaimType. Here's an example of how you can use it:

    New-SPClaimTypeEncoding -EncodingCharacter ([Convert]::ToChar(517)) -ClaimType "urn:another-claim-type"

    After answering Yes two times you'll have the Claim Type encoding in your farm, and once you use the Claim Type in a mapping it will not generate a new encoding.

    I'm encoded!
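
    As an example of how the encoding is then used, here is a hedged sketch that maps the claim type on a trusted identity token issuer – the issuer name "Azure" is just an example; since an encoding now exists for the claim type, no new character will be auto-assigned:

    # Map the claim type on an existing trusted identity token issuer (issuer name is an example)
    $sts = Get-SPTrustedIdentityTokenIssuer "Azure"
    $mapping = New-SPClaimTypeMapping -IncomingClaimType "urn:another-claim-type" `
        -IncomingClaimTypeDisplayName "Another Claim" -SameAsIncoming
    Add-SPClaimTypeMapping -Identity $mapping -TrustedIdentityTokenIssuer $sts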

    Note: if you get an ArgumentException it can be for one of many reasons. Make sure that your character is not used by another encoding, that it is 0x01F5 (501) or above, and that it is not an uppercase or whitespace character.

    Summary

    That was it, two small but quite powerful and really useful cmdlets. Whenever you need to script environments and are doing Claim Type mappings, make sure that you utilize what you got in the toolbox!

  • Understanding the Application Addresses Refresh Job in SharePoint 2010

    Tags: SharePoint 2010

    In this article I would like to give you some information about a very important timer job in SharePoint 2010 – the Application Addresses Refresh Job. If you do not understand what it is used for, you might see some strange (to you) error messages when configuring SharePoint. Even if you're familiar with it, it might be a good idea to continue reading.

    Purpose of the Application Addresses Refresh Job

    The Application Addresses Refresh Job has one specific job to do – keep track of all available and online instances of all service application endpoints. Whenever a proxy requests an endpoint for a service application, it asks the Topology Service (the Application Discovery and Load Balancer Service) for an endpoint. The Topology Service keeps a list of the endpoints that have been discovered by the Application Addresses Refresh Job and, using the load balancing algorithm, passes one of these endpoints on to the proxy, which then uses that endpoint to talk to the service application. So far so good…

    So, what could go wrong here...

    The problem is that this job only runs (by default) every 15 minutes. And unless you follow the first rule of Spence – "Step away from the keyboard" – you will experience some interesting side effects.

    Service Application configuration

    One of the first times you'll experience this 15 minute delay is when creating Service Applications in SharePoint 2010. Let's take the Secure Store as an example. You create the Secure Store Service Application and, trigger happy as you are, you click on it to configure the Secure Store key. And most of the time you will see an error like this:

    Cannot complete this action as the Secure Store Shared Service is not responding. Please contact your administrator.

    You hit the reload button a couple of times and start to fiddle with permissions, but nothing happens. Finally you realize – ahh, I haven't started the Service Instance of the Secure Store – so you start it and head back to the Secure Store Service App to continue configuring it. But you still receive the same error message. You do some more fiddling with permissions etc. until you're totally lost in your configuration madness. You do some Binging on the Interwebs and suddenly it just works…

    What really happened was that the Application Addresses Refresh Job ran – while you were furiously blaming the product group for a crappy product – and found a valid and working endpoint for the Secure Store Service App. Now the Topology Service is aware of the endpoint and can pass it on to the proxy.

    What you really should have done is to first start the Service Instance and then create the Service Application. And if you still get the error message, manually kicking off the timer job will do the trick.
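
    Kicking off the job can of course also be scripted; a small sketch (the display name filter is an assumption – verify the job name with Get-SPTimerJob in your farm first):

    # Run the Application Addresses Refresh Job immediately instead of waiting for the next schedule
    Get-SPTimerJob | Where-Object { $_.DisplayName -like "*Application Addresses*" } | Start-SPTimerJob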

    Farm maintenance

    Another common scenario where similar results may be seen is when you do some farm reconfiguration, such as adding/removing/rebooting servers or moving Service Application instances from one server to another (stop on one server and start on another). You can do this while your farm is hot and running, but make sure to start the timer job whenever you make a change (start/stop an instance or add/remove a server). Worst case your end-users will be unable to use the Service Application for at most 15 minutes. One scenario where I've seen it happen is when you take a server out of the load balancer rotation to do Windows patching and then need to reboot that server – the service application will be unavailable for that time on that machine (duh!). So if you have, for instance, three servers running the service instance, every third (Round Robin) request will fail. Running the timer job immediately after starting the reboot sequence will mitigate any errors.

    Should I change the Timer Job schedule?

    Well, this is totally up to you. From what I've seen it's not a "heavy" job and you could lower the interval, but under normal circumstances 15 minutes should do the trick. When doing maintenance, as discussed above, lowering the interval might be a good idea.
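
    If you do decide to lower the interval, a hedged sketch (with the same caveat about the job name as above) could look like this:

    # Change the schedule of the Application Addresses Refresh Job to every 5 minutes
    Get-SPTimerJob | Where-Object { $_.DisplayName -like "*Application Addresses*" } |
        Set-SPTimerJob -Schedule "every 5 minutes between 0 and 59"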

    Summary

    A short, and pretty intuitive, post about a very, very important Timer Job in SharePoint 2010 - the Application Addresses Refresh Job. Make sure that this job is running and behaving - otherwise your end-users (and proxies) will not be able to talk to the service application instances.

    [Update 2012-05-20] If you are interested in more details on the topology web service and the service application load balancing I recommend that you read the following post by Josh Gavant: How I learned to Stop Worrying and Love the SharePoint Topology Service.

  • Speaking at the International SharePoint Conference London 2012

    Tags: SharePoint, SharePoint 2010, Presentations

    In less than a month the greatest SharePoint conference on this side of the pond will take place in London – the International SharePoint Conference (ISC). The ISC is the new name for the conference held in London and previously called the Best Practices Conference and the Evolutions Conference. This will actually be my first year at the conference, but I have always wanted to go – and now I'm one of the speakers in the fantastic line-up!

    This will not be just an ordinary conference – instead of the traditional one hour demo sessions, we will over the course of three days go from a functional specification to a deployable solution. The sessions will vary in length from less than one hour to a couple of hours. There will be two parallel tracks – one focused on development topics and one focused on IT-Pro stuff.

    Together with some fantastic SharePoint MVP colleagues and friends I will participate in a couple of sessions, ranging from Visual Studio extensibility to BCS and Managed Metadata thingies. It will be a blast presenting and I do think the audience will enjoy this show. So, if you haven't already booked your tickets – now is the time!

    See you there!

  • What is a Microsoft Certified Architect?

    Tags: Personal, SharePoint 2010

    Last Friday I got the fantastic message that I had successfully passed the Microsoft Certified Architect – SharePoint 2010 (MCA) certification, something I'm really proud of – but something most of the community has never heard of. During this weekend I've been pinged and messaged by a lot of people asking the question "What is a Microsoft Certified Architect?". In this post I intend to answer it as thoroughly as possible, including my own personal take on it.

    First of all let's answer the most common question: "How does the Microsoft Certified Architect relate to the Microsoft Certified Master exam?".

    I might agree that Master sounds way cooler than Architect, but that isn't the real story. The Master certification (MCM) is the most highly technical certification you can get in the Microsoft world. The term technical is important here. During the MCM rotation and the exam you explore and learn all the scary and exciting internals and externals of SharePoint (or the other MCM:able products/technologies) from a technical perspective. You will learn from the best teachers and SME's and you will be in a class together with some really awesome and skilled people. The MCM is both a course (3 weeks on site, or 1 week on site and 10 weeks off-site), a written exam and a qualification lab. Read more about my MCM experience in one of my older posts. To even apply for the MCA you need to be an MCM on the specific product you're applying for, and on the current version. This means that Microsoft has already tested and verified your technical skills! So one could actually say that the MCA is like the Microsoft Certified Grandmaster…

    "What is the MCA then?".

    So, let's take a look at the Architect certification (MCA). The MCA takes the certification to another level and focuses on the business side of SharePoint (the MCA eligible products are SharePoint, Exchange, SQL Server and Active Directory). The MCA is not a course, it is not something you sit in class and learn for a couple of weeks, it is not something you can study for – it is something you earn over the course of several years of experience with the products, in real business cases together with one or more customers.

    "How do I apply for the MCA?".

    When applying for the MCA you must supply a portfolio which includes details about real customer engagements, your CV and other documentation to prove that you are in the business for real. Once the program manager thinks you have "what it takes", and that you have proven it, you will be scheduled for a board appearance. You need to work on your documents and prepare for the board presentation. This is not something you should do with your left hand – you need to put in some real effort to produce a good set of documents and a good presentation. It is up to you to prove that you have "what it takes".

    "So, how does the MCA board appearance work?".

    The board appearance is the certification. You will spend almost a day together with the MCA board (consisting of other MCA's or specific SME's). You will do a presentation, a case study and several intense Q&A sessions. Enough to make you choke. The board will then grade you on six different competencies (full list and details on the official site). Once you are done – all you can do is wait for the pass/no-pass e-mail. This is an exhausting day for which you need to prepare. But as I said earlier – it all comes down to the actual experience you have in the industry and how used you are to being in these situations with clients. You can't study for the Q&A sessions.

    "What's the value of an MCA certification?".

    The MCA, and the MCM for that matter, costs a lot of money. So is it worth it? In my opinion, definitely. It's really hard to say what the exact payback is. We're currently early in the SharePoint MCA process with quite few certified MCA's, and only time will tell. I can directly say that I learnt a lot while preparing for the board appearance – with a lot of time spent reflecting on past projects. Also, the actual board appearance was great in the way that the board tested me both on my strong areas and my weak ones – and now I know what parts I might need to step up on. Studies done on the MCM community show benefits such as a higher hourly rate, easier recruitment, and better and safer deliveries. So the MCM/MCA really are a quality stamp, with the MCM focused on the technical aspects and the MCA on understanding and implementing business requirements.

    "Why did I do this?".

    This is the question my wife asks me! Well, first of all I always try to be better at what I'm doing, and going down the MCA route surely did that. I now know what I know, know what I don't know and know what I want to know… Also, I think it is great for my company, Connecta, to have this certification – it will definitely be a USP in attracting clients and co-workers. A big thank you to Connecta and my managers who believed in me enough to send me on both the MCM and MCA journeys! In the end I know that I personally, my company and my co-workers will all benefit from this.

    "I want to learn more about the MCA?".

    So, now I've been ranting about the MCA (from my perspective) and there are probably tons of questions that remains unanswered. Use the following links to learn more.

    That's it. I hope you have a far better understanding of what a Microsoft Certified Architect is.

  • How Claims encoding works in SharePoint 2010

    Tags: SharePoint 2010

    I've seen it asked numerous times on forums and I've been asked over and over how to interpret the encoded claims - so here it is: a post which will show you all the secrets behind how claims are encoded in SharePoint 2010.

    Updates:
    - 2012-03-09 Added Forms Authentication info.
    - 2012-03-11 Updated with information about how the claim type character is generated for non-defined claims.

    Background

    If you have been using previous versions of SharePoint, such as SharePoint 2007, or been working with .NET or just Windows, you should be familiar with the fact that (NetBIOS) user names are formatted DOMAIN\user (or provider:username for FBA in SharePoint). When SharePoint 2010 introduced the claims based authentication model (CBA), these formats were not sufficient for all the different options needed, so a new string format was invented to handle the different claims. The format might at first glance look a bit weird…

    How it works?

    The claim encoding in SharePoint 2010 is an efficient and compact way to represent a claim type and claim value, compared to writing out all the fully qualified names for the claim types and values. I will illustrate how claims are encoded in SharePoint 2010 with a focus on user names, but this claim encoding method is used for basically any claim. Let's start with an illustrative drawing of the format and then walk through a couple of samples.

    The format

    The format is actually well defined in the SharePoint Protocol Specifications in the [MS-SPSTWS] document, read it if you want a dry and boring explanation, or continue to read this post...

    The image below shows how claims are encoded in SharePoint 2010, click on the image for a larger view of it.

    The SharePoint 2010 claim encoding format

    Let's start from the beginning. The first character must be an I for an identity claim, otherwise it has to be c. Note that the casing is important here. The second character must be a : and the third a 0. The third character is reserved for future use.

    It's in the fourth character that the interesting part starts. The fourth character tells us what type of claim it is and the fifth what type of value. There are several possible claim types. The most common are: user logon name (#), e-mail (5), role (-), group SID (+) and farm ID (%). For the claim value type a string is normally used, and that is represented by a . character. The sixth character in the sequence represents the original issuer, and depending on the issuer the format following the sixth character varies. For Windows and the local STS the seventh character is a pipe character (|) followed by the claim value. The rest of the original issuers have two values separated by pipe characters: the name of the original issuer and then the claim value. Easy huh?

    Note: the f (Forms AuthN) as trusted issuer is not documented in the protocol specs, and this is what SharePoint uses when dealing with membership providers (instead of m and r). For more info see SPOriginalIssuerType.

    For a full reference of claim types and claim value types, look into the [MS-SPSTWS] documentation.

    (Added 2012-02-13) If you are creating custom claim providers or using a trusted provider (as the original issuer), you will see that you get some "undocumented" values in the Claim Type (4th) position (that is, they are not documented in the protocol specs). The most common character to see here is ǵ (0x01F5). If the claim encoding mechanism in SharePoint cannot find a claim type it automatically creates a claim type encoding for that claim. It starts with the value 500 and increments it by 1, which results in 501 for the first custom claim type; 501 in hex is 01F5, which represents that character. It will continue to increase the value for each new (to SharePoint not already defined) claim type. The important thing to remember is that these claim types and their encodings are not the same across farms – it all depends on the order in which the new claim types are added/used. (All this is stored in a persisted object in the configuration database.)

    Update 2012-07-13: Make sure to read the "Introducing the SharePoint 2010 Get-SPClaimTypeEncoding and New-SPClaimTypeEncoding cmdlets" post to see how you can improve the custom claim type encoding experience in SharePoint 2010 June 2012 CU and forward.

    Some notes: the total length must not exceed 255 characters and you need to HTML encode characters such as %, :, ; and | in the claim values.
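
    Before we look at the samples, a quick way to see the encoding in action is to create a claims principal in PowerShell and ask for its encoded string (the account name is of course just an example):

    # Encode a Windows account as a SharePoint claim and print the encoded string
    $principal = New-SPClaimsPrincipal -Identity "CONTOSO\wictor" -IdentityType WindowsSamAccountName
    $principal.ToEncodedString()
    # Typically returns something like: i:0#.w|contoso\wictor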

    Some samples

    If this wasn't clear enough, let's look at a few samples.

    Standard Windows claim

    Windows claim

    Another common claim. This time it's not an identity claim but an identity provider claim, and this is how NT AUTHORITY\Authenticated Users is represented.

    Authenticated users claim

    This is how a Windows Security Group is represented as a claim. The value represents the SID of the group.

    Security Group claim

    If we're using federated authentication (as in the Azure AuthN series I 've written) we can see claims like this. It's an e-mail claim from a trusted issuer called Azure.

    E-mail claim

    Here's how a claim can be encoded if we're having a role called facebook in the trusted issuer with the name Azure.

    Role claim

    This final example shows what the encoded claim for the Local Farm looks like. It's a Farm ID claim from the system Claim Provider and the claim value is the ID of the farm.

    Farm claim

    This is how a forms authenticated user claim looks:

    Forms authentication claim

    Summary

    I hope this little post showed you all the magic behind the claims encoding in SharePoint. It's quite logical...yea really.

  • The sixth edition of the DIWUG SharePoint Magazine is out

    Tags: SharePoint 2010

    DIWUG no. 6

    The best free SharePoint magazine published online, the DIWUG SharePoint e-Magazine, yesterday released its sixth edition. As usual this is a great edition with a mix of articles covering every aspect of the SharePoint universe. The articles are written by SharePoint community members and the magazine is compiled and managed by Mirjam van Olst and Marianne van Wanrooij.

    This edition contains articles ranging from hard core Service Application federation, to SharePoint Online and Azure development to articles on how to engage your users and project teams in SharePoint. As usual - something you just must read!

    In this edition I've participated with one article about dynamic Ribbon customizations with Page Components. It's quite lengthy (sorry about that) and contains a lot of code, and quite a few tricks that make Ribbon customizations easy(ier). My idea behind the article was to show a real world implementation of a Ribbon customization, instead of any Hello World stuff. This customization actually improves the OOB user interface (IMO) and allows your users to work with Workflows much more easily. You can basically take the code from the article and install it in your farms.

    So here's where to get it:


