Archives

Archives / 2013
  • Summing up the year of 2013 and embracing 2014

    Tags: SharePoint, Azure, Personal, SharePoint 2013, Office 365

    Wow, 2013 was an interesting year, and the time has come for my annual blog post to sum up the year that has soon passed us and to look a bit into the crystal ball for the next one. This is my seventh summary post and it is always fun to look back at what has happened during the last 12 months (2012, 2011, 2010, 2009, 2008, 2007 and 2006).

    For me the year has been really intensive on all levels. I don’t think I’ve ever experienced such a huge demand for my professional services as right now, there is so much new stuff to learn and it’s harder and harder to keep up, and I have a hard time resisting doing tons of community stuff. At the same time we had a huge construction project at our house, and of course having two soon-to-be teenage girls takes its toll!

    Writing

    The number of blog posts I create every year continues to decrease, but I do hope the quality improves and that you still get some decent value out of my posts. There are so many good bloggers out there and I don’t want to repeat what everyone else is writing about. There are a couple of posts that I’m quite proud of, and here’s the list of the ones you have visited the most during the last 12 months:

    It’s no coincidence that four of the top five posts written this year are about Office Web Apps Server 2013 (WAC) – it is my new favorite server product, and I think it is one of the core server products/services that will be a huge and integral part of “One Microsoft” and its services.

    I also had the benefit of participating in a “real” writing project – as a co-author of “Inside Microsoft SharePoint 2013”. This was my second book, written together with some of the most famous SharePoint book authors. If you still haven’t ordered yourself a copy, then you’re missing out on something!

    Speaking

    I’ve continued to do sessions at conferences, perhaps not that many this year. I try to choose conferences that fit me, my family and my clients, and I also try to focus on creating good, new and interesting content. I’m not the kind of person who likes to do the same content over and over again. I’m incredibly lucky to be in this position and to be able to travel and meet all the awesome people around the world. I know there are a couple of conferences I would have liked to present at, but had to turn down due to other commitments…maybe next year. To read more about the presentations I’ve done over the last year and see the decks and some video recordings, check out my Presentations page.

    MVP

    I got re-awarded the MVP status once again, now for the fourth time. It’s always really nice to be given this award.

    MCSM

    Talking about the Microsoft Certified Solutions Master (MCSM) certification could fill a couple of posts on its own, but let’s try to crunch it down. Early January this year I attended the beta rotation of the brand new MCSM program for SharePoint. This program was totally redone to suit both on-premises SharePoint 2013 and Office 365/SharePoint Online (contrary to what some people think and say). There is/was no better training for SharePoint, in the cloud or not, than this program, and there will never be such a good program again! I was fortunate to pass both the written exam and the qualification lab “in-rotation” (that is, with no retakes or anything), being one of the first ones. Unfortunately the whole MCSM program was cancelled during this year. But once a Master, always a Master. I’m really proud to be one of the few who have passed MCM for SharePoint 2010, MCA for SharePoint 2010 and MCSM for SharePoint (2013) – a bit sad I didn’t get the chance to take the 2007 exam and get a full hand :-(

    SharePoint…

    Can’t write this post without a little section about SharePoint. What will happen to SharePoint, will it cease to exist? To some degree I do think so; SharePoint as a product has played out its role, in my opinion. SharePoint is just a piece in the puzzle of future collaboration software. Take a look at how Workflow has been moved out of SharePoint, how Office Web Apps is a separate product, how applications now are Apps outside SharePoint, how Enterprise Social now is a service (in Yammer). SharePoint will be there in the form of building sites and acting like glue between all the other services. Will it be known as SharePoint? I don’t really know, but in a few years, I don’t think so. It sounds like judgment day for some, and it might be, unless you are prepared. I think this “brave new world” will be full of opportunities for the ones who can accept the change…and I’m looking forward to it! On the other hand, the recent messages from Redmond about SharePoint on-premises assure the current on-premises customers that SharePoint as a product will be here for another couple of years, which is good; it gives you good options to slowly move from a product to a service. But the innovation will be in the services, Office 365 and Azure, not in the products.

    Predictions

    Last year’s predictions were not that far off the mark. The Cloud message is ever increasing, and there’s no end to it. I also predicted a “collapse” of the SharePoint community, and to some degree I think that has started to happen. The community is still thriving, but there is not a single community as there used to be. Several new community sites and community conferences have started this year. Not that it is a totally bad thing, but in my opinion it does not help the community move forward. We’re also seeing many of the old community celebs and influencers moving away from SharePoint as a specialty and instead focusing on the new set of services.

    So what about 2014 – what do you think Wictor?

    SharePoint is dead, long live SharePoint. I wrote it above; SharePoint as a product is slowly going away, and instead “SharePoint as a service” is where the focus should be. If any of you watched the PDC08 keynote, when Azure was announced – do you remember the slide where “SharePoint Services” was one of the listed services? I think this is where we’re going, six years later.

    Azure domination! The Azure team at Microsoft is really impressive right now; look at all the different services they announce and improve. They were a bit late to the game, but are now captains of the Cloud movement. If there was something I would bet my career on now, it would be Azure.

    Services, services, services! Everything will be services. Combine the things I said about SharePoint with the things about Azure, and add the recent announcements of the killed ForeFront products (and others). Microsoft is all in on the Devices and Services thing, and you should be too. This changes the way we design, build and sell our professional services.

    The future does look a bit cloudy, doesn’t it?

    Happy New Year

    That was it! I do have a lot more to say, but you all should be on vacation right now stocking up on energy for 2014, so I’ll keep it short. Next year will be an intensive year for me, I know it. I’m already excited about the new engagements I have planned for early 2014, about the SharePoint Conference in Las Vegas (the last SP conference?) where I will be a presenter for the first time, and also about the big change at Microsoft with a new CEO – how will that affect me!?

    So, to all of you I wish a Happy New Year and I’m looking forward to seeing a lot of you out there next year!

  • SharePoint 2013 Architecture Survey

    Tags: SharePoint 2013, Conferences, Surveys

    Happy Holidays everyone!

    At the upcoming SharePoint Conference, next year in Las Vegas, I will be presenting a session called Real World SharePoint 2013 Architecture decisions. The session will discuss and give examples of real-world decisions and trade-offs you might be faced with as a SharePoint Architect. In order to make the session even more interesting, I would like you all to help out with some statistics. Therefore I have created a small survey with a few questions. Filling it out should not take you more than a few minutes, so there is no excuse not to do it.

    You can answer the questions either with data from your own company’s standpoint, or, if you are a consultant, you are free to answer based on your current project(s).

    The survey in its full glory can be found here: http://askwictor.com/SPC14Survey

    Feel free to spread this link across all your networks; the more data I have, the more interesting it will be.

    The results will contribute to the SPC334 session and will shortly after that be presented here on this blog.

  • I will be speaking at SharePoint Conference 2014 in Las Vegas

    Tags: SharePoint 2013, Conferences

    I’m really proud to announce that I will be speaking at the long anticipated SharePoint Conference 2014 in Las Vegas, March 3-6 2014. The SharePoint Conference, hosted by Microsoft, is returning to Las Vegas, but this time located at the Venetian – bigger and perhaps more interesting than in a long time. If you are in the SharePoint business as a developer, IT-Pro, architect, business analyst, power user or executive, then this is the conference where you would like to be next year.

    For the first time I will be speaking at this conference, which I’m really excited and a bit nervous about. I will present two sessions that I’m really passionate about:

    SPC383: Mastering Office Web Apps 2013 deployment and operations

    Tuesday 4/3 9:00-10:15 @ Lido 3001-3103

    Microsoft Office Web Apps 2013 is a crucial part of any SharePoint, Exchange and Lync on-premises deployment. In this session we will dive into the details of planning, deploying and operating your Office Web Apps server farm. Through a great number of demos we will create a new farm from scratch, make it highly available and then connect it to SharePoint and Exchange. We will cover aspects such as scale considerations, patching with minimum downtime and security decisions you have to make as an Office Web Apps farm admin.

    SPC334: Real-world SharePoint architecture decisions

    Wednesday 5/3 13:45-15:00 @ Lido 3001-3103

    Being a SharePoint architect can be challenging – you need to deal with everything from hardware, resources, requirements and business continuity management to a budget and, of course, customers. You, the architect, have to manage all this and in the end deliver a good architecture that satisfies all the needs of your customer. Along the way you have to make decisions based on experience, facts and sometimes gut feeling. In this session we will cover some of the architectural changes in the SharePoint 2013 architecture and some of the new guidance from Microsoft, and provide insight into a number of successful real-world scenarios. You will see what decisions were made while designing and implementing these projects, with emphasis on why they were made.

    SPC356: Designing, deploying, and managing Workflow Manager farms

    Wednesday 5/3 10:45-12:00 @ Lido 3001-3103

    Workflow Manager is a new product that provides support for SharePoint 2013 workflows. This session will look at the architecture and design considerations required for deploying a Workflow Manager farm. We will also examine the business continuity management options for high availability and disaster recovery scenarios. If you want a deep dive on how Workflow Manager works, then this is the session for you.

    I’m very glad to be doing this Workflow session, since it will be co-presented with my buddy, the almighty MCA/MCM/MCSM/MCx Spencer Harbar!

    I’m really looking forward to seeing you there – in my sessions, in the other interesting sessions, in the casinos and … I’d better stop there – what happens in Vegas stays in Vegas…

    [Updated 2014-02-19] Added rooms and time slots as well as the SPC356 session.

  • Inside Microsoft SharePoint 2013 is here, just in time for the holidays…

    Tags: SharePoint 2013, Books

    I remember a person who clearly stated “I will never ever write a book again”. Yup, ’twas me. I managed to keep that promise for a year and a half. But when an interesting opportunity appears, I’m usually all-in again. And so it was.

    Early this year I got a request from some dear friends to help write another book – fortunately this time not as the single responsible author, but together with a really experienced bunch of SharePoint people whose knowledge and resumes are really impressive. I was asked to write two chapters in the Inside Microsoft SharePoint 2013 book, published by Microsoft Press.

    This is a book for every single developer out there – newbie, apprentice or master candidate (oh, there is no such thing anymore…) – who has an interest in SharePoint 2013. Yes, that is most likely you. This is the book that you should have in your toolset. It is packed with everything from the basics up to the more advanced scenarios, covering all the different workloads of SharePoint 2013.

    If you take a look at the authors of this book you know it’s a keeper: Scot Hillier, Mirjam van Olst, Ted Pattison, Andrew Connell, Kyle Davis and little ol’ me.

    If you order it now you will have it by Christmas!

    Oh, and today you get a 50% discount on the book here!

  • SharePoint 2013: Fix to the “Could not find Stored Procedure Search_GetRepositoryTimePerCrawl” error

    Tags: SharePoint 2013

    Introduction

    In this post I will show you how to fix the “Could not find Stored Procedure ‘Search_GetRepositoryTimePerCrawl’” exception in SharePoint 2013. This is an exception that you can get when looking at crawl log details for a Search Service Application. The error might go unnoticed since it will not affect indexing or querying.

    Could not find stored procedure Search_GetRepositoryTimePerCrawl

    Symptoms

    In SharePoint 2013 when you are trying to inspect crawl logs and statistics for indexing and querying you might see exceptions that say “Could not find stored procedure ‘Search_????’”. You will also see critical errors in the ULS Trace Logs like this:

    SharePoint Foundation Database 880i High     System.Data.SqlClient.SqlException (0x80131904): 
    Could not find stored procedure 'Search_GetRepositoryTimePerCrawl'.
    SharePoint Foundation Database 5586 Critical Unknown SQL Exception 2812 occurred. 
    Additional error information from SQL Server is included below.  
    Could not find stored procedure 'Search_GetRepositoryTimePerCrawl'.
    

    Note that the name of the Stored Procedure may vary.

    The Trace Logs reveal that there are missing stored procedures in the Usage Database, and by cranking up SQL Server Management Studio it clearly shows that there are no stored procedures with those names. You will only see one stored procedure (or sometimes none) with the Search_ prefix.
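
    If you prefer a console over Management Studio, the following is a minimal sketch of the same check using Invoke-Sqlcmd – note that the server instance and Usage database names are assumptions you need to adjust to your own farm:

    Import-Module SQLPS -DisableNameChecking
    # List the Search_* stored procedures in the Usage database
    # ("SQL01" and "SharePoint_Usage" are placeholder names)
    Invoke-Sqlcmd -ServerInstance "SQL01" -Database "SharePoint_Usage" -Query `
        "SELECT name FROM sys.procedures WHERE name LIKE 'Search[_]%' ORDER BY name"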

    Missing sprocs

    Even though these Stored Procedures are missing, the Search Service Application continues to crawl and index, and querying works as normal.

    Cause and Resolution

    There might be several causes for this error. One reason could be that the Usage Database has been recreated (for instance to increase the max total bytes in the partitions). When the Usage Database is created it will only contain the default set of stored procedures; it is not aware of any Service Application or custom Usage Providers.

    In this case the Search Service Application Usage Provider has not created the necessary Stored Procedures in the database. This is done by a timer job called “Search Health Monitoring – Trace Events”. Once this timer job has executed successfully, the required stored procedures are created again. Normally this timer job executes every minute, so this error should be very infrequent. But just as with any timer job, an admin can change the schedule or even disable it – and then, when the Usage Database is re-created, this error will occur.
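
    If you do not want to wait for the schedule, you can queue the timer job manually from PowerShell. A minimal sketch, matching on the job display name quoted above:

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    # Find the health monitoring trace events job(s) and run them right away
    Get-SPTimerJob |
        Where-Object { $_.DisplayName -like "Search Health Monitoring*Trace Events*" } |
        Start-SPTimerJob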

    Sprocs is back!

    Summary

    You’ve just seen the cause of, and fix for, the missing stored procedures when looking at crawl and query logs and statistics in SharePoint 2013. The error messages might be frightening, but the fix is quick and easy – just like anything in SharePoint, once you understand the moving bits and pieces.

  • SharePoint Saturday - In Stockholm for the first time

    Tags: SharePoint, Conferences

    Finally we’re getting SharePoint Saturday to Stockholm! Next year in January – or to be more precise, on the 25th of January 2014 – the global SharePoint Saturday event will come to central Stockholm and the World Trade Center.

    What is a SharePoint Saturday?

    SharePoint Saturdays are free events that happen in cities around the world; unfortunately most of them are on the other side of the pond, but once in a while we see these great events pop up in Europe. They are free in the sense that they are organized by volunteers, the speakers do it because they have nothing better to do on Saturdays, there is no entrance fee, and if you’re lucky you can get some swag as well! The events are sponsored, but just to get a good venue and of course the accompanying SharePint after a full day of sessions.

    SharePoint Saturday Stockholm

    As I said, for the first time we will have our own SharePoint Saturday here in Stockholm. The event is organized by my colleague Hannah Swain, Mattias Einig and fellow MCM Erwin van Hunen – a great bunch of committed people. They are currently setting up the speakers, agenda, venue and all of the gazillion things that need to be done. The event will be held at the World Trade Center, which is right in the middle of Stockholm and as close as you can get to all the transport links.

    You can be a speaker!

    One of the key things, and what I really like about these events, is that everyone can and should apply for a speaker slot. It’s a great opportunity to practice your speaking skills, learn what it is like to contribute to the community, get your name out there and perhaps boost your ego a bit. I know that there are lots of people, including shy Swedes, who can, will and should be at our very first SharePoint Saturday as speakers. Don’t just sit there – go and submit your session proposals right away!

    I’m really looking forward to meeting all of you there!

  • The correct way to execute JavaScript functions in SharePoint 2013 MDS enabled sites

    Tags: SharePoint 2013, MDS

    Introduction

    JavaScript is the future of SharePoint development (and please don’t quote me on that :-). JavaScript is everywhere in SharePoint 2013 and upcoming incarnations, and you will see a couple of posts on this topic from me in the future. The JavaScript language is easy (well, sort of), but the many different implementations and APIs built using JavaScript might cause confusion. One of the things that makes JavaScript development quite problematic in SharePoint 2013 is the Minimal Download Strategy (MDS). In this post I will show you what to think of when building JavaScript features on top of SharePoint, and how to make them MDS-aware so they work with MDS.

    Minimal Download Strategy

    Almost a year and a half ago I wrote the Introduction to the Minimal Download Strategy (MDS) post, which has since, surprisingly, been one of the most visited ones on this little site! I won’t recap everything in that post, but basically MDS is a framework that allows SharePoint 2013 to download and render pages more efficiently by focusing only on those parts of the page that actually need an update. For instance, when navigating between folders in a document library, we do not need to re-render the top bar or footer etc. All this to make the perceived performance better and to reduce bandwidth.

    The Problem

    The big problem with MDS is that it only works well with the out-of-the-box stuff on a Team Site or similar templates. As soon as you start to drop in your own Web Parts, custom scripts or customizations that have not been adapted for the Minimal Download Strategy, you will not see any performance benefits – au contraire, you will see a performance degradation in many cases – so you turn the MDS feature off.

    One of the biggest problems is that more and more customizations in SharePoint involve JavaScript, and so far I have not seen a single non-SharePoint native script that handles MDS that well – and I’ve been bashing my head against the wall for some time to get it to work properly. Many, if not most, scripts want to execute a function when the page loads, either to initialize something or to change something (haven’t we all seen all those dreaded jQuery UI customizations that could have been done with a simple CSS fix!). Common approaches to this are to use the SharePoint _spBodyOnLoadFunctionNames.push() or the jQuery $(document).ready() {} method. These methods rely on the DOM events when the page loads, so they might work perfectly fine on your first page load when MDS is enabled, since we need to fetch the full page, but on subsequent page transitions the JavaScript will not fire. An MDS page transition does not fire any DOM document load events, since it asynchronously and partially updates the page.

    A non MDS compatible custom field using JSLink

    List with custom JSLink column

    Let’s take a look at an example using a simple field with Client Side Rendering (or JSLink) that just makes negative numbers red and leaves positive numbers black.

    This is how my field is defined using CAML. In this sample I create everything as a sandboxed, purely declarative solution, but it’s easy to create the exact same solution using a SharePoint App or a Full Trust Code solution.

    <Field
          ID="{ce3d02df-d05f-4476-b457-6b28f1531f7c}"
          Name="WictorNumber"
          DisplayName="Number with color"
          Type="Number"
          Required="FALSE"
          JSLink="~sitecollection/Wictor/NumberWithColor.js"
          Group="Wictor">
    </Field>

    As you can see it is a standard Number field with a JSLink attribute. Next I deploy the JavaScript file, using a Module element, that looks like this:

    var Wictor = window.Wictor || {};
    Wictor.Demos = Wictor.Demos || {};
    Wictor.Demos.Templates = Wictor.Demos.Templates || {};
    Wictor.Demos.Functions = Wictor.Demos.Functions || {};
    
    // Display template: negative values are wrapped in a red span
    Wictor.Demos.Functions.Display = function (context) {
    	var currentValue = context.CurrentItem.WictorNumber;
    	if (currentValue > 0) {
    		return currentValue;
    	}
    	else {
    		return '<span style="color:red">' + currentValue + '</span>';
    	}
    };
    
    // Template overrides for the WictorNumber field
    Wictor.Demos.Templates.Fields = {
    	'WictorNumber': {
    		'DisplayForm': Wictor.Demos.Functions.Display,
    		'View': Wictor.Demos.Functions.Display,
    		'NewForm': null,
    		'EditForm':  null
    	}
    };
    
    SPClientTemplates.TemplateManager.RegisterTemplateOverrides(Wictor.Demos);

    Everything is pretty straightforward. I define a couple of namespaces (which IMO is a good coding practice), create a display function for my custom field, create the JSLink templates and finally register the templates with the TemplateManager. I will not dive deep into the JSLink stuff, but if you need a really good guide I urge you to read Martin Hatch’s JSLink series.

    Once this little solution is deployed and the features are activated, we can add the field to a list and it should work just fine – unless we have MDS enabled on our site and start navigating back and forth between pages within the site. Once you navigate away from the list containing this custom field and navigate back using MDS page transitions, it will stop using our custom template. Not funny…

    The Solution to these problems

    Fortunately there is a solution to this problem, and there are two things that are really important.

    The RegisterModuleInit method

    The first reason that our field does not use the template we defined in JavaScript is that we register the templates with the field only when the JavaScript is loaded and executed. When navigating to another page using MDS, this registration is reset and the JavaScript file is not evaluated again. So we need to find a way to register the module each and every time the page is used. Traditional web browsing, with full page reloads, allows us to use jQuery $(document).ready or the SharePoint function _spBodyOnLoadFunctionNames.push() for these kinds of things. But they only do their work when the page load event is fired – and an MDS page transition does not trigger that event.

    The SharePoint team has of course not forgotten about this scenario and has given us a JavaScript function called RegisterModuleInit(). This method is specifically designed for these kinds of scenarios; we can use it to add methods that are to be executed whenever an MDS page transition is done. The RegisterModuleInit() function takes two parameters: the first one is the path to the JavaScript file that is associated with the function to execute, and the second parameter is the function to execute. One really important thing to note is that the path to the JavaScript file must be exactly the same as the one used when registering it, so depending on whether it’s loaded from the Layouts folder, a folder within the site etc., you have to make sure to use the exact same path in the RegisterModuleInit() call.

    Let’s rewrite the last part of our JavaScript file and replace the line that registers the templates with these lines:

    Wictor.Demos.Functions.RegisterField = function () {
        SPClientTemplates.TemplateManager.RegisterTemplateOverrides(Wictor.Demos);
    };
    
    // Re-register the templates on every upcoming MDS page transition
    RegisterModuleInit(
      _spPageContextInfo.siteServerRelativeUrl + 'Wictor/NumberWithColor.js', 
      Wictor.Demos.Functions.RegisterField);
    
    // ...and register them right now, for the current page load
    Wictor.Demos.Functions.RegisterField();

    I’ve encapsulated the template registration in a new function called RegisterField(). We then use the RegisterModuleInit() function to register this function to be executed whenever our JavaScript file is used on a page. The _spPageContextInfo object is used to get the server relative URL of the site, to which we append the relative path where the JavaScript file is deployed. Finally we execute the RegisterField() function directly, since RegisterModuleInit() only handles upcoming page transitions.

    If we now try this on an MDS enabled site, you will quickly notice that you get JavaScript errors the second time you visit the list with this custom field; it should say something like below if you have a debugger attached or configured. In the worst case MDS will notice that there is a JavaScript error and silently reload the page, causing a second page load and reducing performance (you will then likely also see another JavaScript error, which we’ll talk about in a bit).

    Error: Unable to get property 'Demos' of undefined or null reference

    Looks like there’s something wrong with our namespaces!

    The Garbage Collecting issue

    The second issue requires some understanding of the Minimal Download Strategy and knowing that MDS actually has a built-in garbage collector (you didn’t see that coming, right?). When doing a page transition, MDS will clean up window scoped variables and delete them. This is a good thing – just imagine the number of potential JavaScript objects and structures that might have been created and stored in memory if you’re working within a site and jumping back and forth between pages. The good thing is that it will not delete objects that are properly registered as namespaces, and with that I mean Microsoft Ajax namespaces. Let’s go back to our very first sample, the one with the JSLink field. Remember that I created a number of namespaces in the JavaScript file to hold my templates and functions. If I change the very first namespace definition in that file from:

    var Wictor = window.Wictor || {}

    To instead utilize the Microsoft Ajax Type.registerNamespace() function like this we will be golden:

    Type.registerNamespace('Wictor')

    Try that – redeploy your JavaScript with both the RegisterModuleInit() function and the Type.registerNamespace() declaration – and you will see that (almost) everything executes just as expected. The field will render just as we want, even though we navigate back and forth from the list containing the custom field.

    Getting it to work without MDS as well

    When MDS is disabled on the site, when you use the “normal” URL to the list with the custom field, when a JavaScript error occurs as above, or on some other occasions, your page will do a full page load – that is, not an MDS page transition – and you will get a JavaScript error that states:

    Error: '_spPageContextInfo' is undefined

    In this case the JavaScript object that we use to get the site relative URL is not created and does not exist. You will not get this error during MDS page transitions, since that object is created on the first page load. So how do we handle this situation?

    Since we don’t have the _spPageContextInfo object on the page, we cannot do the RegisterModuleInit() move. But on the other hand, if we get into this situation we’re not in “MDS mode” and do not need it…clever, huh! Also note that we could get around this error by not using a site relative path and deploying the file to the Layouts folder – but try to do that in the cloud. Let’s rewrite the last part again:

    Wictor.Demos.Functions.RegisterField = function () {
        SPClientTemplates.TemplateManager.RegisterTemplateOverrides(Wictor.Demos);
    };
    
    Wictor.Demos.Functions.MdsRegisterField = function () {
        var thisUrl = _spPageContextInfo.siteServerRelativeUrl
    		+ "Wictor/NumberWithColor.js";
        Wictor.Demos.Functions.RegisterField();
        RegisterModuleInit(thisUrl, Wictor.Demos.Functions.RegisterField);
    };
    
    // _spPageContextInfo is not defined on a full (non-MDS) page load here
    if (typeof _spPageContextInfo != "undefined" && _spPageContextInfo != null) {
        Wictor.Demos.Functions.MdsRegisterField();
    } else {
        Wictor.Demos.Functions.RegisterField();
    }

    We still have a function for registering the field with the template manager, exactly the same as before. Then we introduce another method that is only used when MDS is enabled and we’re in MDS mode; that method uses _spPageContextInfo to register the script to run on each MDS page transition. Finally, we check whether _spPageContextInfo exists: if it does, we use our MdsRegisterField method, otherwise we just call the function that registers the template.

    Our full JavaScript should now look something like this:

    Type.registerNamespace('Wictor');
    Wictor.Demos = Wictor.Demos || {};
    Wictor.Demos.Templates = Wictor.Demos.Templates || {};
    Wictor.Demos.Functions = Wictor.Demos.Functions || {};
    
    Wictor.Demos.Functions.Display = function (context) {
    	var currentValue = context.CurrentItem.WictorNumber;
    	if (currentValue > 0) {
    		return currentValue;
    	}
    	else {
    		return '<span style="color:red">' + currentValue + '</span>';
    	}
    };
    
    Wictor.Demos.Templates.Fields = {
    	'WictorNumber': {
    		'DisplayForm': Wictor.Demos.Functions.Display,
    		'View': Wictor.Demos.Functions.Display,
    		'NewForm': null,
    		'EditForm':  null
    	}
    };
    
    Wictor.Demos.Functions.RegisterField = function () {
        SPClientTemplates.TemplateManager.RegisterTemplateOverrides(Wictor.Demos);
    };
    
    Wictor.Demos.Functions.MdsRegisterField = function () {
        var thisUrl = _spPageContextInfo.siteServerRelativeUrl
    		+ "Wictor/NumberWithColor.js";
        Wictor.Demos.Functions.RegisterField();
        RegisterModuleInit(thisUrl, Wictor.Demos.Functions.RegisterField);
    };
    
    if (typeof _spPageContextInfo != "undefined" && _spPageContextInfo != null) {
        Wictor.Demos.Functions.MdsRegisterField();
    } else {
        Wictor.Demos.Functions.RegisterField();
    }
    

    Now, when we test this solution it should work with and without MDS enabled on the site, on all MDS Page transitions back and forth and we spare at least a handful of kittens using this code.

    Summary

    I’ve just shown you how to create a custom field rendering using JSLink that works both with and without MDS. It requires you to pop a set of additional JavaScript lines into each JavaScript file, but it is basically exactly the same JavaScript snippet each and every time. This solution does not only work for JSLink fields; it is also valuable for delegate controls, web parts, ribbon customizations etc. How much easier my life would have been if this had been documented on MSDN twelve months ago…

    PS: If you find any situation where this does not work, please contact me and I’ll try to extend the scenario to cover that as well.

  • Clearing up the confusion with Host Named site collections and Path Based site collections

    Tags: SharePoint 2013

    Introduction

    I’ve been reading and seeing a lot of blog posts, articles, videos etc. over the last few months discussing Host Named site collections vs Path Based site collections in SharePoint 2013 (and 2010, for that matter). What I find interesting is that a lot of those articles are either misinterpreting the official guidance and documentation on TechNet or are just plain wrong. In this post I will try to clear up some of the confusion, and hopefully I’m not that wrong in this post. And yes, I can agree that Microsoft could have been clearer on this topic, but what’s there is actually pretty decent.

    What are Path Based site collections?

    Let’s start with Path Based site collections (PBSC), which has been the traditional method of creating and locating site collections. It’s based on the fact that you have a Web Application (IIS Web Site) in SharePoint listening to a specific host header, (1) in the picture below. Each site collection is then located relative to this host header. A site collection can only exist under (wildcard) or in (explicit) a managed path, (2) in the picture below. The URL to the site collection is the combination of the protocol, the host header, the managed path and the name of the site collection. In order to have different host names, multiple Web Applications must be created, as in the example below where the Personal Sites are located on a different host name. When using Path Based site collections you configure the managed paths on the web application level.

    Path Based site collections

    The illustration above shows us how we need two Web Applications to host two different host names; one for intranet.contoso.com and one for my.contoso.com, and that we use different managed paths in the two different Web Applications.

    Path Based site collections have been the de facto, and also recommended, approach, and therefore it’s the most well-known method. All configuration can be done using SharePoint Central Administration and most admins are familiar with managing Path Based site collections.
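
    As a concrete illustration, here is a minimal PowerShell sketch – the URLs and accounts are hypothetical – of a web application scoped managed path with a path based site collection beneath it:

    # Managed path scoped to the web application (PBSC)
    New-SPManagedPath "teams" -WebApplication "http://intranet.contoso.com"
    # The site collection URL = host header + managed path + site name
    New-SPSite "http://intranet.contoso.com/teams/projectx" `
        -Name "Project X" -OwnerAlias "CONTOSO\administrator" -Template "STS#0"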

    What are Host Named site collections?

    Now, let’s take a look at Host Named site collections (HNSC). Host Named site collections are now the “recommended” approach, very much due to the fact that SharePoint Online/Office 365 uses them, which makes this the most tested (by Microsoft) method. Note that this alone does not mean that HNSC should be the default method for all new deployments.

    The largest difference compared to Path Based site collections is that each site collection now has its own URL, and that the web application should not listen to a specific host header (3) but to all incoming requests. Even though the site collections have their own URLs, site collections can still only be created under/in a managed path. The managed paths used by HNSC are not the managed paths configured at the web application level; instead they are configured on the Farm level (4).

    Host Named site collections

    One of the “myths” around Host Named Site Collections is that you cannot use managed paths, which is completely wrong. As you can see in the illustration, this web application uses multiple host names and multiple farm level managed paths. You can see that we have intranet.contoso.com and my.contoso.com in the same web application. We could also add new site collections for search.contoso.com or teams.contoso.com, just as long as the DNS (and certificate) is configured correctly for IIS.

    One of the caveats with HNSC is that you cannot configure any of this using Central Administration. You need to use PowerShell to create the web application, the managed paths and each and every site collection.
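
    To make that concrete, here is a minimal sketch of a complete HNSC setup – all names, URLs and accounts are hypothetical placeholders:

    # Web application without a host header, listening to all incoming requests
    New-SPWebApplication -Name "HNSC Web Application" -Port 80 `
        -ApplicationPool "HNSCAppPool" `
        -ApplicationPoolAccount (Get-SPManagedAccount "CONTOSO\sp_apppool") `
        -AuthenticationProvider (New-SPAuthenticationProvider)
    
    # Farm level managed path, shared by all host named site collections
    New-SPManagedPath "teams" -HostHeader
    
    # Root site collection without a site template (see the support note below)
    New-SPSite ((Get-SPWebApplication "HNSC Web Application").Url) `
        -Name "Root" -OwnerAlias "CONTOSO\administrator"
    
    # A host named site collection with its own host name
    New-SPSite "http://intranet.contoso.com" -Name "Intranet" `
        -HostHeaderWebApplication (Get-SPWebApplication "HNSC Web Application") `
        -OwnerAlias "CONTOSO\administrator" -Template "STS#0"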

    Note: also note that the web application above has a root site collection without a site template. This is currently a support requirement; see for instance the support article KB2590564.

    Do I need to choose one of HNSC or PBSC methods?

    This is one of the largest misconceptions in this discussion. No, you do not have to choose either HNSC or PBSC. As usual with SharePoint – it depends! It depends on your requirements. For instance, if you want process isolation of site collections, you can use HNSC for most of your site collections and then use PBSC for those that need to be in separate application pools – all you need to do is configure your DNS and the correct host headers on your PBSC. Requirements for alternate access mappings might also be a reason to consider PBSC.

    So, which method should I use then?

    My recommendation is that you always start your design with Host Named site collections. It gives you way more flexibility in the way you assign host names to your site collections, and you will also reduce the memory footprint of SharePoint on the web servers. Of course, as always, if you get bad code in your SharePoint farm that takes the application pool down, all of your sites will go down. Depending on your scalability requirements you might need to evaluate the need for HNSC and/or PBSC.

    The TechNet article referenced below contains even more details on some subtle but significant differences between HNSC and PBSC – make sure to read and understand those before starting your design. Well, you can always export and import site collections between the two methods if you need to change the structure at a later stage – but the end users will most likely not be fond of that.

    References

    Here are some important and authoritative articles and posts on the topic that you should read:

    Kirk Evans, Microsoft: What Every SharePoint Admin Needs to Know About Host Named Site Collections

    TechNet: Host-named site collection architecture and deployment (SharePoint 2013)

    Summary

    I hope that this post cleared up the confusion between Host Named site collections and Path Based site collections. Which method to choose is always up to you and your requirements. I did not go into all the nitty gritty configuration details in this post; I leave that for now, or to the TechNet article…

  • Office Web Apps Server: Which version is installed?

    Tags: Office Web Apps, WAC Server

    If you have been working with SharePoint, you should know by now how to get the build version of an installation using PowerShell. Knowing the version of an installation is crucial for troubleshooting and for knowing what features or limitations the current installation has, given the new release cadence. If you don’t know how to do it, then Bing for it and then return here. But how do you do the same for Office Web Apps Server 2013?

    Retrieve the version number of an Office Web Apps Server installation

    The current version of an Office Web Apps Server 2013 (WAC) installation is also important to know. Just as with SharePoint, new features and bugs can and will be introduced over time, and you need to know the version to be able to get support and to patch it correctly. Unfortunately there is no approach as easy as with SharePoint – we cannot use the WAC cmdlets to get the current build version.

    Instead we can rely on another PowerShell cmdlet – Invoke-WebRequest. Hmm, that has nothing to do with WAC, you might be thinking, which is true. But Office Web Apps returns its current version as an HTTP header – the X-OfficeVersion header.

    In order to use this cmdlet to invoke an HTTP request, we also need to send an anonymous request to avoid 401s. For this we can use one of the anonymous end-points that Office Web Apps Server exposes, for instance the end-point for the broadcast ping, like this:

    (Invoke-WebRequest "https://wac.contoso.com/m/met/participant.svc/jsonAnonymous/BroadcastPing").Headers["X-OfficeVersion"]

    As you can see, we request the endpoint and retrieve the specific version header from the response. This will return the current build version of your WAC farm, for instance “15.0.4535.1000”, which is the August 2013 hotfix for WAC.
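
    If you check versions often, you can wrap the call in a small helper function – a sketch, where the host name is of course a placeholder for one of your own WAC farm host names:

    # Hypothetical helper: probe a WAC farm and return its build version
    function Get-WACVersion {
        param([string]$WacHost)
        $url = "https://$WacHost/m/met/participant.svc/jsonAnonymous/BroadcastPing"
        (Invoke-WebRequest -Uri $url -UseBasicParsing).Headers["X-OfficeVersion"]
    }
    Get-WACVersion "wac.contoso.com"   # e.g. 15.0.4535.1000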

    Known version numbers

    I have collected the known version numbers of Office Web Apps Server 2013 on this page: http://www.wictorwilen.se/WACVersions. I will try to continuously update it as the WAC server receives new patches and upgrades. As a bonus I also added version numbers for the version that SkyDrive currently uses (interesting!).

  • SharePoint and Exchange Forum 2013 – wrap-up

    Tags: SharePoint, Presentations, Conferences

    Stockholm Archipelago

    Two weeks ago the ship returned to Stockholm from a 48 hour cruise on the Baltic Sea, hosting the 10th edition of the SharePoint and Exchange Forum. As usual the conference was a great show, arranged by MVP Göran Husman and his Humandata crew. Thank you!

    We enjoyed a lot of great sessions from well-known speakers from around the world, and we spent the nights in the bars (and on the tables). I had a lot of fun, even though it was a bit weird having my first session just as the ship left the harbor and turned sharply – as far as I could tell, no one got sick, at least not from the sea.

    My session slides, along with those from all the other speakers’ sessions, can be found on the SharePoint and Exchange Forum website – http://www.seforum.se. All my demo code can be found here.

    Hope to see you next year, who knows what we will be doing then!

  • TechEd New Zealand 2013 Wrap-up

    Tags: Conferences, Presentations, SharePoint 2013

    It’s been over a week since I got home from an amazing trip to the other side of the globe (literally). It was a long way to New Zealand, but definitely worth it. It was my first ever TechEd, both as attendee and presenter, and my first trip to New Zealand. I had a great couple of days meeting the SharePoint community and other Microsoft junkies, and I also had the opportunity to have a quick breakfast with Scott Guthrie.

    I had three sessions and one panel session – and all of them were pure fun! Thanks for all the great questions and the feedback I received! The attendees and the conference were just awesome.

    You can find my three sessions at Channel 9:

    Piha Beach

    Also a big thanks to my fellow MCA Wayne Ewington, who was our personal guide and welcomed us with open arms to New Zealand.

    Update: the code for the solutions can be found here.

  • Microsoft Advanced Certification (MCA, MCSM, MCM) - the end of an era

    Tags: MCSM, SharePoint, MCM, MCA

    This is a sad and dark day for the Microsoft community, especially for those of us who love products such as SQL Server, Exchange, Lync and SharePoint. Microsoft Learning (MSL) has decided to kill their advanced certifications: Microsoft Certified Architect (MCA) and Microsoft Certified Solutions Master (MCSM), formerly known as Microsoft Certified Master (MCM). This is also a post I hoped never to write; as a matter of fact, a couple of weeks back I started drafting a post that would recommend these certifications to the community out there – that post will never see the light now.

    Breaking up with an e-mail!

    This morning I, and the whole MCA/MCSM/MCM community, got an e-mail from Microsoft Learning stating that “we [MSL] are continuing to evolve the Microsoft certification program” but “[MSL] will no longer offer Masters and Architect level training rotations and will be retiring the Masters level certification exams as of October 1, 2013”. This e-mail came as a shock to me and, as it seems, to everyone involved, including the instructors. Everyone already certified will retain their certification status and can still use the logo (well, thank you very much), and those in rotation or with scheduled exams have barely one month to pass – or get just a small refund! This is truly a slap in the face for everyone! Also, sending this e-mail out on a weekend with a US holiday coming up, just to get under the radar, is a cowardly action – and it just shows how hastily and poorly thought through the decision was.

    Why are MSL retiring the advanced certifications?

    Well, that is one question that I would like to have answered. The rather offensive e-mail stated the following: “The IT industry is changing rapidly and we will continue to evaluate the certification and training needs of the industry…”. Doh! We know it is changing, but that definitely doesn’t mean there is less need for training and certifications, especially these advanced certifications! About one year ago the MCM certification was changed into MCSM, which took this new “era” of cloud into the curriculum. Those who have attended the latest updated training know this: we talk a lot about Office 365, Azure etc. during the training and exams.

    I think Microsoft has too much belief in the cloud – that everything will change overnight – it won’t. The vast majority of SharePoint installations are on-premises today. Even if we/you see a cloud-only future, there is a long way to go, and that road requires skilled professionals staking out the route. And just because Office 365 satisfies the most common scenarios, it will never be on par with the requirements of Enterprise solutions. And Lync, SharePoint, Exchange and SQL are each, on their own, still a billion+ dollar segment for Microsoft.

    Today I’ve seen a couple of blog posts and tweetface posts from people who haven’t attended any of the rotations. These persons claim to know why MSL is retiring the certifications, with reasons such as 1) we’re on our way to a cloud-only world, 2) the program costs Microsoft too much money, 3) there’s no demand for these certified masters etc. Oh boy, they have no clue! One thing I can agree on is that MSL has done a really bad job of marketing the advanced certifications – most of the marketing has been done by the attendees! Another stated reason is that it was written in the stars – well, all parties eventually end, but this is not how to end one, punching your most dedicated fans in the balls!

    I’m still waiting for a decent reason from MSL…

    Why do I think the MCA/MCSM should remain?

    No one except those who actually attended the rotations (which is what we call the training that was required to get certified) really knows how valuable the certification is. Or, to be more precise, the certification in itself doesn’t mean that much – the training and the community are what matter. The Master certification has increased the SharePoint knowledge and expertise in the whole community since the dawn of the certification. The blog posts, conference sessions, webcasts, books etc. written by Masters would not have been as good if they hadn’t attended the training and eventually achieved the certification. This is a big loss for all of us; with no new training and no fresh blood in this group, we’re looking into a darker future.

    Ok, what about the MVPs then, you think! Well, an MVP is an award, not a certification. I have been given the MVP award, but not for my knowledge – it’s all about visibility and connections. I’m still proud of being awarded and thankful for what it gives me. But the MCA, MCSM and MCM mean much more to me (even though the benefits are way fewer than for an MVP and you have to pay big bucks for them). You can ask any of my customers and my employer, and they will tell you how much I and they have benefited from these certifications. But the “new Microsoft” doesn’t care about its customers, as I see it…

    Summary

    I’m very proud and thankful to have learned as much as I have from the amazing instructors and my fellow masters. There will be no more Certified Masters, Certified Architects or recertified Certified Solution Masters…

    I could have written a way longer post on this subject, but this is the end of this road. What it actually means in a bigger perspective is another question for another day – but I do know that it will influence my future considerations and investments in certifications and the products that I used to love.

    Updates

    I’ll try to share some updates on the matter here…

    [2013-09-01] A fellow MVP from the SQL charter, Jen Stirrup, has created a plea on the Microsoft Connect site (currently 213 upvotes!). The one who claims to be responsible for the decision, Tim Sneath, has answered. His answer contains some of what should originally have been communicated. But stating that this has been in the plans for months – I don’t believe it; why would MSL then have spent all that time on making the certifications available at Prometric centers? B**sh!t

  • An explanation of “To start seeing previews, please log on by opening the document.” in SharePoint 2013

    Tags: SharePoint 2013, Office Web Apps

    Overview and background

    This post is intended to show and explain why you see the intermittent (and annoying) “To start seeing previews, please log on by opening the document.” message when using previews from Office Web Apps Server 2013 (WAC) with SharePoint 2013. Unfortunately I do not (yet) have the magic bullet for how to solve it completely; this post is more about why you get it and how you can avoid seeing it too often.

    SharePoint 2013 allows you to view and edit Office documents in the browser using Office Web Apps Server 2013. You also have the benefit of seeing previews of these documents in, for instance, Document Libraries and Search Results. Sometimes, in some farms more than others, the end users get a message that says “To start seeing previews, please log on by opening the document.”. Most often you resolve the issue by clicking the link in the message – but this is not a waterproof solution.

    Why this message?

    First of all I need to say: the message has nothing to do with Office Web Apps Server 2013 – it’s a SharePoint 2013 implementation issue, and there are two major reasons why it can happen. The message can only appear in these previews (InteractivePreview in WOPI speak), and if you pay extra attention to the link in the message you will see that it takes you to another display mode – the view mode (View in WOPI).

    This is the URL that shows the message:

    https://team.contoso.com/_layouts/15/WopiFrame.aspx?
    sourcedoc=%2FDocuments%2Ftest%2Edocx&action=interactivepreview&wdSmallView=1

    This is the URL in the link:

    https://team.contoso.com/_layouts/15/WopiFrame.aspx?
    sourcedoc=%2FDocuments%2Ftest%2Edocx&action=View&wdSmallView=1

    Anonymous Access enabled on the Web Application

    The first reason for this to happen is when you have enabled Anonymous Access on the Web Application that hosts the document AND the current site has not enabled any anonymous access (AnonymousState on the SPWeb is set to Disabled). This is a hardcoded behavior and you cannot work around it. I don’t have the whole background to it, but I do think it’s a security measure to prohibit anonymous users from accessing content they should not have access to. One of the reasons behind this thinking is that the WopiFrame.aspx page actually inherits from UnsecuredLayoutsPageBase – a specific class used for Layout/Application pages that do not need any authentication. This, of course, is a requirement if you would like to show previews on an anonymous site. Unfortunately this issue hits you if you need something partly anonymous and partly authenticated, such as an extranet.
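
    To verify that you are in this situation, you can compare the two settings from PowerShell – a quick sketch, with a hypothetical URL:

    # Anonymous access on the web application (Default zone) vs. on the SPWeb
    $webApp = Get-SPWebApplication "https://team.contoso.com"
    $webApp.IisSettings[[Microsoft.SharePoint.Administration.SPUrlZone]::Default].AllowAnonymous
    
    $web = Get-SPWeb "https://team.contoso.com"
    $web.AnonymousState   # Disabled here, combined with AllowAnonymous above = this cause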

    The user is not authenticated!

    The second, and by far most common, issue is that the message appears seemingly at random. SharePoint 2013 has hardcoded the logic so that if you are not authenticated you should of course see this message – nothing wrong with that. The problem is that we’re using the WopiFrame.aspx page – which accepts anonymous requests.

    Since we’re using Claims Mode authentication for the Web Application (a requirement for WAC), the end users will eventually become unauthenticated when their tokens time out and they need to re-authenticate. The WopiFrame.aspx page will not do that for us – it will just say “sorry, you’re anonymous” and show the message.

    Note: A non-functional Distributed Cache, or not using session affinity, can make this problem worse, making the message appear more frequently.

    And now we come to something that I find very peculiar. There is actually another layouts page – the WopiFrame2.aspx file. This one inherits from LayoutsPageBase, which will force us to authenticate and at least check that the user has view and open permissions on the site. This page is not used at all (except in one specific case where the user is not authenticated as above, we’re in interactive preview mode, there is no HTTP context and/or some other crazy stuff – basically, you’re not hitting it).

    There’s not much we can do about this either – I think it is partly a logic problem in the SharePoint code base and partly a result of the nature of Claims authentication.

    Summary

    Unfortunately there is not much we can do about this problem at present, and I do hope that this logic can be improved in future releases of SharePoint (I tested with the 2013 June CU this time). The best option would be to always use WopiFrame2.aspx when Anonymous Access is not enabled – but that is just my thought…

    If anyone on the SharePoint Team responsible for the WOPI stuff could chime in I would appreciate it.

  • Presenting at SharePoint and Exchange Forum 2013

    Tags: Conferences, SharePoint 2013

    For the fifth (if I recall correctly) year in a row, I’m proud to announce that I will speak at the annual SharePoint and Exchange Forum conference, September 30th to October 2nd. This year Office 365 MVP Göran Husman not only managed to bring some of the top-notch speakers on SharePoint, Exchange, Lync and Office 365 to the conference, he also convinced them all to dress up in sailor suits and hold the conference on a cruise ship between Stockholm, Sweden and Helsinki, Finland.

    If you’re around and are up for a real adventure, you should book your cabin (they are limited) on this cruise and learn from the best in the business. You will also have the great opportunity to see one of the largest and most beautiful archipelagos in the world:

    Stockholm Archipelago

    Personally I will present my Real world SharePoint 2013 architecture decisions session, which will take you through a lot of the architectural decisions, not always mentioned on TechNet, that you have to make when implementing a SharePoint 2010 or SharePoint 2013 solution.

    [Update 2013-08-19] I will do a second session – JavaScript in SharePoint, and not just for Apps. This is a session where we will look into how to leverage JavaScript in SharePoint 2013, using CSOM, REST and other interesting features.

    Bring your life vests…

  • Office Web Apps 2013 – why you can’t and shouldn’t install SharePoint 2013 on the same machine

    Tags: Office Web Apps, SharePoint 2013

    Introduction

    I frequently see one specific question asked on distribution lists, Twitter, Yammer and other social networks: “How do I install Office Web Apps 2013 (WAC) on the same machine as SharePoint 2013?”, very frequently followed by “any hacks accepted”. Those who have tried have noticed that there is a hard block – SharePoint cannot be installed on an Office Web Apps machine, and Office Web Apps cannot be installed on a SharePoint machine. This is purely by design, and in this post I will show you why, and why you shouldn’t try to hack it.

    Office Web Apps 2013 owns IIS on the machines where it is installed

    The heading says it all – on a machine where Office Web Apps 2013 is installed, IIS (Internet Information Services) is very much controlled by WAC. WAC creates two Websites (see image to the right) and numerous Application Pools in IIS. In order to stay in good condition, and to avoid people fiddling with it, WAC will actually recreate the Websites and Application Pools whenever the machine boots or the WAC service is restarted. One more time: WAC removes the Websites and Application Pools and creates new ones every time the machine is booted or the WAC service is restarted (Restart-Service WACSM).

The recreation of the Application Pools also prevents any hacks that change the accounts of the Application Pools for Office Web Apps – something you never ever should do anyway; it's not supported and it is a security risk.

    WAC Websites

The HTTP80 Website is the one that is used for all the WAC stuff (viewing, editing etc.). Depending on your configuration it listens on port 80 or 443 – for all IP addresses on the box. The other website, HTTP809, is the administration website, using ports 809 and 810 to synchronize settings within the farm. As you can see, the HTTP80 Website will very likely collide with a SharePoint Web Application (and any other Website for that matter). As you (might) know, with SharePoint 2013, Apps and Host Named Site Collections we want a Web Application without host header, listening on port 80 or 443. There's a collision right there. If we managed to get SharePoint installed on the WAC machine and tried to fool WAC by using a host header, that would just be reset on every reboot or restart of the WAC service, and the IIS Website for the SharePoint Web Application would be removed.

What if I add another Website/Web Application and use host headers?

So you think you are smart now! What if you added a SharePoint Web Application, without host headers, and then went in and modified the bindings in IIS? That could work in a development environment, but it is never a good approach in production! Unfortunately this dog won't hunt either. When you restart the WAC service it will not only remove the WAC Websites but also any IIS Website listening on port 80 or 443. Try it – just add a Website on port 80 in IIS, host header or no host header, dedicated IP or not, and restart the WACSM service – they will be removed. The only Websites that remain in IIS are the ones created on ports other than 80, 443, 809 or 810.
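If you don't believe me, you can watch it happen with a few lines of PowerShell on the WAC machine (the WebAdministration module ships with IIS):

Import-Module WebAdministration

# List the IIS Websites before the restart
Get-Website | Select-Object Name, State

# Restart the WAC service...
Restart-Service WACSM

# ...and list them again – any extra Website on ports 80, 443, 809 or 810 is gone,
# and only the recreated HTTP80 and HTTP809 Websites remain
Get-Website | Select-Object Name, State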

    Summary

I hope that this little blog post helped explain why you cannot have both Office Web Apps 2013 and SharePoint 2013, or any other IIS application, on the same machine. There's no hack and there should not be one. If you ever get asked that question again – just send them here.

  • Presenting at TechEd New Zealand 2013

    Tags: Conferences, SharePoint 2013

I'm proud to announce that I will be speaking at TechEd in New Zealand, September 10th to 13th. This is really cool and my first trip down to Kiwi land. TechEd is a conference for all Microsoft technologies, not only SharePoint, but the lineup of SharePoint speakers and sessions at this conference just looks awesome; Dr. Search aka Neil Hodgkinson, MCA Wayne Ewington, MVP Mark Rhodes and MVP Debbie Ireland amongst others. If you live in the southern hemisphere and are even remotely interested in SharePoint you need to get your ticket ASAP!

    I’m presenting a total of three sessions at #TENZ

    Real world SharePoint 2013 architecture decisions

[SE307] SharePoint 2013 is still a new kid on the block in the enterprise, and even though large portions of the architecture are similar to SharePoint 2010 there are some tough choices to be made by a SharePoint architect. SharePoint 2013 requires more resources and has new and improved topology options – how do those affect our design decisions? The Distributed Cache and the new Search architecture are two important features – how do we factor those into the architecture equation? And what about Apps in on-premises installations? In this session we will discuss all of these and more and take a look at our options, based on real world implementation experience.

    JavaScript in SharePoint and not just for Apps

[SE309] JavaScript is becoming the new de-facto standard for developing solutions on top of SharePoint, thanks to the new App model. But what if you're still running on-premises installations, don't build Apps and just need to upgrade your current solutions? Well, you can still use a lot of the nice new client-side features. In this session we will take a look at how you use and work with the SharePoint JavaScript API – we're going through everything from the JavaScript CSOM, to REST, to Script-On-Demand, JSLink and other JavaScript features that you will love to use – in Apps or not.

    Mastering Office Web Apps Server 2013 operations

[SE313] This session will cover, from A to Z, how to set up, configure, patch and maintain your Office Web Apps Server 2013 farm. Through an extensive set of demos we're going to set up a new Office Web Apps Farm, configure it for high availability, go through security considerations and finally connect the Office Web Apps Server to SharePoint and Exchange.

Looking forward to seeing you all down there…

  • SharePoint 2013, Office Web Apps 2013 and Excel files with Data Connections and Secure Store

    Tags: SharePoint 2013, Office Web Apps, WOPI

    Introduction

This is a follow-up to the SharePoint 2013, Office Web Apps 2013 and Excel files with Data Connections post that I previously wrote. That post described how you needed to do so-called WOPI Suppressions if you had Excel files with Data Connections and had those data connections configured to use the authenticated user's account. The WOPI Suppression made sure that the rendering of the Excel book was done by Excel Services in SharePoint 2013 rather than by Office Web Apps 2013.

But if you have Excel documents with external data and do not rely on delegating the account (and don't do the Negotiate, SPN and delegation dance), but instead want to store the credentials for the data in the connection string (not recommended) or take advantage of the Secure Store in SharePoint 2013 – then you actually have the option to not do any WOPI Suppressions. Let's take a look at how you can get this to work: an Excel sheet with an external data connection using the Secure Store to store credentials, and then using Office Web Apps 2013 to refresh the data.

    Configuring the Secure Store

Configuring a Secure Store Application

First of all we need to have the Secure Store Service Application up and running in our SharePoint farm. Once that is done and you have generated the encryption key, we need to add a new Target Application. To create the Target Application use the New button in the Ribbon. This will take you to a wizard. First of all we need to give the Target Application an ID – this is what we later use to reference this application from Excel (and other sources). Also give it a descriptive name as well as a contact. Then we have to choose what Target Application Type to use. Use Individual or Group, where Group is preferred from a management perspective. If you choose Individual you have to set the credentials for each user individually, and for Group you have one set of credentials per application.

On the next wizard page you set up the “fields” for the application. By default it assumes that it is Windows credentials that we would like to store; if that is the case you only need to click Next. Otherwise, if you're using for instance SQL credentials, you have to set up two new fields using the User Name and Password field types. On the final page you give permissions to the Target Application itself, and if it is a Group application you also configure what users/groups are mapped to the credentials.

When the application is configured we need to set the credentials for it. For a Group Application you use Central Administration or PowerShell to set the credentials to be used by the group of users. For Individual Applications you need to set the credentials for each and every user through Central Administration, or create some custom Web Part or similar that the user can use to fill in their credentials. Fortunately most of the plumbing is already done for this. You can use the Layouts page called securestoresetcredentials.aspx to allow the user to set their credentials. This is how it is used:

    /_layouts/15/securestoresetcredentials.aspx?TargetAppId=AdventureWorks

You have to pass the TargetAppId as an argument and set the value to the Application ID of the application. Good to know here is that if you don't use SSL/TLS (HTTPS) on your SharePoint sites you will get messages that will annoy your users, since this data will be sent in clear text. Just another reason to do HTTPS :-)
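As a side note, setting the credentials for a Group application with PowerShell could look something like this sketch (the site URL, application ID and account values are just examples, and the order of the values must match the order of the fields in the Target Application):

$app = Get-SPSecureStoreApplication -ServiceContext "https://intranet.contoso.com" -Name "AdventureWorks"
$userName = ConvertTo-SecureString "CONTOSO\sqlreader" -AsPlainText -Force
$password = ConvertTo-SecureString "pass@word1" -AsPlainText -Force
Update-SPSecureStoreGroupCredentialMapping -Identity $app -Values @($userName, $password)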

    By now we should be all set to continue to the next step.

    Create the Excel book with the Data Connection

Excel Data Connection properties

Now it is time to create our Excel book. To do this we just add a data connection as normal, but in the connection wizard we click on the Excel Services: Authentication button and then choose to use a stored account. In the stored account input box we have to enter the Secure Store Target Application ID, the one we created in the Secure Store Service Application above. Everything else is precisely as normal.

Now this workbook is ready to be uploaded to a document library in SharePoint. The good thing here is that you don't have to do anything more – no SPNs, no nothing!

    Refresh data using Office Web Apps 2013

So, let's see if this works with Office Web Apps 2013! If you fiddled with the WOPI Suppressions as in my previous blog post, make sure to remove them so that WAC renders the Excel sheet instead of Excel Services (this will work in Excel Services as well, but that is not the purpose of this post).
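Assuming you used the same suppression as in that post, listing and removing it could look like this:

Get-SPWOPISuppressionSetting
Remove-SPWOPISuppressionSetting -Extension "XLSX" -Action "view"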

    Click on the document so that it renders in the browser, make sure to update some data in the external data source so that you can see that the data is actually refreshed, then use Data > Reload All Data Connections to update the sheet. Everything should work fine here, your data should be refreshed.

So, there it is: you can actually use Office Web Apps 2013 to render and refresh external data – the only thing is that you can't use any delegation features.

    Troubleshooting

    Didn’t it work? Of course things can go wrong. But a correctly configured environment should work immediately. Here are some of the common errors that you can get:

    External Excel data connections disabled in Office Web Apps

Your admins (or you) might have turned off external data for Excel. You can check this by running the following command on a WAC machine:

    (Get-OfficeWebAppsFarm).ExcelAllowExternalData

    If it returns false, then you have turned it off. By default when configuring a new farm it is set to true. If set to false and you want to use external data then use the following command:

    Set-OfficeWebAppsFarm -ExcelAllowExternalData

Note that it can take a couple of minutes before changes like this are actually propagated in the WAC farm.

    If this is your problem then you’re getting an error like this:

    “The trusted location where the workbook is stored does not allow external data connections.”

    The trusted location where the workbook is stored does not allow external data connections

    You’re running SharePoint over HTTP!!!

The most common problem is that your SharePoint is still using HTTP and not HTTPS (when will you ever learn!). If that is the case you will get an error like the following:

    “An error occurred while accessing application id XXX from Secure Store Service.”

    An error occurred while accessing application id XXX from Secure Store Service.

    This is perfectly normal and good! Since the SharePoint farm is accessible through HTTP this also means that Office Web Apps Server will try to retrieve the credentials over HTTP – in clear text, and you don’t want that.

But if you are running a development environment that uses HTTP it could be all right to work around this. Or you might be running IPSec (fat chance…), then it's also OK to allow this. Otherwise I do recommend that you change your SharePoint content web application to use HTTPS, or simply don't use the Secure Store with WAC.

    Anyhow, this is how you configure WAC to allow retrieval of credentials over HTTP:

Set-OfficeWebAppsFarm -AllowHttpSecureStoreConnections

    Once this command is executed, wait a minute, reload the workbook and you’re golden.

    No credentials configured

Another common problem, particularly when using Individual Secure Store applications, is that the application does not have any credentials set. The error you will see then is the following:

    “An error occurred during an attempt to establish a connection to the external data source.”

    An error occurred during an attempt to establish a connection to the external data source

    To fix this you have to set the Group credentials, if it is a Group application type, or make sure that all users set their individual credentials.

    Summary

    You have now seen that Office Web Apps 2013 can actually render external data, including refreshing it, using the SharePoint 2013 Secure Store service. Just be careful with the security issues I highlighted with SSL/TLS/HTTPS. All this will also work with plain ol’ Excel Services if you need to combine it with delegation.

  • Office Web Apps Server 2013 - machines are always reported as Unhealthy

    Tags: Office Web Apps

As you might have noticed I have somewhat fallen in love with Office Web Apps 2013, or WAC as we say now that we've gotten this close to each other. It's an amazingly well written server product with the nice side benefit that it is also very usable for the end-users. Even though WAC and I have been hanging around for a while and by now know each other pretty well, WAC has constantly been reporting that it is Unhealthy. And from what I've seen, heard and experienced in the field I am not alone…

    Office Web Apps 2013 Health Reporting

    Office Web Apps 2013 has a really well written reporting mechanism that constantly monitors and reports issues with your WAC farm and its machines. You can at any time see the Health status of your machines by running the following Windows PowerShell command:

    (Get-OfficeWebAppsFarm).Machines

    If you have installed and configured the farm according to the interwebs and/or official sources it is most likely that all your servers are reporting Unhealthy even though your WAC farm is running fine, from the end-user perspective.

    Unhealthy WAC machines

The health reporting mechanism in Office Web Apps consists of a number of Watchdog processes (FarmStateManagerWatchdog.exe, ImagingWatchdog.exe etc.). These processes regularly check for issues with the different WAC services. For instance they make sure all the servers and service endpoints are responding, and they check whether the proofing tools work by actually testing the service with a correctly spelled word and an incorrectly spelled word, and so on. If any of these watchdog processes reports an error the machine is marked as Unhealthy. (Note: it checks all the reports from the last 10 minutes, so there is a slight delay in this status.)

You can see the Health reports in the log files (SharePoint ULS Trace Log style) in your log directory on the WAC machines, but you might find it easier (and faster) to look in the Event Viewer of the machine. You will find the logs under Applications and Services Logs > Microsoft Office Web Apps.

    WAC Event Log entries

In this log you will clearly see all the errors and be able to solve them – well, at least most of them can be fixed by reading (and understanding) the log entry.
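If you prefer PowerShell to clicking around in the Event Viewer, a sketch like the following could pull out the most recent errors (assuming the log name matches what the Event Viewer shows):

Get-WinEvent -FilterHashtable @{ LogName = "Microsoft Office Web Apps"; Level = 2 } -MaxEvents 20 |
    Select-Object TimeCreated, Id, Message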

    Two common reasons for Unhealthy machines

    As I said, most WAC farms I’ve seen are reporting all machines as Unhealthy and I’ve found two major reasons for this. One is related to certificates and the second is related to Windows Server 2012/IIS8 and WCF.

    Correct WAC Farm certificate

If you have been following the SharePoint 2013 space recently you should be pretty aware that you should protect your server communication with SSL. Especially for Office Web Apps 2013, which sends the AuthN token in clear text over the wire. So you create a certificate for your WAC farm load balanced DNS address, wac.contoso.com for instance, install it on the WAC machines, set up the farm and everything looks fine. But all the machines are reporting that they are Unhealthy. If you take a closer look at the WAC logs you will see exceptions like this: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

    SSL errors

It's not that strange actually. In order to test the endpoints of the different machines the watchdog processes cannot call the load balanced DNS entry (wac.contoso.com) but instead have to call the individual machines, for instance wacserver1.contoso.com – and the certificate is not valid for that DNS entry.

In order to resolve this issue for the farm you have to create a new SAN certificate (or optionally a wildcard) containing the names of all the machines and the load balanced DNS entry.

    New SAN certificate
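For a development or test farm you could sketch such a certificate with New-SelfSignedCertificate (the names below are examples – in production you should of course get a proper certificate from your CA):

$cert = New-SelfSignedCertificate -DnsName "wac.contoso.com", "wacserver1.contoso.com", "wacserver2.contoso.com" -CertStoreLocation Cert:\LocalMachine\My
# Set-OfficeWebAppsFarm -CertificateName refers to the friendly name of the certificate
$cert.FriendlyName = "wac.contoso.com"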

If you're creating a new farm, just proceed as normal with a certificate like this. If you have an existing farm and need to update the certificate, you just install the certificate on all the machines (before doing anything else) and then, on one of the WAC machines, use the following PowerShell command to configure the farm to use the new certificate. Note that you need to restart all the servers in the farm once you have changed the certificate.

    Set-OfficeWebAppsFarm -CertificateName "wac.contoso.com"

    Once the machines are restarted you can go back to the logs and see if it now reports any issues and if everything is fine your machines should soon (it takes a couple of minutes) start reporting that they are Healthy.

After this you also need to update the WOPI Proof Key of your SharePoint farm(s), to avoid security errors when using Office Web Apps in SharePoint. This can be done either by removing all WOPI bindings and re-adding them, or by running the following command:

Update-SPWOPIProofKey -ServerName "wac.contoso.com"
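If you'd rather remove and re-add the bindings, that could look something like this (run on a SharePoint server; the server name is an example):

Remove-SPWOPIBinding -All:$true -Confirm:$false
New-SPWOPIBinding -ServerName "wac.contoso.com"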

    WCF .svc endpoints not working on IIS8

The second thing that I've seen is that (once you fix the certificate per above) the logs contain error messages saying that Participant.svc cannot be accessed: BroadcastServicesWatchdog_Wfe reported status for BroadcastServices_Host in category '3'. Reported status: Contacting Participant.svc failed with an exception: The remote server returned an error: (404) Not Found. There are other similar error messages. This is a bit strange – if you look in IIS for this specific file it is clearly there, but if you try to browse to it you get a 404. Here's a good way to test if you're suffering from this issue without reading any logs:

    Invoke-WebRequest https://wacserver1.contoso.com/m/met/participant.svc/jsonAnonymous/BroadcastPing

Tip: this command and/or URL is a very good check to use in your monitoring software or load balancer to verify that the WAC farm/machine is up and running.

If you receive an HTTP status equal to 200 then you're golden and can quit reading this section, but if you get an exception – specifically a 404 – you're suffering from this exact issue.
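For a monitoring script you could wrap the probe in a try/catch, something like this sketch (the server name is an example):

try {
    $response = Invoke-WebRequest "https://wacserver1.contoso.com/m/met/participant.svc/jsonAnonymous/BroadcastPing" -UseBasicParsing
    "WAC responded with HTTP $($response.StatusCode)"
}
catch {
    "WAC check failed: $($_.Exception.Message)"
}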

When you install/configure IIS 8 on Windows Server 2012 using instructions from the internet or TechNet and try to do a “minimal” installation with as few Server features as possible, you are most likely not installing the HTTP Activation feature for .NET Framework 4.5 WCF Services.

    Forgot the .NET WCF HTTP Activation?

    In order for IIS to understand .svc files, and not use the Static file handler, you need to install this Feature on your WAC machines. It can be done using the Add Roles and Features Wizard (as shown above) or using the following PowerShell command. Note that it will at the same time install two other dependent features.

    Add-WindowsFeature NET-WCF-HTTP-Activation45

    If you’re installing a WAC machine from scratch your PowerShell command should look like follows (on Windows Server 2012):

    Import-Module ServerManager
    
    # Required Features
    Add-WindowsFeature NET-Framework-45-Core,NET-Framework-45-ASPNET,`
        Web-Mgmt-Console,Web-Common-Http,Web-Default-Doc,Web-Static-Content,`
        Web-Filtering,Web-Windows-Auth,Web-Net-Ext45,Web-Asp-Net45,Web-ISAPI-Ext,`
    Web-ISAPI-Filter,Web-Includes,InkAndHandwritingServices,NET-WCF-HTTP-Activation45
    
    # Recommended Features
    Add-WindowsFeature Web-Stat-Compression,Web-Dyn-Compression
    
    

Once the feature is installed – there is no need for any server restart – you should see that your machines are reported as Healthy. Remember it can take up to 10 minutes. You can sometimes speed up the health reporting a bit by restarting the WAC Service:

    Restart-Service WACSM

If they are not reported as Healthy then you have another issue. Check the logs again, fix it, report to me what you did and why, and I'll update this post.

    Healthy WAC machines

Another note: you might not see the same status on all WAC machines when you inspect the health status for each machine. WAC machines are not constantly talking to each other, so it may take a while before they synchronize. You can always log in to each and every WAC machine and run the following cmdlet to get the actual value for that specific machine:

    Get-OfficeWebAppsMachine
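If you have PowerShell remoting enabled you could query all machines in one go, along these lines (assuming the OfficeWebApps module is available on every machine):

(Get-OfficeWebAppsFarm).Machines | ForEach-Object {
    Invoke-Command -ComputerName $_.MachineName -ScriptBlock { Get-OfficeWebAppsMachine }
}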

    Repairing an Office Web Apps Farm

There is a PowerShell cmdlet in Office Web Apps 2013 called Repair-OfficeWebAppsFarm. This cmdlet sounds way better than it actually is. It will not repair anything – it will remove all Unhealthy servers, and nothing more. So unless you have fixed the issues mentioned above, you will have no WAC machines left after running that cmdlet. Just a tip…

    Summary

Having a Healthy Office Web Apps Server 2013 farm is important. You should constantly monitor the Health status of each and every Office Web Apps Server machine, and if one is found Unhealthy, fix it. Most likely you don't have a valid certificate for your Office Web Apps Server 2013 farm – one that contains the load balanced name and all the individual server names – and thus you will always have an Unhealthy farm. Get a new certificate for it, so that you can properly monitor the farm.

  • SharePoint 2013, Office Web Apps 2013 and Excel files with Data Connections

    Tags: SharePoint 2013, Office Web Apps

    Here goes a post in the middle of the summer, directly taken from yet another e-mail conversation with information that I thought was well known. It has been blogged before, but perhaps you readers (thanks mum and the other one) don’t follow those blogs, so here we go.

    Introduction

Who doesn't like Excel? Most people love it so much that they can't get enough of it: they upload their Excel files to SharePoint and view and edit them using Office Web Apps 2013 (WAC). The web view and editing can be very beneficial on slow networks, on machines and devices without any decent Office edition, or just as nice widgets on your dashboards.

External Data Refresh Failed

Assume you have your SharePoint 2013 farm connected to a WAC 2013 farm, and then you use all your Excel skills and start retrieving data from other data sources. Unfortunately your data will not refresh, since Office Web Apps 2013 cannot read data from external data sources – that is what the Excel Calculation Services Service Application does for you in SharePoint 2013. So you create one of these Service Applications, but still no luck – Office Web Apps 2013 is still rendering your Excel files. So how do we fix this? Keep on reading and I'll show you how…

    [Update 2013-08-01] If you are not using delegation but instead would like to use the Secure Store Service Application in SharePoint 2013 you might want to take a look at this follow-up post.

    Stop rendering Excel files in Office Web Apps 2013

Fortunately there is a feature in SharePoint 2013 that allows you to do suppressions for the different WOPI Bindings. What we would like to do is suppress the WOPI Binding that is responsible for viewing Excel files. This is done through PowerShell on a per SharePoint farm basis – it's not a WAC setting – like this:

    New-SPWOPISuppressionSetting -Extension XLSX -Action View

    WOPI Suppression for Excel files

    This command will suppress the WOPI Binding for the extension XLSX (Excel files) and for the WOPI Action View. If we now have Excel Services running in our SharePoint farm then Excel Services will be responsible for viewing Excel (XLSX) files. But Office Web Apps 2013 will be responsible for previews (in search and document libraries) and editing of the files.

You don't have to do anything else – no IISRESET (this must be the only time you don't need one for a SharePoint config change!) or anything.

    Verify if Excel files are rendered by Office Web Apps

    If you are unsure if Excel files are rendered using WAC or Excel Calc you can easily take a look at the URL. When Office Web Apps is responsible for the rendering the URL will look something like this:
    https://server/_layouts/15/WopiFrame.aspx?sourcedoc=/Documents/excel.xlsx&….
    And when Excel Calc is rendering the document it should look like the following:
    https://server/_layouts/15/xlviewer.aspx?id=/Documents/excel.xlsx&…

    You can also check the WOPI suppression settings using the Get-SPWOPISuppressionSetting cmdlet if you have shell access to the SharePoint farm.

    I want my Office Web Apps viewing back…

Of course you can revert back to using Office Web Apps for rendering the Excel files. This is done using the Remove-SPWOPISuppressionSetting cmdlet.
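For the suppression created earlier in this post, that would be:

Remove-SPWOPISuppressionSetting -Extension "XLSX" -Action "view"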

    Summary

Basically, for every farm where you are using BI features such as Excel Services and/or PowerPivot you need to do this WOPI suppression setting. But if you don't have these requirements you should stick to using Office Web Apps 2013, to avoid unnecessary Service Applications and to take advantage of the features in WAC that are not available in Excel Services.

  • SharePoint: Specifying Content Database for new Site Collections when using Host Named Site Collections

    Tags: SharePoint 2013, SharePoint 2010

    Over the last few months I’ve been asked numerous times and I’ve seen quite a few e-mail conversations on how to work with new Host Named Site Collections (HNSC) and Content Databases. In this post I will show you how I have solved the problem using the native API hooks in SharePoint.

    Background

Host Named Site Collections are not a new thing in SharePoint – they have been with us for quite some time, but have not been extensively used due to previous limitations (and there still are some). With SharePoint 2013 one strong recommendation is to consider using HNSC, in contrast to the traditional path based site collections. It gives you a couple of benefits in management and performance, and is required for Apps to work properly. On the other hand it also has a couple of downsides, such as not being able to create new Site Collections in the UI.

If you are using HNSC with a single content Web Application and you also use the same Web Application for Personal Sites (My Sites), then you might have stumbled upon the problem that your personal sites will be mixed in the content databases with the “normal” site collections. This is in many cases not a desirable solution, since you might have different SLAs for personal sites and “normal” sites. Personal sites are created automatically when someone hits the My Site Host and you have no option to select the content database, compared to if you pre-create sites and use the -ContentDatabase parameter of the New-SPSite cmdlet. Fortunately there is a solution to this problem!

    A custom Site Creation Provider

As you already might know, SharePoint by default uses a very simple (stupid) algorithm to select the content database in which new Site Collections should be created – it takes the database with the least number of sites (somewhat simplified), not the one with the least amount of data. This can actually, and possibly should, be overridden with a custom algorithm. Such a custom algorithm can be implemented in something called a Site Creation Provider.

A custom Site Creation Provider implements the abstract class SPSiteCreationProvider and its logic goes in the SelectContentDatabases() method. The SelectContentDatabases method returns a list of possible content database candidates – if it returns only one, that one will be used; if it returns many, the default algorithm will be used on that list of databases; and if it returns zero or null the site will of course not be created anywhere.

The problem with the Site Creation Provider and its implementation is the limited number of options we have for selecting the content database. All we have is the set of content databases attached to the Web Application and a number of properties defined in the SPSiteCreationParameters object passed into the method. We can get the URL for the site to be created, but not the template used, so we have to be a bit clever when implementing it.

What we can do is make sure that all our content databases follow a strict naming schema – for instance, databases that should contain Personal Sites always have a name like FarmX_WSS_Content_MySite_nnn. Secondly, we can get the URL to the My Site host from the User Profile Service. Using this we can create an algorithm that says that any site (remember we don't know what template is used) created on or under the My Site Host URL will end up in databases containing the text “MySite”.

    This is how a custom Site Creation Provider using this algorithm can be implemented:

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Office.Server.UserProfiles;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

public sealed class WictorsSiteCreationProvider : SPSiteCreationProvider
{
    public override IEnumerable<SPContentDatabase> SelectContentDatabases(
        SPSiteCreationParameters creationParameters,
        IEnumerable<SPContentDatabase> contentDatabases)
    {
        // Get a service context so that we can reach the User Profile Service
        SPServiceContext context = SPServiceContext.GetContext(
            creationParameters.WebApplication.ServiceApplicationProxyGroup,
            creationParameters.SiteSubscription == null ?
                new SPSiteSubscriptionIdentifier(Guid.Empty) :
                creationParameters.SiteSubscription.Id);

        UserProfileManager upManager = new UserProfileManager(context);

        List<SPContentDatabase> databases = new List<SPContentDatabase>();
        if (new Uri(upManager.MySiteHostUrl).DnsSafeHost == creationParameters.Uri.DnsSafeHost)
        {
            // Pick the smallest of the online My Sites databases
            SPContentDatabase smallestContentDb =
                contentDatabases.
                    Where(contentDb => contentDb.Status == SPObjectStatus.Online).
                    Where(contentDb => contentDb.Name.Contains("MySite")).
                    OrderBy(contentDb => contentDb.DiskSizeRequired).FirstOrDefault();
            if (smallestContentDb != null)
            {
                databases.Add(smallestContentDb);
            }
        }
        if (databases.Count == 0)
        {
            // Pick the smallest of the online "normal" databases
            SPContentDatabase smallestContentDb =
                contentDatabases.
                    Where(contentDb => contentDb.Status == SPObjectStatus.Online).
                    Where(contentDb => !contentDb.Name.Contains("MySite")).
                    OrderBy(contentDb => contentDb.DiskSizeRequired).FirstOrDefault();
            if (smallestContentDb != null)
            {
                databases.Add(smallestContentDb);
            }
        }

        if (databases.Count == 0)
        {
            // Log some error!!! No database means no site collection.
        }

        return databases.AsEnumerable();
    }
}

As you can see we use the site creation parameters to locate the UserProfileManager, which can give us the My Site Host URL. In this case we expect the My Site host to be an HNSC and all Personal Sites to be path based underneath that HNSC, so all we need to do is compare the DnsSafeHost. If they match we pick from the databases containing the word “MySite” and add the result to the list returned by the method. If no such database is found, or if the URLs don't match, we pick from the databases without “MySite” in their name instead.

The sharp-eyed will notice that in this implementation we do not return a list of content databases, but always at most one – and we select the one with the least amount of used space. Very useful if you want to keep a similar size across all your content databases – if not, just remove the OrderBy() and FirstOrDefault() statements.

    Registering the Site Provider

Of course you need to build this as a full trust solution – any other way just won't work. To register it with your farm you can either do it in PowerShell or use a Farm scoped feature with a Feature Receiver. The registration could look something like this:

    SPWebService contentService = SPWebService.ContentService;
    contentService.SiteCreationProvider = new WictorsSiteCreationProvider();
    contentService.Update();
    

    To unregister it you just set the value to null instead.
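For reference, the PowerShell variant could look something like this sketch (the assembly full name is an example – use the full name of your own farm solution assembly):

[System.Reflection.Assembly]::Load("Wictor.Provisioning, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef") | Out-Null
$contentService = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$contentService.SiteCreationProvider = New-Object WictorsSiteCreationProvider
$contentService.Update()

# And to unregister:
$contentService.SiteCreationProvider = $null
$contentService.Update()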

    Summary

There you have it! It wasn't that hard! Just a few lines of code and you have much more control over your site collections and content databases in the Host Named Site Collection scenario. Just remember to have proper error handling and logging in your Site Creation Provider – any unhandled error or exception will guarantee that you don't get any new site collections at all.

  • Announcing new Visual Studio 2012 tool for JavaScript Localization in SharePoint 2013

    Tags: SharePoint 2013, Visual Studio, Downloads

In SharePoint 2013 JavaScript is the new default language, and all our (at least my) solutions and projects are using JavaScript more and more, even though everything is not built as SharePoint Apps. Farm or full-trust solutions built using JavaScript will in many situations create a better user interface and improved perceived performance. But the more we build user interfaces using JavaScript, the less we can afford to forget basic UX rules, such as using localization. End-users really hate seeing mixed content in different languages. We've known for quite some time how to do localization server-side, but how do we do it in a smart way in JavaScript?

    Localization in JavaScript

As always when it comes to cool stuff, my mate Waldek Mastykarz has already covered this topic pretty well in his post called “Globalizing JavaScript in SharePoint 2013”. In that blog post he shows how you can use a feature in SharePoint 2013 that allows you to get JavaScript objects, generated from RESX files, to localize the user interface. His guide pretty much covers everything you need, but I'm going to reiterate some of it here anyway, and here is a sample of how to do it.

Assume that you have a resource file (RESX) in your solution in a SharePoint Mapped folder (to the Resources folder). By adding a script link to the ScriptResx.ashx HTTP Handler with the resource file name and the culture as parameters you can get all the resources as a JavaScript object. For instance, a simple Hello World alert in a Visual Web Part could look something like this.

    First of all we need a resource file in the project (mapped to the Resources folder) and if you want localization, you of course have to translate it. This Resource file contains only one resource called HelloWorld.

    A Resource file

    To use this resource in JavaScript I have to write some code to load the ScriptResx.ashx HTTP Handler and pass in the name of the resource file and the name of the current culture.

    SP.SOD.executeOrDelayUntilScriptLoaded(function () {
      SP.SOD.registerSod("demoresources", 
        "/_layouts/15/ScriptResx.ashx?name=demoresources&culture=" + 
        STSHtmlEncode(Strings.STS.L_CurrentUICulture_Name));
    
      SP.SOD.executeFunc("demoresources", 'Res', function () {
        alert(Res.helloWorld);
      });
    }, "strings.js");

In the sample above I use the Script-On-Demand (SOD) JavaScript functions to first make sure that Strings.js is properly loaded. I need that to get the name of the current culture (Strings.STS.L_CurrentUICulture_Name). Then I do a SOD registration and register the script link for the ScriptResx.ashx file, using the name of the RESX file and the name of the culture. Finally I wait for the SOD to load and then show an alert using the Res.helloWorld object.

    The ScriptResx.ashx file automatically creates a JavaScript object called Res and that object contains strings of all the resources in the RESX file. Note that the resources are using Camel Casing.

    What is the problem with this approach?

This works fantastically well in most cases, and it is extremely hard to find anything in Waldek's post that can be improved, but what if you don't want to use the default namespace Res? It could be a matter of taste, or a collision of names from different resource files. Fortunately there is a solution to this built into the ScriptResx.ashx HTTP Handler. You can actually specify two resheader elements in the resource file to indicate that you want a custom namespace and the name of it – and this is exactly what SharePoint 2013 does for some of the built-in resource files.

    <resheader name="scriptResx">
      <value>true</value>
    </resheader>
    <resheader name="classFullName">
      <value>SPResXDemo.Resources</value>
    </resheader>

    The resheader with the scriptResx name attribute tells the ScriptResx.ashx handler that we would like to generate vanity namespaces and the one with the classFullName name attribute tells the handler the namespace to use.

Unfortunately Visual Studio (well, not really Visual Studio, but the .NET ResXResourceWriter) does not handle this at all; as soon as you save your RESX file these two resheader elements will be removed. (And this is where I felt challenged!)

    SPResX to the rescue!

    To handle this situation and to be able to use custom namespaces for my JavaScript localizations I’ve created a small tool called SPResX that replaces the default RESX Custom Tool, the ResXFileCodeGenerator, and correctly preserves the resheader attributes.

    The SPResX tool can be downloaded from the Visual Studio Gallery and installed on any Visual Studio 2012 system, or found in the Extensions and updates in Visual Studio.

    SPResX in the Extension gallery

Once you have the tool installed you just need to change the Custom Tool to SPResX and save your resource files. You need to do this on all resource files, including the different language and region variants. Now you will get a namespace that corresponds to your default project namespace. For instance, if I have a project called Wictor.WebParts, the namespace will be Wictor.WebParts.Resources (since it is in the Resources folder). If you would like a namespace that is completely different, you can just specify that namespace in the Custom Tool Namespace. (Yes, I know you will get a notification that the file has to be reloaded – that is just the way it is.)

    Configure the Custom Tool

This is how the code from above can be rewritten using this tool. It works in exactly the same way, but with a fancy vanity namespace for the resources.

    SP.SOD.executeOrDelayUntilScriptLoaded(function () {
      SP.SOD.registerSod("demoresources", 
        "/_layouts/15/ScriptResx.ashx?name=demoresources&culture=" + 
        STSHtmlEncode(Strings.STS.L_CurrentUICulture_Name));
    
      SP.SOD.executeFunc("demoresources", 'SPResXDemo', function () {
        alert(SPResXDemo.Resources.helloWorld);
      });
    }, "strings.js");

    Summary

    I hope this little tool will help somebody out there and I would appreciate feedback on it, either on this post or in the Visual Studio gallery.

  • SharePoint 2013: Enabling PDF Previews in Document Libraries with Office Web Apps 2013

    Tags: SharePoint 2013, Office Web Apps

    Introduction

A couple of weeks back I blogged about the March update for Office Web Apps 2013 and also how you could use that update to show PDF previews in a SharePoint 2013 Search Center. Since then I've received a lot of requests on how to enable PDF previews in a Document Library, which isn't there by default. Of course it is not a WAC thing, it's a SharePoint 2013 thing – but the SharePoint 2013 updates (up until now at least) do not provide this capability either.

In this post I will show you that it can be done. It's a JavaScript thing and can be done using a Content Editor Web Part added to all pages where you want the PDF previews, or as a Farm solution that uses a delegate control and a custom JavaScript file.

    [Update 2014-01-01] Some customers may see errors when using this solution and previewing PDFs. If so, make sure that you have the http://support.microsoft.com/kb/2825665 hotfix installed. Thanks to Dan from MSFT Support.

Build the PDF Preview solution

I assume that you are familiar with SharePoint 2013 development and know what a delegate control is. What you need to do is create a new empty Farm solution project. In this project we'll create a new web control that will add a JavaScript file (which we will create in just a minute) to the page. The implementation should look like below:

using System;
using System.Web.UI.WebControls;
using Microsoft.SharePoint.WebControls;

[MdsCompliant(true)]
public class PdfPreviewControl : WebControl
{
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        ScriptLink.RegisterScriptAfterUI(this.Page, "wictor.pdfpreviews/previews.js", false);
    }
}

    The JavaScript file, added as a Layouts file, is what makes the magic happen. We’re using the Script-On-Demand features in this script to make sure that the scripts aren’t executed before the SharePoint filepreview.js file is loaded. Once that file is loaded two new JavaScript objects are created; the filePreviewManager and the embeddedWACPreview. To enable the PDF previews we only need to add the PDF extension to these objects and specify the previewer objects and the dimensions. In this case I use the same settings as the other Word previewers.

function WictorsPdfPreviews() {
    SP.SOD.executeOrDelayUntilScriptLoaded(function () {
        // Register the WAC previewers for the PDF extension, using the same
        // dimensions as the built-in Word previewers
        filePreviewManager.previewers.extensionToPreviewerMap.pdf =
            [embeddedWACPreview, WACImagePreview];
        embeddedWACPreview.dimensions.pdf = { width: 379, height: 252 };
    }, "filepreview.js");
    notifyScriptsLoadedAndExecuteWaitingJobs("wictor.pdfpreviews/previews.js");
}
WictorsPdfPreviews();

    Now we need to make sure that this control loads the JavaScript on all pages. This is done by adding a new Empty Element SPI and creating a Control element pointing to the web control, like this:

    <Control 
      ControlAssembly="$SharePoint.Project.AssemblyFullName$" 
      ControlClass="Wictor.PdfPreviews.PdfPreviewControl" 
      Id="AdditionalPageHead" 
      Sequence="100"/>

    As you can see we’re adding this control to the AdditionalPageHead, which means that we will have it on every page. Do not forget to add the web control as a Safe Control in the project!

    The final thing we need to do is to modify the Feature that was automatically created when the Empty Element SPI was added to the project. You can scope it to whatever you like, but I want it for all Document Libraries in my farm so I set the scope to Farm. The image below shows all the files in the project.

    The Visual Studio 2012 solution

    Deploy and Test

    Now all we have to do is to deploy the solution. Once the solution is deployed and the Farm feature activated we can navigate to any document library and upload a PDF file. Note that you have to be on at least the March 2013 update of Office Web Apps Server and that you have enabled the WordPDF application (see previous blog post). Once you have the PDF file in the library you can click on the ellipsis button and see the PDF Preview:

    PDF Previews in SharePoint 2013

    Disclaimer

As always when it comes to stuff that is not documented: I do not give any guarantee that this will work on your machine(s) or after any upcoming SharePoint patches.

    Summary

Enabling PDF previews is not (yet) a default feature of SharePoint 2013, but it can easily be added to your SharePoint farm – if you're allowed to use full trust solutions. If you don't feel like doing the hacking yourself you can download the WSP here and deploy it to try it out.

  • Recertified as Microsoft Certified Solutions Master (MCSM) for SharePoint

    Tags: MCSM, SharePoint, SharePoint 2013

    Yesterday I got the really cool news that I completed all recertification requirements for the Microsoft Certified Solutions Master: SharePoint certification. Couldn’t be a happier SharePoint professional right now!

    MCSM: SharePoint

    What is the MCSM and what about MCM?

The Microsoft Certified Master (MCM) program has during the last year transitioned into the Microsoft Certified Solutions Master (MCSM) program. It is not only a change in name but also a change made to adapt to the new world order. The program no longer focuses on one specific version of the product but instead focuses on what's in the market at the current moment, and specifically it covers both on-premises and cloud solutions. This is good in many senses – it allows the program to always be current, always use the latest techniques and technologies etc. The MCM was a certification without an expiration date (well, eventually the product ceases to exist, but you still have the cert) whereas the MCSM has a three year life span and you must recertify to stay on top.

    The first MCSM : SharePoint rotation!

I was fortunate to be able to participate in the rotation called U3, or Upgrade 3, which was the first MCSM rotation for SharePoint. It was two weeks on site, in the always sunny and warm Seattle, in January. I had the opportunity to spend these two weeks in a classroom with the finest SharePoint professionals there are. We had great instructors, awesome labs, and fantastic discussions over the two weeks. It all led up to one written exam, called the Knowledge Exam, and one hands-on lab, called the Qualification Lab. As always, the QL was basically doing a couple of weeks' worth of deep dive SharePoint work in about 8 hours. Taking into account that this was a beta, it was just about the hardest work I've ever done – but it was pure fun. And I made it! Phew…

    Congrats to all my other friends that made it through this rotation and best of luck to the ones who will have the joy of doing the exams one more time! You can do it!

  • Introducing Open WOPI - an open WOPI Client for SharePoint, Exchange and Lync

    Tags: SharePoint 2013, WOPI, Office Web Apps, Open WOPI

    Today at the SharePoint Evolutions 2013 Conference I announced my latest pet project called Open WOPI. Open WOPI is an open WOPI client that allows you to extend SharePoint 2013, Exchange 2013 and Lync 2013 with file previews and editors for any type of file formats.

    Open WOPI

The project is now (at least very, very soon) available for download from openwopi.codeplex.com and is published under the Ms-PL license. This is currently an early beta (or whatever you would like to call it) but will be improved over time.

    Standards

Open WOPI is based on the [MS-WOPI] protocol, published by Microsoft, and used by Office Web Apps Server 2013, SharePoint 2013, Exchange 2013 and Lync 2013.

    File format support

    Currently Open WOPI has support for the following formats:

      • GPX - Uses Bing maps
      • TXT - Viewing and editing
      • VSDX - Thumbnail viewing

    More formats are in the works…

    Documentation

    Not much yet, but I’ll try to add that over the next few weeks.

    More information

    For more information, feedback and ideas about the project please refer to the Codeplex site: openwopi.codeplex.com. I’d like to hear what file formats you would like Open WOPI to support.

    The slides from the presentation at the SharePoint Evolutions conference and links can be found here.

    Contributors

    Thanks to Sam Dolan, aka Pinkpetrol, for the really cool logo for Open WOPI.

  • SharePoint 2013: Enabling cross domain profile pictures

    Tags: SharePoint 2013

I just discovered a really interesting and just awesome nugget in SharePoint 2013 that solves a problem that has been annoying me for a long time. The problem manifests itself when you have multiple URLs for your SharePoint farm, or when using SAML or Forms based login (like in Office 365 and SharePoint Online), and you're using the profile pictures on sites not residing on the My Site Host Web Application (or host named site collection). Then the user profile picture is not shown – you get the default image-not-found image or you're prompted to authenticate with the My Site Host.

Let's take an example. Assume I have one site at intranet.contoso.com and the My Site host exists on mysite.contoso.com. I have not configured any Internet Explorer zones or anything, and I'm prompted to log in at each location. This is how the Newsfeed Web Part will look on intranet.contoso.com if I cancel out of the authentication prompt or if I'm using some forms based login:
    No picture...

You see, no fancy picture of Mr Administrator! There are a couple of ways to solve this using IE zones, anonymous access etc., but none are perfect and all come with consequences.

So how can I get the picture to be shown without messing with security, cross domain issues etc.? Fortunately I guess I was not the only one annoyed by this (most likely everyone using Office 365 as well), so the SharePoint team has added a new feature to SharePoint that allows us to show profile pictures cross-domain.

    It’s a very simple operation and just requires some basic PowerShell skills. Basically all you need to do is to set the CrossDomainPhotosEnabled property on the SPWebApplication object to true, like this:

    asnp Microsoft.SharePoint.PowerShell
    $wa = Get-SPWebApplication http://intranet.contoso.com
    $wa.CrossDomainPhotosEnabled = $true
    $wa.Update()

    Now the Newsfeed, in the sample above, will look like below. And I was not prompted for any authentication or anything! Isn’t that sweet! And it works very well on Host Named Site Collections as well.

    Look at that guy!

Basically, what happens behind the scenes is that the request for the user picture is sent via a “proxy” .aspx page called userphoto.aspx, which takes a couple of parameters: the URL of the picture, the account name (or e-mail) and the picture size (S, M or L). This page will return a JPEG stream of the user profile picture without crossing any domains on the client side.
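For reference, a request to this proxy page could look something like this (the account name is an example; size takes S, M or L):

/_layouts/15/userphoto.aspx?size=M&accountname=CONTOSO%5Cadministrator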

I hope this little nugget will save you and your customers a lot of time and annoyance…

  • SharePoint 2010 Web Parts in Action as the Manning Deal of the Day

    Tags: SharePoint 2010 Web Parts in Action

If you still haven't picked up on my book about SharePoint Web Parts – SharePoint 2010 Web Parts in Action – then this is the chance for you. Today (5th of April) the book is featured as the Manning Deal of the Day. All you have to do is browse to http://www.manning.com/wilen/ and then use the dotd0405au promotion code. This will give you a 50% discount – for you non-math-geniuses that's half off the full price! And since you now saved a couple of bucks, there's another Manning book with the same deal and promotion code today and that's PowerShell in Action 2nd edition.

    PS: Even though the name of the book implies SharePoint 2010, it’s fully applicable to SharePoint 2013 – very little has changed in the Web Parts space.

    Well, what are you waiting for, go get it…

  • Speaking at SharePoint Evolutions Conference 2013

    Tags: Conferences

In less than two weeks I'm speaking at the SharePoint Evolutions Conference 2013 in London. It is as exciting for me as it is for all attendees. The conferences held by Combined Knowledge have proven over the last years to be the highlight among SharePoint conferences around the world – everything from the venue, to the people, to the sessions and to the parties is surgically planned and executed – no one leaves without a smile on their face, pumped with knowledge!

Being selected as a speaker for this conference is by far one of the proudest moments for any SharePoint professional, and I'm so glad that I'm back for another year, hanging around with the best of the best – and not only speaker-wise: the speakers at the Evolutions conference are people who know the product inside out and work with it in real projects.

    Speaker_Evo-2013-Banner

This year I'm doing one session called Extending SharePoint, Exchange and Lync 2013 with a custom WOPI Client. I will walk you through the WOPI protocol, which is what Office Web Apps Server 2013 (WAC) is built upon, and show you how to make your own WOPI client for custom file formats, for viewing, editing, previews etc. This will be an interesting session and show you a really compelling “partner opportunity”. I've prepared a good set of demos, so be there!

    Looking forward to seeing old and new faces at this conference. If you haven’t registered yet – there’s still a chance for you to get your very own ticket.

  • Renewed as SharePoint Most Valuable Professional (MVP) for 2013

    Tags: SharePoint, MVP, Personal

    I just received the confirmation that I am renewed as SharePoint MVP (Microsoft Most Valuable Professional) for my fourth consecutive year. It’s an honor being chosen among all the professionals around the world, especially now when SharePoint is getting more and more widespread and is being adopted by more and more companies worldwide.

    Microsoft MVP

I'd like to take the opportunity to say thanks to all my colleagues at Connecta, who put up with me, and to all my friends around the world that I've gotten to know throughout these years. I'll continue to write obscure blog posts and show up at conferences, and I will continue to organize the Swedish SharePoint User Group meetings.

    Thank you all for the support!

  • SharePoint 2013 Managed Metadata field and CSOM issues in 2010-mode sites

    Tags: SharePoint 2013, SharePoint 2010, CSOM, JSOM

    Introduction

SharePoint 2013 introduces a new model for how Site Collections are upgraded, using the new Deferred Site Collection Upgrade. To keep it simple, it means that SharePoint 2013 can run a Site Collection in either 2010 mode or 2013 mode, and SharePoint 2013 contains a lot of the SharePoint 2010 artifacts (JS files, Features, Site Definitions) to handle this. When you're doing a content database attach to upgrade from 2010 to 2013, only the database schema is upgraded and not the actual sites (by default). The actual Site Collection upgrade is done by the Site Collection administrators when they feel that they are ready and have verified the functionality of SharePoint 2013 (or you force them to upgrade anyway). But the Site Collection admin might have to upgrade sooner than expected for some sites.

    Upgrade troubles

This post is all about what happened to us when our Office 365 SharePoint Online (SPO) tenant was upgraded (and right now the Site Collection upgrade is disabled for us, so we can't do anything about it). The customization options in the 2010 version of SPO were very limited (to be kind) and we embraced JavaScript (which people have despised for a decade and now suddenly think is manna from heaven). We also leveraged Managed Metadata in a lot of lists, libraries and sites. We've built Web Parts using the JavaScript CSOM to render information, and also .NET CSOM stuff running in Windows Azure doing background work. Once our tenant was upgraded to 2013, all of the customizations using Managed Metadata stopped working…

    JSOM sample with Managed Metadata

I will show you one example of what works in SharePoint 2010 but does not work in SharePoint 2013 when your site is running in 2010 compatibility mode.

    Assume we have a simple list with two columns; Title and Color, where Color is a Managed Metadata Field. To render this list using JSOM we could use code like this in a Web Part or Content Editor or whatever.

var products;
function loadProducts() {
    document.getElementById('area').innerHTML = 'Loading...';
    var context = new SP.ClientContext.get_current();
    var web = context.get_web();
    var list = web.get_lists().getByTitle('TestList');
    products = list.getItems('');
    context.load(products, 'Include (Title, Color)');

    context.executeQueryAsync(function () {
        var collection = products.getEnumerator();
        var html = '<table>';
        while (collection.moveNext()) {
            var product = collection.get_current();
            html += '<tr><td>';
            html += product.get_item('Title');
            html += '</td><td>';
            // 2010 JSOM returns Managed Metadata as the string "Term|termId"
            html += product.get_item('Color').split('|')[0];
            html += '</td></tr>';
        }
        html += '</table>';
        document.getElementById('area').innerHTML = html;
    }, function () {
        document.getElementById('area').innerHTML = 'An error occurred';
    });
}
ExecuteOrDelayUntilScriptLoaded(loadProducts, 'sp.js');

    As you can see, it’s a very simple approach. We’re reading all the items from the list and rendering them as a table, and this HTML table is finally inserted into a div (with id = area in the example above). This should look something like this when rendered:

    JSOM Rendering in SharePoint 2010

    The key here is that Managed Metadata in 2010 JSOM is returned as a string object (2010 .NET CSOM does that as well). This string object is a concatenation of the Term, the pipe character (|) and the term id. So in the code sample above I just split on the pipe character and take the first part. There was no other decent way to do this in SharePoint 2010 and I’ve seen a lot of similar approaches.

    Same code running in SharePoint 2013 on a SharePoint 2010 mode site

    If we now take this Site Collection and move it to SharePoint 2013, or recreate the solution on a 2010 mode site in SharePoint 2013, and then run the same script, this is what we’ll see – something goes wrong…

    Failed JSOM Rendering

    You might also see a JavaScript error, depending on your web browser configuration. Of course proper error handling could show something even more meaningful!

    JSOM Runtime exception

    Something is not working here anymore!

    What really happens is that the CSOM client service does not return a string object for Managed Metadata but instead a properly typed TaxonomyFieldValue. But that type (SP.Taxonomy.TaxonomyFieldValue) does not exist in the 2010 JSOM. Remember, I said that SharePoint 2013 uses the old 2010 JavaScript files when running in 2010 compatibility mode. Unfortunately there is no workaround, unless we roll our own SP.Taxonomy.TaxonomyFieldValue class (but that’s for another JS whizkid to fix; just a quick tip to save you the trouble – you cannot just add the 2013 SP.Taxonomy.js to your solution).

    So why is this so then?

    If we take a closer look at what is transferred over the wire we can see that when running on SharePoint 2010 the managed metadata is transferred as strings:

    Fiddler trace on SharePoint 2010

    But on SharePoint 2013 it is typed as a TaxonomyFieldValue object:

    Fiddler trace on SharePoint 2013

    It’s a bit of a shame, since the server is actually aware that we’re running the 2010 (14) mode client components! (SchemaVersion is what we sent from the CSOM and LibraryVersion is the library used on the server side.)

    Fiddler trace on SharePoint 2013

    I do really hope that the SharePoint team think about this for future releases – respect the actual schema used/sent by the Client Object Model!

    Of course, this JavaScript solution will not work as-is when upgrading the Site Collection to 2013 mode. That is expected and that’s what the Evaluation sites are for.

    What about .NET CSOM?

    We have a similar issue in .NET CSOM, even though we don’t get a SharePoint CSOM runtime exception. Instead of returning a string object it gives you back a Dictionary with Objects as values – so if your code is expecting a string you will still get an exception. In 99% of the cases it will fail here as well.
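    To illustrate, here’s a minimal defensive sketch using the .NET CSOM from PowerShell 3.0. The DLL paths, site URL, account and list name are placeholders, and the “Label” dictionary key is an assumption based on what the taxonomy field value contains – treat it as a sketch, not gospel:

    # Load the client assemblies (adjust the paths to where the SDK lives on your box)
    Add-Type -Path "C:\SDK\Microsoft.SharePoint.Client.dll"
    Add-Type -Path "C:\SDK\Microsoft.SharePoint.Client.Runtime.dll"

    $ctx = New-Object Microsoft.SharePoint.Client.ClientContext("https://tenant.sharepoint.com")
    $pwd = Read-Host -AsSecureString "Password"
    $ctx.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials("admin@tenant.onmicrosoft.com", $pwd)

    $list = $ctx.Web.Lists.GetByTitle("TestList")
    $items = $list.GetItems([Microsoft.SharePoint.Client.CamlQuery]::CreateAllItemsQuery())
    $ctx.Load($items)
    $ctx.ExecuteQuery()

    foreach ($item in $items) {
        $color = $item["Color"]
        if ($color -is [string]) {
            # 2010 behavior: a 'Term|TermId' string
            Write-Host $item["Title"] ($color -split '\|')[0]
        } else {
            # 2013 behavior: a dictionary/typed value - read the label defensively
            Write-Host $item["Title"] $color["Label"]
        }
    }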

    Nothing exciting here, move along...

    Summary

    Deferred Site Collection Upgrade might be a good idea, and you might think that your customizations will work pretty well even after an upgrade to SharePoint 2013, just as long as you don’t upgrade your actual Site Collections to 2013. But you’ve just seen that this is not the case.

    Happy Easter!

  • SharePoint 2013: Enabling PDF Previews with Office Web Apps 2013 March 2013 update

    Tags: SharePoint 2013, Office Web Apps

    In my last post (still smoking fresh) I showed you how to update your Office Web Apps 2013 farm to the March 2013 update, connect it to SharePoint 2013 and view PDF documents in the browser. What I didn’t explain or show in that post was how to enable PDF Previews in Search – but I’ll do it now.

    Pre-requisites

    Before you start fiddling with this, you need to make sure that you have the March 2013 update of Office Web Apps Server 2013 (WAC) installed and connected to your farm. If you don’t know for sure, ask your admins – sometimes they know… If they don’t, give them the link to my previous blog post.

    Note: You don’t have to have a patched SharePoint 2013, this will work on the RTM bits.

    Default PDF Search Experience

    SharePoint 2013 natively supports crawling PDF documents through the new document parsers. That is, you don’t have to fiddle with any custom PDF iFilters. The native PDF document parser is a good enough solution, but it has some room for improvement.

    The search experience and display in SharePoint 2013 is based upon Display Templates. Display Templates decide how the result should be shown and what the fly-out of the result should look like. For Office documents, when SharePoint 2013 is connected to a WAC farm, SharePoint displays inline previews which you can use to skim through the results really quickly. For PDF this is not the case – not even if you use a WAC farm with the March 2013 update (even though the WOPI binding supports the interactivepreview action). This is a sample of how a PDF document could look in SharePoint 2013 Search:

    Default PDF Search Experience

    Enabling PDF Previews in the Search Result

    Since we have the opportunity to modify the Display Templates and create our own search experience, we can very easily modify the fly-out/hover panel of the PDF results to show the interactive preview. We can do this in two different ways…

    Create a new Result Type

    The easiest and fastest approach to enable previews for PDF documents is by creating a custom Result Type. This is done by going to Site Settings > (Search) Result Types and then finding the PDF Result Type. Choose to Copy the PDF Result Type. This will create a new Result Type.

    Copy the Result Type

    Give the new Result Type an appropriate name, “PDF with Preview” for instance. Then scroll down to Actions and in the “What should these results look like?” drop-down, choose to use the Word Display Template.

    Word or PDF?

    Then just click Save, go back to your Search Center and search for a PDF document and voilà – we have PDF Previews.

    PDF Previews in da house!

    Modifying the PDF Display Templates

    The second approach, which is a bit more advanced, is to actually modify the PDF Flyout Display Template. You do this by going to Site Settings > Design Manager and then choosing 5. Edit Display Templates. Locate the PDF Flyout item by filtering on the Target Control Type column using SearchHoverPanel, then scroll down to PDF Hover Panel. To modify the PDF Hover Panel, use the Word Hover Panel as a template. I’m not going into all the details on how to modify the actual HTML file (see my similar post on how to achieve this here). But once you’ve modified it, make sure to save it to the gallery so the JS file is generated, and publish it. Now (if you made the correct updates) you will have custom Search Previews for PDF documents.

    Summary

    You’ve now seen how quickly and easily you can enable PDF Search Previews in SharePoint 2013 using the March 2013 update for Office Web Apps 2013. All you need to do is either create a custom Result Type that uses the Word Display Templates or modify the default PDF Display Template. I really hope that Microsoft makes this a standard feature in upcoming SharePoint 2013 releases.

  • Office Web Apps 2013: Patching your WAC farm with no downtime

    Tags: Office Web Apps, SharePoint 2013

    I’m really glad to see some patches being rolled out for Office 2013, SharePoint 2013 and Office Web Apps 2013. There are some really important fixes, and some very interesting ones that I’ve been waiting for. In this post we’ll take a look at the first Office Web Apps 2013 (WAC) update – specifically we’re looking at how to patch your WAC farm to minimize the downtime. If you follow my instructions you will have zero downtime (except for a brief moment where Excel stuff will not be accessible).

    Background and preparations

    RTM WACs

    For this sample I will have a SharePoint 2013 (RTM) farm connected to a WAC 2013 (RTM) farm. The WAC farm is load balanced using NLB, as illustrated on the right.

    In order to update our WAC farm we need to download the March 2013 patch for Office Web Apps 2013. You can find it at the Microsoft Download Center and it is called “Update for Microsoft Office Web Apps Server 2013 (KB2760445)”. The KB article does not reveal much of what has changed, but if you take a look at an earlier patch, KB2760486 (March 5th), you’ll notice some really cool things. If you don’t have time to read it – I’ll show you some of them at the end of this post. To be clear: you do not have to install KB2760486, you only need the KB2760445 one.

    Patching the first WAC Server

    After downloading the patch and copying it to our WAC machines we can start the patching process. We would like to do this without the users even noticing it. First of all we need to take one of the servers out of rotation in the WAC farm. You do this using the load balancer. In my case I’m using Microsoft NLB and using PowerShell this is an easy task. Next thing to do is just remove the WAC server from the WAC farm – that is also a one-line PowerShell command. WAC Servers should only be patched with the binaries installed, not when they are participating in a WAC farm.

    These two lines do the trick:

    Get-NlbClusterNode -NodeName $env:COMPUTERNAME | Stop-NlbClusterNode -Drain
    Remove-OfficeWebAppsMachine
    
    NOTE: You MUST remove the machines from the old farm before patching and then later create a new WAC farm (follow the instructions thoroughly). You cannot patch a running WAC Farm!

    Once this is done the server is no longer receiving requests and it’s no longer a part of the WAC farm and we can start patching this machine. All our end-user requests are now going to the remaining server(s) in the WAC farm.

    One RTM and one patching going on...

    To patch the machine, start the downloaded .exe file (wacserver2013-kb2760445-fullfile-x64-glb.exe) and just follow the instructions; accept the license agreement etc. You will be asked to close any PowerShell (ISE) windows that have the Office Web Apps module loaded, and you might be asked to reboot once the update is finished (there’s of course no harm in doing it anyways).

    When the patching is done (and you have rebooted) we need to create a new WAC farm on the patched server. You do this the same way as you did for the RTM version. Use the same server name as your old farm, the same settings and the same certificate.
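    As a sketch – assuming the same internal URL and certificate friendly name as the old RTM farm (the values below are placeholders) – recreating the farm is a single cmdlet:

    # Recreate the WAC farm on the patched server, mirroring the RTM settings
    New-OfficeWebAppsFarm -InternalUrl "https://wac.corp.local" `
        -CertificateName "WAC Certificate" -EditingEnabled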

    Patching the rest…

    Once the new farm is created you have a few options, depending on how many servers you have in your WAC farm. For instance, if you have more than two, you can take another one out of rotation, patch it, join it to the new and patched farm and then flip the load balancer so it points to the new WAC farm. Depending on your load balancer you might have a brief moment where end-user requests are served by both versions. A bit of a funky situation, but it works, as the illustration below shows:

    Dual versions...

    In my case I have two servers, so once I created the new farm I just put the patched server into the NLB rotation and then immediately take the RTM server out of rotation. Once the RTM WAC Server is taken out of rotation I remove WAC from that machine and start the patching process. All requests to the WAC farm are now on our patched server(s).

    One March 2013 and the other one is being patched

    As you can see, simple PowerShell operations this time as well, and this can (and should) be automated.

    # On the new patched WAC Server
    Get-NlbClusterNode -NodeName $env:COMPUTERNAME | Start-NlbClusterNode
    
    # On the RTM WAC Server
    Get-NlbClusterNode -NodeName $env:COMPUTERNAME | Stop-NlbClusterNode -Drain
    Remove-OfficeWebAppsMachine
    
    

    Once the other server(s) are patched, you need to join them to the newly created (and patched) WAC farm – just as you normally join a server to a WAC farm. And then you’re done. Your farm is now patched and ready! But wait, there’s some stuff you have to do on the SharePoint side to enjoy all the new cool features.
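    For reference, joining a freshly patched server is also a one-liner – a sketch where the host name of an existing farm member is a placeholder:

    # Run on the patched server to join it to the new farm
    New-OfficeWebAppsMachine -MachineToJoin "wac01.corp.local"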

    All new and shiny!

    Configuring new WOPI bindings

    One of the most interesting new features in the March 2013 update of Office Web Apps is the ability to view PDF documents! Yup, you heard me right! The March 2013 update contains a couple of new WOPI bindings that enable viewing and previewing (even mobile views) of PDF documents. These new WOPI bindings have an application name of WordPdf. If you try to use the New-SPWOPIBinding cmdlet and just specify the WAC server name, you will get an error – since you already have the other WOPI bindings. What you need to do is to specify the name of the application as an extra parameter to the command, like this:

    New-SPWOPIBinding -ServerName wac.corp.local -Application WordPdf

    WOPI Bindings

    Another approach could be to remove all the bindings and then add them all back again – of course this would cause a few seconds of downtime, and that’s not how I roll…

    Once you’ve run that command (and waited a couple of seconds) you can start previewing PDF documents in your Document Libraries. And now we’ve got yet another good reason not to install the crapware called Adobe Reader!

    Viewing PDF documents

    Now, your end users can start taking advantage of all the new features in the March 2013 update.

    New Excel WOPI Bindings!

    If you examine the WOPI bindings provided by the March 2013 update really carefully, you’ll notice that the Excel application has six new bindings – three each for the two new actions syndicate and legacywebservice. We cannot use these two actions in SharePoint 2013 RTM – the New-SPWOPIBinding cmdlet does not accept them. They are not a part of the well-known WOPI action values (as specified in the WOPI specification available at the time this post is written) – and I have no clue at the moment what they do…stay tuned…
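    If you want to peek at what the WAC farm itself advertises, you can pull its WOPI discovery XML – a sketch, where the farm URL is a placeholder and a trusted certificate is assumed:

    # Fetch the WOPI discovery document and list the new Excel actions
    [xml]$disco = (New-Object System.Net.WebClient).DownloadString("https://wac.corp.local/hosting/discovery")
    $disco.SelectNodes("//app[@name='Excel']/action") |
        Where-Object { $_.GetAttribute("name") -in "syndicate","legacywebservice" } |
        ForEach-Object { $_.GetAttribute("name") + " (" + $_.GetAttribute("ext") + ")" }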

    Summary

    You’ve just seen how simple patching an Office Web Apps 2013 farm is. Compare that to patching SharePoint! By using the load balancer effectively you can make sure that the end-users can continue to use the Office Web Apps features while you’re doing the patching. Enjoy!

  • SharePoint 2013: Intelligent Promoted Results and Best Bets

    Tags: SharePoint 2013

    Introduction

    The search engine and search experience in SharePoint 2013 have been totally rewritten compared to their predecessors. In SharePoint 2010 we had something called Best Bets, or Visual Best Bets if you worked with FAST for SharePoint. A best bet was a promoted result that was triggered by one or more keywords, used by the search admins to promote certain links or banners for specific search queries. In SharePoint 2013 this is now called a Promoted Result, and the procedure for creating them is different and so much better – there are more ways to trigger them, more ways to render the results etc. But the actual result shown isn’t that smart – until now…

    In this post I will show you how to create an even smarter and more intelligent Promoted Result – a best bet that actually uses the search query to do something interesting. In this sample I will let users enter simple math questions and then we let the promoted result calculate the answer for you (just as the big search engines on the interwebz do).


    Creating Promoted Results aka Best Bets

    A Promoted Result is created either on the Search Service Application, Site Collection or Site level using the Search Query Rules option. To create a new Site Collection scoped Promoted Result you navigate to Site Settings > Search Query Rules. Next thing to do is to choose which scope to create this Query Rule for – choose Local SharePoint Results in this case. You have the option to further narrow down the Query Rule using segments and topic categories, but that’s not relevant for this exercise. Next, click on New Query Rule. The Add Query Rule page contains quite a few options to make sweet things happen. First we have to give it a name and then we have to specify the query conditions. Query Conditions are what trigger this promoted result. In order to trigger on our “mathematical” question we need to use a regular expression. To use a regular expression as the Query Condition you have to select Advanced Query Text Match and then write a regular expression.


    I’m no regular expression savant, but this regular expression works for this blog post. Anyone with a cooler or smarter or more complex regex – feel free to post it in the comments.

    \d+\s*[+\-*/]\s*\d+
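    You can sanity check the expression before saving the Query Rule, for instance from PowerShell (which, like SharePoint, presumably uses the .NET regex flavor):

    "3 + 4" -match '\d+\s*[+\-*/]\s*\d+'             # True
    "117*42" -match '\d+\s*[+\-*/]\s*\d+'            # True
    "sharepoint rocks" -match '\d+\s*[+\-*/]\s*\d+'  # False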

    Using a custom page as the Promoted Results

    Next up is to add the Promoted Results Action to this Query Rule. This is done by clicking on the Add Promoted Result further down on the Add Query Rule page. When adding a promoted result you will be asked for a Title, a URL and a description. In our case the URL is of interest – we want to render a page containing our logic to calculate the math query.

    I let the URL point to a page (an .aspx page we’ll create in a few seconds) and then I pass two query string parameters to it; IsDlg=1 to get rid of the chrome in the page and query={SearchTerms} to pass the actual query to the page. I also check the check box Render the URL as a banner instead of as a hyperlink so that the promoted result will be rendered as an iframe instead of a link.


    Seems simple, doesn’t it? But wait – this will not work until you have followed all the steps in this post. There is nothing out of the box that allows you to pass the search query to the Visual Best Bet.

    Now click Save on the Add Promoted Result page and then Save on the Add Query Rule page.

    Creating the custom promoted results page

    Now, before we test this (and it also takes a couple of seconds before this promoted result can be used – don’t ask me why!) let’s create our page that should calculate the result from the query. For this demo we create a fairly simple .aspx page with a JavaScript section and a small HTML snippet, as shown in the snippet below:

    <%@ Page Inherits="Microsoft.SharePoint.WebPartPages.WebPartPage, Microsoft.SharePoint,
        Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"
        MasterPageFile="~masterurl/default.master" Language="C#" %>
    <asp:Content ContentPlaceHolderID="PlaceHolderAdditionalPageHead" runat="server">
        <script type="text/javascript">
            function getQueryStringValue(name) {
                var qs = window.location.search.substring(1);
                var pairs = qs.split("&");
                for (var i = 0; i < pairs.length; i++) {
                    var pair = pairs[i].split("=");
                    if (pair[0] == name) {
                        return pair[1];
                    }
                }
                return '';
            }
            _spBodyOnLoadFunctions.push(function () {
                var query = getQueryStringValue("query");
                try {
                    document.getElementById("result").innerHTML = eval(query);
                } catch (e) {
                    document.getElementById("result").innerHTML = "error";
                }
            });
        </script>
        <style>
            #s4-ribbonrow { display:none; }
        </style>
    </asp:Content>
    <asp:Content ContentPlaceHolderID="PlaceHolderPageTitleInTitleArea" runat="server">
        Intelligent Best Bet
    </asp:Content>
    <asp:Content ContentPlaceHolderID="PlaceHolderMain" runat="server">
        <h1>SharePoint Search Calculator</h1>
        <h2>The result is: <span id="result"></span></h2>
    </asp:Content>

    The first section contains a simple JavaScript that extracts the Search Query from the query string and then uses the JavaScript eval() method on it to “calculate” the value. In the PlaceHolderMain I added a span element which is where the calculator writes the result.

    So let’s upload this page to the Documents library in the Search Center – which was the location I specified when creating the Promoted Result. If we now search for something that should be triggered by the Query Rule, we will see the page rendered as a banner, but it will not calculate the query as expected.

    Passing the search query to the custom promoted results page

    So, why aren’t our results rendering properly? It all comes down to the fact that the default Display Template for Best Bets/Promoted Results does not pass the search query to the iframe. In order to fix this we need to modify this Display Template. The Display Template that we need to edit is the Item_BestBet.html file.

    First we add two lines, directly following the var isVBB = … line, like this:

    var isVBB = ctx.CurrentItem.IsVisualBestBet;
    
    var url = ctx.CurrentItem.Url;
    url = url.replace('{SearchTerms}', ctx.CurrentItem.ParentTableReference.Properties.QueryModification);
    

    The code is crafting the URL to use in the iframe by retrieving the URL from the CurrentItem and then replacing the text {SearchTerms} with the actual search terms used.

    Next we need to modify the line where the iframe is rendered.

    <iframe id="_#= $htmlEncode(id + Srch.U.Ids.visualBestBet) =#_" 
        class="ms-srch-item-visualBestBet" title="_#= $htmlEncode(ctx.CurrentItem.Title) =#_" 
        scrolling="no" frameborder="0px" 
        src="_#= $urlHtmlEncode(url) =#_"></iframe>
    
    Modify the src attribute of the iframe element so that it uses our new url property instead of the default URL value from the current context. Save the file and go back to the Search Center…

    See it in action

    Now we can take it for a test drive. Search for simple mathematical questions and see the results being rendered as a promoted result.


    Summary

    I’ve just shown you how to make the Promoted Results aka Visual Best Bets more intelligent by creating a Query Rule that promotes a web page that does operations on the search query. To get this all to work the secret ingredient was to modify the Display Template for promoted results to allow it to pass the search query to the promoted results page.

    I can really see this being extended to provide the users with even more interesting “intelligent” promoted results.

  • SharePoint 2013 Central Administration Productivity Tip

    Tags: SharePoint 2013

    Here’s a short post with a tip that I find very useful.

    In many scenarios you have several SharePoint 2013 installations to handle – it might be different farms, production environments, testing, staging, development etc. Do you know which Central Administration you’re working in at the moment? They all look the same, SharePoint Blue, the regular Status Bar warning that you’re running out of disk space etc. Unless countermeasures are taken you don’t know what environment you’re in unless you take a look at the URL – which in many cases is just another server name and port. It’s very easy to make a mistake and make a change in the production environment instead of in the test or dev environments.

    So, how do we keep track of what Central Administration site we’re actually working on at the moment? One way could be to change the theme of the Central Admin site. But, was the production CA red or was it the one with the dog in the background? I’ve got a better tip for you!

    Here’s what I’ve done to keep track of the Central Administration sites: I take advantage of the Suite Link Bar in the upper left corner. By default it says just “SharePoint” – yes, we know it’s SharePoint.

    Standard Suite Bar

    By modifying the Web Application property that controls this text we can easily change it to something more friendly and appropriate for the specific farm, like below.

    Cool Suite Bar

    It’s a very simple PowerShell operation to accomplish this. You just retrieve the Central Administration Web Application object, then update the SuiteBarBrandingElementHtml property and set its value to something that tells you which Central Administration site this is:

    Add-PSSnapin Microsoft.SharePoint.PowerShell
    # Grab the Central Administration web application
    $ca = Get-SPWebApplication -IncludeCentralAdministration | `
        ?{$_.IsAdministrationWebApplication -eq $true}
    $ca.SuiteBarBrandingElementHtml = "<div class='ms-core-brandingText'>Central Admin: FarmA Production</div>"
    $ca.Update()

    You should leave the div element with the ms-core-brandingText class, to get decent formatting of the text, but inside it you can add whatever HTML you like (I’m thinking the marquee or blink tag…).

    That’s it. I hope I saved a few kittens from being slaughtered…

  • SharePoint 2013: Personal Site instantiation queues and bad throughput

    Tags: SharePoint 2013

    In SharePoint 2013 the way Personal Sites (aka My Sites) are created has been totally rebuilt to support the new way of utilizing the Personal Sites. In this post I will go through how Personal Sites are provisioned, asynchronously, and bust a couple of myths about how interactive Personal Site instantiations are supposed to be “prioritized” and increase throughput.

    Get the most out of SharePoint

    Background

    Personal Sites or My Sites were previously created “on-demand”. When a user went to his or her non-existing My Site, the provisioning started while the user waited for the site to be created, painfully watching the spinning animated gif. This worked fine in SharePoint 2010 and earlier, but in SharePoint 2013 so much more depends on the user having a Personal Site – everything from the social stuff in SharePoint 2013 (yes, not all SharePoint customers have yet wandered down the Yammer road) to the really great Office 2013 interaction (SkyDrive whatever). Requiring users to “manually” create their Personal Sites is no longer a good option…

    Note: this post is valid for SharePoint 2013 RTM – and hopefully some of the myths/bugs/features/stuff that is explained in this post will be fixed in upcoming releases…

    Asynchronous Personal Site instantiation

    In SharePoint 2013 the Personal Site is no longer created synchronously while the user waits for the site to be created. Instead, all Personal Site provisioning is handled asynchronously by the Timer Job service (OWSTIMER.exe). So how does all this work? Whenever a user visits the My Site Host for the first time they are forced to create a Personal Site (depending on the My Site creation settings in the User Profile Service Application). The Personal Site instantiation is then added to one of possibly three queues (more about these in a bit). A set of timer jobs picks up the instantiation requests from these queues, creates the Personal Site and finally sends an e-mail to the user. The benefits of this new approach are many; the user can continue to do other stuff in SharePoint (real productive work instead of facetweeting for instance), the provisioning is not hogging the w3wp.exe process, we avoid performance bottlenecks, we should(!) see improved Personal Site creation throughput etc. There are of course drawbacks to this approach, such as that it could potentially take some time before the personal site is created, since we have to wait for the timer job. But all in all this is something that is necessary (especially for the cloud) and something that is very good.

    The Instantiation queues

    I mentioned that there are three queues for the Personal Site instantiation.

    • The Interactive Request Queue
    • The Second Interactive Request Queue
    • The Non-interactive Request Queue

    First of all – the difference between the Interactive and Non-interactive requests is basically that all “requests” to create a Personal Site through the web user interface are called interactive, while those requests coming from the Office client are non-interactive. The reason for having a second interactive request queue is that requests coming from the user interface – users actually wanting to facetweet – should have a higher priority and a higher personal site provisioning throughput.


    Each queue is (by default) polled every minute by a timer job – one per queue. Each Timer Job will retrieve five (5) items from the queue and process them one at a time. According to some (ahem, SPC12) presentations this timer job is executed on every WFE, but that’s not what happens in real life. The Timer Jobs are based on the SPWorkItemJobDefinition Job Definition Type. This is a really nice timer job implementation that has a queue per content database. In this case the queue exists in the Content Database where the My Site Host is residing. This also means – since the lock type of the job is Content Database per Web Application – that only one server will run this timer job. So, to sum it up: for each queue, every minute, a total of 5 Personal Sites can be created, which means about 300 Personal Sites per hour per queue – if you have the hardware that can handle that! One might think that since there are two interactive queues we could get a throughput of 600 Personal Sites an hour, well…no…
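    You can inspect these timer jobs and their schedules with PowerShell. A sketch – the name filter is an assumption on my part, so adjust it if your job names differ:

    # List the My Site instantiation timer jobs and when they last ran
    Get-SPTimerJob | Where-Object { $_.Name -like "*Instantiation*" } |
        Select-Object Name, Schedule, LastRunTime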

    The second (and useless) interactive queue

    The idea with the second interactive queue is that interactive personal site requests should have a higher throughput, and that is a good idea. But unfortunately something went wrong when this was implemented. Stop reading now if you don’t like digging deep into SharePoint internals and just accept the fact that this implementation is flawed…

    The three queues, or rather the three Timer Jobs, are created per Web Application through a hidden feature called MySitesInstantiationQueues. This feature creates the timer jobs and also configures an object in the config database (an SPPersistedObject) which contains a value for how many queues should be used. The funny thing is that this is an internal definition (the SPPersistedObject) and it sets the number of instantiation queues to 1, not 2. This means that we’re hardcoded to only have one instantiation queue, and the second interactive timer job just doesn’t do anything since its queue is always empty – we could use reflection to modify it, but then we’re in unsupported territory. So there goes the idea of bumping up the number of Personal Sites created asynchronously.

    You can very easily see this in the ULS logs:

    [Enque selection] GetWorkItemGuidHelper: Choosing first interactive queue instance as we are using only one queue.

    Also, if we take a look directly at the configuration object we’ll see that it is configured only for one queue:


    So instead of having servers provisioning up to 600 Personal Sites an hour, we have 300 Personal Sites per farm per hour. Imagine a new corporate portal launch spiced with social features for 20,000 users and about 5% of the users visiting the site at launch. This will give us 1,000 users in the queue and a potential queue time of at least 3:20 hours – not that great in my opinion. Just imagine if you did some marketing of your new launch and a couple more users tried to use the social features…

    Well, the good thing is that we don’t have the web servers hogged with creating thousands of Personal Sites, prohibiting real work.

    Increasing the throughput anyways…

    This section should really be classified under the unsupported things you shouldn’t do, ever, unless you need to… But for a large Enterprise Social deployment using SharePoint 2013 it might be a necessity, to not piss the end users off by having them wait four hours to get started using SharePoint 2013.

    If you don’t want to provision all the Personal Sites in advance, which in most cases is a dumb idea, you could – instead of just relying on the queues to provision the Personal Sites – read the queue yourself and create the personal sites “manually”. You need to find the database where the My Site is hosted and then query the ScheduledWorkItems table for items. Query for rows where the Type is E94A6CAA-B0F5-4897-B489-585CA50C7803 (which is the id of the first interactive instantiation queue; the second queue has its own Type Id – but you will never see that in here :-)). Find those with InternalState equal to 8 – those are the ones waiting to be processed. Using this information you can use PowerShell or similar to spread out the load on other servers to create the personal sites.
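    As a sketch of what reading the queue could look like – once again firmly in unsupported territory, and with server and database names as placeholders:

    # Peek at the pending personal site requests in the My Site host content database
    Invoke-Sqlcmd -ServerInstance "SQL01" -Database "WSS_Content_MySites" -Query "
        SELECT * FROM ScheduledWorkItems
        WHERE Type = 'E94A6CAA-B0F5-4897-B489-585CA50C7803' AND InternalState = 8"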


    To create the actual personal sites you could use the CreatePersonalSite() method of the UserProfile object; this method bypasses the queue and immediately creates the site.
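    A minimal sketch of that approach, with the My Site host URL and the account name as placeholders:

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    # Get a service context from the My Site host and create the site directly
    $site = Get-SPSite "https://mysites.contoso.com"
    $ctx = Get-SPServiceContext $site
    $upm = New-Object Microsoft.Office.Server.UserProfiles.UserProfileManager($ctx)
    $up = $upm.GetUserProfile("CONTOSO\john.doe")
    $up.CreatePersonalSite()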

    Summary

    By now you should be familiar with the process of how SharePoint provisions Personal Sites in SharePoint 2013. It’s a better approach than in previous versions. The idea of having multiple queues and allowing several servers to help out with provisioning the personal sites is good, but currently flawed. Hopefully this will get fixed in upcoming cumulative updates. As it is now it’s not working for a large enterprise rolling out SharePoint Enterprise Social, so you have to be very careful when planning a roll out of Enterprise Social in SharePoint 2013…or use Yammer…

  • SharePoint 2013 and Unified Access Gateway (UAG) 2010 Service Pack 3

    Tags: SharePoint 2013, UAG 2010, Forefront

    Last week the Forefront team finally released Service Pack 3 for Forefront Unified Access Gateway (UAG) 2010. This is a long awaited release for us working with SharePoint 2013 and for those using non-legacy operating systems and browsers. In this post I will show you how to publish SharePoint 2013 Host Named Site Collections through UAG 2010 SP3 and consume the published site using Internet Explorer 10 on Windows 8.

    What’s new in UAG 2010 Service Pack 3

    Before diving into the interesting pieces let’s take a look at some of the new features of Service Pack 3. First of all, to be able to install SP3 you need to have your UAG at SP2 level – SP3 is NOT cumulative.

    The most interesting features in SP3 are support for Windows 8, Windows Phone 8, Windows RT and Internet Explorer 10 (classic and Metro) as clients, support for the Office 2013 client and support for SharePoint 2013 as the published application. For SharePoint 2013 the URL Rule Sets have been modified to handle the new types of URLs and request patterns used in SharePoint 2013, and new endpoint policies have been added for SharePoint 2013. If you’re doing a 2010 to 2013 upgrade, you need to look over any custom URL Rules and policies. There are also a number of improvements with regards to ADFS publishing.

    Get all the nitty gritty details from the KB article: KB2744025.

    Infrastructure

    Ok, now you know what’s new! Let’s start publishing our SharePoint 2013 sites through UAG. First of all, before even installing UAG you need to think about your infrastructure, network, DNS and certificates (there it is again – certificates!). The picture below shows my demo environment. As you can see I have a UAG 2010 SP3 server with dual NICs (actually it has three – but we can safely ignore that). One NIC is connected to the Corporate Network and the other one to the External Network.

    Demo infrastructure

    DNS

    Before starting our publishing and SharePoint 2013 configuration we need to plan our DNS namespace. We need at least one publicly available (that is, a real) domain name. For this demo I will use three:

    • trunk.corp.local – this will be used for my UAG Trunk Portal
    • sp.corp.local – this will be a SharePoint 2013 site
    • teams.sp.corp.local – this will be another SharePoint 2013 site

    In order to make it efficient for the end-users I do prefer to have alignment of URLs internally and externally. In this case it means that I have the latter two domains above registered in the internal DNS as well. There is no need to register the Portal/Trunk domain internally.

    Certificates

    When publishing sensitive stuff such as SharePoint content you most likely want to use HTTP over SSL, HTTPS, to encrypt the data flying across the internet. This is not only a good option externally; you should do it internally as well – even more so now with SharePoint 2013 and the authorization flow over HTTP. Utilizing HTTPS internally also gives you the benefit of URL alignment for internal and external users, and you don’t have to worry about sending an unreachable link. And we don’t have to do SSL termination.

    Certificate request

    What you essentially need is one or two certificates – one from a trusted root authority to use on the UAG server and one for SharePoint, which can be a local CA issued certificate. You could use the same certificate for both, though. You should use wildcard or SAN certificates here – so in the end it comes down to manageability and the cost of certificates.

    In this sample I use one certificate, issued by my local CA. So any external clients not joined to the domain will get a certificate error (unless they have the CA root cert as a trusted root authority – which my external demo machine has). Specifically, the Office clients have issues with untrusted certs.

    The certificate is a SAN certificate covering sp.corp.local, trunk.corp.local and *.sp.corp.local. The certificate is installed on the SharePoint 2013 machine and on the UAG 2010 machine.

    SharePoint with HTTPS

    Before starting the UAG configuration let’s take a quick look at how the SharePoint 2013 farm is configured. In the SharePoint farm there is one Web Application using SSL on port 443. This Web Application is used to host Host Named Site Collections. This installation has two: sp.corp.local and teams.sp.corp.local.

    Web Applications

    Install and update UAG 2010 to SP3

    Installing Unified Access Gateway 2010 and patching it to Service Pack 3 is quite easy. But first of all – UAG 2010 SP3 does not support Windows Server 2012, so you have to dust off your old Windows Server 2008 R2 DVDs and get used to that annoying Start button in the lower left corner.

    Before inserting the DVD though, make sure that the OS is patched – yeah, this machine should be a secure box and should have all the latest security hotfixes etc. Secondly, make sure that you have the two required NICs wired in. Third – configure the NICs. If you’re uncertain on how to configure the NICs there is a good TechNet Wiki article called “Recommended Network Adapter Configuration for Forefront UAG” that is highly recommended as a starting point.

    Installation

    Once all that is set, insert the DVD and follow the wizard to install UAG. If your DVD is a UAG SP1 version you need to apply the following patches, in this order, to get up to Service Pack 3.

    1. UAG Service Pack 1 Update 1
    2. TMG Service Pack 2
    3. UAG Service Pack 2
    4. UAG Service Pack 3

    Once that is done a reboot is not out of order – especially if you fiddled with the NICs and haven’t rebooted since.

     

    Initial UAG configuration

    Once you’re up on UAG Service Pack 3 it’s time to do the first-time configuration of UAG. It’s a three-step wizard that automatically starts when you fire up the UAG Management console the first time – and you can’t bypass it.

    The first step is to connect UAG to your different NICs. As I said, you need two of them – one for the internal network and one for the external network. In the Network Configuration Wizard (Step 1 in the UAG Getting Started Wizard) you have to choose which adapter to connect to the UAG Internal and External zone respectively. A good practice is to name your network adapters (in Windows) and give them descriptive names so they are not called just “Local Area Connection 1” and “Local Area Connection 2” – it’s very easy to make stupid mistakes otherwise…

    The image below shows how I connected my “Corporate Network” NIC to the Internal UAG zone and the “External Network” to the External UAG zone. The machine has a third NIC – which I leave unassigned in this case.

    UAG NIC Settings

    Click Next when you’re done and finish the wizard. Step 2 in the Getting Started Wizard lets you specify the UAG topology – leave it as a single server and finish the wizard. Step 3 is the obligatory – do you want Windows Update and do you want to send usage data to Microsoft – I’ll leave it up to you to figure out the settings here.

    Note: you can always get back to the Getting Started Wizard by choosing Admin > Getting Started in the UAG Management app.

    Now all is set to start publishing some applications.

    Creating a UAG Trunk

    All applications in UAG are published in Trunks. A trunk consists of a portal and one or more applications. Each trunk has a set of settings shared between the different published applications – such as a public host name, certificate, URL Rules, endpoint policies, authentication and authorization settings etc.

    Since we would like to publish an HTTPS SharePoint 2013 application we start with creating a new Trunk under the “HTTPS Connections” node in the UAG Management App.

    New Trunk...

    This will start yet another wizard (YAW). In Step 1 choose to create a Portal trunk. In Step 2 we need to specify the name, public host name and IP of the trunk.

    Trunk Settings

    Step 3 is used to configure how the end-user will authenticate. Click Add to select (at least) one Authentication Server. If you don’t have any servers in the server list you need to add one. You have a lot of options here – but in most cases you will add an Active Directory server and connect it to your forest. You also need a service account for this, with permissions to retrieve user information and change passwords.

    In Step 4 it’s time to select the external certificate. Hopefully you followed the instructions and have already registered the certificate on the UAG box. Choose the appropriate certificate and click next. If you’re like me and using a dummy domain (corp.local) you will get a warning here – it’s safe to ignore, we’re only writing a blog post.

    Certificate Settings

    The fifth step allows you to select what kind of endpoint protection you would like to use. Choose between UAG access policies or NAP. Choose UAG and continue to step 6, where you choose the endpoint policies – keep the default ones and continue. Then review your settings and click finish.

    That’s it, you now have a trunk. All you now need to do is to save and then activate your new configuration. This is done through the cog wheel button in the management console.

    Activation

    You should now be able to test if you can access the trunk from a remote machine. The first time you access the trunk URL you will be asked to download and install the Forefront UAG client components. Depending on your configuration you will have limited access to your applications unless you install them. These components do a number of client security validations and they are also responsible for cleaning up your machine when you log out. Once logged in you should see an “empty” portal saying “No applications defined”.

    If you don’t see anything like this or get errors here – it’s most likely related to certificates. Make sure that you have a correctly issued certificate registered with the Trunk. Also check the certificate chain and make sure that all the certs in the chain are trusted.

    Add an application to the trunk

    Now the time has come to publish our SharePoint 2013 Site Collection. In the Trunk you created, under Applications, click the Add button. YAW! In Step 1 you specify what kind of application you want to publish. You will find SharePoint 2013 under Web – it’s called “Microsoft Office(!) SharePoint Server 2013”.

    SharePoint 2013 Application

    Click next and give the application a name. In Step 3 leave all the endpoint policies as-is – you could easily spend a day fiddling with these if you have the time or need. In the fourth step choose to publish an Application Server. You should choose this whether you have one server or a SharePoint farm behind some kind of load balancer. Step 5 is where you configure the actual SharePoint 2013 application. Add the name of your internal SharePoint 2013 site and specify the public host name. In my case my SharePoint 2013 site is published using HTTPS internally so I need to specify HTTPS/443 here – no SSL termination.

    Web Servers

    Step 6 – Authentication. Choose an authentication server (use the same one as you created and used for the Trunk). Here you also specify how rich clients authenticate with your application – in my case I chose to use both Rich Client integration and Office Forms Based AuthN.

    In the seventh step you configure how the application appears in the portal, and in the eighth and last step you can configure which users actually have access to your application – but just leave the default settings for now, that is, everyone who can log in can also access the application (note that this doesn’t mean they bypass any security in SharePoint).

    Once you’re done with the wizard you should have two applications listed in your trunk – the portal and your newly added SharePoint site:

    Applications

    Now save and activate your new configuration.

    Take it for a test-drive…

    Close down your browsers, if you tested logging in to the Trunk earlier, and browse to the public portal URL once again. If you don’t see your application – log out of the portal and log in again.

    You should see something like this:

    The Portal

    Now, click on the application link and see how beautifully your SharePoint 2013 site is published through UAG. Test how you can drag and drop documents, open them in Office clients, edit lists in Quick Edit mode – it all works just perfectly!

    Woohoooo....

    Publishing the second SharePoint 2013 Site Collection is exactly the same – you add a new Application to the trunk and follow the wizard.

    Summary

    In this post I showed you how to install, patch and configure Forefront Unified Access Gateway (UAG) 2010 Service Pack 3 to be able to publish SharePoint 2013 Web Applications and Site Collections. It’s a pretty straightforward approach but involves some crucial steps. If you previously published SharePoint 2010 applications using UAG you noticed that there is no difference with regards to setting it up. The differences are in the details, specifically in the URL rules applied and a new set of policies.

  • SharePoint 2013: SharePoint Health Score and Throttling deep dive

    Tags: SharePoint 2013

    The SharePoint Health Score was introduced in SharePoint 2010 and plays an even more important role in SharePoint 2013. The Health Score describes the health of a SharePoint server/web application on a scale from 0 to 10, where 0 is the healthiest state. SharePoint automatically starts throttling requests once the Health Score gets too high. The Health Score can be calculated using many parameters, such as memory usage, concurrent requests etc. In this post I will give you some details on how the Health Score works and how you can troubleshoot, use and configure it.

    Note: this article contains some registry changes that never ever should be done on a production server, unless told by Microsoft support...or whatever...

    What is the Health Score?

    Let’s start with some background facts. Stop whining, I know plenty of peeps have already blogged about it but I think it is good to have all the facts in this same blog post.

    The Health Score is an integer between 0 and 10, where 0 represents really good health of the server, or rather the Web Application, and 10 is where you do not want to be. SharePoint continuously calculates the Health Score, and for each and every request the Health Score is sent as an HTTP Response Header called X-SharePointHealthScore.

    The SharePoint Health Score in action!

    This Health Score is used by (or at least should be used by) client applications to determine the health of the web application and decide whether they should stop hammering it or not. If the Health Score reaches 10, SharePoint itself will start to throttle requests (which we’ll talk about in just a bit).

    Worth noticing here: even though this article is written for SharePoint 2013, most of it is valid for SharePoint 2010 as well.
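    A quick way to peek at the header from any client machine is a couple of lines of PowerShell 3.0 (the URL is a placeholder):

    # Read the health score header from any SharePoint page
    $response = Invoke-WebRequest "http://app.contoso.com" -UseDefaultCredentials
    $response.Headers["X-SharePointHealthScore"]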

    How the SharePoint Health Score is calculated

    So, how does SharePoint calculate this Health Score? When a SharePoint Web Application spins up, internally a new thread is created (you can see it in debuggers with the name SPPerformanceInspector). This thread regularly reads performance counters and calculates the Health Score. Each time it calculates the Health Score it will log this to the Trace Logs (Category=Http Throttling, Level=Verbose):

    The current health score for web application is X

    ULS shows you the current health score

    Performance Counters

    The Health Score is calculated from a set of Performance Counters. By default SharePoint 2013 (and SharePoint 2010) uses two performance counters for this:

    • Memory/Available MBytes
    • ASP.NET/Requests Current

    You can easily retrieve these performance counters by using the following cmdlet:

    Get-SPWebApplicationHttpThrottlingMonitor http://app.contoso.com

    You can also use the Set-SPWebApplicationHttpThrottlingMonitor cmdlet to modify the current monitors.
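    For instance, here’s a sketch of adjusting the bucket thresholds for the memory monitor. The bucket values below are illustrative only, and the -IsDESC switch (descending buckets, since more free memory means better health) is how I recall the parameter, so verify with Get-Help before running it:

    Set-SPWebApplicationHttpThrottlingMonitor http://app.contoso.com `
        -Category Memory -Counter "Available MBytes" `
        -HealthScoreBuckets @(1000,500,400,300,200,100,80,60,40,20) -IsDESC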

    Buckets, what is that!?

    When running the cmdlet above, to retrieve the performance counters used to calculate the Health Score, you should have noticed that each associated performance counter has a property called AssociatedHealthScoreCalculator.

    Buckets of what?

    This is an array of values, or a set of Buckets, that SharePoint uses to calculate the Health Score. There are 10 buckets in each of those (you can have fewer). Each bucket represents a Health Score value and corresponds to a lower limit of the performance counter. For instance if you have 24 current ASP.NET requests – you will have a Health Score of 4 and if you have less than 200 MB of RAM available you will have a Health Score of 6. The Health Score of the Web Application is equal to the largest Health Score of all measured performance counters.

    Adding your own monitor

    To add your own performance counters to monitor you need to use the API. Here’s a sample of how to add CPU usage as a monitored object. You should not add this one to any production environment, but it can be used in testing scenarios when you would like to see what happens when the Health Score is high.

    $throttle  = (Get-SPWebApplication http://app.contoso.com).HttpThrottleSettings
    $throttle.AddPerformanceMonitor("Processor", "% Processor Time", "_Total", 10, $true)
    $throttle.Update()

    This will create a new monitor for the Web Application that checks the processor time on the machine. It will only have one bucket, so if the processor time is larger than 10 it will yield a Health Score of 10.

    Refresh Interval

    The Health Scores are by default updated every 5 seconds. You can get (or set) this setting by looking at the HttpThrottleSettings of a Web Application like this:

    $throttle = (Get-SPWebApplication http://app.contoso.com).HttpThrottleSettings
    $throttle.RefreshInterval

    To avoid having spikes in performance counters create high Health Scores, the actual value is calculated as an average over a number of samples, by default 12. This can be configured using the NumberOfSamples property on the HttpThrottleSettings.
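    A minimal sketch of tweaking that (placeholder URL):

    $throttle = (Get-SPWebApplication http://app.contoso.com).HttpThrottleSettings
    # Average over more samples to smooth out short spikes
    $throttle.NumberOfSamples = 24
    $throttle.Update()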

    How SharePoint uses the Health Score

    Now you know a little bit more about how the SharePoint Health Score is calculated. So what is it used for then? SharePoint internally uses it for mainly two things: HTTP throttling and the Request Management Service.

    HTTP Throttling

    Whenever the SharePoint Health Score reaches the value of 10, SharePoint 2013 starts its throttling mechanism to protect the servers. As soon as the Health Score is below 10 again, the server stops the throttling. The actual throttling is divided into two steps – the first and second stage. Once SharePoint receives a Health Score of 10 and enters throttling, it starts with the first stage and requests start to be throttled. After one minute of throttling it enters the second stage and starts throttling even more. What’s being throttled in the different stages depends on a set of throttling classifiers. Out of the box (in SharePoint 2013) there is only one classifier used – which checks if the current request is a search crawler. This classifier starts throttling requests directly in the first stage – all other requests are handled as usual. When the throttling enters the second stage all requests will be throttled (by default; if you have any custom classifiers some requests might be allowed).

    The Server is busy now. Try again later

    Throttled requests are met with the Error message above: “The Server is busy now. Try again later”, and the HTTP Response: “503 Service Unavailable”.

    When entering the second stage of throttling, you will see the following lines in the ULS logs. The logs also include the “Excessive” counters.

    Ouch, we're hogging the machine!

    To the Server is Busy response SharePoint also adds two Response Headers: one called “Retry-After”, which contains a numeric value that should be taken as an indication (by clients) of when a retry operation should be done (it always has the same value as the Health Score refresh interval), and a second one, “SharePointError”, which always has the value “2”.

    By the way, did you notice the nice little error icon in the Busy error? That image is not throttled – why? Well, I can tell you…there’s actually an exclusion for that specific image with regards to resource throttling. Requests to that image will never be throttled.

    HTTP Throttling of POST Requests?

    There’s this myth out there that POST and PUT requests are never throttled. Well, that is too good to be true! My tests and reverse engineering skills show that all requests are throttled in SharePoint 2013 (I would be glad to be proven wrong :). Fortunately Office 2010 and later have the “Upload Center”, which will retry any save operations and eventually succeed once throttling has stopped.

    But, using one of the built-in classifiers we can actually make sure that our precious documents are saved regardless of HTTP Throttling, and this is how it’s achieved:

    $classifier = New-Object Microsoft.SharePoint.Utilities.SPHttpUserAgentAndMethodClassifier `
        -Args "","POST","HttpMethodMatch","Never"
    $throttle = (Get-SPWebApplication http://app.contoso.com).HttpThrottleSettings
    $throttle.AddThrottleClassifier($classifier)
    $throttle.Update()

    First of all we create a new classifier of the type SPHttpUserAgentAndMethodClassifier. When creating that object we pass in “POST” as the HTTP method, make sure it’s using HttpMethodMatch to match on the method :), and finally tell it to never throttle these requests. We add this classifier object to the throttle settings of the web application and badabing – POST operations are no longer throttled.

    Turning off HTTP Throttling

    On or Off?

    Ok, so you’re fed up with this throttling and want to turn it off. It’s an easy operation and you can do it from Central Administration or from PowerShell. In PowerShell, just update the PerformThrottle property of the HttpThrottleSettings of the web application. In Central Administration you select the Web Application for which you want to turn off throttling, then select General Settings > Resource Throttling, and at the bottom of the dialog you will find HTTP Request Monitoring and Throttling.
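    The PowerShell variant is just a few lines (placeholder URL):

    # Turn off HTTP throttling for the web application
    $throttle = (Get-SPWebApplication http://app.contoso.com).HttpThrottleSettings
    $throttle.PerformThrottle = $false
    $throttle.Update()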

    HTTP Throttling and automagic garbage collection

    The HTTP Throttling is actually quite a smart implementation, and internally it tries to fix your machine if throttling occurs. One important thing here: if you start fiddling with the different performance counters used for calculating the Health Score, make sure that you do not remove the monitor that watches memory usage. That one plays a special role in the throttling scenario. If that specific monitor is considered an “Excessive” monitor, which means that it has a Health Score of 10, it will actually force a Garbage Collection.

    Request Management

    SharePoint 2013 introduced the new Request Management Service, which is used to “redirect” requests to specific servers based on rules and/or health. When using health based rules, it is the SharePoint Health Score that is used to determine the health of a server. Whenever a machine receives a request it will update the Health Score in the Request Management service, which makes the Health Based Request Management a very efficient way to “load balance” the requests for optimal performance.

    For more information about Request Management, read Spencer Harbar’s series about it.

    How to use the SharePoint Health Score

    Whenever you’re building a client that interacts with SharePoint 2013 – for instance a SharePoint App, a Windows 8 app or a Windows Phone app – you should always take the Health Score into consideration. You will receive the X-SharePointHealthScore response header for each and every CSOM, REST or web service call to SharePoint. If the value is 10 (and SharePoint starts throttling) you will most likely not be able to get any data. If your application is constantly polling SharePoint for information, consider increasing the delay between your calls when the Health Score rises above a predetermined number.

    Andrew Connell has a small code sample on how to leverage this in a Silverlight app.
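
    If you just want to see the header in action, here is a minimal PowerShell sketch; the site URL, the threshold of 8 and the sleep interval are all placeholder assumptions:

    # Read the X-SharePointHealthScore header from a simple REST call
    $response = Invoke-WebRequest -Uri "http://app.contoso.com/_api/web" -UseDefaultCredentials
    $healthScore = [int]$response.Headers["X-SharePointHealthScore"]
    if ($healthScore -ge 8) {
        # Back off: wait a while before issuing the next request
        Start-Sleep -Seconds 10
    }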

    The SPPerformanceInspector

    You could also use the Health Score in traditional SharePoint farm solutions (timer jobs, or other long-running or resource-consuming operations) using the SPPerformanceInspector object. Here’s a quick sample of how you can show the data in a Web Part:

    // Get the performance inspector for the current Web Application
    SPPerformanceInspector pi = 
        SPPerformanceInspector.GetInspector(SPContext.Current.Site.WebApplication);
    
    // Print the current Health Score and whether this machine is throttling
    this.Controls.Add(new LiteralControl(
        String.Format("Current Health Score: {0}<br/>", pi.HealthScore)));
    this.Controls.Add(new LiteralControl( 
        String.Format("Is throttling: {0}", pi.IsInThrottling())));

    This Web Part retrieves the performance inspector object and prints out the current Health Score for the current Web Application on the current machine. The Web Part also prints out whether throttling is currently going on (which you should never see anyway – if the machine were throttling, your request would have been throttled!).

    How to debug the SharePoint Health Score

    To wrap this little post up I will give you two great debugging tips – they both involve editing the registry (you’re warned!). The first: if you’re building a remote application that communicates with SharePoint and want to test how it behaves at different Health Scores, there’s a neat trick to give a server a constant Health Score value. You do this by adding a new DWORD value called ServerHealthScore to the HKLM\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\15.0\WSS hive and then doing an IISRESET. You can give this value anything from 0 to 10 and the response header will always show that value. It will not invoke actual throttling though.

    ServerHealthScore regkey hack!
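
    A minimal PowerShell sketch of the same hack – the value 5 is just an example, and this belongs on test boxes only:

    # Pin the reported Health Score to a constant value
    New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\15.0\WSS" `
        -Name ServerHealthScore -PropertyType DWord -Value 5
    iisreset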

    It will of course log a warning in the ULS telling you how bad this really is:

    You've been warned!

    The second tip is a cool debugging feature, and it’s also a small regkey hack. In the same hive as above, create a new DWORD value called DebugThrottle and give it the value 1.

    DebugThrottle regkey hack!
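
    The same PowerShell pattern as above works here too (test boxes only!):

    # Enable the throttling debug dashboard
    New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\15.0\WSS" `
        -Name DebugThrottle -PropertyType DWord -Value 1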

    Once again you need to do an IISRESET. After that you can browse to any page and then append debugthrottle to the address, for instance: http://site.contoso.com/debugthrottle. This will give you a sweet looking dashboard with all the details you need about the health of the server/web application. You can see the monitors and their Health Score history, the throttling stage, your classifiers etc.

    Classy design!

    Summary

    There you have it! All you need to know about the SharePoint Health Score and HTTP Throttling. As you can see it’s a really nice feature when used correctly – by you, by third party vendors (I’m pretty sure 99% of the vendors do not care about this though) and by SharePoint itself. Just one final word of warning: if you’re fiddling with monitors and classifiers, make sure that you test them thoroughly – it’s really easy to get things funked up here…

  • Office Web Apps 2013: Securing your WAC farm

    Tags: WAC Server, Office Web Apps, SharePoint 2013, Security

    With this new wave of SharePoint, the Office Web Apps Server (WAC – I don’t like the OWA acronym, that’s something else in my opinion) is its own server product, implementing the WOPI client protocol, which allows it to retrieve documents from SharePoint on behalf of the user. Documents flow from the WOPI servers (SharePoint, Lync, Exchange etc.) to the Office Web Apps Server – this means that potentially confidential information is transferred out of the SharePoint environment and stored/cached on another server. This could result in unnecessary information leakage and compromise enterprise security.

    In this post I will walk through a number of steps that you can take to properly secure your Office Web Apps 2013 farm. You should seriously consider implementing most of these methods.

    Note: this post focuses on the Office Web Apps Server and not a WOPI client in general (but if you’re building your own you should consider security as well!).

    The WOPI protocol specification and security

    Note: I will not cover how WOPI clients and servers implement the server-to-server authentication and authorization.

    WAC runs as Local System

    To start with, it is very important to know that Office Web Apps Server 2013 runs as Local System and Network Service on the machine it is installed on. There is no service account or anything! This means that you cannot protect your systems using dedicated accounts etc., like you do with SharePoint, SQL Server and other applications.

    The image below shows the Office Web Apps Windows Service, which runs as Local System.

    Local System

    And this image shows some of the application pools in IIS on an Office Web Apps machine.

    Network Service

    The advantage of using these local accounts is that it makes installation and configuration easier. But it is very important that you are aware of this configuration.

    SSL is a requirement!

    Exposing the Office Web Apps Server over HTTPS should be a requirement in my opinion – there is no reason not to. Having it on HTTP will only cause trouble for you; for instance, if your SharePoint uses HTTPS you will not be able to render the iframe containing the document (aka the WOPI Frame), since you’re not allowed to show HTTP content in an HTTPS environment. But first and foremost, you’re sending data in clear text.

    So what about SharePoint on HTTP then? Well, if you’re using SharePoint 2013 you should seriously consider running that over HTTPS as well – that IS a best practice. SharePoint 2013 leverages several technologies that send tokens and credentials over the wire, OAuth for instance, so in order to have a secure environment make sure you use HTTPS for SharePoint as well. If you are running SharePoint on HTTP you must fiddle with the security settings in SharePoint to allow OAuth over HTTP – and this is not a good thing (see the sketch below).
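
    For completeness, this is the setting that the OAuth-over-HTTP fiddling refers to – a sketch for lab environments only, never production:

    # Allow OAuth over HTTP (lab/test environments only!)
    $sts = Get-SPSecurityTokenServiceConfig
    $sts.AllowOAuthOverHttp = $true
    $sts.Update()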

    Certificates are king!

    Any WAC farm running on SSL must have a certificate for the HTTPS endpoint. You can use a self-signed certificate, issue one from a domain CA or buy one. When you’re creating the WAC farm, using New-OfficeWebAppsFarm, you can/should specify the certificate.
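
    A minimal sketch of creating an SSL-enabled WAC farm; the URL and the certificate friendly name are placeholders, and the certificate must already be installed in the machine certificate store:

    # Create a new Office Web Apps farm bound to an SSL certificate
    New-OfficeWebAppsFarm -InternalUrl "https://wffarm.corp.local" `
        -CertificateName "WAC Certificate" -EditingEnabled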

    For any SharePoint, WAC or even SQL installation nowadays, certificates are more and more important. If you’re on the verge of deploying these in your organization you should consider deploying a domain CA – which is not a lightweight task.

    Securing the communication using IPSec

    If you for some reason do not run HTTPS on SharePoint and/or WAC, you could consider implementing IPsec. Unfortunately there is no button in Control Panel that says “Use IPsec”; it is something that requires careful planning and testing, so going SSL might be the easier route. But consider the scenario where you have an Internet-facing web site that leverages WAC over HTTP – then you should consider using IPsec for the communication between SharePoint and the Office Web Apps Server.

    Firewall considerations and requirements

    When setting up your Office Web Apps farm you should also configure the firewall on the WAC machines. Office Web Apps uses four different ports: 80 and 443 for HTTP and HTTPS, used by the end-users and for the WOPI server/client communication, and internally ports 809 (HTTP) and 810 (HTTPS) for communication between the WAC machines. I’ve only seen 809 in use, which is the default; I have not found a way to configure WAC to use port 810 (although internally WAC has a switch for it), and if you do find one it’s likely unsupported. What’s sent over the admin channel (809) is mainly health and configuration information for the WAC farm, but it would be nice to be able to secure this channel as well (IPsec!).

    When installing WAC, the Windows firewall is configured to allow incoming TCP connections on ports 80, 443 and 809.

    WAC Windows Firewall Rule

    As always, it is a good practice to evaluate these default rules, and if you’re not using port 80, disable that rule. For port 809 it might also be a good practice to only allow incoming connections if they are secure (i.e. implement IPsec).
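
    A hedged sketch using the Windows Server 2012 firewall cmdlets – the rule display names are assumptions, so list them first and adjust:

    # List the firewall rules the WAC installer created
    Get-NetFirewallRule -DisplayName "*Office Web Apps*" |
        Format-Table DisplayName, Enabled, Direction
    # If the farm is HTTPS only, disable the inbound HTTP rule
    # ("Office Web Apps Server HTTP" is a hypothetical rule name)
    Disable-NetFirewallRule -DisplayName "Office Web Apps Server HTTP"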

    Even more secure...

    Preventing rogue machines

    So far we’ve been talking about how to secure information being transmitted to and from the Office Web Apps farm. Let’s take a look at Office Web Apps farm security from another angle. Joining a new WAC machine to an Office Web Apps farm can be quite easy. The only thing you need is local administrator access on the WAC machine that is the master (Get-OfficeWebAppsMachine gives you the master machine). Depending on how your (virtual) metal is hosted this might be a problem – too many sysadmins out there have too many permissions. If you have this access you can easily join a rogue machine to the WAC farm and take control over it, without the users/clients knowing anything about it.

    There are a couple of methods you can and should use to protect the WAC farm. And the error messages below can also be a good troubleshooting reference…

    Master Machine Local Administrator

    If the account trying to add the new WAC machine does not have local admin access on the machine specified when joining the WAC farm, you will simply get an “Access is denied”.

    New-OfficeWebAppsMachine : Access is denied

    As a side note: if you’re not running the cmdlet with elevated privileges you will get a “You must be authenticated as a machine administrator in order to manage Office Web Apps Server”.

    Using the Firewall

    I already mentioned the firewall. If the machine joining the WAC farm cannot reach the HTTP 809 channel, New-OfficeWebAppsMachine will throw a “The destination is unreachable” error.

    The destination is unreachable error

    This is a fairly easy way to protect the farm, but if the user has local admin access on the master machine it can easily be circumvented.

    Certificate permissions

    If you’re using a domain CA, make sure that you protect the private key using ACLs, or if you’re buying a certificate make sure to store the certificate’s private key in a secure location. If you specified a certificate when the Office Web Apps farm was created – which you should have – then the user cannot join a new machine, regardless of local machine admin rights, since the user cannot install the certificate locally. The error message shown is “Office Web Apps was unable to find the specified certificate”.

    Office Web Apps was unable to find the specified certificate

    Using an Organizational Unit in Active Directory

    The way Microsoft recommends securing your WAC farm is to have a dedicated OU in Active Directory where the computer accounts for the WAC farm are located. When joining a new machine to the farm, the cmdlet verifies that the computer account is in the OU specified by the WAC configuration. If it’s not, you’ll see the “The current machine is not a member of the FarmOU” error.

    The current machine is not a member of the FarmOU

    The Farm OU is specified when creating a new WAC farm or by using the Set-OfficeWebAppsFarm cmdlet. The only caveat with this OU is that it has to be a top level OU in Active Directory. Creating that OU in your or your customer’s AD might cause some headaches, but if you want to use the FarmOU as a protection method for your farm it has to be this way. That’s the way it is designed!

    Also, having all the WAC servers in an OU gives you other benefits, such as using Group Policies to control the WAC servers.
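
    A minimal sketch of enabling the OU check on an existing farm; "WACServers" is a placeholder and, as noted above, it must be a top level OU:

    # Require WAC machines to be located in the given OU
    Set-OfficeWebAppsFarm -FarmOU "WACServers"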

    Limit the WOPI Server and host access

    Now we’ve seen how to protect the farm from rogue machines and data tampering. Another issue with the WAC farm in its default configuration is that any WOPI server can use it. That might not be a big problem for most internal installations, but what if you’ve designed a WAC farm and someone with a huge SharePoint collaboration implementation connects to it? That could surely bring it down. And if you’re exposing your Office Web Apps farm on the Internet, anyone on the Internet can potentially use it.

    For this purpose there’s a cmdlet called New-OfficeWebAppsHost. This cmdlet allows you to specify host names that will be accepted by the WAC farm, and it interprets any domain with a wildcard. For instance, the following cmdlet will allow all WOPI servers on contoso.com (www.contoso.com, extranet.contoso.com etc.) to contact the WAC farm:

    New-OfficeWebAppsHost -Domain "contoso.com"

    Do not forget to do this!!
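
    You can verify and prune the allow list afterwards with the companion cmdlets:

    # List the host domains currently allowed to use the farm
    Get-OfficeWebAppsHost
    # Remove a domain from the allow list if needed
    Remove-OfficeWebAppsHost -Domain "contoso.com"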

    Summary

    You’ve seen quite a few ways to protect your WAC farm from information leakage, rogue machines, undesired excessive usage etc. Using HTTPS and certificates together with a dedicated OU in Active Directory will give you the most secure WAC farm. Hopefully you also understand a bit more about how Office Web Apps Server works internally. It’s a magnificent and simple server product, but it should be handled with care.

  • Sharing a Workflow Manager 1.0 farm between multiple SharePoint 2013 farms

    Tags: SharePoint 2013, Workflow Manager

    SharePoint 2013 introduces a whole set of new and improved features. One of the things that is both new and vastly improved is the workflow platform. Workflow is no longer part of the SharePoint core infrastructure but a separate server product, even though ye olde workflow platform, 2010 style, is still in the product for backwards compatibility. SharePoint 2013 leverages the Azure-built service called Workflow Manager 1.0 (not the cloud version but a local, on-premises installation).

    The Workflow Manager 1.0 server application should be installed on a separate set of servers – not on the SharePoint servers (remember: just because you can doesn’t mean you should). For some organizations this means more investments in physical and virtual hardware. So, what happens if you have several farms – do you need several Workflow farms? That can be expensive! Well, as always in the SharePoint world: it depends! If you come to the conclusion that you could use the same Workflow farm and hardware for managing all your SharePoint farms’ workflows, then you can actually share a single Workflow Manager 1.0 farm. The usual disclaimers apply – it’s up to you and your organization to decide on designing your workflow infrastructure for this scenario, and remember that this can make backup/restore and DR a bit problematic, since your farms are now sharing the same workflow databases etc.

    Let me show you a multi-tenant Workflow farm, how it works and how to set it up properly…

    Workflow Service

    Multi-tenancy in Workflow Manager 1.0

    Azure Workflow Manager 1.0 is multi-tenant aware through Scopes. “A scope is a named and securable container for Activities, Workflows, Instances, configuration and child Scopes.” – that is directly stolen from the Workflow Manager documentation. Everything that runs inside Workflow Manager belongs to a Scope, and by default a Root Scope is created when you set up the Workflow Manager farm. You can see details about the Root Scope by navigating to the HTTP/HTTPS port of your farm. The URL should look something like this: http://wffarm.corp.local:12291 for HTTP and https://wffarm.corp.local:12290 for HTTPS.

    Workflow Manager Root Scope

    Connecting SharePoint 2013 to the Workflow Manager 1.0 farm

    I’m not going to dive into how to configure the Workflow Manager farm; it’s a wizard or PowerShell thingie and it’s very well documented – you can’t fail! Once you have the Workflow farm up and running you MUST install the Workflow Client 1.0 components on all the web servers in your SharePoint 2013 farm (you can get the bits using the Web Platform Installer).

    All you have to do then is to register the Workflow Manager 1.0 farm with the SharePoint 2013 farm using the Register-SPWorkflowService cmdlet. This will register the SharePoint 2013 farm with the Workflow Manager and vice versa and it will also create the Workflow Service Application Proxy in SharePoint.

    You execute the cmdlet like below, passing in a URL to a site in your SharePoint 2013 farm (any site will do) and the URL to your Workflow Manager farm (if you’re using HTTP you also need to add the AllowOAuthHttp switch).

    Register-SPWorkflowService `
        -SPSite https://teams.corp.local `
        -WorkflowHostUri https://wffarm.corp.local:12290

    Not only will this cmdlet do the things mentioned above, it will also create a new Scope in the Workflow Manager farm (under the Root Scope) called SharePoint. You can verify this by navigating to the Workflow Manager URL and appending /SharePoint, like this: https://wffarm.corp.local:12290/SharePoint. In the XML returned you can see the URL of the SharePoint 2013 metadata endpoint (https://teams.corp.local/_layouts/15/metadata/json/1) under the SecurityConfigurations element. Now, that was one SharePoint 2013 farm connected to the Workflow Manager farm.
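
    A quick way to inspect the scope from PowerShell – a small sketch using the URLs from the example above, assuming your credentials are accepted by the endpoint:

    # Fetch the scope metadata and show the raw XML
    $scope = Invoke-WebRequest -Uri "https://wffarm.corp.local:12290/SharePoint" -UseDefaultCredentials
    $scope.Content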

    So, let’s jump on over to our second SharePoint 2013 farm, which we have decided should use the same Workflow farm! If we run the same cmdlet in that farm (using a different SPSite parameter URL of course) we get a nice error that says “An existing scope named ‘SharePoint’ already exists in the workflow server”.

    An existing scope named 'SharePoint' already exists in the workflow server

    You see, this scope is already taken by another farm (the one we previously connected)! The clever one might think: oh, I’ll just use the Force parameter of that cmdlet and override this. Well, if you’re that smart you will hijack the Scope from the other farm – the Scope will be recreated and the other farm will end up with a non-functional Workflow Service (its security configuration will now point to the SharePoint farm that hijacked the Scope).

    Note: the Force flag might be useful if you really want to re-create the Workflow connection using the same Scope name.

    Instead you should use the ScopeName parameter of the Register-SPWorkflowService cmdlet. That parameter will create a new Scope in the Workflow farm, and with that an isolated container for the new SharePoint farm. So on our second farm we run this PowerShell cmdlet:

    Register-SPWorkflowService `
        -SPSite https://farmb.corp.local `
        -WorkflowHostUri https://wffarm.corp.local:12290 `
        -ScopeName FarmB

    Once this command has been issued, your Workflow Manager farm will have two child Scopes under the Root Scope: SharePoint (the first farm we connected) and FarmB (the second one, created with the ScopeName parameter). You can verify the second scope by browsing to the Workflow farm and appending the Scope name to the URL, like this: https://wffarm.corp.local:12290/FarmB.

    Now you have two SharePoint 2013 farms using the same Workflow Manager 1.0 farm, and each SharePoint farm’s workflows are isolated using the Scopes in Workflow Manager. Cool!

    Summary

    I’ve just shown you how easy it actually is to share a Workflow Manager 1.0 farm between several SharePoint 2013 farms. Now it’s up to you to decide whether this is an appropriate way for your organization to build your infrastructure or not. Remember to plan your resources and your DR strategy!

