Contents tagged with Security

  • SharePoint Framework and Microsoft Graph access – convenient but be VERY careful

    Tags: SharePoint Framework, Office 365, Azure AD, Microsoft Graph, SharePoint Online, Security

    SharePoint Framework (SPFx) is a fantastic development model on top of (modern) SharePoint for user interface extensibility, and it has evolved tremendously over the last year since it became generally available. The framework is based on JavaScript extensibility done in a controlled manner, and compared to the older JavaScript injection mechanisms we used to extend (classic) SharePoint, it comes with a lot of power.

    Using SharePoint Framework our JavaScript has access to the whole DOM in the browser, meaning that we can do essentially what we want with the user interface. However, of course, we shouldn't: only certain parts of the DOM are allowed/supported for modification. These areas are the custom client-side Web Parts we build (that squared box) and specific placeholders (currently only two of them: top and bottom). For me that's fine (although there's a need for some more placeholders), but if you want to destroy the UX it is all up to you.

    In our client-side solutions we can call out to web services, fetch data, present it to the user and even allow the end-user to manipulate that data. For a while now we've had limited access to Microsoft Graph, where Microsoft has done the auth plumbing for us, and the latest version (1.4.1) adds a whole new set of APIs to call both Microsoft Graph, with our own specified permission scopes, and custom web services protected by Azure AD. Very convenient, and you can build some fantastic (demo) solutions with this to show the power of UX extensibility in SharePoint Online.

    However, there are some serious security disadvantages that you probably don't think about, or even care about, if you're a small business, a happy hacker or just want to build stuff. For me, designing and building solutions for larger enterprises, this scares me and my clients… a lot!


    Some perspective

    Let's take a step back and think about JavaScript injection (essentially SPFx is JavaScript injection – just with a fancier name and in a somewhat controlled way). These are all very basic things, but judging from recent "social conversations" it seems like "people" forget them.

    JavaScript running on a web page has all the power that an end-user has – one could say even more power, since it can do stuff the user doesn't see or isn't aware of. I already mentioned that JavaScript can modify the DOM – like hiding, adding or moving elements around. But it can also execute code that is not necessarily visible. A good example is using Microsoft Application Insights to log the behavior of the user or the application – which seems like a good thing in most cases (although I don't think that many users of AppInsights understand how GDPR affects this – but that's another discussion). We could also use JavaScript to call web services, use the information we have on the page to manipulate the state of the page, and send data from our page to another page. All without the user noticing it. For good or for bad… let's come back to the latter in a minute or so.

    No Script sites and the NoScript flag

    Before SharePoint Framework, Microsoft introduced "No Script" sites to mitigate the issue of arbitrary JavaScript running in sites and pages. All modern Team sites, based on Office 365 Groups, and OneDrive sites are No Script sites. As an admin you can control the behavior of newly created SharePoint sites using the settings in the SharePoint admin center (under settings):

    Bad settings for this...

    Depending on when your tenant was created (before or after the addition of this setting) your default settings may be different. My recommendation is, of course, to Prevent users from running custom scripts, to ensure that you don’t get some rogue scripts in there (see below).

    This setting can also be set on individual sites using the following SharePoint Online PowerShell command:

     Set-SPOSite -Identity https://contoso.sharepoint.com/sites/yoursite -DenyAddAndCustomizePages $true

    More information here: “Allow or prevent custom script”.

    This setting on a site not only affects JavaScript injections, it also prohibits the use of sandboxed solutions and the use of SharePoint Designer – all good things!

    Script Editor Web Part – the wolf in sheep's clothing

    “Our favorite” SharePoint extensibility mechanism, specifically for the citizen developers (or whatever you prefer calling them), has been the Script Editor Web Part (SEWP). As an editor of a site in SharePoint we can just drag the SEWP onto a page and add arbitrary scripts to get the job done.

    The aforementioned No Script setting will make the Script Editor Web Part unavailable on these sites.

    The Script Editor Web Part does not exist in modern SharePoint. The whole idea with modern SharePoint and SPFx is that we (admins/editors) should have a controlled and managed way to add customizations to a site – and of course SEWP is on a collision course with that. Having that option would violate the whole idea.

    You can read much more about this in the SharePoint Patterns and Practices article called “Migrate existing Script Editor Web Part customizations to the SharePoint Framework”.

    But there is now a “modern” version of the Script Editor Web Part available as part of the SharePoint Patterns and Practices samples repository (which is a bit of a shocker to me). This solution bypasses the whole idea of SharePoint Framework – controlled and governed JavaScript in SharePoint Online. And of course it is being used by a lot of users/tenants – since it's simple and it works. If you do use this solution, you really should continue reading…

    SharePoint Framework and Microsoft Graph = power?

    How does this relate to SharePoint Framework then? As I said, with SharePoint Framework we now have a very easy way to access the Microsoft Graph (and other Azure AD secured end-points) with pre-consented permission scopes. As a developer when you build a SharePoint Framework solution you can ask to be granted permissions to the Microsoft Graph and other resources. The admin grants these permissions in the new SharePoint Online admin center under API management.

    API Management in new SPO admin center
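    The permission request itself is declared by the developer in the solution's package-solution.json. A minimal sketch (the solution name and scope below are illustrative, not from a real solution):

```json
{
  "solution": {
    "name": "calendar-webpart-client-side-solution",
    "webApiPermissionRequests": [
      {
        "resource": "Microsoft Graph",
        "scope": "Calendars.Read"
      }
    ]
  }
}
```

    When the package is uploaded to the app catalog, these requests show up for the admin to approve under API management.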

    For instance, you might want to build a Web Part that shows e-mail or the calendar on your portal page, or you might want access to read and write tasks. The possibilities are endless and that is great – or is it?

    I think this is a huge area of concern. Imagine these user stories:

    “As a user I would like to see my calendar events on my Intranet” – a pretty common request, I would say. This requires the SPFx Web Part developer to ask for permission to read the user's calendar.

    “As a user I would like to see and be able to update my Planner tasks” – another very common request. This requires the SPFx Web Part developer to ask for Read and Write access to all Groups (that’s just how it is…).

    Both these scenarios open up your SharePoint Online environment to malicious attacks in a very severe way. Of course the actual permissions have to be approved by an admin – but how many admins really understand what's happening when the business cries “we need this feature”?

    Note: this is not just a SharePoint Framework issue, but SPFx makes it so easy that you probably don't see the forest for the trees. The same is true for many of these “Intranet-in-a-box” vendors that have built similar services to access mail/calendars etc. from the Graph. It's still JavaScript, and if you allow a single user to add a script it can be misused.

    Rogue scripts

    Once you have granted permissions to Microsoft Graph, triggered by a single request from that fancy calendar Web Part, all other scripts in the whole tenant have those permissions. So your seemingly harmless Web Part has suddenly exposed your calendar for reading to any other Web Part. Assume the admin now installs a weather Web Part (downloaded or acquired from a third party). This weather Web Part is also allowed to read the user's e-mail, even though it never requested it. And if that vendor goes rogue, or already is, they can, without the user knowing, send all the calendar or e-mail details away to a remote server while just displaying the weather. This requires some social engineering of course, to make the admin install the Web Part.

    But what about allowing the modern Script Editor Web Part? Say you piss an employee off… with just some simple JavaScript knowledge that user can create a sweet-looking cat-of-the-day web part, or even a hidden one, with the modern Script Editor Web Part. Then send the boss to that page, and read all the boss's calendar events or e-mails, sending them to some random location somewhere…

    You still think this is a good idea?


    And what about the second user story, where we need full read and write access to all the groups, just to be able to manipulate tasks in Planner? There's not much you can do if you're building this web part – you are opening up so many more possibilities for “working with” groups and their associated features. This is not a SharePoint Framework thing, but a drawback in how Microsoft Graph works with permissions and the lack of contextual or fine-grained scopes in Azure AD. The same goes for reading/writing data from SharePoint sites – Azure AD/Microsoft Graph cannot restrict you to a single site or list; you have access to all of them.

    Remember how SharePoint Add-ins have Site Collection or list scoped permissions? I guess you all remember how we complained back then as well. Those were some sweet days, and we really want those features back. Well, we still have them – SharePoint Add-ins are probably still the best way to protect the users and your IP…

    As I stated above, of course all SPFx solutions have to be added to the tenant app catalog – but we also have the option of a site collection scoped app catalog. So that's another vector for insertion of seemingly nice solutions that can take advantage of the permissions you granted on a tenant level.

    The grey area between modern and classic sites

    Currently most SharePoint Online tenants are in a transition period between classic and modern sites. That is, they have built their SharePoint Online environment the “classic way”, most often requiring script-enabled sites. And now they want to transition to modern sites, without these scripting capabilities. Should they just add the new modern Script Editor Web Part, or should they turn off scripting for all sites?

    If you turn this off I can almost guarantee that a lot of your sites will be useless. And in many cases your whole Intranet – specifically this happens with many of the “Intranet-in-a-box” vendor solutions. So be careful.

    So, what should I do?

    If you still think it is very valuable to build solutions with SPFx and Microsoft Graph the first thing you MUST do is to ensure that there is not a single site in your tenant with scripting enabled. You can do a quick check for this with this sample PowerShell command:

     Get-SPOSite -Limit All |
      Where-Object { $_.DenyAddAndCustomizePages -eq 'Disabled' }

    This will list all the site collections which still allow JavaScript execution using the Script Editor Web Part for instance. If there’s a single site in here, stop what you’re doing and don’t even consider granting SPFx any permissions.

    I wish we had this kind of notification and warning on the permission grant page in the new SharePoint admin center, to make it very obvious for admins.

    Secondly, be very thoughtful about which solutions you install in your app catalog. Do you know the vendor? Do you know their code? Is their code hosted in a vendor CDN (a warning sign – they can update it without you knowing)? Do you have multiple vendors? Who has access to do this?

    So, you really need to do a proper due diligence of the code you let into your SharePoint Online tenant.

    A note on the CDN issue; when you add a SPFx solution to the app catalog all “registered” external script locations are listed. But this is not a guarantee. It’s only those that are registered in the manifest. As a developer you can request other resources dynamically without having them show up on this screen.


    I hope that this gave you the chills, and that you start reflecting on these seemingly harmless weather web parts that you install. You as an admin, developer or purchaser of SharePoint Online customizations MUST think this over.

    1. Do we have a plan to move from script-enabled sites to ALL sites with custom scripts disabled?
    2. Do we have the knowledge and skill to understand what our developers and vendors are adding to our SharePoint Online tenant?
    3. Do we understand the specific permission requirements needed?

    I also hope (and know) that the SharePoint Framework product team listens; this is an area which needs to be addressed. We want to build these nicely integrated solutions, but we cannot do it at the expense of security. And it's NOT about security in SharePoint or SharePoint Framework – it is about how web browsers work. What we need is:

    1. Visibility – make it visible to the admins what is really happening in their tenant; script enabled sites, SEWP instances etc.
    2. Isolation – we need to be able to isolate our web parts, so that they and their permission scopes cannot be intercepted or misused. In the web world I guess that iframes are the only solution.
    3. Granularity – Azure AD permission granularity. It's not sustainable to give your applications these broad permissions (such as Group.ReadWrite.All). I want to give my app access to write to one Planner plan only, not to all groups.

    Thanks for reaching the end! I oversimplified some parts, but if you have any concerns, questions or issues with my thinking – don’t hesitate to debate, confront, question or agree with this post.

  • SharePoint 2013: How to refresh the Request Digest value in JavaScript

    Tags: SharePoint 2013, JSOM, Security


    SharePoint 2013 (and previous versions) uses a client-side “token” to validate posts back to SharePoint, to prevent attacks where the user might be tricked into posting data back to the server. This token is known by many names: form digest, message digest or request digest. The token is unique to a user and a site and is only valid for a (configurable) limited time.

    When building apps or customizations on top of SharePoint, especially using patterns such as Single Page Applications (SPA) or frameworks such as knockout.js, it is very common to see errors because the token has been invalidated: the page has not been reloaded and the token has timed out. The purpose of this article is to show you how to refresh this form digest using JavaScript.

    How to use the Request Digest token

    When working with CSOM or REST you need to add the Request Digest token to your requests. Well, with CSOM (JSOM) it is done for you under the hood, but when you handcraft your REST queries you need to add it manually. It usually looks something like this:

        $.ajax({
            url: _spPageContextInfo.siteAbsoluteUrl + "/_api/web/...",
            method: "POST",
            headers: {
                "Accept": "application/json; odata=verbose",
                "X-RequestDigest": $('#__REQUESTDIGEST').val()
            },
            success: function (data) {},
            error: function (xhr, errorCode, errorMessage) {}
        });

    On line #6 we add the “X-RequestDigest” header, with the value of the hidden input field “__REQUESTDIGEST” – which is the form digest value. I will not dig deeper into this since it’s part of basically every SharePoint POST/REST sample on the interwebs.

    But what happens if you've built a page (an SPA) that you don't reload, and users work on the page longer than the token timeout (default 30 minutes)? Then they will get exceptions like this:

    HTTP/1.1 403 FORBIDDEN
    {"error":{"code":"-2130575252, Microsoft.SharePoint.SPException","message":{
    "value":"The security validation for this page is invalid and might be corrupted. 
    Please use your web browser's Back button to try your operation again."}}}

    How to refresh the token

    So how do we get a new and updated token? There are multiple ways to refresh the token, or retrieve a new one; these are the two most common. The first is to use ye olde web services: call into /_vti_bin/sites.asmx and use the GetUpdatedFormDigest method. For this you have to create a SOAP message, parse the response and retrieve the updated token. You can then either pass the new token with your subsequent requests or, even better, update the hidden __REQUESTDIGEST field. The second is to use the newer REST endpoint: POST to /_api/contextinfo and parse the response, which can be either XML or JSON. This is how to do it the JSON way:

        $.ajax({
            url: _spPageContextInfo.webAbsoluteUrl + "/_api/contextinfo",
            method: "POST",
            headers: { "Accept": "application/json; odata=verbose" },
            success: function (data) {
                $('#__REQUESTDIGEST').val(data.d.GetContextWebInformation.FormDigestValue);
            },
            error: function (xhr, errorCode, errorMessage) {}
        });

    It's also worth noting that for every REST query SharePoint actually generates a new form digest, which is sent back as a response header (X-RequestDigest), so you could always read that response header and update the form digest.
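    That response-header approach can be sketched as a small helper (the function name is my own, not a SharePoint API):

```javascript
// Copy the refreshed digest from a completed request's response header
// back into the hidden __REQUESTDIGEST field, so subsequent POSTs stay valid.
function updateDigestFromResponse(xhr) {
    var digest = xhr.getResponseHeader("X-RequestDigest");
    if (digest && typeof document !== "undefined") {
        var field = document.getElementById("__REQUESTDIGEST");
        if (field) { field.value = digest; }
    }
    return digest;
}
```

    You could call this from e.g. a jQuery `complete` callback: `complete: function (xhr) { updateDigestFromResponse(xhr); }`.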

    Also, if you’re not using JavaScript but instead building an app using other frameworks, platforms, languages etc – you can always use the two aforementioned methods to update the token. Well, you actually need to do it since you don’t have any hidden input field :)

    How to do it the best and native way

    The drawback with the methods above is that you either have to request a new form digest before every call to make sure it is up to date, or catch the exception and retry your query. As we all know, this leads to bad performance and/or cluttered JavaScript code (like we don't have enough of that already!).
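    The catch-and-retry pattern mentioned above can be sketched like this (illustrative only; `sendFn` and `refreshDigestFn` are placeholders you would wire to your own request and your /_api/contextinfo call):

```javascript
// Run the request; on a 403 (invalid digest), refresh the digest once
// and retry. Anything else is returned to the caller as-is.
function postWithRetry(sendFn, refreshDigestFn) {
    var result = sendFn();
    if (result.status === 403) {
        refreshDigestFn();
        result = sendFn();
    }
    return result;
}
```

    This keeps the extra round trip for the rare timeout case only, instead of refreshing before every call.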

    Fortunately there is a native method for refreshing the token built into the SharePoint JavaScript libraries: a function called UpdateFormDigest(), defined in INIT.js. The method takes two parameters: first the URL of the current site (remember, the token is only valid for one site), and second an update interval. The update interval value is also already given to us, in a global constant called _spFormDigestRefreshInterval. This is how you should use the function:

    UpdateFormDigest(_spPageContextInfo.webServerRelativeUrl, _spFormDigestRefreshInterval)

    As you can see, it's very simple, uses only built-in stuff and there's no magic to it. Under the hood this method calls the /_vti_bin/sites.asmx web service, and it does so synchronously. This means that all you have to do is copy and paste this line into your own code just before your REST calls. The other smart thing about this method is that it uses the update interval and only updates the form digest when needed – so no extra calls.
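    The refresh-when-needed behavior boils down to a simple cache keyed on the interval. A minimal sketch of that idea (my own simplified illustration, not SharePoint's actual code):

```javascript
// Cache the digest and only call the (synchronous) refresh function
// when the configured interval has elapsed since the last fetch.
function createDigestCache(refreshFn, refreshIntervalMs) {
    var digest = null;
    var fetchedAt = -Infinity;
    return function ensureFresh(nowMs) {
        if (nowMs - fetchedAt >= refreshIntervalMs) {
            digest = refreshFn();   // synchronous, like UpdateFormDigest
            fetchedAt = nowMs;
        }
        return digest;
    };
}
```

    Calling the returned function before every REST request is cheap: it only hits the server when the cached digest is older than the interval.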


    There you have it – no more security validation issues with your SPA applications in SharePoint 2013. All you have to do is copy and paste the line above and stay on the safe side.

  • Office Web Apps 2013: Securing your WAC farm

    Tags: WAC Server, Office Web Apps, SharePoint 2013, Security

    With this new wave of SharePoint, the Office Web Apps Server (WAC – I don't like the OWA acronym; that's something else in my opinion) is its own server product, implementing the WOPI client protocol, which allows it to retrieve documents from SharePoint on behalf of the user. Documents flow from the WOPI servers (SharePoint, Lync, Exchange etc.) to the Office Web Apps Server – this means that potentially confidential information is transferred out of the SharePoint environment and stored/cached on another server. This could result in unnecessary information leakage and compromise enterprise security.

    In this post I will walk through a number of steps you can take to properly secure your Office Web Apps 2013 farm. You should seriously consider implementing most of these methods.

    Note: this post focuses on the Office Web Apps Server and not a WOPI client in general (but if you’re building your own you should consider security as well!).

    The WOPI protocol specification and security

    Note: I will not cover how WOPI clients and servers implement server-to-server authentication and authorization.

    WAC runs as Local System

    To start with, it is very important to know that Office Web Apps Server 2013 runs as Local System and Network Service on the machine it is installed on. There is no service account or anything! This means that you cannot protect your systems using dedicated accounts etc., like you do with SharePoint, SQL and other applications.

    The image below shows the Office Web Apps Windows Service, which runs as LocalSystem.

    Local System

    And this image shows some of the application pools in IIS on an Office Web Apps machine.

    Network Service

    The advantage of using these local accounts is that it makes installation and configuration easier. But it is very important that you are aware of this configuration.

    SSL is a requirement!

    Exposing the Office Web Apps Server over HTTPS should be a requirement in my opinion. There is no reason not to. Having it on HTTP will only cause trouble for you; for instance, if your SharePoint uses HTTPS you will not be able to render the iframe containing the document (aka the WOPI Frame), since you're not allowed to show HTTP content in an HTTPS environment. But first and foremost, you're sending data in clear text.

    So what about SharePoint on HTTP then? Well, if you're using SharePoint 2013 you should seriously consider running that over HTTPS as well – that IS a best practice. SharePoint 2013 leverages several technologies that send tokens and credentials over the wire, OAuth for instance, so in order to have a secure environment make sure you use HTTPS for SharePoint as well. If you run SharePoint on HTTP you must fiddle with the security settings in SharePoint to allow OAuth over HTTP – and that is not a good thing.

    Certificates are king!

    Any WAC farm running on SSL must have a certificate for the HTTPS endpoint. You can use self-signed, issue certificates using a Domain CA or buy a certificate. When you’re creating the WAC farm, using New-OfficeWebAppsFarm, you can/should specify the certificate.

    For any SharePoint, WAC and even SQL installations nowadays certificates are more and more important. If you’re on the verge of deploying these in your organization you should consider deploying a Domain CA – which is not a lightweight task.

    Securing the communication using IPSec

    If you for some reason do not run HTTPS on SharePoint and/or WAC you could consider implementing IPSec. Unfortunately there is no button in the Control Panel that says “Use IPSec”. This is something that requires careful planning and testing. So going SSL might be an easier way. But consider the scenario where you have an internet facing web site which leverages WAC and using the HTTP protocol – then you should consider using IPSec for the communication between SharePoint and Office Web Apps Server.

    Firewall considerations and requirements

    When setting up your Office Web Apps farm you should also configure the firewall on the WAC machines. Office Web Apps uses four different ports. It uses 80 and 443 for HTTP and HTTPS; these are used by the end-users and the WOPI server/client communication. Internally Office Web Apps uses port 809 (HTTP) and 810 (HTTPS) for communication between the WAC machines. I've only seen 809 in use, which is the default. I have not found a way to configure WAC to use port 810 (although internally WAC has a switch for it), and if you do find a way, it's likely unsupported. The things sent over the wire on the admin channel (809) are mainly health and configuration information for the WAC farm, but it would be nice to be able to secure this channel as well (IPSec!).

    When installing WAC the Windows firewall is configured to allow incoming TCP connections on port 80, 443 and 809.

    WAC Windows Firewall Rule

    As always it is a good practice to evaluate these default rules and if you’re not using port 80, disable that port. For port 809 it might also be a good practice to make sure that it only allows incoming connections if they are secure (i.e. implement IPsec).

    Even more secure...

    Preventing rogue machines

    So far we've been talking about how to secure information being transmitted to and from the Office Web Apps farm. Let's take a look at Office Web Apps farm security from another angle. Joining a new WAC machine to an Office Web Apps farm can be quite easy. The only thing you need is local administrator access on the WAC machine that is the master (Get-OfficeWebAppsMachine gives you the master machine). Depending on how your (virtual) metal is hosted this might be a problem; too many sysadmins out there have too many permissions. If you have this access you can easily join a rogue machine to the WAC farm and take control over it, without the users/clients knowing anything about it.

    There are a couple of methods you can and should use to protect the WAC farm. And the error messages below can also be a good troubleshooting reference…

    Master Machine Local Administrator

    If the account trying to create the new WAC machine does not have local admin access on the machine specified when joining the WAC farm, you will simply get an “Access is denied”.

    New-OfficeWebAppsMachine : Access is denied

    As a side note; if you’re not running the cmdlet using elevated privileges you will get an “You must be authenticated as a machine administrator in order to manage Office Web Apps Server”.

    Using the Firewall

    I already mentioned the firewall. If the machine joining the WAC farm cannot access the HTTP 809 channel, New-OfficeWebAppsMachine will throw a “The destination is unreachable” error.

    The destination is unreachable error

    This is a fairly easy way to protect the farm, but if the user has local admin access on the master machine it can easily be circumvented.

    Certificate permissions

    If you're using a domain CA, make sure you protect the private key using ACLs, or if you're buying a certificate, make sure to store the certificate's private key in a secure location. If you specified a certificate when the Office Web Apps farm was created, which you should have, then the user cannot join a new machine – regardless of local machine admin rights – since the user cannot install the certificate locally. The error message shown is “Office Web Apps was unable to find the specified certificate”.

    Office Web Apps was unable to find the specified certificate

    Using an Organizational Unit in Active Directory

    The way Microsoft recommends securing your WAC farm is to have a dedicated OU in Active Directory where the computer accounts for the WAC farm are located. When joining a new machine to the farm, the cmdlet verifies that the account is in the OU specified by the WAC configuration. If it's not, you'll see “The current machine is not a member of the FarmOU”.

    The current machine is not a member of the FarmOU

    The farm OU is specified when creating a new WAC farm, or using the Set-OfficeWebAppsFarm cmdlet. The only caveat with this OU is that it has to be a top-level OU in Active Directory. Creating that OU in your or your customer's AD might cause some headache, but if you want to use the FarmOU as a protection method for your farm it has to be this way. That's the way it is designed!

    Also, having all the WAC servers in an OU gives you other benefits, such as using Group Policies to control the WAC servers.

    Limit the WOPI Server and host access

    Now we've seen how to protect the farm from rogue machines and data tampering. Another issue with the WAC farm in its default configuration is that any WOPI server can use it. That might not be a big problem for most internal installations, but what if you've sized a WAC farm and someone with a huge SharePoint collaboration implementation connects to it? That can surely bring it down. Or if you expose your Office Web Apps farm on the internet, anyone on the internet can potentially use it.

    For this purpose there's a cmdlet called New-OfficeWebAppsHost. This cmdlet allows you to specify host names that will be accepted by the WAC farm. The cmdlet implicitly treats any domain as a wildcard, so all hosts within that domain are allowed. For instance the following cmdlet (using contoso.com as an example domain) will allow all WOPI servers in that domain to contact the WAC farm:

    New-OfficeWebAppsHost -Domain "contoso.com"

    Do not forget to do this!!
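    To make the wildcard behavior concrete, here is a small illustration (in JavaScript, with assumed semantics: an allow-listed domain admits the domain itself and any subdomain, but not lookalike domains):

```javascript
// Return true if `host` matches any allow-listed domain, where a domain
// implicitly covers itself and all of its subdomains.
function isAllowedWopiHost(host, allowedDomains) {
    return allowedDomains.some(function (domain) {
        return host === domain || host.endsWith("." + domain);
    });
}
```

    Note that suffix matching alone is not enough – the check must anchor on the dot, so that a host like evilcontoso.com is not admitted by an allow-list entry for contoso.com.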


    You've seen quite a few ways to protect your WAC farm from information leakage, rogue machines, undesired excessive usage etc. Using HTTPS and certificates together with a dedicated OU in Active Directory will give you the most secure WAC farm. Hopefully you also understand a bit more about how Office Web Apps Server works internally. It's a magnificent and simple server product, but it should be handled with care.

  • Visual guide to Azure Access Controls Services authentication with SharePoint 2010 - Index Post

    Tags: Security, Windows Live, Windows Azure, SharePoint 2010

    This post serves as an index for all the articles in the Visual guide to Azure Access Controls Services authentication with SharePoint 2010

    This series is a set [not yet determined amount] of articles where I show you how to leverage the Azure Access Controls Services (ACS) in combination with SharePoint 2010 to make it easier for you to use identity providers such as Google ID, Windows Live ID, Facebook AuthN etc.


    • Part 1 - basic setup This article guides you through the basic setup of ACS and SharePoint 2010, from creating the Azure ACS endpoints to configuring the identity providers and relying party to finalizing the setup in SharePoint 2010  and finally log in using Google ID credentials.
    • Part 2 - common problems Even if you follow all the instructions in part 1, you or, more likely, your colleague will run into problems. This article discusses the most common problems - and will hopefully be updated as time flies by.
    • Part 3 - Facebook Authentication The third part shows how to enable Facebook Authentication for your Azure ACS namespace and log in using a Facebook account to SharePoint 2010.
    • Part 4 - Multiple Web Applications This post will show you how to handle the case when you have multiple web applications and would like to use the same Azure ACS settings.
    • Part 5 - Custom Claims In this post we'll take a look at how we can add custom claims through Azure ACS and leverage them in SharePoint.
    • Part 6 - Facebook Integration In this post we'll use the features in Azure ACS to do deeper integration with the Facebook Graph API.
    • Part 7 - Customize the login experience
    • ...

    Note: this is what is actually written and planned for now. Any planned posts might change over time...

    Happy reading!

  • Visual guide to Azure Access Controls Services authentication with SharePoint 2010 - part 4 - multiple web applications

    Tags: Security, Windows Azure, SharePoint 2010

    Back with another promised post in the Visual guide to Azure Access Controls Services authentication with SharePoint 2010. This time I'm going to show you how to work with multiple web applications. We're going to use the stuff we configured in part 1 (basic setup) and part 3 (Facebook setup), and hopefully we're avoiding the problems discussed in part 2 (common problems).


    In this article I would like to show you how to use Azure ACS and SharePoint 2010 when we have multiple Web Applications in SharePoint. The sample assumes the same web application as used in the previous posts, but now with a dedicated My Site Host Web Application (called http://my). If we just enable the same Trusted Identity Provider for the "My" Web Application, the user will be redirected to the Azure ACS login page, but afterwards he/she will be sent back to the other web application (called http://sp2010 in the previous posts), because that's the web application we configured as the Return URL in Azure ACS.

    Only one Return URL

    Since this is a "visual" guide we're only using the Azure ACS management web site to configure ACS, and the UI only supports one Return URL per Relying Party Application. If you use the ACS management web services you can configure multiple Return URLs - but that's another story for someone else to write about.

    So, we actually need to create a new Relying Party Application in ACS to handle a different Return URL, and with that also another Trusted Identity Provider in SharePoint.

    Create a new Azure Relying Party

    Let's start with Azure ACS. Log in to the Azure management portal and go to Relying Party Applications, then choose to add a new one. Give it a Name (must be unique), a new Realm (must also be unique within the ACS namespace) and finally the new Return URL for our new web application.


    The next parts are pretty trivial, but important. Make sure you choose SAML 1.1 as token format, increase the Token Lifetime (to the same value as your other/original RP), then choose the same Identity Providers as for your other RP. Do NOT create a new Rule Group, make sure to select the same Rule Group as your original Relying Party. This is to make it easier for us to manage (especially for upcoming posts). Leave the rest as is and click Save.

    The next thing to do is to configure the Token Signing certificate for this guy. Click on Certificates and Keys and choose to add a new Token Signing certificate. Choose the newly created Relying Party Application in the drop down and then choose to upload the SAME certificate that you used for your original RP - this is important, no other certificate will do!

    Add the SAME certificate for the new RP

    Hit Save and we're done configuring our new Relying Party Application for the new SharePoint Web Application.

    Create a new SharePoint Trusted Identity Provider

    In SharePoint we need to add a new Trusted Identity Provider that uses the new Realm that was specified in the Relying Party Application. The procedure is basically the same as for our first Trusted IP, but with a subtle yet important difference - we will not set the ImportTrustCertificate parameter in the cmdlet. We have already imported the certificate once, to the original trusted IP, and adding the same certificate to a new trusted IP will throw an exception. But this is a good thing, as you will shortly see. Use the following PowerShell to create the new trusted identity provider.

    $realm = "uri:visualauthn-my"
    $signinurl = "https://[your ACS namespace].accesscontrol.windows.net/v2/wsfederation"
    $map1 = New-SPClaimTypeMapping `
        -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" `
        -IncomingClaimTypeDisplayName "Email" `
        -SameAsIncoming
    New-SPTrustedIdentityTokenIssuer `
        -Name "Visual AuthN ACS - MY" `
        -Description "ACS rocks!" `
        -Realm $realm `
        -ClaimsMappings $map1 `
        -SignInUrl $signinurl `
        -IdentifierClaim $map1.InputClaimType

    The important things to notice here are that we're using the new Realm and the same sign-in URL, giving the trusted IP a new Name, and not using the ImportTrustCertificate parameter.

    Creating the new trusted ip

    Now we're ready to connect the web application to the new trusted identity provider.

    Configure the secondary Web Application

    In Central Administration go to Web Applications Management, choose the new Web Application (in this case http://my) and select the Authentication Providers button in the Ribbon. Choose the appropriate zone in the dialog, then scroll down to Trusted Identity Providers and select the newly created one.

    Configure the web app

    Click Save when done and you're ready to test it.

    Test it

    Now when you browse to the secondary web application you can log in using our new Azure ACS Relying Party Application, and you will be redirected to the correct web application (and not back to the first one, which would be the case if we used the original trusted IP for the secondary web application).

    Now to the really interesting stuff! If you choose My Settings in both web applications you will notice that the account name is exactly the same.

    Personal settings

    This is good! This account will now be the same throughout your web applications, and you can actually set up a dedicated My Site Web Application and have all Notes and Tags from the other web apps.

    Even though we created a new trusted identity provider, the account name is exactly the same - even the issuer is the same in this identity. The reason is that we did not add any signing certificate to the secondary trusted IP (and we don't want to). When first trying to sign in, SharePoint redirects to the sign-in URL (remember, we have the same one for both trusted IPs) using the Realm for the web application. We're using the same token signing certificate for both ACS RPs (and the same rule group, which means we get the same set of claims back), and when SharePoint receives the incoming request from Azure ACS it locates the trusted identity provider using that token signing certificate. It will find our original trusted identity provider and use its settings (including claims mappings - which we'll also see in subsequent posts). Smart, huh!

    The image below just shows our trusted IP's and their certificates.



    In this case we're adding a new Relying Party Application, and a new Trusted Identity Provider in SharePoint, just to get a new Return URL. The Trusted IP is there to send the correct Realm to Azure ACS so that it sends the response back to the correct Return URL. And that is all the new trusted IP is for - in follow-up posts we'll modify the trusted IP, and we only need to modify the original one for the changes to take effect on both web applications.
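    The certificate setup described above can also be inspected from PowerShell instead of screenshots. A quick sketch (run in the SharePoint 2010 Management Shell; the Format-Table label is my own):

    ```powershell
    # List all trusted identity providers and their token signing certificate thumbprints
    Get-SPTrustedIdentityTokenIssuer |
        Format-Table Name, `
            @{Label = "Signing cert"; Expression = { $_.SigningCertificate.Thumbprint }} -AutoSize
    ```

    If everything is configured as above, the original trusted IP should show the thumbprint of the uploaded certificate, while the secondary one shows none - exactly because we skipped the ImportTrustCertificate parameter.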

    SharePoint is smarter than you think!

  • Visual guide to Azure Access Control Services authentication with SharePoint 2010 - part 3 - Facebook

    Tags: Security, Windows Azure, SharePoint 2010

    Welcome back to a third post in the Visual Guide to Azure Access Control Services authentication with SharePoint 2010. In the first part I showed you how to do the basic configuration of Azure ACS and SharePoint 2010 and log in using a Google ID. The second part discussed the most common problems I've seen so far. In this post we'll continue extending the ACS Relying Party to support another Identity Provider - namely Facebook! Depending on what type of site/community you're trying to build with your SharePoint 2010 site, it might be of interest to use Facebook login (they have like a gazillion users or something). The Facebook AuthN parts are a bit different from the other OOB IPs in Azure ACS - but not complicated at all, so let's get started...

    Create a Facebook application

    The first thing we need to do is to actually create a Facebook application. This is required to allow Azure ACS to convert the Facebook OAuth token into outgoing claims, using the Facebook Graph API. To do this you need a Facebook developer account. Once you have your account you just click Apps in the upper right.

    Apps, apps, apps

    This will take you to all your apps - if you're new you don't have any... but you get the point. Next, click the Create New App button. Give the app a display name and a namespace (as always, namespaces must be unique - the UI helps you with that). Agree to the terms (you read those, right!?) and click Continue.

    Create a new app

    You will be asked to fill in the security check - do that and finalize your app creation.

    Now we need to do some configuring, but just one simple thing. We need to tell the app how we integrate with Facebook - we do that by checking the Website checkbox and then entering the URL to Azure ACS - it should be https://[your ACS namespace] (you can also find the URL under Application Integration in the ACS portal). Then save the changes. It says this will take a couple of minutes, but we have some more configuring to do so we'll be okay.

    Connect the app to Azure

    Keep this page open, you'll need the App ID and App Secret in the next step, when we configure Azure ACS.

    Important stuff

    Configure Azure Access Control Services

    I assume that you have already created your Azure ACS Relying Party; if you have not, revisit Part 1 of this series. Choose to add a new Identity Provider and select Facebook application. Click Next when done.

    Add Identity Provider

    Now you need the App ID and App Secret from the Facebook application, input those values.

    Configure the Facebook IP

    Make sure that the checkbox is checked next to your Relying Party under Used By and then click Save.


    The next step is to create Rules for this new IP. Go to the Rule Group that your RP is using and click on Generate. Azure ACS will by default mark those IPs that do not have any rules, so just click the Generate button to create the default rules.

    More rules...

    Once the rules are created, you can verify that an output claim has been created for the Facebook IP using the emailaddress claim.

    It has to be there....

    Also check your Relying Party and make sure that it has the correct set of IPs configured.

    Remove that fugly WLID

    That's it with the Azure ACS configuring. Now all that is left to do is test it in SharePoint!

    Login using your Facebook account in SharePoint 2010

    Before logging in you need to give your Facebook account access in SharePoint. You'll use the e-mail address of the Facebook account - just add it to the Members Group of the site.

    Grant access to the FB account

    Then log out, sign in as a new user and choose the Azure ACS login. You will now, as usual, be redirected to the ACS sign-in screen, and you should see Facebook listed as an IP there. You might have to refresh your browser using Ctrl-F5, since the page might be cached.

    Sign in...

    When you click the Facebook button you will be redirected to Facebook, which will prompt you for your credentials. Log in using the account you just gave permissions to.


    The first time, you will be asked to approve that Facebook sends information to your app - just click Go to App and you'll be authenticated in ACS.


    And voila! You have now logged in to SharePoint 2010 using your Facebook account.

    Houston, we have lift off


    By making SharePoint take advantage of Azure ACS and Facebook integration, you can very easily create a login experience that users are quite used to by now. As I have shown, it only takes a couple of minutes.

    I just can't stop writing on this topic, so I'll be back with some more awesomeness another day...

  • Visual guide to Azure Access Control Services authentication with SharePoint 2010 - part 2 - common problems

    Tags: Security, Windows Azure, SharePoint 2010

    This is the second part of the Visual guide to Azure Access Control Services authentication with SharePoint 2010. I hope you've read part 1, which showed you how to configure SharePoint 2010 to use Windows Azure Access Control Services (ACS) as the federated Identity Provider (IP). In this post I'll go through the most common errors that you might stumble upon (most likely because you didn't follow part 1 thoroughly). These errors are also applicable to other providers such as ADFS.

    Note: this post is written using Azure ACS as per February 2012 and with SharePoint 2010 Server with SP1 and December 2011 Cumulative Update.

    So let's get started with a very annoying problem - Live ID...

    Windows Live ID and the e-mail claim

    The first error is not an error per se. You will see this one if you followed the instructions in the first part - but tried to use Windows Live ID when logging in. What you will see is the classic "An unexpected error occurred".

    An unexpected error occurred

    The key here is to take a look at the URL. You will see a query string parameter called errorCode which has the value TrustedMissingIdentityClaimSource.

    URL reveals it all

    So there is something missing! To understand what happened we need to go back to the ACS management portal and take a look at the Rule Group that was created for our Relying Party Application. As you can see in the image below, only one claim is augmented when using Windows Live ID - the nameidentifier. In part 1 we configured the identity claim (in the PowerShell script) to be the e-mail address claim. If you read the last series on Live ID AuthN with SharePoint 2010 you might remember that you could not use the e-mail address of the Live ID user but instead had to use the UUID (basically a GUID that uniquely identifies the user). Unfortunately (and this is a real bummer) this UUID claim type is not available in Azure ACS for Live ID. Instead the only claim we have access to is the nameidentifier - which is a unique identifier for the specific user in this ACS namespace.

    Rules, rules, rules

    If we add a new Rule in Azure ACS that uses the nameidentifier as input claim and outputs it as the emailaddress identifier claim, then we at least have something unique for the user to work with.

    Create me a rule

    When you now log in using a Live ID you will get an access denied message, which displays this unique nameidentifier. Copy the identifier, then log in using a Windows account (or the working Google account) and add this identifier to the Members group (for instance).

    Who is that!

    Then log in again and it will work, but...


    It looks pretty bad! So you'd better leave Windows Live ID out of the discussion (until Microsoft fixes ACS to give us a decent claim to work with). Edit the Relying Party Application in ACS and remove Windows Live ID as an Identity Provider and you will be a much happier person.

    Turn off WLID

    No Rules applied here!

    One problem you might see when setting up the authentication using ACS is that you might be too trigger-happy when configuring and just forget to add any Rules to your Relying Party. When you try to log in you will see an error message like this:


    The error message is ACS50000: There was an error issuing a token, with two inner messages, ACS60000 and ACS60001, where the last one gives us the clue in plain text: "No output claims were generated during rules processing". This happens because you have no rules applied to that Relying Party that convert the incoming claims from the IPs to outgoing claims. To fix it, edit your Rule Group and just use the Generate button to create your output claims:

    No rules!

    This might happen if you add more Identity Providers to your Relying Party after you have configured it.

    Note: make sure that you're doing it for the correct Rule Group - the one selected in the Relying Party.

    Invalid identifier claim

    Another error, though not that common, that might happen if you start fiddling with the claims is that no incoming identifier claim reaches SharePoint. When this error happens you will get the classic An unexpected error has occurred error page.

    The expected unexpected

    The actual error can be found either in the URL or in the trace logs - both explain exactly what has happened. In the URL the error code is written out as TrustedMissingIdentityClaimSource. Exactly the same as in the Windows Live ID dilemma above.

    Check thy URL

    And in the trace logs you will find several entries (depending on your log level):

    ULSViewer FTW!

    To fix this, make sure that you have an output claim from the ACS Relying Party, for each Identity Provider, that matches the exact claim you specified when configuring the token issuer identity claim (using PowerShell). You can verify your identity claims by running this PowerShell snippet:

    Get-SPTrustedIdentityTokenIssuer | ft Name, `
        @{Label = "Id Claim"; Expression = {$_.IdentityClaimTypeInformation.InputClaimType}} -AutoSize


    Then check your ACS rules and verify that each IP has an output claim of that type (and that it is unique, of course!).

    Token lifetime

    The most common problem with Azure ACS and SharePoint 2010 is that you successfully log in and then either are immediately redirected back to the login page, or you're logged in for a second and then, as soon as you click something, are asked to log in again. If you have enabled Verbose trace logging for Claims Authentication (which definitely is a good thing to do when troubleshooting claims issues) you will also see this message in the ULS logs: "Token cache entry missing.".

    missing token

    This is most likely because you have misconfigured the Token Lifetime, or just left it at its default value of 600. Most likely you did not read my previous post thoroughly enough!

    The STS of SharePoint 2010 has a default logon token lifetime of 10 minutes (600 seconds), and this is also the default value of the ACS RP token lifetime (600 seconds). If you can actually log in with these default values, you have a fast connection/machine, but any subsequent action in SharePoint will force you to re-authenticate. You have to configure the lifetime in ACS to be larger than the value in SharePoint (either by increasing the ACS token lifetime or by lowering the LogonTokenCacheExpirationWindow value of the SharePoint 2010 STS).

    In my previous post I set the token lifetime to 700 seconds - this will make your users log in every 100 seconds (700-600). If you set it to 610 seconds in ACS you will have to re-authenticate every 10 seconds. A recommendation is to bump it up to 3,600 seconds, so you don't annoy your users too much!
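    If you'd rather lower the SharePoint side of the equation than raise the ACS token lifetime, the STS configuration can be read and updated from PowerShell. A sketch (run in the SharePoint 2010 Management Shell; the five-minute window is just an example value):

    ```powershell
    # Inspect the current logon token cache expiration window of the SharePoint STS
    $sts = Get-SPSecurityTokenServiceConfig
    $sts.LogonTokenCacheExpirationWindow

    # Lower it, e.g. to 5 minutes, so that it is smaller than the ACS token lifetime
    $sts.LogonTokenCacheExpirationWindow = New-TimeSpan -Minutes 5
    $sts.Update()
    ```

    Bumping the ACS token lifetime to 3,600 seconds, as recommended above, lets you leave this value at its default instead.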

    Not using correct SAML version

    Another common error is that you get a Runtime Error (Yellow Screen of Death) directly after you have logged in using one of the IPs. The ULS logs do not show any useful information; you just see a request going to the /_trust/default.aspx page.


    On the other hand, if you switch to the Windows Event Viewer you will notice an ASP.NET error with event id 1309. And if you look closer at the details, the answer is there, once again right at your fingertips.

    Event Viewer

    The exception message says: "ID4014: A SecurityTokenHandler is not registered to read security token ('Assertion', 'urn:oasis:names:tc:SAML:2.0:assertion').". Once again you've misconfigured the ACS Relying Party - you must use SAML 1.1 to get this working (without setting up an intermediate ADFS server or similar).

    Certificate problems 

    Another issue that might throw the Yellow Screen of Death on the /_trust/default.aspx page is when you have invalid or missing certificates. Again, both the trace logs and the Windows event logs show us the error. In the Windows event logs you will see a SharePoint error with event id 8311, saying that it could not validate the certificate used to sign the incoming claims. The trace logs will show the exact same error, also with id 8311, in the Topology category.


    Fix the error by making sure that you have added the same signing certificate to Azure ACS and registered it as a trusted root authority in SharePoint. See part 1 of this series for more info.


    As you can see, it's all about configuration and getting it right! I hope this post will help you with the basic problems, and if you get any other errors, please post a comment.

    I'll be back with at least one more post on this topic...

  • Suddenly getting Access Denied on your SharePoint 2010 User Profile Sync

    Tags: Security, SharePoint 2010

    Last week I stumbled upon a really interesting, new and shiny User Profile Synchronization issue - one of those things that just make your day! We had to manually initiate a full synchronization, after doing some updates to one of the user profile properties, and the user profile synchronization just would not start...

    Timer Job - Access Denied

    Everything looked fine (on the surface) and we tried the incremental sync, which also looked like it was starting, but nothing happened. The sync service was up and running and the FIM services were started, but the MIISClient showed no activity. We took a look at the timer jobs, which are responsible for kicking off the synchronizations, and saw that they all failed with the error message Access Denied.

    Nothing more than this simple error message. Since the timer jobs are executed using the Farm Account this sounded very peculiar. Oh, and those of you who still have your Farm Account in the local administrators group would probably never see this error - you'll see why in a minute!

    The next resort was to dig into the trace logs (ULS), using my favorite SharePoint tool: ULSViewer. And there we had a Critical and an Unexpected entry, related to this Access Denied error message.

    ULS Logs

    The accompanying stack trace showed me that there were problems getting the instances of the Management Agents.

    Stack Trace

    Time to fire up my second best SharePoint tool: Reflector! Peeking into the failing method in the User Profile assembly revealed that the Access Denied was thrown while trying to retrieve the MAs using WMI. Now, this sounded really weird. As usual nothing had been changed in the farm (and this time I knew it for a fact) - but you should always check with the admins, which I did, and no new policies or similar had been applied to any machines recently.

    On the machine that hosted the sync service/FIM services I started a Management Console, added the WMI snap-in and took a look at the Security tab for the local machine. And this is what I found:

    Weird WMI permissons

    The security settings for MicrosoftIdentityIntegrationServer did not look right - they more or less looked like the default WMI security settings. (And as you can see the Administrators group is there - which is why those of you who still have the farm account in the administrators group probably never see this error...) A quick comparison with the identical staging environment showed a whole lot more permissions.

    Correct WMI permissions

    To be exact, the following permissions did not exist in the environment that showed the Access Denied:

    • WSS_ADMIN_WPG group - Execute Methods, Provider Write, Enable Account, Remote Enable
    • FIMSyncOperators group - Execute Methods, Provider Write, Enable Account, Remote Enable
    • FIMSyncBrowse group - Execute Methods, Provider Write, Enable Account, Remote Enable
    • FIMSyncPasswordSet group - Execute Methods, Provider Write, Enable Account, Remote Enable

    I fixed the permissions using the WMI management console, went back to Central Admin and started the synchronization manually and within a minute the synchronization was running beautifully!

    I do not know about the supportability of this one - and you should NOT be doing this unless you really have to. A safer way might be to unprovision and reprovision the User Profile Synchronization Service - this should correctly set these permissions.
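    The unprovision/reprovision route can be sketched in PowerShell as well (a sketch only - the -like filter on the type name is my own shortcut, and it assumes a single sync service instance in the farm):

    ```powershell
    # Find the User Profile Synchronization Service instance
    $sync = Get-SPServiceInstance |
        Where-Object { $_.TypeName -like "User Profile Synchronization*" }

    # Unprovision it
    Stop-SPServiceInstance $sync -Confirm:$false
    # Reprovision it - in practice this step is usually done from Central
    # Administration, since starting the sync service requires supplying
    # the farm account password
    Start-SPServiceInstance $sync
    ```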

    So, what caused this then? I've not yet found the actual source of the problem (which is frustrating, to me at least), but I now know how to fix it if it appears. Safe sources tell me that this might happen when you're doing a backup of the UPSS (even when it is done the correct way, stopping and starting the UPSS so no ongoing syncs interfere with the backup). The question remains open...

    Happy Christmas everyone!

  • Fix the SharePoint DCOM 10016 error on Windows Server 2008 R2

    Tags: SharePoint, Security, Windows Server 2003, Windows Server 2008, Windows Server 2008 R2

    If you have been installing SharePoint you have probably also seen and fixed the DCOM 10016 error. This error occurs in the event log when the SharePoint service accounts don't have the necessary permissions (Local Activation on the IIS WAMREG admin service). Your farm will still function, but your event log will be cluttered.
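    To see just how cluttered your event log actually is, you can count the 10016 entries with a PowerShell one-liner (a sketch; Get-WinEvent ships with Windows Server 2008 and later):

    ```powershell
    # Count the DCOM 10016 errors in the System event log
    (Get-WinEvent -FilterHashtable @{ LogName = "System"; Id = 10016 } `
        -ErrorAction SilentlyContinue).Count
    ```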

    On a Windows Server 2003 or Windows Server 2008 machine you would just fire up the dcomcnfg utility (with elevated privileges) and enable Local Activation for your domain account.

    But on Windows Server 2008 R2 (and Windows 7, since they share the same core) you cannot do this - the property dialog is all disabled due to permission restrictions. It doesn't matter if you are logged in as an administrator or using elevated privileges. The change is probably due to some new security improvements.

    DCOMCNG - all disabled

    The reason it is disabled is that this dialog is mapped to a key in the registry which the Trusted Installer owns and everyone else only has read permissions on. The key used by the IIS WAMREG admin is:


    Registry permissions on R2 Registry permissions on R1

    Image on the left shows the default permissions for Windows Server 2008 R2 and on the right the default settings for Windows Server 2008.

    To be able to change the Launch and Activation Permissions with dcomcnfg you have to change the ownership of this key. Start the registry editor (regedit), find the key, click Advanced in the Permissions dialog of the key and select the Owner tab. Now change the owner of the key to, for example, the administrators group, and then give the administrators group full control. Make sure not to change the permissions for the TrustedInstaller.

    Now restart the dcomcnfg application, find the IIS WAMREG application once again, and set the Launch and Activation settings you need to get rid of the DCOM 10016 error.


    Good luck!

    WARNING: Changing the registry may seriously damage your server. Everything is at your own risk!

  • In defense of User Account Control

    Tags: Security, Windows Vista

    Everybody has something to say about Windows Vista, good and bad. Most often I hear complaints, especially about User Account Control. Today the Swedish IDG website had an article about the 10 most annoying things in Vista and how to solve them, and of course one of them was about the poor UAC.

    I must say that I have been using Vista since before RTM, and I only found UAC annoying during the first few days, when setting up the machine. Since then I barely notice it - and if I do, I know why, and I can feel safer using my machine.


    Why UAC?

    User Account Control was not thrown into Vista just to pop up warning dialogs whenever you do something you should be careful about. It is there to make you aware that you or an application is making changes to something that may affect your machine's configuration and/or security.

    During the initial installation and configuration phase of your Vista machine this may of course be annoying, since you are doing a lot of installs and configuration, but during normal usage you should not see it at all. I see the UAC dialog about twice a day on my laptop, which I use for development: when I start the necessary services (like SQL Server, which I have set to manual startup) and when I start Visual Studio for SharePoint development (I have installed WSS on my Vista machine). Then once in a while UAC shows up when I install or uninstall applications. At home on the Media Center I have not seen it in ages.

    I’ll just turn it off

    Most Vista UAC tips and tweaks state that you should turn UAC off, and so did the article I referred to. This is a completely working solution - but I do not think it is a good solution! Anyone recommending it should immediately quit their job and do something else.

    We Windows users have had some rough years prior to Vista, with viruses, Trojans and worms infecting Windows machines due to bugs in applications and careless clicking on mail attachments. Recommending turning it off will set us back a few years... but it's up to you.

    Running all programs as administrator is just plain dumb. What other modern operating system recommends running as administrator or root?

    The UAC is designed to be annoying

    Yes it is! If it wasn't annoying you wouldn't notice it - and then there would be no sense in having it. UAC is not primarily designed for power users; it is mainly designed for normal Vista users, like my mother for example. I have previously helped her remove stuff from her machine that she had no idea she had installed. Now with Vista and UAC I have not had that "pleasure".

    As a power user who often tries new programs, I am made aware when a program tries to write to the system folders or the registry, and I can allow or disallow it.

    I think UAC is here to stay. Hopefully Windows 7 will contain some tweaks to make it more responsive, since starting the Task Manager with elevated privileges during 100% CPU usage really sucks.

About Wictor...

Wictor Wilén is the Nordic Digital Workplace Lead working at Avanade. Wictor has achieved the Microsoft Certified Architect (MCA) - SharePoint 2010, Microsoft Certified Solutions Master (MCSM) - SharePoint and Microsoft Certified Master (MCM) - SharePoint 2010 certifications. He has also been awarded Microsoft Most Valuable Professional (MVP) for seven consecutive years.
