The blog of Wictor Wilén

  • Announcing Azure Commander

    Tags: Windows Azure, Windows Phone 8, Windows 8, Microsoft Azure

    Microsoft Azure has not gone unnoticed by anyone out there, in the SharePoint space or any other space. Microsoft Azure is a really great service, or rather set of services, that every (Microsoft or SharePoint) developer or IT-Pro should use and embrace. Personally I’ve been using Azure since the dawn of the service, and I’ve been using it more and more. I use it to host web sites, SharePoint and Office Apps, Virtual Machines, Access Control and lots of other things.

    Lately I’ve been using it more and more for work, and my customers and company are seeing huge benefits from using Microsoft Azure. We’re using it to host our development machines, demo environments, full SharePoint production environments (staging, CI and so on), and SharePoint and Office Apps.

    All these services, instances and configurations must be managed somehow. For that we have the Azure portal (new and old versions) and PowerShell. Neither of them is optimal in my opinion, and I needed something more agile: something I could use on the run to start, stop or reset my environments before a meeting, on my way home and so on, so that I can optimize my resource and cost usage for the services and save time.

    Introducing Azure Commander

    For all these purposes and reasons I have created a brand new Universal App for Windows 8.1 and Windows Phone 8.1 called Azure Commander. Using Azure Commander I never forget to shut down a Virtual Machine, since I can do it while commuting home, and I can easily restart any of my web sites when something hits the fan, and more.

    Azure Commander is in its initial release focused on maintenance tasks for Virtual Machines (IaaS), Web/Worker Roles (PaaS) and Azure Web Sites, but you will see more features being added continuously! I personally like being able to quickly fire up the app, choose a virtual machine and start an instant RDP session to it – from my laptop or Surface 2. For the full feature list head on over to the Azure Commander web site.

    The interface on Windows 8.1


    The interface on Windows Phone 8.1


    The App has been available for a couple of hours now in both the Windows and Windows Phone stores. You can get the app for only $3.99 – a price you save in an instant the first time the app reminds you to turn off a virtual machine, web site or service over the weekend. A good thing is that this is a Universal Windows App, which means that if you buy it on either platform you get it for free on the other one.

    Windows 8.1

    Download it from the Windows Store!

    Windows Phone 8.1

    Download it from the Windows Phone Store!

    Summary

    I hope you will enjoy the App as much as I do (both using it and building it). If you like it, please review it and follow Azure Commander on Twitter (@AzureCommander) or like it on Facebook.

  • Renewed as Microsoft Most Valuable Professional for the fifth time

    Tags: MVP, SharePoint

    April 1st 2014 is, for many, a day full of jokes, but for 966 individuals it is the day they are either awarded the Microsoft Most Valuable Professional (MVP) award for the first time or renewed as MVPs. I’m fortunate to be one of them this time, now for my fifth year!

    This award is given to exceptional technical community leaders who actively share their high quality, real world expertise with others. We appreciate your outstanding contributions in SharePoint Server technical communities during the past year.


    Thank you everyone for your support!

  • Workflow Manager Disaster Recovery – Preparations

    Tags: Workflow Manager, Series, Service Bus

    Introduction

    This is the first “real” post in the Workflow Manager Disaster Recovery series. In this post I will show you what you need to do to prepare for Disaster Recovery (DR) situations when working with Workflow Manager and Service Bus.

    The obvious

    Let’s start with the obvious pieces. You should run your Workflow Manager farm on three (3) servers (for more on this discussion see the SPC356 session). Running on three servers is important not just for high availability – it might save you from going into DR mode at all. DR should be considered the last resort. You should also consider one or more SQL Server high-availability options; more on this later.

    Data tier preparations

    The Data tier in Workflow Manager (WFM)/Service Bus (SB) is the actual set of SQL Server databases deployed during setup – a minimum of six (6) databases, three for WFM and three for SB. These databases should of course be backed up; all DR options for Workflow Manager require that you have a database backup or replica to restore from. Configure a good backup schedule with full, differential and transaction log backups according to your SLA. You should also keep these backups in sync – that is, don’t do SB backups on odd weeks and WFM backups on even weeks, for instance. They are normally not that large, and the backup process should be pretty fast.
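
    As an illustration only, here is a minimal sketch of scripting such a full backup with the SQLPS Backup-SqlDatabase cmdlet. The database names are the setup defaults, and the instance name and target path are pure examples – adjust all of them to your installation.

    # Minimal full backup sketch - database names are the setup defaults,
    # the instance and path are examples only
    Import-Module SQLPS -DisableNameChecking

    $instance   = "SQL01\WFM"        # example SQL Server instance
    $backupPath = "\\backup\wfm"     # example target share

    $databases = "WFManagementDB", "WFInstanceManagementDB", "WFResourceManagementDB",
                 "SbManagementDB", "SbGatewayDatabase", "SBMessageContainer01"

    foreach ($db in $databases) {
        # Add differential and transaction log backups according to your SLA
        Backup-SqlDatabase -ServerInstance $instance -Database $db `
            -BackupFile (Join-Path $backupPath "$db.bak")
    }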

    Secondly you must think of the high-availability (HA) options for these databases. As I said previously, this can save you from going into DR mode. Most SQL HA options can be used with Workflow Manager and Service Bus, for instance:

    • Mirroring
    • Always On Availability Groups with sync commit

     

    If you need to shorten your DR time/RTO (Recovery Time Objective), then you should not rely on SQL backups alone but also implement a SQL replication technique to replicate data to a secondary location. The following techniques work great:

    • Log Shipping
    • Always On Availability Groups with async commit

     

    Compute tier preparations

    Once you are in control of the databases, backups and, optionally, replicas you should also consider whether you would like to have a cold or warm standby. A warm standby will shorten your RTO compared to a cold standby. A hot standby Workflow Manager farm is NOT supported (a hot standby is a secondary running instance).

    • Cold standby – when a disaster occurs you create a brand new Workflow Manager/Service Bus farm using scripts and restore data from backups or replicated data.
    • Warm standby – a secondary farm configured, but with all the nodes (WFM and SB) turned off. Use scripts to resume the nodes, and once they are resumed run the consistency verification cmdlet (Invoke-WFConsistencyVerifier, more on this later in the series). A minimal resume sketch follows right after this list.
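
    For the warm standby option the resume boils down to starting Service Bus before Workflow Manager on the nodes. This is only a minimal sketch, assuming the standard Service Bus and Workflow Manager PowerShell modules are available on the node:

    # Run on a node in the warm standby farm
    # Start the Service Bus farm first...
    Start-SBFarm

    # ...then start the Workflow Manager host
    Start-WFHost

    # Finally run the consistency verification cmdlet
    # (Invoke-WFConsistencyVerifier - covered later in this series)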

     

    The Symmetric Key is the key

    In order to be successful, or to succeed at all, you must keep track of the Symmetric Key of the Service Bus. Without the Symmetric Key you cannot restore the Service Bus and you will lose all of your data. Keep it in a safe place!

    You can find the Symmetric Key by running the following PowerShell command:

    Get-SBNamespace -Name WorkflowDefaultNamespace

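    If you want to keep a copy of that output off the server, a minimal sketch is to dump the namespace details to a file in a secure location (the target path below is just an example):

    Get-SBNamespace -Name WorkflowDefaultNamespace |
        Format-List * |
        Out-File "\\secure\dr\WorkflowDefaultNamespace.txt"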

    Summary

    You’ve just read the first post of this series and you’ve learnt the basics of how to prepare for, and possibly avoid, a Disaster Recovery situation for Workflow Manager and Service Bus. In the following posts we will discuss what to do if and when disaster happens…

  • Workflow Manager Disaster Recovery and Restore options series

    Tags: Workflow Manager, Service Bus, Series, SharePoint 2013

    Introduction

    Welcome to a new series of blog posts in which we will focus on the Disaster Recovery (DR) routines for Workflow Manager 1.0 in combination with SharePoint 2013. During SharePoint Conference 2014, SharePoint sensei Spencer Harbar and I presented a session called “Designing, deploying, and managing Workflow Manager farms” (watch the video recording). During that session we discussed different DR options for Workflow Manager and the Service Bus, and we got tons of questions on that specific topic. We did not have time to go into details and we did not show any of the scripts/routines you need when restoring a Workflow Manager farm or Workflow Scopes, and there is very little information available on that topic on the interwebs – so that is why this new blog series is being posted.

    Index of posts

    This blog post is just the introductory post and will be used as a placeholder for all the posts in the series. Instead of writing one behemoth post I will split the content into multiple posts, which will be a little bit easier for you to consume as well.

    These are the planned posts, written and unwritten; they will be linked to as they are published.

    • Workflow Manager Disaster Recovery – Index post (this post)
    • Workflow Manager Disaster Recovery – Preparations
    • Workflow Manager Disaster Recovery – Recover a single Scope
    • Workflow Manager Disaster Recovery – Recover a single database
    • Workflow Manager Disaster Recovery – Recover a single machine
    • Workflow Manager Disaster Recovery – Recover a full farm

     

    If you have any ideas on what more options to cover then feel free to post a comment below.

    Note: All examples in this series use Windows Server 2012 R2, SharePoint 2013 Service Pack 1 and Workflow Manager 1.0 Refresh (plus Service Bus 1.1).

    Further reading

    For proper configuration of the Workflow Manager Farm see the following posts by Spencer Harbar:

  • Issue when installing Workflow Manager 1.0 Refresh using PowerShell

    Tags: Workflow Manager, SharePoint 2013

    Introduction

    When using the Web Platform Installer to download and/or install Workflow Manager, you can no longer get Workflow Manager 1.0 or Workflow Manager 1.0 CU1. The only option is to download Workflow Manager 1.0 Refresh (which essentially is CU2). So when installing a new Workflow Manager farm, for SharePoint or just because you want to rock some workflows, you have to use Workflow Manager (WFM) 1.0 Refresh – unless you’ve been smart and previously downloaded and saved the original Workflow Manager bits. When using WFM 1.0 Refresh you also need to download Service Bus 1.1.

    webpicmd.exe listing of available components
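
    If you want to check this from the command line yourself, a rough sketch is to ask the Web Platform Installer for its available products and filter the output. Note that the WebpiCmd.exe path below is an assumption and may differ on your machine:

    # List available WebPI products and filter for Workflow Manager / Service Bus
    & "$env:ProgramFiles\Microsoft\Web Platform Installer\WebpiCmd.exe" /List /ListOption:Available |
        Select-String -Pattern "Workflow", "ServiceBus"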

    Trouble in paradise!

    Installing the bits works like a charm and if you configure the Workflow Manager farm and the Service Bus using the wizard everything is golden. But if you’re a real professional then you of course use a scripted and repeatable approach and use PowerShell to build out your Workflow Manager farm.

    Most of the PowerShell commands and steps work as expected when creating the Service Bus farm, creating the Workflow Manager farm, adding the first Service Bus host and creating the Service Bus namespace. But when you try to add a Workflow host to the machine using the Add-WFHost cmdlet, and it tries to connect to the Service Bus namespace, you get an error like this:

    Add-WFHost : Cannot validate argument on parameter 'SBClientConfiguration'.
    Could not load file or assembly
    'Microsoft.ServiceBus, Version=1.8.0.0, Culture=neutral, 
    PublicKeyToken=31bf3856ad364e35' or one of its dependencies.
    The system cannot find the file specified.

    If we crack open Microsoft.Workflow.Deployment.Commands.dll using our favorite disassembly tool, we can see that it actually references version 1.8.0.0 of Microsoft.ServiceBus.dll. Obviously a bug in this release!

    Solve the problem using basic .NET features

    The easiest way to work around it is of course to use the wizard, which works, but that is for slackers. So let’s instead use some basic .NET Framework knowledge and make an assembly redirection. Assembly redirection is a way to redirect one version of an assembly to another, so that when the .NET Framework tries to load version 1.8.0.0 we ask it to load version 2.1.0.0 (which is the Service Bus 1.1 version number) instead. There are a couple of ways to do this:

    • We can update the machine.config file and make a machine wide redirection – I do not recommend this approach for obvious reasons
    • We can update the PowerShell.exe.config file and make the redirection for all PowerShell sessions – not a good approach either
    • Or we can dynamically load the configuration data in our PowerShell session – Eureka, that’s the way to do it!

    In order to do the assembly redirection we need to create a .config file which tells the .NET Framework how to do the redirection. This is what the .config file should look like; in this case I named it wfm.config. If you want to dig deeper into assembly redirection there is of course documentation on MSDN.

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <assemblyIdentity name="Microsoft.ServiceBus"
              publicKeyToken="31bf3856ad364e35"
          culture="neutral" />
            <bindingRedirect oldVersion="1.8.0.0" newVersion="2.1.0.0" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>

    To load this configuration into the current AppDomain we use the SetData() method of the current AppDomain and pass in the path to the file like this:

    $filename = Resolve-Path .\wfm.config
    [System.AppDomain]::CurrentDomain.SetData("APP_CONFIG_FILE", $filename.Path)

    As you can see from above, we create a variable with the path to the .config file and then grab the current AppDomain and invoke the SetData() method with the APP_CONFIG_FILE key and the filename as value.

    Once you have created the file and executed those two lines of PowerShell you can use Add-WFHost once again and continue your Workflow Manager configuration.
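
    If you want to verify that the redirect is active before retrying, here is a quick sketch (it assumes a fresh PowerShell session where the config file was loaded before any Service Bus assembly was touched):

    # Request the old version - with the redirect in place the loaded assembly
    # should report Version=2.1.0.0 in its full name
    $asm = [System.Reflection.Assembly]::Load(
        "Microsoft.ServiceBus, Version=1.8.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35")
    $asm.FullName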

    Summary

    In this post you’ve seen a (major) issue with the Workflow Manager 1.0 Refresh, where you cannot add new Workflow Hosts to your Workflow Manager farm, due to an invalid assembly reference. Using “basic” .NET framework knowledge we can “fix” this issue by using assembly redirection. Hopefully someone from the WFM team will pick this up and make a refresh of the refresh.

  • SPC14: Scripts for Mastering Office Web Apps 2013 operations and deployments

    Tags: Office Web Apps, Presentations, WAC Server, Exchange 2013, SharePoint 2013

    Here’s another post with scripts from my sessions at SharePoint Conference 2014 – this time from the Mastering Office Web Apps 2013 Operations and Deployments session (SPC383). To get a more in-depth explanation of all the details, please watch the recording at Channel 9.

    Let’s start…but first! OWA = Outlook Web App and WAC = Office Web Apps (Web Application Companion).

    Preparing the machine before installing Office Web Apps

    Before you install the Office Web Apps bits on the machine you need to install a set of Windows Features. The following script is the one you should use (not the one on TechNet) and it works for Windows Server 2012 and Windows Server 2012 R2.

    # WAC 2013 preparation for Windows Server 2012 (R2)
    Import-Module ServerManager
    
    # Required Features
    Add-WindowsFeature NET-Framework-45-Core,NET-Framework-45-ASPNET,`
        Web-Mgmt-Console,Web-Common-Http,Web-Default-Doc,Web-Static-Content,`
        Web-Filtering,Web-Windows-Auth,Web-Net-Ext45,Web-Asp-Net45,Web-ISAPI-Ext,`
        Web-ISAPI-Filter,Web-Includes,InkAndHandwritingServices,NET-WCF-HTTP-Activation45
    
    # Recommended Features
    Add-WindowsFeature Web-Stat-Compression,Web-Dyn-Compression
    
    # NLB
    Add-WindowsFeature NLB, RSAT-NLB
    
    
    Note that I also add the NLB features here – they are not required if you use another load balancer.

    Starting a sysprepped WAC machine

    The following script is used when booting up a sysprepped machine that has all the Office Web Apps binaries installed (including patches and language packs), but no WAC configuration whatsoever. It simply configures a NIC on the box, joins the machine to a domain and renames the machine – a very simple script that can be automated. Most of the scripts, just like this one, contain a set of variables at the beginning of the script, which makes them much easier to modify and work with.

    $domain = "corp.local"
    $newName = "WACSPC2"
    $ou = "ou=WAC,dc=corp,dc=local"
    
    $ethernet = "Ethernet1"
    $ip = "172.17.100.96" 
    $prefix = 24
    $dns = "172.17.100.1"
    
    # Set IP
    Write-Host -ForegroundColor Green "Configuring NIC..."
    New-NetIPAddress -InterfaceAlias $ethernet -IPAddress $ip -AddressFamily IPv4 -PrefixLength $prefix 
    Set-DnsClientServerAddress -InterfaceAlias $ethernet -ServerAddresses $dns
    
    # Verify 
    ping 172.17.100.1
    
    # Get creds
    $credentials = Get-Credential -Message "Enter credentials with add computer to domain privileges"
    
    # Join domain
    Write-Host -ForegroundColor Green "Joining domain..."
    Add-Computer -Credential $credentials -DomainName $domain -OUPath $ou 
    
    # rename
    Write-Host -ForegroundColor Green "Renaming machine..."
    Rename-Computer -NewName $newName
    
    # Reboot
    Write-Host -ForegroundColor Green "Restarting..."
    Restart-Computer

    Once this script is executed the machine should reboot and be joined to a domain.
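
    After the reboot, a quick way to verify the rename and the domain join could look like this:

    # Should show the new computer name and the corp.local domain
    Get-WmiObject Win32_ComputerSystem | Select-Object Name, Domain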

    Configure a new Office Web Apps Farm

    Once the machine is joined to the domain it is time to configure Office Web Apps. If you want more information about the variables/parameters I use I recommend watching the session! These variables are solely for demo purposes and you should adapt them to your needs. Also this step requires that you have a valid certificate (pfx file) that conforms to the WAC certificate requirements.

    # New WAC Farm
    Import-Module OfficeWebApps
    
    $farmou = "WAC"                           # Note the format!!
    $internalurl = "https://wacspc.corp.local"
    $externalurl = "https://wacspc.corp.com"
    $cachelocation = "c:\WACCache\"           # %programdata%\Microsoft\OfficeWebApps\Working\d\
    $loglocation = "c:\WACLog\"               # %programdata%\Microsoft\OfficeWebApps\Data\Logs\ULS\
    $rendercache = "c:\WACRenderCache\"       # %programdata%\Microsoft\OfficeWebApps\Working\waccache
    $size = 5                                 # Default 15GB
    $docinfosize = 1000                       # Default 5000
    $maxmem = 512                             # Default 1024
    $cert = "wacspc.corp.local.pfx"              # File name
    $certname = "wacspc.corp.local"              # Friendly name
    
    
    $certificate = Import-PfxCertificate -FilePath (Resolve-Path $cert) -CertStoreLocation  Cert:\LocalMachine\My -ea SilentlyContinue 
    $certificate.DnsNameList | ft Unicode
    
    
    New-OfficeWebAppsFarm -FarmOU $farmou `
        -InternalURL $internalurl `
        -ExternalURL $externalurl `
        -OpenFromUrlEnabled `
        -OpenFromUncEnabled `
        -ClipartEnabled `
        -CacheLocation $cachelocation `
        -LogLocation $loglocation `
        -RenderingLocalCacheLocation $rendercache `
        -CacheSizeInGB $size `
        -DocumentInfoCacheSize $docinfosize `
        -MaxMemoryCacheSizeInMB $maxmem `
        -CertificateName $certname `
        -EditingEnabled `
        -Confirm:$false
        
        
    (Invoke-WebRequest https://wacspc1.corp.local/m/met/participant.svc/jsonAnonymous/BroadcastPing).Headers["X-OfficeVersion"]
    
    

    As a last step I do a verification of the local machine and retrieve the current Office Web Apps version.

    Create the NLB cluster

    In my session I used NLB for load balancing. The following script creates the cluster and adds the machine as the first node of that cluster. The script also installs the DNS RSAT feature and adds two DNS A records for the internal and external names of the Office Web Apps server. That last step is not required and might/should be handled by your DNS operations team.

    # Create NLB Cluster
    $ip = "172.17.100.97"
    $interface = "Ethernet1"
    
    # New NLB Cluster
    New-NlbCluster -ClusterPrimaryIP $ip -InterfaceName $interface -ClusterName "SPCWACCluster" -OperationMode Unicast -Verbose
    
    # DNS Bonus
    Add-WindowsFeature  RSAT-DNS-Server  
    Import-Module DnsServer
    Add-DnsServerResourceRecordA -Name "wacspc" -ZoneName "corp.local" -IPv4Address $ip -ComputerName ( Get-DnsClientServerAddress $interface  -AddressFamily IPv4).ServerAddresses[0]
    ping wacspc.corp.local
    Add-DnsServerResourceRecordA -Name "wacspc" -ZoneName "corp.com" -IPv4Address $ip -ComputerName ( Get-DnsClientServerAddress $interface  -AddressFamily IPv4).ServerAddresses[0]
    ping wacspc.corp.com

    Adding additional machines to the WAC farm

    Adding additional machines to the WAC farm is easy: just make sure you have the certificate (pfx file) and run the following script on the additional machines:

    Import-Module OfficeWebApps
    
    $server = "wacspc1.corp.local"
    $cert = "wacspc.corp.local.pfx"  
    
    Import-PfxCertificate -FilePath (Resolve-Path $cert) -CertStoreLocation  Cert:\LocalMachine\My -ea SilentlyContinue 
    
    New-OfficeWebAppsMachine -MachineToJoin $server
    
    # Verify
    (Get-OfficeWebAppsFarm).Machines

    Configuring NLB on the additional WAC machines

    And of course you need to configure NLB and add the new WAC machines into your NLB cluster:

    $hostname = "WACSPC1"
    $interface = "Ethernet1"
    
    Get-NlbCluster -HostName $hostname | Add-NlbClusterNode -NewNodeName $env:COMPUTERNAME -NewNodeInterface $interface

    Those are all the scripts I used in the session to set up my WAC farm. All that is left is to connect SharePoint to your Office Web Apps farm.

    Configure SharePoint 2013

    In SharePoint 2013 you need to add WOPI bindings to the Office Web Apps farm. The following script will add all the WOPI bindings and also start a full crawl (required for the search previews):

    The first part (commented out in this script) should only be used if your SharePoint farm is running over HTTP (which it shouldn’t of course!).

    asnp microsoft.sharepoint.powershell -ea 0
    
    # SharePoint using HTTPS?
    #(Get-SPSecurityTokenServiceConfig).AllowOAuthOverHttp
    #$config = Get-SPSecurityTokenServiceConfig
    #$config.AllowOAuthOverHttp = $true
    #$config.Update()
    #(Get-SPSecurityTokenServiceConfig).AllowOAuthOverHttp
    
    # Create New Binding
    New-SPWOPIBinding -ServerName wacspc.corp.local
    Get-SPWOPIBinding | Out-GridView
    
    # Check the WOPI Zone
    Get-SPWOPIZone
    
    # Start full crawl
    $scope = Get-SPEnterpriseSearchServiceApplication | 
        Get-SPEnterpriseSearchCrawlContentSource | 
        ?{$_.Name -eq "Local SharePoint Sites"}
    
    $scope.StartFullCrawl()
    
    # Wait for the crawl to finish...
    while($scope.CrawlStatus -ne [Microsoft.Office.Server.Search.Administration.CrawlStatus]::Idle) {
        Write-Host -ForegroundColor Yellow "." -NoNewline
        Sleep 5
    }
    Write-Host -ForegroundColor Yellow "."

    Connect Exchange 2013 to Office Web Apps

    In the session I also demoed how to connect Office Web Apps and Exchange 2013. The important things to remember here are that you need to specify the full URL to the discovery end-point and that you need to restart the OWA web application pool.

    # WAC Discovery Endpoint
    Set-OrganizationConfig -WacDiscoveryEndpoint https://wacspc.corp.local/hosting/discovery
    
    # Recycle OWA App Pool
    Restart-WebAppPool -Name MSExchangeOWAAppPool
    
    # (Opt) Security settings
    Set-OwaVirtualDirectory "owa (Default Web Site)" -WacViewingOnPublicComputersEnabled $true -WacViewingOnPrivateComputersEnabled $true

    Summary

    I know I kept all the instructions in this blog post short. You really should watch the recording to get the full picture. Good luck!

  • SPC14: Scripts for Real-world SharePoint Architecture decisions

    Tags: Conferences, SharePoint 2013

    As promised, I will hand out all the scripts I used in my SharePoint Conference 2014 sessions. The first set of scripts is from the demo used in the Real-world SharePoint Architecture decisions session (SPC334). This session contained only one demo, in which I showed how to set up a single content Web Application and use Host Named Site Collections when creating Site Collections.

    Creating the Web Application and the Root Site Collection

    The first part of the script was to create the Web Application using SSL, configure the certificate in IIS and then create the Root Site Collection. The Web Application is created using the –Url parameter pointing to a FQDN, instead of using the server name (which is used in the TechNet documentation, and causes a dependency on that specific first server). Secondly the script assumes that the correct certificate is installed on the machine and we grab that certificate using the friendly name (yes, always have a friendly name on your certificates, it will make everything much easier for you). A new binding is then created in IIS using the certificate. Finally the Root Site Collection is created (it is a support requirement) – the Root Site Collection uses the same URL as the Web Application and we should not specify any template or anything. This will be a site collection that no end-user should ever use.

    asnp *sh*
    
    # New Web Application 
    $wa = New-SPWebApplication `
        -Url 'https://root.spc.corp.local/' `
        -SecureSocketsLayer `
        -Port 443 `
        -ApplicationPool 'Content Applications' `
        -ApplicationPoolAccount 'CORP\spcontent' `
        -Name "SP2013SPC Hosting Web App " `
        -AuthenticationProvider (New-SPAuthenticationProvider) `
        -DatabaseName 'SP2013SPC_WSS_Content_Hosting_1' `
        -DatabaseServer 'SharePointSql'
    
    
    # Get Certificate
    $certificate = Get-ChildItem cert:\LocalMachine\MY | 
        Where-Object {$_.FriendlyName -eq "spc.corp.local"} | 
        Select-Object -First 1
    $certificate.DnsNameList | ft Unicode
    
    
    # Add IIS Binding
    Import-Module WebAdministration
    $binding = "IIS:\SslBindings\0.0.0.0!443"
    Get-Item $binding -ea 0 
    $certificate | New-Item $binding
    
    
    # Root site
    New-SPSite `
        -Url https://root.spc.corp.local `
        -OwnerAlias CORP\spinstall
    

    Creating Host Named Site Collections

    Secondly we created a Host Named Site Collection (HNSC) in our Web Application. For HNSCs this can only be done in PowerShell, not in Central Administration, and we MUST use the –HostHeaderWebApplication parameter, which MUST have the value of the Web Application URL.

    New-SPSite `
        -Url https://teams.spc.corp.local `
        -Template STS#0 `
        -OwnerAlias CORP\Administrator `
        -HostHeaderWebApplication https://root.spc.corp.local
    

     

    My Site host and Personal Sites

    If you would like to have Personal Sites and the My Site Host in the same Web Application (which in many cases is a good approach), then you must make sure Self Service Site Creation is enabled on the Web Application and then use the following scripts. The script will first create a farm-level Managed Path for Host Named Site Collections by using the –HostHeader parameter. Then we just create the My Site Host as a Host Named Site Collection. (A sketch for enabling Self Service Site Creation follows after the script.)

    # Personal Sites
    New-SPManagedPath `
        -RelativeUrl 'Personal' `
        -HostHeader
    
    # My Site Host
    New-SPSite `
        -Url https://my.spc.corp.local `
        -Template SPSMSITEHOST#0 `
        -OwnerAlias CORP\Administrator `
        -HostHeaderWebApplication https://root.spc.corp.local
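
    The scripts above assume Self Service Site Creation is already turned on for the Web Application. If it is not, a minimal sketch using the example root URL from above could look like this:

    # Enable Self Service Site Creation on the hosting Web Application
    $wa = Get-SPWebApplication https://root.spc.corp.local
    $wa.SelfServiceSiteCreationEnabled = $true
    $wa.Update()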
    

     

    Configure search

    In the session I also explained why you should have a dedicated Content Source for People Search (watch the session for more info). And using the following script we add the correct start addresses to the two content sources, based on the Site Collections created above:

    # Configure Search
    $ssa = Get-SPEnterpriseSearchServiceApplication
    
    # The root web application
    $cs = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "Local SharePoint sites"
    $cs.StartAddresses.Add("https://root.spc.corp.local")
    $cs.Update()
    
    # People search
    $cs = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "Local People Results"
    $cs.StartAddresses.Add("sps3s://my.spc.corp.local")
    $cs.Update()

    Once this is done, you just kick off a full crawl of the People Search and then wait for a couple of hours (so the Analytics engine does its job) before you start the crawl of the content.
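
    Kicking off the people crawl can be scripted in the same way as the content crawl shown earlier; a minimal sketch:

    # Start a full crawl of the People content source
    $people = Get-SPEnterpriseSearchServiceApplication |
        Get-SPEnterpriseSearchCrawlContentSource |
        ?{$_.Name -eq "Local People Results"}

    $people.StartFullCrawl()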

    Summary

    Those were all the scripts I used during the demo in the SPC334 session. It’s a proven method and I hope you can incorporate them into your deployment scripts.

  • SPC 14 sessions, recordings and wrap-up

    Tags: Presentations, SharePoint 2013, Office Web Apps, Conferences

    Wow, that was an awesome conference! SharePoint Conference 2014 is over and I’m very glad I attended the conference – both as a speaker and attendee. Finally Microsoft and the SharePoint Product Group told us about their future and vision for SharePoint and SharePoint Online. If you knew how long we have waited for this…

    I’m glad they have started to sort out the service (i.e. Office 365) and can now add new capabilities to the platform.
    I’m glad Jeff Teper officially said that there will be at least one more version of SharePoint on-premises.
    I’m glad that the product group is listening to our feedback and our customers’ feedback.
    I’m glad that we have such a strong community.
    I’m excited about the future of SharePoint (to be honest, it’s been some time since I had that feeling).

    My sessions

    As a first-time speaker at this event I was a bit nervous, which those who attended my sessions might have noticed. I’m proud that so many people turned up at my sessions, especially the architecture session, where we had people standing in the back and 90 minutes of Q&A at the end! That was cool! Unfortunately the room where I had all three of my sessions suffered from severe microphone issues (which impacted my session ratings); apart from that, everything except one demo was a success. Everything was recorded, so if you did not have time to attend my sessions or just want to see them again, here they are:

    Real-world SharePoint architecture decisions (SPC334)

    Mastering Office Web Apps 2013 operations and deployments (SPC383)

    Designing, deploying, and managing Workflow Manager farms (SPC356)

    Co-presented with Spencer Harbar.

    Summary

    If you have any questions on my sessions, feel free to post them here. And before you ask – yes, I will post all the PowerShell scripts I used, but in a separate blog post(s).

    If you’d like to watch more videos from SPC14, head on over to Channel 9 and take a look at any of the keynotes and sessions for free. I’m really looking forward to seeing what’s up next with SharePoint; I think the next conference (whatever it will be called) will be something very different from this one.

  • Using SQL Server Resource Governor to optimize SharePoint 2013 performance

    Tags: SharePoint 2013, SQL Server

    Introduction

    We all know that one of the most important parts of SharePoint 2013 (and 2003, 2007 and 2010) is SQL Server. Bad SQL Server performance will lead to bad SharePoint performance! That’s just how it is! There are tons of ways to improve SQL Server performance: having enough cores, adding more RAM, using fast disks, using multiple instances and even multiple servers. You should already be familiar with all of this.

    Search is one of the components in SharePoint that requires A LOT of resources, especially when crawling and doing analytics. For both SQL Server and SharePoint Search there is plenty of documentation on how to optimize the hardware and configuration of these components. In this post I will explain and show you how to use the SQL Server Resource Governor to optimize the usage of SQL Server, especially for Search.

    SQL Server Resource Governor

    The Resource Governor was introduced in SQL Server 2008 and is a feature that allows you to govern system resource consumption using custom logic. You can specify CPU and memory limits for incoming sessions. Note that the Resource Governor is a SQL Server Enterprise feature (but it is also present in the Developer and Evaluation editions).

    The Resource Governor is disabled by default and you have to turn it on. Just turning it on doesn’t do anything for you, though – you also have to configure the Resource Pools, the Workload Groups and the Classification.

    Resource Pools

    Resource Pools represent the physical resources of the server, that is CPU and memory. Each resource has a minimum value and a maximum value. The minimum value is what the Resource Governor guarantees the resource pool has access to (those resources are not shared with other resource pools), and the maximum value is the upper limit (which can be shared with other pools). By default SQL Server creates two Resource Pools: internal and default. The internal pool is what SQL Server itself uses and the default pool is a … default pool :-). Resource Pools can be created using T-SQL or the SQL Server Management Studio.

    Workload Groups

    Each Resource Pool can have one or more Workload Groups, and the Workload Groups are where the sessions are sent (by the Classifier, see below). Each Workload Group can be assigned a set of policies and can be used for monitoring. Workload Groups can be moved from one Resource Pool to another, and they can be created using T-SQL or the SQL Server Management Studio.

    Classification

    The classification of requests/sessions is done by the Classifier function. The Classifier function (there can be only one) handles the classification of incoming requests and sends them to a Workload Group using your custom logic. The Classifier function can only be created using T-SQL.

    Using SQL Server Resource Governor to optimize Search Database usage

    So, how can we use the Resource Governor to improve or optimize our SharePoint 2013 performance? One thing (among many) we can do is take a look at how Search crawling affects the farm. While crawling, the crawler, apart from hammering the web servers being crawled (which you should have dedicated servers for), also uses lots of SQL Server resources. In cases where you only have one SQL Server (server, cluster, availability group etc.) all your databases will be affected by this, and one thing you don’t want to do is annoy your users with a slow SharePoint farm during their work day. What we can do here, using the Resource Governor, is make sure that during normal work hours the Search databases are limited to a certain amount of CPU (or RAM).

    Configure the SQL Server Resource Governor to limit resource usage of Search databases

    The following is one example of how you can configure SQL Server to limit the resource usage of the SharePoint Search databases during work hours and not limit them during night time. All the following code is executed as a sysadmin in the SQL Server Management Studio.

    Create the Resource Pools

    We need two Resource Pools in this example – one for sessions using the Search databases during work hours (SharePoint_Search_DB_Pool) and one for sessions using the Search databases during off-work hours (SharePoint_Search_DB_Pool_OffHours). We configure the work-hours pool to use at most 10% of the total CPU resources and the off-hours pool to use at most 80%. In T-SQL it looks like this:

    USE master
    GO
    CREATE RESOURCE POOL SharePoint_Search_DB_Pool
    WITH
    (
    	MAX_CPU_PERCENT = 10,
    	MIN_CPU_PERCENT = 0
    )
    GO
    CREATE RESOURCE POOL SharePoint_Search_DB_Pool_OffHours
    WITH
    (
    	MAX_CPU_PERCENT = 80,
    	MIN_CPU_PERCENT = 0
    )
    GO

    Create the Workload Groups

    The next thing we need to do is to create two Workload Groups (SharePoint_Search_DB_Group and SharePoint_Search_DB_Group_OffHours) and associate them with the corresponding Resource Pool:

    CREATE WORKLOAD GROUP SharePoint_Search_DB_Group
    WITH
    (
    	IMPORTANCE = MEDIUM
    )
    USING SharePoint_Search_DB_Pool
    GO
    CREATE WORKLOAD GROUP SharePoint_Search_DB_Group_OffHours
    WITH
    (
    	IMPORTANCE = LOW
    )
    USING SharePoint_Search_DB_Pool_OffHours
    GO
    

    After this we need to apply the configuration and enable the Resource Governor, which is done using this T-SQL:

    ALTER RESOURCE GOVERNOR RECONFIGURE
    GO

    Create the Classifier function

    The Resource Pools and Workload Groups are now created and the Resource Governor should start working. But all incoming requests are still going to the default Resource Pool and Workload Group. To configure how the Resource Governor chooses a Workload Group we need to create the Classifier function. The Classifier function is a T-SQL function (created in the master database) that returns the name of the Workload Group to use.

    The following Classifier function checks if the name of the database contains “Search” – then we assume that it is a SharePoint Search database (of course you can modify it to use “smarter” selection). During normal hours it will return the SharePoint_Search_DB_Group and between 00:00 and 03:00 it will return the SharePoint_Search_DB_Group_OffHours group for the Search databases. For any other database it will return the “default” Workload Group.

    CREATE FUNCTION fn_SharePointSearchClassifier()
    RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
    	DECLARE @time time
    	DECLARE @start time
    	DECLARE @end time
    	
    	SET @time = CONVERT(time, GETDATE())
    	SET @start = CONVERT(time, '00:00')
    	SET @end = CONVERT(time, '03:00')
    	IF PATINDEX('%search%',ORIGINAL_DB_NAME()) > 0 
    	BEGIN 
    		IF @time > @start AND @time < @end 
    		BEGIN
    			RETURN N'SharePoint_Search_DB_Group_OffHours'
    		END
    		RETURN N'SharePoint_Search_DB_Group'
    	END
    	RETURN N'default'
    END
    GO

    This is the core of our logic for selecting the appropriate Workload Group. You can modify this function to satisfy your needs (you need to set the Classifier to null and reconfigure the Resource Governor, alter the function, and then set it back and reconfigure again whenever you need to change it). An important thing to remember is that there can only be one Classifier function per Resource Governor, and this function will be executed for every new session that is started.
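
    If you prefer to script that classifier swap instead of running it by hand in Management Studio, a rough sketch using Invoke-Sqlcmd (the module and instance name below are assumptions) could look like this:

    Import-Module SQLPS -DisableNameChecking      # example; module name may differ
    $instance = "SharePointSql"                   # example SQL Server instance

    # Detach the classifier so the function can be altered
    Invoke-Sqlcmd -ServerInstance $instance -Database master `
        -Query "ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL); ALTER RESOURCE GOVERNOR RECONFIGURE;"

    # ...ALTER dbo.fn_SharePointSearchClassifier here, then attach it again
    Invoke-Sqlcmd -ServerInstance $instance -Database master `
        -Query "ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_SharePointSearchClassifier); ALTER RESOURCE GOVERNOR RECONFIGURE;"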

    To connect the Classifier function to the Resource Governor there is one more thing that we need to do. First the connection and then tell the Resource Governor to update its configuration:

    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_SharePointSearchClassifier)
    GO
    ALTER RESOURCE GOVERNOR RECONFIGURE
    GO
    

    Verification

    You should now immediately be able to see that the Resource Governor starts to use this Classifier function. Use the following DMVs to check the usage of the Resource Pools and Workload Groups respectively.

    SELECT *  FROM sys.dm_resource_governor_resource_pools
    SELECT *  FROM sys.dm_resource_governor_workload_groups

    Resource Governor DMV views

    As you can see from the image above, the Resource Governor has started to redirect sessions to the SharePoint_Search_DB_Group.

    Another useful T-SQL query for inspecting the usage is the following, which lists all sessions together with the Workload Group they use and where they originate from.

    SELECT CAST(g.name as nvarchar(40)) as WorkloadGroup, s.session_id, CAST(s.host_name as nvarchar(20)) as Server, CAST(s.program_name AS nvarchar(40)) AS Program
              FROM sys.dm_exec_sessions s
         INNER JOIN sys.dm_resource_governor_workload_groups g
              ON g.group_id = s.group_id
    ORDER BY g.name
    GO
    

    Summary

    In this post you have been introduced to the SQL Server Resource Governor and seen how you can use it to configure your SharePoint environment so that crawling has minimal impact on the SQL Server databases during normal work hours.

    Remember that this is a sample and you should always test and verify that the Resource Pool configurations and the Classifier logic work optimally in your environment.

  • Office Web Apps 2013: Excel Web App ran into a problem - not rendering Excel files

    Tags: Office Web Apps, WAC Server

    Introduction

    This is a story from the trenches where Excel Web App in Office Web Apps 2013 refuses to render Excel documents, while other apps such as Word and PowerPoint work just fine. The end users are met with the generic error message: “We’re sorry. We ran into a problem completing your request.”

    Houston - we got a problem

    The problem is easy to solve but can be somewhat difficult to locate and in this post I will show you how to find the issue and fix it.

    Symptoms

    Whenever Office Web Apps 2013 fails to render a document it shows the end users a generic error message without any details. Fortunately the Office Web Apps server has good logging mechanisms that will in most cases give you an idea of where to look – and in some cases the cause is written in clear text.

    This specific issue, for the Excel Web App, shows itself in three different places (apart from the error message shown in the user interface). First of all, “normal” sys admins will see a couple of errors in the System event log, manifesting themselves like this:

    System log

    Event Id 5011:

    A process serving application pool 'ExcelServicesEcs' suffered a fatal 
    communication error with the Windows Process Activation Service. 
    The process id was '2168'. The data field contains the error number.

    Event Id 5002:

    Application pool 'ExcelServicesEcs' is being automatically disabled due to a series 
    of failures in the process(es) serving that application pool.
    

     

    Pretty nasty messages which do not give you a clue, except that something is horribly wrong. There are also lots of Dr. Watson log entries in the Application log, which might cause the admin to start looking up the Microsoft support phone number.

    The more “clever” admin knows that Office Web Apps actually has its own log in the Event Viewer. When checking that log, messages like the following are shown for the Excel Web App:

    Event Id 2026:

    An internal error occurred.
       at System.Diagnostics.PerformanceCounterLib.CounterExists(String machine, String category, String counter)
       at System.Diagnostics.PerformanceCounter.InitializeImpl()
       at System.Diagnostics.PerformanceCounter..ctor(String categoryName, String counterName, String instanceName, Boolean readOnly)
       at System.Diagnostics.PerformanceCounter..ctor(String categoryName, String counterName, Boolean readOnly)
       at Microsoft.Office.Excel.Server.CalculationServer.ExcelServerApp.Initialize()
       at Microsoft.Internal.Diagnostics.FirstChanceHandler.ExceptionFilter(Boolean fRethrowException, 
          TryBlock tryBlock, FilterBlock filter, CatchBlock catchBlock, FinallyBlock finallyBlock)

    This should actually start to give you an idea – something is wrong with the performance counters on this machine. The worst thing to do here is to start fiddling with the registry to try to fix it, or to start adding users/groups to the performance counter groups.

    The “smartest” Office Web Apps admin then takes a look at the Trace Logs (ULS) (and that admin most likely read my SharePoint post “The Rules of SharePoint Troubleshooting” – if not, he/she should!). This is what will be found:

    Excel Web App                 	Excel Calculation Services    	cg34	Unexpected	Unexpected exception occured 
      while trying to access the performance counters registry key. Exception: System.InvalidOperationException: Category does not 
      exist.     at System.Diagnostics.PerformanceCounterLib.CounterExists(String machine, String category, String counter)     at ...
    Excel Web App                 	Excel Calculation Services    	89rs	Exception	ExcelServerApp..ctor: An unhandled exception 
      occurred during boot. Shutting down the server. System.InvalidOperationException: Category does not exist.     at 
      System.Diagnostics.PerformanceCounterLib.CounterExists(String machine, String category, String counter)     at 
      System.Diagnostics.PerformanceCounter.InitializeImpl()     at ...
    Excel Web App                 	Excel Calculation Services    	89rs	Exception	...atchBlock, FinallyBlock 
      finallyBlock) StackTrace:  at uls.native.dll: (sig=4635455b-a5d6-499c-b7f2-935d1d81cf8f|2|uls.native.pdb, offset=26E32) at 
      uls.native.dll: (offset=1F8A9)	 
    

    The key thing here is the “Category does not exist” message.

    When the Excel Web App Calculation Services (and the Excel Calc Watchdog) starts, it tries to read a performance counter. If that performance counter is not found – it just crashes!

    Unfortunately there is no good way to find out which performance counter it is trying to use, except firing up good ole Reflector. Using that tool we can find that it is trying to access an ASP.NET performance counter.

    Resolution

    The fix for the problem is easy – we just need to register/update the performance counters for ASP.NET. This is done using the lodctr.exe tool like this:

    lodctr C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_perf.ini

    Give it a few seconds, then retry loading an Excel file using Office Web Apps, and all your users should once again be happy.
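
    If you want to confirm that the counter category is back before testing in the browser, a quick check from PowerShell (assuming, as the reflected code suggests, that it is the ASP.NET category) could be:

    # Returns True once the ASP.NET performance counter category is registered
    [System.Diagnostics.PerformanceCounterCategory]::Exists("ASP.NET")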

    Summary

    A simple fix to an annoying problem, which could be difficult to locate unless you know where to look (and in this case also have the skillz to read some reflected code).

    This error might not be so common, but it shows the importance of having a correctly installed machine and that you shouldn’t go fiddling with settings or the registry if you’re not really sure on what you’re doing – ok, not even then…

About Wictor...

Wictor Wilén is a Director and SharePoint Architect working at Connecta AB. Wictor has achieved the Microsoft Certified Architect (MCA) - SharePoint 2010, Microsoft Certified Solutions Master (MCSM) - SharePoint  and Microsoft Certified Master (MCM) - SharePoint 2010 certifications. He has also been awarded Microsoft Most Valuable Professional (MVP) for four consecutive years.
