Contents tagged with SQL Server
After doing the Microsoft Cloud Show interview with Andrew Connell I thought it might be a good idea to write down some of my tips and tricks for running SharePoint 2013 on Azure IAAS. Some of the things in this post are discussed in more depth in the interview, and some we just didn't have time to talk about (or I forgot). I really recommend listening to the podcast as well, not just reading this post.
Disks, disks and disks
As mentioned on the Microsoft Cloud Show interview more than once, one of the first things you should look into is your disk configuration for your Azure VM’s.
Use a lot of disks
One of the first things you must look into is disk performance. As you should be aware, SharePoint and SQL Server require fast disks to operate with decent performance. When running an on-premises installation (virtual or physical) you have almost full control of the disk performance – you can choose anything from fast spinning disks to SSDs to different RAID configurations. You cannot do that in Microsoft Azure. The virtual disks in Azure use blob storage for the VHD files, stored as blobs. Disk performance is often measured in IOPS (input/output operations per second), and with VHD files in Azure blob storage we are limited to 500 IOPS per disk. Also worth noting: if you run the VM using the Basic tier offering, the limit is 300 IOPS per disk.
TechNet contains a couple of articles about calculating IOPS and the need for IOPS. For instance, in the article “Storage and SQL Server capacity planning and configuration (SharePoint Server 2013)” we can see that a content database requires up to 0.5 IOPS per GB. If we translate that into an Azure VHD, we can only have about 5 content databases per disk, assuming that they grow to 200 GB each (a total of 1,000 GB, i.e. 500 IOPS). The Search Service Application databases, specifically the crawl and link databases, have high IOPS requirements, so they should be on dedicated disks, and so on. This is exactly the same as in any SharePoint installation – in the cloud or on your own metal. But the hard limit of 500 IOPS per VHD makes it even more obvious, given that we do not have a choice.
So, when deciding which machine size to use for SharePoint and SQL, choose one with a lot of disks. For instance, choose A4/A7 to get 16 data disks or A3/A6 to get 8.
Read and Write caching
A very important thing to configure for Azure IAAS disks is read and write caching. It is turned off by default on data disks – leave it like that!
SQL Server Filegroups and multiple data files
SQL Server has a concept called filegroups which can be used to increase database performance. For a database you can add multiple data files to the primary filegroup and have them reside on different disks. (Thanks to Trevor Seward for the tip.)
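As a sketch of what that looks like in T-SQL (the database name, file name and path below are made up for illustration – use your own), adding a second data file on another data disk could be done like this:

```sql
-- Hypothetical example: spread the PRIMARY filegroup of a content
-- database over two disks by adding a second data file on another volume
ALTER DATABASE [WSS_Content]
ADD FILE (
    NAME = N'WSS_Content_2',
    FILENAME = N'G:\SQLData\WSS_Content_2.ndf',  -- second data disk
    SIZE = 10GB,
    FILEGROWTH = 1GB
) TO FILEGROUP [PRIMARY]
GO
```

SQL Server uses proportional fill across the data files in a filegroup, so reads and writes get spread over both disks – which helps work around the per-VHD IOPS limit.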
The maximum size of a VHD in Azure is 1 TB (1,000 GB); the limitation comes from Azure blob storage. If you need larger volumes/disks in your Azure VMs you need to stripe multiple disks into a single volume. Just add the VHDs and then use Disk Manager to configure the striping.
Create large disk files! You only pay for the space you actually use – if you store 1 MB on a 500 GB disk, you pay for 1 MB. Reconfiguring disks costs you more!
SQL specific stuff
When running SQL Server on an Azure VM – or on any kind of virtual or physical hardware – do not forget to format the data disks using a 64K allocation unit size, and do not forget to grant the SQL Server service account the “Perform Volume Maintenance Tasks” right. These two things make a huge difference!
Examples (and just examples nothing else!)
For a single SQL Server this could look something like this:
Disk/volume and purpose:
1. OS (default)
2. Temporary (default)
3. SQL Server binaries
4. Temp DB
5. Temp DB log files
6. Default SQL Server DBs
7. Default SQL Server DB log files
8. Search DBs
9. Search DB log files
10. Content DBs 1
11. Content DBs 1 log files
12. Content DBs 2
13. Content DBs 2 log files
14. Service Application DBs
15. Service Application DB log files
16. Backup
And for a SharePoint machine it could look like this:
Disk/volume and purpose:
1. OS (default)
2. Temporary (default)
3. SharePoint binaries
4. Blob cache
5. Log files
6. Index files
7. ASP.NET Temporary files
8. Tools
9. Visual Studio (dev machine)
10. Project files (dev machine)
Note: do NOT store anything of value on the Temporary (default) disk, it will be wiped whenever Microsoft decides to. I would not even recommend storing SharePoint log files there (which I’ve heard recommendations of) since you might want to go back in time and search the logs eventually.
Virtual Machine size
Choosing the Virtual Machine size is one of the tricky questions: how much CPU, RAM, disk etc. do you need? “Fortunately” we do not have that much of a choice in Azure IAAS (compared to other vendors). We can choose from A0 to A9 – all the details are in “Virtual Machines Pricing Details”.
If we’re talking about SharePoint and SQL we’re even further limited – A3 to A7 are the ones with sufficient RAM/CPU for that scenario, but we can almost exclude A5, which has only two CPU cores and supports only 4 data disks. There is no “perfect machine” here, it depends. With regards to running SharePoint you might want a lot of cores for search machines etc. A4, A6 and A7 are good candidates for SharePoint in my opinion. A3 might do for a simple dev machine.
Plan for High Availability
Let’s assume we’re setting up a SharePoint + SQL production environment in Azure; then you need to start thinking about HA (High Availability). Apart from the usual ways to do it (see my SPC 2014 presentation on the topic) there are a couple of other things we must think of in Azure IAAS.
Location should be fairly obvious. Do you want your stuff in North America, Asia, Northern Europe etc.? It doesn’t have much to do with HA, but more with latencies and costs! Yes, if you want to be cost efficient, check out the pricing in the different locations – there’s quite a difference.
Affinity Groups are a very important construct in Azure. Affinity Groups allow you to make sure that your cloud services, storage, virtual networks etc. are placed “together”. Remember that the data centers might be several football fields in size, and you don’t want your machines scattered across all that space – that would only cause unwanted latencies. An Affinity Group makes sure that all your stuff in Azure is as close together as possible. This also reduces your costs, since you don’t get any cross-datacenter communication. An Affinity Group exists in one Location.
“Cloud Services” are used in Azure to “group” instances together. Cloud Services have features such as endpoints, load balancing and auto-scale. For SharePoint and SQL, don’t even think about auto-scale though.
It is very important to note that if you are using SQL Server Always-On Availability Groups, this SQL Server setup must be in its own Cloud Service, separate from the clients accessing it.
Availability Sets are a logical construct that allows you to specify a group of instances/roles. Whenever Microsoft must reboot one of your VMs for maintenance or other purposes, they will respect the Availability Sets and make sure that only one of the machines within an Availability Set is down at a time. For instance, grouping all SharePoint web servers into one Availability Set makes sense.
Here is one example that I’ve shown when presenting this topic, which gives you an idea of how an Azure IAAS SharePoint and SQL infrastructure might look – once again, just an example!
SQL Server optimizations
I will not dig too deep into tips and tricks with SQL Server specifically but instead urge you to read the really good article “Performance Best Practices for SQL Server in Azure Virtual Machines”. That article gives you all the details and a nifty check list.
Here are some other small things for you to remember:
- Office Web Apps 2013/Office Online/WAC are NOT supported on Azure IAAS at the moment – hybrid with a site-to-site VPN is your way if you want WAC.
- Always set your machine in High Performance power mode.
- If you’re using the public IP of the Cloud Service, remember to always have a machine running in that Cloud Service, otherwise the IP will change. Or use the Azure static DNS offering.
- For development SharePoint and SQL machines – do not use the Azure SQL Server images; instead use your MSDN SQL Server Developer Edition. If you don’t, you will be billed for the SQL Server resource usage, and that is even more expensive than running the actual VM.
That was a mouthful of tips and tricks, and I hope you get something out of this. Of course there are plenty more – don’t be shy, use the comments for your best tips and tricks. I might update the post with your best tips or other things I find. Also note that these are in no way the official tips and tricks from Microsoft and the Azure team, just my experience from working with it.
We all know that one of the most important parts of SharePoint 2013 (and 2003, 2007 and 2010) is SQL Server. Bad SQL Server performance will lead to bad SharePoint performance – that’s just how it is! There are tons of ways to address this: having enough cores, adding more RAM, using fast disks, using multiple instances and even multiple servers. You should already be familiar with all of this.
Search is one of the components in SharePoint that requires A LOT of resources, especially when crawling and doing analytics. For both SQL Server and SharePoint Search there is plenty of documentation on how to optimize both the hardware and the configuration of these components. In this post I will explain and show you how to use the SQL Server Resource Governor to optimize the usage of SQL Server, especially for Search.
SQL Server Resource Governor
The Resource Governor was introduced in SQL Server 2008 and is a feature in SQL Server that allows you to govern the system resource consumption using custom logic. You can specify limits for CPU and memory for incoming sessions. Note that the Resource Governor is a SQL Server Enterprise feature (but also present in Developer and Evaluation editions).
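Since the Resource Governor is edition-gated as described above, a quick sanity check before you start (a simple sketch; the output obviously varies per instance) is to ask the instance what edition it is running:

```sql
-- Check the edition of this SQL Server instance; Resource Governor
-- requires Enterprise (or the Developer/Evaluation editions)
SELECT SERVERPROPERTY('Edition') AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion
```

If this reports Standard Edition, the CREATE RESOURCE POOL statements later in this post will fail.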
The Resource Governor is disabled by default and you have to turn it on. Just turning it on doesn’t do anything for you, though; you have to configure the Resource Pools, Workload Groups and the classification.
Resource Pools represent the physical resources of the server, that is CPU and memory. Each resource has a minimum value and a maximum value. The minimum value is what the Resource Governor guarantees the resource pool has access to (those resources are not shared with other resource pools) and the maximum value is the cap (which can be shared with other pools). By default SQL Server creates two Resource Pools: internal and default. The internal pool is what SQL Server itself uses, and the default pool is a… default pool :-). Resource Pools can be created using T-SQL or SQL Server Management Studio.
Each Resource Pool can have one or more Workload Groups, and the Workload Groups are where the sessions are sent (by the classifier, see below). Each Workload Group can be assigned a set of policies and can be used for monitoring. Workload Groups can be moved from one Resource Pool to another. Workload Groups can be created using T-SQL or SQL Server Management Studio.
The classification of requests/sessions is done by the Classifier Function. The Classifier Function (there can be only one) handles the classification of incoming requests and sends them to a Workload Group using your custom logic. The Classifier Function can only be created using T-SQL.
Using SQL Server Resource Governor to optimize Search Database usage
So, how can we use the Resource Governor to improve or optimize our SharePoint 2013 performance? One thing (among many) is to take a look at how Search crawling affects your farm. While crawling, the crawler, apart from hammering the web servers being crawled (which you should have dedicated servers for), also uses lots of SQL Server resources. In cases where you only have one SQL Server (server, cluster, availability group etc.) all your databases will be affected, and one thing you don’t want to do is annoy your users with a slow SharePoint farm during their work day. What we can do with the Resource Governor is make sure that during normal work hours the Search databases are limited to a certain amount of CPU (or RAM).
Configure the SQL Server Resource Governor to limit resource usage of Search databases
The following is one example of how you can configure SQL Server to limit the resource usage of the SharePoint Search databases during work hours and not limit them during night time. All the following code is executed as a sysadmin in the SQL Server Management Studio.
Create the Resource Pools
We need two resource pools in this example – one for sessions using the Search databases under work hours (SharePoint_Search_DB_Pool) and one for sessions using the Search databases during off-work hours (SharePoint_Search_DB_Pool_OffHours). We configure the work hours Resource pool to use at the maximum 10% of the total CPU resources and the Off hours pool to use at the max 80%. In T-SQL it looks like this:
USE master
GO
CREATE RESOURCE POOL SharePoint_Search_DB_Pool
WITH (
    MAX_CPU_PERCENT = 10,
    MIN_CPU_PERCENT = 0
)
GO
CREATE RESOURCE POOL SharePoint_Search_DB_Pool_OffHours
WITH (
    MAX_CPU_PERCENT = 80,
    MIN_CPU_PERCENT = 0
)
GO
Create the Workload Groups
The next thing we need to do is to create two Workload Groups (SharePoint_Search_DB_Group and SharePoint_Search_DB_Group_OffHours) and associate them with the corresponding Resource Pool:
CREATE WORKLOAD GROUP SharePoint_Search_DB_Group
WITH ( IMPORTANCE = MEDIUM )
USING SharePoint_Search_DB_Pool
GO
CREATE WORKLOAD GROUP SharePoint_Search_DB_Group_OffHours
WITH ( IMPORTANCE = LOW )
USING SharePoint_Search_DB_Pool_OffHours
GO
After this we need to apply the configuration and enable the Resource Governor, which is done using this T-SQL:
ALTER RESOURCE GOVERNOR RECONFIGURE
GO
Create the Classifier function
The Resource Pools and Workload Groups are now created and the Resource Governor should start working. But all incoming requests are still going to the default Resource Pool and Workload Group. To control how the Resource Governor chooses a Workload Group we need to create the Classifier Function. The Classifier Function is a T-SQL function (created in the master database) that returns the name of the Workload Group to use.
The following Classifier Function checks if the name of the database contains “search” – if so, we assume it is a SharePoint Search database (of course you can modify it to use a “smarter” selection). During normal hours it will return SharePoint_Search_DB_Group, and between 00:00 and 03:00 it will return the SharePoint_Search_DB_Group_OffHours group for the Search databases. For any other database it will return the “default” Workload Group.
CREATE FUNCTION fn_SharePointSearchClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @time time
    DECLARE @start time
    DECLARE @end time
    SET @time = CONVERT(time, GETDATE())
    SET @start = CONVERT(time, '00:00')
    SET @end = CONVERT(time, '03:00')
    IF PATINDEX('%search%', ORIGINAL_DB_NAME()) > 0
    BEGIN
        IF @time > @start AND @time < @end
        BEGIN
            RETURN N'SharePoint_Search_DB_Group_OffHours'
        END
        RETURN N'SharePoint_Search_DB_Group'
    END
    RETURN N'default'
END
GO
This is the core of our logic for selecting the appropriate Workload Group. You can modify this function to satisfy your needs (whenever you need to change it, you have to set the classifier to NULL and reconfigure the Resource Governor, then set it back and reconfigure again). An important thing to remember is that there can only be one Classifier Function per Resource Governor, and this function is executed for every new session that is started.
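The detach/alter/re-attach cycle mentioned above looks like this in T-SQL (a sketch using the function name from this post):

```sql
-- Detach the classifier so the function can be altered
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL)
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
GO

-- ... run your ALTER FUNCTION dbo.fn_SharePointSearchClassifier here ...

-- Re-attach the classifier and apply the configuration again
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_SharePointSearchClassifier)
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
GO
```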
To connect the Classifier function to the Resource Governor there is one more thing that we need to do. First the connection and then tell the Resource Governor to update its configuration:
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_SharePointSearchClassifier)
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
GO
You should now immediately be able to see that the Resource Governor starts using the Classifier Function. Use the following DMVs to check the usage of the Resource Pools and Workload Groups respectively:
SELECT * FROM sys.dm_resource_governor_resource_pools
SELECT * FROM sys.dm_resource_governor_workload_groups
As you can see from the image above, the Resource Governor has started to redirect sessions to the SharePoint_Search_DB_Group.
Another useful T-SQL query for inspecting the usage is the following, which lists all sessions together with which Workload Group they use and where they originate from.
SELECT CAST(g.name AS nvarchar(40)) AS WorkloadGroup,
       s.session_id,
       CAST(s.host_name AS nvarchar(20)) AS Server,
       CAST(s.program_name AS nvarchar(40)) AS Program
FROM sys.dm_exec_sessions s
INNER JOIN sys.dm_resource_governor_workload_groups g
    ON g.group_id = s.group_id
ORDER BY g.name
GO
In this post you have been introduced to the SQL Server Resource Governor and how you can use it to optimize and configure your SharePoint environment to minimize the impact crawling has on the SQL Server databases during normal work hours.
Remember that this is a sample, and you should always test and verify that the Resource Pool configurations and the classifier logic work optimally in your environment.
As usual, a new version of a product has new requirements of all different kinds, especially when it comes to resource usage, and SharePoint 2013 is no different. The hardware and software requirements for SharePoint 2013 Preview have been published, and I thought I should walk through the new and updated requirements, compare them with SharePoint 2010, and also talk about some other key changes that you need to be aware of when planning your SharePoint 2013 installations.
Note: this is written for the SharePoint 2013 Preview and stuff will/can be changed for RTM.
Let's start with some key changes in the physical topology for SharePoint 2013. In SharePoint 2010 we basically had three different server roles: Web Servers, App Servers and SQL Servers. These roles are still valid for SharePoint 2013, but we also have two new roles: Office Web Apps Servers and Workflow Servers. Some roles can be combined and some cannot - for instance it is not supported to run any other role or application on servers that have Office Web Apps installed.
Update 2012-07-21: A clarification on Office Web Apps Server. You cannot install Office Web Apps Server on a SharePoint 2013 Server Preview. This is currently a hard block.
Of course we can still split out specific service applications on dedicated servers; for instance we can use specific search servers or cache servers, but I categorize these into the App Servers role.
Let's start with the Software requirements. Software requirements are the minimum stuff you need on your (virtualized) metal before installing SharePoint 2013 or Office Web Apps. I'm not going to go through all the details - since they are well documented in the official documentation. Instead let's focus on some key things.
SharePoint 2013 only supports two operating systems: Windows Server 2008 R2 Service Pack 1 and Windows Server 2012 - 64-bit editions, of course. There is no longer any support for client OSes such as Windows 7.
For an Office Web Apps Server it looks like Service Pack 1 for 2008 R2 is not needed, but I strongly suggest that you install it anyway.
In SharePoint 2013 the support for SQL Server 2005 has been dropped; you need to use SQL Server 2008 R2 Service Pack 1 or higher (which includes SQL Server 2012).
For SharePoint 2013 Web and App servers you need Microsoft .NET Framework 4.5 (RC) installed and for Office Web Apps Servers you need .NET Framework 4.0. If you have dedicated Workflow servers you can use 4.5 (RC) or 4.0 PU3.
Windows Server AppFabric
One new addition to the prerequisites of SharePoint 2013 is Windows Server AppFabric, which is a requirement (you actually need to bump it to version 1.1). The AppFabric component is used for the Distributed Cache service (aka Velocity), which is used for the "social stuff" and token caching.
Windows Azure Workflow Server specifics
The servers hosting Windows Azure Workflow (WAW) have some specific requirements. The SQL Server must have Named Pipes and TCP/IP enabled, and you must have the Windows Firewall enabled. Note that Windows Azure Workflow is an optional component of SharePoint 2013.
Other required software and things to note
You also need a set of hotfixes, WCF Data Services and WIF 1.0+1.1. By using the prerequisite installer you get these things configured for you automagically. The pre-req installer downloads them for you, but if you would like to automate your installations you can pre-download them using Paul Stork's excellent PowerShell scripts.
If you're leveraging the Exchange 2013 and SharePoint 2013 integration features, you need to install the Exchange Web Services Managed API, version 1.2, on your SharePoint boxes.
So let's take a look at the hardware requirements, which have been stirring up the community to really weird proportions the last few days. Your hardware requirements really depend on how you intend to use SharePoint 2013 - which Service Applications, which customizations etc. - so you need to take this with a pinch of salt. For development purposes you can adjust these values as they fit you and your projects, but consider the "minimum values" below as something to start with. I'm going to add my personal "minimum values" to this discussion.
The specified minimum requirement for the processor is 4 cores (64-bit, of course). I've been setting up 2-core machines on my laptop and they have been working fine, but if you're doing development with Visual Studio you likely need a few more. For production you should of course at least follow the minimum requirements.
For Database servers the recommendation is 4 cores for smaller deployments and at least 8 for medium and large deployments.
This one really hit the fan when the requirements were published, and the reason is that the requirements document states that you need 24 GB of RAM for server development on a single box. In my opinion that's far more than you need for average development, unless you're firing up all service applications or using the Search or social features heavily. Having 24 GB snapshots will drain your disk space quite fast :).
For my development boxes with SQL Server 2012 and Visual Studio 2012 installed I've so far found that the sweet spot is somewhere between 6 and 8 GB which is a good trade-off for laptop disk space and performance, but your mileage may vary.
For production the minimum recommended RAM is 12GB, basically what I recommended for 2010.
On the SQL side you should have at least 8 GB RAM for smaller deployments and 16 GB RAM for medium ones. In production I would seriously consider bumping that to at least 64 GB.
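If SQL Server shares its box with anything else (a compact dev machine, for example), you may also want to cap how much of that RAM SQL Server grabs so the OS and other processes keep some headroom. A sketch - the 12 GB value is purely an example, size it for your own deployment:

```sql
-- Example only: cap SQL Server's memory at 12 GB (value is in MB)
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'max server memory (MB)', 12288
RECONFIGURE
GO
```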
When the guessing game of SharePoint 15 features was in full motion, a lot of people expected native HTML5 support for SharePoint vNext. Thankfully that did not happen, and the browser support is therefore basically the same as for SharePoint 2010: no Internet Explorer 6, use 32-bit IE, and support for the latest versions of Chrome, Safari and Firefox.
This was a quick walkthrough of the hardware and software requirements for SharePoint 2013 Preview - a few new requirements and increased hardware resources compared to SharePoint 2010. But this is all about planning: you cannot just take the requirements and apply them onto your SharePoint 2013 farm, you need to evaluate your farm design and test it. Over the next few months I expect to see some great design samples including metrics from MSIT and of course gain experience from the 2013 engagements starting now...
I bet you will!
The Advanced Certification Team at Microsoft Learning will start a new Live Meeting series where you can learn more about the Microsoft Certified Master and Microsoft Certified Architect programs. These will be regularly held meetings where they go into detail about the programs. The program managers will help you understand the program mission and vision, how to prepare for a certification, how to apply for participation, and the value of the programs. If you're interested in one or more of these programs I recommend that you attend one of the live meetings or watch the recordings. Of course, attending the live meetings will allow you to ask the program managers questions directly!
To register for one of the live meetings head on over to the Microsoft World Wide Events site and get your slot. Currently there are two planned sessions:
- up to 200GB - still the recommendation
- 200GB to 4TB - yes, it's been done and can be done (with the help of a skilled professional architect :-)
- 4TB or more - only for near read-only "record centers" with very sparse writing
This looks good, right? And it can be in some cases. But now on to the fine print, which is actually written in the updated Software Boundaries and Limits article. If you read the announcement and the boundaries article, you see that to be supported you need to follow a number of hard rules (such as IOPS per GB) and you must have governance rules (such as backup and restore plans) in place. OK, so if I've got the IOPS needed, the best disaster recovery plans ever made and a skilled professional - should I go for the 4TB limit then? I think not, unless you really need the scale and meet the hardware requirements.
RBS: the content database size is the sum of all the data in the database plus all blobs stored on disk using RBS. So RBS does not get you around these limits!
First of all, take a look at the file sizes of the content databases. OK, you say, they still take the same amount of disk space whether I have a single content database or multiple content databases. Yes, they do occupy the same disk space, but you can't split them onto separate physical media (unless you go for multiple files for the database - which is another thing you should avoid), which might be necessary for performance, SLA and other reasons.
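To keep an eye on how big your databases actually are, a simple query against the SQL Server instance does the trick (a sketch; sys.master_files reports sizes in 8 KB pages, hence the conversion to MB):

```sql
-- List total data + log file size per database, largest first
SELECT DB_NAME(database_id) AS DatabaseName,
       SUM(CAST(size AS bigint)) * 8 / 1024 AS SizeMB
FROM sys.master_files
GROUP BY database_id
ORDER BY SizeMB DESC
```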
Also consider what it means to move really large backup files - perhaps over the wire from a remote location - when you need to restore something...
Now think about your upgrades and patching. Remember that you upgrade on a per-content-database basis, so an upgrade may take a mighty long time. The SharePoint database schema is updated once in a while, and if you're using really large content databases for collaboration sites (for instance), your users will be more than furious when the day of the upgrade comes along.
OK, I can live with all this, you say. Then you need to take a look at all the other limits/thresholds/boundaries in the TechNet article - the Site Collection limits for instance. There is a strong recommendation that a Site Collection be smaller than 100GB, and it's there for a reason. Moving sites (using PowerShell) larger than this limit can fail or lock the database, and SharePoint's built-in backup only supports backup of Site Collections smaller than 100GB.
Exceeding the 4TB limit and aiming for the sky with Records Centers? It can be done (MS has obviously done it), but again only under very explicit guidance. For instance, you must base your sites on the Document Center or Records Center site definitions.
Why? I'm not 100% sure, since they are just site definitions; it has to be some kind of "upgrade promise" from the product group that these site definitions will not have any rough upgrade paths in the future. The stated reason is to "reinforce the ask that the unlimited content database is for archive scenarios only".
This post is all about raising a finger of warning and telling you that you should not run off and tell your clients that they can now fill their content databases up to the new limits. Consider this really carefully, and in most, if not all, cases use the 200GB limit when designing your SharePoint architectures. It's still good that there is now support for larger content databases when scale is needed and that we can pass the 200GB limit.
Note: updated some parts to clarify my points.
Yesterday the SharePoint team posted on their blog about a major issue with the latest Cumulative Update for SharePoint 2010, recommending that you do not install it. If you have installed it, you might experience major problems with the User Profile service - contact Microsoft Support as soon as possible for help.
So what about these Cumulative Updates?
Everyone who has been in the business for some time, working with products such as SharePoint or SQL Server, knows that the CUs come every month or quarter. These updates contain the latest hotfixes assembled into one package to make it easier for you to patch your server product. One problem with these CUs (not the actual CUs themselves) is that a lot of people download and install them as soon as they are released - fail! This is not the intended purpose of Cumulative Updates. Let me explain why:
Since the CUs consist of hotfixes, critical on-demand fixes and security fixes, some might already be available for download and some can only be obtained through Microsoft Support Services. A hotfix is intended to fix one or more specific bugs. The aim is to release a high-quality fix in an acceptable time frame - which in some cases might be a real issue. Sometimes there is not enough time to thoroughly test each hotfix against the wide diversity of environments and use cases. Another problem is that Microsoft also tries to release CUs on a predictable schedule - which again means there is not always enough time to test.
If you need to install a hotfix or a CU you must (just as with custom code or similar) test the installation before applying it to the production servers. One problem I've seen is the lack of a test/staging environment where you can apply these hotfixes - if you do not have such an environment, do not install the fixes! Even if you have a staging environment, the best tip is to wait for a Service Pack (more about those later), or take it easy and listen (read Twitter, blogs etc.) for any issues. Oh, and how about making sure that you have a backup plan!
For each and every hotfix Microsoft releases a set of Knowledge Base articles describing (almost) exactly what the hotfix is intended to fix. Before applying even the smallest hotfix, make sure that you read and understand the KB articles and that you are actually experiencing the problems mentioned. If you are not experiencing them - there is no reason to apply the fix!
The KB articles might not be ready when the CU/hotfix is published (back to the time-frame issue). So wait until they are published and then read them. But still - only apply fixes that fix a problem you are having, or if CSS tells you to install them.
Sooner or later Microsoft bundles all CUs, hotfixes and optionally some new features into a Service Pack. These are treated differently, since there is usually more time to plan and test them. Service Packs are not flawless though. The same goes for all patching, updating and upgrading - TEST, TEST and TEST!
Been there, done that...
I know I shouldn't throw stones in a glass house - I've been too trigger-happy sometimes, finding that a CU fixes one of my problems, installing it without testing enough and finally finding out that it only caused me more trouble on another end. Nevertheless, I've learned my lesson...
You can read more about the CU's in this KB article: http://support.microsoft.com/kb/953878
I thought I should share my experience of working with SharePoint 2010 development on Windows 7. My previous posts on installing SharePoint 2007 on Vista and Windows 7 are quite popular. The downside with the “old” SharePoint version is that it was not officially supported on a client machine, but SharePoint 2010 is supported for installation on Windows 7 and Windows Vista SP1 for development purposes.
There are many opinions on having SharePoint 2010 installed on your client OS. Some think it is despicable, but I think it is great and I’ve used local installations for years now. It’s perfect for rapid development, testing and demos - in seconds you can spin up a site and show some basic stuff to a client. Of course I use virtualization when testing my final bits.
Benefits and drawbacks
Having a local installation has several benefits:
- You are running on the metal – i.e. no virtualization layer
- If you don’t have more than 4GB RAM then virtualization will be really slow
- Visual Studio 2010 relies heavily on WPF, which demands a lot of your GPU; the virtualization layer will degrade WPF performance
- You don’t need to spin up and shut down VMs
- Saves you a lot of disk space – one VM requires at least 20GB
Of course there are drawbacks. I would never go from developing on a local installation to production. Always test your code on a server OS!
SharePoint Foundation 2010 on Windows 7
I have installed SharePoint Foundation 2010 on my Windows 7 machine. I did not go for a full SharePoint Server installation. Most of the development can be done against SPF and it makes a minimal impact on my client OS. But perhaps I will go for a full SPS 2010 install once we have the final bits in June.
MSDN contains instructions on how to install SharePoint 2010 on a client OS: you need to extract the setup files and make some changes to the setup configuration file. With SQL Server Developer Edition installed on my machine I installed a Server Farm, i.e. not the Standalone configuration. I also used domain accounts, which required me to be connected to the domain during installation.
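For reference, the change MSDN describes boils down to one extra setting in the extracted setup configuration file (this is just a sketch of the override element, not a complete configuration file - check the MSDN instructions for the exact file location in your extracted setup folder):

```xml
<Configuration>
  <!-- Allows SharePoint 2010 setup to run on Windows Vista SP1 / Windows 7 -->
  <Setting Id="AllowWindowsClientInstall" Value="True"/>
</Configuration>
```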
After the installation I’ve changed all SharePoint and SQL services to start manually, to save some boot time. Emmanuel Bergerat has a great post on how to create Stop and Go scripts with PowerShell so that you can quickly start and stop all the services. Starting these services and warming up my sites takes about 2-3 seconds on my machine, equipped with an SSD (best gadget buy of 2010!)
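The idea behind such Stop and Go scripts can be sketched like this (a minimal sketch of my own, not Emmanuel’s actual script; the service names are the ones I’d expect for SharePoint Foundation 2010 and a default SQL Server instance - verify with Get-Service on your machine):

```powershell
# Service names assumed for SPF 2010 + default SQL instance; check with Get-Service
$services = 'MSSQLSERVER', 'SPTimerV4', 'SPAdminV4', 'SPTraceV4', 'W3SVC'

# Go: start SQL first, then the SharePoint services and IIS
foreach ($s in $services) {
    Start-Service -Name $s -ErrorAction SilentlyContinue
}

# Stop: reverse order, so IIS and SharePoint go down before SQL
foreach ($s in $services[($services.Count - 1)..0]) {
    Stop-Service -Name $s -Force -ErrorAction SilentlyContinue
}
```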
Visual Studio 2010 RC and Team Foundation 2010 Basic
I use Visual Studio 2010 in combination with a Team Foundation Server 2010 Basic installation on my local machine. Using a local install of TFS is something I really recommend - never lose any of your local projects or files again.
Visual Studio uses the GPU a lot so you will certainly have a better experience running without a virtualization layer. F5 debugging is no pain, it just takes a second to compile, package, deploy and start Internet Explorer.
If you have not tried to install SharePoint Foundation 2010 on your Windows 7 then stop reading right now and do it! It will increase your productivity immediately. The experience is awesome and together with RC of Visual Studio 2010 it’s just amazing.
The day has come when Microsoft officially started to talk about the next version of Office 2010 clients and SharePoint Server 2010 (no longer Office SharePoint Server). We have since some time known that SharePoint 2010 will be supported only on a 64-bit platform, just as Exchange 2007.
The new stuff revealed yesterday (as preliminary) is that not only is 64-bit required, it will only be supported on the Windows Server 2008 64-bit platform (including R2), and it will require SQL Server 2008 on a 64-bit platform. There are some other interesting facts in the post (and in about 1,000 other blog posts) that you should check out, but this post is not just about the news itself.
The interesting part of this announcement is that now is the time to learn the 64-bit platform for real, especially Windows Server 2008 R2 and SQL Server 2008. Not everything is the same: registry hives, the file system, settings, knowing when to use int (Int32) or Int64, and so on. You can start now - there is no reason to wait! Make a decision to only install your new SharePoint farms on hardware that meets the SharePoint 2010 requirements, and make sure that you have the same in your development environments and on your virtual machines. Yes, in many cases it will cost you a bit in new hardware.
I think that this is the time when 64-bit really will kill the 32-bit era.
As a bonus I can tell you one thing that I didn't know was achievable. My main laptop runs 32-bit Windows 7 rather than 64-bit, because there is no 64-bit driver support for its peripherals, and I usually use(d) Virtual PC to virtualize my development servers. The downside with Virtual PC is that your guest machines can only be 32-bit, and I don't want to run my laptop as a 64-bit Hyper-V host, so I thought I had to get a new laptop (which is due for later) and was preparing for the worst: a dual boot. Fortunately I did a test with VMware Workstation today and found out that as long as you have 64-bit capable hardware (which I have) you can host 64-bit guests on a 32-bit host OS. Did you know that? I did not! So I will spend this evening preparing my new development VMs. If you are in the same situation as me, stuck with a 32-bit OS for some time, head on over to VMware, run the 64-bit compatibility checker, and then dump Virtual PC and get VMware Workstation.
Welcome to the 64-bit world!
A recent discussion about how the licenses for Windows, SQL and SharePoint Servers should be handled when we are developing solutions in virtual machines made me send off a mail to Emma Explains Licensing. The concern was: do we have to pay licenses for every VM or test server? That would have been insane! But I wanted to have it explained how this licensing works - many of you perhaps already know, but I always have a hard time keeping all the different licensing options and rules straight.
To make her excellent explanation a bit shorter: if you are an MSDN subscriber or a Visual Studio licensee then you are fully covered. You may use as many copies of the server products as you like, on any number of devices, for development, testing and demonstrations. You may not use them in live production or for staging environments.
To understand everything, please read Emma's post.
Have a nice weekend…
I have a few times failed to install Windows SharePoint Services or Microsoft Search Server Express when I have come to a location where SQL Server 2005 is already in place with a custom configuration. The failures have occurred during the phase when WSS is trying to create its databases and configure the SQL Server. The first time I had some trouble working it out, since I’m not a DBA, so I would like to share my solution, since nothing is found on Google on this matter.
The installation fails during one of the first steps (don’t remember exactly but it’s second or third) and you get a link to the installation log file. In the log there are reported errors, such as this one:
05/08/2008 10:38:44  8  ERR  Task configdb has failed with an unknown exception
05/08/2008 10:38:44  8  ERR  Exception: System.Data.SqlClient.SqlException: Ad hoc update to system catalogs is not supported. Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
Another scenario is that SharePoint during the installation tries to modify some of the SQL Server 2005 configuration parameters, such as setting Min server memory (MB) to 128 MB. The error message then states:
An exception of type System.Data.SqlClient.SqlException was thrown. Additional exception information: The affinity mask specified conflicts with the IO affinity mask specified. Use the override option to force this configuration. Configuration option 'min server memory (MB)' changed from 0 to 128. Run the RECONFIGURE statement to install.
All you have to do to solve this is to execute the statements yourself in SQL Server Management Studio, using RECONFIGURE WITH OVERRIDE. This statement:
exec sp_configure 'allow updates', 1
reconfigure with override
allows the updates to be made. To solve the second problem I have used:
exec sp_configure 'min server memory (MB)', 128
reconfigure with override
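Putting it together, here is a sketch of the full sequence (note that 'min server memory (MB)' is an advanced option, so 'show advanced options' may need to be enabled first; the 128 MB value is simply what SharePoint was trying to set - adjust it to whatever your server needs):

```sql
-- Enable advanced options so 'min server memory (MB)' can be changed
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;

-- Allow the updates the SharePoint installer attempts
EXEC sp_configure 'allow updates', 1;
RECONFIGURE WITH OVERRIDE;

-- Apply the memory setting SharePoint failed to configure
EXEC sp_configure 'min server memory (MB)', 128;
RECONFIGURE WITH OVERRIDE;
```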
Hope this helps somebody out.