- up to 200GB - still the recommendation
- 200GB to 4TB - yes, it’s been done and can be done (with the help of a skilled professional architect :-)
- 4TB or more - only for near read-only “record centers” with very sparse writing
This looks good, right? And it can be in some cases. But now on to the fine print, which actually is written in the updated Software Boundaries and Limits article. If you read the announcement and the boundaries article you see that to be supported you need to follow a number of hard rules (such as IOPS per GB) and you must have governance rules (such as backup and restore plans) in place. Ok, so if I have the IOPS needed, the best disaster recovery plans ever made and a skilled professional - should I go for the 4TB limit then? I think not, unless you really need the scale and meet the hardware requirements.
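To make the IOPS-per-GB rule concrete, here is a minimal sketch. I'm assuming the commonly cited figures from the boundaries article for large content databases: a required minimum of 0.25 IOPS per GB, with 2 IOPS per GB recommended for optimal performance - verify against the article for your version:

```python
def required_iops(db_size_gb: float, iops_per_gb: float = 0.25) -> float:
    """Disk subsystem IOPS needed for a content database of the given size."""
    return db_size_gb * iops_per_gb

# Minimum (0.25 IOPS/GB) and recommended (2 IOPS/GB) for a 4TB database
print(required_iops(4096))      # -> 1024.0
print(required_iops(4096, 2))   # -> 8192.0
```

Even the bare minimum for a 4TB database is a serious disk subsystem, which is part of why the hardware requirements rule most deployments out.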
RBS: The content database size is the sum of all the data in the database and all other blobs stored on disk using RBS. So RBS does not get you around these limits!
First of all, take a look at the file sizes of the content databases. Ok, you say, they still take the same amount of disk space whether I have a single content database or multiple content databases. Yes, they do occupy the same disk space, but you can't split them onto separate physical media (unless you go for multiple files per database - which is another thing you should avoid), which might be necessary for performance, SLA and other reasons.
Also consider what it takes to move really large files from your backup media, perhaps over the wire from a remote location, when you need to restore something…
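To get a feel for what "over the wire" means at these sizes, here is a back-of-the-envelope sketch. The 1 Gbps link speed and the 80% effective throughput are my assumptions for illustration, not figures from the article:

```python
def transfer_hours(size_gb: float, link_gbps: float = 1.0,
                   efficiency: float = 0.8) -> float:
    """Rough time to move a backup file over the network.

    size_gb: file size in gigabytes; link_gbps: raw link speed in gigabits/s;
    efficiency: fraction of raw bandwidth actually achieved (assumption).
    """
    size_gigabits = size_gb * 8
    seconds = size_gigabits / (link_gbps * efficiency)
    return seconds / 3600

print(f"{transfer_hours(200):.1f} h")   # 200GB database -> 0.6 h
print(f"{transfer_hours(4096):.1f} h")  # 4TB database   -> 11.4 h
```

Pulling a 4TB backup across a saturated gigabit link eats most of a working day before the restore itself even starts - worth keeping in mind when writing that disaster recovery plan.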
Now think about your upgrades and patching. Remember that you upgrade on a per content database basis, so an upgrade may take a mighty long time. The SharePoint database schema is updated once in a while, and if you're using really large content databases for collaboration sites (for instance), your users will be more than furious when the day of the upgrade comes along.
Ok, I can live with all this, you say. Then you need to take a look at all the other limits/thresholds/boundaries in the TechNet article - the Site Collection limits, for instance. There is a strong recommendation to keep a Site Collection below 100GB, and it's there for a reason. Moving sites (using PowerShell) larger than this limit can fail or lock the database. And the built-in SharePoint backup only supports backing up Site Collections smaller than 100GB.
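As a quick governance sanity check, you could flag site collections approaching that 100GB recommendation from an inventory of sizes. The inventory data and URLs below are made up for illustration:

```python
SITE_COLLECTION_LIMIT_GB = 100  # recommended maximum per the boundaries article

def flag_oversized(sites: dict, limit_gb: float = SITE_COLLECTION_LIMIT_GB) -> list:
    """Return site collection URLs whose size exceeds the recommended limit."""
    return [url for url, size_gb in sites.items() if size_gb > limit_gb]

# Hypothetical inventory: URL -> size in GB
inventory = {"/sites/projects": 42, "/sites/archive": 180, "/sites/hr": 97}
print(flag_oversized(inventory))  # -> ['/sites/archive']
```

Anything the check flags is a site collection you may no longer be able to move or back up with the built-in tools, regardless of how large the content database is allowed to grow.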
Exceeding the 4TB limit and aiming for the sky with Records Centers? It can be done (Microsoft has obviously done it), but again only under very explicit guidance. For instance, you must base your sites on the Document Center or Records Center site definitions. Why? I'm not 100% sure, since they are just site definitions, so it has to be some kind of "upgrade promise" from the product group that these site definitions will not have any rough upgrade paths in the future. The stated reason is to "reinforce the ask that the unlimited content database is for archive scenarios only".
This post is all about raising a finger of warning and telling you that you should not run off and tell your clients that they can now fill their content databases up to the new limits. Consider this really carefully, and in most, if not all, cases use the 200GB limit when designing your SharePoint architectures. It's still good that there is now support for larger content databases when scale is needed and that we can pass the 200GB limit.
Note: updated some parts to clarify my points.