Using SPWebConfigModification to Update the Web.config in SharePoint 2013

Seven months ago I wrote an article on Jumping the Hurdles of using SPWebConfigModification to Update the Web.config – that article was based on my experiences in SharePoint 2010, and having researched the topic thoroughly at the time I was interested to see how things fared in SharePoint 2013. Thankfully I had the chance to architect a solution from the ground up in 2013, which gave me the opportunity to once again adhere to the vow I made in Application Settings in SharePoint to never manually modify the web.config file.

While this post is not as thoroughly researched at the time of writing as I would usually like, a quick search on SPWebConfigModification in SharePoint 2013 brought up few results, and when I was looking into it for the project mentioned above little had been written about it. A recent question on the OZMOSS mailing list showed that there were still some questions around how SPWebConfigModification did (or didn't) work, so I thought it would be useful to document my findings here for anyone wanting to 'do the right thing' and keep their hands off the web.config file.

For a bit of background to this post I'd thoroughly recommend you read the article I link to above – the purpose of this post is to compare the behaviour of the functionality between versions of the platform, and it will make far more sense if you understand my (and others') previous findings.

However for those of you who care little about the past and just want to know what the deal is in 2013 – here is the quick summary:

SPWebConfigModification in SP2010 came with a number of issues, though none that were completely insurmountable with a little work. The majority were already documented by others (and I link to them in that post), however one phenomenon had little written about it and even less in terms of a solution.

This problem was that even though the removal code ran successfully and even removed the entry from the modification collection, the element in the web.config file still physically existed. I came up with a workaround where, in the removal code, I first applied another modification reverting the entry back to its original state, then ran the code to remove all modifications.

So how did this fare in SP2013? There was good news and bad news. On the plus side, this issue had been fixed! Removing the entry from the modification collection successfully removed the entry from the web.config file. On the negative side it came with a nasty side effect – if the modification you were removing was a change to an attribute on a pre-existing element, then that whole element was removed from the web.config, not just your change.

This was clearly unacceptable behaviour particularly if the entry being removed was essential to successfully loading the website.

Once again however I managed to jump the hurdles that always seem to exist with this tricky class. The workaround this time was to remove all the customisations by default at the beginning of both the FeatureActivated and FeatureDeactivating functions and then add the customisation required. In FeatureActivated this would be your customised entry. In FeatureDeactivating this would be the original entry (if it existed in the web.config before your modifications). Again this solution isn't foolproof and is prone to falling over in future iterations of the platform, however it is a solid workaround as things stand now. I've provided a code snippet to achieve this below:

using System;
using System.Collections.ObjectModel;
using System.Threading;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

public class WebConfigModificationsFeatureEventReceiver : SPFeatureReceiver
{
	// Due to what appears to be a bug, or just an unfortunate side effect in SP2013, using SPWebConfigModification on an existing element
	// replaces that element with a new one. When removing the customisation, that element is removed, meaning that the original
	// entry no longer exists. To counter this we re-add the original values in the deactivating feature.

	public override void FeatureActivated(SPFeatureReceiverProperties properties)
	{
		SPWebApplication webApp = properties.Feature.Parent as SPWebApplication;
		RemoveAllCustomisations(webApp);

		#region Enable session state

		SPWebConfigModification httpRuntimeModification = new SPWebConfigModification();
		httpRuntimeModification.Path = "configuration/system.web/pages";
		httpRuntimeModification.Name = "enableSessionState";
		httpRuntimeModification.Sequence = 0;
		httpRuntimeModification.Owner = "WebConfigModifications";
		httpRuntimeModification.Type = SPWebConfigModification.SPWebConfigModificationType.EnsureAttribute;
		httpRuntimeModification.Value = "true";
		webApp.WebConfigModifications.Add(httpRuntimeModification);

		httpRuntimeModification = new SPWebConfigModification();
		httpRuntimeModification.Path = "configuration/system.webServer/modules";
		httpRuntimeModification.Name = "add[@name='Session']";
		httpRuntimeModification.Sequence = 0;
		httpRuntimeModification.Owner = "WebConfigModifications";
		httpRuntimeModification.Type = SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode;
		httpRuntimeModification.Value = "<add name='Session' type='System.Web.SessionState.SessionStateModule' preCondition='' />";
		webApp.WebConfigModifications.Add(httpRuntimeModification);

		#endregion

		/*Call Update and ApplyWebConfigModifications to save changes*/
		webApp.Update();
		webApp.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications();
	}

	public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
	{
		SPWebApplication webApp = properties.Feature.Parent as SPWebApplication;
		RemoveAllCustomisations(webApp);

		#region Revert session state

		SPWebConfigModification httpRuntimeModification = new SPWebConfigModification();
		httpRuntimeModification.Path = "configuration/system.web/pages";
		httpRuntimeModification.Name = "enableSessionState";
		httpRuntimeModification.Sequence = 0;
		httpRuntimeModification.Owner = "WebConfigModifications";
		httpRuntimeModification.Type = SPWebConfigModification.SPWebConfigModificationType.EnsureAttribute;
		httpRuntimeModification.Value = "false";
		webApp.WebConfigModifications.Add(httpRuntimeModification);

		#endregion

		/*Call Update and ApplyWebConfigModifications to save changes*/
		webApp.Update();
		webApp.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications();
	}

	private void RemoveAllCustomisations(SPWebApplication webApp)
	{
		if (webApp != null)
		{
			Collection<SPWebConfigModification> collection = webApp.WebConfigModifications;
			int iStartCount = collection.Count;

			// Remove any modifications that were originally created by the owner.
			for (int c = iStartCount - 1; c >= 0; c--)
			{
				SPWebConfigModification configMod = collection[c];

				if (configMod.Owner == "WebConfigModifications")
				{
					collection.Remove(configMod);
				}
			}

			// Apply changes only if any items were removed.
			if (iStartCount > collection.Count)
			{
				webApp.Update();
				webApp.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications();
			}
		}
	}
}

So once again we’re left with an option to do things right which doesn’t quite act the way you’d expect, but as you can see, can still be used to achieve the end result desired. For all its ills in both versions of the products i’d still strongly recommend taking this approach to making changes to the web.config file over making manual modifications any day. I hope this post has made doing so a little easier moving forward.

UPDATES

We shouldn’t be surprised, but over time new hurdles have presented themselves which need documenting (and I apologise, the first one I mention I should have covered a long time ago when I encountered it!).

The approach documented above worked fine in my single-server development environment. As soon as we tried activating the feature in a multi-server farm test environment all sorts of issues occurred. This has been documented however and you can read about it in Jeremy Jameson’s post Waiting for SharePoint Web.config Modifications to Finish and Simon Doy’s post PowerShell to Detect Web Configuration Modification jobs.

The code changes required include a few functions to be added and a slight modification to the ApplyWebConfigModifications call, as indicated in the code below.

        // The name of the one-time timer job SharePoint creates to apply web.config
        // modifications, as used in Jeremy Jameson's post referenced above.
        private const string jobTitle = "job-webconfig-modification";

        private static bool IsJobDefined(SPFarm farm)
        {
            SPServiceCollection services = farm.Services;

            foreach (SPService service in services)
            {
                foreach (SPJobDefinition job in service.JobDefinitions)
                {
                    if (string.Compare(job.Name, jobTitle, StringComparison.OrdinalIgnoreCase) == 0)
                        return true;
                }
            }

            return false;
        }

        private static bool IsJobRunning(SPFarm farm)
        {
            SPServiceCollection services = farm.Services;

            foreach (SPService service in services)
            {
                foreach (SPRunningJob job in service.RunningJobs)
                {
                    if (string.Compare(job.JobDefinition.Name, jobTitle, StringComparison.OrdinalIgnoreCase) == 0)
                        return true;
                }
            }

            return false;
        }

        private void WaitForOneTimeJobToFinish(SPFarm farm)
        {
            float waitTime = 0;

            do
            {
                if (!IsJobDefined(farm) && !IsJobRunning(farm))
                    break;

                const int sleepTime = 500; // milliseconds

                Thread.Sleep(sleepTime);
                waitTime += (sleepTime / 1000.0F); // seconds

            } while (waitTime < 20); // give up waiting after 20 seconds
        }

        /*Call Update and ApplyWebConfigModifications to save changes*/
        webApp.Update();
        WaitForOneTimeJobToFinish(webApp.Farm);
        webApp.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications();

The second issue is one we've encountered more recently: the inability to successfully extend a web application which already had the web.config modifications applied. There are two ways to approach this one – if you know before the fact, make sure you extend your web application before applying any of the web.config modifications. Otherwise, hope that you've implemented decent deactivation code, be sure to deactivate the features before extending, and then reapply them after the fact.
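A minimal sketch of that deactivate/extend/reactivate dance via the object model follows – note the feature ID and URL below are hypothetical placeholders for your own:

// Hypothetical ID of the web.config modifications feature shown above.
Guid webConfigFeatureId = new Guid("00000000-0000-0000-0000-000000000000");

SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://intranet")); // hypothetical URL

// Deactivate first so the removal code runs before the web application is extended.
webApp.Features.Remove(webConfigFeatureId);

// ... extend the web application (e.g. through Central Administration) ...

// Reactivate to reapply the modifications across the extended zones.
webApp.Features.Add(webConfigFeatureId);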

How I passed 70-667

Part three of my 'Certification Series' delves into how I recently passed 70-667 – TS: Microsoft SharePoint 2010, Configuring. It was always going to be a bit of a challenge stepping back up to the administrator certification plate (I had achieved administrator certification in the 2007 stream) after having had a heavy focus on development, and predominantly on the 2007 platform, for the past year and a half; however I'm happy to say that with the preparation I followed I managed to pass reasonably comfortably.

I took somewhat of a different approach to studying for this exam in that, for the first time heading into an exam, I decided to base a large proportion of my preparation time on reading a book. Without turning this article into a book review or commentary on the best book out there for this exam, I can say that the book I read, Professional SharePoint 2010 Administration, was a really good read in terms of learning a few things that I previously had not known. Whether it helped achieve the certification in the end I'm not sure (there ended up being a few months between finishing the book and sitting the exam), but regardless it was a good read and one I would definitely recommend.

Closer to the examination date I chose to go back to my tried-and-true method of finding a blog post which highlighted all the relevant TechNet and MSDN articles on the skills measured for the exam. Not surprisingly, as with 70-573, I found that Becky Bertram's post Exam 70-667 Study Guide covered everything I would need to read for each point, and it was the site I chose to go through.

I would however suggest that there is an even better resource when it comes to studying for the 667 exam, and that is Benjamin Athawes' series How I passed SharePoint 2010 exam 70-667 (parts 1, 2, 3 and 4). The reason I didn't use his articles as a guide for my study was simply time – I thought I had enough experience and knowledge already to get by without that level of study – however as a starting point for someone without the experience it would definitely be my number one suggestion on where to start.

Unfortunately, unlike the Cram Session videos I used for the developer exams which were extremely helpful, I couldn't find anything similar for the administration stream. What I did find (some of which I had viewed previously) was some decent video content on TechNet, including Getting started with SharePoint Server 2010 for IT pros and SharePoint 2010 advanced IT pro training.

In the interest of completeness, as I did in my 70-576 post, I thought it would also be useful to highlight some of the training and labs available to help pass this exam, even though I didn't use them in my own study plan. The labs available on TechNet Virtual Labs: SharePoint Products and Technologies are definitely worth exploring for those without enough hands-on administrative experience.

Finally there was one pretty cool resource that I stumbled upon and perused briefly: Accelerated Idea's Free Practice Exam – MCTS: SharePoint 2010, Configuration. It's a neat little tool to help get you used to the multiple choice format of the Prometric exams and give you a little bit of confidence going into the administration certification (assuming you do well – I wouldn't be overly concerned if you didn't; the questions in this practice test are fairly broad and not necessarily indicative of what you'll face in 70-667).

So that covers it. In the end I actually found the exam a little harder than I expected. I thought I'd blitz it, but I'm glad I put the effort in to learn all the content being tested – not only did I pick up a few things I didn't know, it definitely helped me pass in the end. The final piece of the puzzle will be 70-668, which I hope to attempt some time soon, but until then, all the best achieving 70-667!

Jumping the Hurdles of using SPWebConfigModification to Update the Web.config

In my Application Settings in SharePoint post earlier in the year I vowed that on any new project I would ensure that the web.config file would never be (ab)used with manual modifications to store application settings. I'm happy to say that I stuck to that goal, and this post is a reflection on my experience since of avoiding any manual intervention with the file. As I discovered, there are a number of potential issues and gotchas with using the SPWebConfigModification class, including one particular bug which will be discussed later – it is important to read the numerous informative and valuable posts on the topic before endeavouring to 'do things the right way' when making updates or changes to the web.config.

You could be excused for thinking that with the myriad of issues that seem to exist when using SPWebConfigModification that it couldn’t possibly be a suitable method for making updates to the web.config. It is however generally considered a best practice and there is good reason for its use. Anyone who has studied for and sat the application development exam (70-576) would know that it is clearly positioned as the right way to modify values in the configuration file. It is listed as one of two available techniques in Managing Web.config Modifications, mentioned in the patterns & practices SharePoint Guidance in regards to adding entries into the web.config and referenced as one of the 10 Best Practices For Building SharePoint Solutions by having SharePoint Manage Custom Config Settings.

Rather than take it as gospel though, there are some great posts out there that explain exactly why it should be used. One of the best is Charlie Lee's What every SharePoint developer should know about the web.config file, which sheds some light on how the web.config is pieced together while stressing that it should not be manually modified. Mark Wagner's How To: Modify the web.config file in SharePoint using SPWebConfigModification, while written for SP2007, is a great resource outlining exactly what SPWebConfigModification and its properties do, and also discusses the advantage of having the class propagate your changes across the farm. Finally, Jonathon Le exposes some of the issues with making manual modifications in his post SPWebConfigModifications and Manual Config entries don't mix.

But as I also mentioned in my Application Settings post, the internet is littered with issues and problems with using SPWebConfigModification. While this shouldn’t preclude its use, it’s important to understand everything that may go wrong and the things to pay particular attention to so any potential issues can be avoided. The first port of call here is Reza Alirezaei’s SPWebConfigModification’s Top 6 Issues. It highlights some of the common pitfalls that occur when trying to use it and how to work around them. One important thing to note here is a lot of the best resources around this topic were written for SharePoint 2007 and hence are not all still applicable in 2010, however a vast number of them still are.

So how did I implement it myself? As per numerous others I used a Web Application scoped feature with FeatureActivating and FeatureDeactivating event receivers to run the necessary code. MSDN’s How to: Add and Remove Web.config Settings Programmatically page proved reasonably useful however it’s important to note that the two code samples use different methods of applying the modifications – I adjusted the adding code to use the web application format present in the removing code. I also adjusted the removal code to ensure all modifications from a particular owner were removed, not just the most recent.

I was quick to learn however that using the Web Application scoped feature came with its own side effect – as soon as the solution file was deployed, the feature was activated across all web applications automatically, not just the one I intended it for. A more in-depth explanation and the resolution to this issue can be found in Waldek Mastykarz's post Inconvenient SPWebConfigModification development with Visual Studio 2010 SharePoint Developer Tools and Nancy Brown's SPWebConfigModification 2010: Change, Caveat, and Code – the latter also has a neat code sample on how to back up the web.config files before applying changes.

That wasn’t the only issue I ran into though. At one stage I was getting duplicate entries resulting in a configuration error when trying to load the site, and while I managed to resolve the reason for this on my own, was interested to see that Nancy had already documented the exact phenomenon in her article SPWebConfigModification Tool and Explorations under the How to Make the Removal Fail (or “How This Can Make You Nuts”) heading. This is another post worth reading as it includes a lot more valuable information within it.

The most intriguing issue overall however was the fact that my removal code didn't seem to be taking effect, even though it was running without error and successfully removing the change from the modification collection. Essentially, it just wasn't being reflected in the web.config. The change I had made was to add an attribute to an existing element within the file, and while it was added fine it just wouldn't revert back to its previous state after being 'removed'. I stumbled across a couple of TechNet forum posts, WebConfigModification Issues and Remove the web.config entries programmatically not working in sharepoint 2010, in which a Microsoft staffer, Rickee, indicated it was actually a bug in SharePoint 2010 and one which was not due to be fixed. He mentioned within those threads that while the attribute visibly existed in the web.config file it may not have actually been taking effect – possibly a satisfactory conclusion to some, but not something I was content with (as a side note, I never did end up testing whether that theory was correct – I wanted that attribute gone!).

I ended up coming up with my own solution – in the removal code I first applied another modification reverting the entry back to its original state, then ran the code to remove all modifications (including the one I had just applied). This resulted in a clean modifications collection and a web.config entry which mirrored the initial value. While it's not a perfect solution (that value could easily be set to something different in a subsequent version of SharePoint!) I figured it suited my immediate needs just fine.
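As a rough sketch of that ordering – reusing the owner convention and a removal routine like the RemoveAllCustomisations method shown earlier on this page – the deactivation code looked something like this:

public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
{
    SPWebApplication webApp = properties.Feature.Parent as SPWebApplication;

    // 1. Re-apply the original value so the attribute physically reverts in the web.config.
    SPWebConfigModification revert = new SPWebConfigModification();
    revert.Path = "configuration/system.web/pages";
    revert.Name = "enableSessionState";
    revert.Owner = "WebConfigModifications";
    revert.Type = SPWebConfigModification.SPWebConfigModificationType.EnsureAttribute;
    revert.Value = "false"; // whatever the attribute held before the feature changed it
    webApp.WebConfigModifications.Add(revert);
    webApp.Update();
    webApp.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications();

    // 2. Now strip every modification we own (including the revert just applied),
    //    leaving a clean collection and a web.config mirroring its initial state.
    RemoveAllCustomisations(webApp);
}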

As you can appreciate from this article there are a number of things you need to consider when modifying the web.config using SPWebConfigModification and you must take great care when doing so, lest you end up in a situation described in Michael Nemtsev’s post How SharePoint manages web.config via SPWebConfigModification! What’s important though is that you understand the risks and pitfalls but more pressingly understand the importance of using it to apply your configuration updates.

(Update: To see how SPWebConfigModification performs in SP2013, take a look at Using SPWebConfigModification to Update the Web.config in SharePoint 2013)

Multi-tenancy in SharePoint 2010

It’s not often that i’ve delved into an administration related post on this blog. In fact, it’s been a while since i’ve had to wear my administration hat having focussed on development for the most part of the last year and a half. This topic however was one particularly relevant to me and peaked my interest so I thought it worth while writing about.

Truth be told, I’m not the first to write about this topic. There is a tonne of information already out there, and some absolutely brilliant posts that i’ll reference in this article. The goal, as always, is to collate the best information about the topic and discuss it within the confines of the situation I was up against.

I’ll start with the back-story. This environment was one that hosted a relatively significant number of public facing websites on MOSS 2007 – between 10 and 15. Due to licensing restrictions this was primarily done with a single web front end and application server in a staging -> production content deployment model. While there were no obvious performance concerns raising their head, it was always a bit of a concern of mine.

When given the opportunity to architect the SharePoint 2010 environment I immediately dove into reading as many best practice articles as possible to work out how the farm should be created. With 3 licenses to play with, my suggestion was the relatively stock-standard two load balanced web front end servers with a standalone application server, discarding the content deployment model which was no longer considered a best practice.

The true opportunity however was to ensure that the way we handled the multiple sites would both suit the requirements of the environment and maximise performance. My main concern was adhering to the capacity planning boundaries outlined in the TechNet article SharePoint Server 2010 capacity management: Software boundaries and limits – most specifically the maximum of 10 application pools per server, which I knew we would have been exceeding with our previous one-to-one mapping between web applications and application pools. Multi-tenancy using host named site collections immediately sprang to mind.

It was a model I had recently read about and thought potentially perfect for our situation. The article I read happens to be one of the best out there for understanding how to approach multi-tenancy in SharePoint 2010 – Spencer Harbar’s Rational Guide to Multi Tenancy with SharePoint 2010. It definitely got me excited about the possibilities.

Thinking that the solution had been found I went on to read a number of other quality articles that deserve to be mentioned in this post. Benjamin Athawes' Multi-Tenancy in SharePoint 2010 using host-named site collections, TechNet's Plan for host-named site collections (SharePoint Server 2010) and Steve Peschka's Enabling Multi Tenant Support in SharePoint 2010 Parts 1, 2 and 3 are all worth a read.
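As a side note for anyone wanting to experiment: host named site collections can't be created through the UI, only via PowerShell or the object model. A minimal object model sketch follows – the URLs, owner and titles are hypothetical placeholders:

using (SPSite rootSite = new SPSite("http://webapp-root"))
{
    SPWebApplication webApp = rootSite.WebApplication;

    // The final argument flags the URL as a host header (host named) site collection.
    SPSite hnsc = webApp.Sites.Add(
        "http://intranet.contoso.com",         // host named URL (hypothetical)
        "Intranet",                            // title
        "Host named site collection example",  // description
        1033,                                  // LCID
        "STS#0",                               // site template
        @"DOMAIN\siteowner", "Site Owner", "owner@contoso.com",
        null, null, null,                      // secondary contact
        true);                                 // useHostHeaderAsSiteName
}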

Perhaps the most important article I went on to read however was Kirk Evans’ What Every SharePoint Admin Needs to Know About Host Named Site Collections. It really helped to ram home what was and wasn’t possible with host named site collections, and it turns out a lot of the things I had hoped to do weren’t.

Initially I had planned to group related sites in a web application, and hence have a few web applications hosting multiple host named site collections. This seemed only possible however using multiple ports or multiple IP addresses, neither of which I wanted to impose on our infrastructure team (as a side note, Mark Arend points out another way in which this can be achieved in his article Host Named Site Collections (HNSC) for SharePoint 2010 Architects, however that wasn't an attractive option either). Regardless, I thought that we could still manage using just the one web application on the default port.

More curve balls were thrown however when I saw that alternate access mappings weren’t available for host named site collections. This removed the ability to have multiple addresses for the public facing sites (of which some existed) and more critically prevented us from going down the recommended path of having an authenticated web application for content editing and an extended web application for anonymous public access. My hopes were dashed and I felt like I was back to square one.

Funnily enough, the available solution was also one of the simplest – multiple web applications using the same application pool. This way my concerns about breaching the 10 application pool boundary were allayed, and the number of web applications we were dealing with fit comfortably under the best recommended guidance I could find regarding web application boundaries – Spencer's comment on Benjamin's article Web [Application] limits in SP2010 – keep it low!.

My only remaining concern was the Service Application app pools, thinking that these could easily blow out the numbers. To this date I'm still not entirely sure whether these application pools count towards the boundary limit (it seems they do not, but I'm not 100% sure) – there are however some interesting articles available to read: Shannon Bray's SharePoint and the Various Application Pools, Benjamin's WFE Application Pool Limitations in SharePoint 2010 (Part 2) and Todd Klindt's How to change the App Pool ID of a SharePoint 2010 Web Application. The latter article suggests using the same application pool for all of the Service Applications, which seems the safest approach.

And so ended my introduction to multi-tenancy in SharePoint 2010. I must admit I was slightly disappointed that I wasn't able to stand over the infrastructure team's shoulders and help implement host named site collections, but reading some of the interesting and inspiring articles on the topic has at least motivated me to fire up a virtual environment and play around with it myself. If there was a moral to this article it would be that when faced with a multi-tenancy proposition, explore all of the options and read all of the material highlighted in this post to determine the best solution for your scenario.

The Benefits of Harnessing Social in SharePoint

The idea to write this post stems from a brief consulting gig I did a couple of years ago. The client wanted to know what SharePoint had to offer in the social space and how it may be of benefit to the organisation. It was around the same time that I was developing an intranet prototype on SharePoint 2010 in which I was making a push for the adoption of My Sites while highlighting some of the reasons why it would be a good idea for them to do so. The purpose of this post is not so much to state what SharePoint necessarily has to offer in this area but to explore the shift in approach towards these concepts organisations seem to be taking and further reinforce the benefits to be gained by doing so.

It wasn’t that long ago that I was setting up a greenfields MOSS 2007 intranet implementation with a relatively high degree of flexibility in suggesting the direction the site should take and the features of SharePoint it should leverage. One area the line was drawn at however was My Sites. While the My Site story in SharePoint 2007 wasn’t as advanced or attractive as in SharePoint 2010, there was regardless an obvious negative bias towards anything remotely social for the corporate environment. This stretched from the common theme of blocking social sites such as facebook to not considering corporate alternatives.

It was therefore somewhat refreshing to be involved in projects where the thought of corporate social was not only being considered but actively investigated as an option. I had a number of personal opinions on the topic, mostly positive, and was keen to see all potentially valuable features of SharePoint used to increase organisational efficiency and effectiveness.

These days the push for corporate social is fairly mainstream. Mark Fidelman recently wrote about Microsoft's push in FINALLY, Microsoft Embraces Social — And It's Going to be Big, and the even more recent Microsoft acquisition of Yammer speaks volumes to the value being placed on social networking for the enterprise.

While it may be inferred that the Yammer acquisition highlights SharePoint's weaknesses in the space, I tend to argue that leveraging a feature within a product, no matter how immature it may be perceived as being, is worthwhile if it adds value. Even harnessing some of the out of the box social functionality in SharePoint 2010 will deliver efficiencies, let alone when steps are taken to customise and improve the offering as in Chris O'Brien's Extending SharePoint 2010 social features. I need not look further than my own company Ignia to see an example of how companies are now using the social features of SharePoint to derive value.

The arguments for and against corporate social networking could fuel a never ending debate, and at the risk of losing impartiality in this post (and completely neglecting the levels of governance required for a successful implementation) I want to highlight a few of the reasons I believe leveraging the social features of SharePoint is a good idea for the enterprise.

First and foremost the obvious benefit derived from leveraging the social features of SharePoint is, well, enabling social. By that I mean fostering a social environment within the organisation allows employees to communicate with one another, get to know one another and ultimately assist in improving morale, satisfaction and engagement within the organisation.

Embracing the corporate social network also encourages participation and idea sharing. If discussions are taking place within the network and all employees are encouraged to join in or even propose them, then the chances of valuable ideas being derived improve. Much like brainstorming, the more unique ideas that are put forward, the more chance there is of eventually solving a given problem.

Enabling a healthy enterprise social environment also assists in forming and nurturing a company culture. Particularly for new employees coming on board, having an environment where members of the organisation willingly espouse and reinforce the values of the company assists in fast tracking adoption of the culture by all staff.

Finally, discoverability is a huge advantage of harnessing the social features of SharePoint. Being more specific here – encouraging all staff members to enter their skills and interests in their profile ensures that other members of staff can search for the particular skill set they need for a given problem. Rather than having to know someone is an expert in a certain field and going to them for an answer they are able to search for the topic and have a list of subject matter experts returned to them with their contact details, or in the spirit of embracing the social landscape, posting a question to their profile page.

Now I don’t claim to be an expert in this field and the benefits i’ve listed above are more my personal opinions than facts based on any scientific research. They are however in my mind just a sample of the many ways in which leveraging the social features of SharePoint can benefit the organisation.

As an addendum to this post, rather than researching similar posts on this topic before writing it I chose to write the majority right from the start. As it was more of an opinion piece I thought it better that the words were solely my own. It is however valuable to include some of the posts I found after the fact that should be read to get more insight into the benefits of using SharePoint's social (and particularly My Sites) features. Todd Klindt wrote a piece on Making the Case for MySites which focussed more on the advantages of introducing a staff member to SharePoint than on the social features themselves, but was nonetheless interesting. Only yesterday Veronique Palmer wrote about the Benefits of Using SharePoint My Sites, a pretty comprehensive discussion on the topic. Finally, Ant Clay has an interesting post on The ROI of SharePoint Social Features (Measuring the Intangible).

Overall I think that the push towards enterprise social networking, particularly from an internal corporate perspective, is fast gaining steam, both from Microsoft providing us with the tools and from organisations' willingness to adopt them. By harnessing the social tools in SharePoint, organisations are able to leverage efficiencies and gain a number of benefits which should lead to an increase in the rate of uptake across industries – something that I'm definitely looking forward to.

Provisioning SharePoint Features – Declaratively or Programmatically?

The question posed is often one which causes much debate. It’s one of those ready-made ‘Friday flame wars’ topics that could easily bait the uninitiated. Right from the outset I want to define some boundaries around this discussion. I’ll be approaching it predominantly from a content type and list definition/instance perspective more so than anything else. Obviously features can be used to provision much more, but those mentioned are some of the most common and often what is being referred to when this question arises.

While i’ve filed this under ‘best practices’ I must admit that I wouldn’t consider it ‘best practice’ as such. More to the point, this is my current personal preference and my reasons behind it. Like most things SharePoint, both standpoints could be argued and in different situations the alternate approach might be the right one.

I consider myself able to judge the merits of both having recently made the switch. For a long time – the majority of my SharePoint career – I've been firmly in the declarative camp. I wouldn't say this was the case for any noble reason – it was the way I learnt it, the way it was being implemented where I first worked and hence the way I implemented it myself on numerous projects since. Recently however, on a greenfields project, I decided that programmatic provisioning would be the way to go. This decision was made for a couple of reasons: I had been burnt by declaratively provisioning list definitions in the past, and I wanted first-hand experience in programmatic provisioning to be able to make a more personally-educated decision for future endeavours.

Before writing this post I decided to read anything I could find to get the wider community's opinion on the subject matter. I'll endeavour to include all the links that helped shape this post.

I remembered having a discussion with Jeremy Thake at the Perth SharePoint User Group a long time ago on the reasons why he favoured programmatic provisioning, and while I couldn't completely recall the conversation, I knew that there was a resource on the old SharePoint Dev Wiki. Thankfully it still exists on NothingButSharePoint.com and still proves to be one of the most useful resources on the matter. Have a read of SharePoint Declarative (CAML) vs Imperative (Object Model) Provisioning to see some of the collaborative thought on the pros and cons of both approaches.

When approaching SharePoint best practices one of my first stops is always to the patterns & practices SharePoint Guidance where I found buried in the documentation the following quote:

“There are various tradeoffs to consider when you choose whether to create content types declaratively or programmatically. You cannot update a content type that has been created declaratively—all content types must be upgraded programmatically. However, declarative approaches are simple; therefore, they are a better choice if your content types are not expected to evolve or change.”
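To illustrate the programmatic upgrade path the guidance refers to, here is a rough sketch of pushing a new column into an existing content type – the URL, content type and field names are hypothetical:

using (SPSite site = new SPSite("http://intranet")) // hypothetical URL
{
    SPWeb web = site.RootWeb; // disposed along with the site

    SPContentType ct = web.ContentTypes["Invoice"]; // hypothetical content type
    SPField field = web.Fields["Cost Centre"];      // hypothetical site column

    // Only add the column if it isn't already part of the content type.
    if (ct.FieldLinks[field.Id] == null)
    {
        ct.FieldLinks.Add(new SPFieldLink(field));
        ct.Update(true); // true pushes the change to derived content types and lists
    }
}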

Also worth a read are the opinions of highly respected SharePoint MVP, MCM and MCA Wictor Wilen in his post Avoiding Xml Based SharePoint Features – Use The API Way. The post is a few years old and it would be interesting to hear an updated opinion but it still strongly advocates the programmatic approach.

Finally a couple of forum postings influenced my view, particularly the comments by Scot Hillier in what is the best practice to deploy a list instance to a sharepoint site and Rob Windsor in Best way to package publishing content types.

My views and experience with the declarative approach

As previously mentioned I’ve been using this approach for a long time. Aside from the fact that this may have been so due to habit, there were factors that lead me to believe it was a fine approach and seeking an alternative was not necessary. As vindicated by the patterns and practices quote above – it can be a simple method of provisioning features. There are examples and documentation aplenty all over the internet and MSDN. Particularly for provisioning lists and content types, applications exist which allow you to essentially reverse engineer the XML required from those that were created/prototyped in SharePoint via the UI (in 2007 there was the Solution Generator and SPSource, in 2010 you can import site templates into Visual Studio and find the relevant files or use the Community Kit for SharePoint).

A lot of the often-mentioned downsides to declarative provisioning never really caused me any issues. There was the 'inability' to define lookup fields in lists declaratively (which I never had a problem with, using something similar to that described in Peter Holpar's post Declaratively adding a lookup field to a list schema using the name of the referenced list). There was the dreaded length of the Views component of the schema, easily rectified using Oskar Austegard's approach in MOSS: The dreaded schema.xml. Overall, I think I just got used to writing XML and how it all tied together, and it never really seemed too difficult.

There was one incident however which started to change my mind a little. It was at a client where I had provisioned reasonably simple custom libraries using the declarative method which ended up behaving oddly. I can’t remember the exact issue but it had something to do with adding (seemingly random – not all) documents and issues with setting the metadata. The issue was not replicable in an identical list created through the UI, yet multiple sets of eyes couldn’t see any issues with the XML. Sure, the problem may have been caused by the definition, but that’s the problem – it’s hard to debug and hard to determine where issues may arise from. It’s also relatively easy to introduce them unknowingly.

I also, in hindsight, remember putting up with a lot of iterations to identify why features weren't activating or lists weren't being provisioned correctly, usually as a result of manually typed XML errors. In development there was the constant process of making slight changes to the content types or lists, making sure all traces were removed from SharePoint, then redeploying and activating. It was a cycle I probably got used to and thought little of, but reflectively it really can be quite painful considering the alternative.

My views and experience with the programmatic approach

Having recently experienced the programmatic approach to feature provisioning, I'd have to say I've been converted and can't see myself going back. Sure, it takes a little longer sometimes, but the level of control you have over everything (checking whether lists or fields already exist, making conditional changes if desired) is such a refreshing change. Everything is in the one location – no need to jump from XML file to XML file, making changes to Elements and Schemas and the like – it can all exist in the one feature receiver class. Whereas previously any code for the things you just couldn't do in the XML (cleaning up upon removal, perhaps?) sat apart from the definitions, now it all lives in the same place and you can easily get a picture of what the feature as a whole achieves.
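By way of illustration, here is a minimal sketch of a feature receiver provisioning a list with those kinds of existence checks – the list and field names are hypothetical:

public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    SPWeb web = properties.Feature.Parent as SPWeb;
    if (web == null) return;

    // Only create the list if it doesn't already exist.
    SPList list = web.Lists.TryGetList("Projects"); // hypothetical list title
    if (list == null)
    {
        Guid listId = web.Lists.Add("Projects", "Project tracking list",
            SPListTemplateType.GenericList);
        list = web.Lists[listId];
    }

    // Only add the field if it isn't already there.
    if (!list.Fields.ContainsField("ProjectCode")) // hypothetical field
    {
        list.Fields.Add("ProjectCode", SPFieldType.Text, false);
        list.Update();
    }
}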

The option is potentially only a valid one for developers – but at least in my experience it tends to be the developers writing the XML definitions anyway, and it is far better to keep them to their strengths within the SharePoint API.

It also appears that the majority of the community is behind the programmatic approach – at least from the reading I did for this post. There are some fairly respected names lending their weight of support to the method and it's hard to ignore.

So to conclude it’s fair to say that i’m now firmly in the programmatic camp. While my experience using the approach is in its infancy and my opinion may end up changing as I stumble upon potential hurdles – I have had a LOT of experience manipulating SharePoint via the API in various applications, including building up entire sites from the ground up, and have yet to run into as many issues as I had with declaratively provisioned features. One thing that I wished I had the opportunity to view before writing this post is a presentation/debate delivered at the Tulsa SharePoint Interest Group by Corey Roth (declarative) and Kyle Kelin (programmatic). It was a slideless presentation and hence I wasn’t even able to view them, however they have made available their code examples which were used in the presentation which may be worth a look if you’re wanting to compare the two approaches to provisioning features in a SharePoint site. Take a look at Corey’s post Code Samples from our CAML vs API talk at Tulsa SharePoint Interest Group for the solution.

Using Web Part Verbs to Shortcut Web Part Property Configuration

Web part verbs are a pretty useful yet seemingly under-utilised feature of SharePoint web parts. Over my years in SharePoint I've seen and created more web parts than I care to remember, yet in that time I can't remember seeing one which made use of the handy functionality web part verbs provide. I'm not sure whether it's through lack of knowledge or lack of necessity, but I thought it would be useful to highlight one scenario where harnessing web part verbs would be valid – a more efficient method of setting web part property configuration.

First though, the obligatory background information. It would be remiss of me to assume that everyone reading this knows what web part verbs are. Most however, including those who think they don’t, probably already do. They’re simply the menu items available when you click the downward arrow in the top right-hand corner of the web part chrome.

Not as many however would know that it is quite easy to get your own items into that menu. This is done by creating custom WebPartVerb objects using one of three constructors – one which specifies a client-side function to run when the menu item is clicked, one which specifies a server-side function, and one which specifies both – and adding them to a WebPartVerbCollection. You can read about the WebPartVerb class for more information.
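To give a flavour of the client-side variant before the full server-side example later in this post, here is a minimal sketch – the JavaScript is a stand-in for your own:

public override WebPartVerbCollection Verbs
{
    get
    {
        // Client-side verb: runs the supplied JavaScript in the page when clicked.
        WebPartVerb helpVerb = new WebPartVerb(this.ID + "_helpVerb",
            "alert('Hello from a web part verb!');");
        helpVerb.Text = "Show Help";

        return new WebPartVerbCollection(new WebPartVerb[] { helpVerb });
    }
}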

Now this opens up a number of possibilities – you can essentially run any code client-side or server-side from these web part verbs. However, having worked on so many public facing websites for some time now, where web parts would rarely if ever expose the web part chrome, this functionality tends to only be available when the page is in edit mode, and hence lends itself to configuration actions rather than functional actions.

As such, i’ve decided to slant this post in that direction – showing how web part verbs can be used to simplify the action of configuring a web part’s properties.

Most would be familiar with the concept of web part properties. When you click the 'Edit Web Part' verb shown above, you make visible a tool pane in which you can set a number of web part properties. When creating a web part, you can also define custom properties which will appear in that same tool pane. Without wanting to delve into custom ToolParts or EditorParts, web part properties are generally displayed based on their type – for instance a string will appear as a textbox and a boolean as a checkbox. So if I wanted to toggle a switch on my web part to enable one display type over another, I would create a custom property as a boolean value and conditionally render the web part based on that value.

This is a pretty common pattern when creating generic reusable web parts and trying to minimise the sprawl of custom web parts. The example I'll show in this post is a visual web part used for searching. The web part will have two views – a simple view and an advanced view, configurable by the user who places the web part on the page.

When simply using a custom property to achieve this, the user must click the downward arrow in the web part chrome, select Edit Web Part, expand out the relevant category, check the checkbox and click OK. It's not ridiculously painful in SP2010, though it is a little more frustrating in SP2007. I think everyone would agree that such a simple display toggle would be better served by an option that can be selected directly from the menu, which is exactly what web part verbs enable. The example can be seen below.

Search in Simple Mode

Search in Advanced Mode

Toggling Display via the ToolPane

Toggling Display via a Web Part Verb

The code required to get this up and running is fairly simple and requires modifications to three files:

WebPart.cs

using System;
using System.ComponentModel;
using System.Web.UI.WebControls.WebParts;
using Microsoft.SharePoint;
using Microsoft.SharePoint.WebPartPages;

namespace WebPartVerbsExample.SearchWebPart
{
    [ToolboxItemAttribute(false)]
    public class SearchWebPart : System.Web.UI.WebControls.WebParts.WebPart
    {
        // Visual Studio might automatically update this path when you change the Visual Web Part project item.
        private const string _ascxPath = @"~/_CONTROLTEMPLATES/WebPartVerbsExample/SearchWebPart/SearchWebPartUserControl.ascx";

        [WebBrowsable(true),
        WebDisplayName("Advanced Search"),
        WebDescription("Click to show advanced options for search"),
        Personalizable(PersonalizationScope.Shared),
        Category("Custom Properties"),
        DefaultValue("")]
        public bool IsAdvancedSearch { get; set; }

        protected override void CreateChildControls()
        {
            SearchWebPartUserControl control = (SearchWebPartUserControl)Page.LoadControl(_ascxPath);
            control.IsAdvancedSearch = IsAdvancedSearch;
            Controls.Add(control);
        }

        public override WebPartVerbCollection Verbs
        {
            get
            {
                //server-side verb
                WebPartVerb titleVerb = new WebPartVerb(this.ID + "_searchModeVerb", ToggleSearchMode);
                titleVerb.Text = "Toggle Search Mode";

                // return collection with new verbs
                WebPartVerb[] verbs = new WebPartVerb[] { titleVerb };
                return new WebPartVerbCollection(verbs);
            }
        }

        // event gets executed when the user invokes the server-side verb
        void ToggleSearchMode(object sender, WebPartEventArgs e)
        {

            SPWeb site = SPContext.Current.Web;
            SPFile page = site.GetFile(Context.Request.Url.AbsolutePath);
            SPLimitedWebPartManager wpm = page.GetLimitedWebPartManager(PersonalizationScope.Shared);
            SearchWebPart webpart = wpm.WebParts[this.ID] as SearchWebPart;

            //Toggle the search mode
            webpart.IsAdvancedSearch = !webpart.IsAdvancedSearch;
            wpm.SaveChanges(webpart);
        }
    }
}

WebPartUserControl.ascx.cs

using System;
using System.Web.UI;

namespace WebPartVerbsExample.SearchWebPart
{
    public partial class SearchWebPartUserControl : UserControl
    {
        public bool IsAdvancedSearch { get; set; }

        protected void Page_Load(object sender, EventArgs e)
        {
        }
    }
}

WebPartUserControl.ascx

<div>
    <% if (!IsAdvancedSearch) { %>
    <div>Search</div>
    <div>
        <!-- simple search fields -->
    </div>
    <% } else { %>
    <div>Advanced Search</div>
    <div>
        <!-- advanced search fields -->
    </div>
    <% } %>
</div>

So there you have it – a neat little way to leverage web part verbs to shortcut the need to edit a web part's properties to set a simple configuration toggle. This makes the page editor's life far easier and is definitely a more efficient way to handle such a requirement. Before I finish, and on a slightly unrelated note, if you're looking for an even more efficient way to set web part configuration values in SP2010 then check out Wictor Wilen's post Improve the experience using Modal Dialogs when adding Web Parts in SharePoint 2010.

Harnessing SignalR in SharePoint

Note: For those interested in the implementation of SignalR in SharePoint 2013, view the latest post Harnessing SignalR in SharePoint 2013 (Office 365)

On the 1st of April Bil Simser wrote an article Introducing SharePointR. I happened to read it on the 3rd, and as such it wasn't until I got to the 'quote' by David Fowler that I twigged to what was going on. It was a clever April Fools post and likely got a lot of people excited. For me, it piqued my interest. I'd heard of SignalR in passing but had yet to delve into it. Once I had, I have to say I got pretty excited myself. It's super cool. I, along with one of my extremely talented colleagues, Elliot Wood, went about getting SignalR up and running in a SharePoint environment.

SignalR is relatively new and as such the information available isn't extensive, but what is out there is pretty good. A few examples are worth checking out right off the bat: Scott Hanselman's Asynchronous scalable web applications with real-time persistent long-running connections with SignalR and Justin Schwartzenberger's Learn how to use SignalR and Knockout in an ASP.NET MVC 3 web application to handle real-time UX updates were the two I started with. Both link off to a number of other valuable examples.

So what is SignalR? Essentially, it’s real-time client-server communication on the web. Without having to constantly poll or perform any refreshes, data on the page will ‘magically’ update in front of your eyes. The library is maintained on Github which gives you access to the latest code, issues and documentation at a central source. I’m not going to go too far into what it is, because with the excitement that’s being generated around this, plenty of other people are doing a far better job of it than what I could. I’ll skip straight to the fun stuff.

So if the examples are already out there, why can’t we just plug this straight into SharePoint?

The majority of examples which currently exist tend to host the hub and the client in the same project, and hence are hosted on the same domain. This works great – you can spin up your .NET 4.0 web application and everything will work smoothly. The only problem is that SharePoint 2010 runs on CLR 2.0 (targeting .NET 3.5) – you won't be able to add the .NET 4.0 SignalR DLLs to your SharePoint project.

This however is not a deal-breaker. As long as your hub is hosted in a .NET 4.0 web application you can leverage SignalR from a simple HTML file with JavaScript, so surely that would be easy enough to plug into a SharePoint web part?

Firstly; it’s not exactly simple. It requires something called Cross-Domain Calling which I’ve since found out is a bit of a pain to get working across different browsers. This is where the information fell down a little, or at least was scattered around. I’ve read more StackExchange and Github Issues than I care to dig back up and link to for this article so excuse my lack of referencing here. One page I did come across which got me most of the way there was Thomas Krause’s Making Cross-Domain Calls in SignalR which summed it up pretty nicely, but still didn’t work in all browsers for me. But more on this later.

Secondly, even with all those issues resolved, we still need a way to trigger SignalR to broadcast an update from SharePoint, and unless you're doing that purely with client interaction in the UI, chances are that means handling create, update and delete events via event receivers. Which brings us back to our first issue – these will run under CLR 2.0 and won't be able to reference the SignalR DLLs. So how do we get around this?

Essentially what is required is a bridge between SharePoint's CLR 2.0 environment and a .NET 4.0 one. I liked the way Elliot termed this better: breaking the .NET barrier. My initial thought was hosting a WCF service and passing the information from the event receiver to that service to be broadcast by SignalR, and I still think that would be the ideal solution. Elliot however beat me to the implementation by making the leap via HTTP, posting the information to a handler via the query string, and for the purposes of the proof of concept (minimal data being transferred) this did the trick nicely.
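For the curious, the WCF alternative would amount to something like the following sketch – the contract is hypothetical and would be hosted in the .NET 4.0 web application alongside the hub:

using System.ServiceModel;

// Hypothetical contract for the CLR 2.0 -> .NET 4.0 bridge; the SharePoint event
// receiver would call this service instead of posting a query string to a handler.
[ServiceContract]
public interface ISharePointRBridge
{
    [OperationContract]
    void Broadcast(string tasksOpenedToday, string tasksCompletedToday, string lightStatus);
}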

With all the pieces of the puzzle in place, it’s time to implement our SignalR in SharePoint proof of concept. The idea is pretty simple – consider it a mini task-tracking system. Tasks will get added to the pile and others will be completed. The service quality manager will have a dashboard on their monitor open all day receiving live information on whether performance targets are being hit on a day-to-day basis. Let this be a glimpse into the power of what SignalR can achieve and let your imagination run wild on possible real-world implementations.

Step 1: Create the Hub

The Hub needs to be a .NET 4.0 web application. The majority of examples on the net state that the first step of implementing any SignalR application is to use nuget to retrieve the SignalR DLLs and scripts. This is fine in theory, but while we were investigating our cross-browser issues (non-performance in Firefox and Chrome) Elliot noticed that the nuget version of SignalR is not the latest, so downloading the latest ZIP and re-referencing the DLLs is the way to go. In fact the latest version from Github at the time of writing included another required DLL that the nuget version didn't bring across – SignalR.Hosting.Common.dll. Others you'll want the updated versions of (or at least the ones we used) include SignalR.dll, SignalR.Hosting.AspNet.dll and Newtonsoft.Json.dll.

The next step is to add an entry to the web.config file to allow the cross-domain calls. This requires adding the following snippet to the system.webServer node:

<httpProtocol>
  <customHeaders>
    <add name="Access-Control-Allow-Origin" value="*" />
  </customHeaders>
</httpProtocol>

The final step is to add a class to your project to house the Hub:

using SignalR.Hubs;

namespace SignalRHub
{
  public class SharePointHub : Hub
  {
    public void Send(string message)
    {
      // Call the addMessage method on all clients
      Clients.addMessage(message);
    }
  }
}

At this point all the extraneous components of the project can be removed so you're left with the class, the web.config and packages.config. Just a note – the Send implementation is somewhat redundant, as we'll be calling the client script from the handler below rather than from the hub itself – but it's useful for testing.

Step 2: Create the HTTP Handler

The handler can exist in the same project created above and is essentially the tool that receives the query string data from SharePoint and broadcasts it via SignalR to the clients. All credit here goes to Elliot for the concept; I've adapted it to match my SignalR implementation (I used a Hub, he used a Persistent Connection).

One major thing to point out is that the documentation currently offers this as the method of broadcasting over a Hub from outside of a Hub:

using SignalR.Infrastructure;

IConnectionManager connectionManager = AspNetHost.DependencyResolver.Resolve<IConnectionManager>();
dynamic clients = connectionManager.GetClients<MyHub>();

This documentation however is only valid for the 0.4.0 build of SignalR you would get from NuGet – the latest version establishes the IConnectionManager by other means:

IConnectionManager connectionManager = Global.Connections;

The resulting code for ProcessRequest is as follows:

using System.Web;
using System.Web.Script.Serialization;
using SignalR.Infrastructure; // IConnectionManager (Global comes from the SignalR hosting DLLs referenced earlier)

public void ProcessRequest(HttpContext context)
{
    context.Response.ContentType = "text/plain";

    // Get a dynamic proxy to every client connected to our Hub
    IConnectionManager connectionManager = Global.Connections;
    dynamic clients = connectionManager.GetClients<SharePointHub>();

    // Repackage the Query String values posted by the event receiver
    var payload = new
    {
        TasksOpenedToday = context.Request.Params["TasksOpenedToday"],
        TasksCompletedToday = context.Request.Params["TasksCompletedToday"],
        LightStatus = context.Request.Params["LightStatus"]
    };

    // Serialise to JSON and broadcast to all connected clients
    JavaScriptSerializer jss = new JavaScriptSerializer();
    var payloadJSON = jss.Serialize(payload);
    clients.addMessage(payloadJSON);
}

Step 3: Create the Event Receiver

There’s nothing particularly fancy going on here aside from posting to the HTTP handler we created in Step 2. The rest of the code is just a simple event receiver bound to a task list. If you need help at this step then take a look at Walkthrough: Deploying a Project Task List Definition. We’ll need to override the ItemAdded, ItemUpdated and ItemDeleted events and add the following code to each (a sketch of a full override follows the snippets below):

Broadcast(getPayload(properties));

private string getPayload(SPItemEventProperties properties)
{
    SPList list = properties.List;
    SPQuery query = new SPQuery();
    query.Query = "<Where><Eq><FieldRef Name='Created' /><Value Type='DateTime'><Today /></Value></Eq></Where>";
    SPListItemCollection items = list.GetItems(query);
    double tasksOpenedToday = items.Count;

    double tasksCompletedToday = 0;
    foreach (SPListItem item in items)
    {
        // Compare via a cast to avoid a NullReferenceException on items with no Status value
        if ((string)item["Status"] == "Completed") tasksCompletedToday++;
    }

    // Guard against division by zero, e.g. when the last task created today has been deleted
    string colour = "RED";
    int percentage = tasksOpenedToday > 0
        ? (int)Math.Floor(tasksCompletedToday / tasksOpenedToday * 100)
        : 0;
    if (percentage >= 70) colour = "GREEN";
    else if (percentage >= 50) colour = "ORANGE";

    return string.Format("?TasksOpenedToday={0}&TasksCompletedToday={1}&LightStatus={2}",
        tasksOpenedToday.ToString(),
        tasksCompletedToday.ToString(),
        colour);
}

private void Broadcast(string Payload)
{
    // server:port is a placeholder for wherever the handler project is hosted
    WebRequest request = HttpWebRequest.Create(string.Concat("http://server:port/SharePointRProxyHandler.ashx", Payload));

    // Dispose of the response so the underlying connection is released
    using (WebResponse response = request.GetResponse()) { }
}
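For completeness, here’s a minimal sketch of what one of those overrides ends up looking like (the other two follow the same pattern):

public override void ItemAdded(SPItemEventProperties properties)
{
    base.ItemAdded(properties);

    // Recalculate today's figures and post them to the handler for broadcast
    Broadcast(getPayload(properties));
}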

Step 4: Create the Web Part

We’re on the home stretch. We’ve got an event receiver posting data to an HTTP handler, which is in turn broadcasting that via SignalR. The only thing left to do is create the client to listen out for that broadcast. This can essentially be HTML and JavaScript and as such doesn’t really need to be a web part at all, but I’ll be creating a visual one in the interests of effective deployment.

It’s here that we need to think back to when we retrieved the latest files from GitHub for SignalR. You’ll now need to grab the latest JavaScript files from that package and store them in SharePoint so we can reference them in our web part. You’ll also need to grab and store Knockout.

There are a few differences you’ll need to consider compared to the generic example provided for implementing the SignalR client for hub communication. Firstly, instead of referencing /signalr/hubs in your script src, you’ll need to reference the location in which it actually exists:

<script src="http://server:port/signalr/hubs" type="text/javascript"></script>

Secondly, you’ll need to force jQuery to allow cross-domain (CORS) requests:

jQuery.support.cors = true; // tell jQuery the browser supports cross-domain requests

Third, your hub URL will also need to point to the relevant location:

$.connection.hub.url = 'http://server:port/signalr';

Your Hub name will need to be in camel-case (which isn’t clear in the generic example seeing as it’s all lower case):

var sharePointHub = $.connection.sharePointHub;

Your client handler will need to parse the JSON sent in, seeing as it’s not a simple string:

sharePointHub.addMessage = function (json) {
    var data = JSON.parse(json);
};

And finally you’ll need to pass a couple of options into the call that starts the connection to your Hub:

$.connection.hub.start({ transport: 'longPolling', xdomain: true });

That pretty much covers it. Add in some initial population of the viewModel server-side and the necessary Knockout bindings and the web part is ready to be deployed. You can see the final web part (minus the basic server-side population of variables) below:

<style type="text/css">
    .dashboard-item { margin-bottom: 10px; }
    .dashboard-label { font-weight: bold; }
    #LightStatus { width: 50px; height: 50px; }
    .RED { background-color: Red; }
    .ORANGE { background-color: Orange; }
    .GREEN { background-color: Green; }
</style>

<script src="/SiteAssets/jquery-1.7.min.js" type="text/javascript"></script>
<script src="/SiteAssets/json2.min.js" type="text/javascript"></script>
<script src="/SiteAssets/jquery.signalR.min.js" type="text/javascript"></script>
<script src="/SiteAssets/knockout-2.0.0.js" type="text/javascript"></script>
<script src="http://server:port/signalr/hubs" type="text/javascript"></script>

<div class="dashboard-item"><span class="dashboard-label">Tasks Opened Today: </span><span data-bind="text:tasksOpenedToday"></span></div>
<div class="dashboard-item"><span class="dashboard-label">Tasks Completed Today: </span><span data-bind="text: tasksCompletedToday"></span></div>
<div id="LightStatus" data-bind="attr: { class: lightStatus }"></div>

<script type="text/javascript">
    var viewModel = {
        tasksOpenedToday: ko.observable(),
        tasksCompletedToday: ko.observable(),
        lightStatus: ko.observable()
    };

    $(document).ready(function () {
        jQuery.support.cors = true; // tell jQuery the browser supports cross-domain requests

        $.connection.hub.url = 'http://server:port/signalr';

        // Proxy created on the fly
        var sharePointHub = $.connection.sharePointHub;

        // Declare a function on the SharePoint hub so the server can invoke it
        sharePointHub.addMessage = function (json) {
            // Update viewModel values here
            var data = JSON.parse(json);
            viewModel.tasksOpenedToday(data.TasksOpenedToday);
            viewModel.tasksCompletedToday(data.TasksCompletedToday);
            viewModel.lightStatus(data.LightStatus);
        };

        //Populate viewModel via ASP.NET and bind to Knockout
        viewModel.tasksOpenedToday("<%= InitialTasksOpenedToday %>");
        viewModel.tasksCompletedToday("<%= InitialTasksCompletedToday %>");
        viewModel.lightStatus("<%= InitialColour %>");
        ko.applyBindings(viewModel);

        // Start the connection
        $.connection.hub.start({ transport: 'longPolling', xdomain: true });
    });
</script>

So how does it look in the end? You be the judge…

So there you have it. I’d be surprised if there’s anyone out there who didn’t think this was pretty awesome. With any luck this will inspire you to go forth and build your own SharePoint implementations with SignalR – I’m looking forward to seeing what will be achieved with this excellent technology. Already Elliot has taken up the challenge to bring Bil’s ‘April Fools’ concept to life – it’s the kind of inspiring functionality that makes you want to go out and experiment. Just one last note – for anyone fearful of how this relatively new technology would fare in an enterprise environment, take a look at the video C#5, ASP.NET MVC 4, and asynchronous Web applications.

Performance Optimising SharePoint Sites – Part 3

In Performance Optimising SharePoint Sites – Part 1 of this series I focussed on some of the first steps you should undertake or consider when embarking on performance optimising your SharePoint site. Performance Optimising SharePoint Sites – Part 2 explored some of the platform-independent techniques at your disposal, while this part of the series will identify some of the SharePoint-specific techniques that can be leveraged.

This is another topic where the information available is quite extensive. In a lot of cases however the information is either spread around or quite targeted, so I think there is some value in consolidating a lot of these ideas as part of this series.

One aspect of performance optimisation for SharePoint sites I won’t delve heavily into is the infrastructure and administration side. This isn’t particularly my area of expertise so instead I’ll invite you to read some articles by more appropriate authors, including Arpan Shah’s Things to Consider for SharePoint Performance, David Lozzi’s Improving SharePoint’s Performance and Eric Shupps’ 10 steps to optimize SharePoint performance.

Make sure your Objects are Disposed Correctly

You could argue this topic has been done to death and that surely no one out there still falls into the trap of incorrectly disposing of, or failing to dispose of, their SharePoint objects – but no performance optimisation article would be complete without at least mentioning it. It’s one of the most serious pitfalls a developer can fall into when developing a SharePoint site. Two main resources exist to steer you clear of this evil: the MSDN articles Disposing Objects for SharePoint 2010 and Best Practices: Using Disposable Windows SharePoint Services Objects for SharePoint 2007, and the SPDisposeCheck tool which can be run over your code to check for potential errors. There really is no excuse for letting these issues slip through the cracks, and checking for them should definitely form part of your optimisation process.
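As a quick refresher, the canonical pattern is to wrap any SPSite or SPWeb you create yourself in a using block, and to leave context-managed objects alone – a minimal sketch, where the URL and list name are placeholders:

using (SPSite site = new SPSite("http://server/sites/tasks"))
using (SPWeb web = site.OpenWeb())
{
    // site and web are disposed automatically when the block exits
    SPList list = web.Lists["Tasks"];
    // ... work with the list ...
}

// By contrast, never dispose objects SharePoint owns:
// SPContext.Current.Web is managed by the platform itself.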

Perform a Code Review and Ensure Your SharePoint Code is Optimal

Aside from object disposal mentioned above, a number of other code factors exist which can leave your pages rendering in a less than optimal time. A number of blog posts run through these including Andreas Grabner’s How to avoid the Top 5 SharePoint Performance Mistakes, Eric Shupps’ SharePoint.Performance: Optimizing Web Parts and Waldek Mastykarz’s Performance of various methods to retrieve one list item. However the best resource I’ve found in regards to writing optimal code is the MSDN Developer Best Practices Resource Center – definitely worth your time parsing through all the information on that site to ensure your code is as optimal as it can be.
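To give a flavour of the kind of mistake those articles cover: indexing into SPList.Items forces SharePoint to materialise every item in the list, whereas a constrained SPQuery only pulls back what you need. A rough sketch (the field name is illustrative only):

// Expensive: SPList.Items loads the entire list into memory
// SPListItem first = list.Items[0];

// Cheaper: let the query engine limit the rows and fields returned
SPQuery query = new SPQuery();
query.RowLimit = 1;
query.ViewFields = "<FieldRef Name='Title' />";
SPListItemCollection items = list.GetItems(query);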

Use Caching within your Code

No matter how optimal you eventually make your code, it will still be slower than not having to query the information in the first place. Caching is a great mechanism which allows you to run the query once and then retrieve the data from memory on subsequent loads. There’s always a trade-off between caching the data for performance and the currency of that data – the trick is to find the balance between improving the performance of your site and ensuring the data being displayed is reasonably up to date, and this could differ on a case-by-case basis. You can either harness the built-in SharePoint objects which handle the caching for you or cache the objects yourself. Claudio Brotto has a good article on all things caching at SharePoint Internet Sites – Performance Optimization for Data Access which is worth a read.
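If you go down the roll-your-own path, the usual vehicle is ASP.NET’s HttpRuntime.Cache. A minimal sketch – the cache key and the five-minute window are arbitrary choices for illustration, and note it caches a detached DataTable rather than the SharePoint objects themselves:

using System;
using System.Data;
using System.Web;
using System.Web.Caching;
using Microsoft.SharePoint;

public static DataTable GetTaskData(SPList list)
{
    const string cacheKey = "TaskListData";

    DataTable tasks = HttpRuntime.Cache[cacheKey] as DataTable;
    if (tasks == null)
    {
        // Cache miss: run the query once and keep the result for five minutes
        // (GetDataTable() returns null for an empty collection - handle as appropriate)
        tasks = list.GetItems(new SPQuery()).GetDataTable();
        HttpRuntime.Cache.Insert(cacheKey, tasks, null,
            DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
    }

    return tasks;
}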

Make Use of SharePoint’s Built-In Caching Functionality

SharePoint also has other built-in forms of caching that can be harnessed for your site, both identified in Claudio’s article above: the Output Cache and the BLOB Cache. There are pitfalls associated with caching and these need to be considered, however caching is one of the most powerful methods of improving the load time of your SharePoint site. For more information on BLOB caching I’d recommend reading Sean McDonough’s Do You Know What’s Going to Happen When You Enable the SharePoint BLOB Cache? – it’s a long but quality post. For Output Caching (and caching in general) have a read of Tobias Zimmergren’s Caching in SharePoint 2010. As a side note, particularly for SharePoint 2007 you may need to consider the issues Chris O’Brien identifies in his article Optimization, BLOB caching and HTTP 304s.
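For reference, the BLOB Cache is switched on via the BlobCache element in the web application’s web.config – something along these lines, where the location, file-type pattern and sizes shown are illustrative rather than prescriptive (the articles above cover the details):

<BlobCache location="C:\BlobCache\14"
           path="\.(gif|jpg|jpeg|png|css|js)$"
           maxSize="10"
           max-age="86400"
           enabled="true" />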

Get Rid of those Unnecessary SharePoint JavaScript Files

If your site has been extended to be a purely anonymous internet site then chances are a bunch of JavaScript is being downloaded to the page that will never be used by the end user, and you should strongly consider changing that. Microsoft covered this for SharePoint 2007 in a section (within a larger article) titled Managing Page Payload (Small Is Good). Chris O’Brien revisited it for SharePoint 2010 in his post Eliminating large JS files to optimize SharePoint 2010 internet sites and Mahmood Hamed extended on that in his article Eliminate large JS files to optimize SharePoint 2010 internet sites: the complete reference.
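The linked articles cover the supported approaches in depth; the underlying idea, though, is simply to avoid registering editing-oriented scripts for anonymous visitors. A simplified, hypothetical sketch of that idea in a control’s code-behind (the script name is just an example):

using System;
using System.Web;
using Microsoft.SharePoint.WebControls;

protected override void OnLoad(EventArgs e)
{
    base.OnLoad(e);

    // Only authenticated (content-editing) users need the heavyweight scripts
    if (HttpContext.Current.Request.IsAuthenticated)
    {
        ScriptLink.Register(this.Page, "cui.js", false);
    }
}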

Consider Warming up your SharePoint Site

Warming up the pages across your SharePoint site is a good idea for two reasons. Firstly, ASP.NET’s Just-In-Time compiler will often cause the first access of a site to be extremely slow compared to usual. Secondly, the caching techniques discussed previously only kick in once a page has been visited for the first time. By running a script or job to warm up the site you can avoid the ‘first hit’ performance lag associated with these issues. There are a number of sources of information regarding this topic but one such article that seems to collate a lot of them is Wahid Saleemi’s Roundup: SharePoint Warm-Up Scripts.
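The scripts in that roundup do the job admirably; for the sake of illustration, though, the essence of a warm-up job is nothing more than requesting each page as a privileged account so compilation and caching kick in before real users arrive. A bare-bones sketch with placeholder URLs:

using System;
using System.Net;

class WarmUp
{
    static void Main()
    {
        // Placeholder list - a real job would enumerate the farm's site collections
        string[] urls =
        {
            "http://server/Pages/default.aspx",
            "http://server/subsite/Pages/default.aspx"
        };

        foreach (string url in urls)
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            request.Credentials = CredentialCache.DefaultCredentials;

            // Dispose of the response so connections are released promptly
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("Warmed {0}: {1}", url, response.StatusCode);
            }
        }
    }
}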

So that about sums it up – my take on a start-to-finish process for performance optimising a SharePoint site. When approaching this task with limited time and budget it’s important to target the factors you believe will offer the greatest gain for the time required to implement. With any luck the completed tasks will prove successful enough that future projects will be undertaken with performance in mind right from the start.

Performance Optimising SharePoint Sites – Part 2

In Performance Optimising SharePoint Sites – Part 1 of this series I focussed on some of the first steps you should undertake or consider when embarking on performance optimising your SharePoint site. This part of the series will explore some of the platform-independent techniques at your disposal, while Performance Optimising SharePoint Sites – Part 3 will identify some of the SharePoint-specific techniques that can be leveraged.

Before I delve into some of the techniques available I should point out that the information on this subject is extensive. It may seem a bit redundant even including this section in my series given the wealth of information available elsewhere, however I’ve done so for completeness. Both Google and Yahoo offer terrific sources of information on this subject matter if you’d prefer to go directly to the source – I’ll try to differentiate my contribution by linking to alternate tools and resources.

Minimise HTTP Requests

Reducing the number of HTTP requests on your page is one of the best ways to reduce load time, particularly for first-time visitors. As pointed out on Yahoo’s performance rules page via Tenni Theurer’s Browser Cache Usage – Exposed!, 40–60% of daily visitors to your site come in with an empty cache. Every request requires a round trip to the server so it’s easy to see how this component of page optimisation could provide a significant improvement to the overall load time of the page. The majority of techniques that can be used to achieve this are identified in the following three sections.

Optimise your JavaScript

There are a number of ways in which the JavaScript on your site can be performance optimised. Firstly, you should Move Scripts to the Bottom as Steve Souders explains – this enables progressive rendering and helps to achieve parallel downloads, though it should only be done where it doesn’t impact the visual rendering of the page. You should ensure that JavaScript isn’t duplicated within your site, either across various libraries or particularly in terms of referencing the same file multiple times (which can happen if you’ve included the reference in multiple web parts or controls). Your JavaScript should live in external files rather than inline so that it can be cached and hence not add to the size of the HTML document. You should combine your JavaScript into as few external files as possible to minimise the number of HTTP requests, and finally you should minify those external files. There are a number of resources available to assist you in automating the minification process, both for Visual Studio as described in Dave Ward’s post Automatically minify and combine JavaScript in Visual Studio, or if you prefer it with a SharePoint flavour, via Waldek Mastykarz’s Minifying JavaScript and CSS files made easy with Mavention SharePoint Assets Minifier.

Optimise your CSS

As with JavaScript, there are a number of optimisations you can perform on your CSS. Whereas you should load your JavaScript at the bottom of the page to enable progressive rendering, with CSS you should Put Stylesheets at the Top. You should always refactor your CSS as you would your code to ensure it is optimal and doesn’t repeat definitions – take advantage of the fact you can assign multiple classes to your elements, for instance where appropriate include a base CSS class which holds the shared definitions and another which holds the unique ones. You should externalise your CSS for the same reasons you should your JavaScript, and combine it into as few external files as appropriate (find the balance between reducing HTTP requests and combining a large amount of CSS which is only used on one or two pages). Finally, you can minify your CSS in much the same fashion as your JavaScript.

Optimise your Images

Along with HTTP requests, the size of images is one of the biggest factors in determining page load times. There are three major areas of focus when it comes to optimising your images. Firstly, it is important that the images are as optimised as possible to begin with – that is, achieving smaller file sizes without reducing the quality of the image itself. A number of tools exist to assist with this, highlighted in Jacob Gube’s post 8 Excellent Tools for Optimizing Your Images – the one I’m most familiar with is Yahoo’s Smush.it. Secondly, it’s important to ensure that you’re not relying on the browser to resize your images; it is common to find large images reduced in size by the dimensions of the containing element, which is definitely not optimal. Finally, CSS sprites can be used to reduce the number of HTTP requests required to retrieve the images, as explained in Chris Coyier’s CSS Sprites: What They Are, Why They’re Cool, and How To Use Them. One thing to consider is that you may be able to optimise the images for the site initially, but if content is being added by end users who upload and reference unoptimised, HTML-resized images, you’re at risk of losing the optimised nature of those pages. One final thing to note: if you’re displaying a page with a known transition path you can pre-load the images that will appear on the next screen/page to further optimise that page’s load time, using a method such as CSS Ninja’s Even better image preloading with CSS2.

Utilise GZip Compression

GZip compression is another way in which you can reduce the size of the payload coming back from the server to the user accessing your site. The majority of browsers support GZip compression and hence it should not be ignored. For a better explanation of why it is beneficial and what it actually does, have a read of Kalid Azad’s How To Optimize Your Site With GZIP Compression. For implementation details on IIS6 see Bill Baer’s HTTP Compression, Internet Information Services 6.0, and SharePoint Products and Technologies or, for IIS7, Todd Sharp’s Enabling GZip Encoding On IIS7.
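As a taste of what’s involved on IIS7, compression is toggled under the system.webServer node in web.config – a sketch only, as the linked articles cover the details (compression levels, which MIME types to include and so on):

<system.webServer>
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
</system.webServer>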

In Performance Optimising SharePoint Sites – Part 3 of this series I’ll identify some of the SharePoint-specific techniques that can be leveraged.