How I passed 70-480

Over the course of last year and the beginning of this one I went about studying for and completing the 4 SharePoint 2010 certification exams, blogging how I went about it. It’s been half a year or so since I last targeted a certification, so I figured it was about time I got back on the wagon and started attempting to gain the SharePoint 2013 certifications. Considering the ‘How to’ posts for 2010 were some of my most popular, it would be remiss of me not to document my journey through the 2013 stream in the same way. Kicking things off was the Programming in HTML5 with JavaScript and CSS3 exam.

It was a little different approaching this exam – on the one hand it isn’t really SharePoint per se, so it may seem a little outside our collective comfort zones. For me however, it’s what I’ve been doing for a fair while now, so I went into it with confidence. Before going into the resources I used to pass, I thought it worth noting that while I did very well, it felt like one of the longest exams I’ve sat – it was important to carefully read the questions and work through the answers more than usual, or maybe that was just me. Just a heads up that it probably pays to be a little more careful than usual.

So on to the resources out there to help you pass. To be honest, there are a lot. I was quite impressed with the range of options available – maybe because this was a more generic topic, the range of people completing it (and blogging about it) was greater. Perhaps it’s just because it’s a pretty interesting topic and enjoyable to comprehensively study for.

I took a couple of different angles at it. Firstly I read the articles linked to by Microsoft themselves in the skills measured section of the page I linked to previously. My first impression was that the content was pretty thin, until I got to the CSS section! In hindsight though, I think rather than thin it is actually quite targeted – perhaps not comprehensive enough to guarantee a pass, but it covers enough of the topics that you can expect to come up against. I’d suggest they’re worth reading, which is good to know seeing as it was the first time I’d relied on the links provided by Microsoft for a study plan.

The other resource I used was the training videos created by Jeremy Foster and Michael Palermo – Developing HTML5 Apps Jump Start. I couldn’t recommend these highly enough – they were interesting, fun to watch and weren’t ridiculously long (the jump start training series based on these videos can also be completed here). They helped reinforce that the articles linked to above were actually well targeted, and also touched on a wider spread of content, most of it still particularly relevant. Seeing as I had a bit of time afterwards, I also read Matthew Hughes’ accompanying blog series – you can find all the links on Michael’s page HTML5 Apps 70-480. I’d suggest it’s particularly wise to read up further on some of the more complex topics you’ll find in the last post – Matthew does a good job linking to extra resources worth reading.

That about did it – I found that was a decent amount of study to target my learning to the exam and pass relatively comfortably. I do have a fair bit of experience in these technologies however, so anyone with less may be interested in delving into the topics further using the resources below.

The first type I tracked down were the blogs which broke down the skills measured and provided targeted links to quality resources to study from. There was an abundance of these and, considering I didn’t use a particular one, I thought I’d list a few for consideration. You could use Chris Myers’ Microsoft Exam 70-480 Study Guide, Becky Bertram’s Exam 70-480 Study Guide, the post Exam 70-480 Study Material: Programming in HTML5 with JavaScript and CSS3, Adnan Masood’s Study Notes for 70-480 Programming HTML5 and CSS 3 Exam, Xtian’s Microsoft Exam 70-480 Certification Tips, Martin Bodocky’s 70-480 Programming in HTML5 with JavaScript and CSS3 – Preparation links or the forum post by Robert Kaucher 70-480 – Programming in HTML5 with JavaScript and CSS3. The list probably goes on and on – like I said, there are a lot of quality resources out there.

If clicking a million links isn’t your thing then Aidan Ryan created a series starting at Microsoft Exam 70-480: Programming in HTML5 With JavaScript and CSS3 where you can follow through reading his own study notes in the one place.

Then there’s always the paid options if you find you’re really stuck – the book Exam Ref 70-480: Programming in HTML5 with JavaScript and CSS3 by Rick Delorme and the nugget series Microsoft Visual Studio 2012 70-480 Programming in HTML5 with JavaScript and CSS3 by Garth Schulte of CBT Nuggets. Unfortunately for these guys, with the amount of free content available for this exam I’m not sure how necessary these options would be.

So that about covers it – as you can see there is no shortage of quality resources for this exam, and the content is actually quite interesting and enjoyable to learn, so I’d encourage you to dive right into it, play around and complete the exam with a pretty short turnaround time. Until next time – good luck!

Harnessing SignalR in SharePoint 2013 (Office 365)

Over a year ago I wrote a post on Harnessing SignalR in SharePoint. Back then, SignalR was fairly new and the amount of information out there was limited. Getting SignalR to work with SharePoint posed some challenges, all of which are outlined in that post, however eventually we got it working. Since then I’ve had little to do with the technology, however the colleague I worked with on that proof of concept, Elliot Wood, has continued to keep on top of the changes to the library and the implications they may have for SharePoint integration. While I’m posting this to my blog, I’m doing so on behalf of Elliot, who is responsible for all of the magic soon to be displayed.

Even though there was a fair amount of excitement in regards to SignalR, and particularly the work we did around getting it working in SharePoint, the number of subsequent posts along the same lines was fairly limited. One person who carried the torch was Maximilian Melcher – his post SignalR in SharePoint 2013 – the real-time web is coming! outlined that integrating SignalR in SharePoint 2013 wasn’t as easy as one might expect – even though we were now on the .NET 4.0 framework, which should have made life much easier. Max ended up getting his solution working and subsequently posted a CodePlex solution, however the post was enough to pose the challenge to Elliot to get it working as easily as possible.

The solution presented uses an autohosted App for SharePoint 2013 hosted on Office 365. With Microsoft’s direction seemingly steering away from full-trust and sandboxed solutions and straight towards the App model, this can be considered the most viable and future-proof solution to the ‘SharePointR’ dream.

To keep this post focused on integrating SignalR with SharePoint, I’m going to avoid the specifics of SignalR and how to program with it. I’m also going to avoid specifics on how to create and deploy SharePoint Apps. We’ve come a long way with SignalR and the information out there is much better – I’d recommend reading ASP.NET SignalR Hubs API Guide – Server (C#) if you need an overview of programming with SignalR. For those that need a primer on SharePoint Apps, particularly in relation to provider-hosted Apps on Office 365 (the more ‘production-safe’ way of hosting Apps on Office 365), then head over to Chris O’Brien’s post Deploying SP2013 provider-hosted apps/Remote Event Receivers to Azure Websites (for Office 365 apps) for a quick read.

Now that you’re comfortable with the theory, it’s on to the solution. There are 2 pieces to the autohosted App puzzle – the SharePoint App and the SharePoint App Web.

SharePoint App

The purpose of this project is to define the Remote Event Receiver (RER) and also configure the Installed Event Endpoint for the App. The former essentially sets the types of events that will trigger the remote service and points the App to that service, the latter points to the service which will trigger when the App is installed.

Adding the RER is as simple as right-clicking the project and adding a new Remote Event Receiver and selecting the events you want to listen out for. Everything else will be taken care of for you including creating the Package, Feature and tying the RER to its class which will be hosted in the SharePoint App Web.
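For reference, the declarative wiring Visual Studio generates in the RER’s Elements.xml looks something along these lines (the receiver name, list template ID and service path here are illustrative – they will match whatever you chose when adding the receiver):

    <?xml version="1.0" encoding="utf-8"?>
    <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
      <Receivers ListTemplateId="104">
        <Receiver>
          <Name>RemoteEventReceiverItemAdded</Name>
          <Type>ItemAdded</Type>
          <SequenceNumber>10000</SequenceNumber>
          <Url>~remoteAppUrl/Services/RemoteEventReceiver.svc</Url>
        </Receiver>
      </Receivers>
    </Elements>

The ~remoteAppUrl token is what ties the receiver back to the remote service hosted in the App Web.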

Configuring the Installed Event Endpoint requires you to edit the properties of the SharePoint App project and set the Handle App Installed property to true. Once again Visual Studio takes care of the rest.

The final step is to access the Permissions tab in the AppManifest.xml editor and add the necessary permissions to the Web and List to ensure your App has the required levels of access to your site.
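Put together, the relevant fragments of AppManifest.xml end up looking something like the following sketch (the endpoint path, scopes and rights shown are illustrative – request only the minimum your App actually needs):

    <Properties>
      <Title>SharePointR</Title>
      <StartPage>~remoteAppUrl/Pages/Default.aspx?{StandardTokens}</StartPage>
      <InstalledEventEndpoint>~remoteAppUrl/Services/AppEventReceiver.svc</InstalledEventEndpoint>
    </Properties>
    <AppPermissionRequests>
      <AppPermissionRequest Scope="http://sharepoint/content/sitecollection/web" Right="Manage" />
      <AppPermissionRequest Scope="http://sharepoint/content/sitecollection/web/list" Right="Manage" />
    </AppPermissionRequests>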

That’s all there is to it.

SharePoint App Web

Setting up the SharePoint App Web project is a little more involved than the above, however it is still relatively simple. The first step is to add a SignalR Hub Class to the project – having this native in Visual Studio is fantastic and greatly simplifies the process of getting up and running with SignalR, ensuring all necessary references are added to the solution (note that you must have installed the ASP.NET and Web Tools 2012.2 update before you can add this class natively). Alternatively you can add SignalR via the Package Manager Console.
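If you go down the Package Manager Console route, the command is along the lines of the following (this solution was built against the 1.0.x line of SignalR, so you may want to pin the version accordingly):

    Install-Package Microsoft.AspNet.SignalR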

Elliot has also decided to implement the hub using a Singleton instance for optimum performance. For this you’ll need to add another class and insert the following code:

    public class SharePointR
    {
        // Singleton instance
        private readonly static Lazy<SharePointR> _instance = new Lazy<SharePointR>(() =>
            new SharePointR(GlobalHost.ConnectionManager.GetHubContext<SharePointRHub>().Clients));

        private SharePointR(IHubConnectionContext clients)
        {
            Clients = clients;
        }

        public static SharePointR Instance
        {
            get
            {
                return _instance.Value;
            }
        }

        private IHubConnectionContext Clients
        {
            get;
            set;
        }

        public void NotifyDataChanged(string ListName, string Event)
        {
            Clients.Group(ListName).dataChanged(Event);
        }
    }

Once that’s been created we can edit the Hub we added earlier and insert:

    [HubName("SharePointRHub")]
    public class SharePointRHub : Hub
    {
        private readonly SharePointR _sharePointR;

        public SharePointRHub() : this(SharePointR.Instance) { }

        public SharePointRHub(SharePointR sharePointR)
        {
            _sharePointR = sharePointR;
        }

        public void Subscribe(string ListName)
        {
            Groups.Add(Context.ConnectionId, ListName);
        }

        public void UnSubscribe(string ListName)
        {
            Groups.Remove(Context.ConnectionId, ListName);
        }

        public void NotifyDataChanged(string ListName, string Event)
        {
            _sharePointR.NotifyDataChanged(ListName, Event);
        }
    }

One final piece to the SignalR puzzle is to add a Global.asax file which should include the following:

        protected void Application_Start(object sender, EventArgs e)
        {
            var hubConfiguration = new HubConfiguration();
            hubConfiguration.EnableCrossDomain = true;
            hubConfiguration.EnableDetailedErrors = true;
            RouteTable.Routes.MapHubs("/signalr", hubConfiguration);
        }

Now we have our SignalR plumbing we can set up our event receivers. The first cab off the rank is the AppEventReceiver which will fire when the App is installed. What we want to achieve here is to manually hook up the item event receivers to a list in the host web, for instance an Announcements list.

    public class AppEventReceiver : IRemoteEventService
    {
        private string ReceiverName = "RemoteEventReceiver";
        private List<EventReceiverType> EventType = new List<EventReceiverType>()
        {
            EventReceiverType.ItemAdded,
            EventReceiverType.ItemAdding,
            EventReceiverType.ItemUpdated,
            EventReceiverType.ItemUpdating,
            EventReceiverType.ItemDeleted,
            EventReceiverType.ItemDeleting
        };
        private string ListTitle = "Announcements";

        public SPRemoteEventResult ProcessEvent(SPRemoteEventProperties properties)
        {
            SPRemoteEventResult result = new SPRemoteEventResult();

            using (ClientContext clientContext = TokenHelper.CreateAppEventClientContext(properties, false))
            {
                Web hostWeb = clientContext.Web;
                clientContext.Load(hostWeb);

                List docLib = clientContext.Web.Lists.GetByTitle(ListTitle);

                string opContext = OperationContext.Current.Channel.LocalAddress.Uri.AbsoluteUri.Substring(0,
                   OperationContext.Current.Channel.LocalAddress.Uri.AbsoluteUri.LastIndexOf("/"));
                string remoteUrl = string.Format("{0}/RemoteEventReceiver.svc", opContext);

                if (properties.EventType == SPRemoteEventType.AppInstalled)
                {
                    foreach (var eventType in EventType)
                    {
                        EventReceiverDefinitionCreationInformation newEventReceiver = new EventReceiverDefinitionCreationInformation()
                        {
                            EventType = eventType,
                            ReceiverAssembly = Assembly.GetExecutingAssembly().FullName,
                            ReceiverClass = "SignalRProxyService.Services.RemoteEventReceiver",
                            ReceiverName = ReceiverName + eventType.ToString(),
                            ReceiverUrl = remoteUrl,
                            SequenceNumber = 1000
                        };
                        docLib.EventReceivers.Add(newEventReceiver);
                    }
                    clientContext.ExecuteQuery();
                }
                else if (properties.EventType == SPRemoteEventType.AppUninstalling)
                {
                    // The receivers were registered with names of the form ReceiverName + eventType,
                    // so match on the prefix rather than exact equality. Note also that LoadQuery
                    // only queues the request - ExecuteQuery must run before the results can be enumerated.
                    IEnumerable<EventReceiverDefinition> receivers = clientContext.LoadQuery(docLib.EventReceivers);
                    clientContext.ExecuteQuery();

                    foreach (var rec in receivers.Where(e => e.ReceiverName.StartsWith(ReceiverName)).ToList())
                    {
                        rec.DeleteObject();
                    }
                    clientContext.ExecuteQuery();
                }
            }
            return result;
        }

        public void ProcessOneWayEvent(SPRemoteEventProperties properties)
        {
            // This method is not used by app events
        }
    }

We then want to code up the Remote Event Receiver to handle the events triggered on the site and push a message via SignalR to our page. This is a very simplistic example, showing that changes to items in the list we’ve subscribed to are what cause the update on the page.

    public class RemoteEventReceiver : IRemoteEventService
    {
        private SharePointRHub client = new SharePointRHub();

        public SPRemoteEventResult ProcessEvent(SPRemoteEventProperties properties)
        {
            client.NotifyDataChanged(properties.ItemEventProperties.ListTitle, properties.EventType.ToString());
            return new SPRemoteEventResult();
        }

        public void ProcessOneWayEvent(SPRemoteEventProperties properties)
        {
        }
    }

The final step is to code up the page. There are a number of aspects here which should be completed to get the branding right for the App – these are covered in greater detail in the how-to guide listed at the end of this post. For now I’ll highlight the main code needed to hook up SignalR on the page (note that references to scripts not yet mentioned in this post are also covered in that guide).

    <!--Script references. -->
    <script src="//ajax.aspnetcdn.com/ajax/4.0/1/MicrosoftAjax.js"></script>
    <script src="/Scripts/jquery-1.8.2.min.js"></script>
    <script src="/Scripts/json2.js"></script>
    <script src="/Scripts/chromeLoader.js"></script>
    <script src="/Scripts/jquery.signalR-1.0.0.js"></script>
    <script src="/signalr/hubs"></script>

    <!--Add script to update the page and send messages.-->
    <script type="text/javascript">
        $(function () {
            var connection = $.hubConnection();
            var proxy = connection.createHubProxy('SharePointRHub');
            proxy.on('dataChanged', function (eventType) {
                $('#MainContent').append('<li>Data changed - ' + eventType + '</li>');
            });
            connection.start()
                .done(function () {
                    //Subscribe to Announcements list
                    proxy.invoke('Subscribe', "Announcements");
                    $('#MainContent').append('<span>Now connected, connection ID=' + connection.id + '</span>');
                })
                .fail(function () {
                    $('#MainContent').append('<span>Could not Connect!</span>');
                });
        });
    </script>

And that’s about it! The image below gives a sneak peek into how an event triggered via a SharePoint list can be reflected on a separate page in real time. It is far more impressive seeing it in person however, so I’d encourage you to follow the guide provided below and get ‘SharePointR’ up and running for yourself – the possibilities this technology opens up are limitless and it will be exciting to see the kinds of solutions designed in the near future that leverage SignalR within SharePoint.

Download: Walkthrough – Deploying a SignalR autohosted App to Office 365

How my latest mobile-optimised site was built using SharePoint 2013

About a year and a half ago now I wrote about How the Career Centre mobile site was built using SharePoint. Having recently worked on another large scale public facing website with the requirement for mobile-optimised capabilities the opportunity has presented itself to compare how far the product has come and offer up another technique to build your mobile-optimised sites in SharePoint. This post will be made up of 2 parts – the background information and decision making process which lead to the solution, and the solution itself. If you’re only interested in the latter then look for the heading ‘The Solution’ further down the page.

The Decision

Due to the site not yet having been released I’ve decided to keep the specifics of the site under wraps which is why this post will not include any screenshots of the design or links to view the site itself, however this should not detract from its content. Once the site does go live I may come back to this post and fill in the blanks.

The first thing that came to mind when faced with this requirement was Device Channels and Image Renditions. Whenever a new version of the platform comes out it’s spruiked as the saviour for a number of different use cases; however, in reality things tend to fall short when it comes to actually implementing them in the real world, so I was interested to see how it would go. I figured that seeing as I had some success with the Career Centre site technique, I could always fall back to that option if need be.

I started out as I usually do, looking for as much information as I could find regarding building mobile-optimised sites on SharePoint 2013. Surprisingly, content was fairly sparse. As usual with anything WCM related, Waldek Mastykarz was one of the first names I came across. His article on Optimizing SharePoint 2013 websites for mobile devices is one that should definitely be read when comparing the techniques of Device Channels and Responsive Designs. I’d also recommend reading his articles on Device Channels in SharePoint 2013 and Image Renditions in SharePoint 2013. For a bit of context or counter-argument, Shawn’s SharePoint 2013 and WCXM post should also be read.

Although it’s targeted at SharePoint 2010, Mike Hacker has written a good article on Mobile Access for SharePoint 2010 Internet Sites which provides a number of other alternate options for consideration.

As always when faced with a particular problem, there are some requirements which will dictate the solution. In this case there were a few: the mobile design was being provided with no consideration of the previous desktop design’s HTML; users would need to be able to shift from the mobile site to the desktop site and vice versa (by clicking a link); and all pages would require a mobile equivalent, although the mobile site might contain many more pages, or a different structure of pages, depending on the content being displayed. Just to make it trickier, a couple of pieces of functionality were exceptions to the ‘all pages require a mobile equivalent’ rule and needed to remain desktop-only.

Responsive layouts were immediately ruled out, considering the two designs had been created independently of each other – the original view was designed to perform reasonably on mobile devices, however we also needed to implement a completely mobile-optimised experience.

Altering the existing design using client-side technologies was also ruled out. This approach had been used by some colleagues of mine in the past however time was of the essence and I feared how long that would take to implement, let alone the complexity of customisation that would need to be maintained.

Device Channels did not immediately look promising. The requirement to be able to switch to a desktop view immediately concerned me, as I figured SharePoint would automatically take the mobile device to the mobile-optimised page. The differing structure of the mobile site concerned me a bit too, seeing as Device Channels appeared to be suited to a one-to-one mapping of pages. I’d also read a few other things which concerned me about Device Channels, particularly the lack of support for web part zones within Device Channel Panels. The following quote was taken from How to: Add a Web Part zone snippet in SharePoint 2013.

You cannot insert a Web Part zone inside a Device Channel Panel. If you want to allow authors to add Web Parts to a page, and if you are not concerned about the page weight for mobile devices, you can add a Rich Text Editor page field to a Device Channel Panel, and then instruct authors to add Web Parts there. You can add Web Parts directly to a Device Channel Panel (without a Web Part zone).

That seemed like a deal-breaker, as this was a highly complex site and web parts were used throughout. It seemed I was left with the Career Centre option, but I was going cold on that one too – I had concerns around redirecting the user (this was primitive in the previous implementation and only worked at the site’s root URL – not something I wanted to repeat, or even could repeat, given I was using friendly URLs). I had misgivings around the manual maintenance of pages, ‘mobile links’ and the client-side code which altered content links and rendered the popup warning the user about non-optimised pages. I decided to bite the bullet and push on with Device Channels to see how I could make it work regardless of its shortfalls.

The Solution

Where there’s a will there’s a way. The mobile experience for the site was successfully implemented using Device Channels in SharePoint 2013.

The first piece of the puzzle was being able to immediately target all mobile devices by default, seeing as we weren’t going to go down the path of optimising for each individual device. The fallback user agent substitution string $FALLBACKMOBILEUSERAGENTS; came in handy here – you can read more about how it works in Waldek’s Device Channels post linked above.
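To make that concrete, the channel definition (managed under Site Settings > Device Channels) ended up looking something like the following sketch – the name and alias are whatever you choose, the alias being what you later reference in a MobilePanel’s IncludedChannels and in any channel override:

    Name:                    Mobile
    Alias:                   Mobile
    Device Inclusion Rules:  $FALLBACKMOBILEUSERAGENTS;
    Active:                  Yes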

The concern around not being able to switch between a mobile and desktop view was also countered. It is possible to override the device channel in a number of different ways. For testing purposes, specifying ?DeviceChannel=channel is a great way to achieve it, however you can also use cookies for a more permanent and lasting result. I placed a link in the footer of both the mobile and desktop master page linking to the other, which when clicked ran the following code (Default or Mobile depending on the current state):

Response.Cookies["deviceChannel"].Value = "Default";
Response.Redirect(Request.RawUrl);

This was wrapped in an UpdatePanel so that the postback and redirect didn’t appear to the user to be 2 separate things, however it’s probable I could have also achieved this solely using client-side code. The redirect was necessary as the link event is handled too late in the page lifecycle for it to take effect in that load.

The differing structure between the mobile and desktop sites was also easily remedied. Firstly, the mobile-only pages were created and given managed navigation nodes with friendly URLs, however were removed from the Global and Current navigations. This ensured that the links would only appear where they were specifically placed on the mobile version. To counter the user going back to the desktop from one of these mobile-specific pages I added the following code to the redirect code above:

string rawUrl = Request.RawUrl;

// Some of the mobile specific pages don't have equivalent desktop versions - so we need to manipulate the redirect to the base location
if (rawUrl.Contains("xxx")) Response.Redirect("/parent-of-xxx");
else if (rawUrl.Contains("/m/aaa") || rawUrl.Contains("/m/bbb") || rawUrl.Contains("/m/ccc")) Response.Redirect("/");
else Response.Redirect(Request.RawUrl);

The web part zone issue was actually not as limiting as you’d first think. The main issue is that when a web part exists in a web part zone, but that zone does not exist in the other device panel, it will automatically be moved to the next available web part zone. This, as you can imagine, causes havoc with your layouts. The solution of adding the web parts to a rich text zone (or the Publishing HTML zone by default) is all well and good if you want that web part displaying on both the desktop and mobile versions – however this particular site often had significantly different displays (and HTML) for the same list-driven content, and thus 2 separate web parts were required.

The solution was actually quite simple. Web part zones CAN be placed within a device panel – just place them in one (the default panel). You can place all your desktop-specific web parts there. I included 2 extra Publishing HTML fields in the mobile channel to serve as pseudo-web part zones, and all mobile-specific web parts were placed there. The final resulting layouts looked something like this:

<asp:Content ContentPlaceHolderID="PlaceHolderMain" runat="server">
    <WebPartPages:SPProxyWebPartManager runat="server" id="ProxyWebPartManager">
    </WebPartPages:SPProxyWebPartManager>
    <PublishingWebControls:MobilePanel runat="server" IncludedChannels="DEFAULT">
        <div class="html-structure-has-been-removed">
	    <webpartpages:webpartzone runat="server" id="WebPartZoneCentreTop" title="Webparts Centre Top" partchrometype="None">
                    <zonetemplate></zonetemplate>
            </webpartpages:webpartzone>
	    <PublishingWebControls:RichHtmlField ID="PublishingPageContent" FieldName="PublishingPageContent" runat="server" />
	    <webpartpages:webpartzone runat="server" id="WebPartZoneCentreBottom" title="Webparts Centre Bottom" partchrometype="None">
                    <zonetemplate></zonetemplate>
            </webpartpages:webpartzone>
	</div>
    </PublishingWebControls:MobilePanel>
    <PublishingWebControls:MobilePanel runat="server" IncludedChannels="Mobile">
        <div class="html-structure-has-been-removed">
            <%-- Need to use publishing fields rather than web part zones so existing web parts don't automatically
                 move across into the mobile display --%>
            <PublishingWebControls:RichHtmlField ID="MobileContentTop" FieldName="MobileContentTop" runat="server" />
            <PublishingWebControls:RichHtmlField ID="MobilePublishingPageContent" FieldName="PublishingPageContent" runat="server" />
            <PublishingWebControls:RichHtmlField ID="MobileContentBottom" FieldName="MobileContentBottom" runat="server" />
        </div>
    </PublishingWebControls:MobilePanel>
</asp:Content>

Ensuring particular pages were desktop-only was also relatively simple. All that was required was one or more page layouts dedicated specifically to those pages, which overrode the Master Page setting for that layout. This can be achieved with the following code snippet in the page layout:

<script runat="server">
protected override void OnPreInit(EventArgs e)
{
    base.OnPreInit(e);
    this.MasterPageFile = "DefaultChannel.master";
}
</script>

While I mentioned previously that separate mobile-specific web parts were created to generate the mobile-specific HTML, it wasn’t the only customisation required. Mobile-specific display templates were also created to ensure any content-by-search or search-result web parts rendered out the correct mobile-friendly HTML. These steps wouldn’t have been as necessary if the original design was based on a common HTML structure.

The last piece of the puzzle was handling image functionality for the mobile version of the site – either large images which appeared on the page within content or an image carousel web part. The latter I decided to only include in web part zones rather than within page content to ensure they wouldn’t display on the mobile version (this sometimes resulted in also having to maintain the page heading in a content editor web part and separately in the top mobile content zone to achieve the right look and feel). The former is where I figured image renditions would come in.

Image renditions have been touted as a great feature for mobile-optimised sites, as have Device Channels, so I would have assumed they’d work perfectly together. Unfortunately, they don’t. As the image renditions are displayed via a query string value in the source link stored in content, this is not adjustable out of the box depending on which device channel is being displayed. This was hugely disappointing in my opinion and it seemed like a great opportunity had been missed – in the end I just decided that smaller, web-optimised images could remain in content and any large images could exist within web part zones so they didn’t appear on the mobile site. Waldek has actually provided a work-around for this issue with his Responsive Image Renditions with SharePoint 2013 functionality, however it’s something I decided I could do without.
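To illustrate why this falls down, the rendition is chosen by a query string parameter baked into the markup stored in page content, so every channel receives exactly the same image variant (the path and RenditionID value here are purely illustrative):

    <img src="/PublishingImages/banner.jpg?RenditionID=5" alt="Banner" />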

So overall, the approach I was able to take in SharePoint 2013 to build a mobile-optimised site ended up, in my opinion, miles ahead of what I managed to achieve in MOSS 2007. The amount of time it took to implement the entire site was minimal compared to the impact it has – the majority of mobile-optimised web parts only required changing the HTML rendered in ASP.NET repeaters and were able to leverage the same cached data as the desktop site, meaning performance is good. While there were some hurdles to jump, a number of these would have been easily mitigated if both the desktop and mobile designs were based on the same HTML structure – a happy medium between a fully responsive design and two completely separate sites.

If you’re tasked with building a mobile optimised version of your public facing website I’d definitely recommend looking into Device Channels to implement your solution and hope that some of the techniques listed here help you on your journey.

Using SPWebConfigModification to Update the Web.config in SharePoint 2013

Seven months ago I wrote an article on Jumping the Hurdles of using SPWebConfigModification to Update the Web.config – that article was based on my experiences in SharePoint 2010, and having researched the topic thoroughly at the time I was interested to see how things fared in SharePoint 2013. Thankfully I had the chance to architect a solution from the ground up in 2013, which gave me the opportunity to once again adhere to the vow I made in Application Settings in SharePoint to never manually modify the web.config file.

While this post has not been as thoroughly researched at the time of writing as I usually would like, a quick search on SPWebConfigModification in SharePoint 2013 brought up few results and when I was looking into it for the project mentioned above, little had been written about it. A recent question on the OZMOSS mailing list showed that there were still some questions around how SPWebConfigModification did (or didn’t) work so I thought it would be useful to document my findings here for anyone wanting to ‘do the right thing’ and keep their hands off the web.config file.

For a bit of background for this post I’d thoroughly recommend you read the article I link to above – the purpose of this post is to compare the behaviour of the functionality between versions of the platform and it will make far more sense if you understand my (and others) previous findings.

However for those of you who care little about the past and just want to know what the deal is in 2013 – here is the quick summary:

SPWebConfigModification in SP2010 came with a number of issues, none of which were completely insurmountable with a little work. The majority were already documented by others (and I link to them in that post), however one phenomenon had little written about it and even less in terms of a solution.

This problem was that even though the removal code ran successfully, and even removed the entry from the modification collection, the element in the web.config file still physically existed. I came up with a workaround where, in the removal code, I first applied another modification reverting the entry back to its original state, then ran the code to remove all modifications.

So how did this fare in SP2013? There was good news and bad news. On the plus side, this issue had been fixed! Removing the entry from the modification collection successfully removed the entry from the web.config file. On the negative side it came with a nasty side effect – if the modification you were removing was a change to an attribute on a pre-existing element then that whole element was removed from the web.config, not just your change.

This was clearly unacceptable behaviour particularly if the entry being removed was essential to successfully loading the website.

Once again however I managed to jump the hurdles that always seem to exist with this tricky class. The workaround this time was to remove all the customisations by default at the beginning of both the FeatureActivated and FeatureDeactivating functions and then add the customisation required. In FeatureActivated this would be your customised entry. In FeatureDeactivating this would be the original entry (if it existed in the web.config before your modifications). Again, this solution isn’t fool-proof and is prone to falling over in future iterations of the platform, however it is a solid workaround as things stand now. I’ve provided a code snippet to achieve this below:

public class WebConfigModificationsFeatureEventReceiver : SPFeatureReceiver
{
	// Due to what appears to be a bug, or just an unfortunate side-effect, in SP2013, using SPWebConfigModification on an existing
	// element replaces that element with a new one. When removing the customisation, that element is removed, meaning that the
	// original entry no longer exists. To counter this we re-add the original values in the deactivating feature.

	public override void FeatureActivated(SPFeatureReceiverProperties properties)
	{
		SPWebApplication webApp = properties.Feature.Parent as SPWebApplication;
		RemoveAllCustomisations(webApp);

		#region Enable session state

		SPWebConfigModification httpRuntimeModification = new SPWebConfigModification();
		httpRuntimeModification.Path = "configuration/system.web/pages";
		httpRuntimeModification.Name = "enableSessionState";
		httpRuntimeModification.Sequence = 0;
		httpRuntimeModification.Owner = "WebConfigModifications";
		httpRuntimeModification.Type = SPWebConfigModification.SPWebConfigModificationType.EnsureAttribute;
		httpRuntimeModification.Value = "true";
		webApp.WebConfigModifications.Add(httpRuntimeModification);

		httpRuntimeModification = new SPWebConfigModification();
		httpRuntimeModification.Path = "configuration/system.webServer/modules";
		httpRuntimeModification.Name = "add[@name='Session']";
		httpRuntimeModification.Sequence = 0;
		httpRuntimeModification.Owner = "WebConfigModifications";
		httpRuntimeModification.Type = SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode;
		httpRuntimeModification.Value = "<add name='Session' type='System.Web.SessionState.SessionStateModule' preCondition='' />";
		webApp.WebConfigModifications.Add(httpRuntimeModification);

		#endregion

		/*Call Update and ApplyWebConfigModifications to save changes*/
		webApp.Update();
		webApp.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications();
	}

	public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
	{
		SPWebApplication webApp = properties.Feature.Parent as SPWebApplication;
		RemoveAllCustomisations(webApp);

		#region Revert session state

		SPWebConfigModification httpRuntimeModification = new SPWebConfigModification();
		httpRuntimeModification.Path = "configuration/system.web/pages";
		httpRuntimeModification.Name = "enableSessionState";
		httpRuntimeModification.Sequence = 0;
		httpRuntimeModification.Owner = "WebConfigModifications";
		httpRuntimeModification.Type = SPWebConfigModification.SPWebConfigModificationType.EnsureAttribute;
		httpRuntimeModification.Value = "false";
		webApp.WebConfigModifications.Add(httpRuntimeModification);

		#endregion

		/*Call Update and ApplyWebConfigModifications to save changes*/
		webApp.Update();
		webApp.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications();
	}

	private void RemoveAllCustomisations(SPWebApplication webApp)
	{
		if (webApp != null)
		{
			Collection<SPWebConfigModification> collection = webApp.WebConfigModifications;
			int iStartCount = collection.Count;

			// Remove any modifications that were originally created by the owner.
			for (int c = iStartCount - 1; c >= 0; c--)
			{
				SPWebConfigModification configMod = collection[c];

				if (configMod.Owner == "WebConfigModifications")
				{
					collection.Remove(configMod);
				}
			}

			// Apply changes only if any items were removed.
			if (iStartCount > collection.Count)
			{
				webApp.Update();
				webApp.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications();
			}
		}
	}
}

So once again we’re left with an option to do things right which doesn’t quite act the way you’d expect but, as you can see, can still be used to achieve the desired end result. For all its ills in both versions of the product I’d still strongly recommend taking this approach to making changes to the web.config file over making manual modifications any day. I hope this post has made doing so a little easier moving forward.

UPDATES

We shouldn’t be surprised, but over time new hurdles have presented themselves which need documenting (and I apologise, the first one I mention I should have covered a long time ago when I encountered it!).

The approach documented above worked fine in my single-server development environment. As soon as we tried activating the feature in a multi-server farm test environment all sorts of issues occurred. This has been documented however and you can read about it in Jeremy Jameson’s post Waiting for SharePoint Web.config Modifications to Finish and Simon Doy’s post PowerShell to Detect Web Configuration Modification jobs.

The code changes required include a few functions to be added and a slight modification to the ApplyWebConfigModifications call, as indicated in the code below.

	// Internal name of the one-time timer job SharePoint creates to apply
	// web.config changes (as used in Jeremy Jameson's post linked above).
	private const string jobTitle = "job-webconfig-modification";

	private static bool IsJobDefined(SPFarm farm)
	{
		SPServiceCollection services = farm.Services;

		foreach (SPService service in services)
		{
			foreach (SPJobDefinition job in service.JobDefinitions)
			{
				if (string.Compare(job.Name, jobTitle, StringComparison.OrdinalIgnoreCase) == 0)
					return true;
			}
		}

		return false;
	}

	private bool IsJobRunning(SPFarm farm)
	{
		SPServiceCollection services = farm.Services;

		foreach (SPService service in services)
		{
			foreach (SPRunningJob job in service.RunningJobs)
			{
				if (string.Compare(job.JobDefinition.Name, jobTitle, StringComparison.OrdinalIgnoreCase) == 0)
					return true;
			}
		}

		return false;
	}

	private void WaitForOneTimeJobToFinish(SPFarm farm)
	{
		float waitTime = 0;

		do
		{
			if (!IsJobDefined(farm) && !IsJobRunning(farm))
				break;

			const int sleepTime = 500; // milliseconds

			// Thread.Sleep requires a using directive for System.Threading
			Thread.Sleep(sleepTime);
			waitTime += (sleepTime / 1000.0F); // seconds

		} while (waitTime < 20); // give up waiting after 20 seconds

	}

	/*Call Update and ApplyWebConfigModifications to save changes*/
	webApp.Update();
	WaitForOneTimeJobToFinish(webApp.Farm);
	webApp.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications();

The second issue is one we’ve encountered more recently: the inability to successfully extend a web application which already had the web.config modifications applied. There are two ways to approach this one – if you know beforehand, make sure you extend your web application before applying any of the web.config modifications. Otherwise, hope that you’ve implemented decent deactivation code, deactivate the features before extending, and then reapply them afterwards.
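For the ‘after the fact’ case, the sequence can be sketched in PowerShell. Note this is only a sketch – the feature name, zone and URLs below are placeholders, and it assumes a web application scoped feature like the one in the receiver above:

```powershell
# Placeholder identifiers - substitute your own feature name, zone and URLs
$webAppUrl = "http://intranet.contoso.com"

# 1. Deactivate the feature so its web.config modifications are removed
Disable-SPFeature -Identity "WebConfigModifications" -Url $webAppUrl -Confirm:$false

# 2. Extend the web application while the web.config is back in its original state
New-SPWebApplicationExtension -Identity $webAppUrl -Name "Contoso Extranet" `
    -Zone "Extranet" -URL "https://extranet.contoso.com"

# 3. Reactivate the feature to reapply the modifications
Enable-SPFeature -Identity "WebConfigModifications" -Url $webAppUrl
```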

Thoughts on the Salem™ Certified Practitioner Online Course

It’s been a while since I’ve completed a course for SharePoint, generally sourcing new knowledge myself from wherever I can get it. I recently however had the great opportunity to undertake the Salem™ Certified Practitioner online course and thought it worthwhile jotting down my thoughts on both the content and the course itself. In the interests of full disclosure it’s important to note that I was given this opportunity complimentary for my contributions to the SharePoint community, particularly my writings around Governance in SharePoint, however it by no means came with the promise of a favourable (or any) review, so I can assure you the following is written from an open and unbiased point of view.

I first came across Salem™ as a company when I stumbled across Step By Step Search Engine Optimization for SharePoint Internet Sites while reviewing existing material for my own series on Search Engine Optimisation (SEO) for SharePoint Sites. I was highly impressed that a lot of the content I already knew and wanted to communicate through my writings was so well structured into a complete and sequenced approach to implementing it – it’s a book that I’ve recommended to a number of people already. It motivated me to read through all 150 pages of SharePoint Governance & the Seven Pillars of Wisdom (which it now seems is only available through taking the practitioner course), which I came across during my readings for the Governance series I mentioned previously – this again left me highly impressed with the methodical approach to the topic.

It’s therefore no surprise to now find out that the whole premise behind the name Salem™ is that it represents a Structured and Logical Enterprise Methodology for approaching SharePoint implementations. To be honest it couldn’t have come at a better time for me personally – I was consuming a lot of great information but was struggling to identify how it could all be applied at a client (read: sell the ‘complete’ message rather than always approaching tasks with a specific focus). All of the presentations in this area tend to focus on a specific topic (for instance governance or adoption) which, while they all have merit, tend to be harder to sell to an organisation individually.

I’d go as far as saying that was the most valuable takeaway I received from completing the course. Even though Salem™ is a great delivery methodology and logical in its approach, more importantly it serves as a vehicle to sell the story of approaching SharePoint implementations as a complete and business-focused programme of work.

I found the framework itself to be very solid. From looking at it initially it wasn’t obvious how all the pieces would fit together, however after taking the course I’m comfortable that every piece to the puzzle is justified and covers the wide breadth of what SharePoint has to offer as a platform.

Salem™ is essentially broken up into 15 different business service modules which are often inter-related and positioned in the framework diagram for specific reasons. While I’d love to be able to show the diagram to better explain the process, you’ll have to take the course yourself to see how succinctly it encapsulates a complete body of work. The modules are divided into different channels which also have a logical separation, supported by a number of layers which remove the focus of technology and supporting concepts from the business services themselves (while still emphasising their importance to the ‘whole’) – allowing you to focus in on a particular service and break it down through various workshops.

As you can probably tell, I’m quite a fan of the framework and find it hard to fault, so I’ll turn my attention to the course itself and the certification it enables. I’d highly recommend, if you have an interest in these topics, that you take the course yourself and discover what it has to offer.

There is a bunch of information regarding the course which is worth a look if you have an interest in taking it or finding out a bit more about what it is all about. The Genius! website has a short synopsis and video which explains what the course will contain (it is the introduction video to the course so you get a feel for what the video material will be like, and if you have super human eyes you may even be able to gain a sneak peek into what the Salem™ diagram will end up looking like!), while if reading is more your thing then take a look at the course summary at The Independent which contains a great deal of information.

I found the course content to be well structured in terms of breaking up a huge amount of information into consumable parts. Most sections within the course contain introduction text, slides, a video and associated study reading material – you’re then required to take a short multiple-choice test. I did have a number of suggestions to improve aspects of the course (for instance including contextual notes in the slides, not providing the answers to the tests until they’re passed, and explaining why particular answers within the tests were right or wrong), all of which were taken on board and thoroughly explained – some were even already known and in the process of being adjusted. From what I could gather, the small number of improvements I could see were already well on the way to being made before I mentioned them, which is testament to the quality of the course in that it will continually improve.

I found the length of the course suitable – it covered off the information you needed to know in enough forms for it to sink in. I was also highly motivated at the start to consume the information which never faded throughout, also testament to the quality of the content. I’ve read that the course is meant to take roughly 40 hours however I managed to do it in less than half of that (granted I had already read the Governance book which shaved some time).

The one thing I did find is that the course left me craving more – particularly its minimal focus on breaking down each individual business service into separate workshops. Of course this is where the Master course comes in so it’s no great surprise this is the case – but I’d just warn that you should be prepared for the same desire and/or strongly consider purchasing one of the packaged course offerings for a cheaper price! I’m looking forward to the opportunity of completing both the Master course and the Presenting the Salem™ Framework Workshop Masterclass in the near future.

After completing the course you’re granted full membership of the World Association of SharePoint Business Strategists. I love the idea of the group, particularly as a way of grouping like-minded individuals, disseminating information and providing a code to work by, however I’m unconvinced such a group should be associated with one particular proprietary framework and also feel it would be given further significance if it was backed by Microsoft themselves. Those points aside, I admire the entrepreneurial mindset that identified the gap in the market and filled it, and I still consider myself a proud member!

If you’re left with any doubt about the quality of Salem™ then it’s worth having a read of Gartner’s Competitive Landscape: Microsoft SharePoint Consulting and Implementation Services, North America and Western Europe. While it is discussing it in the context of consulting services, it does give a strong insight into how a company like Gartner rates the framework itself. For another opinion on the Salem™ Certified Practitioner Online Course you can have a listen to Dell’s Simon Farquharson discuss his experiences with the course.

So that about covers it. I’d be happy to answer any questions anyone has about the framework or the course itself, and I’m sure Ian himself would be more than happy to do so as well, going by how generous he has been with his time and replies to the questions I had – so feel free to fire away.

Building public facing websites on SharePoint 2013 – Part 2

Following on from the slides discussion in Building public facing websites on SharePoint 2013 – Part 1, this article will dive into the 4 demonstrations which accompanied the session.

Branding with the Design Manager

Before I started, I had performed the following steps:

  1. Took the home page template, moved all resources into the same directory and adjusted the links (resources could be stored anywhere really, this is how the examples generally have it)
  2. Removed any ‘form’ elements and replaced them with DIVs (these could be made functional later)
  3. Ensured the template was XHTML compliant
  4. Created a new web application with a site collection using the Publishing template

Using the Design Manager to create a custom branded Master Page:

  1. Click on the Design Your Site link on the welcome page or access it via the ‘cog’ which represents site actions and select Design Manager (this page gives you a wizard type interface to complete the process – however a lot of the steps are just information regarding how to carry out a particular step)
  2. The first step we’re interested in is uploading our design files and to achieve this we need to map the network drive
    1. Open File Explorer
    2. Right click on ‘Network’ and click ‘Map network drive’
    3. Connect to the Master Page catalogue (which is linked to on that page)
    4. Copy the entire design folder in there
  3. Once we’ve done that we want to publish all the files in SharePoint either by going to the Master Page gallery in Site Settings or using Manage Content and Structure (which has been removed from the Site Actions, you now have to go to Content and Structure in Site Settings or just type in the address manually)
  4. Back to Design Manager and Convert an HTML file to a SharePoint Master Page (select our html file and click insert)
  5. Click on the link to the new master page to open it up and see if there are any issues – note that the content placeholder is placed at the bottom of the HTML file
  6. Open up the HTML file again via the mapped drive and move the snippet to the right location – we can grab the placeholder main snippet and paste it where we want it to be, removing the temporary display div (you can see that a number of snippets have been inserted into the HTML file so it’s important not to mess around with them if you’re not sure of what you’re doing)
  7. Save the changes – that automatically updates the master page, we can now publish it and assign it to be the master page for our site
  8. View the home page – the master page has been implemented

A few other features I showed but didn’t delve into:

  1. You can create a linked page layout by going to Edit Page Layouts in the Design Manager and clicking on Create a page layout (if we go into our mapped drive we can see that the HTML has been created and can then edit that like we did the master page which will automatically update the layout)
  2. We also have the snippet generator which we can get to by opening one of our master pages and clicking the Snippets link (there’s a bunch of pre-defined snippets which you can customise and copy into your master page or layouts – you can also use the custom ASP.NET Markup snippet to insert pretty much anything you can think of including references to custom controls you’ve deployed)

A couple of things worth mentioning:

When I was first looking into the suitability of the Design Manager I weighed it up against the starter master pages Randy Drisgill creates (he has 2013 versions). I ended up going down the starter master page path for a number of reasons:

  • for highly customised sites the whole snippet generator function didn’t seem efficient
  • the source control and deployment story for the branding elements is less than optimal, even though you can export packages it’s easier to keep everything together in one solution
  • editing the HTML with all the snippets in there seemed more confusing, particularly for page layouts which are generally much simpler in VS
  • my general impressions were that the Design Manager was great for designers and that was its purpose – if you’re a SharePoint developer, the standard VS method of development was more efficient

However later when I was figuring out a bug which existed in an earlier version of the starter master pages (and still appears to be there), it became apparent that the master page generated from that process was almost identical to the starter master page – so in future I would use the Design Manager to create the initial Master Page, bring it into Visual Studio and take it from there.

Another thing worth mentioning: if you are trying to peel back the Master Page to make it as minimal as possible, you may be tempted, as I was, to remove the AjaxDelta elements which exist for the Minimal Download Strategy (unfortunately not available for Publishing Sites). SharePoint, however, relies on the one surrounding the main content placeholder to insert the web part property editing toolbar, so make sure you at least leave that in (that was actually the bug in the early version of the starter master page). I also wouldn’t advise removing the DIVs with the IDs s4-workspace and s4-bodyContainer unless you want to lose your scrollbars.
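For reference, the delta to keep is the one wrapping the main content placeholder. This sketch is based on the out-of-the-box seattle.master, so the exact IDs and attributes may differ slightly in your own master page:

```html
<SharePoint:AjaxDelta id="DeltaPlaceHolderMain" IsMainContent="true" runat="server">
    <asp:ContentPlaceHolder id="PlaceHolderMain" runat="server" />
</SharePoint:AjaxDelta>
```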

Managed Navigation & Friendly URLs

  1. Firstly you need to ensure that the Managed Metadata Service application is available for your web application as it is what drives the friendly URLs
  2. You also need to ensure that a default Managed Metadata Service has been selected – you do this in Central Administration (if this has been done before the web application is created then the site will use managed navigation by default)
    1. Go to Manage Service Applications
    2. Click on the Managed Metadata Service connection (the proxy) and click Properties
    3. Check the ‘This service Application is the default storage location for column specific term sets’ checkbox
  3. If your site is still defaulting to the /Pages/Page.aspx URL then you need to set up Managed Navigation as the default in site settings
    1. Go to Site Settings and the Navigation link
    2. Select the Managed Navigation options and create a term set and click OK (the Add new pages automatically and create friendly URLs automatically are selected by default – this can be changed if desired)
  4. Go back to the site and see that the URL is now in a more friendly fashion
  5. Add a new page from the site actions menu to see that it is automatically given a friendly URL to access the page by
  6. Go back into the Navigation settings where we can open the Term Store Management Tool and configure our managed navigation – if we select our managed navigation term set there are a few useful options which will come in handy for setting up the managed navigation for a site
    1. Custom Sort is particularly useful if you want a navigation order different to the default alphabetical
    2. You can move the terms around if you want a particular navigation structure other than what was created by default
    3. For a term which is acting as a container rather than a page itself you can select the Simple link or header navigation node type
    4. You may also want a particular page to not appear in the navigation – particularly pages like disclaimers and copyright statements etc – you can uncheck the Visibility in Menus options in these cases
    5. You have the ability to customise the friendly URL if you weren’t happy with the default value automatically assigned via the pages title
    6. If you’re creating the navigation structure after the fact for pre-existing pages, or if you create additional duplicate navigation for a page, you can set the Target Page for this Term to associate the term with the relevant page

3 major points I’ve come across when working with Managed Navigation:

  • If you’re creating a custom navigation control or using something like Waldek Mastykarz’ Templated Menu you’ll need to access the Global or Current NavigationTaxonomyProvider rather than the Global or Current NavSiteMapProvider. One issue to consider here is that because a page can be accessed via either the friendly URL or the standard structured URL, the taxonomy provider will only work if the page was accessed via the friendly URL
  • If you’re editing content and want to link to your page, you should no longer use the Link from SharePoint option and browse to the page you want to link to, because it will use the structured URL in the link. Even though the page will load at this URL, you lose the friendly URL when you visit the page, and you also cause search engines to think there is duplicate content on your site if the page can be hit from 2 different URLs – so unfortunately you should only use the Link from address option
  • The third point is more of an annoyance than anything else: I’ve noticed that after performing particular actions on a page, such as editing the page properties, you’re often taken back to the structured URL rather than the friendly URL, so I think there are still a few kinks to iron out
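To illustrate the first point, a custom control can read the managed navigation through the standard ASP.NET site map API. This is a minimal sketch only, assuming the provider name SharePoint 2013 registers in the web.config, and it needs to run inside a SharePoint page request:

```csharp
using System.Web;
using System.Web.UI;

public class SimpleNavigation : Control
{
    protected override void Render(HtmlTextWriter writer)
    {
        // Use the taxonomy-backed provider rather than GlobalNavSiteMapProvider -
        // as noted above, this only resolves correctly when the page was
        // requested via its friendly URL.
        SiteMapProvider provider = SiteMap.Providers["GlobalNavigationTaxonomyProvider"];
        if (provider == null || provider.RootNode == null)
            return;

        foreach (SiteMapNode node in provider.RootNode.ChildNodes)
        {
            // node.Url is the friendly URL of the term's target page
            writer.Write("<a href=\"{0}\">{1}</a>", node.Url, node.Title);
        }
    }
}
```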

Content by Search & Display Templates

In the network connection to the Master Page Gallery set up earlier there is a folder called Display Templates – depending on what sort of template you want to edit or create, in general you’ll find that for content by search web parts you’ll need to edit within the Content Web Parts folder and for search results you’ll need to edit within the Search folder.

You’ll see both HTML and JavaScript files within there – this is similar to the link between the Master Pages and the HTML files we saw before – you only need to edit the HTML file and the JavaScript will automatically be updated. Similarly, if you are creating a new display template you just need to drop the new HTML file in there and the JavaScript will be automatically created.

The best technique I’ve found to work with these templates is to grab an existing HTML file, edit it to suit your needs then drop it back in to the folder. This will automatically create the JavaScript which you can then take and deploy in your solution.

What is important in this process is how you define the template within the feature – I’ve identified a subset of data that if you happen to leave out then your display template won’t appear for selection in the various web parts (this subset was discovered after some research following reading the thread How to deploy display templates via feature).

 <Module Name="SearchDisplayTemplates" Url="_catalogs/masterpage/display templates/search" Path="Display Templates" RootWebOnly="TRUE">
    <File Url="Item_ETI.js" Type="GhostableInLibrary">
      <Property Name="Title" Value="ETI Item" />
      <Property Name="TemplateHidden" Value="FALSE" />
      <Property Name="TargetControlType" Value=";#SearchResults;#" />
      <Property Name="DisplayTemplateLevel" Value="Item" />
      <Property Name="ManagedPropertyMapping" Value="'Path'{Path}:'Path','Title'{Title}:'SeoBrowserTitleOWSTEXT','ETIDescription'{ETIDescription}:'ETIDescriptionOWSMTXT'" />
      <Property Name="_ModerationStatus" Value="0" />
    </File>
    <File Url="Control_ETI_HelpAndAdviceSearchResults.js" Type="GhostableInLibrary">
      <Property Name="Title" Value="ETI Help and Advice Search Results" />
      <Property Name="TemplateHidden" Value="FALSE" />
      <Property Name="TargetControlType" Value=";#SearchResults;#" />
      <Property Name="DisplayTemplateLevel" Value="Control" />
      <Property Name="_ModerationStatus" Value="0" />
    </File>
  </Module>

Looking at the elements file snippet above which deploys the templates, you can see the subset of data which is required. The target control type and display template level are both quite important fields as they determine where the template is available to select within the web parts – for instance, a SearchResults Item template will be available from within the search results web part at the item level. The only difference between these entries is the managed property mapping property, which is required if the template deals with non-standard fields – anyone familiar with editing XSL for the content query web part to display managed properties will find this a fairly familiar process.
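Within the HTML display template itself those mapped properties then become available on the current item. A sketch only, using the hypothetical ETIDescription property from the snippet above:

```html
<!-- Inside the rendering markup of Item_ETI.html - SharePoint evaluates
     the _#= ... =#_ tokens against the current search result (ctx) -->
<div class="eti-item">
    <a href="_#= ctx.CurrentItem.Path =#_">_#= ctx.CurrentItem.Title =#_</a>
    <div>_#= ctx.CurrentItem.ETIDescription =#_</div>
</div>
```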

Aside from the obvious use cases of determining the display of both content by search and search results web parts, the other use I’ve found for editing these templates is to modify the hover-over effect within the search results. For anyone who hasn’t seen it: when you hover over a search result you get a hover panel to preview the document or page, similar to how current web search engines respond, along with some link options such as open, share, alert and so forth. On public facing sites it may not make sense to show some of these links – particularly alerts, which the anonymous user won’t be able to sign up for – so you can create a new template which hides those links and associate it with the particular file type via the manage result types interface in site settings.

SEO features in SP2013

This one was nice and quick. As a prerequisite you need the publishing feature activated to access the SEO properties.

For any given publishing page across your site you can edit the page, go to the Page tab in the ribbon, drop down Edit Properties and select Edit SEO Properties.

The main features here are the ability to set a browser title, the meta description and keywords. We also have the ability to exclude a page from the generated XML site map if we don’t want it to be crawled.

Once we’ve entered those values, we can view the source of the page and see that the page title is the Browser Title and that the meta description and keywords exist on the page.
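As a sketch of the kind of markup you’d expect to see when viewing source (the values here are made-up examples, not output from a real page):

```html
<!-- The Browser Title becomes the page title element -->
<title>My Browser Title</title>
<!-- The SEO properties surface as standard meta tags -->
<meta name="description" content="A short description of the page for search engines" />
<meta name="keywords" content="sharepoint, seo, publishing" />
```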

Funnily enough the Browser Title feature (which seems almost redundant) had an unintended benefit for a site I was building. The design included a heading structure whereby the largest title reflected a parent node (which could be many levels deep) and the sub-heading was the title of the page. I was able to set the Browser Title to that of the page, set the page title to the parent’s title which needed to be shown, and render that through a field control without fear that the incorrect title would affect the browser bar’s title or have any SEO implications.

The other SEO features are in Site Settings under Search Engine Optimization Settings which allows you to do a couple of things:

  • Firstly, you can enter other meta fields which will be inserted onto your page
  • Secondly, you can list Canonical URLs – useful where a given page would display the same content even if the query string was different – so you can get the search engines to ignore those query elements
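The canonical URL setting boils down to the standard rel=canonical convention that search engines understand – a sketch with a made-up URL:

```html
<!-- Query string variants of this page all point search engines
     at the one canonical address -->
<link rel="canonical" href="http://www.example.com/Pages/products.aspx" />
```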

Another SEO feature requires you to activate a Site Collection Feature – the Search Engine Sitemap feature – which ensures SharePoint generates a Sitemap.xml and robots.txt file for the site. That feature exposes another link in Site Settings called Search Engine Sitemap Settings where the robots.txt entries can be customised. Note that your site must have anonymous authentication enabled for the sitemap feature to work.
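For reference, the generated file follows the standard robots.txt format with a pointer to the sitemap – a sketch only, with a hypothetical host and disallow rule:

```text
# Illustrative robots.txt – host and paths are made-up examples
User-agent: *
Disallow: /_layouts/

Sitemap: http://www.example.com/sitemap.xml
```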

Building public facing websites on SharePoint 2013 – Part 1

Yesterday I had the great opportunity to present at the Perth leg of SharePoint Saturday for 2013. Overall the event was a resounding success – the turnout was reasonable considering it was the day of the state election, and while the numbers may have fluctuated across the day, a number of sessions were well attended in all of the time slots. I had roughly 25 in my session, which didn’t reach the great heights of my user group session on Harnessing Client-Side Technologies to Enhance your SharePoint Site, but seemed a decent turnout considering there were 2 other quality sessions on at the same time and the overall numbers would have been lower than the many who attend the monthly UG sessions.

After having some success with my previous session linked above I decided to follow the same presentation format – half an hour dedicated to slides and theory and another half hour dedicated to demonstrations. At the end of the day I probably whipped through the slides a little quicker than I expected, which guaranteed I had enough time to finish off the 4 demos I had hoped to get through.

This post will run through the background to each slide shown below, then part 2 will give a run through of the steps and comments associated with each demonstration performed.

Building public facing websites on SharePoint 2013

The cover slide gave me an opportunity to thank everyone for supporting SharePoint Saturday in Perth and particularly choosing my session for the time slot – it’s definitely something I appreciate and is worth repeating in this post. I covered off a little bit about myself and explained my public facing website journey on SharePoint starting at Tourism on the now infamous westernaustralia.com website and various partner sites to my more recent experiences at the Department of Training and Workforce Development on various departmental and TAFE institution web sites – the most recent being developed on SharePoint 2013.

Agenda

There were 2 main things I wanted to get out of my session – firstly, to generate some excitement about the possibilities that exist around building public facing websites on SharePoint 2013, and secondly, to transfer some knowledge around the key areas which are often neglected when building an internet facing site. I wanted to cover off how each version of the platform performed in each area, then explore the new and exciting features which exist for web content management in SharePoint 2013, backed up by a number of demonstrations of my favourite features.

Branding & UX

Most people think branding SharePoint is all about slicing and dicing images into Master Pages and Page Layouts – and while this is a large part of it, there are a number of other factors which should be considered for a successful site. Implementing Custom Error Pages is one factor which is often forgotten about and can make the difference between your site looking professional or incomplete. The client-side and search experience is another area which can transform your site from something which looks good to something which performs great.

Search Engine Optimisation

SEO is often neglected or considered as an afterthought once the website has already gone live and traffic is not up to expectations – it is a crucial factor to consider to maximise the number of visitors to your public facing site. The topic is worthy of a presentation of its own, however I have a 3-part blog series on Search Engine Optimisation for SharePoint Sites and Ian McNeice has written a great global SEO strategy book, both of which cover the details. The main takeaway from this slide was that effective SEO requires a 3-phased approach – identify the keywords and phrases which are most effective to optimise for, optimise your site for those keywords and phrases, and finally think outside the box for ways in which you can drive traffic to your site – rating well in the search engines is merely one component of an overall strategy.

Performance Optimisation

If SEO is all about bringing people to your site then Performance Optimisation is all about keeping them there. There are some great statistics and studies on the web which link bounce rates to page performance, so it is most definitely an important factor to consider when building a site. There is a common misconception that SharePoint is inherently slow, but that’s not a theory I subscribe to – a poorly developed and optimised site will perform badly on any platform – it’s just that SharePoint is easy to blame. Again this is an area to which an entire presentation could be dedicated, and while I have another 3-part series on Performance Optimising SharePoint Sites, the main takeaway was that alongside the number of generic and SharePoint-specific tasks you can target to optimise a site, there are also a number of great free tools available to benchmark and measure the performance of your pages including webpagetest.org, YSlow and Google Page Speed.

Accessibility

From a couple of topics which are often neglected or postponed to one which is often discarded completely. While it’s understandable that accessibility sometimes falls by the wayside due to its at-times difficult and time-consuming implementation on SharePoint, it’s interesting to note the time often dedicated to cross-browser compatibility for browsers with a lower percentage of use than the number of users who would benefit from an accessible site. This is particularly important for WA Government organisations, who have a mandate to ensure all websites are WCAG 2.0 compliant by the end of 2013. Vision Australia has written a great whitepaper on Achieving Accessibility in SharePoint 2010 which is worth reading for more information.

SharePoint 2007

So how does each version of SharePoint fare in these areas? Before I start it’s important to point out that any discussion on Licensing is generic and ballpark – I’m no licensing expert and the numbers have been taken in US dollars and at the time they were relevant – see the hyperlinks for the source.

Licensing: MOSS was a fairly expensive platform on which to host public facing sites. At roughly 40k per internet server plus external connector licenses, and with best practice guidance recommending a staging server with content deployment to production, you can quickly see how, even with a small web farm of 2 front end web servers and an application server, costs would add up.

Branding: Nothing to write home about here – we had ASP.NET 2.0 Master Pages and Page Layouts but that’s about it. Guidance was scarce in the early days and the starter master pages had yet to become mainstream. Custom Error Pages (other than 404) were particularly difficult to implement. jQuery had yet to take hold, which left us with the AJAX toolkit and UpdatePanels – these unfortunately required a number of manual web.config modifications to get working. MOSS for internet sites required an enterprise license, so at least we had enterprise search.

SEO: Practically non-existent. Even targeting the most basic of generic techniques generally required custom development.

Accessibility: If SEO was bad then accessibility was worse. View source of a MOSS-hosted site and you’ll see table hell – I pity anyone who had the task of getting MOSS 2007 accessible.

SharePoint 2010

Things were a little better in 2010; this is generally how it measured up:

Licensing: While we were no worse off in 2010, we weren’t really much better off either. Enterprise internet servers were still around the 40k ballpark, although the external connectors were a bit cheaper. We did have the ability to purchase Standard licenses for internet sites at about 25% of the price – however there were a few caveats which often meant this wasn’t feasible, particularly the inability to host multiple domains on the server.

Branding: The Master Page and Page Layout story was the same, however there was far more information available plus the starter master pages had become widely used. jQuery had taken off and there were a number of blog posts regarding how to include it in SharePoint and a number of plugins which could be leveraged, and if you were still stuck with the AJAX toolkit then at least it was supported out of the box. We had the option of using FAST search for internet sites (for an additional license cost) which gave some flexibility around search. The custom error page story was much better – far easier to implement across the board compared to SharePoint 2007.

SEO: Unfortunately much the same – custom development was still required.

Accessibility: Thankfully much better – SharePoint 2010 launched with the goal of being WCAG 2.0 AA compliant, and while it didn’t quite get there, the HTML rendered was much cleaner (aside from the tables generated by web part zones) and there was guidance around making 2010 completely compliant via the whitepaper I’ve mentioned above.

SharePoint 2013

Definitely the best of the bunch which is no surprise considering I dedicated an entire presentation to it.

Licensing: While there may be no official licensing details available, a number of sources suggest that the licensing for 2013 will be far more affordable. No longer do we need internet server licenses – an enterprise license with internal CALs will suffice. FAST search is now also inbuilt rather than being an additional licensing requirement.

Branding: Has changed completely in 2013. While we can still use the tried and true method of Master Pages and Page Layouts in our VS solutions, we now have the Design Manager, which puts the power into the designer’s hands and removes the need to have SharePoint developers implementing branding. There is also WebDAV support to connect to the master page gallery directly, with the ability to edit linked HTML files which will automatically update the associated master page. A custom 404 page is provisioned and editable straight out of the box in 2013, and the other error pages are just as easy to implement as in 2010.

SEO: Finally the platform has treated SEO with the respect it deserves – a number of generic SEO requirements being available straight out of the box.

Accessibility: The HTML markup is cleaner again, with web part zones no longer rendering out tables. I’m unsure if 2013 is completely WCAG 2.0 compliant, however if it is not, it would definitely take less effort to ensure that it was.

New Features for WCM in SP2013

A large number of new features exist for web content management in SharePoint 2013 however these are the ones I believe will be the most useful for building public facing sites.

Pasting content directly from Word: Previously the experience of pasting content directly from Word was a painful one – embedded styles would remain and cause certain pages to look completely different from the rest of the site’s style. This new ability will remove the need to use Notepad as a go-between when pasting content from Word to SharePoint.

Image renditions: This feature is getting a lot of attention across the web currently, and for good reason – it’s a great new feature. Essentially allowing you to upload one larger image and create different ‘renditions’ of it at different dimensions, this feature will be great for mobile versions of websites, allowing a smaller file-sized image to be downloaded for rendering. Requires the BLOB cache to be enabled.
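A rendition is requested via a query string parameter on the image URL – a sketch with a hypothetical image path and rendition ID:

```html
<!-- RenditionID selects which pre-defined size/crop of the source image
     is served (image path and ID are made-up examples) -->
<img src="/PublishingImages/banner.jpg?RenditionID=2" alt="Banner" />
```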

Cross-site publishing: Another highly useful feature in SP2013, allowing content to be published from one site collection to another. Practical uses include separating editing and publishing environments, variations, and – particularly relevant for me and for government departments hosting multiple sites – sharing content between sites while maintaining the individual branding of each department’s site.

Managed navigation / friendly URLs: My favourite feature of the lot – managed navigation is a step away from the structured navigation we’ve grown accustomed to. It allows you to define a taxonomy hierarchy which will drive navigation. Most importantly it allows you to implement friendly URLs, which users have long been crying out for, and also lets you decouple structure from navigation, ensuring you no longer have to create numerous sub-sites just to position a particular page at a given URL.

Search driven content: Used significantly for the purposes of cross-site publishing, it also has other uses including amalgamating content. Similar in theory to the content query web part, however it uses search as its content source – necessary if you want your friendly URLs to be rendered via the query. The best part is that XSL is no longer required to style the output – we can now use HTML and JavaScript via Display Templates.
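To give a feel for the difference from XSL, display templates are plain HTML with JavaScript token substitution – a rough sketch only (the class name is a made-up example):

```html
<!-- _#= ... =#_ tokens are evaluated client-side against the
     current search result item -->
<div class="custom-item">
  <a href="_#= ctx.CurrentItem.Path =#_">_#= ctx.CurrentItem.Title =#_</a>
</div>
```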

Design Manager + Device-specific targeting: While I mentioned the Design Manager previously, another great feature is device-specific targeting. This allows you to use the same pages and content but recognise the device accessing it so you can use different master pages and styles to render that content – highly valuable for mobile sites.

SEO features: Some search engine optimisation techniques are now available out of the box which is fantastic news for those building public facing websites on SharePoint 2013.

WCM Feature Demos

Refer to Building public facing websites on SharePoint 2013 – Part 2 for a run through of the demonstrations which were performed in this session and some of the comments surrounding them.

Questions? Comments? More info..

While no questions were asked on the day, that probably had as much to do with the session running its full hour and lunch having already been served as with the presentation being so comprehensive that no questions were necessary! If anyone has any questions or comments feel free to leave them on this post or get in touch with me directly.

Thanks for listening

And thanks for reading! It was a pleasure being able to present this session on such a great day at such a well organised event. Brian Farnhill and the team did a great job putting everything together, the sponsors came on board to offer a great venue, food and prizes for the day, and I look forward to being a part of it sometime again in the future.