Referencing resources in SharePoint: File system or content database?

Recently I’ve gone through a particular project and modified the solution structure and philosophy behind deploying resources for use within a number of public facing SharePoint sites. It’s a concept I actually hadn’t given too much thought to in the past – previously the majority of projects I’ve worked on or created have simply deployed all resources to the /_layouts/ folder and that was that. There are varying reasons why you would choose one method over the other and a discussion on these will form the basis for this post.

First though a bit of background on what brought this topic to the fore. The project I was analysing was a shared solution used for a large number of public facing websites (5+) hosted on the MOSS platform. All resources (CSS, Images, JavaScript, Flash) were deployed to the 12 hive within a folder structure suitable for identifying the site the resources belonged to. There were some smarts in the code which determined where to reference the resources in question.

The problem was 3-fold. Firstly, some site owners wanted the ability to make changes to the CSS or images themselves and didn’t have access to the servers to do so. Secondly, even if the changes were to be made by the development team they still needed to be done within the weekly change window to deploy the new solution. Finally, because the solution was shared between a number of sites and contained code as well as resources, any minor changes made to an image or CSS file would need to go through a complete and rigorous UAT and regression testing phase before they could be deployed.

The solution was obvious: the resources needed to be stored in the content database. There are a number of benefits to be gained from storing SharePoint resources there.

  1. Site owners can be given the ability to make modifications to the images and CSS files if desired. Assuming they don’t have access to the server itself, and they shouldn’t, getting the files into the content database is the best way to enable this.
  2. Immediate access. Assuming you’re deploying your resources to the file system via a solution package (and there should be no other option), you’ll require the solution to be deployed and in many cases this needs to be scheduled during a deployment window.
  3. Versioning. Files stored within the content database can leverage the in-built versioning capabilities of SharePoint, assuming they're enabled. This could be countered with the fact that source control could be considered a form of file versioning.
  4. Backups. This in my opinion is somewhat of a minor benefit, as any decent disaster recovery plan would incorporate multiple forms of backup including source control and the SharePoint server (assuming virtual environments); however, having the files in the content database enables the resources to be backed up both via SharePoint mechanisms and via backups of the SQL database.
  5. Isolation. By storing the files within the content database you’re ensuring that any changes made are far less likely to affect other sites hosted on the same server. Storing the files within separate sub-folders on the file system mitigates this to an extent, but it’s less safe.
  6. Sandboxed solutions. While not relevant to my own situation in MOSS, deploying the resources to the content database enables the solution to be deployed as a sandboxed solution, opening the door to deploying in a hosted environment.

So why would anyone want to deploy their resources to the file system in the first place? I’m going to start by being cynical – developer laziness. Frankly, with tools such as WSPBuilder in 2007 or the built-in developer tools in 2010 it’s simply easier to place your resources into a mapped folder in your solution rather than creating the feature and module to deploy the files to the content database. There are however legitimate reasons why you may want to go down this path.

  1. Global access. It is possible that some of the resources you are deploying need to be used across sites for the purpose of common branding. By deploying to the file system all sites will have access to the files without having to duplicate content into the individual content databases.
  2. Restricting access. While the granular security in SharePoint allows you to lock down access quite significantly, there is the possibility that a user may have levels of access which would give them the ability to modify these resources when they shouldn’t. It would be a bad security scheme that enabled this, but keeping the files out of the content database and on the file system would ensure this never happened.
  3. Performance. Out of the box you’d get better performance having the resources on the file system rather than the content database. You can get around this for the most part using the built-in caching mechanisms within SharePoint such as BLOB caching, however for MOSS at least there are known issues associated with this noted in Chris O’Brien’s post on Optimization, BLOB caching and HTTP 304s.

It’s a question that seems to polarise the SharePoint community and there are a bunch of blog posts, MSDN articles and discussions on this very subject matter. My personal opinion now, which would appear to be backed by a number of credible posts I’ve since read, is that ideally resources for SharePoint should be stored in the content database whenever possible.
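For reference, the feature-and-module approach for getting files into the content database comes down to a module element in a feature's element manifest. A minimal sketch is shown below – the file names, paths and module name here are illustrative only, not from the project in question:

```xml
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <!-- Provisions branding files into the Style Library of the site
       collection, i.e. into the content database rather than the 12 hive.
       Names and paths are examples only. -->
  <Module Name="BrandingFiles" Url="Style Library" Path="BrandingFiles"
          RootWebOnly="TRUE">
    <File Url="styles/site.css" Type="GhostableInLibrary" />
    <File Url="images/logo.png" Type="GhostableInLibrary" />
  </Module>
</Elements>
```

Type="GhostableInLibrary" is what makes the files land in a document library, which in turn is what enables the versioning, backup and site-owner-editing benefits listed above.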

Applying the MVVM pattern to create SharePoint list-driven interactive tools using Knockout

MVC, MVP, MVVM. They’re common acronyms, and any developer worth their salt has at least a vague understanding of what they mean and entail. I remember doing an ASP.NET MVC hands-on lab at Tech-Ed 2-3 years ago out of my own curiosity, to see what all the fuss was about. Having been in the SharePoint realm for the past 3 and a half years though, it’s something I never thought would have much application in my day to day job. Surely it was just left to the ASP.NET crowd, nothing to do with SharePoint. How wrong I was.

To be honest this article could go on forever with explanations and definitions, examples and possibilities. What I’ve since found when researching for this post however is that the information available on the net is actually pretty comprehensive. Unsure what the difference is between the various view-model design patterns? Take a look at Niraj Bhatt’s post on MVC vs. MVP vs. MVVM. In fact somewhat to my surprise there are even a number of quality SharePoint related posts in regards to implementing MVVM. The majority of them relate to MVVM in Silverlight hosted in SharePoint – you can read up on a great article series by Andrew Connell regarding Silverlight, MVVM & SharePoint. There’s a quality post by Shailen Sukul regarding Applying MVVM Principles to SharePoint 2010 Development. There’s even a post on Using MVVM with Office365 and SharePoint 2010 REST API by Sahil Malik.

Sahil’s post is somewhat similar to my own. The reason being, we’ve both implemented the MVVM pattern using Knockout – a library to ‘Simplify dynamic JavaScript UIs by applying the Model-View-View Model (MVVM) Pattern’. There is a difference however – where Sahil had the luxury of using the SharePoint 2010 REST APIs which go hand in hand perfectly for this purpose, my task was to implement it in MOSS 2007.

The tools created using this method are a part of the suite of tools that exist on the Career Centre Portal including the Resume Builder, Slider tools, Choices and Hands On, Career Possibility Generator, Card Sort tools and multiple forms throughout the site. A number of developers have played a large part in the creation of these tools and forms including Paul Maddox, Elliot Wood, Lay Hooi Tan and myself. Some are SharePoint list-driven and others are CRM-driven. For the purpose of this article I’ll focus on the Career Possibility Generator.

So how does Knockout work? It could really do with a post of its own. I’d suggest reading the documentation for a full explanation of how it works and its capabilities. Trying to keep it simple however, essentially you use data-binds to map UI elements to values stored in a data model. If you define the value as ‘observable’ you ensure that updates to the UI are reflected in the model and vice-versa. Knockout also harnesses the jQuery.tmpl template engine to render repeatable blocks of code.
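To illustrate the idea behind observables, here is a toy sketch in plain JavaScript – this is not Knockout's actual implementation, just the concept boiled down: an observable is a function that stores a value and notifies subscribers when it changes.

```javascript
// Toy observable: read with obs(), write with obs(newValue),
// register change callbacks with obs.subscribe(fn).
function observable(initialValue) {
    var value = initialValue;
    var subscribers = [];

    function obs(newValue) {
        if (arguments.length === 0) return value;   // read
        value = newValue;                           // write
        subscribers.forEach(function (fn) { fn(value); });
        return obs;
    }

    obs.subscribe = function (fn) { subscribers.push(fn); };
    return obs;
}

// Knockout's data-binds do something conceptually similar: the UI update
// is subscribed to the observable, so the screen refreshes whenever the
// model value changes (and vice-versa for two-way binds).
var count = observable(0);
var lastSeen = null;
count.subscribe(function (v) { lastSeen = v; });
count(5);
```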

It’s driven really well by using a JSON object as the underlying data model. This is why it ties in so nicely with the SharePoint 2010 REST services – you can quite easily pull back a JSON object instead of an Atom response via those services and use it within the Knockout framework. MOSS 2007 offers no such services, so to get around this missing functionality an ASHX handler was created to return the JSON object for us. This is quite easy to do – the generic shell of the handler would look something like that shown below:

using System.Collections.Generic;
using System.Web;
using System.Web.Script.Serialization;

public class MyHandler : IHttpHandler
{
    #region IHttpHandler Members

    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        List<CloudListItem> items = new List<CloudListItem>();
        // Retrieve items from SharePoint and populate into the generic list
        JavaScriptSerializer ser = new JavaScriptSerializer();
        string jsonObject = ser.Serialize(items);
        context.Response.Clear();
        context.Response.ContentType = "application/json";
        context.Response.Write(jsonObject);
    }

    #endregion
}

public class CloudListItem
{
    // Various properties to return in the JSON object
}

The next step is to bind the returned JSON object to Knockout and in turn allow Knockout to bind the values to the UI using applyBindings.

<script id="Occupations_Cloud_Tmpl" type="text/x-jquery-tmpl">
    <li class="cloud${Count}"><a target='_blank' href="${PageUrl}">
      <label data-bind="text:Title"></label>
    </a></li>
</script>

<div id="tag-cloud">
    <ul id="occupations-cloud"
        data-bind="template: {name: 'Occupations_Cloud_Tmpl', foreach: viewModel.Cloud}">
    </ul>
</div>

<script type="text/javascript">
    $.getJSON('/_controltemplates/ccosdportal/handlers' +
              '/ListToJsonHandler.ashx?ListName=NormalisedKSA' +
              '&Web=occupations', function(data){
        //viewModel.Cloud = ko.mapping.fromJS(data);
        viewModel.Cloud = data;
        $.each(viewModel.Cloud, function(index, value) {
            value.Count = ko.observable(0);
        });
        ko.applyBindings(viewModel.Cloud);
    });
</script>

Essentially what the above code will result in is an unordered list of Occupation profile links with a distinct class applied. I’ve left in the comment above to demonstrate one of the gotchas I encountered using Knockout. By using ko.mapping.fromJS() you’re essentially ensuring that all properties within the JSON object are observable. This causes memory issues if the object is too large and can result in long-running script warnings in Internet Explorer browsers.

The only thing left to explain here is how the ‘observable’ functionality works. The tool itself includes a number of checkboxes that users can select, which in turn drives how the cloud looks on the screen. The checkboxes are mapped in SharePoint lists to various occupations, and thus when a checkbox is selected that maps to a given occupation, the class applied to the relevant list item is modified to reflect that link. As you may have noticed, the only observable property in the object is the Count, which is used in the list item class="cloud${Count}". By incrementing the Count value in the model, the UI is updated to reflect the new value and hence changes the class of the list item, changing the way it displays on the screen.

function AdjustCloud(checkbox) {
    var addValue = -1;
    if (checkbox.checked) addValue = 1;

    $.getJSON('/_controltemplates/ccosdportal/handlers' +
              '/ListToJsonHandler.ashx?ListName=OccupationKSA' +
              '&Web=occupations&KSAID=' + checkbox.value, function(data){
        if (data.length > 0)
        {
            $.each(data, function(ind, dtwd) {
                var dtwdVal = dtwd.DTWDANZSCO;
                $.each(viewModel.Cloud, function(index, value) {
                    var clouddtwdVal = value.DTWDANZSCO;
                    if (clouddtwdVal == dtwdVal)
                        if (((viewModel.Cloud[index].Count() + addValue) >= 0) &&
                            ((viewModel.Cloud[index].Count() + addValue) <= 9))
                            viewModel.Cloud[index].Count(viewModel.Cloud[index].Count() + addValue);
                });
            });

            HideItems();
        }
    });
}

A visual example of the tool can be seen below.

If you take the time to look at the Career Possibility Generator and all the tools throughout the Career Centre Portal you’ll get an appreciation of how powerful it is to leverage the MVVM pattern via Knockout to create highly interactive applications within SharePoint.

Integrating WordPress into SharePoint

The title here may be slightly misleading. The majority of this post will be applicable to any .NET application integrating with WordPress – however my experiences were derived from creating the WordPress functionality within the Career Centre portal. The requirements were for an extract of the latest post to appear on the front page and for any relevant categories to be presented on the Career Connect page. The featured article functionality was initially intended to come from WordPress but ended up being list-driven, as there was no obvious way to mark a WordPress post as featured (in hindsight I could have created a ‘Feature’ category in WordPress and used the API to bring it back that way, but as it stands it’s relatively static).

This actually turned out a little more challenging than I was expecting. There appeared to be a lot of information in regards to WordPress APIs as it related to plugin development, but the information I was searching for wasn’t as obvious to find. I eventually found out that WordPress supports both the Atom Publishing Protocol (AtomPub) and XML-RPC for client development and that currently WordPress provides more features and capabilities via XML-RPC than AtomPub. XML-RPC seemed the way to go.

This snippet of information actually made life a lot easier. Turns out that the information WordPress provides on this subject is actually pretty useful. Firstly there’s the XML-RPC Support page which identified that 3 APIs were supported: Blogger, metaWeblog and Movable Type. There’s also an extension to the Movable Type API specific to WordPress (conveniently called the WordPress API) that should be used as a first resort if possible. That API is pretty well documented too on the XML-RPC wp page.

It seemed pretty promising – getCategories? Perfect. There didn’t seem to be an obvious way to get the latest blog post from this API though, so the search continued. Reading through the alternate APIs, metaWeblog seemed the easiest to use and appeared to have the functionality I needed. Unfortunately I found it to be pretty poorly documented – it seemed that no one wanted to make this task easy! I did stumble across one page; Eric’s WordPress XML-RPC – MetaWeblog API. Without this page I think I would have had a much harder time getting everything to work – getRecentPosts? Perfect.
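It helps to demystify what an XML-RPC call actually is before getting to the .NET side: it’s just an HTTP POST of a small XML document to xmlrpc.php. The sketch below builds the request body for metaWeblog.getRecentPosts by hand – the function name and parameter values are my own illustrations; a proper client library generates this for you.

```javascript
// Builds the XML-RPC request body for metaWeblog.getRecentPosts.
// Shown only to illustrate the wire format; values are placeholders.
function buildGetRecentPosts(blogId, username, password, numberOfPosts) {
    function param(type, value) {
        return '<param><value><' + type + '>' + value +
               '</' + type + '></value></param>';
    }
    return '<?xml version="1.0"?>' +
           '<methodCall>' +
           '<methodName>metaWeblog.getRecentPosts</methodName>' +
           '<params>' +
           param('int', blogId) +
           param('string', username) +
           param('string', password) +
           param('int', numberOfPosts) +
           '</params>' +
           '</methodCall>';
}

var body = buildGetRecentPosts(1, 'admin', 'secret', 1);
```

The response comes back as a similar XML document of structs, which is exactly what the library in the next section deserialises into typed objects for you.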

The only thing left to do was to code it up in .NET. Turns out there’s an excellent library for this purpose by Charles Cook called XML-RPC.NET. The download includes the DLL you’ll need to reference (CookComputing.XmlRpcV2.dll) and a bunch of examples to get you started.

Time to pull out a few code snippets. Firstly you need to create an interface with the functions you intend to call:

public interface IStateName : IXmlRpcProxy
{
    [XmlRpcMethod("wp.getUsersBlogs")]
    Blog[] GetUsersBlogs(string username, string password);

    [XmlRpcMethod("metaWeblog.getRecentPosts")]
    BlogEntry[] GetPosts(int blog_id, string username, string password, int numberOfPosts);

    [XmlRpcMethod("wp.getCategories")]
    Category[] GetCategories(int blog_id, string username, string password);
}

The arrays are public structs containing the properties you want to pull back – note that casing matters for the properties; make sure the names match the documentation exactly. Next is the code to retrieve the most recent post, and then all the blog categories:

IStateName proxy = XmlRpcProxyGen.Create<IStateName>();
proxy.Url = XMLRPCUrl;

BlogEntry[] blogEntries = null;
BlogEntry blogEntry = new BlogEntry();
Blog[] blogs = proxy.GetUsersBlogs(Username, Password);
if (blogs.Length > 0)
{
    Blog blog = blogs.FirstOrDefault(s => s.blogName == BlogName);

    if (blog.blogid != null)
    {
        blogEntries = proxy.GetPosts(int.Parse(blog.blogid), Username, Password, 1);
        blogEntry = blogEntries[0];
    }
}

And to retrieve the categories:

IStateName proxy = XmlRpcProxyGen.Create<IStateName>();
proxy.Url = XMLRPCUrl;

Category[] categories = null;
Blog[] blogs = proxy.GetUsersBlogs(Username, Password);
if (blogs.Length > 0)
{
    Blog blog = blogs.FirstOrDefault(s => s.blogName == BlogName);

    if (blog.blogid != null)
    {
        categories = proxy.GetCategories(int.Parse(blog.blogid), Username, Password);
    }
}

The XMLRPCUrl is your WordPress address with /xmlrpc.php appended (e.g. http://spmatt.wordpress.com/xmlrpc.php), the BlogName is the name of your blog (e.g. SPMatt), and the Username and Password are pretty self-explanatory – use the account you use to author/administer your blog (I believe some of the actions you’re able to perform are dependent on the credentials of the account you use – using the administrator account is obviously the bad-practice / guaranteed-success way of getting it working).

That’s pretty much it. There’s one last potential gotcha, and that’s to do with organisational proxies on your SharePoint servers. If the server requires a proxy to access the internet then there’s a fair chance you’re going to end up getting an error page.

What’s required is to find the proxy line in your web.config and change it to the following:

<system.net>
  <defaultProxy>
    <proxy proxyaddress="http://your-proxy-address:port" bypassonlocal="true" />
  </defaultProxy>
</system.net>

And that should be it! You’re good to go. Pretty basic integration, I know, but for most public facing sites this is all you’ll ever need. Dig into the APIs and see the true power you can get interacting with WordPress (anyone feeling like authoring blogs in SharePoint and publishing them to WordPress and vice-versa?).

One last point I should mention – I’ve yet to link in this blog to a man who seems to be a bit of a guru in this stuff – at least it seems that way from the research I did. His name is Joseph Scott, and his slide deck on the WordPress APIs is well worth viewing. If you’re scouring the net for decent advice, keeping an eye out for his name would be a good start.

How the Career Centre mobile site was built using SharePoint

I’ve recently had the pleasure of working on a large public facing portal solution for the Career Centre. I came onto the project midway through so inherited a lot of branding and functionality, however early enough to create a number of components from scratch, having an input into the design and implementation decisions along the way. One such piece of functionality was the mobile site which can be accessed on your phones by clicking the link above, or in your browsers by accessing the mobile site directly.

The reason for this post is that it hasn’t been done exactly the way one would imagine. I thought I’d take the chance to explain the decision process behind the implementation and potentially open the lines of discussion on whether the decisions made were valid, based on false logic or whether there was a better solution available. For the purpose of context the Career Centre portal was developed on MOSS 2007 Enterprise Edition.

We’ll start with the requirements. The mobile site design had been provided to us so barring some minor tweaks, was not something that required a large amount of attention. The content of the site was to be a sub-set of the live site; a combination of landing pages which were to differ from the live site, content pages which mirrored the live site exactly, content pages which either differed from or didn’t exist in the live site, ‘multi-part’ content pages (based on a custom page layout) and functional aspects of the site (including but not limited to various search capabilities).

My thoughts were that with a custom master page, mobile-specific page layouts and mobile-specific controls and web parts, the majority of requirements would be easy enough to satisfy, and that largely turned out to be the case. My main concern was with content and ensuring the most efficient solution without resorting to content duplication, and the thought process behind that is what will form the basis of this post.

I had a fair understanding already that the ‘best practice’ way of implementing a mobile site in SharePoint was to leverage the variation capabilities of the product. Some initial research I performed backed up that presumption; the most impressive post I personally found for the topic was Jamie McAllister’s discussion on Exposing your MOSS WCM to Mobile Devices. I’d had some brief exposure to SharePoint variations from my time at TourismWA on westernaustralia.com. There, variations were used for the purposes of multi-lingual sites (the complexity of that implementation is definitely worthy of a post on its own!). I was therefore already hesitant to embrace this functionality for my purpose.

Some extra research I did around variations did little to calm my nerves. Some of the more relevant posts I read include Jamie McAllister’s Do MOSS Variations really SUCK, Alex’s Common Problems with MOSS Variations and Jeremy Jameson’s Dumping MOSS 2007 variations Part 1, 2 and 3. There were a bunch of other negative posts I read that escape me now. For the sake of balance, Stefan Goßner’s series on the Variation Feature should also be read.

My main concern however was that changing a page’s layout from the initial site’s to the mobile-specific layout would cause a variation and hence the page would no longer receive content updates; funnily enough, this proved a non-issue. Through my experimentation however I did come up with a raft of other concerns, including:

  1. A variation required the source to be a sub-web of the root – which means we couldn’t have careercentre.dtwd.wa.gov.au; it would have to be careercentre.dtwd.wa.gov.au/careercentre or something to that effect. The varied site would then also be a sub-site like careercentre.dtwd.wa.gov.au/mobile. This may seem like a small issue – but the URL of the site was definitely no small issue for the client.
  2. You can set up variations initially so that they auto-populate the varied site when new sites/pages are created, or you can set them up so they do not. We would have needed to turn that off considering the mobile site was to be a small sub-section of the main site. Problem being, once that feature was turned off we would then have to manually build up any new site structure to match. Once the variation hierarchy has been created automatically it can’t be done again.
  3. When I automatically created this hierarchy, it seemed to pick and choose which pages it recreated. I noticed only 2nd level sub-site pages were being brought across. I also noticed that for each site brought across, instead of recreating the default page it created a new default welcome page. This was obviously a huge concern, and would amount to a fair degree of manual labour to get it all up and running initially.
  4. Lists don’t get brought across and can’t be considered linked-up varied content. That means all the lists we needed for the mobile site would have to be created in the mobile site manually, and a process set up to copy content across in the event it was added or changed. This was more programmatic work that we were trying to avoid due to time constraints. (If this is a concern of yours, have a read of another of Jamie’s posts.)

For these reasons along with the research I had conducted and my own prior bias, rightly or wrongly I decided to turf the idea of using variations and proceeded to investigate an alternate solution.

Thanks to a huge degree of good fortune, only days before I had been playing around with the Content Query Web Part along with CommonViewFields and styling with ItemStyle.xsl. I decided to leverage this functionality and the result was instant. I exported the Content Query Web Part and redeployed it under a different name to include the CommonViewFields of PublishingPageContent,HTML; and all fields associated with my ‘multi-part’ page layout. It’s important to note that once the web part is on the page and pointed at the Pages library containing the page to copy, an additional filter of Title equal to the current page’s title needs to be set – in hindsight I could have created a new web part overriding the Content Query Web Part to set this filter and the CommonViewFields automatically.

I then edited the ItemStyle.xsl file to include templates for all the fields listed in CommonViewFields:

<xsl:template name="PageHTMLOnly" match="Row[@Style='PageHTMLOnly']" mode="itemstyle">
    <xsl:variable name="DisplayTitle">
        <xsl:call-template name="OuterTemplate.GetTitle">
            <xsl:with-param name="Title" select="@Title"/>
            <xsl:with-param name="UrlColumnName" select="'LinkUrl'"/>
        </xsl:call-template>
    </xsl:variable>
    <xsl:value-of disable-output-escaping="yes" select="@PublishingPageContent"/>
</xsl:template>

Only 2 issues remained: firstly, all in-content links pointed to the non-optimised pages regardless of whether a mobile-optimised page existed; and secondly, users weren’t being warned when they were being taken to an un-optimised page within the site. jQuery to the rescue.

To ensure mobile-optimised pages were linked to appropriately, I included the attribute class="mobilelink" on all hyperlinks that needed it. The following jQuery took care of the rest:

$('.mobilelink').attr('href', function (index, val) {
  var returnVal = val;

  if (val.toString().toUpperCase() == '/CAREERPLANNING/KNOWINGYOURSELF/PAGES/KNOWINGYOURSELF.ASPX')
    returnVal = '/mobile/careerplanning/pages/knowingyourself.aspx';
  else if (val.toString().toUpperCase() == '/PAGES/CONTACTUS.ASPX')
    returnVal = '/mobile/contactus/pages/contactus.aspx';
  else if (val.toString().toUpperCase() == '/HOWWECANHELP/PAGES/HOWWECANHELP.ASPX')
    returnVal = '/mobile/pages/howwecanhelp.aspx';
  else
    returnVal = '/mobile' + val;

  return returnVal;
});

It would have been a little neater had the mobile site structure mapped more accurately to the live site structure, but never mind.

To ensure users were warned if they were about to enter a non-optimised page, jQuery Dialog was harnessed:

<div id="dialog" title="Confirmation Required">
  You are about to leave the mobile-optimised site.<br /><br />
  Would you like to continue?
</div>
$("#dialog").dialog({
  modal: true,
  bgiframe: true,
  width: 280,
  height: 185,
  autoOpen: false
});

$("a").filter(function (index) {
  var theHREF = $(this).attr("href");
  return theHREF != undefined &&
  theHREF != null &&
  theHREF != "" &&
  theHREF.toUpperCase().indexOf("JAVASCRIPT:") == -1 &&
  theHREF.toUpperCase().indexOf("/MOBILE/") == -1 &&
  theHREF.toUpperCase().indexOf("TEL:") == -1 &&
  theHREF.toUpperCase().indexOf("MAILTO:") == -1 &&
  theHREF != "#" &&
  theHREF.charAt(0) != "#";
}).click(function(e) {
  var theHREF = $(this).attr("href");
  var target = $(this).attr("target");
  e.preventDefault();

  $("#dialog").dialog('option', 'buttons', {
    "Confirm" : function() {
      if (target != undefined && target == "_blank")
      {
        window.open(theHREF, target);
      }
      else
      {
        window.location.href = theHREF;
      }

      $(this).dialog("close");
    },
    "Cancel" : function() {
      $(this).dialog("close");
    }
  });

  $("#dialog").dialog("open");
});

The end result was a mobile site that needed to be created manually initially (but with little effort – my experimentation with variations concluded that I would have had to do the same if not more to get that working initially) but would largely be set and forget assuming the mobile site structure didn’t need to change. Content updates would be instant rather than relying on the synchronisation process between the source and varied page. We were also able to maintain the ‘root’ URL of careercentre.dtwd.wa.gov.au by not requiring the source variation site to be created.

The last remaining problem was getting there. Since we hadn’t harnessed variations, we didn’t have the VariationRoot page to direct browsers to the mobile version of the site. Instead I ensured the default page of the root site had a web part on it containing the logic to perform the redirect, courtesy of Detect Mobile Browsers.
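The redirect logic itself boils down to a user-agent test. The sketch below is a much-simplified version of what scripts like Detect Mobile Browsers do – the real script's regex covers hundreds of devices, and the function names and mobile URL here are illustrative only:

```javascript
// Simplified mobile detection and redirect. Detect Mobile Browsers'
// actual regex is far more exhaustive; this cut-down pattern is
// for illustration only.
function isMobileBrowser(userAgent) {
    return /android|iphone|ipod|blackberry|windows phone|opera mini|mobile/i
        .test(userAgent);
}

function getRedirectUrl(userAgent) {
    // Send mobile devices to the mobile sub-site; everyone else stays put.
    return isMobileBrowser(userAgent) ? '/mobile/' : null;
}

// In the web part this would run as:
//   var target = getRedirectUrl(navigator.userAgent);
//   if (target) window.location.href = target;
```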

So there we have it. A result not particularly conforming with stated best practices however one the client is stoked with and should prove easily maintainable and effective. I’d be particularly interested to hear about other ways in which mobile sites have been created in SharePoint and anyone’s experience using SharePoint variations for that purpose.

How I passed 70-576

I had the good fortune today to sit, and thankfully pass, exam 70-576 – PRO: Designing and Developing Microsoft SharePoint 2010 Applications. Aside from the fact the computer tried to tell me I was a no-show even after turning up half an hour early, it all went rather smoothly.

I’ve had a bit of experience sitting Microsoft exams in the past, going back as far as June 2008 with the core .NET and WinForms exams and then moving on to 3 of the 4 SharePoint 2007 exams in 2009/2010, so I figured I kind of knew what to expect. This is probably the most study I’ve put into a cert as well, and as such I’ve learnt more than I ever have before – definitely the main benefit you get out of sitting the exam is the amount of content you learn to pass it.

But anyway, back to the point. I discovered a lot of valuable resources that assisted me along the way to passing this exam. A few I didn’t end up getting to, but I’ll include them anyway for the sake of completeness – they may come in handy.

Blogs: Rather than create these types of posts myself, which was my original intention, I thankfully found some existing blogs that covered off what I wanted – namely breaking the exam down into its components and either explaining or linking off to MSDN/TechNet content. The most useful I found was Martin Bodocky’s set of posts, but beware – it’s definitely not for those short of time. This formed the majority of my study, and while the content could be found on MSDN/TechNet directly, his posts saved me a lot of time!

Another blog I read towards the start was Pedro Pinto’s guide to the exam – it covered a reasonable amount of content and would be more attractive to those short on time or allergic to constant link-clicking in MSDN.

Training and Labs: Admittedly I never got to any of this content. I would eventually like to, but time didn’t permit. As it turned out, I didn’t really need it to pass the exam – but I’m sure it would suit those more inclined to hands-on learning rather than reading.

Videos: Now this is where I struck gold. Watched all of these and they were well worth it. The exam-cram series by Ted Pattison is a must see – shout out to my colleague Elliot Wood for tracking them down.

That about covers it. I’ll sign off with a bit of my own advice. Read the questions carefully – very carefully. Use the process of elimination – there will be the obvious 1 or 2 incorrect answers, but more important to spot is (on some occasions) the 1 word in the question that makes all the difference to the answer. If you want advice on how to pass 70-573 then take a look at my post How I passed 70-573.

Good luck!

Welcome!

So, this is it.

My first post on what is, for me, a long overdue foray into the world of work-related social media. It’s something I’ve been meaning to do for a while. Since I started coding back in 2005 I always imagined I’d host a humble site dedicated to the things I’ve encountered along the way. Back then it was .NET – predominantly WinForms and then bridging into the online world of ASP.NET. It never happened.

A few years ago my career path changed and I entered into the wonderful world of SharePoint. That was surely the motivation I needed to start up a personal blog. There was definitely enough to write about. It never happened.

So I’ve taken the opportunity to start one up now. Over a year after SharePoint 2010 reached RTM, in an environment where SharePoint blogs and the dissemination of information is more common than ever before. So why now?

I’ve definitely encountered a lot in my time with the product to date. The breadth of experience I’ve been lucky to have has opened my eyes to 2 things. Firstly, there’s a hell of a lot to write about, and a hell of a lot I have personal opinions on. Secondly, while the information always seems to be out there eventually, it’s not always that easy to find. I figure the more sources there are the better, and I’ll get to put my 2 cents in along the way.

My goal is that someone manages to get something out of the things I write here. Noble huh? I may even hope as far as thinking some of the things I write may spark some debate and increase my own levels of knowledge.

Enjoy.
