Riding the Elephant

In recent years I have been consumed by several sizeable software development projects encompassing a stack of different products and technologies – among them the .NET framework, SharePoint, Nintex, K2, C#, SQL, the Global Assembly Cache, Web User Controls, Server Controls, and a lot of object-oriented code. While building such leviathans, I can’t help noticing their performance – or rather the lack of it.

It brings to mind my experiences working on open source web projects with PHP and MySQL – the simplicity, the speed, and the flexibility of the LAMP (Linux, Apache, MySQL, PHP) platform. It’s been a few years now since I’ve had the chance to work with PHP, and I have to admit I miss it.

While C# is a wonderful language, and the Visual Studio development environment is fantastic, you can’t help noticing the amount of horsepower required to get things done. While waiting for the solution to build earlier, I found myself mentally comparing it to a new computer with Windows installed – and the inevitable dragging down of its possible performance as soon as you start installing layers of software.

When I left college (25 years ago!), C++ was a new language, and OS/2 was handing Windows its backside. I had learned to program in Pascal, knew a little C, a bit of assembler – and had been taught from day one at college that great developers wrote clean, fast, elegant code. It came as quite a shock when I entered the world of professional software development a few years later, and discovered that commercial pressures almost always ensured that software was never quite as good as it might be.

Parallel to my career, the open source movement grew like a weed, and it was no accident that I eventually became involved – writing one of the early popular blogging scripts, and a content management system. They were engineered from the ground up for speed, minimal footprint, utility, maintainability, clarity, and elegance.

And so we finally return to the title of the post – my thoughts about .NET development.

The .NET framework, and C# to a lesser extent, are one hell of an achievement. So is Visual Studio. Granted, they steal ideas from many other software development languages (chiefly Java), but there is a clarity of thought and depth of implementation present that is incredibly impressive. While I can understand why MSIL exists, and the advantages it brings, I also think Microsoft have missed an enormous trick by not letting us compile .NET code directly to machine code. The Framework does it via its intermediate language and caches it – why not let the developer do it up front too? Just as an option?

I suppose I’m just getting a bit old. It seems a shame to have fantastic hardware in front of you, and then lay an operating system on top of it that restricts all access to the hardware. You then overlay a framework on top of the operating system to run (and cache) interpreted code, and finally you create innumerable code libraries to allow communication with database servers, web services, and such like by implementing further layers of complexity.

More bloat. More expense for the hardware. More, more, more.

I have to keep reminding myself that we no longer have to worry about memory management, file handles, pointers, linked lists, and the various other demons that haunted the software developers of yore. It just feels some days that we start each project with a gazelle humming away under the desk, and slowly turn it into an elephant… a really heavy, slow, lethargic elephant.

Provisioning SharePoint WebPart Pages, and WebParts with PowerShell

One of the more common exercises you might undertake in a PowerShell script when automating the deployment of infrastructure is the creation of WebPart pages, adding WebParts onto those pages, and potentially making one of those pages the default front page for a given site.

The following code snippets illustrate the methods required for each step of the process.

Connect to SharePoint

Before we can issue any instructions to SharePoint we need to add the SharePoint SnapIn to PowerShell, and connect to a web.

if ((Get-PSSnapin "Microsoft.SharePoint.PowerShell" -ErrorAction SilentlyContinue) -eq $null) {
    Add-PSSnapin "Microsoft.SharePoint.PowerShell"
}
$web = Get-SPWeb "https://intranet.contoso.com"

Create a WebPart Page in the Site Pages Library

Next we create a page in the site pages library – notice that the layout template is chosen by its internal ID – you can find these by searching MSDN for “SharePoint Page Layout Template enumeration”.

$site_pages_library = $web.Lists["Site Pages"]
$pageTitle = "My Page"
$layoutTemplate = 4  # layout template ID, from the enumeration mentioned above
$xml = '<?xml version="1.0" encoding="UTF-8"?>' +
       '<Method ID="0,NewWebPage">' +
       '<SetList Scope="Request">' + $site_pages_library.ID + '</SetList>' +
       '<SetVar Name="Cmd">NewWebPage</SetVar>' +
       '<SetVar Name="ID">New</SetVar>' +
       '<SetVar Name="Type">WebPartPage</SetVar>' +
       '<SetVar Name="WebPartPageTemplate">' + $layoutTemplate + '</SetVar>' +
       '<SetVar Name="Overwrite">true</SetVar>' +
       '<SetVar Name="Title">' + $pageTitle + '</SetVar>' +
       '</Method>'
$result = $web.ProcessBatchData($xml)

Add a WebPart to the Page

First we need to instantiate a WebPart manager object, which will be used to manipulate the webparts within a given page.

$webpartmanager = $web.GetLimitedWebPartManager($web.Url + "/SitePages/My%20Page.aspx", [System.Web.UI.WebControls.WebParts.PersonalizationScope]::Shared)

Add a Content Editor WebPart to the Page

Some example code to add a content editor WebPart to a WebPart page. Notice in particular the final command to the WebPart Manager object, which details the section of the page, and an index number within that section to place the WebPart.

$webpart = new-object Microsoft.SharePoint.WebPartPages.ContentEditorWebPart
$webpart.ChromeType = [System.Web.UI.WebControls.WebParts.PartChromeType]::None
$webpart.Title = "Example Content Editor WebPart"
$docXml = New-Object System.Xml.XmlDocument
$contentXml = $docXml.CreateElement("Content")
$inner_xml = "<p>Hello World!</p>"
$contentXml.set_InnerText($inner_xml) > $null
$docXml.AppendChild($contentXml) > $null
$webpart.Content = $contentXml
$webpartmanager.AddWebPart($webpart, "Header", 1) > $null

Add a List View WebPart to the Page

It turns out adding a list view to the page is a bit easier than a content editor WebPart. Note that the WebPart takes a copy of the view, so changing the underlying view within the list later will not affect the WebPart.

$list               = $web.Lists["My List"]
$view               = $list.Views["All Items"]
$webpart            = new-object Microsoft.SharePoint.WebPartPages.ListViewWebPart
$webpart.ChromeType = [System.Web.UI.WebControls.WebParts.PartChromeType]::TitleOnly
$webpart.Title      = "Example List View WebPart"
$webpart.ListName   = $list.ID.ToString("B").ToUpper()
$webpart.ViewGuid   = $view.ID.ToString("B").ToUpper()
$webpartmanager.AddWebPart($webpart, "Body", 1) > $null

Make the new Page the default front page for the site

One of the more common reasons to provision a page through PowerShell is as part of a dashboard that will become the user interface for the SharePoint site – this is how you do that.

$root_folder = $web.RootFolder
$root_folder.WelcomePage = "SitePages/My Page.aspx"
$root_folder.Update()

Release Resources

Finally, we need to release the resources PowerShell is holding onto with SharePoint.

$web.Dispose()

Thoughts on Building a Better PDF Factory

One of the more common requirements I come across while working on document management projects is the automatic generation of documents. Sometimes it may be as simple as the production of form letters, and sometimes it may be the production of complex business reports.

Traditionally, the Microsoft Office family of products have enabled automatic document creation through a mixture of database and word processor integration. A power user would typically arrange a data set in Microsoft Access, connect to it from Microsoft Word, and run a Mail Merge function to automatically generate documents incorporating the connected data. The number of skills required to successfully configure and run a mail-merge is high enough that even power users end up having to review their own instructions.

Just to make things a little more interesting, I also commonly see another requirement allied to the automatic creation of documents – the automatic generation of PDF files.

Coming from a development background with Microsoft SharePoint, my first thought was that I might author my own Microsoft Word files (because DOCX files are essentially XML), and push them through Word Automation Services to render them as PDFs.
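That “DOCX is essentially XML” claim is easy to demonstrate: a .docx file is just a ZIP archive whose main part is word/document.xml. The sketch below (Python, purely for illustration – the project itself lives in .NET) pulls the plain text out of a document using nothing but the standard library:

```python
# A .docx file is a ZIP archive of XML parts. This extracts the text runs
# (<w:t> elements) from the main document part, using only the stdlib.
import zipfile
import xml.etree.ElementTree as ET

# WordprocessingML namespace, as used by Word's document.xml
W_NS = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def extract_docx_text(path):
    """Return the concatenated text runs from a .docx file."""
    with zipfile.ZipFile(path) as zf:
        xml_bytes = zf.read("word/document.xml")
    root = ET.fromstring(xml_bytes)
    return "".join(t.text or "" for t in root.iter(W_NS + "t"))
```

Authoring a document is the same trick in reverse – write the XML, zip it up with the right part names, and Word (or Word Automation Services) will open it happily.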

Word Automation Services has been strangely forgotten by Microsoft over the years. Back in the days of SharePoint 2007, if you wanted to manipulate Word files on a server, you would need to instantiate an instance of Word on the server – and Word has never been the lightest, or cleanest application in the world. Continually stopping and starting Microsoft Word on a server is a pretty good recipe for disaster – it leaks memory like a sieve, and doesn’t give it back to the operating system. Microsoft knew all about this, so they included an almost totally undocumented headless version of Microsoft Office in SharePoint 2010 called “Word Automation Services”. It allows you to convert files between Word and PDF format (among others) via API calls – yes, you read that correctly – Word Automation Services has no user interface.

Here’s the rub though – Word Automation Services runs as a service, and batches its work – so any concept of providing “on demand” PDF generation becomes unrealistic.

Thankfully, Adobe opened the PDF file format in 2008, and made it a standard (versus a proprietary format) – this opened the floodgates for the open source community to embrace the format, and begin developing tools to work with it. While numerous commercial products such as ActivePDF arrived, providing rapid application development toolsets, slowly but surely open source projects overtook them in functionality, performance, and stability.
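Because the format is openly specified, even a “hello world” PDF can be assembled by hand. This sketch (Python, for illustration only – the libraries discussed here are .NET) writes the object, cross-reference and trailer structure that the open source tooling automates:

```python
# Build a minimal single-page PDF by hand: five objects (catalog, page
# tree, page, content stream, font), an xref table with correct byte
# offsets, and a trailer. `text` must avoid unescaped parentheses.
def minimal_pdf(text):
    content = f"BT /F1 24 Tf 72 720 Td ({text}) Tj ET".encode()
    objects = [
        b"<< /Type /Catalog /Pages 2 0 R >>",
        b"<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
        b"<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] "
        b"/Resources << /Font << /F1 5 0 R >> >> /Contents 4 0 R >>",
        b"<< /Length " + str(len(content)).encode() + b" >>\nstream\n"
        + content + b"\nendstream",
        b"<< /Type /Font /Subtype /Type1 /BaseFont /Helvetica >>",
    ]
    out = bytearray(b"%PDF-1.4\n")
    offsets = []
    for i, body in enumerate(objects, start=1):
        offsets.append(len(out))                      # byte offset of object i
        out += f"{i} 0 obj\n".encode() + body + b"\nendobj\n"
    xref_pos = len(out)
    out += b"xref\n0 6\n0000000000 65535 f \n"        # free-list head entry
    for off in offsets:
        out += f"{off:010d} 00000 n \n".encode()
    out += (b"trailer\n<< /Size 6 /Root 1 0 R >>\nstartxref\n"
            + str(xref_pos).encode() + b"\n%%EOF\n")
    return bytes(out)
```

Writing the file is then just `open("hello.pdf", "wb").write(minimal_pdf("Hello World"))` – which is exactly why, once the specification was public, robust libraries appeared so quickly.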

Suddenly I am able to gather together the open source code from projects such as MigraDoc and PDFSharp, wrap them in my own class libraries and web services, and deliver them within a Web Solution Package for Microsoft SharePoint. I can build a reusable turnkey PDF generator that builds PDFs in fractions of seconds, using a fraction of the resources required by previous solutions.

There seems to be a tremendous fear of going the bespoke route when designing solutions for customer projects – for all sorts of reasons. I will suddenly be burdened with unit testing, system testing, source code version control, intellectual property management, and so on – all of which comes at a cost. When a reusable solution can be developed that provides orders of magnitude better performance and results than “out of the box” functionality though, I must pause for thought.

Getting Things Done

In order to combat the endless torrent of tasks, requirements and commitments surrounding me over the years, I have experimented endlessly with task lists, note taking, and various methods to keep on top of things. The only one that has really stuck with me has been the “Getting Things Done” methodology. I first heard about it while reading Merlin Mann’s 43 Folders website, and then bought the book about it by David Allen. I have caught myself returning to it again and again over the last few years – almost to remind myself what I could be doing, which is quite often very different from what I end up doing.

“Getting Things Done” is based on a loose set of ideas – a toolkit to help bring organisation to your chaos. The tools are not specified – only the manner in which you might use them. Each person will prefer different tools, and each person’s chaos will be different.

My chaos consists of a demanding full time career as a professional software developer, a sometimes equally demanding “second string” career as a freelance web developer, and the remainder as the husband of an infinitely better half, and the father of three young children.

Central to the “Getting Things Done” or GTD methodology are three basic ideas:

  • If it’s on your mind, your mind isn’t clear. Anything you consider unfinished in any way must be captured in a trusted system outside your mind, that you know you’ll come back to regularly and sort through.
  • You must clarify exactly what your commitment is and decide what you have to do, if anything, to make progress toward fulfilling it.
  • Once you’ve decided on all the actions you need to take, you must keep reminders of them organized in a system you review regularly.

So the basic idea is to forget about everything you don’t need to be thinking about – to store it away somewhere – and to regularly pull tasks from that store as they need to be done.
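The capture / clarify / review loop above can even be sketched as a tiny “trusted system” – everything lands in an inbox, gets clarified into a concrete next action (or parked), and the lists are reviewed regularly. All the names here are my own illustration, not part of any GTD tooling:

```python
# A toy model of a GTD "trusted system": capture everything, clarify the
# inbox into next actions or a someday list, review regularly.
from collections import deque

class TrustedSystem:
    def __init__(self):
        self.inbox = deque()    # capture: everything goes here first
        self.next_actions = []  # clarified, concrete steps
        self.someday = []       # parked until a review resurfaces it

    def capture(self, stuff):
        self.inbox.append(stuff)

    def clarify(self):
        """Empty the inbox, deciding what each item actually requires."""
        while self.inbox:
            item = self.inbox.popleft()
            if item.get("actionable"):
                self.next_actions.append(item["next_action"])
            else:
                self.someday.append(item["note"])

    def review(self):
        return list(self.next_actions), list(self.someday)
```

The point of the model is the invariant: the inbox always ends up empty, and nothing is lost – it has either become a next action or been parked for a later review.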

The idea of freeing your mind from anything that doesn’t matter right now has been the most difficult for me to embrace. While listening to one of Leo Laporte’s podcasts recently, a rather novel nuclear tactic of sorts was put forward – if your desk has mountains of unknown “stuff” on it, get a great big box, and sweep everything into it – then mark the box “some day”. You will of course return to the box, but only to fish out things that need to be done as they crop up.

Over the years I have experimented with a number of tools for my “trusted store”. Remember the Milk, Toodledo, Google Keep, Things, Wunderlist, Trello, Basecamp… the list goes on and on. For several months I used (and loved) 37Signals Backpack – it was simple, flexible, and good enough – and then the developers pulled the plug on it.

After tinkering, playing, using, breaking, and misusing all manner of task list software, websites and services, I probably have more perspective than most about what I need in a solution, which it turns out is very different from what I would like.

The concept of an In-Box is perhaps the lasting influence Remember the Milk had on me – with the idea that you could throw any tasks immediately into an inbox to get them out of your way. It falls into the same paradigm as Inbox Zero (where you try to keep your email inbox empty as far as possible).

I was going to write about my specific solution, but I’m not so sure it’s really of much importance to anybody except myself. The great thing about any productivity methodology is that you are free to modify and adapt it to the tools you have. It doesn’t really matter if you have a desktop computer, a laptop, a paper notebook, an iPhone, or whatever – the most important thing is the formation of a regular habit to manage your store of notes.

Software Development Methodologies

We all know that managers like to talk about models, methodologies, and methods. They like to sound clever, throwing acronyms around, and spouting catch phrases and buzz words as often as possible. No realm of information technology seems to display this mania quite so aptly as software development – or rather the process by which you might imagine software is designed and developed. Given the amount of hoodoo, fear, uncertainty and outright rubbish written about the various ideas, I thought it might be timely to write a post outlining what each one really means – both for my own reference, and my own sanity – if this is of any use to you, that’s a bonus.

What is a development methodology?

Broadly speaking, it’s the description of an approach to building software – the reason to have the description in the first place would be to get a team of developers to work in a similar manner to each other, and so that team leaders have a clue what’s going on.

Right. So what are the various well known methodologies?

Waterfall

The waterfall model is a sequential development process, in which development is seen as flowing steadily downwards – like a waterfall – through the phases of Conception, Initiation, Analysis, Design/Validation, Construction, Testing and Maintenance. The process is followed with rigour, and loved by pedantic team leaders who like to tick boxes, and make everybody’s life hell. It’s incredibly expensive to do, and customers both love it and hate it – they love it because it can be run on a fixed price, but they hate it because a calculator application ends up costing as much as the space shuttle.

Iterative and Incremental

Iterative and Incremental development is a cyclic software development process developed in response to the weaknesses of the waterfall model. It starts with initial planning and ends with deployment, with cyclic iteration in between. So, essentially, this is Waterfall where we admit that waterfall is idiotic, and we agree to go round and round in circles, until we’ve spent just as much time, effort and money as Waterfall. I guess brakes can be applied in the form of somebody in the middle of the mayhem who continually asks “is this good enough – will it do?”. Iterative development is often tied to the “Rational Unified Process” – another meaningless description heard often, but understood by nobody.

Rapid Application Development

Rapid application development is a structured technique where early designs are turned immediately into prototypes, which are then iteratively evaluated, refined, and redeveloped, ad nauseam, until the finished product is produced. “RAD” was invented to combat the main problem of Waterfall based development methodologies – by the time anything got built, the requirements had changed – and by the time the redeveloped solution re-appeared, the requirements had changed again. Rapid application development became very fashionable in the mid 1990s with the advent of visual design tools such as Visual Basic and Delphi that allowed fast interface development. It also caused some of the worst spaghetti code in the known universe, due to nobody paying their “code tax” – inviting developers to go back and clean up after requirements have changed.

Agile Development

If you are a fellow developer, you were expecting this one to be in the list – probably because it’s the fashion of the moment, and all managers in the known universe think Agile sounds cool when talking to clients. I expect they stand in a “ready for action” fake karate pose when they say it. In reality, the “Agile” label covers a swathe of similar methodologies – the Wikipedia description reads as follows;

Agile methodologies generally promote a project management process that encourages frequent inspection and adaptation, a leadership philosophy that encourages teamwork, self-organization and accountability, a set of engineering best practices that allow for rapid delivery of high-quality software, and a business approach that aligns development with customer needs and company goals.

Phew! So it sounds like it will save the known universe – and its growing popularity has resulted in more being written about it than any other method – meaning basically that managers can now have big fat books about it on their shelves, and communicate in pure acronym when discussing project plans. In reality, all Agile really means is that you will communicate, you will try to make things work, you will be trusted (!?), and you will not blow a gasket when requirements change. Of course the customer also has to realise that change equals taking longer.

Extreme Programming

Quite unlike extreme ironing, extreme programming does not involve carting a laptop halfway up Aconcagua to write some C++. It is however similar in taking ideas from several flavours of Agile development, and constructing a set of “ideals”, or “expected behaviours” around them. I can only imagine the anal, ivory-towered developers that dreamed up Extreme Programming as a methodology – whereas most of us might well follow a lot of the ideas anyway, there is a strict swathe of rules, behaviours, and guidelines that you can follow if you really want to be an extreme programmer. I’m guessing the people who like working this way also have 20-sided dice in their desk drawers. I’m being cruel, aren’t I? One of the ideas within Extreme Programming that I really like is pair programming – working together so that one of you programs while the other thinks. Can you imagine – sitting there, with your feet up, sipping coffee and spouting lofty ideas at somebody all day?

In summary…

I’m guessing this blog post is going to generate its fair share of laughter, snorts of derision, outright anger, incensed murmurs of “he didn’t get it”, and various other rumblings of discontent.

It’s worth remembering that 99% of development teams use elements of all the methodologies that have been written about in the text books. It’s also worth noting that all attempts to build software in a faster, more efficient, more responsive manner are eventually defeated by millions of words being written about them in textbooks, and managers applying so much structure, measurement and review that you may as well call them all Waterfall and have done with it.

A Collection of Great Books about Computers, Technology, and the Internet

Books about technology don’t have to be filled with hardware specifications and configuration instructions. Sometimes they can be filled with stories about the people – the forgotten people whose brilliance led to the devices, networks, and software we now rely on in daily life. I’ve read all of the books listed below, and would without hesitation recommend them to anybody with even a passing interest in the history of the internet, or software development in general.

Where Wizards Stay Up Late by Katie Hafner

In the 1960s, when computers were regarded as giant calculators, J.C.R. Licklider at MIT saw them as the ultimate communication device. With Defence Department funds, he and a band of computer whizzes began work on a nationwide network of computers. This is an account of their daring adventure.

What Just Happened by James Gleick

For the past decade change seemed to happen overnight, every night. Fueled by the exponential rise of technology, the digital revolution was difficult for many to make sense of, but James Gleick watched and analyzed, criticized and commended, participated in and prophesied about the instantaneous transformations of the world as we knew it.

Hackers by Steven Levy

This 25th anniversary edition of Steven Levy’s classic book traces the exploits of the computer revolution’s original hackers — those brilliant and eccentric nerds from the late 1950s through the early ’80s who took risks, bent the rules, and pushed the world in a radical new direction. With updated material from noteworthy hackers such as Bill Gates, Mark Zuckerberg, Richard Stallman, and Steve Wozniak, Hackers is a fascinating story that begins in early computer research labs and leads to the first home computers.

The Mythical Man Month by Frederick P. Brooks Jr.

Few books on software project management have been as influential and timeless as The Mythical Man-Month. With a blend of software engineering facts and thought-provoking opinions, Fred Brooks offers insight for anyone managing complex projects. These essays draw from his experience as project manager for the IBM System/360 computer family and then for OS/360, its massive software system. Now, 20 years after the initial publication of his book, Brooks has revisited his original ideas and added new thoughts and advice, both for readers already familiar with his work and for readers discovering it for the first time.

Weaving the Web by Tim Berners Lee

Given the way the Web has become the dominant communications technology of our time, one could argue that Berners-Lee is the guy who invented the future. Yet up to now he has remained reticent about how he did it. Weaving the Web is therefore the definitive account of how the World Wide Web came to be. No one else could have written this book–the history of the Web straight from the source.

Accidental Empires by Robert X. Cringely

Robert X. Cringely manages to capture the contradictions and everyday insanity of computer industry empire building, while at the same time chipping away sardonically at the PR campaigns that have built up some very common business people into the household gods of geekdom. Despite some chuckles at the expense of all things nerdy, white and male in the computer industry, Cringely somehow manages to balance the humour with a genuine appreciation of both the technical and strategic accomplishments of these industry luminaries. Whether you’re a hard-boiled Silicon Valley marketing exec fishing for an IPO or just a plain old reader with an interest in business history and anecdotal storytelling, there’s something to enjoy here.

iWoz by Steve Wozniak and Gina Smith

Wozniak’s life – before and after Apple – is a “home-brew” mix of brilliant discovery and adventure, as an engineer, a concert promoter, a fifth-grade teacher, a philanthropist, and an irrepressible prankster. From the invention of the first personal computer to the rise of Apple as an industry giant, iWoz presents a no-holds-barred, rollicking, firsthand account of the humanist inventor who ignited the computer revolution.

The Cathedral and the Bazaar by Eric S. Raymond

The Cathedral and the Bazaar takes its title from an essay of the same name which Raymond read at the 1997 Linux Congress and that was previously available only online. The essay documents Raymond’s acquisition, re-creation and numerous revisions of an email utility known as fetchmail. Raymond engagingly narrates the fetchmail development process while at the same time elaborating upon the on- going bazaar development method he employs with the assistance of numerous volunteer programmers who participate in the writing and debugging of the code. The essay smartly spares the reader from the technical morass that could easily detract from the text’s goal of demonstrating the efficacy of the Open Source, or bazaar, method in creating robust, usable software.

Burn Rate by Michael Wolff

Michael Wolff was a journalist and writer; in 1998 he is a journalist and writer again. But in the first half of the ’90s he was an Internet entrepreneur, Chairman and CEO of Wolff New Media. This is Wolff’s story. BURN RATE is hugely informative about the world of the net and the web, search engines, closed systems, online pornography; it is also incredibly funny. As readable as a novel, BURN RATE is an all too human story of one man, at first idealistic and naive, then corrupted and increasingly cynical, and eventually burned out and tired, and of a world that bears as much resemblance to the school playground (not least in the age of its major players) as it does to the world of conventional businesses. If there is one book which tells us about what is going on in the complex and confusing struggle for the future of the Internet it is this one.

A Brief History of the Future by John Naughton

The Internet is the most remarkable thing human beings have built since the Pyramids. John Naughton’s book intersperses wonderful personal stories with an authoritative account of where the Net actually came from, who invented it and why, and where it might be taking us. Most of us have no idea of how the Internet works or who created it. Even fewer have any idea of what it means for society and the future. In a cynical age, John Naughton has not lost his capacity for wonder. He examines the nature of his own enthusiasm for technology and traces its roots in his lonely childhood and in his relationship with his father. A Brief History of the Future is an intensely personal celebration of vision and altruism, ingenuity and determination and above all, of the power of ideas, passionately felt, to change the world.

Deeper by John Seabrook

Although the author of this journey in cyberspace hardly ever goes anywhere – he just sits in front of his computer – his story is full of travel and incident. Readers meet Bill Gates and other major people in the industry via e-mail, join a virtual community to find out what daily life is like, adapt to the World Wide Web, and build a Web site. The voice of the book is at times comic, at others rueful, wanting to believe in the good things about the Net but sceptical of the hype, trying to account for the engrossing nature of this new frontier.

Microserfs by Douglas Coupland

Microserfs is not about Microsoft–it’s about programmers who are searching for lives. A hilarious but frighteningly real look at geek life in the nineties, Coupland’s book manifests a peculiar sense of how technology affects the human race and how it will continue to affect all of us. Microserfs is the hilarious journal of Dan, an ex-Microsoft programmer who, with his coder comrades, is on a quest to find purpose in life. This isn’t just fodder for techies. The thoughts and fears of the not-so-stereotypical characters are easy for any of us to relate to, and their witty conversations and quirky view of the world make this a surprisingly thought-provoking book.

JPod by Douglas Coupland

Ethan and his five co-workers are marooned in JPod, a no-escape architectural limbo on the fringes of a massive game-design company. There they wage battle against the demands of boneheaded marketing staff who torture them with idiotic changes to already idiotic games. Meanwhile, Ethan’s personal life is being invaded by marijuana grow-ops, people-smuggling, ballroom dancing, global piracy and the rise of China. Everybody in both worlds seems to inhabit a moral grey zone, and nobody is exempt, not even his seemingly strait-laced parents or Coupland himself.

Automattic Change the Game

Automattic, the developers of the WordPress web content management platform, started flexing their muscles last year – reminding everybody not only how good their existing hosted blogging platform is, but also how pro-active they are in developing it further.

A press release arrived one day, telling us about the native integration of the WebP graphics format into wordpress.com hosted blogs – cutting the size of images down significantly, and therefore improving the speed at which content loads for readers.

Then Automattic also released the Windows version of the WordPress application – the end result of a quiet two-year engineering effort to re-architect the underpinnings of the entire platform. That the developers have managed to evolve the platform while it is live is quite frankly astonishing.

I trialled the WordPress desktop application, and have to say – it’s impressive. It’s very fast – but then it’s a native application, so it should be fast. When using a desktop or laptop computer from now on, I can’t see myself using the web browser to author content for WordPress hosted sites – the experience is a game changer.

You might surmise that the WordPress desktop application is a reaction to the likes of Ghost, who developed their blogging platform exclusively with Node.js – allowing them to unify the language used (JavaScript) on the server and the desktop. Personally, I think Automattic have made better choices than the Ghost team.

Automattic have essentially re-imagined WordPress as a client/server application, where the client can be either the web, or a desktop application. They have “dog fooded” their own platform – using the same public API available to others to build the native content management interface. The desktop application and website look almost identical – for good reason – they are both running the same underlying code.
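That public API is plain HTTPS and JSON, so anyone can consume it the same way the desktop client does. A minimal sketch (the site address is a placeholder; the v1.1 “posts” endpoint is the documented public one):

```python
# Fetch recent post titles from the public WordPress.com REST API --
# the same API the official clients are built on.
import json
import urllib.request

API_BASE = "https://public-api.wordpress.com/rest/v1.1"

def posts_url(site, number=5):
    """Build the URL for a site's most recent posts."""
    return f"{API_BASE}/sites/{site}/posts/?number={number}"

def fetch_recent_posts(site, number=5):
    """Return recent post titles for a public site (requires network access)."""
    with urllib.request.urlopen(posts_url(site, number)) as resp:
        data = json.load(resp)
    return [post["title"] for post in data.get("posts", [])]

# Example, with a hypothetical site:
# fetch_recent_posts("example.wordpress.com")
```

Nothing in that code knows, or cares, what the backend is written in – which is precisely the freedom the abstraction buys Automattic’s engineers.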

While perhaps not as ideologically “pure” as Ghost’s approach, there are advantages to basing the server side of the platform on entirely different technology than the client. Google has long provided services on the internet by presenting a public API, and shielding users from any concern over the implementation. Doing so underlines the entire reason for providing an API in the first place – the abstraction allows the engineering team running the backend the freedom to evolve independently of the interface, not constrained by language, operating system, or hardware.

Automattic have essentially turned WordPress from a web application into a content hosting service with a public API. Perhaps the best content hosting service available on the internet.