Tuesday, December 29, 2009

So you think you can Host?

Flickr CC by 2.0

Throughout my career I've often come across small-sized dev-shops that believe…

a) That being their own Application Service Provider (ASP) and hosting their product on their own servers is cheaper, easier, and safer than letting a third party handle it

b) That any dev with a bit of interest in servers and hardware is capable of filling the roles of both a full-fledged developer and an IT Pro

Throughout my career I've never seen this work out particularly well.

The reason it never works is seldom a lack of talent on the part of the poor dev who 'stood closest to the server when the last dev-slash-IT-guy left' (freely quoted from Richard Campbell of DotNetRocks). It's just that being an IT Pro is just as much a full-time job as being a professional programmer.

In this crazy world of new technologies, languages, frameworks, tools, and methodologies popping up every five minutes, there's just NO WAY a poor soul can handle two full-time jobs like that and still be GOOD AT BOTH. Something has just got to suffer.

Being a developer by heart – and by job description – it's pretty obvious which one of those jobs will suffer. The problem is that you can probably live with this situation for a while before it really hits you. But be sure: it will hit you.

You can have 99.5% uptime for 3 years in a row. But when that server goes up in flames and the backup system won't restore your last 3 months' worth of data, you've ruined your uptime numbers for the next 3 decades.

Being an IT Pro means being proactive. It's a constant fight to stay ahead of trouble, and to be prepared with failover in place for when trouble hits you.

Being a dev-slash-IT-guy means you'll have neither the time nor the devotion to be proactive. Instead you're being post-active: you're putting out small fires every now and then, but you're seldom doing much to prevent them from starting in the first place.

If you're a startup with most customers on beta programs and not many paying customers yet, that might be OK. But someday you'll hopefully find yourself with a nice list of paying customers who depend on that nice little piece of software that you hacked together, er, wrote.

They might not expect your software to be flawless (even though they probably should), but they expect it to be there when they need it. They'll start demanding uptime guarantees and Service Level Agreements, SLAs (or at least they should). And you'd better take steps to make sure you can provide the expected level of professionalism when it comes to hosting your own services.

Do you think you can deliver that with an (at most) half-time IT Pro? My best guess is 'probably not'.

From my experience in the field, here are some questions you should start asking yourself if you find yourself at this stage:

(Now, here comes a full disclosure up front: I'm definitely no IT Pro myself – and I have no intention whatsoever of becoming one. This list might therefore not be 100% water- and bulletproof, but if you find any misjudgments or something you'd like to add to the list, please feel free to correct me or give suggestions in the comments below.)

  • How many ports are open and how many services are running and available from the outside on your public server(s) (the server(s) hosting your software, that is)? Do you, for instance, allow remote desktop connections to your public server(s) to be able to troubleshoot them? (A quick way to check what a server is listening on is shown after this list.)
  • What happens if someone from the outside takes control of your public server? Do they get access to your local network and domain as well?
  • How many servers are actually accessible from the outside?
  • Do you have a working Virtual Private Network (VPN) that anyone in your business can use? And if so, is it secure enough?
  • How many times in the last 6 months have you verified that you can actually restore all the data from your backup device? And how sure are you that you're actually backing up everything you need? Or to put it this way: if your office burns down today, will you have all the data you need to do business as usual tomorrow?
  • How often do you scan your network for suspicious activity? Are you sure you're alone on your network?
  • Do you have a wireless network available in your office? If so, what minimum level of security does it demand? Do you have just a pre-shared key that gives full access to the domain, or do you have something that is actually secure enough to prevent teenage hackers from accessing your file servers?
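A quick first reality check for that first question is to ask the server itself what it's listening on, using the built-in netstat tool. Run it in an elevated command prompt on the public server; -a lists all connections and listening ports, and -b shows which executable opened each of them:

c:\…>netstat -ab

It's no replacement for a proper port scan from the outside, but it's a cheap place to start.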

I'm not saying that every 3-5 man shop must hire a full-time IT Pro to handle this. This is of course a question of cost. But just like you're probably outsourcing accounting to a professional bookkeeper, you should also outsource other areas that are just as critical for your business.

If you're a small- or medium-sized dev-shop, hosting is in my experience always handled better by professional ASPs. And the same goes for securing and managing your IT infrastructure.

Don't get blinded by your luck so far; sooner or later your luck will run out. By then it will be neither cheaper, easier, nor safer to handle hosting and infrastructure by yourself – and there's nothing you can do about it.

Thursday, December 3, 2009

My favorite Windows 7 Hotkeys

I've been using Win7 since the beta and I like it a lot. It's in my opinion by far the best Windows operating system – a lot better than both Vista and XP. Not that it's such an enormous change from Vista; it's just the sum of all the small things. Like the perceived overall UI performance. The improved taskbar. The speed of the start menu. The search on the start menu (I can't remember the last time I actually needed to click my way through the start menu to start a program. I just type in whatever app or feature I need to start or open, and 99 times out of 100 it comes up among the top 3-4 items). And not least: the hotkeys.

  • Win + Arrow: Docks the active window to the left or right (left/right arrow), or minimizes (down arrow) or maximizes (up arrow) it. If the window is maximized, Win + Down Arrow restores it; hit Win + Down Arrow again and it will be minimized.
  • Win + Shift + Left/Right Arrow: Moves a window to the monitor on the left or right keeping the same position as it had on the monitor you moved it from.
  • Win + Home: Hides all open windows except the active one (you can do the same by grabbing hold of the window title bar with the left mouse button, just as if you were about to move the window around, and 'shaking' the window you want to leave open).
  • Win + T: Puts focus on the taskbar so that you can use the arrow keys to move between the programs on the taskbar, and then Enter to activate one, Shift+Enter to open a new instance, or the context menu key to bring up the menu for each program.
  • Win + Number: Opens the program at the given position on your taskbar. For instance, if you have pinned Outlook to the first position on the taskbar, Win + 1 will start Outlook (or restore it and bring it to the front if it's minimized or behind some other window).
  • Win + P: Opens the "Connect to a projector/external display" dialog with the options to show your desktop on the computer only, duplicate it, extend it, or show it on the projector (external display) only.
  • Win + X: Opens the "Windows Mobility Center" where you can adjust display brightness, volume, battery mode, wireless connectivity, external display, device sync, and presentation settings.
  • Win + Space: Peeks at the desktop – that is, makes all windows transparent so that you can see the desktop.
  • Win + E: Opens the Windows Explorer
  • Ctrl + Shift + Esc: Opens the Task Manager
  • Win + R: Opens the Run dialog
  • Ctrl + Shift (when launching apps): Runs the program as administrator. For instance, type 'cmd' in the start menu search box. If you just hit Enter you will open the command prompt with the privileges of the user you're currently logged in as. But if you instead hit Ctrl+Shift+Enter, you will open the command prompt with Administrator privileges.

Some of these are not really new to Win7, but I just threw them in there anyway because they’re just so incredibly useful :)

These hotkeys (or shortcut keys) just make my working day a little better and a little more productive. Mix these in with some of the new features of Windows Explorer and it can make your day a bit brighter:

  • Shift + Right-click on a folder gives you some extra options compared to a plain right-click, for instance "Open command window here" and "Open in new process".
  • Ctrl + Shift + N: Creates a new folder
  • Right-click + Drag a folder to the Windows Explorer icon on the taskbar pins the folder, so that you can right-click on the Windows Explorer icon and get direct access to the folder from the jump list.

Tuesday, November 17, 2009

Configure WCF to run on Windows 7

When you've installed Windows 7 and all the appropriate IIS features, WCF will still not be available on your box by default. I've had this little note to myself lying around somewhere on the file system, but I keep forgetting where it is every time I need it. So I'll put it up here, just to make the search a bit easier :)

Open up the command prompt in Administrator mode and run the following command:

c:\…>"%windir%\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\ServiceModelReg.exe" -r

This will map the svc file type to aspnet_isapi.dll and make IIS recognize WCF services and start up the ServiceHost for you. In other words, the svc MIME type will be registered with IIS. The parameter at the end is:

-r: Re-registers this version of WCF and updates scriptmaps at the IIS metabase root and for all scriptmaps under the root. Existing scriptmaps are upgraded to this version regardless of the original versions.

(copied from the official docs on the “ServiceModel Registration Tool”)

 

If you'll be running integration tests against your services and the tests use WCF self-hosting (instead of IIS), you also need to authorize the URLs that your self-hosted service will be using:

c:\…>netsh http add urlacl url=http://+:[port]/ user="[windows user name]"

Like the ServiceModelReg command, this one also needs administrator privileges on your command prompt. Replace [windows user name] with the account you'll be running the tests under; usually this will be the account you're logged in with, e.g. "domain\user". The [port] parameter is the port number you've configured on your WCF endpoint (typically 8000 for testing).
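So with a hypothetical test account and the typical test port, the command would look something like this (the domain, user name, and port are just examples – use your own):

c:\…>netsh http add urlacl url=http://+:8000/ user="MYDOMAIN\testuser"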

 

And just to be sure, restart IIS after you've run these commands:

c:\…>iisreset

Friday, July 10, 2009

The Great Git In The Sky

Every developer should know that having a good versioning system for your source files is crucial. Having the possibility to go back in time and see what your class or module or project looked like is indispensable. And if you're more than one developer on a project, having a common place – a repository – to store all files is even more indispensable.

Throughout the years I've tried a couple of different source control systems. Being a .NET developer on the Microsoft platform, I've tried both Visual SourceSafe (VSS) and Team Foundation Server (TFS), and I've also used the open source alternative Subversion (SVN). Lately a new source control system has drawn my attention, namely Git.

Git is a fairly new source control system, originally developed by Linus Torvalds for use in developing the Linux kernel. The first version of Git came in 2005, but it wasn't available on the Windows platform until late 2007 through the open source project msysgit (unless you were running the Unix-like environment Cygwin, that is).

Git takes a somewhat different approach to source control than the tools I've been used to. TFS, VSS, and SVN let you set up a centralized repository where you store all your files, keep track of version history, do branching, etc. Git is different in the sense that the repository lives locally on your machine, and when several people are working on the same project, the repositories are essentially synchronized across all the development machines – a so-called distributed version control system. This means that you have access to the full history of your source files locally. You can also have a remote Git repository that the local repositories push and pull changes to and from, but every local repository still holds the full version history.

Creating a Git repository using msysgit

For each project you want to put under source control, you just add a Git repository to the root folder of your project. Say you have some code lying in 'C:\Code\Work\MyProject'. If you want to place this under Git's source control and you've installed msysgit with the Windows Explorer integration, you just right-click on the folder and choose 'Git GUI Here' (or 'Git Bash Here' if you'd rather use the command prompt). Choose 'Create New Repository' and then enter the directory 'C:\Code\Work\MyProject' in the textbox that follows (a little glitch in the UX design there; it would have been friendlier if it had remembered where I opened the GUI from and put in that directory by default).

In the list of 'Unstaged Changes' in the upper right corner you can now choose the files you want to include in the repository. Select the appropriate files and press Ctrl+T (or Commit > Stage To Commit). Then you can commit them into the repository with Ctrl+Return (Commit > Commit).

Alternatively you can do the same from the command line, and in fact there are a couple of things you should do on the command line before you start committing to the repository. First of all you should enter your name and email, as these will be used on all commits:

Configure name and email
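For reference, the commands in the screenshot look something like this (the name and email are examples, obviously):

$ git config --global user.name "Ola Nordmann"
$ git config --global user.email "ola@example.com"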

The '--global' parameter hints that this is a configuration setting that will be effective across all Git repositories on this machine. You could also have done this through the Git GUI (Edit > Options…), but for the next thing I'll set up I couldn't find a way in the current version (v0.12.0.23): adding ignore patterns. In a typical .NET project you wouldn't want to add, for instance, the bin and obj folders to the repository, and the way to ignore these files is to add a '.gitignore' file to the root folder of your project. You can try to do this in Windows Explorer, but I think you'll quickly find that it's actually not possible (Explorer refuses file names that start with a dot). But through the bash it's a walk in the park.

To make my point I've created a console app called GitExample and opened the bash in the root directory. I can now issue a git init command to initialize a Git repository here, and by calling git status I can list all files and folders that are currently not under source control:

Initialize git repository
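In command form, that is:

$ git init
$ git status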

And as we can see, there are a lot of things here that I really don't want to add to the repository. Let's be ignorant:

Create ignore file
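That is:

$ touch .gitignore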

The touch command will create the file, and you can then edit it in your favorite text editor. The following list shows my ignorance:

Ignore pattern
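For a typical .NET project the ignore list would look something like this – treat it as an example, not a complete list:

bin/
obj/
*.suo
*.user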

With this in place we can run the status command again, and we'll see that there's a lot less for our Git commit process to care about:

List of files without the ignored ones

Now it's time to add the files to the repository, which you of course can do either through the GUI or the command line. But since we're already in Unix mode, let's do it the hardcore way:

Add to staging area and commit to repository
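The commands in the screenshot are essentially these (the commit message is just an example):

$ git add .
$ git commit -m "Initial commit"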

The procedure for committing files to the repository is two-phased. We do a 'git add .' to add all untracked files in our working directory to the staging area of the repository. The staging area is a set of files that are ready to be committed, and you can do a lot of adds (and removes) before you finally decide to commit all the changes into the actual repository. To commit the files you call the commit command with an '-m' parameter to attach a message to the commit. And as you can see from the screenshot above, after we've moved the files from the working directory to the staging area and then from the staging area to the repository, our working directory is clean.

Mesh It Up!

I mentioned earlier that if you want to share the repository with someone, you can set up a remote repository. A popular repository host is GitHub, which lets you store up to 300 MB free of charge (with unlimited storage for public repositories). You can then push and pull changes to and from this location (take a look at this excellent blog post by Jason Meridth to see how).

Another alternative is to use your Live Mesh account as the remote repository. Pål Fossmo wrote a great blog post on how you can set up Git together with Mesh, which shows you how to configure your Mesh folders. To initialize the Mesh repository you can use the clone command, as shown below:

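(The Mesh folder path here is just an example – point it at the folder you've set up for syncing:)

$ git clone --bare . ~/Mesh/GitExample.git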

The clone command does what you think it does: it makes a copy of your repository, and the --bare parameter tells Git to strip the copy down to only what is necessary for the change tracking. That means no working copy of the source files – only the object database that Git needs to track history. You can then push and pull between your local repositories and the Mesh repository, which in turn is synced with the cloud.

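Pushing to and pulling from the Mesh copy would then look something like this, assuming you register it as a remote named 'mesh' (the name is arbitrary):

$ git remote add mesh ~/Mesh/GitExample.git
$ git push mesh master
$ git pull mesh master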

Why use Git?

"Git – the fast version control system." I guess that slogan makes a solid statement by itself, and if you've worked with source control systems before you'll definitely appreciate the speed of Git. Visual SourceSafe is notoriously slow on just about every operation you perform (especially over HTTP(S)). Subversion is pretty fast on check-ins, but not that fast on check-outs. And TFS is pretty fast overall and also gives you the possibility of setting up local proxies if you have distributed teams. For a more quantified view of Git's performance, you can check out Scott Chacon, who has compared the speed of Git to Mercurial and Bazaar.

TFS might compete to a certain extent on speed, but when it comes to the install footprint – not least the effort it takes to actually install TFS 2008 – Git will outperform TFS any day of the week. That said, TFS is a lot more than just a version control system. But if you plan on using TFS solely for the purpose of tracking your precious source files, my advice is pretty clear: Don't! It's not worth it – neither in time nor money.

Compared to Subversion, it strikes me that the merging capabilities of Git are a bit better. Git tracks the content of files – not the files themselves – so merge operations seem more likely to be correct in Git. In my opinion, merging is probably one of SVN's weakest points; doing large merge operations in SVN is just painful, and you just know you're about to get burned. TFS, on the other hand, seems a bit better at merging than SVN, but then again: time and money…

I guess it's time for a little disclaimer: I haven't really used Git much yet, so I haven't done any large merge operations, and I might be wrong here. But from what I've read, and from how Git is built as a distributed source control system, I have a strong feeling that merging is really one of Git's sweet spots.

Anyway, if you have other opinions on the subject – or on anything else in this post – please feel free to speak your mind in the comments below :)

Resources

"Git Manual Page" is the official documentation on Git, and it's actually quite good. Lots of good examples and pretty well written. RTFM, right?

“Everyday GIT With 20 Commands Or So” from the official tutorial will give you a head start on the most used commands.

"Git Ready" has sorted some of the commands into 3 categories: beginner, intermediate, and advanced.

“Git For Windows Developers” – the title says it all I guess.

"Git – SVN Crash Course" will give you a head start on using Git if you're already familiar with Subversion.

“Why Git is better than X” has done some (slightly biased?) comparisons against other source control systems.

msysgit is the tool to download and install if you need Git to run on a Windows box.

TortoiseGit is another client for Git repositories. If you're familiar with TortoiseSVN for Subversion, the learning curve will be close to zero.

GitHub lets you store up to 300 MB in private repositories (unlimited storage for public repositories).

Tuesday, June 23, 2009

NDC 2009 Highlights

The Norwegian Developers Conference 2009 took place in Oslo last week and I was lucky enough to be one of the roughly 1000 attendees. That's about half the crowd the organizers were hoping for, but I guess we'll have to blame the ongoing financial turbulence for that. It was definitely not due to the speaker list, because that was downright impressive. And the pricing seemed very reasonable too. Or maybe calling it the Norwegian Developers Conference scared away the foreigners? I don't know, but those who weren't there really missed out on a great event.

I'll try to summarize some of my thoughts and impressions from this 3-day conference in this post, so let's start with the most important part: the sessions. Most of the sessions were taped and will be available online in the (hopefully) near future. I had already studied the agenda in detail before I went, but as always when attending conferences like this, the plan was bound to change. I'll list some of my favorite sessions from the conference, and I really recommend taking a look at these when they come online. So here's my top 5 in descending order:

1. Michael Feathers: "Working Effectively with Legacy Code: Taming the Wild Code Base"

I've watched a couple of talks by Feathers on InfoQ and he's a really skilled speaker as well as writer. I haven't had time to read his book "Working Effectively with Legacy Code" yet, but it's definitely one I will pick up soon. The talk was great and he had lots of good tips for when you're faced with a codebase that is not built to be testable.

2. Kevlin Henney: “The Uncertainty Principle”

On day 3 of NDC my original plan was to attend Scott Bellware's whole-day workshop on testing, but I was too late with the registration, so the workshop filled up before I got to sign up. Instead I spent the whole day with Kevlin, which really was a great alternative. I was lucky enough to hear him do a talk here in Trondheim about a month prior to the conference, so I knew this was going to be good. Kevlin has done some great work on design patterns and his talks are both informative and entertaining. I really recommend all of his talks, but if I were to pick one favorite I'd go for "The Uncertainty Principle".

3. Glenn Block: “Building Maintainable Enterprise Applications with Silverlight and WPF”

I'm a big fan of PRISM and we're using it on our current project. The talk was mainly about PRISM, but he also had some great tips on how to ease some of the pain of databinding the ViewModel to the View. Now, don't get me wrong here; I love databinding in WPF, but there are some pain points around refactoring when it comes to the string-based databinding against properties in the ViewModel. Glenn showed off some interesting tooling that he's working on to make this easier, and it will be up on CodePlex before long (I hope!). The essence of the tool was that if you name your controls in the View the same as the corresponding properties in the ViewModel, it can perform an auto-mapping between the View and the ViewModel. Anyway, it was a great talk and I got some valuable tips to take with me. Unfortunately this was one of the non-taped sessions, so as far as I know it will not be available online.

4. Udi Dahan: “Designing High Performance, Persistent Domain Models”

Design patterns are in many ways lessons learned over the mere 50 years we've been developing software. PRISM is, among other things, a set of design patterns to apply if you're building composite applications, focused mainly on the presentation layer. Domain-driven design, on the other hand, is a set of design patterns that focus on the core of the business: the domain model. Udi gave an excellent talk on the performance perspective of DDD.

5. Peter Provost: "Code First Analysis and Design with Visual Studio Team System 2010 Architecture Edition"

It's just amazing to see what the architect edition of VS10 contains, and I really look forward to some of the features that Peter showed off here. He's a joy to listen to and the guy just reeks of knowledge. If you'd like a quick tour of the VS10 architect edition and to see how you can read code in a new dimension, I highly recommend this session.

Runners Up

Other memorable sessions to watch include the .NET Rocks! episode recorded live [Update: Download the podcast here]. As always it was hosted by the Hardy & Hardy of the .NET community, Carl Franklin and Richard Campbell, and this time they had invited the HaaHa Brothers (Scott Hanselman and Phil Haack) to do the show. And what a show! Porn, beer and Bing – I say no more…

The HaaHa Brothers' own session was also a blast. Haack showed some nice tricks for hacking Hanselman's "secure" bank application, and the two of them just put together a great show. Put Hanselman on stage and you're guaranteed a good time!

If you're into DDD you'll also find Jimmy Nilsson's sessions quite interesting. Among other things, he showed how one could use the upcoming Entity Framework 4 as the O/R mapper in a DDD scenario. The way he turned user stories into BDD-ish unit tests was also quite interesting, and definitely something I will try out myself.

The social side of it

Going to conferences like this is of course not only about the talks and the technical stuff. The social aspect is a great part of it too, and the NDC organizers had really put a lot of effort into making that part just as successful as the technical side. The geek beer on Wednesday started off with an unforgettable jam session with Carl Franklin and Roy Osherove. Anyone who's attended one of Roy's talks knows that he has some amusing "alternative lyrics" to familiar songs. But what most people might not know is that Carl Franklin is a fantastic guitar player with an impressive voice. Where can we get your CD, Mr. Franklin? Great gig! [UPDATE: Some guys from TypeMock recorded the jam session and have published some clips here]

After a couple of beers we headed towards the city to find some place to eat. Scott Hanselman has this 'thing' where he just has to dig up an Ethiopian restaurant in every city he visits. And so we joined Hanselman, Phil Haack, and some other guys for an exotic dinner at Mama Africa. Scott and Phil are just incredibly nice guys and it was a memorable dinner – both the food and the company :)

The Big Party started with a decent dinner on Thursday evening. I mean, you really don't expect much when you sit down with a cardboard plate filled with some sort of stew-ish dinner at a conference like this, but it really wasn't that bad this time. And as the dinner sank in and the beer was starting to work, Datarock entered the stage. I must admit that electronica is not my favorite genre, but the performance Datarock delivered was impressive. And how could you possibly go wrong with lyrics like this at NDC?

I ran into her on computer camp
(Was that in 84?)
Not sure
I had my commodore 64
Had to score

-- Datarock, Computer Camp Love

And after the Datarock concert we headed up to the geek bar, where Loveshack had tuned in some never-dying 80s classics that really got the geeks rocking. Great show!

Once again I was impressed that some of the speakers chose to hang out with us mere mortals. Phil Haack, Peter Provost, Scott Bellware, and Udi Dahan were all hanging around and took the time to socialize. Much appreciated!

Photo by Rune Grothaug

A picture says more than the 1332 words on this page: NDC 2009 was just one big smile! Too bad it's a whole year 'til next time…

Thursday, May 28, 2009

My NDC 2009 Agenda

The Norwegian Developer Conference 2009 will take place in Oslo from June 17th to 19th. I've been lucky enough to get my hands on a full 3-day ticket and I'm really looking forward to this event. I attended both TechEd Barcelona in 2007 and PDC in LA last year, but I can't help thinking that NDC 2009 has an even more impressive speaker lineup than both of those – at least if you're into agile practices and software craftsmanship.

If you're thinking pure technology, NDC might not be that impressive, but I personally believe that the quality of a conference is a lot more about the quality of the speakers and how they present their thoughts and ideas, and less about the technological content. I'd rather spend an hour reading some good articles and trying out some new technology hands-on than spend an hour on a bad chair in a room that always seems to lack oxygen, listening to a mediocre speaker reading out loud every word on his/her PowerPoint slides.

Going to conferences is about getting inspired. It's about getting that tickling feeling of neurons going amok and new ideas swirling around in your head. It's about triggering activity in your anterior superior temporal gyrus. And it's all about the speakers. Skilled speakers with a lot of experience and confidence on stage, giving a talk on a topic near and dear to their heart, can really make a difference. And with a speaker lineup with names like Feathers, Rahien, Hanselman, Bolognese, Miller, Haack, Dahan, Osherove, Block, Provost, Bustamante, C. Martin, Lhotka… there's just no way this is going to be a mediocre event. It's destined for success!

The worst part of this conference will actually be picking which sessions to attend. It's just impossible not to miss a great session, but hopefully they will all be videotaped and available online shortly after the conference. Still, sessions are always best live, and one has to choose something. As it looks right now, I believe this will be my agenda:


DAY 1

  • Ayende Rahien, "Building Multi Tenant Apps": Haven't had a chance to see Rahien live yet, but I've read and used some of his work.
  • Michael Feathers, "Working Effectively with Legacy Code: Taming the Wild Code Base": I've seen some videos of Feathers on InfoQ and I highly recommend his sessions.
  • Juval Löwy, "Productive Windows Communication Foundation": I don't know much about Löwy to be honest, but getting productive with WCF is never bad.
  • Rockford Lhotka, "Implementing Permission-based Authorization in a Role-based World": Got to have some technical sessions too, and though I've never used Rocky's CSLA framework, I've listened to a couple of the DotNetRocks episodes he has been on. Besides, the content suits the project I'm currently working on perfectly :)
  • Udi Dahan, "Intentions and Interfaces - Making Patterns Complete": Yet another one of those gurus you just read and hear a lot about.
  • Michael Feathers, "Design Sense: Deep Lessons in Software Design": Feathers again; he's just that good.

 

DAY 2

  • Jeremy D. Miller, "Convention over Configuration applied to .NET": I've been following his blog for some time and I like his involvement with the Alt.Net community. There's a great interview with him on the Alt.Net podcast. And besides, CoC is fascinating.
  • Roy Osherove, "Unit Testing Best Practices": I went to Osherove's sessions at TechEd in 2007 and it was well worth it. Hope he brings his guitar :)
  • Ted Neward, "Extend the Customization Possibilities of your .NET App with Script": Ted is a great speaker, and the scripting possibilities are something I'd really like to look more into.
  • Robert C. Martin, "Clean Code: Functions": One of the most energetic speakers out there, and Clean Code will be read in the upcoming weeks.
  • Rafal Lukawiecki, "Architectural use of Business Intelligence in Application Design": BI has always been one of those fields I find interesting but never had the time to really dig into. And from what I've heard, Rafal was one of the top rated speakers at TechEd 2007 (or was it 2008?).
  • Jimmy Nilsson, "Entity Framework + Domain-Driven Design = true?": I've read Nilsson's book on DDD and seen his session at Øredev last year. I'm currently working on a project where we try to follow the guidelines of DDD, so it will be interesting to see his take on EF + DDD.
  • Richard Campbell & Carl Franklin, ".NET Rocks! Live": I've followed the .NET Rocks podcast for quite some time and the live recordings are never dull. It will be interesting to see who they gather for their panel this time.

 

DAY 3

  • Scott Bellware, "Full Day Tutorial: Good Test, Better Code": I'm a strong believer in TDD and Bellware is certainly one of the gurus in this field.

As you might have noticed from my list, I try to spread my sessions across as many different speakers as possible. That way I'll know which ones are worth spending time on when the videos come online.

And a little tip if you're going to NDC (or any other conference): do not hesitate to leave a session you find boring or uninteresting. It's your time, and you'd better spend it right!

Tuesday, May 26, 2009

PRISM: Your guide to a well-structured UI layer in WPF/SilverLight – Part 2

In Part 1 I talked a bit about testability as one of the major drivers for choosing Prism as your guidance for a composite application. In this post I'll try to give you some hints on how Prism addresses common challenges like separation of concerns, single responsibility, and supporting multiple platforms.

Modularity

Modularity is what makes composite applications composite. Modularity is one of those design principles that have been around 'forever', and it's just as relevant today as ever. "Modules", "packages", and "components" are all names for the same concept: grouping related functionality together. That means that cohesion inside a module should be high; the objects within a module should work within the same context and address a common problem. If the grouping of functionality is done right, the coupling between modules should be low, because there shouldn't be any need to reference objects that are unrelated.

The concept of modules in Prism will guide you towards the goal of high cohesion / low coupling. Modules in Prism don't dictate how far up or down the architectural layers you should or could go, but a module will typically include at least the presentation layer. Whether you choose to implement a complete, vertical slice of your application all the way down to the database, or stop right below the presentation layer, is up to you. What is important to keep in mind is that a module should preferably reference neither other modules nor the host application itself. The module must be kept as separate and isolated as possible. And because the modules are independent of their surroundings, they should be pretty easy to load into the application, making it possible to compose an application from these building blocks.
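To make this concrete, here is roughly what a module boils down to in the Composite Application Library. This is a minimal sketch only; OrderModule, OrderListView, and the region name "MainRegion" are all made up for the example:

using Microsoft.Practices.Composite.Modularity;
using Microsoft.Practices.Composite.Regions;
using Microsoft.Practices.Composite.Presentation.Regions; // the RegisterViewWithRegion extension

public class OrderModule : IModule
{
    private readonly IRegionManager regionManager;

    // The region manager is handed to the module by the IoC container.
    public OrderModule(IRegionManager regionManager)
    {
        this.regionManager = regionManager;
    }

    public void Initialize()
    {
        // Announce which view this module contributes, and which region it goes into.
        regionManager.RegisterViewWithRegion("MainRegion", typeof(OrderListView));
    }
}

Note that the module references neither the host application nor any other module; it just announces what it brings to the table.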

 

Maintainability

The biggest maintenance problems I've found myself in have usually been due to large, difficult-to-follow code-behind files. Large classes and methods with a lot of functionality are in general hard to maintain, but my code-behind files from the pre-TDD era had a distinct tendency to get bloated. And not only were they big; they also had a lot of different responsibilities, from UI logic and validation to business rules and data flow. And even data access in those early days (after all, that was what those on-stage demos and the MSDN documentation taught us, right?).

Prism tackles the code-behind problem by showing you how to use UI design patterns to separate functionality out into presenter and presentation model classes. These classes have no graphical components attached to them, so they lend themselves really nicely to unit testing. The Patterns & Practices team chose to implement what Martin Fowler calls the Presentation Model pattern. The more WPF-specific implementation of this pattern is often referred to as the Model-View-ViewModel pattern, coined by John Gossman, but because there's no "official" documentation of the MVVM pattern (just a whole lot of blog posts), P&P chose to refer to the well-documented Presentation Model. But if you want to google your way to more intel on the UI pattern used in Prism, enter MVVM or Model-View-ViewModel as your search term. That way you'll have a better shot at getting WPF- or SilverLight-related search results. A good start would be the MSDN blog post by Gossman, Dan Crevier's early series on DM-V-VM, various blog posts on the MVVM subject by Josh Smith, and Karl Shifflett's M-V-VM articles.
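At its core, such a presentation model class is just bindable state with change notification – no graphical components in sight, which is exactly what makes it unit-testable. A minimal sketch (the class and property names are made up):

using System.ComponentModel;

public class CustomerViewModel : INotifyPropertyChanged
{
    private string name;

    public string Name
    {
        get { return name; }
        set
        {
            name = value;
            // The View binds to Name in xaml; this notification keeps the two in sync.
            OnPropertyChanged("Name");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

A unit test can new up a CustomerViewModel, set Name, and assert on the raised PropertyChanged events without a GUI thread anywhere near.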

 

Multi-Targeting

Should you choose WPF or SilverLight? The short and evasive answer is of course: it depends. I'm not going to elaborate on when you should choose which, but if your answer is both, the guidance in Prism can show you how to do this in a very smooth way. In fact, the difference between the WPF and SilverLight versions of Prism's reference application is 95% xaml. That is, everything but the Views is the exact same code. And by exact I literally mean the same code; instead of having the Presenter/PresentationModel code duplicated, they actually link the SilverLight files to the corresponding WPF files. The SilverLight projects therefore contain mostly Views, and the shared code lives in the WPF projects.

The last 5% implies that you can't get all the way by changing the xaml alone; there is still some tweaking needed to get WPF and SilverLight playing nicely together. Since there are some subtle functional differences between WPF and SilverLight (SilverLight is not a pure subset, since it contains some functionality that does not (yet) exist in WPF), the P&P team has used preprocessor directives in the places where they've had to customize for each platform specifically.
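The pattern looks something like this – a sketch where the class and method are made up, but SILVERLIGHT is the conditional compilation symbol that SilverLight projects actually define:

public partial class NewsReaderView
{
    private void ApplyPlatformSpecificSetup()
    {
#if SILVERLIGHT
        // Code that needs SilverLight-only APIs goes here.
#else
        // Code that needs WPF-only APIs goes here.
#endif
    }
}

Same linked source file, two compilations, two platforms.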

Wrapping It Up

Building applications that are highly testable and maintainable is key for long-lived software. Splitting functionality into well-defined modules that can be developed in parallel by separate teams is key for scaling out the development process. But keep in mind that not all applications will benefit from the Composite Application Guidance. Prism is not a silver bullet, and it will bring more complexity into your development process. But if your needs justify the added complexity, and you know that you must 'embrace change' for years to come, Prism can really lay the foundation for a successful development story. And remember: Prism is a set of guidelines, not a framework.

Monday, April 20, 2009

PRISM: Your guide to a well-structured UI layer in WPF/SilverLight

I've had the opportunity to work with the Composite Application Guidance for WPF and SilverLight (codenamed "PRISM") for a couple of months now, and I'm really impressed with what the Patterns & Practices team has shipped this time. The forefather of Prism is in many senses the CAB framework (Composite UI Application Block), and even though I never worked with CAB myself, I've heard that it is quite large and not that easy to grasp. Prism, on the other hand, is quite lightweight, and the documentation is very concise and well written.

The feedback on CAB has also been that it's too intrusive; it's an all-or-nothing application block and it's hard to take advantage of the UI composition patterns in existing applications. CAB is, from what I have found, meant to be built upon – not with (remember, I haven't worked with CAB myself, so if you'd like to correct me, please feel free to do so in the comments below). With Prism, P&P has taken quite another approach; you're free to use (or not use) any part of the Composite Application Library in Prism. And you can switch out whatever part doesn't suit your needs. For instance, a core principle in Prism is to use an IoC container to make the application highly testable and loosely coupled. And since P&P has developed an IoC container themselves, namely Microsoft Unity, the examples and the reference application in Prism use Unity. But if you'd rather use Windsor, StructureMap, Ninject, Autofac, or any other IoC container, you're definitely free to do so.

The big difference is that where CAB is an application block, Prism is application guidance. And it guides you towards building applications that are testable, maintainable, multi-targeted, and modularized. I'll dive into these concepts in more detail, so let's start with:

Testability

Everybody tests their code, and there are two ways to do it:

a) Manually: set some breakpoints, fire up the app, input some data and push some buttons, let the debugger hit the breakpoints, inspect some variables, and check that everything works as expected (or, more often, try to find out why it doesn't work as expected)

b) Automated: use a testing framework like NUnit, xUnit, or MSTest, write some tests, and then let the machines do the tedious work of verifying that you didn't break anything you didn't mean to

If you enjoy your time with the debugger, I won't try to convince you that automation is good. But I consider myself a pretty lazy programmer, and whenever I see an opportunity to automate boring, repetitive tasks, I try to do so. I prefer to code, not debug, and therefore I automate my testing. I write unit, integration, and UI tests that can be run by an unattended build machine whenever I check in code changes. I'm a coder, not a debugger.

But writing unit tests can be hard if you haven't architected your classes and methods in a way that opens them up for testing. If you instantiate objects inside your classes, or are in other ways tightly coupled to other classes, mocking out the classes that are outside the scope of the current unit test will be hard. Not impossible, just hard. One of the areas that is notoriously hard to unit test is the "code behind" of graphical components, because when you instantiate a GUI component you become dependent on a GUI thread to run the test. On a build machine that runs your tests without any interactive user logged in, there is no GUI thread available. And besides, it is bloody annoying and time-consuming to have those forms and windows pop up whenever you run your test suite.

Opening your classes up for dependency injection and using an IoC container to manage the wiring of dependent objects is a well-proven and easy way to solve this problem. Prism explains and shows you how to write your application using an IoC container for the hot-wiring. And as I've already mentioned, if you prefer another IoC container, it's totally up to you. But if you choose not to use Unity, you'll have to be prepared to write some wiring code for initializing your application. Prism comes with the wiring code in the form of a class called UnityBootstrapper, and there's no surprise in the naming here: this class takes care of booting up your application with the Unity IoC container. So if you want to use another container, you'll need to rewrite the UnityBootstrapper to suit your choice. Or, if you're lucky, use the source code from someone who's already done it (like the Castle Windsor adapter and bootstrapper that you can find in the Composite WPF Contrib project over at CodePlex).
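In its simplest form, the idea looks like this. A minimal sketch with Unity, where IOrderService, OrderService, and OrderViewModel are made up for the example:

using Microsoft.Practices.Unity;

public interface IOrderService { /* ... */ }
public class OrderService : IOrderService { /* ... */ }

public class OrderViewModel
{
    private readonly IOrderService orderService;

    // The dependency comes in through the constructor, so a unit test
    // can hand in a mock or a stub instead of the real service.
    public OrderViewModel(IOrderService orderService)
    {
        this.orderService = orderService;
    }
}

public static class Wiring
{
    public static OrderViewModel CreateViewModel()
    {
        var container = new UnityContainer();
        container.RegisterType<IOrderService, OrderService>();
        // Unity sees the constructor parameter and resolves it automatically.
        return container.Resolve<OrderViewModel>();
    }
}

The production code gets its dependencies wired up by the container, while the test project never needs to touch the container at all.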

 

All right, I think that's enough for one post. I promised to write about maintainability, multi-targeting, and modularity as well, so those will be the subjects of my next post.

Thursday, March 19, 2009

MSDN Live: Slides & Demo Code from “WPF Done Right!”

My colleague Pål Fossmo and I were invited to give a talk on the Composite Application Guidance (codenamed Prism) on the MSDN Live March 2009 tour. It was great fun, but man, did we spend many hours preparing for this event! Given that there were two of us giving the talk, one could assume that this meant just half the work each. But no. So many hours of discussing what to include, how to do the talk, who does what, synchronizing the talk, rehearsing…

We thought we had it all figured out as we started out in Stavanger on March 5th. But the feedback from the session suggested that maybe we ought to change the talk a bit. The score wasn't as good as we'd hoped, and we knew we could do better. So we spent the weekend adjusting the talk for Bergen on March 10th. The tip from MSDN General Rune G was clear: more code equals higher score. So we added some quality time in Visual Studio to the talk, and the score went up. I must admit that I was perhaps the one who resisted taking "live coding" into the talk in the first place, but seeing the scores from Stavanger and Bergen made it pretty clear that this was a bad call. The reason for my resistance was perhaps the fear of 'something' going wrong during live coding; staying in PowerPoint is safe, jumping around in Visual Studio is a lot riskier. So many things can go wrong, and standing in front of a crowd with an app that crashes and burns is just not much fun. Believe me, I've tried it. The demo-God was nice to us though, and I think we got away with some nice demos of how to get started with Prism.

The Slides

We spent about half of the talk in PowerPoint and the rest was demo. The slides focus on the what and why of Prism, while the how was in Visual Studio. Since one of the key concepts of Prism is the use of an IoC/DI container, we decided to spend about 8-10 minutes explaining the concepts of Dependency Injection and Inversion of Control using some example code in PowerPoint. Rune Grothaug will publish a screen recording of the session we did in Oslo, which I guess will be available in a couple of days (I'll update this article with a link to the recording when it's available). If you want to take a look at the slides, you can download them here (for you non-Norwegian speakers out there: sorry, the slides are (mostly) in Norwegian, but if you'd like a copy in English just let me know and I'll translate and upload it).

The Code

The goal of the demo was to show off some of the key concepts in Prism: modules, regions, views, and communication. As we started to prepare for this talk, we quickly found the need for a very small and lightweight app to demo. The reference application that comes with Prism is really good, and I highly recommend that everyone take a walk around the code from the Patterns & Practices team. It's nicely done and I think most of us can learn a lot just by reading it. But even though the reference app is nice and well done, we still wanted something smaller and more fitted to our purpose, so we decided to roll our own little composite application. And since Pål is a big fan of Twitter, he built a nice little Twitter client using the concepts from Prism. The demo app, called 'Kvittre', consists of a shell with 4 regions, and in the main app we had 3 modules: one for the login view, one for posting tweets, and one for listing tweets from those you follow.

For the live demo we wanted to show how to build a module, and since TinyUrl is a popular service for shortening URLs in tweets, we decided to build a module that could take a URL, ask the TinyUrl service for a shortened version, and then insert the tiny URL into the message. And to demo that you can build a module separate from the 'main solution', we coded the module in a separate solution. To test-run the module we added a 'host application' project that contained a bootstrapper and a region to host the view from the module. When the module was tested and looked okay, we deployed it back to Kvittre. Kvittre was set up with a DirectoryModuleCatalog that would load any module found in a given folder, and since the 4th region in the Kvittre shell was set up to host the TinyUrl module, the module was loaded and displayed in Kvittre. Then we used an EventAggregator for the communication between modules (a small sketch of that pattern follows below), and wrapped it up by demoing some unit testing of the Login method in the Presenter class of the LoginView. If you want to check out the code, it's all wrapped up and ready for download right here.
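For those who haven't seen Prism's EventAggregator in action, the module-to-module communication boils down to something like this. A rough sketch only – the event type, presenters, and payload are made up to match the demo scenario:

using Microsoft.Practices.Composite.Events;
using Microsoft.Practices.Composite.Presentation.Events;

// A shared event type, typically placed in an infrastructure assembly
// that all modules reference.
public class UrlShortenedEvent : CompositePresentationEvent<string> { }

// In the TinyUrl module: publish the shortened URL when it comes back.
public class TinyUrlPresenter
{
    private readonly IEventAggregator eventAggregator;

    public TinyUrlPresenter(IEventAggregator eventAggregator)
    {
        this.eventAggregator = eventAggregator;
    }

    public void OnUrlShortened(string tinyUrl)
    {
        eventAggregator.GetEvent<UrlShortenedEvent>().Publish(tinyUrl);
    }
}

// In the tweet-posting module: subscribe and insert the URL into the message.
public class NewTweetPresenter
{
    public NewTweetPresenter(IEventAggregator eventAggregator)
    {
        eventAggregator.GetEvent<UrlShortenedEvent>().Subscribe(InsertUrl);
    }

    private void InsertUrl(string tinyUrl)
    {
        // ...append tinyUrl to the message being composed...
    }
}

The point is that neither module references the other; they only share the event class.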

Tuesday, February 24, 2009

“Legacy Code is Code Without Tests”

I wish I'd come up with that phrase first, but it was Michael Feathers who stated it in his "Working Effectively with Legacy Code". It's a great statement, and it pretty much sums up what testing is all about: if you're not covered by tests, it's hard to refactor and change the code and at the same time know that you didn't break anything. And if you have code that is resistant to change, or that makes you nervous every time you touch it, then you have code that won't be changed. You have legacy code. You can try to wrap it, hide it, and forget it, but someday it will blow up. And someday you'll have to go in there and make it work. But you won't have any safety net. You'll have to change something you don't know the reach of, and you'll have to do it blindfolded and pray that your changes aren't going to break something somewhere else. But I promise you: they will.

And man, I can tell you: it is good to be a consultant with skills in a technology where the software industry hasn't had time to produce that much legacy code yet! That alone should be reason enough to invest some of your time into learning new skills. Skills that make you more valuable in the projects that produce new code, instead of maintaining legacy code that someone else hacked together years ago. It's those green-field projects that are fun!

And because there aren't that many WPF-based apps out there yet, there may still be time to save some poor souls from aiming at that same old pit of failure. The pit of strongly coupled, untestable, monolithic monsters. There's hope, and I believe in the goodness of coders. I believe that we want to make solid code. I believe that we want to produce code that is maintainable and changeable. And I believe that we, the residents of the software community, can make the leap into software craftsmanship. It's just a matter of making the right choices. And I believe that loose coupling, testability, and modularity are definitely the right choices in most cases. These are the key principles that will make you a better person (or at least a better developer).

Loose coupling and testability are tightly coupled (touché!). If you're doing test-driven development, or behavior-driven development, or any other development practice that uses tests to drive the design, you will end up with code that is loosely coupled. And if you're building an app with loose coupling between modules and classes, you'll end up with code that lends itself very well to testing. And testable, loosely coupled systems will be easier to maintain and change than a tightly coupled system with no tests to verify your code.

Modularity is another beast though. Modularity is about splitting the application into pieces that multiple teams can work on in parallel – without getting in the way of each other. Modularity is about scalability and maintainability. Adding new functionality without ending up with a logarithmic time/functionality curve is an important factor in software development (maybe not for you and me, but for those white collars* who decide whether to fund or close down your project, predictability is extremely important). And modularity is about mastering complexity. How do you master an overly complex challenge? You break it down into smaller, more manageable parts. And in software terms, those parts are modules.

So if you take these 3 ingredients – loose coupling, testability, and modularity – and shake them together with WPF (shaken, not stirred), you'll have a fantastic opportunity to do WPF right. You'll end up with code, not legacy code.

If you're in Stavanger on the 5th of March, Bergen on the 10th, Trondheim on the 12th, or Oslo on the 19th, you can hear me and my colleague Pål Fossmo give a talk on this topic at the MSDN Live event.

 

 

* Who, by the way, just managed to bankrupt Iceland and are about to break the backs of some of the strongest economies in the world… how the he** did they do that?!

Thursday, January 15, 2009

“A desk is a dangerous place from which to view the world”

A while ago a colleague of mine posted a blog entry about his desk at work. He used the words of Gunnery Sgt Hartman, and so I will be no less of a man:

"The Desk is a system. That system is our enemy. But when you're inside, you look around, what do you see? Businessmen, teachers, lawyers, carpenters. The very minds of the people we are trying to save. But until we do, these people are still a part of that system and that makes them our enemy."

(almost a quote from the great Morpheus)

This desk fetish was picked up by Anders Hammervold, who in turn challenged Joar Øyen, who in turn challenged me… And since Joar also challenged Pål Fossmo – who still hasn't published his desk – the pressure is now on The Reverend…

Oh, and before I forget: the quote in the title is by John le Carré. A fabulous quote, if I may say so.

Sunday, January 11, 2009

Custom iTunes installation

I love my iPod and I use it almost every day. Mostly I'm listening to podcasts, but also music, of course. But I hate iTunes. Or maybe that's a bit strong: I hate the iTunes installer. I think it's all too intrusive, and it doesn't give me all the choices I feel it should.

If you go to the download page on iTunes' web site and download the iTunes 8 version suited for your operating system, you'll get an iTunesSetup.exe file. If you've tried to run this file, you might have noticed that you also end up with a bunch of apps and services that you didn't ask for. These include:

Bonjour – Apple's implementation of Zeroconf for discovery of services on a local network. The only reason I might need the Bonjour service is if I want to share my iTunes library on my LAN. But for now, running iTunes on a single machine, there's no reason to have this service running around wasting resources.

Apple Mobile Device Support and MobileMe – Both of these are meant for synchronization between a computer and an iPhone/iPod Touch. I only have an iPod Nano, so why would I need a service to sync between my PC and something I don't have?

Apple Software Update – This service will check for new updates at regular intervals, just like Windows Update. Luckily it won't install anything automatically; it will only notify you when there's a new iTunes version and let you decide whether you want to download and install it.

QuickTime – I’m not even going to start elaborating why I dislike QuickTime so much. It would just make me angry. Luckily, there are alternatives.

The last one in the package is of course iTunes itself.

As you might have guessed by now, there are only one or two out of the six that I actually want running on my PC. And it's really not that hard to get it that way. It turns out that iTunesSetup.exe is just a self-extracting package containing installers for all of the apps and services above. So if you'd like a custom install of iTunes without the nagging apps and services that come out of this black box, you'll need a packaging app like WinRAR or 7-Zip. Then you can just extract iTunesSetup.exe and delete the parts you don't need or like. The only thing to be aware of is that iTunes requires QuickTime, but if you've installed QuickTime Alternative before running the iTunes installer, you'll be safe and sound. So, to make a short list of how to install iTunes only and keep your system a bit less cluttered:

  1. Download and install QuickTime Alternative
  2. Download, but don't install, iTunes
  3. Extract 'iTunesSetup.exe' (via the WinRAR/7-Zip context menu, or see the command-line sketch below)
  4. Delete the files you don't need. For me that means everything except iTunes.msi and AppleSoftwareUpdate.msi.
  5. Open the command prompt, navigate to the folder where you extracted iTunes.msi, and run the following command: msiexec /i iTunes.msi /passive
  6. If you'd like to be reminded of new updates (which you definitely should), run the same command for the updater service: msiexec /i AppleSoftwareUpdate.msi /passive

(If you want the regular GUI-based installation, you can just skip the "/passive" parameter, or simply double-click the msi file in Windows Explorer.)
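And if you prefer to do the extraction in step 3 from the command line as well, 7-Zip's console tool can handle it. A minimal sketch, assuming 7z.exe is on your PATH and you're standing in the download folder (the output folder name is just an example):

c:\…>7z x iTunesSetup.exe -oiTunesExtracted

The 'x' command extracts with full paths, and '-o' (no space after it) names the output folder.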

Now, I didn't figure all of this out by myself; Google helped me find this article by Ed Bott and this thread over at the PC Pitstop forum.