<h1>JavaScript anno 2015 – Gulp</h1>Gulp is a build automation tool. Just like MSBuild, NAnt, make, and pretty much any build automation tool out there, Gulp runs one or more tasks that you define. <br /><br />Gulp is built on Node.js, which means that you define your tasks in JavaScript. To get started you need to install Gulp and create a <em>gulpfile.js</em>, which will be the home for your automation tasks:<br />
<pre><code class="bash">npm install gulp -g
npm install gulp --save-dev
new-item gulpfile.js -type file
invoke-item gulpfile.js
</code>
</pre>
<br />
(btw I’m using PowerShell as my shell)<br /><br />If you run those 4 commands you should now have your gulp file open and ready to define some tasks. Here is an example of a Gulp task:<br />
<pre><code class="javascript">var gulp = require('gulp');
gulp.task('default', function() {
process.stdout.write("Gulp is running the default task\n");
});</code></pre>
<br />
Since Gulp is running in Node.js we can use Node’s <em>process</em> object to write to the console. To run this task just run the <em>gulp</em> command in your shell and the default task will run.<br /><br />Gulp doesn’t really do much itself. In fact, there are only 4 functions in the Gulp API:<br />- src<br />- dest<br />- task<br />- watch<br />
<br />
<h2>
Src & Dest</h2>
<br />
<em>src</em> and <em>dest</em> are filesystem operations. To read files from the filesystem, use <em>gulp.src()</em>. To write files, use <em>gulp.dest()</em>. The <em>src</em> function takes a glob pattern as input (using Node’s glob, which in turn uses the <em>minimatch</em> library), so to read all js-files in a Scripts folder you can use this syntax to recursively find all files:<br />
<pre><code class="javascript">gulp.src('Scripts/**/*.js')
</code></pre>
<br />
The <em>dest</em> function takes a path as a parameter and will output files to that folder (and create the folder if it doesn’t exist). Matching files that already exist will be overwritten. To create a simple copy task, we can chain the <em>src</em> and <em>dest</em> functions together, and Gulp uses the pipeline pattern to do this (similar to pipes in PowerShell and bash):<br />
<pre><code class="javascript">gulp.src('Scripts/**/*.js').pipe(gulp.dest('Copies'));</code></pre>
<br />
<br />
This will copy all js-files from the Scripts-folder to a folder called Copies. Note that it will keep the directory structure from the source, so if the file index.js exists in a folder ‘src’ in the Scripts folder it will end up in ‘Copies\src\index.js’.<br />
<br />
<h2>
Task</h2>
<br />
If we want to make a task for the file copying above we could define it like this:<br />
<pre><code class="javascript">gulp.task('copy', function(){
gulp.src('Scripts/**/*.js')
.pipe(gulp.dest('Copies'));
});</code></pre>
<br />
<br />
To run a specific task, we just use the <em>gulp</em> command and send in the name of the task as parameter:<br />
<pre><code class="bash">gulp copy</code></pre>
<br />
<br />
If you don’t provide a named task, Gulp will look for a task called ‘default’ and run it.<br /><br />The <em>task</em> function has two required parameters and one optional:<br />- name: the name of the task<br />- function: the function that defines what the task does<br />- dependencies (optional): an array of strings that contain names of other tasks that should be run prior to this one<br /><br />As with all build automation tools, tasks can be chained together. If we want the <em>default</em> task to depend on the <em>copy</em> task, we can define it like this:<br />
<pre><code class="javascript">gulp.task('default', ['copy'], function() {
process.stdout.write("Gulp is running the default task\n");
});</code></pre>
<br />
<br />
<h2>
Watch</h2>
<br />
The <em>watch</em> function is (like <em>src()</em> and <em>dest()</em>) filesystem related, and it takes a glob pattern as input. With <em>watch()</em> you can get notified when a file is changed and act accordingly. For instance, if we want the <em>copy</em> task to automatically copy files when changes occur, we can set up a watch for that:<br />
<pre><code class="javascript">gulp.task('watch', function() {
gulp.watch('Scripts/**/*.js', ['copy']);
});</code></pre>
When this task is running, all changed files will be copied. Any new files will also be copied, but since the copy function only copies (doh!) it will not remove deleted files in the source folder from the destination folder. If we want that, we need to react to the ‘deleted’ event and delete the file ourselves:<br />
<pre><code class="javascript">gulp.task('watch', function() {
gulp.watch('Scripts/**/*.js', function(event){
if(event.type === 'deleted') {
deleteFile(event.path);
}
else {
gulp.start('copy');
};
})
});</code></pre>
<br />
<br />
Note: I haven’t shown the implementation of the function <em>deleteFile</em> as this is just an example of how to react to different types of events, but a possible implementation is sketched below. The available event types are: <em>changed, added</em> and <em>deleted</em>.<br />
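<br />
For completeness, here is a minimal sketch of what <em>deleteFile</em> could look like. It assumes the same Scripts/Copies folders as the copy task above, and uses Node’s built-in <em>fs</em> and <em>path</em> modules:<br />
<pre><code class="javascript">var fs = require('fs');
var path = require('path');

// Maps a deleted file in the Scripts folder to its copy in the
// Copies folder and removes it (sketch only - errors are just logged)
function deleteFile(srcPath) {
  var relativePath = path.relative('Scripts', srcPath);
  var destPath = path.join('Copies', relativePath);
  fs.unlink(destPath, function(err) {
    if (err) {
      console.error('Could not delete ' + destPath);
    }
  });
}</code></pre>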
<br />
<h2>
Plugins</h2>
<br />
Since Gulp itself doesn’t do much it depends upon plugins to provide the usefulness. And there are a lot of plugins to choose from. At the time of writing there are 1690 plugins listed on the Gulp home page. Let’s start with one of the most popular of them: the JSHint plugin. Since Gulp is running on Node it’s of course using npm, so we install <em>gulp-jshint</em> just like any other npm package:<br />
<pre><code class="bash">npm install gulp-jshint --save-dev</code></pre>
<br />
<br />
Now back to the Gulp file to let JSHint do some error checking on our JavaScript code:<br />
<pre><code class="javascript">var jshint = require('gulp-jshint');
gulp.task('jshint', function(){
gulp.src('./src/scripts/*.js')
.pipe(jshint())
.pipe(jshint.reporter('jshint-stylish'));
});</code></pre>
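<br />
Note that the stylish reporter used above is a separate npm package, so it has to be installed alongside <em>gulp-jshint</em>:<br />
<pre><code class="bash">npm install jshint-stylish --save-dev</code></pre>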
<br />
<br />
<h2>
Alternatives</h2>
<br />
Gulp is not the only build automation / task runner for JavaScript. Its predecessor is <a href="http://gruntjs.com/getting-started">Grunt</a> and it’s quite similar to Gulp, but the definition of tasks is configuration-based and therefore more verbose. You configure tasks instead of defining them as JavaScript functions. Gulp is also more flexible than Grunt. Personally I prefer the Gulp-way of doing it, but Grunt is still the most popular task runner AFAIK.<br /><br /><a href="http://coffeescript.org/">Cake</a> is pretty much the same as Gulp, but uses CoffeeScript instead of pure JavaScript (as a side note: you can actually use CoffeeScript with Gulp too, but that’s another story).<br /><br /><a href="https://github.com/broccolijs/broccoli">Broccoli</a> seems to be the new kid on the block right now and it looks promising. For bigger projects the watch-and-rebuild can take a ‘long’ time (everything is relative), so Broccoli was designed to do incremental builds. That is: only build whatever has changed since the last build. Broccoli is still in its early stages (at version 0.16 at the time of writing) and the “Windows support is still spotty” according to their own documentation.<br /><br />Last but not least: you don’t actually need a dedicated automation tool, as npm can already do this for you. I’m not going to go into detail on this one, but you can check out <a href="http://blog.keithcirkel.co.uk/why-we-should-stop-using-grunt/">this blog post</a> by Keith Cirkel for more information.<br />
<br />
<h2>
Resources</h2>
<br />
<a href="http://gulpjs.com/plugins/">Gulp plugins</a> – Search for available plugins<br /><a href="https://github.com/isaacs/node-glob">Node Glob</a> – Documentation for the glob syntax in Node<br /><a href="https://scotch.io/tutorials/automate-your-tasks-easily-with-gulp-js">Automate your tasks easily with Gulp.js</a> by Justin Rexroad<br />Smashing Magazine: <a href="http://www.smashingmagazine.com/2014/06/building-with-gulp/">Building with Gulp</a><br />For more information on Node.js and npm, take a look at my previous post in this <em>Getting started with JavaScript</em> series: <a href="http://www.kjetilk.com/2015/07/javascript-anno-2015-node-and-npm.html">JavaScript anno 2015 - Node and npm</a>Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com0tag:blogger.com,1999:blog-3258074296776382669.post-69535879824785960822015-07-25T12:15:00.001+02:002020-06-03T23:09:31.146+02:00JavaScript anno 2015 – Node and npmNode.js is server-side JavaScript; a “runtime environment for server-side and networking applications” [Wikipedia]. It’s open source as just about everything in the JavaScript world of frameworks and tools. Node is an amazing runtime and in just about 5 lines of code you can have a simple web server up and running:<br />
<pre><code class="javascript">var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n');
}).listen(1337, '127.0.0.1');</code></pre>
<br />
<br />
<br />
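To try it out, save the snippet above to a file (say <em>server.js</em> – the name is arbitrary) and start it with Node:<br />
<pre><code class="bash">node server.js
# then point a browser to http://127.0.0.1:1337/</code></pre>
<br />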
We’re not using Node per se in our project yet. We’re running our ASP.NET web application on IIS, but it might be something we’ll be looking more at for specific tasks later on. But for now we’re using it implicitly through the Node Package Manager.<br />
<br />
<h2>
Node Package Manager (NPM)</h2>
<br />
A package manager is responsible for installing, upgrading, configuring and uninstalling software packages for a given platform. NPM is a package manager for JavaScript, just like NuGet is a package manager for .NET, CPAN for Perl, Maven and Ivy for Java, RubyGems for Ruby, and so on. <br /><br />NPM is bundled with Node, so the way to get NPM on your machine is to install Node. With Node installed you can run npm commands from PowerShell or the command prompt.<br /><br />NPM differentiates between modules and executables. When you install a module it will be placed in a <em>node_modules</em> folder, whereas executables are placed in a sub-folder called <em>.bin</em>. You can run the <em>npm root</em> and <em>npm bin</em> commands respectively to see where those folders exist on your local machine. <br />
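<br />
For instance (the example paths will of course vary with your setup):<br />
<pre><code class="bash">npm root      # prints the local module folder, e.g. .\node_modules
npm bin       # prints the local executable folder, e.g. .\node_modules\.bin
npm root -g   # the same commands with -g show the global locations</code></pre>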
<br />
<h2>
npm install </h2>
<br />
NPM also differentiates between installing globally or locally. Locally means local to the folder you run the <em>npm install</em> command from (typically your project root folder). A good rule is to never install modules globally. Only executables that need to be available across many projects should be installed globally, and to avoid versioning issues one should strive to install most packages locally. To install a package globally you just run the install command with the --<em>global</em> (or -<em>g</em>) option.<br /><br />Example:<br /><br /><em>npm install browserify</em> => Downloads and unpacks the Browserify package to a local <em>node_modules</em> folder<br /><br /><em>npm install grunt-cli -g</em> => Downloads and unpacks the Grunt command line interface to the central <em>.bin</em> folder, and adds <em>grunt.exe</em> to your PATH.<br />
<br />
<h2>
Dependencies</h2>
<br />
Often a package uses other packages, and these dependencies are expressed in the <em>package.json</em> file inside the package. So when npm installs a package it will look in the <em>package.json</em> file and install any packages that it relies on. Therefore an install of, for instance, Browserify will install no less than 49 direct dependencies, which again install their dependencies. <br /><br />Unlike NuGet, the dependencies of a package will not be installed at the same directory level as their dependent. Instead they will be installed in a <em>node_modules</em> folder inside the Browserify folder. Inside each dependent module there might be other dependencies. In fact, installing Browserify will create no less than 87 (!) <em>node_modules</em> folders underneath the Browserify folder.<br /><br />Placing all dependencies inside each module is a great way to prevent versioning issues between dependencies of different modules. In .NET and NuGet this wouldn’t work since there’s no way to reference 2 different versions of the same assembly in the same project. You can however reference assemblies that are dependent on different versions of the same assembly. The conflict between them is solved by assembly binding redirects in web/app.config, but it can lead to problems if a dependent assembly has breaking changes between versions. In the JavaScript world there is no concept of binary assemblies. The modules are just one or more js-files, and as long as they are loaded within different scopes, they will not cause any conflicts.<br /><br />There’s a lot more to say about dependencies in npm, but the one thing you need to be aware of is the difference between <em>dependencies</em> and <em>devDependencies</em>. The first one is all dependencies required to run, while the latter is additional dependencies needed for development (there’s also <em>peerDependencies</em> and <em>bundledDependencies</em>, but I won’t go into that here). Typically <em>devDependencies</em> will include unit tests, test harnesses, minification, transpilers, etc. <br />
<br />
<h2>
Package.json</h2>
<br />
If you run <em>npm install</em> without any package name, npm will look for a <em>package.json</em> in the directory where you run the install command. If it finds one, npm will install all dependencies listed (including <em>devDependencies</em>). The <em>package.json</em> is similar to <em>packages.config</em> in NuGet, but I dare say that npm seems a lot more sophisticated and solid than NuGet. <br /><br />Typically you will have a <em>package.json</em> file in the root directory of your project and you will place all your dependencies there. A simple package file might look like this:<br />
<pre><code class="javascript">{
“name”: “my-app”,
“version”: “0.0.1”
“dependencies”: {
“browserify”: “11.0.x”
}
}</code></pre>
<br />
Note the ‘x’ in the Browserify version. This means that any 11.0-version will do, so when you do an <em>npm install</em> (or update) 11.0.0, 11.0.1, etc, is OK, but not 11.1.0 or 12.0.0. Npm follows semantic versioning (semver) and the numbers mean ‘<em>major.minor.patch</em>’. If you’re happy to trust that Browserify will be backward compatible across minor versions (which it should be if they follow semver), you can specify ‘11.x’ instead of ‘11.0.x’. If you just want the newest version – regardless of breaking changes (not recommended!) – then you can just put an ‘x’ instead of ‘11.0.x’.<br /><br />You could create the <em>package.json</em> file manually, but a better way is to use the <em>init</em> command:<br /><br /><em>npm init<br /></em>Note: if you create the file manually, be sure that the file is ASCII encoded. Unicode will result in a parsing error in npm, and UTF-8 will result in the file not being updated (e.g. when running with the <em>--save</em> flag below).<br /><br />You could also edit your <em>package.json</em> file manually and add all dependencies by hand, but again: it’s better to let npm handle this. You do this by appending a --<em>save</em> flag (-S for short) to the install command;<br /><br /><em>npm install browserify --save<br /></em>If the package is only meant for development purposes and not applicable for the production environment, for instance testing frameworks, you can use --<em>save-dev</em> instead;<br /><br /><em>npm install jasmine --save-dev<br /></em>When you have a lot of packages installed, there’s a good chance that some of them share some dependencies. Because of npm’s hierarchical structure, it’s possible to optimize these shared dependencies by moving them further up the tree and thereby get rid of duplicated modules. The command for that is <em>dedupe:<br /></em><em>npm dedupe</em><br /><br />If you want to remove any packages that are not in your <em>package.json</em>, you can run the <em>prune</em> command:<br /><br /><em>npm prune<br /></em>If you run the <em>prune</em> command with the --<em>production</em> flag, all <em>devDependencies</em> will be removed (nice when deploying to production).<br /><br />If you want to see all installed packages there’s an <em>ls</em> command for that:<br /><br /><em>npm ls<br /></em>Note that this will list all top level packages as well as all their dependencies. If you’re only interested in the top level packages you can add the --<em>depth 0</em> parameter to <em>ls</em>.<br /><br />If you want to search for available packages, there’s a <em>search</em> command:<br /><br /><em>npm search<br /></em>…which can also handle regular expressions. But for the most part it’s just easier to browse and search on npm’s home page.<br />
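<br />
To summarize the version ranges discussed above, here are a few illustrative <em>dependencies</em> entries (the package name is just an example):<br />
<pre><code class="javascript">"browserify": "11.0.x"   // 11.0.0, 11.0.1, ... but not 11.1.0 or 12.0.0
"browserify": "11.x"     // any 11-version, but not 12.0.0
"browserify": "x"        // anything goes - breaking changes included</code></pre>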
<br />
<h2>
npm update</h2>
<br />
Installing is just one side of the story. Once you’ve added any dependencies to your project you would like to keep them updated as well:<br /><br /><em>npm update<br /></em>If you run it without specifying a particular package to update, npm will go through the <em>package.json</em> file and see if any newer versions are available. It will of course respect the versioning you’ve applied, so it won’t upgrade to a newer major version if you have specified that only minor versions are acceptable.<br /><br />You can update a specific package by providing the name of the package. If the package is installed in the global scope you need to add the -<em>g</em> flag. <br /><br /><em>npm update</em> will also download any missing packages, but you need to add the --<em>dev</em> flag to get all <em>devDependencies</em>. One thing to be aware of is that npm will not do any recursive update of all package dependencies. It will only update the top level packages, but you can force recursion with the --<em>depth</em> flag.<br /><br />As with the <em>install</em> command you can let npm update your <em>package.json</em> file with the updated package versions;<br /><br /><em>npm update --save<br /></em>If you want to check whether any newer versions exist without updating anything, you can run the <em>outdated</em> command:<br /><br /><em>npm outdated</em><br />
<br />
<h2>
npm uninstall</h2>
<br />
Removing packages is as easy as installing. Just run the <em>uninstall</em> command with the name of the package to remove and the package is gone;<br /><br /><em>npm uninstall browserify<br /></em>As with <em>install</em> and<em> update</em> you can let npm update <em>package.json:<br /></em><em>npm uninstall browserify --save</em><br />
<br />
<h2>
Wrap up</h2>
<br />
The three major take-aways from this post should be that<br /><br />1. Npm packages come in two flavors: executables and modules. Executables are typically command line tools, while modules are libraries that you want to use in your code.<br /><br />2. Npm has two modes of operation: global and local. In general you should install executables in the global scope and modules in the local.<br /><br />3. Put a <em>package.json</em> file in the root of your project and add all of your project dependencies there.<br /><br />What I haven’t talked about is configuring npm. There’s a lot to say about this, but I’m just going to keep it short and say that npm is highly configurable, and I’ll just point you to the resources below.<br />
<br />
<h2>
Resources</h2>
<br />
<a href="http://npmjs.org/">npmjs.org</a> – The home page for npm where you can search and browse for available packages<br /><a href="http://docs.npmjs.org/">docs.npmjs.org</a> – The documentation for npm is darn good if I may say so. I really recommend taking a look at it as I promise you’ll learn a lot from it.<br />As for configuring npm, <a href="https://docs.npmjs.com/files/folders">here</a> is a starting point for you.<br />For a great explanation of the difference between the various dependencies in npm, take a look at the top-voted answer to <a href="http://stackoverflow.com/questions/18875674/whats-the-difference-between-dependencies-devdependencies-and-peerdependencies">this</a> question on StackOverflow.<br />To get some insight into the history of npm and why it is as it is, read through the answers from Isaac Schlueter (the main developer on npm) <a href="https://github.com/joyent/node/issues/5132">in this thread</a>.<br /><a href="https://nodejs.org/">nodejs.org</a> – The home page for Node.Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com0tag:blogger.com,1999:blog-3258074296776382669.post-38830890596613118792015-06-03T23:13:00.000+02:002020-06-04T08:48:00.789+02:00Logging to SQL Server with Log4NetHow do you know what’s happening on your production servers? Logging off course (if you wonder; no, ‘debug & breakpoints’ is never the correct answer. Never ever. Ever.). <br />
We have been using Log4Net as our logging tool for 3-4 years now and I just wanted to share how we are using it and how incredibly powerful good logging can be. <br />
First of all, if you are not familiar with Log4Net, it is an open source, free-for-use logging framework under the Apache Foundation umbrella. Among its strengths are that it is fairly easy to get started with, it has a low impact on application performance, and it has a lot of appenders that let you log to a lot of different destinations (console, file, database, event log, etc). <br />
At the beginning we set up logging to console (for those systems that had console output) and file, but after a while we added logging to SQL Server. It is the combination of logs stored in a SQL database and full-text indexing of these logs that really gives us eyes into what happens on our production servers. <br />
<h3>
Log to console</h3>
Logging to console is definitely the easiest way to get started with Log4Net. But writing to the console output is also the one that gives you the least payback in the form of long-term insight into your production systems. Log4Net can be configured using either xml or code, but xml is by far the most common. Typically you do the xml configuration in your app/web.config, but you can also keep the Log4Net configuration in separate xml files if you prefer. We chose the app/web.config approach, so the xml for console logging looks like this:<br />
<pre><code class="xml"><?xml version="1.0" encoding="utf-8"?>
<configuration>
<configSections>
<section
name="log4net"
type="log4net.Config.Log4NetConfigurationSectionHandler,Log4net" />
</configSections>
<log4net>
<root>
<level
value="DEBUG" />
<appender-ref
ref="ConsoleAppender" />
</root>
<appender
name="ConsoleAppender"
type="log4net.Appender.ConsoleAppender">
<layout
type="log4net.Layout.PatternLayout">
<param
name="ConversionPattern"
value="%d [%t] %-5p [%x] - %m%n" />
</layout>
<filter
type="log4net.Filter.LevelRangeFilter">
<param
name="LevelMin"
value="DEBUG" />
<param
name="LevelMax"
value="FATAL" />
</filter>
</appender>
</log4net>
</configuration></code></pre>
<br />You can do this configuration in code as well, but the great benefit of using xml for the configuration is that you can change the settings (for instance the log level threshold) without re-deploying your application. In the case of web hosts you can even change it without restarting the application. If you’ve been a good boy/girl and set up debug-level logging in your code, you can just flip an xml-switch and additional log entries will start flowing in.<br />
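<br />
For reference, this is roughly what the logging code itself looks like – a minimal sketch with an illustrative class name (remember to call <em>log4net.Config.XmlConfigurator.Configure()</em> once at startup, or use the <em>XmlConfiguratorAttribute</em>, to activate the xml config):<br />
<pre><code class="csharp">using System;
using log4net;

public class CustomerService
{
    // One logger per class is the common pattern
    private static readonly ILog Log =
        LogManager.GetLogger(typeof(CustomerService));

    public void Save(string customerName)
    {
        Log.DebugFormat("Saving customer '{0}'", customerName);
        try
        {
            // ... persist the customer ...
        }
        catch (Exception ex)
        {
            // Logs both the message and the full exception (stack trace included)
            Log.Error("Failed to save customer", ex);
            throw;
        }
    }
}</code></pre>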
<br /><h3>
Log to file </h3>
<br />If you want your logs to survive application restarts (and the console window buffer) and/or have an application that doesn’t have console output, logging to file would be the next step on logging ladder. <br />
The main thing to keep in mind when logging to file is to set limits on how large each log file can get. Log4Net has some defaults that might not suit your situation, so be sure to check out the documentation on how you can configure logging to file. For one of our systems we chose to have a 10 MB limit on each file, which you can see in this xml config: <br />
<pre><code class="xml"><log4net>
<root>
<level value="DEBUG" />
<appender-ref ref="LogFileAppender" />
</root>
<appender
name="LogFileAppender"
type="log4net.Appender.RollingFileAppender">
<param
name="File"
value="logs.txt" />
<param
name="AppendToFile"
value="true" />
<!-- Logfiles are rolled over to backup files when size limit is reached -->
<rollingStyle
value="Size" />
<!-- Maximum number of backup files that are kept before the oldest is erased -->
<maxSizeRollBackups
value="10" />
<!-- Maximum size that the output file is allowed to reach before being rolled over to backup files -->
<maximumFileSize
value="10MB" />
<!-- Indicating whether to always log to the same file -->
<staticLogFileName
value="true" />
<layout type="log4net.Layout.PatternLayout">
<param
name="ConversionPattern"
value="%-5p%d{yyyy-MM-dd hh:mm:ss} – %m%n" />
</layout>
</appender>
</log4net></code></pre>
<br />The above config specifies that a maximum of 100 MB of logs will be kept on disk (10 MB per file and max 10 files).<br />
<br /><h3>
Log to console and file </h3>
<br />There is no problem logging to both console and file simultaneously and you can even set different log levels on each appender. If you want to have different files for different log levels (e.g. ‘debug.log’, ‘info.log’, etc), you can just configure as many file appenders as you need. Here is an example of logging to both console and file at the same time: <br />
<pre><code class="xml"><log4net>
<root>
<level value="INFO" />
<appender-ref ref="LogFileAppender" />
<appender-ref ref="ConsoleAppender" />
</root>
<appender name="LogFileAppender" type="log4net.Appender.RollingFileAppender">
<filter type="log4net.Filter.LevelRangeFilter">
<param name="LevelMin" value="WARN" />
<param name="LevelMax" value="FATAL" />
</filter>
...
</appender>
<appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
...
</appender>
</log4net>
</code></pre>
<br />The default log level is set to INFO, which means that unless otherwise specified in the appenders, messages with level INFO, WARN, ERROR and FATAL will be logged. The file appender is set to only log WARN, ERROR and FATAL though.<br />
<br /><h3>
Log to SQL Server </h3>
<br />As already mentioned the logging to file and console is easy to get started with and does not take much effort to set up. Setting up logging to a database takes a bit more work, but it is far from difficult. Here is how we configured logging to a SQL database from one of our web hosts: <br />
<pre><code class="xml"><root>
<level value="DEBUG" />
<appender-ref ref="AdoNetAppender" />
</root>
<appender
name="AdoNetAppender"
type="log4net.Appender.AdoNetAppender">
<threshold>INFO</threshold>
<bufferSize
value="50" />
<connectionType
value="System.Data.SqlClient.SqlConnection, System.Data, Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
<connectionString
value="data source=SERVERNAME;initial catalog=DATABASE;integrated security=false;persist security info=True;User ID=USERNAMEN;Password=PASSWORD" />
<commandText
value="INSERT INTO Logs ([Date],[Thread],[Source],[Level],[Logger],[Message],[Exception],[HostName]) VALUES (@log_date, @thread, 'LOG SOURCE',@log_level, @logger, @message, @exception, @hostname)" />
<parameter>
<parameterName value="@log_date" />
<dbType value="DateTime" />
<layout type="log4net.Layout.RawTimeStampLayout" />
</parameter>
<parameter>
<parameterName value="@thread" />
<dbType value="String" />
<size value="255" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%thread" />
</layout>
</parameter>
<parameter>
<parameterName value="@hostname" />
<dbType value="String" />
<size value="255" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%property{log4net:HostName}" />
</layout>
</parameter>
<parameter>
<parameterName value="@log_level" />
<dbType value="String" />
<size value="50" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%level" />
</layout>
</parameter>
<parameter>
<parameterName value="@logger" />
<dbType value="String" />
<size value="255" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%logger" />
</layout>
</parameter>
<parameter>
<parameterName value="@message" />
<dbType value="String" />
<size value="-1" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%message" />
</layout>
</parameter>
<parameter>
<parameterName value="@exception" />
<dbType value="String" />
<size value="-1" />
<layout type="log4net.Layout.ExceptionLayout" />
</parameter>
</appender>
</log4net>
</code></pre>
<br />The xml config is the same whether you are configuring logging in web- or app.config (you need to insert your own values for servername, database and login). <br />
The main thing to point out here is the ‘bufferSize’ element, which tells Log4Net how many log entries to buffer up before writing them to the database. There isn’t any <i>correct number</i> here and you need to figure out what suits your environment the best. The trade-offs are performance versus reliability, since a low buffer will take more resources because of the many writes to the database table (and yes, we learned that the hard way of course). A high buffer limit will be less reliable because if your application crashes, the logs not yet written will never be written. <br />
Also: it might make sense to have different buffer limits for different environments. In the development and test/QA environments, a low limit might be preferable since the logs will be written faster to the database. And since the number of log entries will be far less than in the production system, it might be a long time to wait for the logs to be available if you run with the same limits as in production. In a production environment, instant logs are in most cases not relevant and performance is more critical. Then again, reliability is also a good thing, so you need to find a good trade-off. <br />
Another thing to notice is that we have a lot of subsystems (web hosts, windows services, message bus, cron jobs, etc) that log to the database. To know where the logs come from we set ‘LOG SOURCE’ to the name of the subsystem the config is defined in (e.g. ‘CommandsHost’ for the web host that receives commands from our application). <br />
To get the logs into a database, you will need to create a table that matches the log entry that you have defined in the appender config. Here is the t-sql to create a table that matches the above config: <br />
<pre><code class="tsql;">CREATE TABLE [dbo].[Logs](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Date] [datetime] NOT NULL,
[Thread] [varchar](255) NOT NULL,
[Level] [varchar](50) NOT NULL,
[Logger] [varchar](255) NOT NULL,
[Message] [nvarchar](max) NOT NULL,
[Exception] [nvarchar](max) NULL,
[Source] [varchar](100) NULL,
[HostName] [nvarchar](255) NULL
CONSTRAINT [PK_Log] PRIMARY KEY CLUSTERED
(
[Id] ASC
)
</code></pre>
<br /><h3>
Xml transforms </h3>
<br />Using xml transforms is an easy way to set up different settings for different environments. For web projects this is built into Visual Studio and MSBuild/MSDeploy, so the tooling support for this is pretty good. The only caveat is that the transformation is only run during deployment – not during the build. So if your switching between different build configs in Visual Studio, the web host on your dev machine will only use the web.config – not any of the web.debug.config, web.release.config, etc (unless you are actually deploying to your local IIS). <br />
If you are developing a console/WPF/WebForms application you can still take advantage of the same xml transforms as web projects, but the tooling is not built into Visual Studio or MSBuild/MSDeploy. There is however an excellent free tool (VS extension) called SlowCheetah, developed by Sayed Ibrahim Hashimi, that will do this for you. You can download it as a Visual Studio extension, and it has an extra gem that Visual Studio doesn’t have: transformation preview. <br />
<h3>
SQL Server Full-Text search </h3>
<br />The real power of database log entries comes when you pair them with full-text search. Full-text search will require quite a bit of resources in the form of hardware (disk, memory, cpu), but you don’t have to (and shouldn’t) set up the full-text indexing on your production database server. Instead you should set up log shipping in SQL Server (or some other form of pulling the logs off your production servers) and then do your full-text indexing and searching on a separate database server.<br />
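<br />
As a sketch (the index and catalog names here are just examples), setting up and querying a full-text index on the Logs table could look like this:<br />
<pre><code class="tsql">-- One-time setup: a full-text catalog and an index on the searchable columns.
-- KEY INDEX refers to the unique index on the table, here the primary key.
CREATE FULLTEXT CATALOG LogsCatalog
CREATE FULLTEXT INDEX ON [dbo].[Logs] ([Message], [Exception])
  KEY INDEX [PK_Log] ON LogsCatalog

-- Example query: find all error entries mentioning 'timeout'
SELECT [Date], [Source], [Message]
FROM [dbo].[Logs]
WHERE [Level] = 'ERROR' AND CONTAINS([Message], 'timeout')</code></pre>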
<br />Pair full-text search of logs with a message based (event driven) system, and you have an incredible insight to your production system and an invaluable, searchable history.<br />
<br /><h3>
Resources</h3>
<br />Log4Net: <a href="http://logging.apache.org/log4net/" title="http://logging.apache.org/log4net/">http://logging.apache.org/log4net/</a><br />
<br />SlowCheetah: <a href="https://visualstudiogallery.msdn.microsoft.com/69023d00-a4f9-4a34-a6cd-7e854ba318b5" title="https://visualstudiogallery.msdn.microsoft.com/69023d00-a4f9-4a34-a6cd-7e854ba318b5">https://visualstudiogallery.msdn.microsoft.com/69023d00-a4f9-4a34-a6cd-7e854ba318b5</a><br />
<h1>Getting started with Powershell Desired State Configuration (DSC)</h1>I wanted to try out DSC in PowerShell 4.0 on my Windows 8.1 Pro machine, but I got stuck on the ‘getting started’ part. I just couldn’t figure out how to generate the actual configuration files (.mof-files). <br />
I googled around quite a bit before I finally got it: the configuration file that you create is of course just like a normal script file in the sense that it doesn’t actually do anything by itself. All it does is define a function that you need to call!<br />
So in order to actually generate the .mof-files, I ‘dot-sourced’ the script into the current session and called the function from the ps1-file.<br />
Here’s an example configuration file called ‘demoConfig.ps1’:<br />
<pre><code class="powershell">configuration Demo
{
Node localhost
{
File TestFiles
{
SourcePath = "c:\temp\test.txt"
DestinationPath = "c:\temp\testdir"
Ensure = "Present"
Type = "File"
}
}
}</code></pre>
<br />
And to generate the .mof files:<br />
<pre><code class="shell">PS C:\temp> . .\demoConfig.ps1
PS C:\temp> Demo</code></pre>
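<br />
Running the <em>Demo</em> function creates a folder named after the configuration (here <em>.\Demo</em>) containing a <em>localhost.mof</em> file. To actually apply the configuration, point <em>Start-DscConfiguration</em> at that folder:<br />
<pre><code class="shell">PS C:\temp> Start-DscConfiguration -Path .\Demo -Wait -Verbose</code></pre>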
<h1>T-SQL joins</h1>What's the difference between an inner and a full join in T-SQL? Or a right versus a left join? I never have this at the top of my head when I need it, so for future reference I've assembled a little example that shows the resulting difference between them.<br />
Given the following t-sql:<br />
<pre><code class="tsql">declare @t1 table (id int)
declare @t2 table (id int)
insert into @t1 values(1),(2),(3)
insert into @t2 values(3),(4)
select 't1' as 'Table name', * from @t1
select 't2' as 'Table name', * from @t2
select 'inner join' as 'Join', t1.id as 'Left', t2.id as 'Right'
from @t1 as t1 inner join @t2 as t2 on t1.id = t2.id
select 'left join' as 'Join', t1.id as 'Left', t2.id as 'Right'
from @t1 as t1 left join @t2 as t2 on t1.id = t2.id
select 'right join' as 'Join', t1.id as 'Left', t2.id as 'Right'
from @t1 as t1 right join @t2 as t2 on t1.id = t2.id
select 'full join' as 'Join', t1.id as 'Left', t2.id as 'Right'
from @t1 as t1 full join @t2 as t2 on t1.id = t2.id
</code></pre>
<br />
<div>
This is the result from the joins: </div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgoCYce2D1APFW1LwsxXvuWDKkYhjGrsUKXiEPv1MF30Eozc7g9LeCn0P0GU5RjZt5dVirmsZhoxWAD_g7W_Awfqd3olba7XF8ozwo-K-lVsHay5qX7PIObYK6zEDMjaA_Bm8y9sAVFUcY/s1600/joins.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="617" data-original-width="208" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgoCYce2D1APFW1LwsxXvuWDKkYhjGrsUKXiEPv1MF30Eozc7g9LeCn0P0GU5RjZt5dVirmsZhoxWAD_g7W_Awfqd3olba7XF8ozwo-K-lVsHay5qX7PIObYK6zEDMjaA_Bm8y9sAVFUcY/s1600/joins.png" /></a></div>
<br />
<h1>Unit testing asynchronous operations with the Task Parallel Library (TPL)</h1>Unit testing asynchronous operations has never been easy in C#. The most common methods (or at least the methods I usually end up with) are either:<br />
<ol>
<li>Write a synchronous version of the method to test, unit test this one and then call the synchronous method from another method that runs it asynchronous in the production code. </li>
<li>Raise an event in the production code when the asynchronous operation has finished, subscribe to the event in the unit test, and use the <em>ManualResetEvent</em> to wait for the event before making any assertions.</li>
</ol>
Neither is a good solution. <br />
Writing a synchronous version and letting the production code call it is probably the easiest one, but breaks down once you need to do more than just call the synchronous method in production (e.g. orchestrating several dependent asynchronous operations, or having some logic run when the asynchronous operation(s) completes). And the worst part of it: a vital part of the production code will be untested.<br />
The <em>ManualResetEvent</em> is better, but it takes a lot more code, makes the unit tests harder to read, and you need to fire events in the prod code that possibly only unit tests are interested in. And unit tests dependent on <em>ManualResetEvent</em> tend to be fragile when run in parallel.<br />
But with the Task Parallel Library (TPL) the tables have turned; TPL makes unit testing asynchronous code a lot easier. That is, it’s easy if you know how to do it. <br />
Running some code asynchronously without any concerns for testability is pretty straightforward with TPL:<br />
<pre><code class="csharp">Task.Factory.StartNew(MyLongRunningJob);</code></pre>
And in fact, it’s not much harder to make it test-friendly. You only need a bit of insight into what’s going on in the <em>Task Factory</em>. And to have it straight from the horse’s mouth, here’s what MSDN says about it:<br />
<blockquote>
<em>Behind the scenes, tasks are queued to the ThreadPool, which has been enhanced with algorithms (like hill-climbing) that determine and adjust to the number of threads that maximizes throughput. This makes tasks relatively lightweight, and you can create many of them to enable fine-grained parallelism. To complement this, widely-known work-stealing algorithms are employed to provide load-balancing.</em></blockquote>
<br />
The <em>Task Factory</em> will use a <em>Task Scheduler </em>to queue the tasks and the default scheduler is the <em>ThreadPoolTaskScheduler</em>, which will run the tasks on available threads in the thread pool. <br />
<br />
The trick when unit testing TPL code is to not have those tasks running on threads that we have no control over, but to run them on the same thread as the unit test itself. The way we do that is to replace the default scheduler with a scheduler that runs the code synchronously. Enter the <em>CurrentThreadTaskScheduler</em>;<br />
<br />
<pre><code class="csharp">public class CurrentThreadTaskScheduler : TaskScheduler
{
protected override void QueueTask(Task task)
{
TryExecuteTask(task);
}
protected override bool TryExecuteTaskInline(
Task task,
bool taskWasPreviouslyQueued)
{
return TryExecuteTask(task);
}
protected override IEnumerable<Task> GetScheduledTasks()
{
return Enumerable.Empty<Task>();
}
public override int MaximumConcurrencyLevel { get { return 1; } }
}</code></pre>
<br />
<em>TaskScheduler</em> is an abstract class that all schedulers must inherit from, and it only contains 3 methods that need to be implemented;<br />
<ol>
<li>void QueueTask(Task) </li>
<li>bool TryExecuteTaskInline(Task, bool) </li>
<li>IEnumerable<Task> GetScheduledTasks()</li>
</ol>
In the more advanced schedulers like the <em>ThreadPoolTaskScheduler</em>, this is where the heavy-lifting of getting tasks to run on different threads in a thread-safe manner happens. But for running tasks synchronously, we really don’t need that. In fact, that’s exactly what we <em>don’t</em> need. So instead of scheduling tasks to run on different threads, the <em>QueueTask</em> and <em>TryExecuteTaskInline</em> methods will just execute them immediately on the current thread.<br />
<br />
Now it’s time to actually use it in the production code;<br />
<br />
<pre><code class="csharp">public TaskScheduler TaskScheduler
{
get
{
return _taskScheduler
?? (_taskScheduler = TaskScheduler.Default);
}
set { _taskScheduler = value; }
}
private TaskScheduler _taskScheduler;
public Task AddAsync(int augend, int addend)
{
return new TaskFactory(this.TaskScheduler)
.StartNew(() => Add(augend, addend));
}</code></pre>
<br />
To be able to inject a different <em>TaskScheduler</em> from unit tests, I’ve made the dependency settable through a public property on the class I’ll be testing. If no <em>TaskScheduler</em> has been explicitly set (which it won’t be when executed ‘in the wild’), the default <em>TaskScheduler</em> will be used.<br />
<br />
The method <em>Task AddAsync(int, int)</em> is the method we would like to unit test. As you can see it’s a highly CPU intensive computation that will add 2 numbers together. Just the kind of work you’d want to surround with all the ceremony and overhead of running asynchronously. <br />
<br />
The important part here is the instantiation of the <em>TaskFactory</em> that will take the <em>TaskScheduler</em> as a constructor parameter.<br />
<br />
With that in place we can set the <em>TaskScheduler</em> from the unit tests:<br />
<br />
<pre><code class="csharp">[Test]
public void It_should_add_numbers_async()
{
var calc = new Calculator
{
TaskScheduler = new CurrentThreadTaskScheduler()
};
calc.AddAsync(1, 1);
calc.GetLastSum().Should().Be(2);
}</code></pre>
<br />
The <em>System Under Test</em>, SUT, is the <em>Calculator</em>-class that has the <em>AddAsync</em>-method we’d like to unit test. Before calling the <em>AddAsync</em>-method we set the <em>CurrentThreadTaskScheduler</em> that the TaskFactory in the Calculator should use.<br />
<br />
Since <em>AddAsync</em> doesn’t return the result of the calculation, I’ve added a method to get the last sum. Not exactly production-polished code, but it’ll do for the purpose of this example.<br />
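<br />
For completeness, the remaining members of the <em>Calculator</em> class could look something like this (a sketch only – the full example is in the GitHub repo linked below):<br />
<pre><code class="csharp">private int _lastSum;

// The synchronous work that AddAsync wraps in a task
private void Add(int augend, int addend)
{
    _lastSum = augend + addend;
}

public int GetLastSum()
{
    return _lastSum;
}</code></pre>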
<br />
Anyway, the end result is that the test passes. And if I don’t assign the <em>CurrentThreadTaskScheduler</em> to <em>Calculator.TaskScheduler</em> – that is, it runs with the default <em>ThreadPoolTaskScheduler</em> – it will fail, because the addition will not be finished before the assertion.<br />
<br />
But don’t trust me on this. I’ve uploaded the complete (absurd) example to GitHub, so you can run the tests and see for yourself; <a href="https://github.com/bulldetektor/TplSpike" title="https://github.com/bulldetektor/TplSpike">https://github.com/bulldetektor/TplSpike</a>.<br />
<br />
<h4>
References</h4>
<br />
You can read the MSDN-article that I quoted from here; <a href="http://msdn.microsoft.com/en-us/library/dd537609.aspx" title="http://msdn.microsoft.com/en-us/library/dd537609.aspx">http://msdn.microsoft.com/en-us/library/dd537609.aspx</a><br />
<br />
I found the code for the <em>CurrentThreadTaskScheduler</em> in the TPL samples here; <a href="http://code.msdn.microsoft.com/windowsdesktop/Samples-for-Parallel-b4b76364" title="http://code.msdn.microsoft.com/windowsdesktop/Samples-for-Parallel-b4b76364">http://code.msdn.microsoft.com/windowsdesktop/Samples-for-Parallel-b4b76364</a>. The samples contain a dozen or so <em>TaskSchedulers</em>, for instance:<br />
<br />
<ul>
<li><em>QueuedTaskScheduler - </em>provides control over priorities, fairness, and the underlying threads utilized </li>
<li><em>OrderedTaskScheduler - </em>ensures only one task is executing at a time, and that tasks execute in the order that they were queued. </li>
<li><em>ReprioritizableTaskScheduler</em> - supports reprioritizing previously queued tasks </li>
<li><em>RoundRobinTaskSchedulerQueue </em>- participates in scheduling that support round-robin scheduling for fairness </li>
<li><em>IOCompletionPortTaskScheduler - </em>uses an I/O completion port for concurrency control </li>
<li><em>IOTaskScheduler</em> - targets the I/O ThreadPool </li>
<li><em>LimitedConcurrencyLevelTaskScheduler </em>- ensures a maximum concurrency level while running on top of the ThreadPool </li>
<li><em>StaTaskScheduler</em> - uses STA threads </li>
<li><em>ThreadPerTaskScheduler</em> - dedicates a thread per task </li>
<li><em>WorkStealingTaskScheduler - </em>a work-stealing scheduler, not much more to say about that</li>
</ul>
<h1>Commit-Driven Development</h1><p>Test-Driven Development, TDD, is the art of writing a test before you start implementing anything in the production code. TDD is often known as Test-First Development. </p> <p>Behavior-Driven Development, BDD, is the art of describing a behavior in an executable format before you start implementing anything in the production code. Since these behavior-descriptions are often written as scenarios within a feature, BDD is also known as Scenario-Driven Development.</p> <p>Commit-Driven Development is the art of writing a comment for your next commit before you start implementing. You could say it’s Comment-First Development.</p> <p>So why all this emphasis on “Something”-First development? Does the order of things really matter that much? As it turns out; it really does. It helps you focus.</p> <p>By writing a test first – before any of the logic is implemented – you say to yourself; this is what I’m going to do the next 5-10 minutes. I will focus solely on getting this test to pass. Until then I will not do any refactoring or touch any other parts of the code base. </p> <p>The same thing goes with a scenario; until this scenario passes, I will not focus on anything else. When the tests pass, I can rename classes/methods/variables, move methods into new/other classes, extract smaller methods, and so on. </p> <p>By writing the comment for the next commit first, you get the same benefit as test- and scenario-first: focus. When you have a clean sheet with no pending, un-committed changes and you write that comment first, you’re telling yourself (and your subconscious) that this – and preferably <i>only</i> this – is what I will work on now. </p> <p>If you get interrupted along the way and you lose track of what it was you were actually supposed to work on, you can just take a look at the commit comment and you’re right back on track. </p> <p>When I leave the office I usually try to end the day with a failing test. I write the test, see it fail, and call it a day. Next morning I can just run the test suite and I know exactly where to start coding. The context switch gets really cheap. With the pending commit comment I also get the bigger picture of what I was working on; I know when I should be done and commit my changes.</p> <h3>Added benefit</h3> <p>There’s another side to writing the commit comment first; it makes it easier to write better comments. </p> <p>When you write a commit comment after you’ve made all changes, it’s easy to fall into the I-did-this-then-that style of comment. Take a look at this fairly common change-log;</p> <ul> <li>Fixed bug #123: Error when saving customer</li> <li>Increased first name max length</li> <li>Added field on customer</li> <li>Added Cancel-button on customer list</li> </ul> <p>Implicitly these comments say ‘I fixed bug…’, ‘I increased…’, etc. Problem is; I don’t care if <i>you</i> did it. I already know that from the change-log. What I want to know is; how does the behavior of the <i>system</i> differ from prior to this commit?</p> <p>Writing the comment first – before you actually do something – makes it easier to compose comments where the focus is on the behavior of the system. 
You write the comments describing how the system should act when you do the commit later on.</p> <p>Here’s an attempt to re-write the comments above;</p> <ul> <li>When the user tries to save a customer with an invalid email address, then an error message will be displayed (bug #123)</li> <li>Max length of a customer first name increased from 64 to 256 characters</li> <li>Corporate customers can be assigned a contact person</li> <li>When loading the customer list takes too long, then clicking the Cancel-button will cancel any further loading. All customers loaded at the time of cancellation will be displayed in the list.</li> </ul> <p>I’m not saying that you couldn’t have written these commit comments even if you did it at the time of committing the changes. It’s just a lot easier to write them if you do it upfront. And it’s a lot easier to verify that you did what you set out to do, than it is to try to figure out what you’ve actually done when it’s time to commit.</p> <p>When I write my commit comments I try to think of them as release notes. Preferably I could just extract all comments from the repository log since last deploy and paste them right into the release notes for the new deploy.</p> <h4>References</h4> <p>This blog post is highly influenced by Arialdo Martini and his excellent post “<a href="http://arialdomartini.wordpress.com/2012/09/03/pre-emptive-commit-comments/">Preemptive commit comments</a>”. If you haven’t already, please go read it now.</p>
<h1>Auto-Wiring EventAggregator Subscription in Caliburn.Micro</h1> <p>Just wanted to make a quick note about how to get auto-wiring of the <em>EventAggregator</em> subscription up and running for Caliburn.Micro. What I want to accomplish is to avoid having to write this:</p> <p><a href="http://lh4.ggpht.com/-rbXDXLAneSU/Tq8TRAkomSI/AAAAAAAAAsQ/xS6w1YXjBXQ/s1600-h/CaliburnMicro_AutoEA_1%25255B4%25255D.png"><img style="background-image: none; border-bottom: 0px; border-left: 0px; padding-left: 0px; padding-right: 0px; display: inline; border-top: 0px; border-right: 0px; padding-top: 0px" title="CaliburnMicro_AutoEA_1" border="0" alt="CaliburnMicro_AutoEA_1" src="http://lh3.ggpht.com/-d1mu3jmEcDY/Tq8TRv6xWBI/AAAAAAAAAsU/cec5ovXdodE/CaliburnMicro_AutoEA_1_thumb%25255B2%25255D.png?imgmax=800" width="534" height="340"></a></p> <p>… and instead make this "just happen" when a type implements <em>IHandle</em>. And as you can see from the code above, the IoC I use here is MEF. 
</p> <p>So I haven't used MEF before, but I found <a href="http://pwlodek.blogspot.com/2010/11/introduction-to-interceptingcatalog.html" target="_blank">this post</a> ("Introduction to InterceptingCatalog – Part I") by Piotr Włodek and figured that with a little bit of tweaking this should work.</p> <p>This code relies on the MEFContrib project up on CodePlex/GitHub, so if you haven't already downloaded it you can get it from there or just NuGet it into your project;</p> <p><a href="http://lh6.ggpht.com/-aYOHuaLLaB8/Tq8TR-oPr4I/AAAAAAAAAsg/t6OvZavHG6o/s1600-h/CaliburnMicro_AutoEA_2%25255B3%25255D.png"><img style="background-image: none; border-bottom: 0px; border-left: 0px; padding-left: 0px; padding-right: 0px; display: inline; border-top: 0px; border-right: 0px; padding-top: 0px" title="CaliburnMicro_AutoEA_2" border="0" alt="CaliburnMicro_AutoEA_2" src="http://lh5.ggpht.com/-wnNKmvR209Y/Tq8TSYJEegI/AAAAAAAAAsk/IRxYAgbczKc/CaliburnMicro_AutoEA_2_thumb%25255B1%25255D.png?imgmax=800" width="431" height="73"></a></p> <p>To be able to get any class that implements <em>IHandle</em> to list itself as a subscriber in the <em>EventAggregator</em>, we need to hook into the creation pipeline in MEF. And MEF doesn't have any hooks that let us do this, but fortunately MEFContrib has one, and it's called an <em>InterceptingCatalog</em>. </p> <p>The <em>InterceptingCatalog</em> takes two arguments; a <em>ComposablePartCatalog</em> and an <em>InterceptionConfiguration</em>. It's the <em>InterceptionConfiguration</em> that lets us provide an interceptor that can do the auto-wiring for us. But first, let's create the class that will do the interception - the <em>EventSubscriptionsStrategy</em>:</p> <p><a href="http://lh5.ggpht.com/-1QfixXVoUwA/Tq8TS_HyzeI/AAAAAAAAAtQ/-3pdduWEdTc/s1600-h/CaliburnMicro_AutoEA_3%25255B12%25255D.png"><img style="background-image: none; border-bottom: 0px; border-left: 0px; padding-left: 0px; padding-right: 0px; display: inline; border-top: 0px; border-right: 0px; padding-top: 0px" title="CaliburnMicro_AutoEA_3" border="0" alt="CaliburnMicro_AutoEA_3" src="http://lh5.ggpht.com/-ezkYWxGWE_M/Tq8TTg8P_wI/AAAAAAAAAtU/TpOiqJ4U9kY/CaliburnMicro_AutoEA_3_thumb%25255B8%25255D.png?imgmax=800" width="557" height="398"></a></p> <p>This object creation strategy will be added to the MEF creation pipeline. This class will be called for every object resolved from MEF, but in this case we're only interested in those that implement the <em>IHandle</em> interface. So if the cast succeeds we know that this is a class that wants to subscribe to events. So by using the <em>Intercept</em> method from the <em>IExportedValueInterceptor</em> interface, we can tell the <em>EventAggregator</em> that this object is an event subscriber.</p>
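<p>Since the original code only survives as screenshots, here is a rough textual sketch of what such a strategy looks like (class and member names here are my assumptions, not a verbatim copy of the screenshots):</p>
<pre><code class="csharp">public class EventSubscriptionsStrategy : IExportedValueInterceptor
{
    private readonly IEventAggregator _eventAggregator;

    public EventSubscriptionsStrategy(IEventAggregator eventAggregator)
    {
        _eventAggregator = eventAggregator;
    }

    public object Intercept(object value)
    {
        // Every export resolved from MEF passes through here; only the
        // ones implementing Caliburn.Micro's IHandle marker interface
        // are subscribed to the event aggregator.
        if (value is IHandle)
        {
            _eventAggregator.Subscribe(value);
        }
        return value;
    }
}</code></pre>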
<p>The only thing missing, then, is to plug our <em>EventSubscriptionsStrategy</em> into MEF;</p> <p><a href="http://lh5.ggpht.com/--8alp3AgrKI/Tq8TUYRguKI/AAAAAAAAAtY/eqtM0JjVkkY/s1600-h/CaliburnMicro_AutoEA_4%25255B7%25255D.png"><img style="background-image: none; border-bottom: 0px; border-left: 0px; padding-left: 0px; padding-right: 0px; display: inline; border-top: 0px; border-right: 0px; padding-top: 0px" title="CaliburnMicro_AutoEA_4" border="0" alt="CaliburnMicro_AutoEA_4" src="http://lh3.ggpht.com/-lypzpHkihp8/Tq8TU9Fw4cI/AAAAAAAAAtc/N9xEmKZXJd0/CaliburnMicro_AutoEA_4_thumb%25255B5%25255D.png?imgmax=800" width="556" height="454"></a></p> <p>This is from the default <em>AppBootstrapper</em> from Caliburn.Micro, with the changes I made to get the <em>EventSubscriptionsStrategy</em> registered in MEF marked in red.</p> <h5>References</h5> <p>Caliburn.Micro: <a href="http://caliburnmicro.codeplex.com/">http://caliburnmicro.codeplex.com/</a> - This is a WPF/SL/WP7 framework along the same lines as PRISM from Microsoft Patterns & Practices. Only much smaller (in size, not necessarily feature set) and much more "opinionated". </p> <p>MEF Contrib: <a href="http://mefcontrib.codeplex.com/">http://mefcontrib.codeplex.com/</a> and <a href="https://github.com/mefcontrib">https://github.com/mefcontrib</a> - Open source extensions to the Managed Extensibility Framework (MEF). In other words; extending the extensibility with extensions…</p>
<h1>Create & Boot VHD</h1>This is just a quick list for creating and booting from a Virtual Hard Disk (VHD) in Windows 7. <br />
<br />
<h3>1a. Creating VHD from scratch</h3><br />
If you’re creating a VHD from scratch (that is, not a child-disk based on another VHD), these are the commands you need to run from the command prompt:<br />
<blockquote><span style="font-family: 'Courier New';"><b>:\> diskpart</b></span><br />
<br />
<em><span style="font-family: 'Courier New';">Microsoft DiskPart version 6.1.7601<br />
Copyright (C) 1999-2008 Microsoft Corporation.<br />
On computer: xxxxxxxx</span></em><br />
<br />
<span style="font-family: 'Courier New';"><b>DISKPART> create vdisk file=<em>[FILEPATH] </em>type=expandable maximum 50000</b></span><br />
<br />
<em><span style="font-family: 'Courier New';"> 100 percent completed</span></em><br />
<br />
<em><span style="font-family: 'Courier New';">DiskPart successfully created the virtual disk file.</span></em><br />
<br />
<span style="font-family: 'Courier New';"><b>DISKPART> exit</b></span></blockquote><br />
The ‘:\>’ represents the command prompt; typically it would be ‘c:\>’ or something similar, but the actual directory you’re in doesn’t matter for the command. <br />
<br />
Running the command ‘<em>Diskpart’</em> will start the command-line utility that we’ll use for creating the VHD. When diskpart starts, the command prompt will change to ‘DISKPART>’. Now you can type the command <em>‘create vdisk’</em> to create the actual VHD. <br />
<br />
The command has only two required arguments; ‘file’ and ‘maximum’. The ‘file’-argument specifies where to output the vhd and the name of the vhd-file. So [FILEPATH] in the example above will typically be ‘D:\VHD\MyNewVhd.vhd’. <br />
<br />
The ‘maximum’ argument specifies the size of the VHD in megabytes. You can choose between two types of disks (the ‘type’ argument); either <em>fixed</em>, meaning that the VHD file will be created with the size specified by the ‘maximum’ argument, or <em>expandable</em>, which means it will only be as big as the data on the disk requires but not larger than ‘maximum’. The default is <em>fixed</em> and if you have lots of space available it’s the recommended setting, as it’s a bit faster than <em>expandable</em> disks. But on my laptop I do not have unlimited storage, so here I’ll go with the expandable option and set the maximum file size to about 50 GB. It’s also a bit cheaper to do backups of smaller files, so expandable has its advantages there.<br />
<br />
<h3>1b. Creating a differencing VHD</h3><br />
If you already have a base-VHD that you want to use as a parent-disk, these are the commands you need to run from the command prompt:<br />
<br />
<blockquote><span style="font-family: 'Courier New';"><b>:\> diskpart</b></span><br />
<br />
<em><span style="font-family: 'Courier New';">Microsoft DiskPart version 6.1.7601<br />
Copyright (C) 1999-2008 Microsoft Corporation.<br />
On computer: xxxxxxxx</span></em><br />
<br />
<span style="font-family: 'Courier New';"><b>DISKPART> create vdisk file=<em>[FILEPATH] </em>parent=[PARENT_FILEPATH]</b></span><br />
<br />
<em><span style="font-family: 'Courier New';">100 percent completed</span></em><br />
<br />
<em><span style="font-family: 'Courier New';">DiskPart successfully created the virtual disk file.</span></em><br />
<br />
<span style="font-family: 'Courier New';"><b>DISKPART> exit</b></span></blockquote><br />
As with the 1a) option above, the [FILEPATH] specifies the path and name of the disk to create. The ‘parent’ argument should be pretty obvious and [PARENT_FILEPATH] is the fully qualified name of the existing VHD you want to use as base-image. <br />
<br />
Note that you cannot specify ‘maximum’ or ‘type’ as arguments for a differencing disk because the size of the child disk is set from the parent. It’s also possible to merge a diff-disk with its parent at a later point, to create a new VHD; see the example below.<br />
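For reference, such a merge might look something like this in diskpart (a hypothetical file name; the child disk must be detached and not in use when you merge it):<br />
<pre><code class="bash">DISKPART> select vdisk file="d:\vhd\child.vhd"
DISKPART> merge vdisk depth=1
</code>
</pre>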
<br />
Remember to make your parent-disk read-only before you use it as a base for differencing disks. Failing to do so and then accidentally starting up or booting directly into your parent VHD will render your differencing disks useless (and yes; I learned that the hard way).<br />
<br />
Another gotcha to be aware of; if you want to boot into the new differencing disk, it needs to be located on the same disk volume as the parent disk. They can be in different folders, but they cannot reside on another volume – not even if the volumes are on the same physical disk.<br />
<br />
<h3>2. Boot it up!</h3><br />
When the VHD is ready to use, issue these commands to add it to your boot menu;<br />
<blockquote><span style="font-family: 'Courier New';"><b>:\>bcdedit /copy {current} /d “My virtual boot entry”</b></span><br />
<br />
<em><span style="font-family: 'Courier New';">The entry was successfully copied to {df399c86-f723-11df-85b6-8d1c50594e14}.</span></em><br />
<br />
<span style="font-family: 'Courier New';"><b>:\>bcdedit /set {df399c86-f723-11df-85b6-8d1c50594e14} device vhd=[locate]\Path\To\Disk.vhd</b></span><br />
<br />
<em><span style="font-family: 'Courier New';">The operation completed successfully.</span></em><br />
<br />
<span style="font-family: 'Courier New';"><b>:\>bcdedit /set {#<em>GUID#</em>} osdevice vhd=[locate]\Path\To\Disk.vhd</b></span><br />
<br />
<em><span style="font-family: 'Courier New';">The operation completed successfully.</span></em><br />
<br />
<span style="font-family: 'Courier New';"><b>:\>bcdedit /set {#<em>GUID#</em>} detecthal on</b></span><br />
<br />
<em><span style="font-family: 'Courier New';">The operation completed successfully.</span></em></blockquote><br />
If you run the ‘bcdedit’ command without any arguments, you should see your new boot entry at the bottom. It should look something like this;<br />
<br />
<em><span style="font-family: 'Courier New';">Windows Boot Loader<br />
-------------------<br />
identifier {df399c86-f723-11df-85b6-8d1c50594e14}<br />
device vhd=[locate]\Path\To\Disk.vhd<br />
path \Windows\system32\winload.exe<br />
description My virtual boot entry<br />
locale en-US<br />
inherit {bootloadersettings}<br />
recoverysequence {df399c79-f723-11df-85b6-8d1c50594e14}<br />
recoveryenabled Yes<br />
osdevice vhd=[locate]\Path\To\Disk.vhd<br />
systemroot \Windows<br />
resumeobject {df399c77-f723-11df-85b6-8d1c50594e14}<br />
nx OptIn<br />
detecthal Yes</span></em><br />
<br />
The first command, ‘bcdedit /copy’, will copy the default boot entry and create a new one with the name specified by the /d argument. Then you’ll use the /set command to modify the new entry. You’ll need to copy the id that is displayed after the initial copy command, and input this as the first argument to the /set command.<br />
<br />
There are 3 things that need to be set; the <em>device</em>, the <em>osdevice</em> and the <em>detecthal</em>. The first two are similar and take the path to the VHD you want to boot from as input. Note the ‘[locate]’ syntax in the path; this will tell the boot manager to figure out which drive the VHD is located on. So instead of ‘vhd=d:\path\to\disk.vhd’, you need to enter ‘vhd=[locate]\path\to\disk.vhd’.<br />
<br />
A little tip if you have spaces in the path or filename for the VHD; surround the path with apostrophes (‘), as in the example below. <br />
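For example, with a hypothetical path (and reusing the id from the output above):<br />
<pre><code class="bash">bcdedit /set {df399c86-f723-11df-85b6-8d1c50594e14} device vhd='[locate]\My VHDs\Dev Disk.vhd'
</code>
</pre>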
<br />
The last /set command instructs the kernel to detect certain hardware information (HAL = Hardware Abstraction Layer), which is needed on some x86-based systems.<br />
And that’s it; Ready to boot!<br />
<br />
<h4>Resources</h4><br />
In some of my earlier posts I wrote a bit about how to <a href="http://www.kjetilk.com/2011/08/compact-virtual-space.html">compact VHDs</a>, how to <a href="http://www.kjetilk.com/2011/05/installing-windows-is-boring.html">automate Windows install on VHDs</a> and some <a href="http://www.kjetilk.com/2010/08/waking-up-to-new-virtual-reality_31.html">pros and cons with virtual machines</a>. <br />
<br />
Also, David Longnecker has some good tips in his blog post <a href="http://tiredblogger.wordpress.com/2009/08/06/tips-for-bootingusing-vhds-in-windows-7/">Tips for Booting/Using VHDs in Windows 7</a>.Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com0tag:blogger.com,1999:blog-3258074296776382669.post-69870925200938944182011-08-04T23:17:00.000+02:002011-08-04T23:17:58.594+02:00Compact virtual spaceKeeping a virtual machine tidy will make it run faster and the VHD file itself smaller. Here’s a little list of things you can do to tidy it up a bit.<br />
<br />
<h3>The usual suspects</h3><br />
The first things to do are what you’d do on any machine – virtual or not – to keep things as optimal as possible; the usual maintenance jobs include deleting temporary files, emptying the recycle bin, cleaning up the registry, etc. I’ve found Glary Utilities (links at the end of this blog post) to be a good tool for this. Just install the free version and run the ‘Scan for issues’ under ‘1-click maintenance’. <br />
<br />
If this is a fresh VHD which hasn’t been used much, you probably won’t gain much from the usual housekeeping, but defragmenting and checking the disk (chkdsk) and the Windows system files (sfc /scannow) seldom hurts; see the examples below. <br />
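For example, from an elevated command prompt inside the virtual machine (the drive letter is just an example, and chkdsk on the system drive will typically schedule the check for the next reboot):<br />
<pre><code class="bash">chkdsk c: /f
sfc /scannow
defrag c:
</code>
</pre>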
<br />
<h3>Shrink wrap</h3><br />
To shrink the size of the VHD file you can compact the disk, but before you do that you need to pre-compact it. Compacting the disk is an operation you’ll do on the VHD-file, while pre-compacting is something you’ll do on the running virtual machine to make the compacting operation as effective as possible.<br />
<br />
To pre-compact the VHD you need to run the precompact.exe which you can find on an ISO in the Virtual PC install directory (on my box the precompact.iso is in the folder c:\program files (x86)\windows virtual pc\integration components, but the exact location will vary depending on your OS and Virtual PC version).<br />
<br />
precompact.exe must be run inside the virtual machine. You can either boot into it or fire it up in Virtual PC. Once you’re running the virtual machine you need to open the command line and navigate to the folder where precompact.exe resides. Make sure you’re running with administrator privileges and run the following command:<br />
<br />
<blockquote><span style="font-family: 'Courier New';"><b>:\> precompact -Silent -SetDisks:C</b></span></blockquote><br />
<br />
The last parameter specifies which disks to pre-compact and if not specified it will pre-compact all disks. In my case that wouldn’t be very wise, since I also have access to the host partition from my virtual machines. <br />
<br />
When pre-compacting is done (can take quite some time depending on the size of your disk(s)), shut down the virtual machine. <br />
<br />
Now it’s time for the actual compact process, so from your host OS fire up the command line and start the diskpart utility;<br />
<br />
<blockquote><span style="font-family: 'Courier New';"><b>:\> diskpart</b></span></blockquote><br />
<br />
Once diskpart is running, enter the following commands (assuming the VHD file is named ‘parent.vhd’ and is in a folder called ‘vhd’ on partition ‘d’);<br />
<br />
<blockquote><span style="font-family: 'Courier New';"><b>DISKPART> select vdisk file="d:\vhd\parent.vhd"<br />
DISKPART> attach vdisk readonly<br />
DISKPART> compact vdisk<br />
DISKPART> detach vdisk<br />
DISKPART> exit</b></span></blockquote><br />
<br />
And that’s about it. You should now have a small and fast virtual machine (everything’s relative, right?)<br />
<br />
<h4>Resources</h4><br />
<a href="http://www.glaryutilities.com/">Glary Utilities</a> – Optimization software for Windows. Comes in two flavors; a free version for the basic optimization that’ll be good enough for most users and a paid version with more advanced tools.Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com5tag:blogger.com,1999:blog-3258074296776382669.post-9382281084994841582011-05-26T22:41:00.000+02:002011-05-26T22:41:11.670+02:00Installing Windows is boringSo why not automate it?<br />
<a href="http://lh3.ggpht.com/-nMbZDHuevxw/Td65SlfCD5I/AAAAAAAAAoU/J5sd8u79u5k/s1600-h/automatic-for-the-people-by-rem%25255B5%25255D.jpg"><img align="right" alt="automatic-for-the-people-by-rem" border="0" height="240" src="http://lh5.ggpht.com/-0WUFnXRNzak/Td65TDJmiEI/AAAAAAAAAoY/pBUi9bHFNkg/automatic-for-the-people-by-rem_thumb%25255B3%25255D.jpg?imgmax=800" style="background-image: none; border: 0px currentColor; display: inline; float: right; margin: 0px 0px 0px 10px; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="automatic-for-the-people-by-rem" width="240" /></a><br />
I’m a big fan of virtual machines and especially booting from VHD natively in Windows 7 (as you might have guessed from my <a href="http://www.kjetilk.com/2010/08/waking-up-to-new-virtual-reality_31.html">previous post</a>).<br />
<br />
On those virtual machines I run Windows 7 for the most part. Creating a new VHD is pretty easy, but installing the OS and everything you need is a tedious task.<br />
<br />
One way to solve this is to create a parent VHD with Windows 7 and then create child VHDs based on it. This is what I do for the most part, but sometimes I just need a fresh install of Windows 7.<br />
<br />
I could of course do this by creating a blank VHD, booting into it and installing Windows 7. But where’s the fun in that? Everything boring must be automated, so here’s the way to make your life just a <em>little</em> less boring. <br />
<ol><li>Grab your copy of Windows 7 and extract/copy the content of the install disk into a local directory (if it’s an ISO file, use a tool like <a href="http://www.7-zip.org/">7-zip</a>). For exemplification I’ll use ‘d:\installs\win7’ as the directory containing the extracted Win7. </li>
<li>Download the ISO for the <a href="http://www.microsoft.com/downloads/details.aspx?familyid=696DD665-9F76-4177-A811-39C26D3B3B34&displaylang=en">Windows Automated Installation Kit (AIK) for Windows 7</a>. As the name implies, this one is needed for automating the installation of Win7. </li>
<li>Extract the AIK files from the downloaded ISO file (or burn it to disk) and install it. </li>
<li>Download a neat little tool called <a href="http://code.msdn.microsoft.com/wim2vhd">WIM2VHD</a> (which stands for Windows Image to Virtual Hard Disk), which is the tool that will actually do the automation for us. </li>
</ol>The WIM2VHD download is a <em>Windows Script File</em> (.wsf), which you will run after you’ve found out the correct SKU in your <em>Windows image</em> (WIM) file on the install media. A Windows 7 installation disk may contain one or more versions (or <em>stock-keeping units</em> a.k.a. SKUs), e.g. Home Premium, Professional, and Ultimate. To be able to install from the installation source extracted to ‘d:\installs\win7’, the automation tool needs to know which version you intend to install. And for that you need a tool called ImageX, which came with the AIK you just installed. <br />
<ol><li>Go to the ‘tools’ folder in the AIK install folder (defaults to ‘c:\program files\windows aik\tools’) </li>
<li>Find the appropriate version of ImageX (if you’re running on a 32-bit OS that would be in the ‘x86’-folder; for 64-bit it’s the ‘amd64’-folder, while ‘ia64’ is only for Itanium systems) and copy it to the same folder as the WIM2VHD script. </li>
<li>Run ImageX to find out which SKU to install (see the example after this list) </li>
<li>Then run the WIM2VHD script with the path to the WIM and the desired SKU as params; </li>
</ol><em><strong>cscript wim2vhd.wsf</strong> <strong>/wim</strong>:d:\installs\win7\sources\install.wim <strong>/sku</strong>:ultimate</em><br />
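As for step 3 – finding the SKU with ImageX – it might look something like this (using the example path from step 1; the /info switch lists the images inside a .wim file):<br />
<pre><code class="bash">imagex /info d:\installs\win7\sources\install.wim
</code>
</pre>
<br />
The output is a chunk of XML describing each image in the file; look for the image names (something like ‘Windows 7 ULTIMATE’) and use that edition as the /sku parameter.<br />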
<br />
The WIM2VHD script will then start making you a brand new VHD in the same folder you’re running the script from (alternatively you can add a /vhd param to the command above to specify a path for the output vhd). <br />
<br />
Some minutes later you’ll have an (almost) pre-installed VHD with Windows 7.Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com0tag:blogger.com,1999:blog-3258074296776382669.post-63730846616426121252010-08-31T08:14:00.001+02:002011-06-15T13:12:51.418+02:00Waking up to a New Virtual Reality<div><p>My first experience with a virtualized developer environment was back in 2007. I was taking over a project that had been in prod for a year or so. It wasn’t a very large project and I was just in to fix some bugs and add some features. </p> <p>But anyone who has been added to a project ‘just’ to add some value knows it can be a tedious affair just to get the development environment up and running. This was a project based on EpiServer (a SharePoint’ish Content Management System) running on ASP.NET 2.0 on the frontend and Sql Server 2005 in the backend. </p> <p><a href="http://lh3.ggpht.com/_hlZzgPJTEUM/THyd0m0YGXI/AAAAAAAAAng/KBnaUTPdC08/s1600-h/NewVR%5B3%5D.png"><img style="background-image: none; border-bottom: 0px; border-left: 0px; margin: 0px 0px 10px 10px; padding-left: 0px; padding-right: 0px; display: inline; border-top: 0px; border-right: 0px; padding-top: 0px" title="New VR" border="0" alt="New VR" align="right" src="http://lh3.ggpht.com/_hlZzgPJTEUM/THyd1GIOAlI/AAAAAAAAAnk/6R91SB19kq8/NewVR_thumb%5B2%5D.png?imgmax=800" width="246" height="140" /></a>I would probably have had to spend at least 2 or 3 days just to get my dev machine set up for this project. That would be; <em>if</em> I could get my dev environment set up. </p> <p>I had never worked on EpiServer before, so just setting this up would have been a long and winding road. The version we were running was a couple of versions old and the web contained little to no information about it. Even getting the correct installer would probably take a couple of days…</p> <p>Lucky for me this project had been using VMware to virtualize both the developer, test and staging environments, and so getting my first dev build up was merely a matter of installing the VMware client, copying over the virtual image and starting it up. Almost too easy!</p> <p>The dev machine in this case was a Windows XP image with Visual Studio 2008 and Sql Server 2005. An environment quite fit for virtualization. My next project was onsite at a customer with an already set up machine, and so I didn’t have the chance to virtualize anything there. But starting on a new project again in late 2008, I decided to give VMware a try again. </p> <p>This time it was on a Vista box (with BitLocker) and a dev environment that also required Vista. Needless to say; Vista turned out, to my big disappointment, to be useless both to virtualize and to be the virtualization host. At least for a developer environment. </p> <p>Then Windows 7 came around and I saw some blog posts on how you could boot directly off a Virtual Hard Disk (VHD). Meaning you would only suffer about 3-5% performance loss due to virtualization. The only hardware that is actually virtual is the hard drive. Everything else – CPU, memory, graphics card, network, USB – is non-virtualized. You’re running directly off the hardware.</p> <p>I didn’t take the time to test out VHD boot as long as I had my dev environment already set up and everything was running fine. But about a month ago I got a brand new Lenovo W510, and so I finally got the ‘excuse’ I needed to give virtualization a new chance. 
<a href="http://lh6.ggpht.com/_hlZzgPJTEUM/THyd1qwbduI/AAAAAAAAAno/rDUhIrQjWOU/s1600-h/Gold.jpg"><img style="background-image: none; border-bottom: 0px; border-left: 0px; margin: 7px 0px 7px 10px; padding-left: 0px; padding-right: 0px; display: inline; border-top: 0px; border-right: 0px; padding-top: 0px" title="Gold" border="0" alt="Gold" align="right" src="http://lh6.ggpht.com/_hlZzgPJTEUM/THyd2GDZJdI/AAAAAAAAAns/K1H5aIb18jg/Gold_thumb.jpg?imgmax=800" width="246" height="175" /></a></p> <p>And I can tell you; so far it’s been <strong>pure gold!</strong></p> <p>From what I’ve experienced so far, here are the pros and cons of VHD native boot on Windows 7:</p> <h3>Pros</h3> <p>- Easy to set up new dev environment for testing out new tools, framework, languages or what-have-you. You just need to keep a copy of your ‘base images’ so that you can start fresh from there.</p> <p>- Easy to get new members of a team up and running. A little disclaimer here as I haven’t actually tried this, but it <em>should</em> only be a matter of running <em>sysprep</em> with the ‘generalize’ option on the virtual machine. </p> <p>- Backup is just a file copy operation</p> <p>- Getting up and running on a new physical machine is just a matter of installing Windows 7, edit the boot manager (<em>bcdedit</em>) and copy the VHD-file over to your new machine</p> <p>- “Avoid” BitLocker; Now this might seem like a strange thing to do, but as a consultant I have two sets of security manuals to confirm to. One for my employer and one for the customer that hires me. Now the security regulation is seldom at level between these, and so I always have to be set up to meet whoever has the highest security bar. Most often that would be my employer. </p> <p>And as every dev knows; the more layers of security you add to your machine, the longer does your compile take. For some reason I was not equipped with a lot of patience at birth – and I haven’t gotten any since – and so sluggish machines does not suit me well. BitLocker, enterprise anti-virus clients, and other well-intended enterprise security apps, can really suck the life out of any machine. If all you need to do your job is Outlook, Word, Excel and a browser, that would be fine. When you need to compile a 65-project-large solution 400 times a day, it isn’t. And so if I’m working for a client who doesn’t require disk encryption and sluggish anti-virus software, then I’m perfectly fine with that. </p> <h3>Cons</h3> <p><a href="http://lh6.ggpht.com/_hlZzgPJTEUM/THyd23Lmv3I/AAAAAAAAAnw/CnnaLbNY1IU/s1600-h/HanselmanButt2.png"><img style="margin: 0px 0px 2px 7px; display: inline" class="wlDisabledImage" title="Scott Hanselman" alt="Source of performance numbers" align="right" src="http://lh6.ggpht.com/_hlZzgPJTEUM/THyd3eQo5YI/AAAAAAAAAn0/fOEy0Jcciqw/HanselmanButt_thumb2.png?imgmax=800" width="135" height="234" /></a>- A performance hit; I’ve seen 3-5%, but those numbers apparently came out of Scott Hanselman’s butt (<a href="http://www.hanselman.com/blog/LessVirtualMoreMachineWindows7AndTheMagicOfBootToVHD.aspx">his words</a>, not mine) so take that for what it’s worth (I could make a pun and say that ain’t worth sh**, but I’ll refrain myself from adding that kind of toilet humor to this blog). What I can say, though, is that I can’t tell the difference between running on my virtual and my ‘real’ Windows 7 installation. In a blind-test I don’t think I’d be able to tell them apart. 
</p> <p>- Hibernation is not supported on the virtualized machine</p> <p>- Calculation of the <em>Windows Experience Index</em> (WEI) is not supported</p> <p>For me the pros outweigh the cons. The loss in performance is leveled out by not running BitLocker (which also gives you a 3-5% perf hit). Hibernation is nice, but I can live without it, and I still have WEI on the host.</p> <p> </p> <p>My next blog posts will cover how I created my VHDs and got my multi-boot set up.</p> </div>Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com0tag:blogger.com,1999:blog-3258074296776382669.post-6132566651416054052010-02-14T14:05:00.001+01:002010-02-14T14:08:10.754+01:00Option Explicit On – Commands in CQRS<p>The idea behind commands in a CQRS architecture is that they should be very explicit and very specific about their intention. You would try to shy away from generic CRUD operations and rather try to capture the essence of <em>what the user is trying to accomplish</em>. Meaning; instead of a ‘one form to save all data’ you would rather let the user explicitly tell what (s)he wants to achieve.</p> <p>Ok, example. Say you have an application for car dealers. In here you have the possibility to set the price of the car you want to sell. Now, you can either put this as a ‘price field’ among a bunch of other car related data like registration number, brand, model, horsepower, etc, and save it along with the rest of the data. <em>Or</em> you can make sure that the changing of a car’s price is an operation of its own. </p> <p>In CQRS you would typically go for the second option. You would make sure that the user’s intent is expressed very explicitly by giving this operation its own command. That is; instead of putting it inside some big <em>SaveCar</em> method, you make it a method of its own. Something like <em>ChangeCarPrice,</em> or even <em>LowerCarPrice</em> and <em>RaiseCarPrice</em>. </p> <p>Wouldn’t that be an awful lot of commands, you say? Will you be making a command for every change of value in the application? Hell, no. That <em>would </em>be a lot of commands. And that’s why we don’t do that. We’re making a specific command for the ‘car price’ value because the change of this very value is something that has a specific meaning in this domain. <img style="margin: 10px 0px 10px 10px; display: inline" align="right" src="http://z.about.com/d/classicfilm/1/0/G/C/-/-/command_decision.jpg" width="374" height="530" /></p> <p>Lowering the price of a car is probably an action you need to take because nobody is willing to pay the price you set earlier on. And it can be one of several marketing actions you can take in order to make the car more saleable. Adding more equipment or freshening up the sales description can be other ‘marketing actions’. </p> <p>Tracking these specific actions can be very valuable for the car dealer, because having a car taking up space in your warehouse for a longer period of time is bad for profit. Having cars in stock for a minimum amount of time is good for profit. And so tracking which marketing actions are most effective over time can be very lucrative for a car dealer (or any kind of dealer I guess).</p> <h4>Behave!</h4> <p>Another way of looking at commands is that they should capture the behavior expressed in your domain model. Patterns like <em>Table Module </em>are arguably more focused on data than behavior, which makes them very well suited for systems where complexity is not that high. 
And conversely, for complex domains, the <em>Domain Model</em> is more focused on behavior and less data centric.</p> <p>I would argue that your average Customer Relationship Management system (CRM) or Content Management System (CMS) are examples of systems where data is more important, or rather more valuable, than the behavior of the system. As with all things in life there are exceptions, but from my own experience the typical CRM and CMS system would make a good fit for a <em>Table Module</em> or <em>Record Set </em>pattern.</p> <p>Systems built using data centric models are far easier to build and maintain. That is of course until you start having too much logic – too much behavior – sprinkled around the code. In that case you’re probably better off using something like the <em>Domain Model</em> pattern.</p> <p>So let’s focus on the Domain Model again, because in a CQRS architecture there will typically be a Domain Model that contains the essential business logic. The core of the business so to speak. </p> <p>In a sufficiently complex system there will be a lot of behavior and complex rules attached to those behaviors. Let’s take for instance the aforementioned <em>ChangeCarPrice. </em>Larger car dealers can have hundreds of cars for sale, and all cars will have a designated ‘responsible salesman’. Each salesman can have several cars which they are responsible for, and they will probably have some kind of bonus arrangement tied to how many cars they sell. </p> <p><img style="margin: 10px 0px 10px 10px; display: inline" align="right" src="http://cache.finn.no/mmo/6/209/346/36_1251535183.jpg" width="240" height="181" />Imagine a scenario where a potential car buyer walks into the shop. Let’s call our potential customer ‘Johnny’.  Johnny has some preferences as to what car he wants, but for the most part he’s pretty open to which exact car he’ll end up buying. He’s looking for a 4x4 station wagon, preferably black or dark gray, with a diesel engine and leather seats. Johnny’s got about $50.000 to spend on the car - which by the way is a mid-priced car here in Norway. (Yes, I know. It’s an expensive country and everything costs more than it should and blah, blah, blah. It’s a whole other story.)</p> <p>The salesman of this story, let’s call him Bob, doesn’t have any cars that fit Johnny’s preferences. At least none that appeal enough to make him leave his $50.000 in the shop. Johnny did however spot a BMW at $55.000 that he really liked, but the extra $5.000 is more than Johnny can afford at the moment. And Bob is not willing to let the BMW go for as little as $50.000, so no business is done. </p> <p>4 weeks go by and the BMW is still in the shop, but now it’s starting to be costly to have it just standing there, and so the price is lowered to $50.000. Wouldn’t it be nice if Bob’s software were smart enough to notify Johnny about this event? </p> <p>Yes, it would, but building a system that can handle these kinds of events is actually very tricky. Having an explicit command that triggers when the car price changes makes it a whole lot easier to add a business rule like ‘notify customer if price drops to or below $50.000’, because you know exactly where to put that behavior.</p> <p>If you have a system where business logic has been randomly added from the UI all the way down to the database, this will be a much tougher job to get done. 
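To make this a bit more concrete, here’s a minimal sketch of what such an explicit command and its handler could look like (all type names here are made up for the example and not tied to any particular framework):</p>
<pre><code class="csharp">using System;

// The command captures nothing but the user's intent:
// not 'save this car', but 'lower the price of this car'.
public class LowerCarPrice
{
    public Guid CarId { get; set; }
    public decimal NewPrice { get; set; }
}

public class LowerCarPriceHandler
{
    private readonly ICarRepository cars;
    private readonly INotificationService notifications;

    public LowerCarPriceHandler(ICarRepository cars, INotificationService notifications)
    {
        this.cars = cars;
        this.notifications = notifications;
    }

    public void Handle(LowerCarPrice command)
    {
        var car = cars.Get(command.CarId);
        car.LowerPriceTo(command.NewPrice);

        // Because the price change is an explicit operation, a rule like
        // 'notify customer if price drops to or below $50.000' has one
        // obvious place to live:
        foreach (var customer in notifications.CustomersWaitingFor(car, command.NewPrice))
            notifications.NotifyPriceDrop(customer, car);
    }
}

// Just enough supporting abstractions to make the sketch hang together.
public class Car
{
    public Guid Id { get; set; }
    public decimal Price { get; private set; }
    public void LowerPriceTo(decimal newPrice) { Price = newPrice; }
}

public class Customer { public string Email { get; set; } }

public interface ICarRepository { Car Get(Guid id); }

public interface INotificationService
{
    Customer[] CustomersWaitingFor(Car car, decimal price);
    void NotifyPriceDrop(Customer customer, Car car);
}
</code>
</pre>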
<h4>So what about the CRUD?</h4> <p>I believe you can still have your ‘store these 30 fields to the database’-operations in a domain driven CQRS architecture. You can have your <em>SaveData</em> command. But commands like that, CRUD commands, where you don’t care about anything but persisting the data, will not trigger any behavior in your domain model. They will just persist data into your relational database, file system, blob storage, or whatever medium holds your data.</p> <p>Then when new requirements arrive and you need to attach behavior to some of the data in that <em>SaveData</em> command, you just extract those properties out into their own command and make that new behavior explicit. </p> <p>Maybe even all the way from the UI down to the domain model. That way you will capture the user’s intent and you will have the means to encapsulate that precious domain knowledge inside your model.</p> <h5><strong>Further Reading</strong></h5> <p>For more background and resources on CQRS you can take a look at my previous post <a href="http://www.kjetilk.com/2010/02/growth-is-optional-choose-wisely.html">“Growth is optional. Choose wisely.”</a>.</p> <p>I mentioned the <em>Domain Model, Table Module</em> and <em>Record Set </em>patterns and there’s no better way to learn about these – and other patterns – than to read Martin Fowler’s excellent <a href="http://martinfowler.com/books.html#eaa">“Patterns of Enterprise Application Architecture”</a>. A short description of the patterns can be found in the <em>P of EAA Catalog’s </em><a href="http://martinfowler.com/eaaCatalog/domainModel.html">"Domain Model"</a><em>,</em> <a href="http://martinfowler.com/eaaCatalog/tableModule.html">"Table Module"</a> and <a href="http://martinfowler.com/eaaCatalog/recordSet.html">"Record Set"</a>.</p> <p>I also touched on <em>Domain Driven Design</em> a bit, and again; no better source than the source itself. If you haven’t already – go read Eric Evans’ <a href="http://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215">“Domain Driven Design – Tackling Complexity in the Heart of Software”</a>. Just do it. And you can come back and thank me for the tip afterwards :)</p> Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com0tag:blogger.com,1999:blog-3258074296776382669.post-891994790220409692010-02-11T23:15:00.001+01:002010-02-11T23:15:47.192+01:00Growth is optional. Choose Wisely.<p>Command Query Responsibility Segregation, or CQRS for short, is an architectural pattern based on the idea of Command Query Separation, CQS. It’s a pattern currently advocated by people like Udi Dahan, Greg Young, Mark Nijhof and Pål Fossmo (see below for links and resources). <a href="http://lh4.ggpht.com/_hlZzgPJTEUM/S3SBj92E8OI/AAAAAAAAAmM/8nFeJMDIjhA/s1600-h/sw_fake_ballot_sa03045%5B7%5D.jpg"><img style="border-right-width: 0px; margin: 10px 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="sw_fake_ballot_sa03045" border="0" alt="sw_fake_ballot_sa03045" align="right" src="http://lh3.ggpht.com/_hlZzgPJTEUM/S3SBkerZnII/AAAAAAAAAmQ/55DrCSMZlfs/sw_fake_ballot_sa03045_thumb%5B9%5D.jpg?imgmax=800" width="420" height="287" /></a></p> <p>The background for CQRS is a mathematical theorem called the <em>CAP Theorem</em> put forward by Eric Brewer. 
It states that;</p> <p><em>“You can have <strong>at most two</strong> of these properties for any shared-data system:</em><em> Consistency, Availability, and tolerance of network Partitions.”</em></p> <p>You can only get two out of three, which basically means that you have to choose between scalability and continuously consistent data. CQRS is an architectural approach that lets you scale out and deliver high availability, but is a bit more relaxed on the consistency. Meaning that Consistency has to step aside for Availability and Scalability. </p> <p>Wouldn’t inconsistent data be a bad thing and something we would really strive to avoid? Yes, it would – if data were to be permanently inconsistent. But as long as the data <em>eventually</em> becomes consistent, this is no longer such a bad thing. </p> <p>After all; how long are data in a multiuser application 100% consistent anyway? Think about it; as soon as the data has left the database – or whatever storage you might have – and is heading up to the user’s screen, someone else could have updated or even deleted the records. The data can be inconsistent even before they hit the screen!</p> <p>Making a clear separation of commands (writes) from queries (reads) in an application gives you the ability to better scale out the parts that turn out to be bottlenecks. In most applications there are far more reads than writes, and so scaling out the read part will for most scenarios give a performance boost. </p> <p>Now, calling it ‘eventual consistency’ might sound like it will take ‘forever’ before data is consistent, but just as you can scale the command and query parts of the system, you can also scale out the transport mechanism between them. </p> <p>The transport is typically some kind of queue, for instance MSMQ-based, and so the time before data is consistent is tied to the speed of the transport. Throw in some more power on the queuing machinery, and you get more up-to-date data.</p> <h5>Further reading</h5> <p>Udi Dahan’s <a href="http://www.udidahan.com/2009/12/09/clarified-cqrs/">“Clarified CQRS”</a> is a good and thorough intro to CQRS. </p> <p>More introductory material on CQRS and how it relates to DDD by Pål Fossmo here; <a href="http://blog.fossmo.net/post/Command-and-Query-Responsibility-Segregation-(CQRS).aspx">Command and Query Responsibility Segregation (CQRS)</a>.</p> <p>Greg Young gives some clarifications on CQS vs CQRS in <a href="http://codebetter.com/blogs/gregyoung/archive/2009/08/13/command-query-separation.aspx">"Command Query Separation?"</a>.</p> <p>For some more practical samples check out Mark Nijhof’s blog post <a href="http://blog.fohjin.com/blog/2009/11/12/CQRS_a_la_Greg_Young">"CQRS à la Greg Young"</a>, where he introduces his demo app on CQRS and Event Sourcing. </p> <p>Jonathan Oliver has a run-through of CQRS vs Active Record vs Traditional Domain Model in <a href="http://jonathan-oliver.blogspot.com/2009/10/dddd-why-i-love-cqrs.html">"DDDD: Why I Love CQRS"</a></p> <p>If you’re in the mood for some more background material on <em>Brewer’s CAP Theorem</em>, Julian Browne has an excellent article called <a href="http://www.julianbrowne.com/article/viewer/brewers-cap-theorem">"Brewer's CAP Theorem – The cool aid Amazon and Ebay have been drinking"</a>.</p> <p>And just when you’re all pumped up and high on CQRS; read <a href="http://blog.robustsoftware.co.uk/2009/12/cqrs-crack-for-architecture-addicts.html">“CQRS: Crack for architecture addicts”</a> by Gary Shutler. It might get you down on the ground again. 
I might not agree with him, but he makes some valid points.</p> <p>And of course, for all DDD-related topics; the <a href="http://tech.groups.yahoo.com/group/domaindrivendesign/">Yahoo Group for Domain Driven Design</a>. Lots of good discussion there – including CQRS.</p> Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com2tag:blogger.com,1999:blog-3258074296776382669.post-36843070033881119342009-12-29T11:40:00.001+01:002009-12-29T22:24:29.240+01:00So you think you can Host?<a title="Photo by sylvar (Flickr Creative Commons)" href="http://www.flickr.com/photos/sylvar/31436964/"><img style="border-right-width: 0px; margin: 10px 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="Flickr CC by 2.0" border="0" alt="Flickr CC by 2.0" align="right" src="http://lh3.ggpht.com/_hlZzgPJTEUM/SzncrZTMnxI/AAAAAAAAAmA/zoNWlVuMcuU/image%5B7%5D.png?imgmax=800" width="234" height="307" /></a> <p>Throughout my career I’ve often come across small-sized dev-shops that believe…</p> <p>a) That being their own Application Service Provider (ASP) and hosting their product on their own servers is cheaper, easier and safer than letting a third-party handle it</p> <p>b) That any dev with a bit of interest in servers and hardware is capable of filling the roles of a full-fledged developer <em>and</em> an IT Pro</p> <p>Throughout my career I’ve never seen this work out particularly well. </p> <p>The reason why it never works is seldom the lack of talent for the poor dev who ‘stood closest to the server when the last dev-slash-it-guy left’ (freely quoted from <a href="http://twitter.com/richcampbell">Richard Campbell</a> of <a href="http://www.dotnetrocks.com/">DotNetRocks</a>). It’s just that being an IT Pro is just as much of a full-time job as being a professional programmer.</p> <p>In this crazy world of new technologies, languages, frameworks, tools and methodologies that pop up every five minutes, there’s just NO WAY a poor soul can handle two full-time jobs like that and still be GOOD AT BOTH. Some things just have to suffer.</p> <p>Being a developer by heart – and by job description – it’s pretty obvious which one of those jobs will suffer. The problem is that you can probably live with this situation for a while before it really hits you. But be sure; it <em>will </em>hit you.</p> <p>You can have 99,5% uptime for 3 years in a row. But when that server goes up in flames and the backup system won’t restore your last 3 months worth of data, you’ve ruined your uptime numbers for the next 3 decades.</p> <p>Being an IT Pro means being pro-active. It’s a constant fight to stay ahead of any troubles. And to be prepared and have fail-over in place when trouble hits you. </p> <p>Being a dev-slash-it-guy means you’ll have neither the time nor the devotion to be pro-active. Instead you’re being post-active; you’re putting out small fires every now and then, but you’re seldom doing much to prevent them from catching on.</p> <p>If you’re a startup company with most customers on beta-programs and not many paying customers yet, that might be ok. But someday you’ll hopefully find yourself with a nice list of paying customers that depend on that nice little piece of software that you <strike>hacked together</strike> wrote. </p> <p>They might not expect your software to be flawless (even though they probably should), but they expect it to be there when they need it. 
They start demanding uptime guarantees and Service Level Agreements, SLAs (or at least they <em>should</em> demand guarantees and SLAs). And you’d better take steps to make sure that you can provide the expected level of professionalism when it comes to hosting your own services. </p> <p>Do you think you can deliver that with an (at most) half-time IT Pro? My best guess is ‘probably not’. <a href="http://www.everystockphoto.com/photo.php?imageId=3774884"><img style="border-right-width: 0px; margin: 10px auto; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="http://lh3.ggpht.com/_hlZzgPJTEUM/SzncsfeY8jI/AAAAAAAAAmE/4zSqesgJmgc/image%5B12%5D.png?imgmax=800" width="446" height="299" /></a></p> <p>From my experience in the field, here are some questions you should start asking yourself if you find yourself at this stage;</p> <p>(Now, here comes a full disclosure up front; I’m definitely no IT Pro myself – and I have no intention whatsoever of becoming one. This list might therefore not be 100%-water-and-bulletproof, but if you find some misjudgments or something you’d like to add to the list, please feel free to correct me or give suggestions in the comments below)</p> <ul> <li>How many ports are open and how many services are running and available from the outside on your public server(s)? (The server(s) that host your software, that is). Do you for instance allow remote desktop connections to your public server(s) to be able to troubleshoot it? </li> <li>What happens if someone from the outside takes control over your public server? Do they get access to your local network and domain as well? </li> <li>How many servers are actually accessible from the outside? </li> <li>Do you have a working Virtual Private Network, VPN, that anyone in your business can use? And if so; Is it secure enough? </li> <li>How many times in the last 6 months have you verified that you can actually restore all the data from your backup device? And how sure are you that you’re actually backing up everything you need? Or put it this way; if your office burns down today, will you have all the necessary data available to do business-as-usual tomorrow? </li> <li>How often do you scan your network for suspicious activities? Are you sure you’re alone on your network? </li> <li>Do you have a wireless network available in your office? If so; what minimum level of security does it demand? Do you have just a pre-shared key which then gives you full access to the domain, or do you have something that is actually secure enough to prevent teenage hackers from accessing your file servers? </li> </ul> <p>I’m not saying that every 3-5 man shop must hire a full-time IT Pro to handle this. This is of course a question of cost. But just like you’re probably out-sourcing accounting to some professional book-keeper, you should also out-source other areas that are just as critical for your business. </p> <p><a href="http://www.everystockphoto.com/photo.php?imageId=1679"><img style="border-right-width: 0px; margin: 10px 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" align="right" src="http://lh3.ggpht.com/_hlZzgPJTEUM/SznctDg9ZrI/AAAAAAAAAmI/bHbN_1yuhjw/image%5B16%5D.png?imgmax=800" width="397" height="299" /></a>If you’re a small- or medium-sized dev-shop, hosting is in my experience always handled better by professional ASPs. 
And the same goes for securing and managing your IT infrastructure.</p> <p>Don’t get blinded by your luck so far; sooner or later your luck <em>will</em> run out. Then it will no longer be cheaper, easier or safer to handle hosting and infrastructure yourself – and there’s nothing you can do about it.</p> Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com0tag:blogger.com,1999:blog-3258074296776382669.post-64778089886792846072009-12-03T23:25:00.001+01:002009-12-03T23:34:07.112+01:00My favorite Windows 7 Hotkeys<p>I’ve been using Win7 since beta 2 and I like it a lot. It’s in my opinion by far the best Windows operating system – a lot better than both Vista and XP. Not that it’s such an enormous change from Vista. It’s just the sum of all the small things. Like the perceived overall UI performance. The improved task bar. The speed of the start menu. The search on the start menu (I just can’t remember the last time I actually needed to click my way through the start menu to start a program. I just type in whatever app or feature I need to start or open, and 99 times out of 100 it comes up among the top 3-4 items). And not least; the hotkeys:</p> <ul> <li><strong>Win + Arrow</strong>: Docks the active window to the left or right, or minimizes (down arrow) or maximizes (up arrow) it. Or if the window is maximized and you hit <em>Win + Down Arrow</em>, the window will be restored. If you hit the same <em>Win + Down Arrow</em> again it will be minimized. </li> <li><strong>Win + Shift + Left/Right Arrow</strong>: Moves a window to the monitor on the left or right keeping the same position as it had on the monitor you moved it from. </li> <li><strong>Win + Home</strong>: Hides all open windows except the active (you can do the same by grabbing hold of the window title bar with the left mouse button (just as if you’d want to move the window around) and ‘shaking’ the window you want to leave open) </li> <li><strong>Win + T</strong>: Puts focus on the taskbar so that you can use the arrow keys to move between the programs on the task bar and then <em>Enter </em>to activate them, <em>Shift+Enter</em> to open a new instance, or the ‘right menu button’ to bring up the menu for each program. </li> <li><strong>Win + Number</strong>: Opens the program at the given index on your taskbar. For instance if you have pinned Outlook to the first position on the task bar, <em>Win + 1</em> will start Outlook (or maximize it and bring it to the front if it’s minimized or behind some other windows). </li> <li><strong>Win + P</strong>: Opens up the “Connect to a projector/external display” dialog with the options to show your desktop on the computer only, duplicate it, extend it or show it on the projector (external display) only. <img style="border-bottom: 0px; border-left: 0px; margin: 10px auto; display: block; float: none; border-top: 0px; border-right: 0px" title="The 'Connect to a projector' dialog" border="0" alt="The 'Connect to a projector' dialog" src="http://lh4.ggpht.com/_hlZzgPJTEUM/Sxg6uTz66PI/AAAAAAAAAlk/mfzBfe1rGqc/image%5B13%5D.png?imgmax=800" width="597" height="130" /></li> <li><strong>Win + X</strong>: Opens up the “Windows Mobility Center” where you can adjust display brightness, volume, battery mode, wireless connectivity, external display, sync connected devices, and presentation settings. 
<a href="http://lh5.ggpht.com/_hlZzgPJTEUM/Sxg6uznN4zI/AAAAAAAAAlo/XOU0NGujP9s/s1600-h/image%5B14%5D.png"><img style="border-bottom: 0px; border-left: 0px; margin: 10px auto; display: block; float: none; border-top: 0px; border-right: 0px" title="Windows Mobility Center" border="0" alt="Windows Mobility Center" src="http://lh6.ggpht.com/_hlZzgPJTEUM/Sxg6veLoDGI/AAAAAAAAAlw/f29C7R0yUv0/image_thumb%5B6%5D.png?imgmax=800" width="595" height="299" /></a></li> <li><strong>Win + Space</strong>: Peek at the desktop – that is; make all windows transparent so that you can see the desktop. </li> <li><strong>Win + E</strong>: Opens the Windows Explorer </li> <li><strong>Ctrl + Shift + Esc</strong>: Opens the Task Manager </li> <li><strong>Win + R</strong>: Opens the <em>Run</em> dialog </li> <li><strong>Ctrl + Shift (</strong>when launching apps): Runs the program as administrator. For instance; open the <em>Run</em> dialog and type ‘cmd.exe’ (or just ‘cmd’). If you just hit <em>Enter</em> you will open the command prompt under the privileges of the user you’re currently logged in as. But if you instead hit <em>Shift+Enter</em>, you will open the command prompt with Administrator privileges. </li> </ul> <p>Some of these are not really new to Win7, but I just threw them in there anyway because they’re just so incredibly useful :)</p> <p>These hotkeys (or shortcut keys) just makes my working day a little better and a little bit more productive. Mix these in with some of the new features of the Windows Explorer and it can just make your day a bit brighter;</p> <ul> <li><strong>Shift + Right-click</strong> on a folder gives you some extra options compared to just right-click. For instance the “Open command window here” and “Open in new process”. </li> <li><strong>Ctrl + Shift + N</strong>: Creates a new folder </li> <li><strong>Right-click + Drag </strong>folder to the Windows Explorer icon on the taskbar pins the folder, so that you can right-click on the Windows Explorer icon on the taskbar and get direct access to the folder from the jump list. </li> </ul> Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com0tag:blogger.com,1999:blog-3258074296776382669.post-51837288280529870242009-11-17T13:05:00.001+01:002009-12-03T11:56:37.433+01:00Configure WCF to run on Windows 7<p>When you’ve installed Windows 7 and installed all the appropriate IIS features, WCF will still not be available on your box by default. I’ve had this little note to myself laying around somewhere on the file system, but I just keep forgetting where it is every time I need it. So I’ll put it up here, just to make the search a bit easier :)</p> <p>Open up the command prompt in <strong>Administrator mode</strong>, and run the following command;</p> <p>c:\…><strong>"%windir%\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\ServiceModelReg.exe" -r</strong></p> <p>This will map the s<em>vc</em> file type to <em>aspnet_isapi.dll</em>  and will make IIS recognize WCF services and startup the ServiceHost for you. In other words; the <em>svc</em> MIME type will be registered with IIS. The parameters on the end is;</p> <p><strong>-r: </strong><em>Re-registers this version of WCF and updates scriptmaps at the IIS metabase root and for all scriptmaps under the root. 
Existing scriptmaps are upgraded to this version regardless of the original versions.</em></p> <p>(copied from the official docs on the “<a href="http://msdn.microsoft.com/en-us/library/ms732012.aspx">ServiceModel Registration Tool</a>”)</p> <p> </p> <p>If you’ll be running integration tests against your services and the tests will do WCF self-hosting (instead of IIS), you also need to authorize the URLs that your self-hosted service will be using;</p> <p>c:\…><strong>netsh http add urlacl url=http://+:[port]/ user="[windows user name]"</strong></p> <p>As for the ServiceModelReg command, this one will also need administrator privileges on your command prompt. Replace the <em>[windows user name]</em> with the account you’ll be running the tests under. Usually this will be the account you’re logged in with, e.g. “domain\user”. The [port] parameter will be the port number you’ve configured on your WCF endpoint (typically 8000 for testing).</p> <p> </p> <p>And just to be sure; restart IIS after you’ve run these commands; </p> <p>c:\…><strong>iisreset</strong></p> Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com1tag:blogger.com,1999:blog-3258074296776382669.post-37102924114766158212009-07-10T21:40:00.001+02:002009-07-10T21:40:32.671+02:00The Great Git In The Sky<p>Every developer should know that having a good versioning system for your source files is crucial. Having the possibility to go back in time and see what your class or module or project looked like is indispensable. And if you’re more than one developer on a project, having a common place – a repository – to store all files is even more indispensable. </p> <p>Throughout the years I’ve tried a couple of different source control systems. Being a .Net developer on the Microsoft platform, I’ve tried both Visual SourceSafe (VSS) and Team Foundation Server (TFS) and I’ve also used the open source alternative SubVersion (SVN). Lately there’s a new source control system that has drawn my attention, namely Git.</p> <p><a href="http://git-scm.com/">Git</a> is a fairly new source control system and was originally developed by Linus Torvalds to be used for developing the Linux kernel. The first version of Git came in 2005, but it wasn’t available on the Windows platform until late 2007 through the open source project <a href="http://code.google.com/p/msysgit/">msysgit</a> (unless you were running the unix emulator <a href="http://www.cygwin.com/">cygwin</a>, that is).</p> <p>Git has a bit different take on how you manage source control than the previous tools I’ve used. TFS, VSS and SVN let you set up a centralized repository where you store all your files, keep track of version history, do branching, etc. But Git is a bit different in the sense that the repository is now located on your machine, and when several people are working on the same project, all repositories are essentially synchronized across all development machines – a so-called <em>distributed source control system</em>. Which means that you have access to the full history of your source files locally. You can also have a remote Git-repository which local repositories can push and pull changes to and from, but each local repository still has the full version history.</p> <h4>Creating a Git repository using msysgit</h4> <p>For each project you want to put under source control, you just add a Git repository to the root folder of your project. Say you have some code lying in ‘C:\Code\Work\MyProject’. 
If you want to place this under Git’s source control and you’ve installed <a href="http://code.google.com/p/msysgit/">msysgit</a> with the integration to Windows Explorer, then you just right-click on the folder and choose ‘Git GUI Here’ (or ‘Git Bash Here’ if you’d rather use the command prompt). You have to choose ‘Create New Repository’ and then put in the directory ‘C:\Code\Work\MyProject’ in the textbox that follows (a little glitch in the UX design there; it would have been friendlier if it actually had remembered where I opened the GUI from and then put in that directory by default). </p> <p>In the list of ‘Unstaged Changes’ in the upper right corner you can now choose the files you want to include in the repository. Select the appropriate files and then press Ctrl+T (or <em>Commit </em>> <em>Stage to commit</em>). Then you can commit these into the repository with Ctrl+Return (<em>Commit > Commit</em>).</p> <p>Alternatively you can do the same from the command line, and in fact there are a couple of things you should do on the command line before you start committing to the repository. First of all you should enter your name and email, as this will be used on all commits;</p> <p><a href="http://lh4.ggpht.com/_hlZzgPJTEUM/SleZE9W9ezI/AAAAAAAAAjc/6t82rG_nLw8/s1600-h/image16.png"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="Configure name and email" border="0" alt="Configure name and email" src="http://lh4.ggpht.com/_hlZzgPJTEUM/SleZFZQbjQI/AAAAAAAAAjg/BHBLQKkFdoE/image_thumb10.png?imgmax=800" width="451" height="102" /></a> </p> <p>The ‘--global’ parameter hints that this is a configuration setting that will be effective across all Git repositories on this machine. You could also have done this through the <em>Git GUI</em> (Edit > Options…), but the next thing I’ll set up I couldn’t find a way to do in the current version (v 0.12.0.23); adding ignore patterns. In a typical .NET project you wouldn’t want to add for instance the <em>bin </em>and <em>obj </em>folders to the repository, and the way to ignore these files is to add a ‘.gitignore’ file to the root folder of your project. You can try to do this in Windows Explorer, but I think you’ll quickly find that this is actually not possible. But through the bash it’s a walk in the park.</p> <p>To make my point I’ve created a console app called <em>GitExample</em> and I’ve opened the bash in the root directory. I can now issue a <em>git init</em> command that will initialize a Git repository here, and by calling <em>git status</em> I can list all files and folders that are currently not under source control:</p> <p><a href="http://lh6.ggpht.com/_hlZzgPJTEUM/SleZGHCGttI/AAAAAAAAAjk/HJWGw-EsDUA/s1600-h/image15.png"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="Initialize git repository" border="0" alt="Initialize git repository" src="http://lh4.ggpht.com/_hlZzgPJTEUM/SleZGl32oeI/AAAAAAAAAjo/pzP36JFpkOc/image_thumb9.png?imgmax=800" width="583" height="284" /></a> </p> <p>And as we can see there are a lot of things here that I really don’t want to add to the repository. 
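In case the screenshots don’t load, the commands so far look roughly like this in the bash (the name, email and path are just examples):</p>
<pre><code class="bash">git config --global user.name "Your Name"
git config --global user.email "you@example.com"
cd /c/Code/Work/GitExample
git init
git status
</code>
</pre>
<p>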
Let’s be ignorant;</p> <p><a href="http://lh5.ggpht.com/_hlZzgPJTEUM/SleZGwCu2pI/AAAAAAAAAjs/FsInjeFZxwY/s1600-h/image43.png"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="Create ignore file" border="0" alt="Create ignore file" src="http://lh3.ggpht.com/_hlZzgPJTEUM/SleZHTrSdII/AAAAAAAAAjw/w_FraDQ5QTo/image_thumb25.png?imgmax=800" width="153" height="21" /></a> </p> <p>The <em>touch </em>command will create the file and you can now edit it in your favorite text editor. The following list shows my ignorance;</p> <p><a href="http://lh3.ggpht.com/_hlZzgPJTEUM/SleZHlBVXyI/AAAAAAAAAj0/Zey_tVRPZL8/s1600-h/image35.png"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="Ignore pattern" border="0" alt="Ignore pattern" src="http://lh6.ggpht.com/_hlZzgPJTEUM/SleZIOI21MI/AAAAAAAAAj4/RiP7goHUxOM/image_thumb21.png?imgmax=800" width="121" height="115" /></a> </p> <p>With this in place we can now run the <em>status</em> command and we’ll see that there’s a lot less to care about for our Git commit process;</p> <p><a href="http://lh5.ggpht.com/_hlZzgPJTEUM/SleZIYdM-iI/AAAAAAAAAj8/wZZrN8prSv8/s1600-h/image40.png"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="List of files without the ignored ones" border="0" alt="List of files without the ignored ones" src="http://lh5.ggpht.com/_hlZzgPJTEUM/SleZIwxif-I/AAAAAAAAAkA/cbV9OwZOHAc/image_thumb24.png?imgmax=800" width="563" height="186" /></a> </p> <p>Now it’s time to add the files to the repository, which you of course can do either through the GUI or the command line, but since we’re already in Unix mode let’s do it the hard core way. </p> <p><a href="http://lh3.ggpht.com/_hlZzgPJTEUM/SleZJcv2lwI/AAAAAAAAAkE/FyL1D6jtxqg/s1600-h/image47.png"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="Add to staging area and commit to repository" border="0" alt="Add to staging area and commit to repository" src="http://lh6.ggpht.com/_hlZzgPJTEUM/SleZJxevIzI/AAAAAAAAAkI/mItEGNKyktU/image_thumb27.png?imgmax=800" width="551" height="441" /></a> </p> <p>The procedure around committing files to the repository is 2-phased. We do an “<em>add .”</em> to add all untracked files in our working directory into the staging area of the repository. The staging area is a set of files that are ready to be committed, and you can do a lot of <em>add</em> (and <em>remove</em>) before you finally decide to commit all changes into the actual repository. And to commit the files you call the <em>commit</em> command with a “-m” parameter to add a comment to the files you’re committing. And as you can see from the screenshot above, after we’ve moved the files from the working directory to the staging area and then from the staging area to the repository, our working directory is now clean.</p> <h4>Mesh It Up!</h4> <p>I mentioned a bit earlier that if you want to share the repository with someone you can set up a remote repository. A popular repository host is <a href="http://github.com">GitHub</a> which lets you store up to 300mb free of charge (unlimited storage for public repositories). 
<h4>Mesh It Up!</h4> <p>I mentioned a bit earlier that if you want to share the repository with someone you can set up a remote repository. A popular repository host is <a href="http://github.com">GitHub</a> which lets you store up to 300 MB free of charge (unlimited storage for public repositories). You can then push and pull changes to and from this location (take a look at <a href="http://www.lostechies.com/blogs/jason_meridth/archive/2009/06/04/git-for-windows-developers-git-series-part-2.aspx">this excellent blog post by Jason Meridth</a> to see how you can do this). </p> <p>Another alternative is to use your <a href="http://www.mesh.com">Live Mesh</a> account as the remote repository. Pål Fossmo wrote a great blog post on how you can <a href="http://blog.fossmo.net/post/Ten-steps-on-how-to-store-Git-repositories-in-Live-Mesh!.aspx">set up Git together with Mesh</a> which shows you how to configure your mesh folders. To initialize the Mesh repository you can use the <i>clone</i> command as shown below:</p> <p><a href="http://lh5.ggpht.com/_hlZzgPJTEUM/SleZKbooGsI/AAAAAAAAAkM/CYLR9x-x1us/s1600-h/image52.png"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="http://lh6.ggpht.com/_hlZzgPJTEUM/SleZK9An4FI/AAAAAAAAAkQ/sVPRyEV98BU/image_thumb30.png?imgmax=800" width="534" height="52" /></a></p> <p>The <i>clone</i> command does what you think it does; it makes a copy of your repository, and the <i>--bare</i> parameter tells Git to strip down the repository to only what is necessary for the change tracking. That means no working copy of the source files – only the objects, diffs, etc., that the Git database needs. You can then <i>push</i> and <i>pull</i> between local repositories and the Mesh repository, which then will be synced with the cloud.</p> <p><a href="http://lh3.ggpht.com/_hlZzgPJTEUM/SleZLUH-0MI/AAAAAAAAAkU/L9SiyfCJi-I/s1600-h/image3.png"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="http://lh4.ggpht.com/_hlZzgPJTEUM/SleZLwDl9dI/AAAAAAAAAkY/3vYh_a7411Q/image_thumb1.png?imgmax=800" width="549" height="127" /></a></p>
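<p>In command form, that workflow amounts to something like this (a sketch; the Mesh folder path is just an example of where your synced folder might live):</p>
<pre><code class="bash"># Make a bare copy of the repository inside the Mesh-synced folder
git clone --bare /c/Code/GitExample /c/Mesh/GitExample.git

# Register the Mesh copy as a remote of the working repository
cd /c/Code/GitExample
git remote add mesh /c/Mesh/GitExample.git

# Push and pull between the local and the Mesh repository
git push mesh master
git pull mesh master
</code>
</pre>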
<h4>Why use Git?</h4> <p>“Git – the fast version control system”. I guess that the slogan will make a solid statement by itself, and if you’ve worked with source control systems before you’ll definitely appreciate the speed of Git. Visual SourceSafe is notoriously slow on just about every operation you perform (especially over http(s)). SubVersion is pretty fast on check-ins, but not that fast on check-outs. And TFS is pretty fast overall and also gives you the possibility to set up local proxies if you have distributed teams. But for a more quantified view on Git’s performance you can check out Scott Chacon, who has <a href="http://whygitisbetterthanx.com/#git-is-fast">compared the speed of Git to Mercurial and Bazaar</a>.</p> <p>TFS might compete to a certain extent on speed, but when it comes to the install footprint, and not least the effort it takes to actually install TFS 2008, Git will outperform TFS any day of the week. That said; TFS is a lot more than just a version control system. But if you plan on using TFS solely for the purpose of tracking your precious source files, my advice is pretty clear; Don’t! It’s not worth it – neither in time nor money. </p> <p>Compared to SubVersion it strikes me that the merging capabilities of Git are a bit better. Git tracks the content of files – not the files themselves – and so merging operations seem more likely to be correct in Git. In my opinion the merge operations in SVN are probably one of its weakest points; doing large merge operations in SVN is just painful and you just <i>know </i>you’re about to get burned. TFS on the other hand seems a bit better at merging than SVN, but then again; time and money…</p> <p>I guess it’s time for a little disclaimer here; I haven’t really used Git much yet, so I haven’t done any large merge operations, and I might be wrong here. But from what I’ve read, and from how Git is built as a distributed source control system, I have a strong feeling that merging is really one of Git’s sweet spots. </p> <p>Anyway, if you have any other opinions on the subject – or on anything else in this post – please feel free to speak your mind in the comments below :) </p> <h4>Resources</h4> <p><a href="http://www.kernel.org/pub/software/scm/git/docs/git.html">“Git Manual Page”</a> is the official documentation on Git and it’s actually quite good. Lots of good examples, and it’s pretty well written. RTFM, right?</p> <p><a href="http://www.kernel.org/pub/software/scm/git/docs/everyday.html">“Everyday GIT With 20 Commands Or So”</a> from the official tutorial will give you a head start on the most used commands.</p> <p><a href="http://gitready.com/">“Git Ready”</a> has put some of the commands into 3 categories; beginner, intermediate, and advanced.</p> <p><a href="http://www.lostechies.com/blogs/jason_meridth/archive/2009/06/01/git-for-windows-developers-git-series-part-1.aspx">“Git For Windows Developers”</a> – the title says it all I guess. </p> <p><a href="http://git.or.cz/course/svn.html">“Git – SVN Crash Course”</a> will give you a head start on using Git if you’re already familiar with SubVersion.</p> <p><a href="http://whygitisbetterthanx.com/">“Why Git is better than X”</a> has done some (slightly biased?) comparisons against other source control systems.</p> <p><a href="http://code.google.com/p/msysgit/">msysgit</a> is the tool to download and install if you need Git to run on a Windows box.</p> <p><a href="http://code.google.com/p/tortoisegit/">TortoiseGit</a> is another client for Git repositories. If you’re familiar with <a href="http://tortoisesvn.net/">TortoiseSVN</a> for SubVersion, the learning curve will be close to zero.</p> <p><a href="http://github.com/plans">GitHub</a> lets you store up to 300 MB in private repositories (unlimited storage for public repositories).</p> Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com1tag:blogger.com,1999:blog-3258074296776382669.post-44009248599885505532009-06-23T08:56:00.001+02:002009-07-02T12:41:10.501+02:00NDC 2009 Highlights<p>The <a href="http://www.ndc2009.no/">Norwegian Developers Conference 2009</a> took place in Oslo last week and I was lucky enough to be one of the around 1000 attendees. That’s about half the crowd the organizer was hoping for, but I guess we’ll have to blame the ongoing financial turbulence for that. It was definitely not due to the speaker list, because that was downright impressive. And the pricing seemed very reasonable too. Or maybe calling it the <em>Norwegian</em> Developers Conference scared away any foreigners?
I don’t know, but those who weren’t there really missed out on a great event.<a href="http://www.flickr.com/photos/grothaug/3639721848/in/set-72157619854994646/"><img style="border-right-width: 0px; margin: 10px auto 0px; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="Photo by Rune Grothaug" border="0" alt="Photo by Rune Grothaug" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8PQl2kTobtFbJeX1pN20LswXtzC4wMwYIwAIlM1wrj3UqEWOQcbHgh_uy788heFbBq5gOu5DRne5mtXTMh6ptKo8J0MnqrtbXZee6qkUuZBi66YuRK_DITxowG8BV38O6v5cUbhHSL_A/?imgmax=800" width="466" height="319" /></a></p> <p>I’ll try to summarize some of my thoughts and impressions from this 3-day conference in this post, so let’s start with the most important part; the sessions. Most of the sessions were taped and will be available online in the (hopefully) near future. I had already studied the agenda in detail before I went, but as always when attending conferences like this, <a href="http://www.kjetilk.com/2009/05/my-ndc-2009-agenda.html">the plan</a> was due to change. But I’ll try to list some of my favorite sessions from the conference and I really recommend taking a look at these when they come online. So here’s my top 5 in descending order;</p> <h5>1. Michael Feathers: “Working Effectively with Legacy Code: Taming the Wild Code Base” </h5> <p>I’ve watched a couple of talks by Feathers up on <a href="http://www.infoq.com/presentations/error-prevention-ethics">InfoQ</a> and he’s a really skilled speaker as well as writer. I haven’t had time to read his book “Working Effectively with Legacy Code” yet, but it’s definitely one I will pick up soon. The talk was great and he had lots of good tips if you’re faced with a codebase that is not built to be testable.</p> <h5>2. Kevlin Henney: “The Uncertainty Principle”</h5> <p>On day 3 of NDC my original plan was to attend Scott Bellware’s whole-day workshop on testing, but I was too late for the registration so the workshop filled up before I got to sign up. Instead I spent the whole day with Kevlin, which really was a great alternative. I was lucky enough to get to hear him do a talk here in Trondheim about a month prior to the conference, so I knew that this was going to be good. Kevlin has done some great work on design patterns and his talks are both informative and entertaining. I really recommend all of his talks, but if I were to pick one favorite I’d go for “The uncertainty principle”. </p> <h5>3. Glenn Block: “Building Maintainable Enterprise Applications with Silverlight and WPF”</h5> <p>I’m a big fan of <a href="http://compositewpf.codeplex.com">PRISM</a> and we’re using it on our current project. The talk was mainly about PRISM, but he also had some great tips on how to ease some of the pain in regard to databinding the ViewModel to the View. Now, don’t get me wrong here; I love databinding in WPF, but there are some pain points regarding refactoring when it comes to the string-based databinding against properties in the ViewModel. Glenn showed off some interesting tools that he’s working on to make this easier, and they will be up on CodePlex before long (I hope!). The essence of the tool was that if you name your controls in the View the same as the corresponding properties in the ViewModel, it can perform an auto-mapping between the View and the ViewModel. Anyway; it was a great talk and I got some valuable tips to take with me.
Unfortunately this was one of the non-taped sessions, so it will not be available online as far as I know.</p> <h5>4. Udi Dahan: “Designing High Performance, Persistent Domain Models”</h5> <p>Design patterns are in many ways lessons learned over the mere 50 years of software development. PRISM is, among other things, a set of design patterns to apply if you’re building composite applications, and it is focused mainly on the presentation layer. Domain-driven design, on the other hand, is a set of design patterns that focus on the core of the business; the domain model. Udi gave an excellent talk on the performance perspective of DDD.</p> <h5>5. Peter Provost: “Code First Analysis and Design with Visual Studio Team System 2010 Architecture Edition”</h5> <p>It’s just amazing to see what the architect edition of VS10 contains and I really look forward to some of the features that Peter showed off here. He’s a joy to listen to and this guy just reeks of knowledge. If you’d like a quick tour of the VS10 architect edition and to see how you can read code in a new dimension, I highly recommend this session.</p> <h5>Runners Up </h5> <p><a title="Left to right; Phil Haack, Scott Hanselman, Richard Campbell, and Carl Franklin" href="http://www.flickr.com/photos/grothaug/3639842316/in/set-72157619854994646/"><img style="border-right-width: 0px; margin: 0px 10px 10px 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="Photo by Rune Grothaug" border="0" alt="Photo by Rune Grothaug" align="left" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOgjNtAZoydfYMMCtXYf_t78l_9hxKV44k_1I8UHlcVGv-mlPeY9hbEvEn-e8HLHiz20Pd4gXcLJbI0HgfnXAtWsUq4sIH-abWzHXE3O11GmYJFiOrYe580ChFUQrnB0YYr2JLhr3Bl7Q/?imgmax=800" width="369" height="259" /></a> Other memorable sessions to watch will be the <em>.NET Rocks!</em> episode recorded live [<strong>Update:</strong> Download the <a href="http://www.dotnetrocks.com/default.aspx?showNum=458">podcast here</a>]. As always it was hosted by the <a href="http://www.laurel-and-hardy.com/">Hardy & Hardy</a> of the .NET community, Carl Franklin and Richard Campbell, and this time they had invited the HaaHa Brothers (Scott Hanselman and Phil Haack) to do a show. And what a show! Porn, beer and Bing – I say no more…</p> <p>And the HaaHa Brothers show was also a blast. Haack showed some nice tricks to hack Hanselman’s “secure” bank application and the two of them just put together a great show. Put Hanselman on stage and you’re guaranteed a good time!</p> <p>If you’re into DDD you’ll also find Jimmy Nilsson’s sessions quite interesting. Among other things he showed how one could use the upcoming Entity Framework 4 as the O/R-mapper in a DDD scenario. The way he turned user stories into BDD-ish unit tests was also quite interesting and definitely something I will try out myself.
Great gig! [<strong>UPDATE:</strong> Some guys from TypeMock recorded the jam session and they’ve <a href="http://blog.typemock.com/2009/07/videos-from-typemocks-unit-testing-open.html">published some clips here</a>]</p> <p>After a couple of beers we headed towards the city and some place to eat. Scott Hanselman’s got this ‘thing’ where he just has to dig up an Ethiopian restaurant in every city he visits. And so we joined Hanselman, Phil Haack, and some other guys for an exotic dinner at <em>Mama Africa</em>. Scott and Phil are just incredibly nice guys and it was a memorable dinner – both the food and the company :)</p> <p>The Big Party started with a decent dinner on Thursday evening. I mean; you really don’t expect much when you sit down with a cardboard plate filled with some sort of <a href="http://www.flickr.com/photos/grothaug/sets/72157619854994646/"><img style="border-right-width: 0px; margin: 10px 0px 10px 10px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="Photo by Rune Grothaug" border="0" alt="Photo by Rune Grothaug" align="right" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhe-3oD4E2muu4qK1Z3zLmNz6wPhQmhLSM2hfwR8UeqF5oqBEnfFJfoLQ4GZZfPW6AqTSuvEnkYrhkjWG4_3VsdkPCRDAtZt3D5x-UUvaS6XXNDlAh2my4sJnqQMq7-sy0YDWn0i4NpQuY/?imgmax=800" width="379" height="279" /></a>stew-ish dinner at a conference like this, but it really wasn’t that bad this time. And as the dinner had sunk in and the beer was starting to function, <a href="http://www.datarockmusic.com/">Datarock</a> entered the stage. I must admit that electronica is not my favorite genre, but the performance that Datarock delivered was impressive. And how could you possibly go wrong with lyrics like this at NDC?</p> <p><em>I ran into her on computer camp <br />(Was that in 84?) <br />Not sure <br />I had my commodore 64 <br />Had to score</em></p> <p>-- Datarock, Computer Camp Love</p> <p>And after the Datarock concert we headed up to the geek bar. <a href="http://www.loveshack.no/">Loveshack</a> had tuned in some never-dying 80s classics that really got the geeks rocking. Great show!</p> <p>Once again I was impressed that some of the speakers chose to hang out with us mere mortals. Phil Haack, Peter Provost, Scott Bellware, and Udi Dahan were all hanging around and took the time to socialize. Much appreciated!</p> <p><a href="http://www.flickr.com/photos/grothaug/sets/72157619854994646/"><img style="border-right-width: 0px; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; margin-left: auto; border-left-width: 0px; margin-right: auto" title="Photo by Rune Grothaug" border="0" alt="Photo by Rune Grothaug" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwCVDw2NzblGlR_4iXeGejepaxRjTHep8boVeDv5HfeRp0kATCH9po5S_9IE4-a5gTH0niR_kZqZkAGVpjqGgIkfnH6M0ieVzx-QIzPITq9gfp5-JOY4rFClBrtNYa8QM0pIL24OHY4zQ/?imgmax=800" width="496" height="317" /></a> </p> <p></p> <p>A picture speaks more than the 1332 words on this page; NDC 2009 was just a big smile! Too bad it’s a whole year ‘till next time…</p> Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com4tag:blogger.com,1999:blog-3258074296776382669.post-27633284941373507202009-05-28T00:30:00.001+02:002009-05-28T00:30:08.467+02:00My NDC 2009 Agenda<p>The <a href="http://www.ndc2009.no/">Norwegian Developer Conference 2009</a> will take place in Oslo from June 17th to 19th.
I’ve been lucky enough to get my hands on a full 3-day ticket and I’m really looking forward to this event. I attended both TechEd Barcelona in 2007 and PDC last year in LA, but I can’t help thinking that NDC 2009 has got an even more impressive speaker lineup than both of those – at least if you’re into agile practices and software craftsmanship. </p> <p><a href="http://www.ndc2009.no/index.aspx?cat=1069&id=43583"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; margin-left: 0px; border-left-width: 0px; margin-right: 0px" title="image" border="0" alt="image" align="right" src="http://lh4.ggpht.com/_hlZzgPJTEUM/Sh2-7edNE5I/AAAAAAAAAi0/8VCVRQo0y8w/image%5B9%5D.png?imgmax=800" width="154" height="216" /></a>If you’re thinking pure technology, NDC might not be <em>that</em> impressive, but I personally believe that the quality of a conference is a lot more about the quality of the speakers and how they present their thoughts and ideas, and less about the technological content. I’d rather spend an hour reading some good articles and trying out some new technology hands-on than spend an hour on a bad chair in a room that always seems to lack oxygen, listening to a mediocre speaker reading out loud every word on his/her PowerPoint slides. </p> <p>Going to conferences is about getting inspired. It’s about getting that tickling feeling of neurons going amok and new ideas swirling around in your head. It’s about triggering activity in your <a href="http://www.memory-key.com/news/2004/news_2004Apr.htm#insight">anterior superior temporal gyrus</a>. And it’s all about the speakers. Skilled speakers with a lot of experience and confidence on stage, giving a talk on a topic near and dear to their heart, can really make a difference. And with a speaker lineup with names like Feathers, Rahien, Hanselman, Bolognese, Miller, Haack, Dahan, Osherove, Block, Provost, Bustamente, C. Martin, Lhotka… There’s just no way that this is going to be a mediocre event. It’s destined for success!</p> <p>The worst part of this conference will actually be picking which sessions to attend. It’s just impossible not to miss a great session, but hopefully they will all be videotaped and available online shortly after the conference. But the sessions are always best live, and one has got to choose something.
As it looks right now I believe this will be my agenda;</p> <p><a href="http://www.ndc2009.no/"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="http://lh4.ggpht.com/_hlZzgPJTEUM/Sh2-73U_tbI/AAAAAAAAAi4/BRp-H3MMPW0/image%5B10%5D.png?imgmax=800" width="645" height="108" /></a> </p> <table border="0" cellspacing="0" cellpadding="2" width="637"><tbody> <tr> <td valign="top" width="137"> <h5>DAY 1</h5> </td> <td valign="top" width="153"> </td> <td valign="top" width="345"> </td> </tr> <tr> <td valign="top" width="137">Ayende Rahien</td> <td valign="top" width="153">Building Multi Tenant Apps</td> <td valign="top" width="345">Haven’t had a chance to see Rahien live yet, but I’ve read and used some of his works</td> </tr> <tr> <td valign="top" width="137"> <p>Michael Feathers</p> </td> <td valign="top" width="153"> <p>Working Effectively with Legacy Code: Taming the Wild Code Base</p> </td> <td valign="top" width="345">I’ve seen some videos of Feathers up on <a href="http://www.infoq.com">InfoQ</a> and I highly recommend his sessions</td> </tr> <tr> <td valign="top" width="137"> <p>Juval Löwy</p> <p></p> </td> <td valign="top" width="153"> <p>Productive Windows Communication Foundation</p> </td> <td valign="top" width="345">Don’t know much about Löwy to be honest, but getting productive with WCF is never bad.</td> </tr> <tr> <td valign="top" width="137"> <p>Rockford Lhotka</p> </td> <td valign="top" width="153"> <p>Implementing Permission-based Authorization in a Role-based World</p> </td> <td valign="top" width="345">Got to have <em>some</em> technical sessions too, and though I’ve never used Rocky’s <a href="http://www.lhotka.net/cslanet/">CSLA</a> framework, I’ve listened to a couple of the <a href="http://www.dotnetrocks.com/">DotNetRocks</a> episodes he has attended. Besides; the content suits the project I’m currently working on perfectly :)</td> </tr> <tr> <td valign="top" width="137"> <p>Udi Dahan</p> </td> <td valign="top" width="153"> <p>Intentions and Interfaces - Making Patterns Complete</p> </td> <td valign="top" width="345">Yet another one of those gurus you just read and hear a lot about. </td> </tr> <tr> <td valign="top" width="137"> <p>Michael Feathers</p> </td> <td valign="top" width="153"> <p>Design Sense: Deep Lessons in Software Design</p> </td> <td valign="top" width="345">Feathers again; he’s just that good.</td> </tr> </tbody></table> <p> </p> <table border="0" cellspacing="0" cellpadding="2" width="636"><tbody> <tr> <td valign="top" width="142"> <h5>DAY 2</h5> </td> <td valign="top" width="142"> </td> <td valign="top" width="350"> </td> </tr> <tr> <td valign="top" width="142"> <p>Jeremy D. Miller</p> </td> <td valign="top" width="142"> <p>Convention over Configuration applied to .NET</p> </td> <td valign="top" width="350">Been following his blog for some time and I like his involvement with the Alt.Net community. Great interview with him on <a href="http://www.altnetpodcast.com/episodes/18-talking-with-jeremy-miller-about-alt-net">the Alt.Net podcast</a>. And besides; CoC is fascinating.</td> </tr> <tr> <td valign="top" width="142"> <p>Roy Osherove</p> </td> <td valign="top" width="142"> <p>Unit Testing Best Practices</p> </td> <td valign="top" width="350">Went to Osherove’s sessions at TechEd in 2007 and it was well worth it.
Hope he brings his guitar :)</td> </tr> <tr> <td valign="top" width="142"> <p>Ted Neward</p> </td> <td valign="top" width="142"> <p>Extend the Customization Possibilities of your .NET App with Script</p> </td> <td valign="top" width="350">Ted is a great speaker and the scripting possibilities are something I’d really like to look more into.</td> </tr> <tr> <td valign="top" width="142"> <p>Robert C. Martin</p> </td> <td valign="top" width="142"> <p>Clean Code: Functions</p> </td> <td valign="top" width="350">One of the most energetic speakers out there and <a href="http://www.amazon.co.uk/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882/ref=sr_1_1?ie=UTF8&s=books&qid=1243461137&sr=8-1">Clean Code</a> will be read in the upcoming weeks.</td> </tr> <tr> <td valign="top" width="142"> <p>Rafal Lukawiecki</p> </td> <td valign="top" width="142"> <p>Architectual use of Business Intelligence in Application Design</p> </td> <td valign="top" width="350">BI has always been one of those fields that I find interesting, but I never had the time to really dig into it. And from what I’ve heard Rafal was one of the top-rated speakers at TechEd 2007 (or was it 2008?).</td> </tr> <tr> <td valign="top" width="142"> <p>Jimmy Nilsson</p> <p></p> </td> <td valign="top" width="142"> <p>Entity Framework + Domain-Driven Design = true?</p> </td> <td valign="top" width="350">I’ve read Nilsson’s <a href="http://www.amazon.co.uk/Applying-Domain-Driven-Design-Patterns-Using/dp/0321268202/ref=sr_1_1?ie=UTF8&s=books&qid=1243461687&sr=8-1">book on DDD</a> and seen his <a href="http://www.oredev.org/topmenu/video/ddd.4.5a2d30d411ee6ffd28880002148.html">session at Øredev</a> last year. I’m currently working on a project where we try to follow the guidelines of DDD, and so it will be interesting to see his take on EF + DDD.</td> </tr> <tr> <td valign="top" width="142"> <p>Richard Campbell</p> <p>Carl Franklin</p> </td> <td valign="top" width="142"> <p>.NET Rocks! Live</p> </td> <td valign="top" width="350">I’ve followed the .NET Rocks podcast for quite some time and the live recordings are never dull. It will be interesting to see who they gather on their panel this time.</td> </tr> </tbody></table> <p> </p> <table border="0" cellspacing="0" cellpadding="2" width="634"><tbody> <tr> <td valign="top" width="145"> <h5>DAY 3</h5> </td> <td valign="top" width="142"> </td> <td valign="top" width="345"> </td> </tr> <tr> <td valign="top" width="145">Scott Bellware</td> <td valign="top" width="142">Full Day Tutorial: Good Test, Better Code</td> <td valign="top" width="345">I’m a strong believer in TDD and Bellware is certainly one of the gurus in this field.</td> </tr> </tbody></table> <p>As you might have noticed from my list of speakers, I try to spread my sessions to cover as many different speakers as possible. That way I know which ones I can spend time with when the videos come online. </p> <p>And a little tip if you’re going to NDC (or any other conference); do not hesitate to leave a session that you find boring or uninteresting.
It’s your time and you’d better spend it right!</p> Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com0tag:blogger.com,1999:blog-3258074296776382669.post-17570679399890353782009-05-26T08:00:00.000+02:002009-05-26T08:00:00.490+02:00PRISM: Your guide to a well-structured UI layer in WPF/SilverLight – Part 2<p>In <a href="http://www.kjetilk.com/2009/04/prism-your-guide-to-well-structured-ui.html">Part 1</a> I talked a bit about testability as one of the major drivers for why you would choose to use <a href="http://compositewpf.codeplex.com">Prism</a> as your guidance for a composite application. In this post I’ll try to give you some hints on how Prism addresses common challenges like separation of concerns, single responsibility and supporting multiple platforms.</p> <h4><a href="http://lh6.ggpht.com/_hlZzgPJTEUM/ShsOHo84QiI/AAAAAAAAAic/IKv1U30ESmw/s1600-h/image%5B21%5D.png"><img style="border-right-width: 0px; margin: 0px 10px 10px 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="Lego Bricks" border="0" alt="Lego Bricks" align="left" src="http://lh6.ggpht.com/_hlZzgPJTEUM/ShsOIOuiFUI/AAAAAAAAAig/pxHycQTjAgw/image_thumb%5B13%5D.png?imgmax=800" width="209" height="240" /></a> Modularity</h4> <p>Modularity is what makes composite applications composite. Modularity is one of those design principles that has been around ‘forever’, and it’s just as relevant today as ever. “Modules”, “Packages”, and “Components” are all names for the same concept; grouping related functionality together. That means that <em>cohesion </em>inside a module should be high; the objects within a module should work within the same context and address a common problem. If the grouping of functionality is done right then the <em>coupling </em>between modules should be low, because there shouldn’t be any need to reference objects that are unrelated. </p> <p>The concept of modules in Prism will guide you towards the goal of high cohesion / low coupling. Modules in Prism don’t tell you how far up or down the architectural layers you should or could go, but a module will typically include at least the presentation layer. Whether you choose to implement a complete, vertical slice of your application all the way down to the database, or you choose to stop right below the presentation layer, is up to you. What is important to keep in mind is that a module should preferably not reference any other modules or the host application itself. The module must be kept as separate and isolated as possible. And because these modules are independent of their surroundings, they should be pretty easy to load into the application and by that make it possible to compose an application from these building blocks.</p> <p> </p> <h4><a href="http://lh3.ggpht.com/_hlZzgPJTEUM/ShsOI26UZ7I/AAAAAAAAAik/59n7XbrGDPA/s1600-h/image%5B7%5D.png"><img style="border-right-width: 0px; margin: 0px 10px 10px 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="Wrench" border="0" alt="Wrench" align="left" src="http://lh5.ggpht.com/_hlZzgPJTEUM/ShsOJszjqdI/AAAAAAAAAio/J71WZh6a0_Y/image_thumb%5B3%5D.png?imgmax=800" width="244" height="165" /></a> Maintainability</h4> <p>The biggest maintenance problems I’ve found myself in have usually been due to large, difficult-to-follow code-behind files.
Large classes and methods with a lot of functionality are in general hard to maintain, but my code-behind files from the pre-TDD era had a distinct tendency of getting bloated. And not only were they big; they also had a lot of different responsibilities; from UI logic and validation to business rules and data flow. And even data access in those early days (after all; that was what those on-stage demos and MSDN documentation taught us, right?). </p> <p>Prism tackles the code-behind problem by showing you how to use UI design patterns to separate out functionality into presenter and presentation model classes. These classes do not have any graphical components related to them and so they lend themselves really nicely to unit testing. The <a href="http://www.microsoft.com/practices">Patterns & Practices team</a> chose to implement what Martin Fowler calls the <a href="http://martinfowler.com/eaaDev/PresentationModel.html">Presentation Model</a> pattern. The more WPF-specific implementation of this pattern is often referred to as the <a href="http://blogs.msdn.com/johngossman/archive/2005/10/08/478683.aspx">Model-View-ViewModel</a> pattern, coined by John Gossman, but because there’s no “official” documentation of the MVVM pattern (just a whole lot of blog posts), P&P chose to refer to the well-documented Presentation Model. But if you want to google your way to more intel on the UI pattern used in Prism, enter MVVM or Model-View-ViewModel as your search criteria. That way you’ll have a better shot at getting WPF or SilverLight related search results. A good start would be the <a href="http://msdn.microsoft.com/en-us/magazine/dd419663.aspx">MSDN Magazine article by Josh Smith</a>, <a href="http://blogs.msdn.com/dancre/archive/2006/10/11/datamodel-view-viewmodel-pattern-series.aspx">Dan Crevier’s early series on DM-V-VM</a>, <a href="http://joshsmithonwpf.wordpress.com/category/mvvm/">various blog posts on the MVVM subject by Josh Smith</a>, and <a href="http://karlshifflett.wordpress.com/mvvm/">Karl Shifflett’s M-V-VM articles</a>.</p> <p> </p> <h4><a href="http://lh5.ggpht.com/_hlZzgPJTEUM/ShsOKHHZp_I/AAAAAAAAAis/8i-fg8zxudw/s1600-h/image%5B3%5D.png"><img style="border-right-width: 0px; margin: 0px 10px 10px 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="Winnie-the-Pooh" border="0" alt="Winnie-the-Pooh" align="left" src="http://lh5.ggpht.com/_hlZzgPJTEUM/ShsOK4bKvZI/AAAAAAAAAiw/N0ER3f2xYQA/image_thumb%5B1%5D.png?imgmax=800" width="184" height="191" /></a>Multi-Targeting</h4> <p>Should you choose WPF or SilverLight? The short and evasive answer is of course; it depends. I’m not going to elaborate on when you should choose either, but if your answer is <a href="http://en.wikipedia.org/wiki/Winnie-the-Pooh">both</a>, then the guidance in Prism can show you how you can do this in a very smooth way. In fact; the difference between the WPF and the SilverLight version in Prism’s reference application is 95% XAML. That is; everything but the Views is the exact same code. And by <i>exact</i> I literally mean the same code; instead of having the Presenter/PresentationModel code duplicated, they actually link the SilverLight files to the corresponding WPF files. The SilverLight projects therefore contain mostly Views, and the shared code lies in the WPF projects. </p> <p>The last 5% difference implies that you can’t get all the way by changing the XAML alone; there is still some tweaking to get the WPF and SilverLight working nicely together.</p>
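<p>That tweaking is typically done with conditional compilation. Here is a minimal sketch of the idea, assuming the conventional <em>SILVERLIGHT</em> compilation symbol that Silverlight projects define (the class itself is made up for illustration):</p>
<pre><code class="csharp">public static class PlatformInfo
{
    // The same source file is compiled into both projects;
    // the SILVERLIGHT symbol picks the platform-specific branch
#if SILVERLIGHT
    public const string Name = "Silverlight";
#else
    public const string Name = "WPF";
#endif

    public static string Describe()
    {
        return "Compiled for " + Name;
    }
}
</code>
</pre>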
<p>Since there are some subtle differences between WPF and SilverLight when it comes to functionality (SilverLight is not a pure subset, since it contains some functionality that does not (yet) exist in WPF), the P&P team has used <a href="http://msdn.microsoft.com/en-us/library/ed8yd1ha(vs.71).aspx">preprocessor directives</a> in those places where they’ve had to customize specifically for the platforms. </p> <h4>Wrapping It Up</h4> <p>Building applications that are highly testable and maintainable is key for long-lived software. Splitting functionality into well-defined modules that can be developed in parallel by separate teams is key for scaling out the development process. But keep in mind that not all applications will benefit from the Composite Application Guidance. Prism is not a silver bullet and it <i>will</i> bring more complexity into your development process. But if your needs justify the added complexity and you know that you must ‘embrace change’ for years to come, Prism can really lay the foundation for a successful development story. And remember that Prism is guidelines, not a framework. </p> Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com0tag:blogger.com,1999:blog-3258074296776382669.post-733886966389376592009-04-20T22:41:00.001+02:002009-04-20T22:41:35.732+02:00PRISM: Your guide to a well-structured UI layer in WPF/SilverLight<p>I’ve had the opportunity to work with <a href="http://www.codeplex.com/compositewpf">Composite Application Guidance for WPF and SilverLight</a> (codenamed “PRISM”) for a couple of months now, and I’m really impressed with what the Patterns & Practices team has shipped this time. The forefather of Prism is in many senses the CAB framework (Composite UI Application Block) and even though I never worked with the CAB Framework myself, I’ve heard that it is quite large and not that easy to grasp. Prism, on the other hand, is quite lightweight and the documentation is very concise and well written. </p> <p><img style="border-right-width: 0px; margin: 0px 10px 10px 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" align="left" src="http://lh5.ggpht.com/_hlZzgPJTEUM/Sezd95lJj_I/AAAAAAAAAiM/mhtyGhIn9YA/image5.png?imgmax=800" width="244" height="108" />The feedback from CAB has also been that it’s too intrusive; it’s an all-or-nothing application block and it’s hard to take advantage of the UI composition patterns in existing applications. CAB is, from what I have found, meant to be built upon – not with (remember, I haven’t worked with CAB myself, so if you’d like to correct me, please feel free to do so in the comments below). With Prism P&P has taken quite another approach; you’re free to use (or not use) any part of the Composite Application Library in Prism. And you can switch out whatever part doesn’t suit your needs. For instance, a core principle in Prism is to use an IoC container to make the application highly testable and loosely coupled. And since P&P has developed an IoC container themselves, namely Microsoft Unity, the examples and the reference application in Prism use Unity. But if you’d rather use Windsor, StructureMap, Ninject, Autofac, or any other IoC you’re definitely free to do so. </p> <p>The big difference here is that where CAB is an application <b>block</b>, Prism is an application <b>guidance</b>.
And it guides you towards building applications that are testable, maintainable, multi-targeted and modularized. I’ll dive into these concepts in more detail, so let’s start with;</p> <h3>Testability</h3> <p>Everybody tests their code and there are two ways to do it; </p> <p>a) Manually; set some breakpoints, fire up the app, input some data and push some buttons, let the debugger hit the breakpoints, inspect some variables, and check that everything works as expected (or more often; try to find out why it doesn’t work as expected)</p> <p>b) Automated; use a testing framework like NUnit, xUnit, or MSTest, write some tests, and then let the machines do the tedious work of verifying that you didn’t break anything you didn’t mean to</p> <p>If you enjoy your time with the debugger, I won’t try to convince you that automation is good. But I consider myself a pretty lazy programmer and whenever I see an opportunity to automate boring, repetitive tasks, I always try to do so. I prefer to code, not debug, and therefore I automate my testing; I write unit, integration and UI tests that can be run by an unattended build machine whenever I check in some code changes. <b>I’m a coder, not a debugger</b>. </p> <p><a href="http://lh4.ggpht.com/_hlZzgPJTEUM/Sezd-8FwTaI/AAAAAAAAAiQ/4h1QQfVfwTI/s1600-h/image%5B4%5D.png"><img style="border-right-width: 0px; margin: 0px 0px 10px 10px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" align="right" src="http://lh4.ggpht.com/_hlZzgPJTEUM/Sezd_klhqeI/AAAAAAAAAiY/XLUjorjC0HY/image_thumb%5B1%5D.png?imgmax=800" width="174" height="244" /></a>But writing unit tests can be hard if you haven’t architected your classes and methods in a way that opens up for testing. If you instantiate objects inside your classes or are in other ways tightly coupled to other classes, mocking out those classes that are not in the scope of the current unit test will be hard. It’s not impossible, it’s just hard. One of the areas that are notoriously hard to unit test is the “code behind” of graphical components, because when you instantiate a GUI component, it makes you dependent on a GUI thread when you run the test. On a build machine that’s going to run your tests without any interactive user logged in, this is just not the case; there’s no GUI thread available. And besides; it is bloody annoying and time-consuming to have those forms and windows pop up whenever you run your test suite.</p> <p>Opening your class for dependency injection and using an IoC container to manage the wiring of dependent objects is a well-proven and easy way to solve this problem.</p>
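<p>To make that concrete, here is a minimal sketch of constructor injection (the interface and class names are made up for illustration):</p>
<pre><code class="csharp">// The presentation model depends on an abstraction, not a concrete service
public interface ITweetService
{
    void Post(string message);
}

public class TweetViewModel
{
    private readonly ITweetService _tweetService;

    // The dependency comes in through the constructor, so a unit test
    // can pass in a fake or a mock without ever touching a GUI thread
    public TweetViewModel(ITweetService tweetService)
    {
        _tweetService = tweetService;
    }

    public void Send(string message)
    {
        _tweetService.Post(message);
    }
}
</code>
</pre>
<p>At runtime the container resolves <i>TweetViewModel</i> and supplies the real <i>ITweetService</i> implementation; in a unit test you simply new it up with a fake.</p>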
<p>Prism explains and shows you how to write your application using an IoC container for the hot-wiring. And as I’ve already mentioned; if you prefer any other IoC container, it’s totally up to you. But if you choose not to use Unity, you’ll have to be prepared to write some wiring code when initializing your application. Prism comes with the wiring code in the form of a class called <i>UnityBootstrapper</i>. And there’s no surprise in the naming here; this class takes care of booting up your application with the Unity IoC container. So if you want to use any other container, you’ll need to rewrite the <i>UnityBootstrapper</i> to suit your choice. Or if you’re lucky; use the source code from someone who’s already done it (like the Castle Windsor adapter and bootstrapper that you can find in the <a href="http://compositewpfcontrib.codeplex.com/">Composite WPF Contrib</a> project over at CodePlex).</p> <p> </p> <p>All right! I think that’s enough for one post. I promised to write about maintainability, multi-targeting and modularity as well, so these will be the subjects for my next post. </p> Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com0tag:blogger.com,1999:blog-3258074296776382669.post-14239899526012178152009-03-19T22:35:00.001+01:002009-03-19T22:54:13.057+01:00MSDN Live: Slides & Demo Code from “WPF Done Right!”<p>My colleague <a href="http://blog.fossmo.net">Pål Fossmo</a> and I were invited to give a talk on the <a href="http://www.codeplex.com/CompositeWPF">Composite Application Guidance</a> (codenamed <i>Prism</i>) on the MSDN Live March 2009 tour. It was great fun, but man we spent many hours preparing for this event! Given the fact that there were two of us giving the talk, one could assume that this meant just half the work. But, no. So many hours of discussing what to include, how to do the talk, who does what, synchronizing the talk, rehearsal…  </p> <p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8TIeEwqV-9Z2GUOAa9GU2RV6L0CuZkGRC6SN_dygGds5pR0GqO3CrV7fmeOourUMywB1aZ9y2VemlxqknmTGqMaz-C_qFOwupdlq8d1-UjiB-mmoHJ-x1GoL1_gdvithYPCn58NJyAu0/s1600-h/image%5B34%5D.png"><img style="border-right-width: 0px; margin: 10px auto; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="Pål talking about IoC Containers" border="0" alt="Pål talking about IoC Containers" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYaSsgB6wO75NH17cH-dS_Fa8Mh3kEzr-rQVtAfI8GCXa0i7OZNwdGbU2oGDeoCX0RQMi2krnhu6x1meUp3AFdFJc9MIFTL8pFSAILqBp5z4jhv21MlVMb953ICYHZ0uBLcIKGjNNmk2s/?imgmax=800" width="420" height="295" /></a>And we thought we had it all figured out as we started out in Stavanger on March 5th. But the feedback from the session suggested that maybe we ought to change the talk a bit. The score wasn’t as good as we’d hoped and we knew we could do better. So we spent the weekend adjusting the talk for Bergen on March 10th. The tip from MSDN General <a href="http://blogs.msdn.com/grothaug/">Rune G</a> was clear; more code equals higher score. So we added some quality time in Visual Studio to the talk and the score went up. I must admit that I was perhaps the one who resisted taking “live coding” into the talk in the first place, but seeing the scores from Stavanger and Bergen made it pretty clear that this was a bad call. The reason for my resistance was perhaps the fear of ‘something’ going wrong during live coding; staying in PowerPoint is safe, jumping around in Visual Studio is a lot more risky. So many things can go wrong, and standing in front of the crowd with an app that crashes and burns is just not much fun. Believe me; I’ve tried it.
The demo-God was nice to us though, and I think we got away with some nice demos on how to get started with Prism.</p> <h3><a href="http://tinyurl.com/wpfdoneright"><img style="border-right-width: 0px; margin: 0px 10px 10px 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="Download slides & code" border="0" alt="Download slides & code" align="left" src="http://lh6.ggpht.com/_hlZzgPJTEUM/ScK6uNFXbUI/AAAAAAAAAh4/RPIOV7rlEbE/image%5B16%5D.png?imgmax=800" width="227" height="176" /></a>The Slides</h3> <p>We spent about half of the talk in PowerPoint and the rest was demo. The slides focus on the <i>what</i> and <i>why</i> of Prism, while the <i>how</i> was in Visual Studio. Since one of the key concepts of Prism is the use of an IoC/DI container, we decided to spend about 8-10 minutes explaining the concepts of <i>Dependency Injection </i>and <i>Inversion of Control</i> using some example code in PowerPoint. Rune Grothaug will publish a screen-recording of the session we did in Oslo, and I guess it will be available in a couple of days (I’ll update this article with a link to the recording when it’s available). If you want to take a look at the slides, you can <a href="http://tinyurl.com/wpfdoneright">download them here</a> (for you non-Norwegian speakers out there; sorry, the slides are (mostly) in Norwegian, but if you’d like a copy in English just let me know and I’ll translate and upload it).   </p> <h3><a href="http://tinyurl.com/wpfdoneright"><img style="border-right-width: 0px; margin: 0px 10px 10px 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="Download slides & code" border="0" alt="Download slides & code" align="left" src="http://lh5.ggpht.com/_hlZzgPJTEUM/ScK6up0cygI/AAAAAAAAAh8/2Lv7kaX4lvw/image%5B24%5D.png?imgmax=800" width="158" height="275" /></a>The Code</h3> <p>The goal of the demo was to show off some of the key concepts in Prism; modules, regions, views and communication. As we started to prepare for this talk, we quickly felt the need for a very lightweight and small app that we could demo. The reference application that comes with Prism is really good, and I highly recommend everyone to take a walk around the code from the Patterns & Practices team. It’s nicely done and I think most of us can learn a lot just by reading this code. But as nice and well done as the reference app is, we still wanted something smaller and more fitted to our purpose, so we decided to roll our own little composite application. And since Pål is a big fan of <a href="http://www.kjetilk.com/www.twitter.com">Twitter</a>, he built a nice little Twitter client using the concepts from Prism. The demo app, called ‘Kvittre’, consists of a shell with 4 regions, and in the main app we had 3 modules; one for the login view, one for posting tweets, and one for listing tweets from those you follow. </p> <p>For the live demo we wanted to show how to build a module, and since <a href="http://tinyurl.com">TinyUrl</a> is a popular service for shortening URLs in tweets, we decided to build a module that could take a URL, ask the TinyUrl service for a shortened version, and then insert the tiny URL into the message. And to demo that you could build a module separate from the ‘main solution’, we coded the module in a separate solution. To test-run the module we added a ‘host application’ project that contained a <i>bootstrapper</i> and a region to host the view from the module.</p>
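<p>For a feel of what such a module looks like, here is a minimal sketch. <i>IModule</i>, <i>IRegionManager</i> and <i>RegisterViewWithRegion</i> are part of the Composite Application Library, while the class names, the region name, and the namespaces (which I quote from memory) are illustrative:</p>
<pre><code class="csharp">using Microsoft.Practices.Composite.Modularity;
using Microsoft.Practices.Composite.Regions;

// A stand-in for the module's real view (which would be defined in XAML)
public class TinyUrlView : System.Windows.Controls.UserControl { }

// A Prism module wires its views into named regions when it initializes
public class TinyUrlModule : IModule
{
    private readonly IRegionManager _regionManager;

    public TinyUrlModule(IRegionManager regionManager)
    {
        _regionManager = regionManager;
    }

    public void Initialize()
    {
        // The shell defines a region with this name; the module fills it
        _regionManager.RegisterViewWithRegion("TinyUrlRegion", typeof(TinyUrlView));
    }
}
</code>
</pre>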
<p>When the module was tested and looked okay, we deployed it back to Kvittre. Kvittre was set up with a <i>DirectoryModuleCatalog</i> that would load any module in a given directory. Since the 4th region in the Kvittre shell was set up to host the TinyUrl module, the module was loaded and displayed in Kvittre. Then we used an <i>EventAggregator</i> for communicating between modules and wrapped it up by demoing some unit testing of the <i>Login</i> method in <a href="http://tinyurl.com/wpfdoneright"></a><a href="http://tinyurl.com/wpfdoneright"><img style="border-right-width: 0px; margin: 10px 0px 0px 10px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="Download slides & code" border="0" alt="Download slides & code" align="right" src="http://lh3.ggpht.com/_hlZzgPJTEUM/ScK6vOg4mVI/AAAAAAAAAiA/FECanmP044Y/clip_image002%5B4%5D%5B3%5D.gif?imgmax=800" width="164" height="79" /></a><a href="http://tinyurl.com/wpfdoneright"></a>the <i>Presenter </i>class of the <i>LoginView</i>. If you want to check out the code, it’s all wrapped up and ready for <a href="http://tinyurl.com/wpfdoneright">download right here</a>.</p> <p><a href="http://tinyurl.com/wpfdoneright"></a></p> Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com0tag:blogger.com,1999:blog-3258074296776382669.post-23794974269001034022009-02-24T08:21:00.001+01:002009-02-24T08:28:25.132+01:00“Legacy Code is Code Without Tests”<p>I wish I’d come up with that phrase first, but it was Michael Feathers who stated this in his “<a href="http://www.amazon.com/Working-Effectively-Legacy-Robert-Martin/dp/0131177052" target="_blank">Working Effectively with Legacy Code</a>”. It’s a great statement, and it pretty much sums up what testing is all about; if you’re not covered by tests, it is hard to refactor and change the code and at the same time know that you didn’t break anything. And if you have code that is resistant to change or that makes you nervous every time you touch it, then you have code that won’t be changed. You have legacy code. And you can try to wrap it, hide it, and forget it, but someday it will blow up. And someday you’ll have to go in there and make it work. But you won’t have any safety net. You’ll have to change something you do not know the reach of, and you’ll have to do it blindfolded and pray that your changes aren’t going to break something somewhere else. But I promise you; it will. </p> <p><img style="border-right-width: 0px; margin: 0px 10px 0px 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" align="left" src="http://lh3.ggpht.com/_hlZzgPJTEUM/SaOf9UCimhI/AAAAAAAAAhY/EG-e1A5a1O4/image%5B5%5D.png?imgmax=800" width="244" height="164" /> And man I can tell you; it is good to be a consultant with skills in a technology where the software industry hasn’t had time to produce that much legacy code yet! That alone should be a good enough reason for you to invest some of your time into learning new skills. Skills that make you more valuable in the projects that produce new code, instead of maintaining legacy code that someone else hacked together years ago. It’s those greenfield projects that are fun! </p> <p>And because there aren’t that many WPF-based apps out there yet, there may still be time to save some poor souls from aiming at that same old pit of failure. The pit of strongly coupled, untestable, monolithic monsters.
There’s hope and I believe in the goodness of coders. I believe that we <i>want</i> to make solid code. I believe that we <i>want</i> to produce code that is maintainable and changeable. And I believe that we, the residents of the software community, can make the leap into software craftsmanship. It’s just a matter of making those right choices. And I believe that loose couplings, testability and modularity are definitely the right choices in most cases. These are the key principles that will make you a better person (or at least a better developer).</p> <p>Loose couplings and testability are tightly coupled (touché!). If you’re doing test-driven development, or behavior-driven development, or any other development practice that uses tests to drive the design, you will end up with code that is loosely coupled. And if you’re building an app with loose couplings between modules and classes, you’ll end up with code that lends itself to testing very well. And testable, loosely coupled systems will be easier to maintain and change than a tightly coupled system with no tests to verify your code.</p> <p>Modularity is another beast though. Modularity is about splitting the application into pieces that multiple teams can work on in parallel - without getting in the way of each other. Modularity is about scalability and maintainability. Adding new functionality without ending up with a logarithmic<img style="border-right-width: 0px; margin: 10px 0px 0px 10px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" align="right" src="http://lh6.ggpht.com/_hlZzgPJTEUM/SaOf-GyytxI/AAAAAAAAAhc/feCKEa0TuN8/image%5B16%5D.png?imgmax=800" width="170" height="184" /> time/functionality curve is an important factor in software development (maybe not for you and me, but for those white collars* that are deciding whether to fund or close down your project, predictability is extremely important). And modularity is about mastering complexity. How do you master overly complex challenges? You break them down into smaller, more manageable parts. And in software terms those parts are modules.</p> <p>So if you take these 3 ingredients – loose couplings, testability, and modularity – and you shake them together with WPF (shake, not stir), you’ll have a fantastic opportunity to do WPF right. You’ll end up with code, not legacy code.</p> <p><a href="http://www.microsoft.com/norge/msdn_technet_live/agenda.aspx"><img style="border-bottom: 0px; border-left: 0px; margin: 0px 10px 0px 0px; display: inline; border-top: 0px; border-right: 0px" title="image" border="0" alt="image" align="left" src="http://lh4.ggpht.com/_hlZzgPJTEUM/SaOf_X735uI/AAAAAAAAAho/j3Kc3KT3Hcs/image%5B21%5D.png?imgmax=800" width="106" height="67" /></a>If you’re in Stavanger on the 5<sup>th</sup> of March, Bergen on the 10<sup>th</sup>, Trondheim on the 12<sup>th</sup> or in Oslo on the 19<sup>th</sup> of March, you can hear me and my colleague <a href="http://blog.fossmo.net" target="_blank">Pål Fossmo</a> give a talk on this topic at the <a href="http://www.microsoft.com/norge/msdn_technet_live/agenda.aspx" target="_blank">MSDN Live</a> event.
</p> <p> </p> <p> </p> <p><font size="2">* Who, btw, just managed to bankrupt Iceland and are about to break the back of some of the strongest economies in the world… how the he** did they do that?!</font></p> Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com1tag:blogger.com,1999:blog-3258074296776382669.post-87183474235666588572009-01-15T14:05:00.000+01:002009-01-15T14:05:12.936+01:00“A desk is a dangerous place from which to view the world”<p>A while ago a colleague of mine posted a <a target="_blank" href="http://www.timeexpander.com/blog/index.php?itemid=256">blog about his desk</a> at work. He used the words of <em>Gunnery Sgt Hartman</em>, and so I will be no less of a man;</p> <p><a href="http://lh6.ggpht.com/_hlZzgPJTEUM/SWvDOVY1LXI/AAAAAAAAAhE/VT4qHG-WcsU/s1600-h/DSC_0180-1%5B15%5D.jpg"><img style="border-width: 0px; margin: 0px 10px 0px 0px; display: inline;" title="DSC_0180-1" alt="DSC_0180-1" src="http://lh4.ggpht.com/_hlZzgPJTEUM/SWvDO6JlbxI/AAAAAAAAAhI/mlvNkl4a6GE/DSC_0180-1_thumb%5B13%5D.jpg?imgmax=800" align="left" border="0" height="244" width="350" /></a>“The Desk is a system. That system is our enemy. But when you're inside, you look around, what do you see? Businessmen, teachers, lawyers, carpenters. The very minds of the people we are trying to save. But until we do, these people are still a part of that system and that makes them our enemy.”</p> <p>(<em>almost</em> a quote from the great <em>Morpheus</em>)</p> <p>This <em>desk fetish</em> was picked up by <a target="_blank" href="http://anders.hammervold.com/2008/12/you-are-my-one-and-only-desk.html">Anders Hammervold</a>, who in turn challenged <a target="_blank" href="http://www.joaroyen.com/2008/12/how-my-desk-at-work-looks-like.html">Joar Øyen</a>, who in turn challenged me… And since Joar also challenged <a target="_blank" href="http://blog.fossmo.net/">Pål Fossmo</a> – who still hasn’t published his desk – the pressure is now on <em>The Reverend…</em></p> <p>Oh, and before I forget; that quote in the title is by John Le Carré. A fabulous quote if I may say so.</p> <p><a href="http://lh6.ggpht.com/_hlZzgPJTEUM/SWvDOVY1LXI/AAAAAAAAAhM/7vh-TfLoSdo/s1600-h/DSC_0180-1%5B12%5D.jpg"></a></p>Kjetil Klaussenhttp://www.blogger.com/profile/15985372289245420671noreply@blogger.com0