
WiX Publish Element Skipped if Multiple Next Steps Exist (or Burnin’ the WiX at Both Ends)


I don’t write many installs these days, as most of my work is in SaaS-based web development. On occasion, however, there are some things that still must be done by installing something on-premise for a particular customer.  In these rare cases, I’ll usually use WiX to create the install package. As I was writing such an install package recently, I ran across a problem that stumped me for a bit.

In this particular situation, I had written an Asp.Net Web API project that was hosted in a TopShelf-based Windows Service. Once installed on the customer’s local network, a web application would make JSONP calls (the customer was using older browsers) from the browser directly to the local service to collect data not stored in our SaaS platform. The installer needed to install the service, collect information about the account under which the service would run, and optionally allow the user to select the thumbprint of an X.509 certificate to run the service under SSL.

The flow of the steps in the installer wizard looked like the image below.

The installer wizard flow

In my main WXS file, my UI element was along these lines (a trimmed sketch – the dialog names match the flow above, but the property values and everything else are illustrative)…
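
<UI>
  <UIRef Id="WixUI_InstallDir" />
  <DialogRef Id="ServiceUserDlg" />
  <DialogRef Id="CertificateDlg" />

  <!-- Two possible "next steps" out of ServiceUserDlg, switched on BASEPROTOCOL
       (the property values shown here are illustrative) -->
  <Publish Dialog="ServiceUserDlg" Control="Next" Event="NewDialog"
           Value="CertificateDlg" Order="1">BASEPROTOCOL = "https"</Publish>
  <Publish Dialog="ServiceUserDlg" Control="Next" Event="NewDialog"
           Value="VerifyReadyDlg" Order="2">1</Publish>

  <!-- CertificateDlg always moves on to the confirmation dialog -->
  <Publish Dialog="CertificateDlg" Control="Next" Event="NewDialog"
           Value="VerifyReadyDlg" Order="1">1</Publish>
</UI>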

What was interesting is that every time I would run the installer, the UI moved from ServiceUserDlg to VerifyReadyDlg, skipping the CertificateDlg regardless of whether or not I picked the option in the dialog to do so. I then ran the MSI from the command line using…

msiexec /i MyApplication.msi /l*v MyLogFile.txt

…and pored through the output log file. It indicated that the BASEPROTOCOL property that was controlling the dialog switching was set as expected.

Take a look at these lines from the sketch above – specifically the “Order” attribute…
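
<Publish Dialog="ServiceUserDlg" Control="Next" Event="NewDialog"
         Value="CertificateDlg" Order="1">BASEPROTOCOL = "https"</Publish>
<Publish Dialog="ServiceUserDlg" Control="Next" Event="NewDialog"
         Value="VerifyReadyDlg" Order="2">1</Publish>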

Now I make no claim to be a master of WiX, Windows Installer, or even the English language, but when I see the word “Order”, it generally implies that one thing follows another – a sequence, if you will. The WiX documentation for the Order attribute reads…

This attribute should only need to be set if this element is nested under a UI element in order to control the order in which this publish event will be started. If this element is nested under a Control element, the default value will be one greater than any previous Publish element’s order (the first element’s default value is 1). If this element is nested under a UI element, the default value is always 1 (it does not get a default value based on any previous Publish elements).

Hmm. It seemed that Order did indeed have something to do with controlling the order in which the publish events are evaluated, but that is about all it told me. To make matters worse, I was so certain of my assumption that “Order” meant “sequence” that for a long time I never even considered the Order attribute as something that could be a problem. I kept thinking that the publish condition still wasn’t right, or that something else was affecting it that wasn’t getting logged properly. As such, I searched quite a bit on Google for things like…

  • WiX Publish element not firing
  • WiX dialog out of order
  • WiX next button wrong dialog

…none of which seemed to return anything helpful.

As it turns out, in Windows Installer Land, the word “Order” means something akin to “weight”. When multiple Publish elements are found that have the same “Dialog”, “Control”, and “Event” attributes, Windows Installer evaluates their conditions from largest to smallest Order, short-circuiting on the first one that evaluates to true and skipping the rest. I ended up finding this out by trial and error, eventually swapping the numbers in the “Order” attributes, which made it behave correctly.

After finally figuring it out, I went back and did several Google searches specifically about Order being backwards, and eventually came upon this post, archived on Nabble from the WiX mailing list, where according to Blair Murri…

If you have more than one [BaseAddressDlg] event that has a true condition, the one with the highest order number will execute, and the others will be ignored. You need to make your conditions mutually exclusive.

So there we have it. In order to make one Publish element fire before another (when they both have the same “Dialog”, “Control”, and “Event” attributes), you must make its “Order” attribute larger than the others.
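
In my case, that meant swapping the two values from the sketch above, so that the conditional publish gets evaluated first and the unconditional one becomes the fallback…

<!-- Evaluated first (larger Order): go to CertificateDlg when SSL was chosen -->
<Publish Dialog="ServiceUserDlg" Control="Next" Event="NewDialog"
         Value="CertificateDlg" Order="2">BASEPROTOCOL = "https"</Publish>
<!-- Evaluated second: otherwise fall through to VerifyReadyDlg -->
<Publish Dialog="ServiceUserDlg" Control="Next" Event="NewDialog"
         Value="VerifyReadyDlg" Order="1">1</Publish>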

Since this took a while to figure out, I thought I’d post it. Hopefully this will help someone get to the answer faster than I did.


LiveReload on Windows 8 (or “So You’re Okay With Pressing the Windows Key All The Time, But Not Okay With Pressing F5 All The Time?”)


I still use a “big” computer without a touchscreen monitor (or “grampa box”, as the cool kids call it). It’s still just too hard to code on a tablet device. I’ve tried Codeanywhere on my iPad, and looked at a few other solutions such as the ones in this post, but, being spoiled by the conveniences of Visual Studio as a primary development tool, I haven’t yet been able to convince myself that the “good” of being light and portable outweighs the “bad” of missing features. With all the push by major companies into cloud development, it seems that something polished would exist by now, but it still doesn’t seem to be there. Perhaps this means that cloud-based editors are an open market right now. Alas, if only I had more time in the evenings…

Windows 8 has been on my main computer for a few months now, and it’s admittedly a strange experience. It will stay there, as there are indeed some good features, but any way you slice it, something sometimes feels awkward. I think this is partially because much of my memory is mapped to three-dimensional physical locations. When working merrily away in “desktop mode”, I can either see the location of each window or “see” its z-axis position behind the other windows in my mind’s eye. This is great, but after some time passes I’ll invariably perform some action that opens an app in “tablet mode”. The new app covers up all of the windows I had open. This is always unexpected, and breaks my concentration. The root cause of this concentration-breaking is still elusive, but I have yet to be able to think of tablet mode as just another layer sitting on top of all the other windows. Initially I thought it was the fact that I have been using windowed systems (Windows, Mac, Linux) for so many years. The same problem doesn’t seem to exist on tablets, however, as I usually think of the apps as a list on those devices. Only when the two “modes” of operation are mixed does it cause a problem for me.

Once the fullscreen app is open and my concentration is broken, things start to spiral downward. I think bad thoughts about Windows 8 while I reflexively press the Windows key. I seem to expect it to switch back to desktop mode, but it doesn’t. It instead goes to the start screen. More bad thoughts occur while I either hit Windows+D to go back to the desktop, or grab the mouse and click on the Desktop tile. Pressing Windows+Tab to switch back to desktop mode would work much better, but I am always so jarred from the entire contents of the screen changing that I panic and hit the Windows key by itself. I suppose the main point here is that I feel like I am always — and I mean always — having to hit the Windows key in Windows 8.

Pressing keys all the time isn’t fun. Only a certain number of them can be pressed before one dies. Because of this, I decided to try out LiveReload after hearing that it would run on Windows in this TekPub Full Throttle episode. LiveReload is a tool that watches your project folders and automatically reloads pages in the browser when static content such as HTML, CSS, or JS files have changed. Not only does this save me from having to hit the refresh key, but it also answers the ever-present question: “Did I already refresh and the code isn’t working? I can’t remember.”

Upon visiting the friendly requirements page, I noticed (at least at the time of this writing) that while the main page simply reads “Requires XP+”, Windows 8 is not specifically mentioned on the compatibility page.

Main page reads “Requires XP+”

Compatibility page doesn’t specifically mention Windows 8

After searching around on the internet for a bit, I couldn’t seem to find anyone who had tried LiveReload on Windows 8, so I decided to set up a quick test. First, I created my test project – a default Asp.Net MVC4 project using the “Internet Application” template.

Creating the test project from the “Internet Application” template

Following the instructions on the “Getting Started” page, I then installed LiveReload and added the folder of my test project to it.

Adding the test project folder to LiveReload

After that, an integration method needed to be chosen. As described on the How do I choose the best integration method? page, there are three main ways of making your site communicate with LiveReload…

  1. Add a JavaScript snippet to your HTML pages (see the sketch below)
  2. Install a browser plugin
  3. Use a framework plugin
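
For the curious, option 1 boils down to a single script tag pointing at the LiveReload server, which listens on port 35729 by default – something like this…

<!-- LiveReload script-tag integration; remember to remove it before deploying -->
<script src="http://localhost:35729/livereload.js"></script>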

Option 2 seemed the best for me, as I normally wouldn’t want to have to remember to remove extra scripts from pages before they are deployed, and framework plugins are currently only available for Ruby on Rails. I headed over to the Chrome Web Store and installed the LiveReload plugin.

Choosing an integration method and installing the Chrome extension

After that, it was time to test. I enabled the plugin, and ran my test MVC project…

The default index page, running in the browser

… and then added a new heading tag to the HTML and saved the file.

The modified index page, automatically reloaded in the browser

As you can see from the images above, the browser plugin responded to the save operation and reloaded the page beautifully.

It looks like LiveReload works with Windows 8! I haven’t done more extensive testing yet, but I am going to start using it for some projects at work as soon as I get a chance. I’ll post an update to this if I run into any problems.


npm Couldn’t read dependencies (or “So that’s why people don’t develop node apps on Windows”)

Recently, I decided to put the you-R-here application on GitHub. Since I normally work in Windows, I set up a quick development environment on my Windows 8 machine at home. I installed Visual Studio 2012 Express, Notepad++, and the node.js install package that comes with npm. While those things were installing, I thought to my self, “Self, why don’t I know anyone who develops node.js apps on Windows?”. Self didn’t say anything, so I went ahead and copied the files needed over from my Mac.

I chose not to include my node_modules folder on GitHub and, to make things easier, I created a package.json file that listed the dependencies of the project. Doing so would (should?) allow anyone setting up the project from scratch to just run “npm install” in the root project folder to pull down all of the dependencies.
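
For reference, the file looked something like this (the version ranges here are illustrative – the real list lives in the repo)…

{
  "name": "you-R-here",
  "version": "0.0.1",
  "dependencies": {
    "express": "2.5.x",
    "socket.io": "0.9.x",
    "restler": "2.0.x"
  }
}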

To test this, I deleted node_modules and ran ye olde command in PowerShell and… AND…

npm Couldn’t read dependencies

…an error was displayed. As shown in the screenshot above, I was presented with an error message that read…

Couldn't read dependencies
Failed to parse json
Unexpected token <nonprintable character here>
File: <path>\package.json
package.json must be actual JSON, not just Javascript.

A heavy sigh followed the error message, but wasn’t displayed on screen. Did I get the format wrong – leave off an ending curly brace or the like? I headed over to jsonlint and pasted the code in. It verified fine.

running jsonlint on my package.json file

The only other things I could think of that were different from “usual” scenarios were that a) I was using Windows and b) I had created the file in Visual Studio. I couldn’t do much about the first other than switching back to a different OS, so I decided to instead try creating the package.json file somewhere other than Visual Studio. I deleted the file, created it in PowerShell, added it back into my Visual Studio project, added the contents back, and voilà! – the dependencies installed and the app ran fine.
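
The PowerShell side of that is just a one-liner (editing the file afterwards in Visual Studio preserved the original encoding, at least in my case)…

# create an empty package.json; fill in the contents afterwards
New-Item -Path .\package.json -ItemType File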

Creating the package.json file in PowerShell

Installing the dependencies from package.json

Packages install fine when package.json is created in PowerShell

you-R-here running after installing dependencies from package.json

After some poking around, I looked at the encoding of text files created by both Visual Studio and PowerShell (Notepad++ makes it easy to find by highlighting the current encoding underneath the “Encoding” menu). As it turns out, npm doesn’t like the UTF-8 encoded files that Visual Studio creates by default – almost certainly because of the byte-order mark (BOM) it writes at the start of the file, which would explain the nonprintable “unexpected token” in the error – but it does seem to work fine with ANSI (the PowerShell default).

package.json created from Visual Studio 2012

package.json created from PowerShell

I couldn’t find anything in the npm documentation that stated that the file needed to be ANSI-encoded, but the moral of the story is to make sure your package.json files are saved as ANSI (or at least without a byte-order mark).


you-Я-here (or “Where Node Target Process Has Gone Before”) – Part 3

In our previous post, we looked at how to set up a Node.js development environment on a Mac. In this post, we’ll write some minimal code to get a functional website going, and get ready to style it up and push it out to someplace useful.

index.js

It seems many Node.js projects have a main file named “index.js”, so I chose that convention for mine as well. A trimmed sketch of my index.js file is below (the credentials and URL are placeholders). Read through the code first. We’ll discuss it in detail below.
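
var express = require('express');
var app = express.createServer();               // express 2.x-era API
var io = require('socket.io').listen(app);
var targetprocess = require('./targetprocess');

app.listen(8080);                               // leave port 80 for a "real" web server

// Routing: the root goes to index.html, and the stylesheet to index.css
app.get('/', function (req, res) {
  res.sendfile(__dirname + '/index.html');
});
app.get('/index.css', function (req, res) {
  res.sendfile(__dirname + '/index.css');
});

var usernames = {};                             // everyone currently connected

// Target Process uses basic authentication
targetprocess.api({
  userName: 'someuser',
  password: 'somepassword',
  url: 'https://example.tpondemand.com/api/v1'
});

io.sockets.on('connection', function (socket) {
  socket.on('adduser', function (username) {
    socket.username = username;
    usernames[username] = username;
    socket.emit('updateaudit', 'you have connected');
    socket.broadcast.emit('updateaudit', username + ' has connected');
    io.sockets.emit('updateusers', usernames);

    // pull the stories/bugs for the active iteration and hand them to the client
    targetprocess.api('getEntitiesForActiveIteration', function (entities) {
      socket.emit('entitiesretrieved', entities);
    });
  });

  socket.on('changeactiveitem', function (itemId) {
    io.sockets.emit('activeitemchanged', itemId);
    io.sockets.emit('updateaudit', socket.username + ' changed the active item');
  });

  socket.on('disconnect', function () {
    delete usernames[socket.username];
    io.sockets.emit('updateusers', usernames);
    socket.broadcast.emit('updateaudit', socket.username + ' has disconnected');
  });
});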

The first things to notice are the “var xxx = require(“something”);” lines at the top. Node.js uses something called a module loader to load in modules (which in Node.js are just other *.js files). It uses this mechanism to decide what depends on what, in a fashion (to me, at least) that seems reminiscent of the AMD API in RequireJS, or even the IoC pattern in type-safe languages.

For us, we require express and socket.io, which we talked about in the previous post. The third requirement, something called “targetprocess”, is just another js file that I wrote that encapsulates access to the Target Process REST API.

Second is a line that tells the express application variable to listen on port 8080. This is a good practice for dev machines where something else, such as a web server, might normally be listening on the usual port 80. After that are a couple of routing calls. These calls tell the routing functionality in express to route all requests for the root of the web site to a page called “index.html”, and all requests for the root index.css file to a similarly named file in the directory requested.

Next is a variable to hold the names of all users currently connected to the system, and a call to the targetprocess.js component to set the username/password combination for the Target Process (it uses basic authentication) and the URL to its API.

Then come the Socket.IO calls. Socket.IO contains libraries for both server and client-side functionality. Without going through every line in detail, here is a brief summary of the methods that I use…

  • socket.emit – sends a “message” (I would think of it as creating an “event” to which the client html page can respond) to the client represented by “socket”
  • io.sockets.emit – sends a message to all clients connected to the application (including the one represented by “socket”)
  • socket.broadcast.emit – exactly like io.sockets.emit, except that the client represented by “socket” does not receive the message. In other words, this sends a message to everyone except “yourself”.

Documentation is pretty limited on the main Socket.IO site, but check out the links provided in the answer here for more information.

Here in index.js, we are handling various messages. Again, without going line-by-line, below is a list of each with a rough description of purpose…

  • connection – This is a built-in message that triggers whenever a client connects. All further code goes in the handler for this message, and the “socket” argument is used to send messages back to that client.
  • adduser – Triggers when a client sends its username to the server, shortly after connecting
  • updateaudit – A message sent from the server to the client whenever the server has some interesting information that should be displayed in an “Audit” or “Recent Activity” area of the client web page
  • updateusers – A message sent from the server to the client when the user list has been modified
  • changeactiveitem – Triggers when a client clicks on a particular item (user story or bug) on the web page to indicate that a new item is currently being demonstrated.
  • activeitemchanged – A message sent from the server to all clients to indicate that the currently active item has changed
  • disconnect – Another built-in message that triggers when the client represented by “socket” disconnects

index.html

Perhaps the next most interesting page is the client page, index.html. This page contains the client-side JavaScript to handle the messages being passed by the server, and the HTML to render in the browser. Let’s look at a sketch of the code (markup and templates trimmed; the element IDs and CDN versions are illustrative)…
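
<script src="/socket.io/socket.io.js"></script>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
<script src="http://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.3.3/underscore-min.js"></script>
<script>
  var socket = io.connect();

  // send our name to the server as soon as we connect
  socket.on('connect', function () {
    socket.emit('adduser', prompt('What is your name?'));
  });

  // newest audit entries at the top (reverse chronological)
  socket.on('updateaudit', function (message) {
    $('#audit').prepend('<li>' + message + '</li>');
  });

  socket.on('updateusers', function (users) {
    $('#users').html(_.template($('#usersTemplate').html(), { users: users }));
  });

  // render the story/bug list and wire up clicks to change the active item
  socket.on('entitiesretrieved', function (entities) {
    $('#entities').html(_.template($('#entitiesTemplate').html(), { entities: entities }));
    $('#entities li').click(function () {
      socket.emit('changeactiveitem', $(this).attr('data-id'));
    });
  });

  // highlight the active item, unhighlight everything else
  socket.on('activeitemchanged', function (itemId) {
    $('#entities li').removeClass('active');
    $('#entities li[data-id="' + itemId + '"]').addClass('active');
  });
</script>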

You’ll notice several external script references at the top of the page. The first is Socket.IO, required to handle the client-side messaging functionality. After that are jQuery and Underscore, the latter of which I am currently using only for its JavaScript templating functionality. Both are loaded from content delivery networks to try and minimize the number of files I have to host, and to speed up delivery of the libraries.

Next we see the initial connection being made to the root of the web site, followed by a bunch of message handlers. Here is a rough list of descriptions of each…

  • connect – Built-in message for initial connection. The handler here sends back the “adduser” message along with the user’s name
  • updateaudit – Listens for audit messages from the server, and lists them out on the web page, in reverse chronological order
  • updateusers – Listens for messages from the server indicating that the user list has changed, and re-renders it to the web page
  • entitiesretrieved – Listens for the message from the server with the same name, and renders the list to the web page. Also wires up a handler that will send a message to the server to change the active item in the list whenever the user clicks on one in the web page.
  • activeitemchanged – Responds to the message from the server by adding a highlight to the currently active item, and removing the highlight from all other items.

Oh, the Humanity!: An “Issue” With Underscore

One thing to note here is that as I was working with Underscore, I kept getting an error related to an equals sign being invalid. After much fighting, it turns out that whenever you reference a variable or object property inside your template, the equals sign ABSOLUTELY MUST come immediately after the opening <%, with the space between the equals sign and the variable or object property.

i.e. “<%= name %>” works, “<% =name %>” does not

targetprocess.js

The targetprocess.js file, as discussed above, is simply a wrapper for calling into the Target Process REST API. Let’s look at a sketch of the code (the query string and option names are illustrative – check the Target Process docs for the real syntax)…
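
var rest = require('restler');

var options = { userName: '', password: '', url: '' };

function getEntitiesForActiveIteration(callback) {
  // UserStories and Bugs completed in an iteration ending on a (hard-coded) date;
  // see the Target Process REST API docs for the real query syntax
  rest.get(options.url + "/UserStories?where=(Iteration.EndDate eq '2012-06-01')", {
    username: options.userName,
    password: options.password
  }).on('complete', function (result) {
    callback(result);
  });
}

// jQuery-plugin-style API: pass a single object to set global options,
// or an action name plus a callback to perform that action
function api(action, callback) {
  if (typeof action === 'object') {
    options.userName = action.userName;
    options.password = action.password;
    options.url = action.url;
    return;
  }
  if (action === 'getEntitiesForActiveIteration') {
    getEntitiesForActiveIteration(callback);
  }
}

// only "api" is exposed publicly to modules that require() this one
exports.api = api;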

The module requires the restler module which, as discussed in previous blog posts, provides a simple way to interact with REST-based APIs, not unlike the httparty Ruby gem.

In this module, I used a convention often used in jQuery plugins. The module has but one method, called “api”. If the user uses this method in the “normal” way, it takes a string argument indicating the name of the action to perform, followed by a JavaScript callback function that will be called when the requested information is returned from Target Process. If, on the other hand, the user wants to set global options, he or she can call the method with only one object as an argument, and the username, password, and url will be extracted from that object. This convention can be found in the “api” method toward the bottom of the code file.

Other than that, the only action method available is “getEntitiesForActiveIteration”. This method retrieves all UserStory and Bug objects from Target Process that were completed in an iteration ending on a specific date. Right now, this specific date is hard-coded, but I hope to get that calculated or passed in later, which should be easy enough. For more details on how to form these HTTPGet queries to retrieve Target Process objects, see the Target Process REST API.

One final thing to note is the line at the very bottom of the code file. Whenever a Node.js module is written, you must declare which methods, if any, will be exposed publicly to the modules that use your module. In our case, only the “api” method is exposed publicly.

Oh, the Humanity!: An “Issue” With The Target Process REST API

On my first few attempts, I kept getting a message back from Restler saying that it couldn’t parse the JSON coming back from Target Process because the less-than sign (<) was an invalid character. At first, I thought that this was caused by the fact that we sometimes put HTML into the comments or descriptions of our Bug and UserStory objects. But I narrowed the query to only select one item, and the issue persisted. As it turns out, I had a bad query that I hadn’t tested in the browser, and Target Process was giving an error message back in the form of an HTML response. Apparently when errors occur, they appear in the form of HTML-formatted responses, even if you ask for JSON in the request URL.

index.css

Last (and probably least) is index.css. As of now I have only some minimal formatting, just to make sure that the highlight of the active item shows up, and that my lists aren’t blindingly ugly. A rough sketch follows (the class names are illustrative)…
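
/* keep the lists from being blindingly ugly */
ul { list-style-type: none; padding-left: 0; }
li { padding: 4px; margin: 2px 0; }

/* make sure the highlight of the active item shows up */
li.active { background-color: #ffff99; font-weight: bold; }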

Testing the App

To test the app, fire up bash (Terminal Window on a Mac), navigate to the path where index.js exists, and type “node index.js”.

Starting the web site

Then fire up a couple of instances of your favorite web browser (they need not be on the same computer, just able to see the web site – I used my Windows computer), type your username on each…

…and then start clicking on items in the list.

Clicking on an item in one browser will make that item highlight in every browser that is connected.

Next Time

Next time, we’ll add some styling to improve the look of the application, and (hopefully) get it running in a real website somewhere (like Heroku).


you-Я-here (or “Where Node Target Process Has Gone Before”) – Part 2

Last time, we looked at the idea of having an agile iteration demo helper written in Node.js. In this post, we’ll be looking at how to set up the Node.js environment, and get ready to write some code.

Installing Node.js

Node.js originally wouldn’t run on Windows, at least not natively anyway. You had to install Cygwin and recompile Node.js to get it to run. These days, the Windows platform is a first-class citizen for running Node.js. Heck, you can even get Node.js running on IIS. For these posts, however, I chose to set up the environment on a Mac Mini.

Installation itself was a breeze. I just went out to the Node.js site, clicked the big “download” button, and then went through the package installation wizard as shown in the screenshots below…

Node.js site

After clicking big “download” button

Package downloaded

Package installation wizard

One thing to note here is that in addition to Node.js itself, the installation also installs npm, which is an abbreviation for “Node Package Manager”. npm lets you install useful pieces of code into your project from the web. If you have a Microsoft background, think NuGet packages.

A Place For Code

Before we start installing packages, we’ll need a place to put our code. I just created a folder at “Documents\dev\nodejs\youRhere” for this project (I wasn’t cool enough to make the “R” backwards in the folder name)…

My code folder

Package Installation

Okay, now back to packages. Which packages will we need, exactly? First off, we’ll need two pretty popular ones – Socket.IO and Express.

Socket.IO is a library that makes it super easy to do real-time communication in web applications. It is compatible with a variety of browsers and communication technologies, and will select the “best” technology and gracefully fall back to “less optimal” technologies when needed. It defaults to using HTML5 Web Sockets, for example, but will fall back to using Adobe Flash sockets if your browser is not HTML5-compliant. Don’t have Flash installed? Then it will fall back again to use AJAX long polling techniques. This pattern continues until it runs out of options and deems your browser too ancient to use, or finds a compatible technology. To install Socket.IO, just fire up ye ol’ bash shell (called “Terminal” in OSX, found in Finder->Applications->Utilities), navigate to the code folder we created in the previous section, and install the package using the syntax “npm install <pkg>”. “But I don’t know bash commands, and I don’t know how to do a Google search to find them!”, you say? Never fear. Just check out the screenshots below for more info…

Running Terminal from a Finder window

Terminal (Bash shell)

Navigate to code folder

Install the package

Express is the second package we’ll need. The Express web framework makes it easy to do routing, render views on the client, and generally eases the process of making Node.js-based web applications. To install it, just run “npm install express” in the same Bash window.

Install Express

One other package we know we’ll need to install is called “restler”. Restler makes it easy to work with REST-based APIs like the one used by Target Process. Installing it is similar to the others…

Install Restler
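
Put together, the whole Terminal session looks something like this (adjust the path to wherever you put your code)…

# navigate to the project folder, then install each package
cd ~/Documents/dev/nodejs/youRhere
npm install socket.io
npm install express
npm install restler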


Next Time

Now that we have the environment and all (hopefully) of the packages that we’ll be needing, next time we’ll be able to start coding the application.


you-Я-here (or “Where Node Target Process Has Gone Before”) – Part 1

One of the things I’ve been meaning to take a look at for a while is Node.js. I’ve always had an interest in JavaScript, and both loved and hated it in years past. It would seem only natural that a super-fast, server-side flavor of the language based on Google’s JavaScript runtime might be interesting as well.

To do that, I need a self-assigned project. To make that project more real-world, inspiring, something about which I could be passionate, or what have you, I searched around for a bit to find a Node.js project that seemed like something that might actually be used (or at least have the potential to be used) by real people. To prevent the related blog posts from taking a year, I also searched around for something small enough that I could crank out in a few nights of work at home.

The Idea

When I mentioned this to @ryoe_ok, he had a great idea to enhance our agile iteration demo experience for remote attendees. The basic premise was something like this: At our company, we currently use GoToMeeting for agile iteration demos. The product owner and other stakeholders, scrum master, and development team all attend the demos. Sometimes, however, the current feature being demonstrated isn’t as clear as it could be. In order to help with that, I could create a web page. People attending the demo would open the web page in addition to GoToMeeting. The page would simply list the user story currently being demonstrated, and the name of the developer currently demonstrating it. After the demo of that story is finished, the story (and perhaps the name and picture of the demonstrator) would change in real-time, making it very clear for anyone viewing the page exactly what is going on at the moment. He called it “You are here.”

The asynchronous nature of Node.js and its support for web sockets seemed a great fit for the idea. Shortly after finishing the discussion with @ryoe_ok, I promptly stole (er, borrowed) his idea, and then elaborated on it a bit. First (just for fun), I thought about the name. I changed it to the more hipster-esque “you-Я-here”. Second, I actually thought of something useful. We are currently using Target Process to help us do agile project management in our semi-distributed team. It has a pretty extensive API for querying user stories, bugs, and other objects. If I could somehow use Node.js to communicate with that API, no one would need to type in the user stories that needed to be demoed. The web page could just figure that out based upon the current date. Just think of all those poor keystrokes we could save!

The UX

Now that the general idea is in place, we need to take a few minutes to flesh out some semblance of a design for the user experience. As far as wireframe tools go, my first choice would probably be Balsamiq but, since it costs around $80 at the time of this writing and I’m too cheap to buy it (let’s call it “austerity policies”… fine, I’m trying to be frugal), I’ll use Lumzy instead, which is free forever (as far as you know). Here is what I have as a first-pass…

First draft of user interface design

That probably won’t win any design awards, but it shows the general idea for now. We can come back later and make things better if we need. Perhaps, for example, some visual cue could be given such as a fade in/fade out when the data changes.

Next Steps

Now that we know what we want to do, we need to install Node.js and set up the development environment. We’ll be looking at that in the next post.


Octopus and TFS (or “Ned Land Ain’t Harpoonin’ This One”)

One of the tools that I watched a bit during its infancy was the Octopus deployment tool. Unfortunately, I never got around to actually doing anything with it after it became usable. Recently, awesome blogger and DBA Rob Sullivan mentioned over lunch that he was using it, and my interest was again sparked. Since Octopus is still in beta (and consequently free) at the time of this post, I decided that I would see how hard it would be for me to use it to deploy a really simple project.

Setting Up The VM

After reading through the Octopus installation guide, I learned that Octopus (the deployment tool suite) was made up of two parts – a central server (also sometimes referred to as just “Octopus”), and a number of remote agents called “Tentacles”. The Octopus orchestrates things, and the Tentacles (which are to be installed on every deploy location) are responsible for receiving install packages, extracting them, putting them in the correct locations, and executing other commands sent out by the central server.

I decided to start by setting up the central server. I wanted to make a dedicated VM for this and, since I have a background that is primarily ’softie in nature, I fired up a VM in Hyper-V and installed Windows Server 2008 R2.

Excited and eager to get going after waiting 200 years for my VM install to finish, I blindly tried to run the central server installation. I was met, however, with a nice message indicating that Win2K8 R2 SP1 was needed before I could proceed. Deciding to read some more docs and see what else might be missing, I found the requirements on the Installing Octopus page and pored over them for a bit. Then, after installing SP1, going through a couple of passes in the Server Manager to add all of the listed Roles and Features, installing the .Net Framework 4, and running %WinDir%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i to make sure IIS had the root bound to the 4.0 Framework, I was nearly ready to go.

Octopus requires a Microsoft SQL Server database. It isn’t currently specified in the docs (at least not that I could see) which versions it would run against, but it did say that it could use SQL Express, and provided a link to SQL Express 2008 R2. Since SQL Server 2012 recently came out and I haven’t had much hands-on time with it yet, I decided to use that, and installed a local copy on the VM.

Installing The Octopus Central Service

So now I finally had a VM setup and ready to go. Moving back to installing Octopus again, everything now went pretty easily. I just followed the steps under the “Installation” section on the Installing Octopus page. There were a couple of minor things I had to do that weren’t as simple as clicking through the install wizard, however.

The first was that I had to use a different service user for the Octopus Windows Service that got installed, and for the user of the Octopus Portal application pool that it created in IIS. The documentation describes this procedure under the Note for Remote SQL Server Instances but, for whatever reason (most likely the Network Service account didn’t have permissions to my non-express SQL 2012 database), I had to do it even though I was using a local install. In my case, I created an Active Directory user called octopus, gave him the appropriate rights to run the Octopus Windows service and to access the SQL 2012 database, and was off to the races.

The second thing I did was delete the Default Web Site in IIS on the VM and let the Octopus installer create its Octopus Portal web site. I could have let it install another web site on a different port, but since this VM is only going to be used for Octopus, I wanted to be able to type http://machinename into my web browser instead of http://machinename:<portnumber> (you can only type so many keystrokes before you die, you know).

Installing Tentacles

Making sure to copy the certificate digest string from the Octopus installation (you can get it later if needed by clicking Start->Octopus->Server Tools on the central server), I then started installing Tentacles. In my particular situation, there are three separate staging areas – development, beta, and production. Each of these staging areas has a web farm with at least two web servers. Additional servers exist in each environment acting as database servers, processing machines running Windows services, queues, etc. To keep things simple, I just installed a single Tentacle on just one of the web servers in the development staging area, and temporarily suspended the other web servers in the farm.

Installation for the Tentacle itself was straightforward. I just followed the steps on the Installing Tentacle page, making sure that the prerequisites were all there first. In my case, all I had to do was remember to add an exception to the Windows Firewall on my development web server machine. The only other notable thing was at one point in the installation I pasted the digest string that I copied from the central server install earlier. This enables trust between the Octopus central server and the Tentacle.

After installing the Tentacle, I tried testing it by browsing to http://<machinename>:10933 as described in the documentation. After a long timeout, I was greeted with the following message in IE…

Couldn’t connect to tentacle. No bueno.

The problem with this was a permissions issue, similar to the one with using the Network Service user as the service and application pool user on the central server. In a similar fashion, I switched both out to use my new octopus Active Directory user that I had created earlier, and then the tentacle came up fine…

Tentacle service after setting service/app pool user. Awesome!

Selecting An Application To Deploy

Now that I had both the Octopus and Tentacle installs finished, I said to myself, “Self, before we move on, we should find an application to deploy.” After a few minutes of thinking, we (er, I) decided to use a dirt-simple MVC4 Web API application that I had written a few weeks earlier. That app is a fake version of a real app that would run inside of a customer’s network (primarily used for testing). It simply allows a user to send over an HTTPGet for an RO or PO number (it isn’t important here what RO and PO numbers are) like so…

http://<webserver>/lookups/restlookup/ro/<ronumber>/?$callback=jsonpcallback

…and get back a JSONP response containing some information related to that RO, like so…

jsonpcallback({
    "ACRecords": [
        {
            "ID": "<ronumber>",
            "MePartNumber": "<mepartnumber>",
            "MeSerialNumber": <meserialnumber>,
            "MfgPartNumber": "<mfgpartnumber>",
            "MfgSerialNumber": "<mfgserialnumber>",
            "VendorID": <vendorid>,
            "VendorName": "<vendorname>"
        }
    ],
    "Message": <messageiferror>,
    "Success": <true|false>
});

If you run it in Visual Studio and start the default page, it has a list of links that you can click on to submit the HTTPGet request and have it spit out the corresponding JSONP response.

Our flagship application to deploy – the great and powerful “Sample Lookup Provider”

Getting TFS To Build The NuGet Packages

This is where things started getting slightly more difficult (at least for me). After reading more of the Octopus documentation on the Packaging page, I learned/remembered that Octopus works with NuGet packages. I don’t know much at all about creating NuGet packages. Great. So I could use something like TFS NuGetter to have TFS build a standard NuGet package, right? Wrong. There are subtle differences between a standard NuGet package and what Octopus needs. To handle these differences, Paul Stovell created OctoPack, a tool that hooks into your build and produces the Octopus-specific NuGet packages.

At this point I kept focusing on getting TFS to make the NuGet package for some reason. After scanning around the Octopus docs, it appeared that there is currently more/better/easier support for using Team City as a CI/CD server over TFS. I didn’t try and find out whether this was on purpose, whether TFS just makes things too hard, or whether Team City just makes things really easy (although I suspect the former). I did some quick Googling to find out if anyone had a good way to get TFS to automate OctoPack-based NuGet builds, and I came across a couple of posts (1, 2) showing how to create custom WF activities to do it. The tough part about that is that while I have used MSBuild a bit, I know almost nothing about TFS Build.

At some later point in time, I came to my senses and realized that instead of initially focusing on automated builds, I should first try and get it building the NuGet package by hand, and then try to get automated builds going later (baby-steps, right?).

So, I decided to go back to the docs and learn enough about OctoPack to manually make a NuGet package with it. I updated to the latest version of NuGet on my machine (which for some reason requires me to always uninstall and reinstall it – clicking the “Update” button from inside the VS Extension Manager window seems to do nothing). Then I ran “Install-Package OctoPack” from the Package Manager Console window. That seemed to install fine.

I added a NuSpec file as described in the documentation. (Be sure to enter some sort of URL for the licenseUrl and projectUrl elements. If you don’t, you’ll get an error.) Then I switched over to Release and built my solution. Holy cow, it built a NuGet package for me! That was way easier than building custom workflow activities for TFS Build.
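
For reference, the NuSpec file looked something like this (the metadata values here are illustrative)…

<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <metadata>
    <id>RestLookupProvider</id>
    <version>1.0.0</version>
    <authors>Me</authors>
    <description>The great and powerful Sample Lookup Provider</description>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <!-- leave these two out (or empty) and the build errors out -->
    <licenseUrl>http://example.com/license</licenseUrl>
    <projectUrl>http://example.com</projectUrl>
  </metadata>
</package>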

Hold the phone, though: after reading more in the documentation, it turns out that by default all code files are included in the NuGet package. You have to run the following command from the Windows command prompt to prevent that from happening…

msbuild SampleLookupProvider.csproj "/t:Rebuild" "/t:ResolveReferences" "/t:_CopyWebApplication" /p:Configuration=Release /p:WebProjectOutputDir=publish\ /p:OutDir=publish\bin\

That’s no fun. How the heck was I going to get TFS to do that? The thing I tried next was pretty bone-headed (it was getting late). I installed MVC4 for VS2010 on my development web server (the one where I installed the Tentacle), set up a TFS Build Agent to automatically build the Sample Lookup Provider whenever it was checked in, and then added a post-build event to the project that would run the command line above. Then I checked in my code. Surely that would work, right?

Well, the first thing that happened is that I got this error when TFS tried to run the build. It worked on my machine, but not in TFS…

C:\<SourcesTFSPath>\Lookups\LookupProviders\RestLookupProvider\RestLookupProvider.csproj (243): The imported project “C:\<SourcesTFSPath>\Lookups\LookupProviders\packages\OctoPack.1.0.94\targets\OctoPack.targets” was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

This was an issue that I had faced before when using NuGet packages. For whatever reason, the NuGet team didn’t really have source control in mind when building NuGet. Because of that, you have to manually check in the .targets file that is normally in the <MyProject>\..\packages\<packagename>\targets\<whatever>.targets path. I have seen PowerShell scripts that work around it, but nothing really easy. At any rate, after checking the targets file into TFS, I got past the problem without too much trouble.

The really awesome thing about doing my little post-build event was discovered next. TFS started its build in response to checking in the targets file, but never finished. The reason was that the command line in the post-build event would execute the build instructions in the SampleLookupProvider.csproj file (itself) which, in turn, would execute the command line in the same post-build event again. This created an infinite build loop in TFS. Le sigh.

So, back to the documentation. If I had read just a few more lines down the page, I would have learned that TFS automatically publishes ASP.Net websites into a _PublishedWebSites folder, so there was no need for me to run that command line at all. Le sigh again.

NuGet Feeds

Now I had a proper OctoPack-created NuGet package on the TFS build server. I went back to the documentation to review something I had seen earlier on the Packaging page about Octopus needing to get its NuGet packages from a NuGet server. Luckily I didn’t know anything about creating NuGet servers, so I clicked the remote feed and local feed links to read about them in the NuGet documentation. Local feeds looked to be the easier of the two, as they seemed to just involve putting NuGet packages out on a UNC file share. The remote feed one required building an ASP.Net website and deploying it out to IIS to let it host packages over http. I decided to go the local feeds route.

If I understood local feeds properly, all I would need to do would be to find the simplest and best way to make TFS copy my NuGet package over to a well-known UNC path after the build was complete. After stumbling around in TFS a bit, I finally found that I could change the default output path in the OctoPack.targets file to be a shared UNC path. I checked in the targets file, let the build run again, and verified that TFS was indeed copying it out to the correct location.

Using The Octopus UI

Right. Now I needed to go to the Octopus dashboard and set everything up. Since I had previously deleted the default IIS website on my VM and let Octopus use port 80 for its Octopus Portal site, all I needed to do was type in the name of my server in a web browser. In my case the server name was devoctopus.

This brought up the dashboard fine, and the first thing I did was set up an environment. For my research, all I needed was a single environment, which I called development. I gave it a name and description for the environment, and then added the name and Tentacle URL for my single dev machine (http://<servername>:10933/).

The Octopus Environments Tab

After that, I went over to the Settings tab to setup my feed. I just added a new feed, gave it a name, and the UNC path where TFS was putting my NuGet packages. I did not specify authentication, but instead made sure that the user running the Octopus Portal application pool had access to the UNC path.

The Octopus Settings Tab

To test the feed, I clicked on the test link on the right-hand side of the page. That took me to a new page with a Search button and a textbox. This page threw me at first because the search is case sensitive. My package, for example, was called RestLookupProvider 1.0.0. If I searched for “rest”, it would return nothing, but if I searched for “Rest”, it would return my package, indicating that the NuGet feed settings were working correctly.

This search is case sensitive

An uppercase “R” made all the difference in the world

After making sure Octopus could find my package, I moved on to the Projects tab, where I added a single project named AirVault. Inside of that project, I added a single deployment step that would deploy my RestLookupProvider 1.0.0 package to my single development machine. I created a new release, called 6.1.0, and ran it.

A Couple of Issues

The deploy finished, but I couldn’t see that my website had been deployed. After looking again at the docs, I discovered that you have to manually create the website once. I did that and then ran the deploy again. Still no files were copied. Upon looking at the logs under the Tasks tab, I saw this output message…

2012-05-05 05:19:40 WARN Could not find an IIS website or virtual directory named ‘RestLookupProvider’ on the local machine. If you expected Octopus to update this for you, you should create the site and/or virtual directory manually. Otherwise you can ignore this message.

Ah, that made sense. My web application was at path /lookups/restlookup/, but it looked like Octopus used the convention that the project name was exactly the same as the web application path name. After digging around some more, I found on the aptly named Variables page that you can set a predefined variable called OctopusWebSiteName to override the convention.

I did so, and set its value to Default Web Site/lookups/restlookup (it wants you to include the web site name in the path), and then redeployed. Still no files were copied, and still I got the same “Could not find an IIS website…” message. What the heck?

I looked again in the logs on the Tasks tab, and found that it was not using my variable. As it turns out, you have to have the variables that you want in place before you create a release. It looks like their values get “locked in” when you create the release, and there doesn’t appear to be a way to change them after that. I made a new release, ran it, and voilà – it used my variable and worked like a champ!

Again I looked at the logs, and noticed this message…

The IIS website named ‘Default Web Site/lookups/restlookup’ has had its path updated to: ‘C:\Octopus\Tentacle\Applications\RestLookupProvider.1.0.0’

That’s weird. I would have expected it to copy to the web site path that existed and overwrite the files, instead of changing the web application to point to some Octopus folder. When I went into IIS and checked the path, sure enough, it had changed it…

By default, Octopus changes the web application folder instead of using the existing one

Going back to the docs again, I found that the predefined variable named OctopusPackageDirectoryPath controls the physical path to use. I added that, scoped it to my Step, set its value to c:\inetpub\wwwroot\lookups\restlookup, and then created and ran a new release. That resolved the issue, and deployed the files to the expected directory.

Summary

Overall, Octopus looks really promising for what I do. While integrating with TFS could have been easier, I am not sure that is any fault of Octopus itself. Perhaps a bit of extra documentation (for NuGet dummies) on how to integrate with TFS would have sped me through a couple of the issues that I had. Aside from that, however, Octopus is definitely something that I’d recommend for shops primarily using Microsoft tech. In my opinion, it is much more manageable than using TFS Build and PowerShell scripts, and it definitely beats the heck out of deploying things manually.
