Octopus and TFS (or “Ned Land Ain’t Harpoonin’ This One”)

One of the tools that I watched a bit during its infancy was the Octopus deployment tool. Unfortunately, I never got around to actually doing anything with it after it became usable. Recently, awesome blogger and DBA Rob Sullivan mentioned over lunch that he was using it, and my interest was again sparked. Octopus was still in beta (and consequently free) at the time of this post, so I decided to see how hard it would be for me to use it to deploy a really simple project.

Setting Up The VM

After reading through the Octopus installation guide, I learned that Octopus (the deployment tool suite) is made up of two parts – a central server (also sometimes referred to as just “Octopus”) and a number of remote agents called “Tentacles”. The Octopus server orchestrates things, and the Tentacles (which are installed on every deployment target) are responsible for receiving install packages, extracting them, putting them in the correct locations, and executing other commands sent out by the central server.

I decided to start by setting up the central server. I wanted to make a dedicated VM for this and, since I have a background that is primarily ‘softie in nature, I fired up a VM in Hyper-V and installed Windows Server 2008 R2.

Excited and eager to get going after waiting 200 years for my VM install to finish, I blindly tried to run the central server installation. I was met, however, with a nice message indicating that Win2K8 R2 SP1 was needed before I could proceed. Deciding to read some more docs and see what else might be missing, I found the requirements on the Installing Octopus page and pored over them for a bit. After installing SP1, making a couple of passes through the Server Manager to add all of the listed Roles and Features, installing the .Net Framework 4, and running %WinDir%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i to make sure the IIS root was bound to the 4.0 Framework, I was nearly ready to go.

Octopus requires a Microsoft SQL Server database. It isn’t currently specified in the docs (at least not that I could see) which versions it would run against, but it did say that it could use SQL Express, and provided a link to SQL Express 2008 R2. Since SQL Server 2012 recently came out and I haven’t had much hands-on time with it yet, I decided to use that, and installed a local copy on the VM.

Installing The Octopus Central Service

So now I finally had a VM set up and ready to go. Moving back to the Octopus install, everything went pretty smoothly this time. I just followed the steps under the “Installation” section on the Installing Octopus page. There were a couple of minor things I had to do that weren’t as simple as clicking through the install wizard, however.

The first was that I had to use a different service account for both the Octopus Windows Service that got installed and the Octopus Portal application pool that it created in IIS. The documentation describes this procedure under the Note for Remote SQL Server Instances, but for whatever reason (most likely the Network Service account didn’t have permissions to my non-express SQL 2012 database) I had to do it even though I was using a local install. In my case, I created an Active Directory user called octopus, gave him the appropriate rights to run the Octopus Windows service and to access the SQL 2012 database, and was off to the races.
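
For the SQL side of that, the grants amounted to something like the following from a command prompt on the VM. Treat this strictly as a sketch – the MYDOMAIN domain, the Octopus database name, and the db_owner role are all my assumptions, not anything from the Octopus docs…

REM Sketch only – domain name, database name, and role are assumptions
sqlcmd -S localhost -Q "CREATE LOGIN [MYDOMAIN\octopus] FROM WINDOWS;"
sqlcmd -S localhost -d Octopus -Q "CREATE USER [MYDOMAIN\octopus] FOR LOGIN [MYDOMAIN\octopus]; ALTER ROLE db_owner ADD MEMBER [MYDOMAIN\octopus];"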

The second thing I did was delete the Default Web Site in IIS on the VM and let the Octopus installer create its Octopus Portal web site. I could have let it install another web site on a different port, but since this VM is only going to be used for Octopus, I wanted to be able to type http://machinename into my web browser instead of http://machinename:<portnumber> (you can only type so many keystrokes before you die, you know).
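
If you’d rather not click through IIS Manager, the same delete can be done from an elevated command prompt with appcmd, which ships with IIS 7…

REM Deletes the default IIS web site so Octopus Portal can take port 80
%windir%\system32\inetsrv\appcmd delete site "Default Web Site"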

Installing Tentacles

Making sure to copy the certificate digest string from the Octopus installation (you can get it later if needed by clicking Start->Octopus->Server Tools on the central server), I then started installing Tentacles. In my particular situation, there are three separate staging areas – development, beta, and production. Each of these staging areas has a web farm with at least two web servers. Additional servers exist in each environment acting as database servers, processing machines running Windows services, queues, etc. To keep things simple, I just installed a single Tentacle on one of the web servers in the development staging area, and temporarily suspended the other web servers in the farm.

Installation of the Tentacle itself was straightforward. I just followed the steps on the Installing Tentacle page, making sure that the prerequisites were all there first. In my case, all I had to do was remember to add an exception to the Windows Firewall on my development web server machine. The only other notable step was pasting in the digest string that I copied from the central server install earlier, which establishes trust between the Octopus central server and the Tentacle.
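
For the firewall piece, a single command from an elevated prompt on the web server does the trick – the rule name here is arbitrary, and 10933 is the Tentacle’s default listen port…

REM Allow inbound TCP traffic on the Tentacle's default port
netsh advfirewall firewall add rule name="Octopus Tentacle" dir=in action=allow protocol=TCP localport=10933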

After installing the Tentacle, I tried testing it by browsing to http://<machinename>:10933 as described in the documentation. After a long timeout, I was greeted with the following message in IE…

Couldn’t connect to tentacle. No bueno.

The problem here was a permissions issue, similar to the one with using the Network Service user as the service and application pool user on the central server. In a similar fashion, I switched both the Tentacle service and its application pool over to the new octopus Active Directory user that I had created earlier, and then the tentacle came up fine…

Tentacle service after setting service/app pool user. Awesome!

Selecting An Application To Deploy

Now that I had both the Octopus and Tentacle installs finished, I said to myself, “Self, before we move on, we should find an application to deploy.” After a few minutes of thinking, I decided to use a dirt-simple MVC4 Web API application that I had written a few weeks earlier. That app is a fake version of a real app that would run inside of a customer’s network (primarily used for testing). It simply allows a user to send an HTTP GET for an RO or PO number (it isn’t important here what RO and PO numbers are) like so…

http://<webserver>/lookups/restlookup/ro/<ronumber>/?$callback=jsonpcallback

…and get back a JSONP response containing some information related to that RO, like so…

jsonpcallback({
    "ACRecords": [
        {
            "ID": "<ronumber>",
            "MePartNumber": "<mepartnumber>",
            "MeSerialNumber": <meserialnumber>,
            "MfgPartNumber": "<mfgpartnumber>",
            "MfgSerialNumber": "<mfgserialnumber>",
            "VendorID": <vendorid>,
            "VendorName": "<vendorname>"
        }
    ],
    "Message": <messageiferror>,
    "Success": <true|false>
});

If you run it in Visual Studio and start the default page, it has a list of links that you can click on to submit the HTTP GET request and have it spit out the corresponding JSONP response.

Our flagship application to deploy – the great and powerful “Sample Lookup Provider”

Getting TFS To Build The NuGet Packages

This is where things started getting slightly more difficult (at least for me). After reading more of the Octopus documentation on the Packaging page, I learned/remembered that Octopus works with NuGet packages. I don’t know much at all about creating NuGet packages. Great. So I could use something like TFS NuGetter to have TFS build a standard NuGet package, right? Wrong. There are subtle differences between a standard NuGet package and what Octopus needs. To handle these differences, Paul Stovell created OctoPack, a build-time tool that produces the Octopus-flavored NuGet packages.

At this point I kept focusing on getting TFS to make the NuGet package for some reason. After scanning around the Octopus docs, it appeared that there is currently more/better/easier support for using Team City as a CI/CD server than for TFS. I didn’t try to find out whether this was on purpose, whether TFS just makes things too hard, or whether Team City just makes things really easy (although I suspect the former). I did some quick Googling to find out if anyone had a good way to get TFS to automate OctoPack-based NuGet builds, and I came across a couple of posts (1, 2) showing how to create custom WF activities to do it. The tough part about that is that while I have used MSBuild a bit, I know almost nothing about TFS Build.

At some later point in time, I came to my senses and realized that instead of initially focusing on automated builds, I should first try to get it building the NuGet package by hand, and then get automated builds going later (baby-steps, right?).

So, I decided to go back to the docs and learn enough about OctoPack to manually make a NuGet package with it. I updated to the latest version of NuGet on my machine (which for some reason requires me to always uninstall and reinstall it – clicking the “Update” button from inside the VS Extension Manager window seems to do nothing). Then I ran “Install-Package OctoPack” from the Package Manager Console window. That seemed to install fine.

I added a NuSpec file as described in the documentation. (Be sure to enter some sort of URL for the licenseUrl and projectUrl elements. If you don’t, you’ll get an error.) Then I switched over to Release and built my solution. Holy cow, it built a NuGet package for me! That was way easier than building custom workflow activities for TFS Build.
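
For reference, my NuSpec looked roughly like this – the id and version match my actual package, but the author and URLs below are placeholder values…

<?xml version="1.0"?>
<package>
  <metadata>
    <!-- id and version become the package name, e.g. RestLookupProvider 1.0.0 -->
    <id>RestLookupProvider</id>
    <version>1.0.0</version>
    <authors>Your Name Here</authors>
    <description>Sample Lookup Provider web application</description>
    <!-- licenseUrl and projectUrl must have some sort of URL or the build errors out -->
    <licenseUrl>http://example.com/license</licenseUrl>
    <projectUrl>http://example.com</projectUrl>
  </metadata>
</package>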

Hold the phone, though: after reading more in the documentation, it turns out that by default all code files are included in the NuGet package. You have to run the following command from the Windows command prompt to prevent that from happening…

msbuild SampleLookupProvider.csproj "/t:Rebuild" "/t:ResolveReferences" "/t:_CopyWebApplication" /p:Configuration=Release /p:WebProjectOutputDir=publish\ /p:OutDir=publish\bin\

That’s no fun. How the heck was I going to get TFS to do that? The thing I tried next was pretty bone-headed (it was getting late). I installed MVC4 for VS2010 on my development web server (the one where I installed the Tentacle), set up a TFS Build Agent to automatically build the Sample Lookup Provider whenever it was checked in, and then added a post-build event to the project that would run the command line above. Then I checked in my code. Surely that would work, right?

Well, the first thing that happened was that I got this error when TFS tried to run the build. It worked on my machine, but not in TFS…

C:\<SourcesTFSPath>\Lookups\LookupProviders\RestLookupProvider\RestLookupProvider.csproj (243): The imported project “C:\<SourcesTFSPath>\Lookups\LookupProviders\packages\OctoPack.1.0.94\targets\OctoPack.targets” was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

This was an issue that I had faced before when using NuGet packages. For whatever reason, the NuGet team didn’t really have source control in mind when building NuGet. Because of that, you have to manually check in the .targets file that normally lives at the <MyProject>\..\packages\<packagename>\targets\<whatever>.targets path. I have seen PowerShell scripts that work around it, but nothing really easy. At any rate, after checking the targets file into TFS, I got past the problem without too much trouble.
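
For the curious, the culprit is the relative Import element that NuGet adds to the .csproj. Reconstructed from the error message above (so treat the exact path as approximate), mine looked like this…

<Import Project="..\packages\OctoPack.1.0.94\targets\OctoPack.targets" />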

The really awesome thing about my little post-build event was discovered next. TFS started its build in response to checking in the targets file, but never finished. The reason was that the command line in the post-build event would execute the build instructions in the SampleLookupProvider.csproj file (itself) which, in turn, would execute the command line in the same post-build event again. This created an infinite build in TFS. Le sigh.
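
To make the recursion concrete, the post-build event amounted to roughly this fragment in the .csproj – a reconstruction on my part, not my exact property…

<PropertyGroup>
  <!-- Reconstruction: invoking msbuild on the project from its own
       post-build event re-triggers this same event, looping forever -->
  <PostBuildEvent>msbuild "$(ProjectDir)SampleLookupProvider.csproj" /t:Rebuild /t:ResolveReferences /t:_CopyWebApplication /p:Configuration=Release /p:WebProjectOutputDir=publish\ /p:OutDir=publish\bin\</PostBuildEvent>
</PropertyGroup>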

So, back to the documentation. If I had read just a few more lines down the page, I would have learned that TFS automatically publishes ASP.Net websites into a _PublishedWebsites folder, so there was no need for me to run that command line at all. Le sigh again.

NuGet Feeds

Now I had a proper OctoPack-created NuGet package on the TFS build server. I went back to the documentation to review something I had seen earlier on the Packaging page about Octopus needing to get its NuGet packages from a NuGet server. Naturally, I didn’t know anything about creating NuGet servers either, so I clicked the remote feed and local feed links to read about them in the NuGet documentation. Local feeds looked to be the easier of the two, as they just involve putting NuGet packages out on a UNC file share. The remote feed option required building an ASP.Net website and deploying it to IIS to host packages over HTTP. I decided to go the local feed route.

If I understood local feeds properly, all I needed to do was find the simplest way to make TFS copy my NuGet package over to a well-known UNC path after the build completed. After stumbling around in TFS a bit, I finally found that I could change the default output path in the OctoPack.targets file to be a shared UNC path. I checked in the targets file, let the build run again, and verified that TFS was indeed copying the package out to the correct location.
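
If you’d rather not edit OctoPack.targets itself, I suspect a small custom MSBuild target along these lines would accomplish the same copy – note that the target name, the share path, and the assumption that the .nupkg lands in $(OutDir) are all mine, not OctoPack’s…

<Target Name="CopyPackageToFeed" AfterTargets="Build">
  <ItemGroup>
    <BuiltPackage Include="$(OutDir)*.nupkg" />
  </ItemGroup>
  <!-- Copy the freshly built package to the UNC share backing the local feed -->
  <Copy SourceFiles="@(BuiltPackage)" DestinationFolder="\\devoctopus\NuGetFeed" />
</Target>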

Using The Octopus UI

Right. Now I needed to go to the Octopus dashboard and set everything up. Since I had previously deleted the default IIS website on my VM and let Octopus use port 80 for its Octopus Portal site, all I needed to do was type the name of my server into a web browser. In my case the server name was devoctopus.

This brought up the dashboard fine, and the first thing I did was set up an environment. For my research, all I needed was a single environment, which I called development. I gave it a name and description, and then added the name and Tentacle URL for my single dev machine (http://<servername>:10933/).

The Octopus Environments Tab

After that, I went over to the Settings tab to set up my feed. I just added a new feed, gave it a name, and pointed it at the UNC path where TFS was putting my NuGet packages. I did not specify authentication, but instead made sure that the user running the Octopus Portal application pool had access to the UNC path.

The Octopus Settings Tab

To test the feed, I clicked the test link on the right-hand side of the page. That took me to a new page with a Search button and a textbox. This page threw me at first because the search is case-sensitive. My package, for example, was called RestLookupProvider 1.0.0. If I searched for “rest”, it would return nothing, but if I searched for “Rest”, it would return my package, indicating that the NuGet feed settings were working correctly.

This search is case sensitive

An uppercase “R” made all the difference in the world

After making sure Octopus could find my package, I moved on to the Projects tab, where I added a single project named AirVault. Inside of that project, I added a single deployment step that would deploy my RestLookupProvider 1.0.0 package to my single development machine. I created a new release, called 6.1.0, and ran it.

A Couple of Issues

The deploy finished, but I couldn’t see that my website had been deployed. After looking again at the docs, I discovered that you have to manually create the website once. I did that and then ran the deploy again. Still no files were copied. Upon looking at the logs under the Tasks tab, I saw this output message…

2012-05-05 05:19:40 WARN Could not find an IIS website or virtual directory named ‘RestLookupProvider’ on the local machine. If you expected Octopus to update this for you, you should create the site and/or virtual directory manually. Otherwise you can ignore this message.

Ah, that made sense. My web application was at path /lookups/restlookup/, but it looked like Octopus used the convention that the project name was exactly the same as the web application path name. After digging around some more, I found on the aptly named Variables page that you can set a predefined variable called OctopusWebSiteName to override the convention.

I did so, and set its value to Default Web Site/lookups/restlookup (it wants you to include the web site name in the path), and then redeployed. Still no files were copied, and I still got the same “Could not find an IIS website…” message. What the heck?

I looked again in the logs on the Tasks tab, and found that it was not using my variable. As it turns out, you have to have the variables that you want in place before you create a release. It looks like their values get “locked in” when the release is created, and there doesn’t appear to be a way to change them after that. I made a new release, ran it, and voila – it used my variable and worked like a champ!

Again I looked at the logs, and noticed this message…

The IIS website named ‘Default Web Site/lookups/restlookup’ has had its path updated to: ‘C:\Octopus\Tentacle\Applications\RestLookupProvider.1.0.0’

That’s weird. I would have expected it to copy to the web site path that existed and overwrite the files instead of changing the web application to point to some Octopus folder. When I went into IIS and checked the path, sure enough, it had changed…

By default, Octopus changes the web application folder instead of using the existing one

Going back to the docs again, the predefined variable named OctopusPackageDirectoryPath controls the physical path used. I added that variable, scoped it to my Step, set its value to c:\inetpub\wwwroot\lookups\restlookup, and then created and ran a new release. That resolved the issue and deployed the files to the expected directory.

Summary

Overall, Octopus looks really promising for what I do. While integrating with TFS could have been easier, I am not sure that is any fault of Octopus itself. Perhaps a bit of extra documentation (for NuGet dummies) on how to integrate with TFS would have sped me through a couple of the issues that I had. Aside from that, however, Octopus is definitely something that I’d recommend for shops primarily using Microsoft tech. In my opinion, it is much more manageable than using TFS Build and PowerShell scripts, and it definitely beats the heck out of deploying things manually.
