Windows Azure Active Directory: Querying the Graph API for group memberships

This is the third part in my series of posts on Windows Azure AD. In this post I go through the steps required to get your application talking to the AD Graph API, and the things that stumped me along the way. The reason we’re interested in the Graph API to begin with is that WAAD Authentication provides us with the user’s identity, but doesn’t tell us anything else about them. If we want more details, such as group memberships, we need to do some extra work.

Dependencies

We begin this part by adding a new Service Reference to the project. For the address of the service, type https://graph.windows.net/offbeatdemo.onmicrosoft.com (replacing offbeatdemo.onmicrosoft.com with your own tenant domain name) and click “Go”. After a moment of waiting, Visual Studio should generate the necessary metadata files. Here you can give your service a namespace – remember, it’s relative to your project’s default namespace, so let’s call it “ActiveDirectoryGraph”.

vs-graph-reference

Once you hit OK, the service reference is generated as per usual. This is where the fun begins. Go to the References node in your project, right-click and select “Manage NuGet Packages…”. Select “Online” from the left, click on the search box and type in “Microsoft.WindowsAzure.ActiveDirectory.Authentication” and you should see a single matching package.

vs-nuget-aal

Click on “Install” to have NuGet fetch the library for you. The library in question is the Windows Azure Authentication Library, or AAL for short. It has all kinds of features, but the reason we need it is to authenticate our application so it can make Graph API requests. Somewhat counterintuitively, AAL depends on System.Windows.Forms of all things, so before your code will compile, you will need to add that as a reference to your project.

Using the API

Create a new class in your project – you can call it what you like, but I’m calling mine “GraphClient”. Paste in the code from my gist on GitHub – you may have to change some namespaces, but if you’ve added the service reference and the AAL package correctly, it should compile as is:
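In rough outline, the class looks something like this (a simplified sketch: the Graph URL is only illustrative, and the AAL token acquisition and response parsing are stubbed out, so use the gist for the real, working code):

using System;
using System.Collections.Generic;
using System.Net;

// Simplified sketch only; the working implementation is in the gist. The Graph
// resource URL is illustrative, and the AAL token acquisition and response
// parsing are stubbed out here.
public class GraphClient {
    private readonly string tenantDomain;
    private readonly string tenantId;
    private readonly string appPrincipalId;
    private readonly string symmetricKey;

    public GraphClient(string tenantDomain, string tenantId, string appPrincipalId, string symmetricKey) {
        this.tenantDomain = tenantDomain;
        this.tenantId = tenantId;
        this.appPrincipalId = appPrincipalId;
        this.symmetricKey = symmetricKey;
    }

    public IEnumerable<string> GetUserGroups(string userName) {
        // Authenticate the application first: the tenant id, app principal id and
        // symmetric key are fed to AAL to get a token for the Graph service.
        var token = AcquireGraphToken();

        // Then call the Graph API with the token in the Authorization header and
        // pick the group names out of the response.
        using (var client = new WebClient()) {
            client.Headers.Add("Authorization", token);
            var response = client.DownloadString(
                string.Format("https://graph.windows.net/{0}/users/{1}/memberOf", tenantDomain, userName));
            return ParseGroupNames(response);
        }
    }

    private string AcquireGraphToken() {
        // AAL-based token acquisition omitted in this sketch; see the gist.
        throw new NotImplementedException();
    }

    private IEnumerable<string> ParseGroupNames(string response) {
        // Response parsing omitted in this sketch; see the gist.
        throw new NotImplementedException();
    }
}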


I’m not going to go through the implementation details of the class now – if you’ve got questions or feedback, leave a comment here or in the gist at GitHub. To use this class, you need to find out some details about your tenant domain and Service Principal. It’s not entirely obvious why, but the only place where I’ve found the tenant id so far is the FederationMetadata.xml file (as documented in the article Integrating Multi-Tenant Cloud Applications with Windows Azure Active Directory) – note that the file I linked to is the federation metadata for my demo tenant domain, so you need to change the URL to get your own metadata. Locate the entityID attribute on the EntityDescriptor tag.

federation-metadata-tenantid

The entityID attribute looks like this: spn:00000001-0000-0000-c000-000000000000@a071bf68-ee1d-46aa-ac6d-cfddf3826050 – the part after the @ sign is the tenant id. Yours will be different from mine of course – copy the id somewhere handy, because you will need it soon. You’re also going to need the app principal id of the service principal you created in the previous installment. Finally, you’ll need to grant your application the right to read the directory, and create a symmetric key known to both WAAD and your application – it’s a key you will use to sign your requests, so WAAD will know it’s you making them. Here’s how you do all that: open the MSOL module for PowerShell and use Connect-MsolService to connect to your WAAD tenant. Run Import-Module MSOnlineExtended -Force to make all the commandlets available, and then run Get-MsolServicePrincipal | select DisplayName,AppPrincipalId to list your service principals.

PS C:\Windows\system32> Get-MsolServicePrincipal | select DisplayName,AppPrincipalId

DisplayName                             AppPrincipalId
-----------                             --------------
Microsoft.Azure.ActiveDirectory         00000002-0000-0000-c000-000000000000
Microsoft.Azure.ActiveDirectoryUX       0000000c-0000-0000-c000-000000000000
WAADDemo App                            e8a3050f-0c61-46bd-9808-ff7dd5dcdb4b

The resulting list will show you the app principal id (in this case, e8a3050f-0c61-46bd-9808-ff7dd5dcdb4b) – copy it somewhere handy. Next, we use that to create a new symmetric key with New-MsolServicePrincipalCredential:

PS C:\Windows\system32> New-MsolServicePrincipalCredential -AppPrincipalId e8a3050f-0c61-46bd-9808-ff7dd5dcdb4b
The following symmetric key was created as one was not supplied JH0QbohY5/+IW25zzukjuwPjr6mpnMhgicgVA4SfF9A=

Save the symmetric key somewhere too. As far as I know, there is no way to restore the value if you lose it – you’ll have to create a new key instead. You’ve got everything your application needs, but there’s still one more thing left to do: granting privileges.

PS C:\Windows\system32> Add-MsolRoleMember -RoleName "Service Support Administrator" `
 -RoleMemberType ServicePrincipal -RoleMemberObjectId 3dc125e6-d518-40d2-9392-87a03dac8f68

The “Service Support Administrator” role grants read-only access to the directory. The RoleMemberType parameter is needed to inform WAAD that we’re granting the privilege to an application instead of a user or a group, and the RoleMemberObjectId parameter identifies the Service Principal. Note that the Object Id is different from the AppPrincipalId we used earlier – if you don’t have the Object Id written down anywhere, you can use Get-MsolServicePrincipal | select DisplayName,ObjectId to get it. Now, you can use the collected values to create an instance of GraphClient and use it. Add this to a controller action:

var client = new GraphClient("offbeatdemo.onmicrosoft.com", "a071bf68-ee1d-46aa-ac6d-cfddf3826050",
 "e8a3050f-0c61-46bd-9808-ff7dd5dcdb4b", "JH0QbohY5/+IW25zzukjuwPjr6mpnMhgicgVA4SfF9A=");
 
var groups = client.GetUserGroups(User.Identity.Name);

return View(groups);

Now you should be able to display your user’s group memberships in the view:

waad-app-with-groups

Wrapping up

None of this was particularly difficult to do from a programming perspective – the trick is finding all the information you need! It doesn’t much help that most of the examples out there deal with either SaaS applications that are registered in the Seller Dashboard or applications that use ACS instead of WAAD. The steps are largely the same, but the places where you look for IDs and keys are wildly different. Merely listing the user’s group memberships isn’t very interesting. Now that we’ve got them, we could use them for authorization within the application – which is what we’re doing in the next part of this series.

Windows Azure Active Directory Authentication: Setting up the application

Continuing where we left off in the previous post, we’ve got all the tools, so the next step is setting up the authentication itself. You probably don’t want to make a mess of the working state of your actual application. Or perhaps you don’t yet have an application to test with. For this walkthrough, you can start with a new ASP.NET MVC 4 project from Visual Studio and select Internet Application as the template. The template contains a layout that displays the current logged-in user, which demonstrates that the authentication mechanism integrates nicely with the rest of ASP.NET. After you’ve got your application open in Visual Studio, right-click on the project in Solution Explorer and select Enable Windows Azure Authentication.

vs-enable-waad-auth

Clicking on the menu item opens up a dialog that asks for your tenant domain name.

vs-enable-waad-auth-2

That is, unless it throws you a COMException instead. If that happens, ensure that you’ve got an IIS or IIS Express virtual directory set up for your web application. Visual Studio doesn’t let you misconfigure the virtual directory settings like this, but it might still happen if you’re running your web app as an Azure Web Role in the Compute Emulator.

Once you type in your tenant domain name, you’ll be prompted to log in. Log in with an account that has administrative access (for example, the account you created during the tenant setup process).

vs-enable-waad-auth-3

The prompt uses IE, so if you happen to be logged in to an IE session with a Microsoft Online account that doesn’t exist in the tenant domain, you might end up with an error instead. This happens to me quite often, because I use our Office 365 apps with IE.

vs-enable-waad-auth-error

The easiest way to get rid of this error is to cancel the WAAD authentication setup, open up the Visual Studio web browser (View –> Other Windows –> Web Browser) and use it to navigate to the Microsoft Online logout page. Once the logout process is complete, close the Visual Studio web browser and try the WAAD authentication setup again. At this point, if everything went well, you’ll be presented with a happy green check mark.

vs-enable-waad-auth-4

You’ll have a number of changes in your Web.config, mostly under system.identityModel:

vs-enable-waad-auth-result

The default authentication module for ASP.NET applications is the Forms Authentication module. You can only use a single authentication module for your application, and since the Forms Authentication module is configured at the web server level, it takes precedence. So make sure that your application doesn’t use the Forms Authentication module by adding this under system.webServer/modules:

<remove name="FormsAuthentication" />

A brief interlude on WAAD Single Sign-On

Before we go further, a few words on how SSO actually works. When a request comes in to your web application, the authentication module checks to see if the user is authenticated or not. This is typically done by looking at a cookie that holds some sort of authentication ticket. If the user is authenticated, then it’s all good. However, if the user is not authenticated, a bunch of things have to happen. First, the authentication module redirects the user’s browser to the identity provider. Among the information it passes on is a return address. Since the login process happens in the user’s browser, we can’t really control it. In order for things to work smoothly, we need to tell the identity provider what to do after the user has logged in. A return address tells the provider where the user should be redirected once authentication is completed. After the user logs in, the identity provider uses the return address combined with a login token to redirect back to our application. Our application invokes the authentication module, which takes the login token, verifies its validity and extracts the user’s identity. This is the point where the user is finally authenticated with our application.

browser-app-waad-interaction

I won’t pretend I understand all or even most of the security implications here, but one consideration is definitely this: you can’t let just anyone authenticate users against your directory, even if the only data you’d pass on to them would be the user name. Imagine a rogue third party that created an app that had your company’s branding and used your company’s official authentication mechanism – and then tricked your users into giving them confidential data. That’s no good, so we need to control who we give the login token out to. Which is what we’ll do next.

Configuring the Service Principal

Before your application can use WAAD authentication, it must be registered in WAAD. This registration is called a Service Principal, and a big part of what the Visual Studio wizard did for you involved creating the Service Principal. One of the things configured in the Service Principal settings is the list of accepted after-login return addresses. The wizard creates a Service Principal with a single return address: https://localhost:44302 — the port number is arbitrarily assigned, so yours might not be exactly the same. It also enables SSL for your app, so the return address will work. This is all well and good, unless your web application can’t run on that particular port – such as with Azure Web Roles in the Compute Emulator. That means you won’t get the login token from WAAD unless we add a new return address.

Fire up Microsoft Online Services Module for PowerShell again. Call Connect-MsolService to authenticate, type Get-MsolService and hit tab. PowerShell will expand this to Get-MsolServicePrincipal. Hit enter… and boom! “Get-MsolServicePrincipal : The term ‘Get-MsolServicePrincipal’ is not recognized as the name of a cmdlet, function, script file, or operable program.”

msol-powershell-before-import

Wait… what? Yeah. So PowerShell’s autocomplete suggests that there is a commandlet with this name, but then it’s not there when you try to use it. This is a weird artifact of the way the commandlets are packaged. What you now want to do is run Import-Module MSOnlineExtended -Force, and then the command will work. The -Force flag is necessary, because the extended module lives in the same binary file as the regular one, and without it, PowerShell will think it’s already well and properly loaded.

In order to configure the return addresses, we need a handle on the service principal created by the Visual Studio wizard. In the MSOL PowerShell window, type Get-MsolServicePrincipal | select DisplayName,ObjectId

PS C:\Windows\system32> Get-MsolServicePrincipal | select DisplayName,ObjectId

DisplayName                             ObjectId
-----------                             --------
Microsoft.Azure.ActiveDirectory         25294a23-e33f-4b06-9bd4-c0a04f4cc155
Microsoft.Azure.ActiveDirectoryUX       081e23f3-5c2b-44b5-89e7-23cbbe7a13dc
My test app                             3dc125e6-d518-40d2-9392-87a03dac8f68

The last principal in the list is the one the wizard generated for you. Visual Studio always registers your application as “My test app”, regardless of what your app is actually called – for instance, the name of my application is the far more sensible “WAADDemo”. ;) In my example, the generated principal’s object id is 3dc125e6-d518-40d2-9392-87a03dac8f68. In order to get a handle on that particular principal, we can run

$principal = Get-MsolServicePrincipal -ObjectId 3dc125e6-d518-40d2-9392-87a03dac8f68

To list the configured reply addresses, we type:

$principal.Addresses | select Address,AddressType

and get a return address listing

PS C:\Windows\system32> $principal.Addresses | select Address,AddressType

Address                                                             AddressType
-------                                                             -----------
https://localhost:44302/                                                  Reply

To add the final deployment and Compute Emulator reply addresses, we modify the address list we just got, then set it back to the principal:

$addresses = $principal.Addresses
$addresses.Add((New-MsolServicePrincipalAddresses -Address http://localhost:81))
$addresses.Add((New-MsolServicePrincipalAddresses -Address http://my-app.example.com))
Set-MsolServicePrincipal -ObjectId 3dc125e6-d518-40d2-9392-87a03dac8f68 -Addresses $addresses `
 -DisplayName "WAADDemo App"

The first line gets the current address list to a new variable. The next two lines create the address objects and add them to the list. The final command associates the updated address list with the principal and changes its name to “WAADDemo App” for good measure.

Finishing up

The last thing you need to do is ensure that you’re sending the correct reply address to WAAD. Edit web.config at system.identityModel.services/federationConfiguration/wsFederation and replace the value of the reply attribute with one of the reply addresses you configured in the previous step:

<wsFederation passiveRedirectEnabled="true" issuer="https://accounts.accesscontrol.windows.net/a071bf68-ee1d-46aa-ac6d-cfddf3826050/v2/wsfederation" realm="spn:e8a3050f-0c61-46bd-9808-ff7dd5dcdb4b@a071bf68-ee1d-46aa-ac6d-cfddf3826050" reply="http://localhost:81/" requireHttps="false" />

At this point, when you run the application, you should be automatically redirected to the Windows Azure Authentication portal:

waad-federated-login

When you log in, you’ll be redirected back to your app as an authenticated user:

waad-app-after-login

If you play with the app a bit, you’ll notice that logging out doesn’t actually work. That’s a quirk of Azure federated login – you can visit the logout URL if you like, but it won’t really take effect unless you close the browser session.

Phew, that was a lot of stuff to go through! What’s interesting here is that you didn’t have to write a single line of C# code. Now, that in itself is not really a goal worth pursuing, particularly if it results in an ungodly mess of XML configuration. However, what is remarkable and desirable is that once you understand all this, setting up new applications that share the same credentials is actually a breeze. So there is a point to all this, believe it or not. :) Next up: User details! We’ll be reading group memberships using the Graph API.

Windows Azure Active Directory Authentication: Getting Started

I’ve recently had the pleasure of familiarizing myself with Windows Azure Active Directory (WAAD) as an identity provider for .NET web applications. While the Build demo makes it look easy, my path was fraught with peril and unexpected pitfalls, so I present to you a series of blog posts where I will go through the ones we encountered, in the vague hope that it will save someone else the trouble. Identity management is a complex topic, so if you are not familiar with Windows Azure Active Directory – or cloud identity in general – I recommend that you read Jouni’s article first.

What is WAAD Authentication and why do I need it?

Windows Azure Active Directory is a SaaS identity provider – that is, a way to provide Single Sign-On and authorization across a variety of services. It’s used by Office 365, so you get the biggest bang for your buck when you are already using Office 365 and want to save your users from having yet another login to remember. Of course, even if you’re not using Office 365, you can still use it to share identities between multiple applications – whether they are cloud-hosted services, on-premises web sites or even client applications on your smartphone.

Setting up

If you’ve got an Office 365 subscription (and have administrative access to it), you are already good to go. If you don’t have or want one, you can sign up for a new Windows Azure Active Directory Tenant. You’ll be creating a new admin identity to go with that, so before you start, make sure you’re not logged in with a Windows Live account (or any other Microsoft identity, for that matter). The sign-up form is pretty straightforward. First, you fill in the basics. Then, when you’ve chosen a domain and verified its availability, you add in details of the new admin identity, and off you go – the setup redirects you to a login screen which is pre-populated with your new account data. Clicking on “Continue” leads you to the tenant administration page. That’s it, you’ve created a new WAAD tenant.

PowerShell Modules

With the prerequisites out of the way, we’ll tackle the tooling. Download and install the Microsoft Online Services module for PowerShell (32bit 64bit). This is the first real snag: there are at least three versions of this module you may find floating around, and if your module is an older version, there are things you won’t be able to do with it! If you’re having trouble with the commandlets, look at the file properties for the installer package you downloaded, specifically the Digital Signature tab. If the date of the signature is older than July 16th 2012, uninstall the package and download the new version I linked above.

msol-commandlets-signature-timestamp

After you install the module, you’re going to want to check that you can actually use it. Open the quick link from your desktop or search your start menu or start screen for Microsoft Online Services Module for Windows PowerShell.

msol-powershell-shortcut

When the shell opens up, type in:

Connect-MsolService

And you’ll be prompted for credentials. Type in the username and password of the account you created when you subscribed to your WAAD tenant or an administrative account in your Office 365 subscription:

msol-powershell-login

No news is good news, so after you’ve given the credentials, if PowerShell works for a while and then says nothing, you’re good to go. If, on the other hand, you get a bunch of red text, you’ve got problems with your login that you need to sort out.

ASP.NET Fall 2012 Update

In order to make use of Visual Studio’s wizardry in configuring your application, you’re going to need the ASP.NET Fall 2012 Update. It contains a number of fixes and a feature that adds Yet Another Context-Menu Item for Visual Studio Web projects.

image

You can do all of this without the wizard, but it’s somewhat error-prone, so I find it best to let tools do the heavy lifting. In the next installment, I’m going to go through the steps of setting up your application for authentication.

One percent inspiration, 99 percent perspiration (or “how I used an iPad for SharePoint development”)

To break a long-running blog silence, I thought I’d share a run-down of something I recently did. See, we were contacted to see if we could assist in a SharePoint project. The customer was having a number of problems with some custom code, and they had already run well beyond their original schedule. I stepped in, spent some time familiarizing myself with the code and fixed everything I could fix in a reasonable amount of time.

That’s not the interesting part, of course. That begins right after we deployed to production. Because obviously I made the rookie mistake of deploying right before I went on vacation. To top that off, I had spent considerable effort learning concepts that my colleagues weren’t familiar with, which left me with no fallback. In any case, off to Thailand I went with my wife.

Three days passed – we spent a couple of days getting used to the locale, and one Friday on an Intro Dive, trying out scuba diving – and I had already sort of gotten used to looking at scenery like this:

WP_000209

Now, those of you who were facepalming and groaning at the end of the second paragraph can probably guess what happened next. I got a rather anxious contact from the customer. There was an issue in production, and it was clearly linked to my changes.

I spent a while gathering information on the situation and trying to provide clues to my colleagues if they were to try and tackle the problem without me. After a bit of back-and-forth SMSing with the parties involved, I began to feel that I really should try to do something about the problem myself. Only I was in Thailand with no computer.

Luckily, our hotel had two rather modern (albeit shabby looking) PCs, complete with Windows 7, cheap internet access and no protocol limiting. Obviously, I wasn’t going to get a development environment running there, but at least it gave me a stepping stone.

My first flash of insight came when I remembered a service called Vaasnet. Vaasnet rents out preconfigured Hyper-V virtual machines by the hour, and one of their preconfigured setups contained SharePoint Foundation 2010. I didn’t take that route at first, though, but instead started out with a blank Windows Server 2008 environment. The first thing I tried to install there was the Cisco VPN client I’d need to access the test environment.

Unfortunately, the VPN story wasn’t that straightforward. Cisco AnyConnect for Windows doesn’t allow for configuration of split tunneling on the client side, and I had zero chance of getting that changed at the server side during the weekend. So every time I tried opening the VPN connection, the client refused because I had an RDP connection open at the same time. For a moment, I considered working around that problem by installing VMware, going all Inception-like and running a VM inside the VM. But it turns out Hyper-V and VMware don’t really get along. Who’d have guessed…

Not to be stopped that easily, I fired up another Vaasnet machine. This time, a SharePoint Foundation instance, since I was going to need that to build my changes anyway. I installed a VNC server on the first machine, opened the VPN and tried VNC’ing in from the other instance – the idea being that the VPN client wouldn’t notice that I’m connected from a remote machine. Well, it worked, sort of. Except of course the VNC connection got cut off, because it was reliant on the same NIC that was now entirely tied up by the VPN.

I fiddled around for a moment, unsure on how to proceed. And then Vaasnet hit on one of their maintenance windows, and I was temporarily blocked from the VMs. So I did the reasonable thing: I grabbed a beer and went sunbathing for a couple of hours. A view like this certainly does help soothe the nerves:

WP_000229

After I got back to the hotel, I set out to install Visual Studio on the SharePoint VM. After that, however, I didn’t really know what I’d do next. So I grabbed my iPad and dabbled around for a moment. Then I suddenly remembered that I’ve got an app called Cloud Connect Pro that can do RDP. On a whim, I typed in “cisco anyconnect” to the App Store search, and I was rewarded with this:

2012-07-17T23-45-27

I installed the client, and was pretty pleased with the prospect of at least having the VPN now. I could use the Vaasnet VM to build the solution I was working on, then transfer the files to the iPad. Finally, I could open up the VPN, connect to the test environment and deploy the package. Imagine my surprise, when configuring the connection details revealed this little gem:

2012-07-17T23-45-38

See the “Connect On Demand” and “Domain List” there? That’s … split tunneling. So not only did the client exist for the iPad, it was in fact more configurable than the PC counterpart!

From this point onwards, the process was a fairly straightforward, if not exactly easy, change-build-copy-copy-deploy cycle. Along the way I learned that Visual Studio is really tedious to use over RDP, and even more so on a touch keyboard. I was helped somewhat by ReSharper – the snippets that make my daily workflow smooth really helped over the slow network connection.

After about twenty cycles of testing and probing, I finally hit on the correct solution. Typing the email wherein I asked the customer to try it out was one of the most satisfying work-related things I’ve ever done.

That said, I didn’t mind finishing my beer and turning my attention to this:

WP_000240

(We spent the rest of the weekend buried in the PADI Open Water Diver manual. Then, we started the following week by completing the course itself.)

Good times!

TechDays 2012

I was up on the stage today at TechDays 2012 Finland, giving my first big presentation!

The title of my talk was roughly “practical difficulties of unit testing”. I started out with a brief intro to the tools – test frameworks, test runners and mock libraries. After that, I discussed unit test quality issues by taking an example from a recent project I worked on, and dissecting the issues I had with it. Finally, I talked at some length about breaking dependencies, which in my experience is usually the first obstacle that trips beginners.

The talk was recorded, and the video will be up for grabs at some point. Both the talk and the accompanying slides were in Finnish, but I guess so are most of my readers. :-)

Private, easily installable library packages with NuGet and MyGet

Most, if not all, software development organizations have their own set of standard tool libraries, and Offbeat Solutions is no exception. We’ve only been at this for about a year, and thus far we’ve rolled up just one, a generic tool belt that we unimaginatively called “oblib”. :)

Now, while libraries are fun, managing the installation of libraries can suck major donkey. So enter NuGet, the fledgling package manager from Microsoft. NuGet, and particularly its Visual Studio extension, makes it ridiculously easy to install your favorite libraries. Wouldn’t it be great if your private tools were just as easily installable? Who says they can’t be? Let’s do this thing!

If you’ve installed ASP.NET MVC 3 tooling for Visual Studio, you’ve already got the NuGet extension. If not, install it from Visual Studio Gallery. [Note: if you already have NuGet installed, you may want to check the Extension Manager for updates – the early versions didn’t always play nice with MyGet authentication]

NuGet packages are relatively simple beasts. You build the packages by creating a manifest file containing some metadata about the package in question, point the command line tool NuGet.exe at the metadata file and… that’s about it. Doesn’t sound too hard, so let’s give it a whirl!

Let’s start by getting the command line tool. Open the project you want to package, go to Solution Explorer in Visual Studio and right-click on the References node. There should be an option titled “Manage NuGet Packages…”:

Visual Studio solution explorer

Click on the menu item, and up pops a dialog. On the left side of the dialog, expand the “Online” section, and then type “nuget command line” to the search field in the top right corner. That should bring up a package named NuGet.CommandLine:

Manage NuGet Packages dialog

Click on the “Install” button next to the package. Visual Studio should work for a moment, and then you can close the dialog. At this point, there should be a folder named “packages” at the root of your solution, containing the package we just installed. Now that we’ve got the command line tool, let’s create the package manifest, also known as a NuSpec file. Open up a command prompt (I prefer PowerShell) and go to the project directory. Invoke NuGet by calling packages\NuGet.CommandLine.1.5.21005.9019\tools\NuGet.exe spec path\to\yourlibrary.dll (adjust paths by taking into account your version of the command line package and the path to your compiled library):

Creating a nuspec file in PowerShell

This should produce a file starting with the name of your library, ending in .nuspec. Go back to Visual Studio, include the file in your solution and open it up in Visual Studio. It should look something like this:

<?xml version="1.0"?>
<package>
   <metadata>
     <id>$id$</id>
     <version>$version$</version>
     <title>$title$</title>
     <authors>$author$</authors>
     <owners>$author$</owners>
     <licenseUrl>http://LICENSE_URL_HERE_OR_DELETE_THIS_LINE</licenseUrl>
     <projectUrl>http://PROJECT_URL_HERE_OR_DELETE_THIS_LINE</projectUrl>
     <iconUrl>http://ICON_URL_HERE_OR_DELETE_THIS_LINE</iconUrl>
     <requireLicenseAcceptance>false</requireLicenseAcceptance>
     <description>$description$</description>
     <releaseNotes>Summary of changes made in this release of the package.</releaseNotes>
     <copyright>Copyright 2011</copyright>
     <tags>Tag1 Tag2</tags>
   </metadata>
</package>

Fill in something sensible, then save the file and close it. Go back to your command shell, create a directory called “release” and then call NuGet.exe pack yourlibrary.csproj -Build -OutputDirectory release:

Building a NuGet package in PowerShell

Et voilà, we’ve got a NuGet package! Now, let’s make that a part of the build process. Unload the project in Visual Studio, and open the project file in the source editor. Add this as the AfterBuild task (adjusting for paths again):

<Target Name="AfterBuild">
  <RemoveDir Directories="$(ProjectDir)\release\" />
  <MakeDir Directories="$(ProjectDir)\release\" />
  <Exec Command="$(SolutionDir)\Packages\NuGet.CommandLine.1.5.21005.9019\tools\NuGet.exe pack $(ProjectFileName) -OutputDirectory $(ProjectDir)\release" />
</Target>

Reload the project, rebuild it, and with any luck, you should have a “release” folder containing a shiny, new NuGet package.

Now that we’ve come this far, the last missing step is to make the package easily available. To this end, we’re going to use MyGet, a service that provides hosting for private NuGet feeds. After registering, you can create a package feed, set permissions to it and generate an API key – you should probably go do that now. :) At this point you store your API key for NuGet to use by calling NuGet.exe setApiKey [key] -Source [feed]/api/v1/:

Setting an API key for NuGet in PowerShell

(Note: in case the blank spaces in the screenshot look confusing – that’s where I’ve erased my API key to keep it from public scrutiny.) At this point, it’s a simple matter of running NuGet.exe push [package] -Source [feed]:

Pushing a NuGet package with PowerShell

and adding your private feed to Visual Studio. Open up Package Manager Settings:

Library Package Manager menu in Visual Studio

Add your personal feed to the list of available feeds:

image

Once you’ve done this, you can install your package by opening up the “Manage NuGet Packages…” dialog again. At this point, if you configured your feed to require authentication, Visual Studio will prompt you for your user name and password. If you didn’t, you’ll get the feed contents straight away:

image

Clicking on “Install” will pop up a progress indicator, after which the deed is done, and the library is ready for use. This concludes our show! Of course, this is by no means the whole story. There’s a bunch of stuff to consider from this point onwards. Some of them include:

  • Package dependencies: when your package requires another package in order to work
  • API versioning: what sort of changes can you make without breaking clients
  • Automatic publishing from a CI server

But those are all topics for a later day.

Renaming an LDAP entry

I spent the better part of today struggling with the ModifyDNRequest class from the System.DirectoryServices.Protocols namespace. I was trying to rename a bunch of entries that had invalid escaping in their DNs (thanks to yours truly botching the escaping, sigh), and I kept getting “the distinguished name contains invalid syntax” errors.

Here’s what MSDN says about the NewName property on the request class:

The NewName property contains the new object name.

So I sort of assumed it’d be the new CN for the object I was renaming – namely, the user’s username. But that didn’t work out. After digging around for a while, I noticed that here’s what MSDN says about the constructor parameters:

Parameters

  • distinguishedName (System.String): The current distinguished name of the object.
  • newParentDistinguishedName (System.String): The distinguished name of the new parent of the object.
  • newName (System.String): The new distinguished name of the object.

Alas, the last bit is rather misleading. It looks like it’s supposed to be the new DN for the object. That’s not the case either. What you’re actually supposed to use is the new relative distinguished name, which is the most significant part of the DN: the CN of the object… along with the cn= prefix.
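
To make that concrete, here is a minimal sketch of a rename that keeps the entry in the same container (the server name and DNs are made up):

// Types live in System.DirectoryServices.Protocols.
// Rename an entry by giving a new *relative* DN, cn= prefix included,
// while keeping it under the same parent container.
var connection = new LdapConnection("ldap.example.com");
var request = new ModifyDNRequest(
    "CN=Old Name,OU=Users,DC=example,DC=com",  // current DN of the object
    "OU=Users,DC=example,DC=com",              // new parent (unchanged here)
    "CN=New Name");                            // new RDN, not a full DN
connection.SendRequest(request);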

PS: Considering that the constructor essentially takes three different distinguished names as arguments, it’d be real nice if the error was good enough to tell which one of them had the invalid syntax.

Cheers.

Learning Git the Hard Way

I’m a long-time Subversion veteran with very little DVCS experience – I’ve used Mercurial for personal projects for quite a while now, but so far with very little need for the features where DVCS systems are radically different from centralized systems. It’s mostly been hg commit, hg push – using Mercurial rather like a centralized system.

So why suddenly jump to Git?

The project I’m currently working on is hosted in a Subversion repository. As it happens, I needed to try out some rather sweeping changes, and being a Subversion veteran, I really didn’t want to have to struggle through the Subversion merge process if they worked out. But Git has support for working with Subversion, so I thought I’d give it a shot.

A rocky start

My first attempt at cloning the Subversion repository started out like this, with TortoiseGit:

TortoiseGit showing an SSH password prompt

Trouble is, the repository is hosted on an Apache server, not through a SSH tunnel. Hitting cancel canceled the entire operation. So I clicked on OK in the vague hope that it’d ask if I wanted HTTP instead. The result:

TortoiseGit showing a perl.exe crash and the text "success" in bold letters

Great success! No, wait…

So yeah, not exactly stellar. But on occasion I can be annoyingly persistent, so I figured I’ll use the command line version instead. And after perusing the Git manual for a while, I successfully cloned the SVN repository.

Lesson 1: neither Revert nor Reset does what I expected

Coming from Subversion, I was used to the idea that in order to un-modify a modified file, the thing to do is to say “svn revert filename”. I had read enough about Git to know that wasn’t the right command – in fact, the manual on revert says just so:

Note: git revert is used to record some new commits to reverse the effect of some earlier commits (often only a faulty one).

Right! OK. So what about this Reset thing then?

git reset [--<mode>] [<commit>]

--hard
Matches the working tree and index to that of the tree being switched to. Any changes to tracked files in the working tree since <commit> are lost.

Being the astute reader that I am, I completely failed to notice the significance of that last sentence there. I googled for some example usages, and for a moment, thought that the thing to do would be to git reset --hard HEAD^.

(Those of you who know Git: I can see you cringe. Please stop doing that.)

See, HEAD^ is not some obscure way of saying “the last committed version”. It’s an obscure way of saying “the version just before the last committed one”.

So yeah, I just removed the last committed version from my timeline.

Lesson 2: Reset still doesn’t do what I expected

Having convinced myself that I just threw my latest bits of work into the bit bucket, I quickly located my last compiled version – I knew it still had the changes I had made. I threw the assembly into Reflector, decompiled it, copied my changes back and then cleaned up the bits Reflector didn’t quite get right in the decompilation. Time spent: a few minutes. Anxiety level: through the roof.

Having this newfound wisdom about the destructiveness of reset, I decided to tweet about it. And in a matter of moments I received this reply:

@rytmis Solid advice, hopefully you didn’t lose too much? git reflog to the rescue.

Who the what now?

So as it turns out, “Git tries very hard to not lose your data for you”. Even when you tell Git to reset the status of your branch to a given commit, it doesn’t yet mean that commit is gone. And true enough, after a hard reset, running “git reflog” still shows that the commit exists. Saying “git reset --hard 5bcde1b” (where 5bcde1b is the identifier for the “lost” commit) undoes the damage.

Of course, by then I was too exhausted to try that route. :)

Lesson 3: conflict resolution doesn’t work the way I expected

The first time a conflict occurred, I got really confused. Because, you see, I issued a “git svn rebase” expecting it to work like “svn update”. And for a while it worked the way I wanted it to work. But then my first conflict happened.

The key difference with a decentralized system is, of course, that both participants may have made multiple commits. This means that conflict resolution can’t happen quite like it does with centralized systems.

When I do a “git svn rebase”, what happens is roughly that Git remembers where I was when I last rebased. It rewinds my master branch to that state and then applies the new, incoming changes. So far, so good. Now, my changes were based on an earlier commit, so they have to be re-based on the new ones in order for there to be a consistent timeline. So Git begins to apply my recorded commits on top of the new base revision. If I get lucky, nothing special has to be done. If not, it’s conflict resolution time.

And here comes the really confusing part.

I may end up resolving a conflict between the latest revision from the remote and a local revision that’s several commits in my past. That is to say, the conflicting file will not contain my latest changes.

This really freaked me out at first.

With trembling hands I resolved my first conflicts in a way that seemed to make some kind of sense and continued with the rebase. I gave a sigh of relief when I noticed that afterwards, all my stuff was still safe. I repeated this cycle a few times before I began to grok what was going on. Of course the conflict resolution happens in “my past”. Because it has to be done at the rebase point.

Lesson 4: merges don’t work the way I expected

Another Subversion thing I had grown used to was how branches got reintegrated. You’d merge trunk changes into the branch, then merge the branch back.

Doing that with Git was a really bad idea. Especially given how at the time I had no idea how to undo the master -> branch merge I had done.

Remember the rebase process I described? Rebasing the branch on top of the latest master and then merging the branch back was way less painful.

Conclusion: Why bother?

Before this, I had tried to understand Git, and failed miserably. I suspected this would be painful, and indeed it was. And I’ve just begun to scratch the surface of Git. So why would I voluntarily do something like this when I could have just used Subversion?

Well, there’s the whole “get out of your comfort zone” aspect of things. And then there’s the fact that yeah, branch merges really are less painful.

But more to the point, collaborating with systems like this is a game changer. I know, I’m late to the game and it changed already, but damn. Thanks to Git, contributing to Dotless has been my best open source experience so far.

Oh, and by the time it became necessary for me to do my first Mercurial merge, I came prepared. ;)

Testing instances of anonymous types using the ‘dynamic’ keyword

Recently I’ve been writing a lot of tests that exercise ASP.NET MVC controllers. Sometimes those controllers return JSON data, and the natural way to express that is with anonymous types – the syntax and structure match JSON very well. However, if I suddenly wish to assert on those objects, things get a bit tricky: there’s no statically typed way to access the properties.

JsonResult has a property called Data which is typed as an Object. I figured if I’d cast that as a dynamic and then use runtime binding, I’d be set. So I wrote a bit of test code:

public void Returns_error_when_list_is_not_found() {
    var controller = new HomeController();
    var result = (JsonResult) controller.AddItemToList("item");
    dynamic resultData = result.Data;
    Assert.AreEqual("Error", resultData.Status);
}

and follow up with a bit of implementation code:

public ActionResult AddItemToList(string item) {
    return new JsonResult {Data = new {Status = "Fail"}};
}

(Note: the value of Status in the implementation code is intentionally different from the one I’m asserting against in the test – we want a red light first!)

Seems simple enough, right? So I hit “Run test” and was rather baffled: instead of seeing an assertion error I saw this:

Test result showing unexpected exception: Microsoft.CSharp.RuntimeBinder.RuntimeBinderException : 'object' does not contain a definition for 'Status'

OK, I thought, maybe I’m just looking at the wrong thing. I fired up the same test in the debugger and checked the contents of resultData. It looked like this:

Debugger clearly shows that the instance has a property called Status

So to be sure, the object actually was an instance of my anonymous type. So what’s up with the exception?

It turns out that anonymous types are always internal. Which makes sense, because there’s no sane way to represent the type at assembly or even method boundaries. However, since dynamic came along, there is a straightforward way to manipulate the objects beyond those boundaries if you just ship them across as plain Objects.

There are, of course, a couple of obvious solutions: one is to make the bits I want to manipulate statically typed; another is to futz around with reflection, but I try to keep that to a minimum. The one I chose for now is to mark the assembly under test with

[assembly: InternalsVisibleTo("TestProject")]

… which does away with the problem, and now we get the expected error:

Test result showing the expected error
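
For completeness, the reflection route I mentioned above isn’t much code either. A quick sketch of a helper that would let the test read the property without touching assembly attributes:

// Quick sketch of the reflection alternative: look the property up on the
// anonymous type at runtime instead of relying on dynamic binding.
private static object GetProperty(object instance, string propertyName) {
    var property = instance.GetType().GetProperty(propertyName);
    if (property == null) {
        throw new ArgumentException("No public property named " + propertyName);
    }
    return property.GetValue(instance, null);
}

// In the test:
// Assert.AreEqual("Error", GetProperty(result.Data, "Status"));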

Another battle won, another C# compiler factoid learned.

Tests reported twice when using XUnit with TeamCity

Here’s a quickie for all you people running XUnit tests in a TeamCity build. TeamCity doesn’t directly support XUnit, so it takes a wee bit of effort to get things going. The way I decided to tackle the issue was to add the following bits to our test project build file:

<Target Name="CustomBuild" DependsOnTargets="Build">
  <CallTarget Targets="Test" Condition="'$(TEAMCITY_PROJECT_NAME)' != ''" />
</Target>

<UsingTask AssemblyFile="..\packages\xunit.1.6.1\Tools\xunit.runner.msbuild.dll" TaskName="Xunit.Runner.MSBuild.xunit" />

<Target Name="Test">
  <xunit Assembly="$(TargetPath)" NUnitXml="..\..\TestReport.xml" />
</Target>

The CustomBuild target depends on Build, so Build gets run in any case. This ensures that we always run the latest set of tests.

Then, if the build script detects TeamCity (by presence of the TEAMCITY_PROJECT_NAME variable), it runs the Test target, which outputs its results to TestReport.xml.

Having got this far, I added TestReport.xml to our TeamCity configuration, and things seemed to work nicely. Except that our tests got reported twice.

It took me a while to finally clue in to what was happening: TeamCity was already parsing the output of the XUnit test task, and having a separate test report was what caused the duplicates. This wasn’t immediately obvious to me, until we built a separate performance test bench and used console output to communicate its results to TeamCity (more on that in a future installment).

Long story short: TeamCity can already understand XUnit tests, it just doesn’t provide a way to run them.
