
ASP.NET Core and Assembly Binding Redirects

During the last year, I’ve been a part of launching two production sites that run on ASP.NET Core, and as a company, we’ve had enough dealings with the budding framework that we arranged a full day’s seminar on the topic.

Needless to say, using a framework in anger at this point in its development has led to all kinds of interesting discoveries, the kind you typically only ever make on the bleeding edge.

Where have my assemblies gone?

One of the major changes in .NET Core compared to the full .NET Framework is that there is no more Global Assembly Cache. All assemblies – including most if not all of the runtime itself – will be shipped as NuGet packages, which means that the assembly loading story is a fairly major departure from the way things used to be. However, .NET Core is not always a viable platform: for instance, currently there is no production-ready server-side image processing capability since System.Drawing is not cross-platform*. Given that constraint, we’ve ended up deploying our production ASP.NET Core applications on the full .NET framework, and the full FX still has the GAC.

Currently, ASP.NET Core on the full FX loads assembly dependencies by hooking up AppDomain.AssemblyResolve to work its magic. When your code tries to interact with an assembly that is not yet loaded, the runtime looks for the assembly in your NuGet packages. However, there’s a key phrase in the documentation for the event: “Occurs when the resolution of an assembly fails.” This means that regular assembly binding rules are attempted first.
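
To make the mechanism concrete, here’s a generic sketch of an AssemblyResolve hook — an illustration of the event only, not the actual ASP.NET Core loader code, and the probing path is made up:

using System;
using System.IO;
using System.Reflection;

// The handler only gets a chance to supply an assembly after normal binding
// (including a GAC lookup) has already failed.
AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
{
    var requested = new AssemblyName(args.Name);

    // Hypothetical probing path; the real loader resolves against the restored NuGet packages.
    var candidate = Path.Combine(@"C:\packages", requested.Name, "lib", requested.Name + ".dll");

    return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
};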

Typically, this would not be a problem. When you deploy your application, you deploy the NuGet dependencies, and the GAC only contains the framework’s assemblies. However, sometimes you will have a rogue application on your system that installs something to the GAC, and things may go a little pear-shaped.

DocumentDB deserialization woes

Consider this example: our app uses Azure DocumentDB as one of its data stores. The .NET DocumentDB client library uses JSON as its serialization format, and in particular, Newtonsoft.Json as its serialization library. One of the things you can do with that combination is specify that the serialized name of your property is different from the one declared in code, by annotating the property with JsonPropertyAttribute. Now, our app opted to use one of the latest builds of Newtonsoft.Json (7.x), and for the most part, everything worked beautifully. However, my development system had an installed app that, unbeknownst to me, registered an older version of Newtonsoft.Json into the GAC.
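
For reference, the mapping looks roughly like this — class and property names are made up for illustration, not our actual model:

using Newtonsoft.Json;

public class ShoppingList
{
    // Serialized as "name" in the stored JSON document even though the C# property
    // is called DisplayName. An older Newtonsoft.Json loaded from the GAC looks for
    // *its own* JsonPropertyAttribute type and silently ignores this one.
    [JsonProperty("name")]
    public string DisplayName { get; set; }
}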

Unfortunately, the runtime assembly version of the GAC assembly matched the requirements of the DocumentDB client library exactly, so the runtime loaded that assembly for the DocumentDB client. The practical effect was that when the DocumentDB client (de)serialized objects, it never noticed the JsonPropertyAttribute that we were using. The net result? A single property in that class was never (de)serialized correctly.

It took me a while to figure out what was happening, but the key insight was to look at the loaded modules in the debugger and notice that indeed, we now had two copies of Newtonsoft.Json in memory: the version from the GAC and the version we were specifying as a dependency. Our own code was using the JsonPropertyAttribute from version 7.x whereas the older version of Newtonsoft.Json was looking for the JsonPropertyAttribute specified in that assembly. While the properties were identical in function, they were different in identity, so the attribute we were using was ignored entirely.

Wait, isn’t this a solved problem already?

If you’re a seasoned .NET developer, at this point you are probably thinking “binding redirects”. At least we were – but the question was, where to put them? Another major change in ASP.NET Core is that your application configuration is entirely decoupled from both the configuration of the runtime and the configuration of your web server. Which means that in a fresh out-of-the-box web application, you do have a web.config, but it is only used to configure the interaction between IIS and your application server, Kestrel.

Since Kestrel is running in a process outside IIS, it’s reasonable to expect that Web.config doesn’t affect the behavior of the runtime in that process. And indeed, it doesn’t. But the new configuration system doesn’t have a way to specify the configuration of the .NET runtime either. So where does that leave us?

After a little bit of to-and-fro with the ASP.NET Core team, the answer finally came up: the runtime configuration still exists, but its naming conventions are different from what we are used to. If you create a file called App.config (yes, even when it is a web application) and specify your binding redirects there, they will be picked up, and all is well in the world again.

The configuration file has the same schema as you would expect from a Web.config or a standalone executable’s App.config. The resulting file looks like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Newtonsoft.Json" culture="neutral" publicKeyToken="30ad4fe6b2a6aeed" />
        <bindingRedirect oldVersion="0.0.0.0-4.5.0.0" newVersion="7.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>

Hope this helps anyone else encountering the same problem, however uncommon it may be!

(* My colleagues pointed out that I neglected to mention the fact that System.Drawing is not a production-ready server-side image processing solution either, given that it uses GDI+ which uses process-wide locks, and therefore essentially makes the image-processing parts of your app single-threaded.)

Dotless 1.4.4 is out

Idle hands do… open source?

During the last few weeks, I’ve done something I’ve wanted to do for a longer time, and stepped up my involvement in Free Software somewhat. It started out as kind of an accident: I was encountering an annoying bug that I attributed to our usage of dotless in one of our projects, and I went to the issue tracker looking for reports of a similar problem. Having looked at the tracker for a moment, I then checked the corresponding code and noted that, yes, dotless does in fact do the correct thing. Then I proceeded to look at my own code, and it took me all of five seconds to spot the obvious bug.

A bit embarrassing, sure, but not useless. Because while I was looking through the issue tracker, I noted that some of the issues were of a fairly simple nature — maybe even something I could fix? I remembered fixing a bug back in ’10, so I then went through the list of closed pull requests, and noted that I had contributed no less than five PRs.

During that weekend I came down with the flu and skipped work. However, I used some of that downtime to work on dotless — with no time constraints or expectations of efficiency, I could spend a moment here and another there to fix a bug or two. I went for the low-hanging fruit first and ended up creating about a dozen pull requests — some with bug fixes, some with actual new features.

After giving things about a week to settle, I then asked the current maintainers if they might accept me as a core contributor, since they didn’t seem to have the time to process the pull requests. Not long after that Daniel granted me contributor access to the project, and off I went, merging the PRs in and cleaning up the issue tracker.

Sweet release

Of course, not everything went perfectly: I intended to release dotless 1.4.3 about a week after having merged the fixes in. And I did — except that I messed up the NuGet packaging so that the standalone dotless compiler was left out of the package. And instead of releasing 1.4.3.1 with the fixed package as I should have, I bumped up the version to 1.4.4. I expect that won’t be much of a problem for anyone, though, so I’m not feeling too bad. After all, I did fix a number of inconsistencies, crashers and things like Bootstrap not compiling when minified. So maybe I can forgive myself a bit of a blunder there. :-)

What next?

The less.js guys are thinking about building a .NET wrapper around less.js. It’s an interesting idea, to be sure: that way, the .NET implementation would never need to play catch-up with the official version. However, I still believe there’s merit in having a “native” .NET implementation, so I’m going to keep at it for now.

For the next release, I’ve already got @import options, variable interpolation improvements, list arguments and improved mixin guards lined up. Porting the less.js test cases to get a rough idea of how far behind we are is a logical next step. I’d like to aim for feature parity in 1.5 — on the other hand, maybe more frequent releases with smaller, incremental improvements would serve the project better. At the very least, 1.5 should fully support Bootstrap and KendoUI.

A large slice of my professional history is in line-of-business software with user bases measured in the dozens or hundreds. It’s exciting and a bit frightening to be taking responsibility for a project that has, over the course of years, been downloaded over 400 000 times from NuGet.org. Time to see if I’m up to the task!

NHibernate TimeoutException with Azure SQL

Recently, I spent nearly three full working days debugging the damnedest thing: a simple NHibernate Linq query in an Azure test environment was timing out. Combined with the SqlAzure client driver, which handles transient faults by retrying queries, this resulted in a situation where a specific page would never load and instead caused a huge spike in database resource usage.

Of course, as it tends to be with these things, the same query against a local SQL Server database worked just fine.

Possibly the strangest part was that after obtaining the query through NHProf, I tried running the same query via SQL Server Management Studio, and the mean execution time of the query was between 100ms and 200ms. Accordingly, I had a hell of a time believing that the issue was an inefficient query as such.

I even tried creating a raw ADO.NET query that had the same command text and parameter specifications… and it executed in under 200ms.

I was about to give up when I had the idea of running both the slow and the fast query against a local database with the SQL Profiler enabled, because while there was no discernible difference in execution time against the local database, perhaps I’d be able to make out some difference in the way the queries were executed.

At first, it looked like the queries were identical from the server’s perspective, too.

But then, I noticed the difference.

The slow query declared that the single parameter’s size was 4000 whereas the fast version said it was 1.

Realization began to dawn, and I re-ran my raw ADO.NET query against the Azure database, but with the parameter size set to 4000 — and wouldn’t you know it, the timeout manifested itself immediately.
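
A sketch of the kind of raw ADO.NET check I ran — the connection string and query text here are placeholders, not the real ones; the interesting part is the parameter’s Size property:

using System.Data;
using System.Data.SqlClient;

var connectionString = "Server=tcp:example.database.windows.net;Database=AppDb;...";  // placeholder
var queryText = "select ... from SomeTable where SomeColumn = @p0";                   // placeholder

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(queryText, connection))
{
    var parameter = command.Parameters.Add("@p0", SqlDbType.NVarChar);
    parameter.Size = 4000;   // with 4000 the query timed out; with Size = 1 it ran in ~200ms
    parameter.Value = "A";

    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        // ...
    }
}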

My current hypothesis is that, underneath it all, the database ends up waiting for a parameter value that never arrives, which is what causes the timeout. Another strange factor is that the issue doesn’t reproduce on all Azure SQL servers.

All this is triggered by a change to NHibernate Core where the SqlClientDriver skips setting the parameter size to a meaningful value, and instead sets it to a default of 4000.

Fortunately, the workaround is simple: I extended the client driver with a very specific special case that sets the parameter size:

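// A minimal sketch of the approach rather than the original snippet. It assumes an
// NHibernate version where Sql2008ClientDriver exposes InitializeParameter as an
// overridable; the exact hook and the condition you need may differ in your version.
using System.Data;
using NHibernate.Driver;
using NHibernate.SqlTypes;

public class AzureSqlClientDriver : Sql2008ClientDriver
{
    protected override void InitializeParameter(IDbDataParameter dbParam, string name, SqlType sqlType)
    {
        base.InitializeParameter(dbParam, name, sqlType);

        // Special case: when the mapped type declares a length of one character, set the
        // parameter size accordingly instead of leaving it at the 4000-character default.
        if (sqlType.LengthDefined && sqlType.Length == 1)
        {
            dbParam.Size = 1;
        }
    }
}

The custom driver is then registered through the connection.driver_class property in the NHibernate configuration.
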
It may eventually turn out that I’ll need to handle more than just the one-character parameter size, but for now, this seems to fix the issue.

Blog move

Just a quick note: I’ve grown tired of Blogger regularly doing a hatchet job with my post markup and the limited options they give me for customizing the look and feel of the site, so I’ve moved to a new host.

I tried my best not to mess with the existing post URLs, but I’m afraid that at least the RSS feed URLs will have changed.

Windows Azure Active Directory: Using the Graph API with an Office 365 Tenant

In a previous post I wrote about querying the Graph API for the group memberships of a WAAD user. In yet another post, I detailed setting up a WAAD tenant, but I also noted that you don’t have to do that if you’ve already got an Office 365 subscription, and would prefer to use that instead.

While that statement is technically true, the story is far from complete. As commenter Nick Locke points out, my instructions for using the Graph API don’t work for Office 365 tenants: the code throws an exception while trying to get the authorization token for the request.

After a bit of trial and error, I managed to figure out a fix.

Once again, open up the Microsoft Online Services Module for Windows PowerShell. Run Import-Module MSOnlineExtended -Force to load the module, use Connect-MsolService with admin credentials to establish the connection, and type in:

$adPrincipal = Get-MsolServicePrincipal -AppPrincipalId 00000002-0000-0000-c000-000000000000
$adPrincipal.ServicePrincipalNames

If your Office 365 setup is like ours, the result will look a lot like this:

00000002-0000-0000-c000-000000000000
AzureActiveDirectory/graph.windows.net
AzureActiveDirectory/directory_s5.windows.net
AzureActiveDirectory/directory_s4.windows.net
AzureActiveDirectory/directory_s3.windows.net
AzureActiveDirectory/directory_s2.windows.net
AzureActiveDirectory/directory_s1.windows.net
AzureActiveDirectory/directory.windows.net
AzureActiveDirectory/Directoryapi.microsoftonline.com

Looking at this list, it occurred to me that the problem might be related to the fact that we construct Realm Names for our application using the GUID 00000002-0000-0000-c000-000000000000 and the domain name graph.windows.net — the result looking like: 00000002-0000-0000-c000-000000000000/graph.windows.net@your-app-principal-id. So I figured I’d change my app to use the string AzureActiveDirectory instead of the GUID, and wouldn’t you know it, I got past the authorization token error. Only now I ended up with another error: validating the token didn’t work.

After a bit of head-scratching and web-searching, I realized I might have it backwards: it wasn’t my app that was misconfigured, it was the Office 365 tenant. Suppose the Graph API service actually identifies itself as 00000002-0000-0000-c000-000000000000/graph.windows.net and not AzureActiveDirectory/graph.windows.net — that would explain why things were failing.

I repeated the commands above in a MSOL PowerShell session that was connected to the WAAD domain I created before, and noted that indeed, the Service Principal Names for Microsoft.Azure.ActiveDirectory were otherwise identical, but AzureActiveDirectory was replaced by 00000002-0000-0000-c000-000000000000. So I cooked up this PowerShell bit:

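# A reconstruction of the idea rather than the original snippet: rewrite the Graph API
# principal's SPNs so that the "AzureActiveDirectory" prefix becomes the well-known GUID.
$appPrincipalId = "00000002-0000-0000-c000-000000000000"
$principal = Get-MsolServicePrincipal -AppPrincipalId $appPrincipalId

$newSpns = $principal.ServicePrincipalNames | ForEach-Object {
    $_ -replace "^AzureActiveDirectory/", "$appPrincipalId/"
}

Set-MsolServicePrincipal -AppPrincipalId $appPrincipalId -ServicePrincipalNames $newSpns
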
I then ran it against the Office 365 MSOL PowerShell session and tried again. And wouldn’t you know, it worked!

Apparently, we’re not the only ones with this problem, so let’s hope this helps save someone else the trouble!

TechDays 2012

I was up on the stage today at TechDays 2012 Finland, giving my first big presentation!

The title of my talk was roughly “practical difficulties of unit testing”. I started out with a brief intro to the tools – test frameworks, test runners and mock libraries. After that, I discussed unit test quality issues by taking an example from a recent project I worked on, and dissecting the issues I had with it. Finally, I talked at some length about breaking dependencies, which in my experience is usually the first obstacle that trips beginners.

The talk was recorded, and the video will be up for grabs at some point. Both the talk and the accompanying slides were in Finnish, but I guess so are most of my readers. :-)

Renaming an LDAP entry

I spent the better part of today struggling with the ModifyDNRequest class from the System.DirectoryServices.Protocols namespace. I was trying to rename a bunch of entries that had invalid escaping in their DNs (thanks to yours truly botching the escaping, sigh), and I kept getting “the distinguished name contains invalid syntax” errors.

Here’s what MSDN says about the NewName property on the request class:

The NewName property contains the new object name.

So I sort of assumed it’d be the new CN for the object I was renaming – namely, the user’s username. But that didn’t work out. After digging around for a while, I noticed that here’s what MSDN says about the constructor parameters:

Parameters

distinguishedName
Type: System.String. The current distinguished name of the object.

newParentDistinguishedName
Type: System.String. The distinguished name of the new parent of the object.

newName
Type: System.String. The new distinguished name of the object.

Alas, the last bit is rather misleading. It looks like it’s supposed to be the new DN for the object. That’s not the case either. What you’re actually supposed to use is the new relative distinguished name, which is the most significant part of the DN: the CN of the object… along with the cn= prefix.
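
In code, a working rename ends up looking something like this — the server name and DNs are made up for illustration:

using System.DirectoryServices.Protocols;

var connection = new LdapConnection("ldap.example.com");   // hypothetical server

var request = new ModifyDNRequest(
    "CN=Jane Doe,OU=Users,DC=example,DC=com",   // current DN of the entry
    "OU=Users,DC=example,DC=com",               // same parent - we're only renaming
    "CN=jdoe");                                 // the new *relative* DN, cn= prefix included

connection.SendRequest(request);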

PS: Considering that the request essentially takes three different distinguished names as arguments, it’d be really nice if the error told you which one of them had the invalid syntax.

Cheers.

Learning Git the Hard Way

I’m a long-time Subversion veteran with very little DVCS experience – I’ve used Mercurial for personal projects for quite a while now, but so far with very little need for the features where DVCS systems are radically different from centralized systems. It’s mostly been hg commit, hg push – using Mercurial rather like a centralized system.

So why suddenly jump to Git?

The project I’m currently working on is hosted in a Subversion repository. As it happens, I needed to try out some rather sweeping changes, and being a Subversion veteran, I really didn’t want to struggle through the Subversion merge process if they worked out. But Git has support for working with Subversion, so I thought I’d give it a shot.

A rocky start

My first attempt at cloning the Subversion repository started out like this, with TortoiseGit:

TortoiseGit showing an SSH password prompt

Trouble is, the repository is hosted on an Apache server, not behind an SSH tunnel. Hitting Cancel canceled the entire operation. So I clicked OK in the vague hope that it’d ask if I wanted to use HTTP instead. The result:

TortoiseGit showing a perl.exe crash and the text "success" in bold letters

Great success! No, wait…

So yeah, not exactly stellar. But on occasion I can be annoyingly persistent, so I figured I’ll use the command line version instead. And after perusing the Git manual for a while, I successfully cloned the SVN repository.

Lesson 1: neither Revert nor Reset does what I expected

Coming from Subversion, I was used to the idea that in order to un-modify a modified file, the thing to do is to say “svn revert filename”. I had read enough about Git to know that wasn’t the right command – in fact, the manual on revert says just so:

Note: git revert is used to record some new commits to reverse the effect of some earlier commits (often only a faulty one).

Right! OK. So what about this Reset thing then?

git reset [--<mode>] [<commit>]

--hard
Matches the working tree and index to that of the tree being switched to. Any changes to tracked files in the working tree since <commit> are lost.

Being the astute reader that I am, I completely failed to notice the significance of that last sentence there. I googled for some example usages, and for a moment, thought that the thing to do would be to git reset --hard HEAD^.

(Those of you who know Git: I can see you cringe. Please stop doing that.)

See, HEAD^ is not some obscure way of saying “the last committed version”. It’s an obscure way of saying “the version just before the last committed one”.

So yeah, I just removed the last committed version from my timeline.

Lesson 2: Reset still doesn’t do what I expected

Having convinced myself that I just threw my latest bits of work into the bit bucket, I quickly located my last compiled version – I knew it still had the changes I had made. I threw the assembly into Reflector, decompiled it, copied my changes back and then cleaned up the bits Reflector didn’t quite get right in the decompilation. Time spent: a few minutes. Anxiety level: through the roof.

Having this newfound wisdom about the destructiveness of reset, I decided to tweet about it. And in a matter of moments I received this reply:

@rytmis Solid advice, hopefully you didn’t lose too much? git reflog to the rescue.

Who the what now?

So as it turns out, “Git tries very hard to not lose your data for you”. Even when you tell Git to reset the status of your branch to a given commit, it doesn’t yet mean that commit is gone. And true enough, after a hard reset, running “git reflog” still shows that the commit exists. Saying “git reset --hard 5bcde1b” (where 5bcde1b is the identifier for the “lost” commit) undoes the damage.

Of course, by then I was too exhausted to try that route. :-)

Lesson 3: conflict resolution doesn’t work the way I expected

The first time a conflict occurred, I got really confused. Because, you see, I issued a “git svn rebase” expecting it to work like “svn update”. And for a while it worked the way I wanted it to work. But then my first conflict happened.

The key difference with a decentralized system is, of course, that both participants may have made multiple commits. This means that conflict resolution can’t happen quite like it does with centralized systems.

When I do a “git svn rebase”, what happens is roughly that Git remembers where I was when I last rebased. It rewinds my master branch to that state and then applies the new, incoming changes. So far, so good. Now, my changes were based on an earlier commit, so they have to be re-based on the new ones in order for there to be a consistent timeline. So Git begins to apply my recorded commits on top of the new base revision. If I get lucky, nothing special has to be done. If not, it’s conflict resolution time.

And here comes the really confusing part.

I may end up resolving a conflict between the latest revision from the remote and a local revision that’s several commits in my past. That is to say, the conflicting file will not contain my latest changes.

This really freaked me out at first.

With trembling hands I resolved my first conflicts in a way that seemed to make some kind of sense and continued with the rebase. I gave a sigh of relief when I noticed that afterwards, all my stuff was still safe. I repeated this cycle a few times before I began to grok what was going on. Of course the conflict resolution happens in “my past”. Because it has to be done at the rebase point.
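
In command terms, the cycle looks roughly like this (the file name is a placeholder):

git svn rebase
# CONFLICT reported in some file: fix it up in your editor, then mark it resolved
git add path/to/conflicted-file
git rebase --continue
# repeat until all of your local commits have been replayed on the new base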

Lesson 4: merges don’t work the way I expected

Another Subversion thing I had grown used to was how branches got reintegrated. You’d merge trunk changes into the branch, then merge the branch back.

Doing that with Git was a really bad idea. Especially given how at the time I had no idea how to undo the master -> branch merge I had done.

Remember the rebase process I described? Rebasing the branch on top of the latest master and then merging the branch back was way less painful.
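
For the record, that workflow is only a handful of commands (the branch name is made up):

git checkout feature-branch   # the long-running work branch
git rebase master             # replay the branch's commits on top of the latest master
git checkout master
git merge feature-branch      # the merge is now trivial, often a fast-forward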

Conclusion: Why bother?

Before this, I had tried to understand Git, and failed miserably. I suspected this would be painful, and indeed it was. And I’ve just begun to scratch the surface of Git. So why would I voluntarily do something like this when I could have just used Subversion?

Well, there’s the whole “get out of your comfort zone” aspect of things. And then there’s the fact that yeah, branch merges really are less painful.

But more to the point, collaborating with systems like this is a game changer. I know, I’m late to the game and it changed already, but damn. Thanks to Git, contributing to Dotless has been my best open source experience so far.

Oh, and by the time it became necessary for me to do my first Mercurial merge, I came prepared. ;-)

Testing instances of anonymous types using the ‘dynamic’ keyword

Recently I’ve been writing a lot of tests that exercise ASP.NET MVC controllers. Sometimes those controllers return JSON data, and the natural way to express that is with anonymous types – the syntax and structure match JSON very well. However, if I suddenly wish to assert on those objects, things get a bit tricky: there’s no statically typed way to access the properties.

JsonResult has a property called Data which is typed as an Object. I figured if I’d cast that as a dynamic and then use runtime binding, I’d be set. So I wrote a bit of test code:

public void Returns_error_when_list_is_not_found() {
    var controller = new HomeController();
    var result = (JsonResult) controller.AddItemToList("item");
    dynamic resultData = result.Data;
    Assert.AreEqual("Error", resultData.Status);
}

and follow up with a bit of implementation code:

public ActionResult AddItemToList(string item) {
    return new JsonResult { Data = new { Status = "Fail" } };
}

(Note: the value of Status in the implementation code is intentionally different from the one I’m asserting against in the test – we want a red light first!)

Seems simple enough, right? So I hit “Run test” and was rather baffled: instead of seeing an assertion error I saw this:

Test result showing unexpected exception: Microsoft.CSharp.RuntimeBinder.RuntimeBinderException : 'object' does not contain a definition for 'Status'

OK, I thought, maybe I’m just looking at the wrong thing. I fired up the same test in the debugger and checked the contents of resultData. It looked like this:

Debugger clearly shows that the instance has a property called Status

So, sure enough, the object actually was an instance of my anonymous type. What was up with the exception, then?

It turns out that anonymous types are always internal. Which makes sense, because there’s no sane way to represent the type at assembly or even method boundaries. However, since dynamic came along, there is a straightforward way to manipulate the objects beyond those boundaries if you just ship them across as plain Objects.

There are, of course, a couple of obvious solutions: one is to make the bits I want to manipulate statically typed. Another is to futz around with reflection, but I try to keep that to a minimum. The one I chose for now is to mark the assembly under test with

[assembly: InternalsVisibleTo("TestProject")]

… which does away with the problem, and now we get the expected error:

Test result showing the expected error

Another battle won, another C# compiler factoid learned.

Tests reported twice when using XUnit with TeamCity

Here’s a quickie for all you people running XUnit tests in a TeamCity build. TeamCity doesn’t directly support XUnit, so it takes a wee bit of effort to get things going. The way I decided to tackle the issue was to add the following bits to our test project build file:

<Target Name="CustomBuild" DependsOnTargets="Build">
  <CallTarget Targets="Test" Condition="'$(TEAMCITY_PROJECT_NAME)' != ''" />
</Target>

<UsingTask AssemblyFile="..\packages\xunit.1.6.1\Tools\xunit.runner.msbuild.dll"
           TaskName="Xunit.Runner.MSBuild.xunit" />

<Target Name="Test">
  <xunit Assembly="$(TargetPath)" NUnitXml="..\..\TestReport.xml" />
</Target>

The CustomBuild target depends on Build, so Build gets run in any case. This ensures that we always run the latest set of tests.

Then, if the build script detects TeamCity (by presence of the TEAMCITY_PROJECT_NAME variable), it runs the Test target, which outputs its results to TestReport.xml.

Having got this far, I added TestReport.xml to our TeamCity configuration, and things seemed to work nicely. Except that our tests got reported twice.

It took me a while to finally clue in to what was happening: TeamCity was already parsing the output of the XUnit test task, and having a separate test report was what caused the duplicates. This wasn’t immediately obvious to me, until we built a separate performance test bench and used console output to communicate its results to TeamCity (more on that in a future installment).

Long story short: TeamCity can already understand XUnit tests, it just doesn’t provide a way to run them.
