Entity Framework Profiler: Failed to compile the following symbols: ‘EF4;NET40;NET45’

This catchy title hides behind it a rather interesting tale of debugging. A while back I encountered a case where two seemingly entirely unrelated changes in two different branches led to one of the changes no longer working. That change was the inclusion of Entity Framework Profiler (EFProf) to a console application.

(Note for the sake of completeness: for historical reasons, we are dealing with a .NET 4.x codebase that uses EF5 and EFProf version 3.x.)

TL;DR: This problem was solved by initializing EFProf before referencing any .NET Standard assemblies. Keep reading for the dirty details.

Symptomy of Destruction

The error was rather non-obvious: we were getting a compiler error at runtime, and the error details looked like this:

System.InvalidOperationException: Failed to compile the following symbols: 'EF4;NET40;NET45'
The type name 'DbProviderFactory' could not be found. This type has been forwarded to assembly 'System.Data, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089'. Consider adding a reference to that assembly.

(... lots of repetitive errors elided ...)

   at HibernatingRhinos.Profiler.Appender.Util.GenerateAssembly.CompileInternally(String fileName, List`1 sources, HashSet`1 assemblies, HashSet`1 defineSymbols, String outputFolder, String compilerVersion, Int32 tryCount)
   at HibernatingRhinos.Profiler.Appender.Util.GenerateAssembly.CompileAssembly(List`1 sources, HashSet`1 assemblies, HashSet`1 defineSymbols, String assemblyFileName, String outputFolder)

From the error message, it’s clear that we’re dealing with a type forwarding error, but there was no clear explanation as to why we were suddenly seeing this.

The branch that triggered this issue contained a number of references to .NET Standard assemblies, and as usual, whenever the two worlds of desktop .NET and .NET Standard collide, sparks can be expected to fly. After banging my head against the problem for a while, I decided to debug the error in Rider.


One of the things I so love about Rider is how easily it lets me debug inside code I don’t own. Most of the time, when dealing with exceptions from third-party dependencies, the best way to figure out what’s going on is to inspect the local variables around the code where the exception is thrown, just like with code I write myself. So I poked around a bit, and then decided to set a breakpoint at the happy path of the code and see how things looked in the version that worked.

After a moment of comparing things, I noticed that on the original branch, the list of referenced assemblies looked different than with the breaking change. Originally, the assembly list looked like this:

After the change, it looked like this:

The difference is pretty easy to spot: originally, we were referencing System.Data, but after the change, it was netstandard instead. So… why is that?

Again, Rider’s tools helped me locate the crucial clues. I used Find Usages to go up the call stack and noticed that the reference in question was found by scanning the assemblies currently in memory, looking for the one that contained a declaration for DbConnection and using that as a reference.

Two minutes of furious brow-wrinkling later, a lightbulb appeared over my head. I moved the code that triggered EFProf’s initialization to happen before any references to .NET Standard assemblies… and poof. The problem was gone. The changes had nothing obvious to do with each other, but the internal state of the runtime can and does affect your code too.

So uh… what’s up with the type forwards?

Type forwards are a mechanism that allows the implementation of a type to be moved to another assembly without breaking the runtime contract of “this assembly provides these types”. In compiled .NET code, assemblies are typically loaded in the order they are encountered, so the first usage of a type from a particular assembly also triggers that assembly to be loaded. If the type is forwarded to another assembly, the runtime will then load that one as well, and everything is fine.

However, since type forwarding is a runtime mechanism, what it doesn’t do is provide the full metadata description of the type. In practice, this means that when you compile against an assembly that has forwarded types, in order to use those types in your code, you also need to reference the assembly that actually contains those types.
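To illustrate (the assembly, namespace and type names here are made up), a forwarding assembly carries only a pointer like this — not the forwarded type’s metadata:

```csharp
// In the forwarding assembly: Widget's implementation has moved elsewhere.
// This attribute satisfies the runtime's "this assembly provides Widget"
// contract, but a compiler that needs Widget's metadata still has to
// reference the assembly that actually defines it.
using System.Runtime.CompilerServices;
using Contoso.Widgets; // hypothetical namespace of the moved type

[assembly: TypeForwardedTo(typeof(Widget))]
```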

In this case, the EFProf code that does runtime compilation didn’t take that into account. When it found the declaration for DbConnection from within netstandard.dll, it included that as a reference. However, when the Roslyn compiler encountered code that actually uses the types from System.Data, it didn’t know what to do with them. Which is why reordering the method calls in a way that forced the actual System.Data to be loaded first fixed the issue.

Using Azure Artifacts in Docker builds

One of the projects I work on recently transitioned from running on Azure Web Apps to running on Azure AKS. As part of that transition, all the individual components needed to be built into Docker container images. This turned out to be a non-trivial exercise, because we were using private NuGet feeds on Azure Artifacts to host our shared internal tools.

We run our builds on Azure Pipelines, and while it has decent support for Azure Artifacts repositories in particular, it doesn’t really help with builds that occur inside Docker. I’m going to spare you the gory details of all the weird and wonderful workarounds I attempted, including building a NuGet package cache image (with every version of every private package in it!) as a separate task, and go to the solution that actually works.

Include the Azure Artifacts credential provider in your build image

For now, you’ll probably find it easiest to have a custom build image based on the .NET Core SDK image. This is changing with .NET Core 3.0, because the SDK image is slated to include the Azure Artifacts Credential Provider out of the box. Meanwhile, you can do something like this:

FROM microsoft/dotnet:2.2-sdk AS build

# Install locale support
RUN apt-get update \
        && apt-get install -y --no-install-recommends \
        apt-transport-https \
        && apt-get update \
        && apt-get install -y --no-install-recommends locales \
        && rm -rf /var/lib/apt/lists/*

# Enable en_us.UTF-8 in /etc/locale.gen
RUN sed -i 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && locale-gen

# Set locale to UTF-8
ENV LANG en_US.UTF-8

# Install Azure Artifacts credential provider
RUN curl -fsSL -o installcredprovider.sh https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh
RUN sh installcredprovider.sh

At least for the version we were using, configuring the locale was important: without it, the credential provider fails silently when used.

Build the image above, and tag it accordingly, so you can use it in the final build.

Configure your app image to use the base image and the credential provider

FROM microsoft/dotnet:2.1.8-aspnetcore-runtime AS base
WORKDIR /app

FROM someregistry.example.com/tools/dotnet-build:2.2-sdk AS build
WORKDIR /src
RUN mkdir -p /root/.nuget/NuGet/
COPY ["NuGet.Config", "/root/.nuget/NuGet/NuGet.Config"]

COPY ["App/App.csproj", "App/"]

ARG nuget_external_endpoints
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS "$nuget_external_endpoints"

RUN dotnet restore "App/App.csproj"
COPY . .
WORKDIR "/src/App"
RUN dotnet build "App.csproj" -c Release -o /app

FROM build AS publish
RUN dotnet publish "App.csproj" -c Release -o /app

FROM base AS final
COPY --from=publish /app .
CMD ["dotnet", "App.dll"]

There are only a few noteworthy things here:

First, I’m copying NuGet.Config over, because it defines the feeds we are using. It does not contain any credentials for the feed.

Second, I’m using the previously built image someregistry.example.com/tools/dotnet-build:2.2-sdk as the build environment instead of the regular microsoft/dotnet:2.2-sdk.

Finally, I’m defining a build argument called nuget_external_endpoints and exposing its value to the build as the environment variable VSS_NUGET_EXTERNAL_FEED_ENDPOINTS, which is where the credential provider looks for feed credentials.

Generate a Personal Access Token in Azure DevOps

In order to keep things secure, we’ll opt to use an access token that only has read access to the Azure Artifacts feed. Navigate to https://<yourtenant>.visualstudio.com/_usersSettings/tokens and create a new token. Name the token, select a suitable lifetime for it, then click on “Show all scopes” at the bottom of the screen. Select the Read permission for Packages, and create the token:

Test the build locally

At this point, it’s a good idea to check that things work in your development environment. Try building your image by running:

docker build . -f .\Dockerfile --build-arg 'nuget_external_endpoints={\"endpointCredentials\": [{\"endpoint\": \"<your feed endpoint>\", \"password\": \"<your access token>\"}]}'

Note: the quoting above is what works for PowerShell. If you use a different OS or shell, you may need to adjust accordingly.
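Since quoting mishaps are easy to make here, it can be worth sanity-checking that the endpoint JSON is still well-formed after your shell is done with it. The feed URL and token below are placeholders:

```shell
# Placeholder values; substitute your real feed endpoint and PAT.
endpoints='{"endpointCredentials": [{"endpoint": "https://example.pkgs.visualstudio.com/_packaging/MyFeed/nuget/v3/index.json", "password": "dummy-token"}]}'

# python3 -m json.tool exits non-zero on malformed JSON.
echo "$endpoints" | python3 -m json.tool > /dev/null && echo "JSON OK"
```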

Configure the build

The final step is to use the PAT as part of the build. You can accomplish that by adding the Docker build task in Pipelines and adding the following snippet as a build argument:

nuget_external_endpoints={"endpointCredentials": [{"endpoint":"<your feed endpoint>", "password":"$(AzureDevOpsAccessToken)"}]}

Quoting the JSON here is not necessary. The build task will pass the argument along intact.

After that, once you add a build variable named AzureDevOpsAccessToken, you’re good to go.


Having a custom build image means that when Microsoft releases a new version of the SDK, it’s up to you to update your image to match that. It’s equally possible to set up the Azure Artifacts credential provider inside the actual app Dockerfile, and depending on your circumstances, you may want to do that instead.

ExecutionEngineException in ASP.NET MVC when mixing .NET Framework and .NET Standard assemblies

TL;DR: If you mix .NET Standard assemblies that depend on the System.Net.Http NuGet package into your .NET Framework ASP.NET MVC app, you are likely to encounter runtime crashes when your AppDomain unloads and reloads. The solution involves a binding redirect and a project build time setting.

This one was fun to figure out. I’m working on a project, where I really, really wanted to use Microsoft’s SpaServices package to enable server-side rendering for a Node frontend app. Fortunately, it only takes a little bit of glue to make them stick to an ASP.NET MVC 5 app, and you’re off to the races.

Unfortunately, it also comes with a nasty side effect of periodically crashing your app during development.

It took me a while to recognize the pattern, but after a dozen or so crashes, I finally figured out that the exception always occurred after I had either edited the Web.config file or rebuilt the binaries — that is, whenever the AppDomain was unloaded and the app restarted.

The problem was initially triggered by ASP.NET Web Api’s EnsureInitialized call, but the culprit at the top of the stack was actually RuntimeAssembly.GetTypes() — so when I managed to trick Web Api into not triggering the problem, it was pushed forward to something Glimpse did. When I removed Glimpse, it moved to something that the ASP.NET MVC infrastructure did, and so on.

The actual problem is a combination of different issues: when running on the desktop framework, the System.Net.Http version you’re supposed to use is the one shipped in the framework. However, that version has some weird versioning quirks due to the fact that it isn’t just a .NET assembly, it’s also a Windows component. This leads to it having a version number that is smaller than the NuGet counterpart, which, in some edge cases, leads to the wrong assembly ending up in your process. I’m not sure where the memory corruption comes into play (and I’m not sure I want to know 😛), but fortunately the fix is simple.

You need to add a binding redirect for System.Net.Http and set ImplicitlyExpandNETStandardFacades to false in your project, as described in this GitHub comment.
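As a sketch of those two changes (the version and token values below are the commonly cited ones for the desktop framework’s System.Net.Http — verify them against the linked comment and your own framework version), the project file gets:

```xml
<PropertyGroup>
  <ImplicitlyExpandNETStandardFacades>false</ImplicitlyExpandNETStandardFacades>
</PropertyGroup>
```

and Web.config gets a redirect along these lines:

```xml
<dependentAssembly>
  <assemblyIdentity name="System.Net.Http" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
  <bindingRedirect oldVersion="" newVersion="" />
</dependentAssembly>
```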

Ah, System.Net.Http. Since it began shipping out-of-band, it’s just the gift that keeps on giving.

I’m not bringing much new information to the table, but I hope this post at least helps someone else find the solution to the same issue without spending hours diagnosing it.

TypeScript Duplicate Identifiers when using npm link

Last night I ran into an issue with TypeScript compilation. I’m working on a frontend web project that uses a library we are developing in another repository. Right now, we’re simultaneously working on both repos, so writing a feature from end to end is a bit of a pain, if one has to first commit to the dependency repository, then pull from there to get the latest version for the app itself. Fortunately, there’s a solution for that: npm link / yarn link. This feature allows you to substitute a local version for a dependency, using symbolic links (or NTFS junction points, if you’re Windows-inclined, like myself).

The problem arises when your main app and your linked dependency share one or more dependencies. For example, in our case, the shared dependency was react-intl. If the dependency is not linked, everything works as well as things usually work in js-land (🙄), but once the linking happens, things begin to break.

The core of the problem is that at some point, the TypeScript compiler will encounter two different, possibly identical versions of the same dependency declaration, at which point it will give up and produce a whole mess of errors.

The fix is simple enough, and pretty well documented already: at the top level, where you run the compilation, you add a path mapping, like so:

    "baseUrl": "./src",
    "paths": {
      "*": [
        "*",
        "../node_modules/@types/*"
      ]
    }
What this means (as I understand it) is, for every non-cwd-relative module resolution the TypeScript compiler attempts, it will try the paths specified in the array. This means that when encountering the dependency in the dependent module, it will look for it in the parent module’s node_modules/@types directory, and since it finds it there, it will look no further. This coalesces the duplicate dependencies into one instance, and voila, problem solved. Path mappings can be more specific, too, if you need to target a single problematic package. In this case, I wanted this behavior for all deps, so I went with the easiest route.
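A narrower mapping for a single package (react-intl from our case; the relative path is resolved from baseUrl and depends on your layout) might look like:

```json
"paths": {
  "react-intl": ["../node_modules/react-intl"]
}
```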

Large number of connections when using MongoDB C# driver 2.x

Long time, no posts and whatnot. To break the ice, here’s something that’s fairly well documented, but not necessarily that obvious, that I learned today.

TL;DR: keep your MongoClients as singletons, like the documentation damn well tells you to do. A more detailed explanation follows.

Yesterday, I deployed an app to production. It was a fairly major deployment in that it involved upgrading the infrastructure: I had updated the app to use the MongoDB C# driver 2.x series, and also upgraded the actual production database from MongoDB 2.x to the latest 3.6. The app was fairly well tested, but of course, the one thing that very rarely gets tested for is production load.

I had performed the driver upgrade as a fairly mechanical search-and-replace type exercise, and while most operations were easy to replace, the one thing that was missing was the ability to disconnect from the server. I hit the documentation, and found out that it says the following:

It is recommended to store a MongoClient instance in a global place, either as a static variable or in an IoC container with a singleton lifetime.

However, multiple MongoClient instances created with the same settings will utilize the same connection pools underneath.

Reading a bit more on the topic confirmed that there was no need to disconnect the client, so all was well in the world. What I didn’t do, however, was to register the client as a Singleton — because I wanted to keep the changes to a minimum, and the docs stated that it wasn’t required, even if it was recommended.

In the next changeset, I added some telemetry: I wanted to log the duration of MongoDB operations, so I could use our monitoring to see if our Mongo operations got very slow. I found a post titled Monitoring MongoDB with Application Insights and followed its instructions. And here’s where things went wonky.

See, the article’s example uses a lambda function as the cluster configurator, which isn’t bad as such, but. The documentation I quoted above? In full context, it looks like this:

However, multiple MongoClient instances created with the same settings will utilize the same connection pools underneath. Unfortunately, certain types of settings are not able to be compared for equality. For instance, the ClusterConfigurator property is a delegate and only its address is known for comparison. If you wish to construct multiple MongoClients, ensure that your delegates are all using the same address if the intent is to share connection pools.

Combine that with the fact that my MongoClient registration was per-request, and ta-da, I’ve effectively disabled connection pooling with no possibility to dispose of the connections. So perhaps it wasn’t that surprising to see MongoDB log the following:

2018-01-18T08:17:35.340+0200 I NETWORK [listener] connection accepted from #4457 (4455 connections now open)


Fortunately, the fix was rather simple: move the client registration to be a singleton, and that’s it.
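For illustration, the singleton registration can be sketched like this (the Microsoft.Extensions.DependencyInjection-style container API and the names here are assumptions about the setup; the essential part is the single, application-wide client):

```csharp
// One MongoClient per application: its connection pool is shared by all requests.
services.AddSingleton<IMongoClient>(_ =>
{
    var settings = MongoClientSettings.FromUrl(new MongoUrl(connectionString));
    // Keep any ClusterConfigurator delegate in one stable location; a fresh
    // lambda per client instance would defeat connection pool sharing.
    settings.ClusterConfigurator = ConfigureClusterTelemetry;
    return new MongoClient(settings);
});
```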

Mind you, there’s nothing wrong with the instructions in the post I linked to. Had I treated the client instance like the docs suggested, I wouldn’t have had any problems.

Moral of the story? If the documentation recommends something, it’s probably a good idea to do it, I guess.

ASP.NET Core and Assembly Binding Redirects

During the last year, I’ve been a part of launching two production sites that run on ASP.NET Core, and as a company, we’ve had enough dealings with the budding framework that we arranged a full day’s seminar on the topic.

Needless to say, using a framework in anger at this point of its development has led to all kinds of interesting discoveries, the kind that you typically only ever make on the bleeding edge.

Where have my assemblies gone?

One of the major changes in .NET Core compared to the full .NET Framework is that there is no more Global Assembly Cache. All assemblies – including most if not all of the runtime itself – will be shipped as NuGet packages, which means that the assembly loading story is a fairly major departure from the way things used to be. However, .NET Core is not always a viable platform: for instance, currently there is no production-ready server-side image processing capability since System.Drawing is not cross-platform*. Given that constraint, we’ve ended up deploying our production ASP.NET Core applications on the full .NET framework, and the full FX still has the GAC.

Currently, ASP.NET Core on the full FX loads assembly dependencies by hooking up AppDomain.AssemblyResolve to work its magic. When your code tries to interact with an assembly that is not yet loaded, the runtime looks for the assembly in your NuGet packages. However, there’s a key phrase in the documentation for the event: “Occurs when the resolution of an assembly fails.” This means that regular assembly binding rules are attempted first.
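The mechanism can be illustrated with a simplified sketch — this is not ASP.NET Core’s actual resolver, and packageRoot is a made-up variable standing in for the NuGet package location:

```csharp
AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
{
    // This event fires only after normal binding -- GAC probing included -- has
    // failed, which is why a GAC-registered assembly can win over the
    // NuGet-delivered one without this handler ever running.
    var assemblyName = new AssemblyName(args.Name);
    var candidate = Path.Combine(packageRoot, assemblyName.Name + ".dll");
    return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
};
```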

Typically, this would not be a problem. When you deploy your application, you deploy the NuGet dependencies, and the GAC only contains the framework’s assemblies. However, sometimes you will have a rogue application on your system that installs something to the GAC, and things may go a little pear-shaped.

DocumentDB deserialization woes

Consider this example: our app uses Azure DocumentDB as one of its data stores. The .NET DocumentDB client library uses JSON as its serialization format, and in particular, Newtonsoft.Json as its serialization library. One of the things you can do with that combination is specify that the serialized name of your property is different from the one declared in code, by annotating the property with JsonPropertyAttribute. Now, our app opted to use one of the latest builds of Newtonsoft.Json (7.x), and for the most part, everything worked beautifully. However, my development system had an installed app that, unbeknownst to me, registered an older version of Newtonsoft.Json into the GAC.

Unfortunately, the runtime assembly version of the GAC assembly matched the requirements of the DocumentDB client library exactly, so the runtime loaded that assembly for the DocumentDB client. The practical effect was that when the DocumentDB client (de)serialized objects, it never noticed the JsonPropertyAttribute that we were using. The net result? A single property in that class was never (de)serialized correctly.

It took me a while to figure out what was happening, but the key insight was to look at the loaded modules in the debugger and notice that indeed, we now had two copies of Newtonsoft.Json in memory: the version from the GAC and the version we were specifying as a dependency. Our own code was using the JsonPropertyAttribute from version 7.x whereas the older version of Newtonsoft.Json was looking for the JsonPropertyAttribute specified in that assembly. While the properties were identical in function, they were different in identity, so the attribute we were using was ignored entirely.

Wait, isn’t this a solved problem already?

If you’re a seasoned .NET developer, at this point you are probably thinking “binding redirects”. At least we were – but the question was, where to put them? Another major change in ASP.NET Core is that your application configuration is entirely decoupled from both the configuration of the runtime and the configuration of your web server. Which means that in a fresh out-of-the-box web application, you do have a web.config, but it is only used to configure the interaction between IIS and your application server, Kestrel.

Since Kestrel is running in a process outside IIS, it’s reasonable to expect that Web.config doesn’t affect the behavior of the runtime in that process. And indeed, it doesn’t. But the new configuration system doesn’t have a way to specify the configuration of the .NET runtime either. So what does that leave us?

After a little bit of to-and-fro with the ASP.NET Core team, the answer finally came up: the runtime configuration still exists, but its naming conventions are different from what we are used to. If you create a file called App.config (yes, even when it is a web application) and specify your binding redirects there, they will be picked up, and all is well in the world again.

The configuration file has the same schema as you would expect from a Web.config or a standalone executable’s App.config. The resulting file looks like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Newtonsoft.Json" culture="neutral" publicKeyToken="30ad4fe6b2a6aeed" />
        <bindingRedirect oldVersion="" newVersion="" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>

Hope this helps anyone else encountering the same problem, however uncommon it may be!

(* My colleagues pointed out that I neglected to mention the fact that System.Drawing is not a production-ready server-side image processing solution either, given that it uses GDI+ which uses process-wide locks, and therefore essentially makes the image-processing parts of your app single-threaded.)

Dotless 1.4.4 is out

Idle hands do… open source?

During the last few weeks, I’ve done something I’ve wanted to do for a longer time, and stepped up my involvement in Free Software somewhat. It started out as kind of an accident: I was encountering an annoying bug that I attributed to our usage of dotless in one of our projects, and I went to the issue tracker looking for reports of a similar problem. Having looked at the tracker for a moment, I then checked the corresponding code and noted that, yes, dotless does in fact do the correct thing. Then I proceeded to look at my own code, and it took me all of five seconds to spot the obvious bug.

A bit embarrassing, sure, but not useless. Because while I was looking through the issue tracker, I noted that some of the issues were of a fairly simple nature — maybe even something I could fix? I remembered fixing a bug back in ’10, so I then went through the list of closed pull requests, and noted that I had contributed no less than five PRs.

During that weekend, I came down with the flu and skipped work. However, I used some of that downtime to work on dotless — given that I had no time constraints or expectations of efficiency, I could spend a moment here and another there to fix a bug or two. First, I ended up going for the low-hanging fruit. I ended up creating about a dozen pull requests — some with bug fixes, some with actual new features.

After giving things about a week to settle, I then asked the current maintainers if they might accept me as a core contributor, since they didn’t seem to have the time to process the pull requests. Not long after that Daniel granted me contributor access to the project, and off I went, merging the PRs in and cleaning up the issue tracker.

Sweet release

Of course, not everything went perfectly: I intended to release dotless 1.4.3 about a week after having merged the fixes in. And I did — except that I messed up the NuGet packaging so that the standalone dotless compiler was left out of the package. And instead of releasing with the fixed package as I should have, I bumped up the version to 1.4.4. I expect that won’t be much of a problem for anyone, though, so I’m not feeling too bad. After all, I did fix a number of inconsistencies, crashers and things like Bootstrap not compiling when minified. So maybe I can forgive myself a bit of a blunder there. 🙂

What next?

The less.js guys are thinking about building a .NET wrapper around less.js. It’s an interesting idea, to be sure: that way, the .NET implementation would never need to play catch-up with the official version. However, I still believe there’s merit in having a “native” .NET implementation, so I’m going to keep at it for now.

For the next release, I’ve already got @import options, variable interpolation improvements, list arguments and improved mixin guards. Porting the less.js test cases to give me a rough idea of how far behind is a logical next step. I’d like to aim for feature parity for 1.5 — on the other hand, maybe more frequent releases with smaller, incremental improvements would better serve the project. At the very least, 1.5 should fully support Bootstrap and KendoUI.

A large slice of my professional history is in line-of-business software with user bases ranging in the dozens or hundreds. It’s exciting and a bit frightening to be taking responsibility for a project that has, over the course of years, been downloaded over 400 000 times from NuGet.org. Time to see if I’m up to the task!

NHibernate TimeoutException with Azure SQL

Recently, I spent nearly three full working days debugging the damnedest thing: a simple NHibernate Linq query in an Azure test environment was timing out. Together with the SqlAzure client driver that does transient fault handling by retrying queries, this resulted in a situation where a specific page would never load, causing instead a huge spike in database resource usage.

Of course, as it tends to be with these things, the same query against a local SQL Server database worked just fine.

Possibly the strangest part was that after obtaining the query through NHProf, I tried running the same query via SQL Server Management Studio, and the mean execution time of the query was between 100ms and 200ms. Accordingly, I had a hell of a time believing that the issue was an inefficient query as such.

I even tried creating a raw ADO.NET query that had the same command text and parameter specifications… and it executed in under 200ms.

I was about to give up when I had the idea of running both the slow and the fast query against a local database with the SQL Profiler enabled, because while there was no discernible difference in execution time against the local database, perhaps I’d be able to make out some difference in the way the queries were executed.

At first, it looked like the queries were identical from the server’s perspective, too.

But then, I noticed the difference.

The slow query declared that the single parameter’s size was 4000 whereas the fast version said it was 1.

Realization began to dawn, and I re-ran my raw ADO.NET query against the Azure database, but with the parameter size set to 4000 — and wouldn’t you know it, the timeout manifested itself immediately.

My current hypothesis is that what’s going on underneath it all is that the database is waiting for a value for the parameter and it never shows up, which is what causes the timeout. Another strange factor is that the issue doesn’t reproduce with all Azure SQL servers.

All this is triggered by a change to NHibernate Core where the SqlClientDriver skips setting the parameter size to a meaningful value, and instead sets it to a default of 4000.

Fortunately, the workaround is simple: I extended the client driver code with a very specific special-case workaround that sets the parameter size:
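In sketch form, the override looks roughly like this (the base driver class and the exact condition are specific to our setup, so treat this as illustrative rather than a drop-in fix; NHibernate’s driver base class exposes InitializeParameter for exactly this kind of customization):

```csharp
// Subclass the SQL client driver and give the problematic string parameters
// an explicit size instead of NHibernate's blanket default of 4000.
public class ParameterSizeFixingDriver : Sql2008ClientDriver
{
    protected override void InitializeParameter(IDbDataParameter dbParam, string name, SqlType sqlType)
    {
        base.InitializeParameter(dbParam, name, sqlType);

        // Special case: single-character string parameters get size 1,
        // matching what the fast hand-written ADO.NET query declared.
        if (sqlType is StringSqlType && sqlType.LengthDefined && sqlType.Length == 1)
        {
            dbParam.Size = 1;
        }
    }
}
```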

It may eventually turn out that I’ll need to handle more than just the one-character parameter size, but for now, this seems to fix the issue.

Media Services at Global Azure BootCamp Finland

I was at the Global Windows Azure BootCamp in Espoo today, rambling about the coolness of the Microsoft media platform in general and Windows Azure Media Services in particular. The event was hosted at the awesome co-workspaces of AppCampus by Teemu Tapanila and Karl Ots — hats off for organizing the event, it was great fun!

I’m not sure if anyone actually tried them, but I put up some lab exercises on GitHub for playing around with Azure Media Services. If you’re looking for a starting point for working with Azure Media Services, go ahead and take a look. 🙂

The Anatomy of a Cloud Video Service — My TechDays 2013 talk

So, a few weeks back I was on stage at TechDays 2013 Finland. My topic for the day, titled “The Anatomy of a Cloud Video Service”, was about the Futudent “camera + software + cloud service” solution that I’ve been involved with for quite a while now. I intend to cover the associated technologies in more depth in blog form later, but for now, here’s the video of my presentation.

I spent my hour talking about what the client application does, how we handle video transcoding, what it was like to build the associated video sharing service and all the challenges associated with the entire story.

The talk is in Finnish, so obviously it’s only for a limited audience. Also note that for whatever reason, the video is set to forcibly start at 3:22 and you have to specifically click on “watch the entire video” at the timeline marker in order to get to the first few minutes.
