A place for spare thoughts


Bug in C#/IL optimizations logic? o_O

Filed under: c# — Ivan Danilov @ 16:59

To keep it short: after half a day of minimizing the bug case, I ended up with this code:

    public class Tests
    {
        private class Relay
        {
            public Action Do { get; set; }
        }

        [Test]
        public void OptimizerStrangeness()
        {
            var relay = new Relay();
            var indicator = 0;
            relay.Do = () => indicator++;
            var weak = new WeakReference(relay);

            GC.Collect();

            var relayNew = weak.Target as Relay;
            if (relayNew == null) Assert.Fail(); // <- fails here in Release mode
            relayNew.Do();
            Assert.AreEqual(1, indicator);
        }
    }

Here is the StackOverflow question on this topic.

Compiled in Debug mode, the test passes. In Release mode it fails on the marked line: the relayNew variable is null there. But the relay variable is still in scope and we never assigned null to it, so we should still hold a strong reference. So, what's happening?

I can only speculate, but my assumption is that optimizer was designed to follow such logic:

  1. the relay variable is not accessed after the weak variable is assigned (this is easily proven by analyzing the IL);
  2. we are about to call GC.Collect() manually, so we probably want to reclaim as much memory as possible;
  3. the optimizer can "help" us by marking relay as unreachable, so it is collected along with the other garbage.
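This speculated lifetime shortening can be reproduced in isolation. A minimal sketch (my own example, not from the test suite above) shows a local that is still in scope being collected because it is never read again:

```csharp
using System;

class LifetimeDemo
{
    static void Main()
    {
        var obj = new object();
        var weak = new WeakReference(obj);

        GC.Collect();

        // 'obj' is still in scope, but it is never read after this point,
        // so an optimizing JIT may treat it as dead before GC.Collect() runs.
        // In a Release build (no debugger attached) weak.Target may already
        // be null here.
        Console.WriteLine(weak.Target == null ? "collected" : "alive");
    }
}
```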

But it changes the visible behavior of the compiled code, so I assume it is a bug.

I checked the generated IL, and it is the same in Debug and Release modes (at least with regard to variable lifetimes and GC eligibility). The only difference seems to be an assembly-level attribute:

  • Debug mode:
    [assembly: Debuggable(DebuggableAttribute.DebuggingModes.Default
                        | DebuggableAttribute.DebuggingModes.DisableOptimizations
                        | DebuggableAttribute.DebuggingModes.IgnoreSymbolStoreSequencePoints 
                        | DebuggableAttribute.DebuggingModes.EnableEditAndContinue)]
  • Release mode:
    [assembly: Debuggable(DebuggableAttribute.DebuggingModes.IgnoreSymbolStoreSequencePoints)]

According to the documentation, this attribute controls JIT optimizations, so my final assumption is that the JIT compiler does things it considers safe, but which are not.
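One way to test this hypothesis (my own suggestion, not from the original investigation) is to suppress JIT optimization for just the test method via MethodImplAttribute. If the JIT's lifetime shortening is the culprit, the test should then pass even in a Release build:

```csharp
using System.Runtime.CompilerServices;

public class Tests
{
    // NoOptimization asks the JIT not to optimize this method, which
    // should prevent the aggressive shortening of local lifetimes;
    // NoInlining keeps the method from being merged into a caller
    // that is itself optimized.
    [MethodImpl(MethodImplOptions.NoOptimization | MethodImplOptions.NoInlining)]
    public void OptimizerStrangeness()
    {
        // ... same body as the failing test above ...
    }
}
```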

Another piece of indirect evidence: if you change the code as below (only the marked line is added), the test passes in both Debug and Release modes:

        [Test]
        public void OptimizerStrangeness()
        {
            var relay = new Relay();
            var indicator = 0;
            relay.Do = () => indicator++;
            var weak = new WeakReference(relay);

            GC.Collect();

            var relayNew = weak.Target as Relay;
            if (relayNew == null) Assert.Fail();
            relayNew.Do();
            Assert.AreEqual(1, indicator);
            Assert.NotNull(relay); // <- this line added
        }

Here the optimizer can't prove that relay's last use is trivial, so it cannot collect the object until the final line has executed.

The problem reproduces on .NET 2.0, .NET 4.0 and .NET 4.5 alike.

P.S. I actually found this problem in real code, where I have weak event handlers that treated the dead weak reference as a signal that the event's target had already been GCed, and so unsubscribed from the event source. On top of that sat an ObservableCollection<T> that used these weak events. Localizing the problem was not easy. Even reproducing the test failure was non-trivial: I usually build in Debug mode on my box, and for a good while I wondered why the tests passed on my machine but failed on three different build agents.
Phew. At least it can be fixed without re-architecting half of the app. 🙂
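For context, the failure mode in the real code looked roughly like this. This is a simplified sketch of a typical weak event handler; the class and member names are mine, not from the actual codebase:

```csharp
using System;

// A simplified weak event handler: it holds the subscriber weakly and
// unsubscribes itself once the subscriber appears to have been collected.
class WeakHandler
{
    private readonly WeakReference _target;

    public WeakHandler(object target)
    {
        _target = new WeakReference(target);
    }

    public void OnEvent(object sender, EventArgs e)
    {
        var target = _target.Target;
        if (target == null)
        {
            // If the JIT considered the subscriber dead (no further reads),
            // we wrongly conclude it was GCed and unsubscribe too early,
            // silently dropping all subsequent events.
            Unsubscribe();
            return;
        }
        // ... forward the event to 'target' ...
    }

    private void Unsubscribe() { /* detach from the event source */ }
}
```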

Update. Here is a comment I found in the .NET Framework sources for the GC.KeepAlive(object obj) method:

    // This method DOES NOT DO ANYTHING in and of itself.  It's used to
    // prevent a finalizable object from losing any outstanding references 
    // a touch too early.  The JIT is very aggressive about keeping an 
    // object's lifetime to as small a window as possible, to the point
    // where a 'this' pointer isn't considered live in an instance method 
    // unless you read a value from the instance.  So for finalizable
    // objects that store a handle or pointer and provide a finalizer that
    // cleans them up, this can cause subtle races with the finalizer
    // thread.  This isn't just about handles - it can happen with just 
    // about any finalizable resource.
    // Users should insert a call to this method near the end of a 
    // method where they must keep an object alive for the duration of that
    // method, up until this method is called.  Here is an example: 
    // "...all you really need is one object with a Finalize method, and a
    // second object with a Close/Dispose/Done method.  Such as the following
    // contrived example: 
    // class Foo { 
    //    Stream stream = ...; 
    //    protected void Finalize() { stream.Close(); }
    //    void Problem() { stream.MethodThatSpansGCs(); } 
    //    static void Main() { new Foo().Problem(); }
    // }
    // In this code, Foo will be finalized in the middle of
    // stream.MethodThatSpansGCs, thus closing a stream still in use." 
    // If we insert a call to GC.KeepAlive(this) at the end of Problem(), then
    // Foo doesn't get finalized and the stream says open. 
    [System.Security.SecuritySafeCritical]  // auto-generated
    [ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)] 
    public static extern void KeepAlive(Object obj);

Also, you may be interested in this excerpt from the C# Language Specification, section 3.9:

If the object, or any part of it, cannot be accessed by any possible continuation of execution, other than the running of destructors, the object is considered no longer in use, and it becomes eligible for destruction. The C# compiler and the garbage collector may choose to analyze code to determine which references to an object may be used in the future. For instance, if a local variable that is in scope is the only existing reference to an object, but that local variable is never referred to in any possible continuation of execution from the current execution point in the procedure, the garbage collector may (but is not required to) treat the object as no longer in use.

So now I'm convinced that this is intended behavior. Good for performance, bad for clarity. Now I know that in such situations I have to remember to put GC.KeepAlive(relay) at the end (instead of a NotNull assertion, which is of course meaningless in production code). I'd better not forget this 🙂
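The resulting fix is small. A sketch of the pattern (GC.KeepAlive simply extends the reported lifetime of its argument up to the point of the call, and compiles to nothing):

```csharp
[Test]
public void OptimizerStrangeness()
{
    var relay = new Relay();
    var indicator = 0;
    relay.Do = () => indicator++;
    var weak = new WeakReference(relay);

    GC.Collect();

    var relayNew = weak.Target as Relay;
    Assert.NotNull(relayNew);
    relayNew.Do();
    Assert.AreEqual(1, indicator);

    // Keeps 'relay' reachable up to this point, so the JIT cannot
    // treat it as dead before GC.Collect() runs.
    GC.KeepAlive(relay);
}
```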

Hail the unit-tests and hail the build server running them in Release mode!



  1. […] here for my investigation in more details, if you’re […]

    Pingback by Is it really a bug in JIT optimization or am I missing something? — 29/08/2012 @ 17:35

  2. Jeffrey Richter writes about this in CLR via C#. In short, Debug behavior is designed so that you can set a break point, go for lunch, and continue when you come back. In Release mode, nothing is sacred: anything can be garbage collected at any time.


    Comment by Bryan Cook — 22/09/2012 @ 02:55

  3. […] have been reading my blog for some time, you may recognize the symptoms as they are very similar to this case. Really, if you change test like below it will fail consistently in Release mode (in Debug it will […]

    Pingback by WeakEvents catch with tests | A place for spare thoughts — 17/04/2013 @ 19:01
