Tagged: .net

Build an interpreter in .NET

A long while ago I noted that one could build an efficient language interpreter if the runtime could inline operations more aggressively. .NET has long had an attribute called MethodImplAttribute that gives the runtime hints on how to compile a method. .NET 4.5 added one more hint: AggressiveInlining, which instructs the JIT to inline the method if possible. It took some doing, but I verified that it does inline methods most of the time. This post talks more about it. At the bottom is a comment from (probably) a .NET dev that says, “It simply turns off all our heuristics, and causes us to inline the method if the JIT is capable of inlining the method. The reason the documentation is so vague is that there are limitations to what the JIT can actually inline…”. So it works, but not always.
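As a minimal sketch of the attribute in use (the names `Ops` and `Demo` are my own, not from any library):

```csharp
using System;
using System.Runtime.CompilerServices;

static class Ops
{
    // Ask the JIT to inline this if it can; it may still refuse
    // (for example, methods with exception handlers can't be inlined).
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static int Add(int a, int b)
    {
        return a + b;
    }
}

static class Demo
{
    // In a hot loop like this, the JIT can replace the call with the
    // addition itself; the ETW trace below reports such decisions as
    // MethodJitInliningSucceeded events.
    public static int SumTo(int n)
    {
        int sum = 0;
        for (int i = 0; i < n; i++)
            sum = Ops.Add(sum, i);
        return sum;
    }
}
```

`Demo.SumTo(1000)` returns 499500 whether or not the inline happens; the attribute only changes the generated code, not the semantics.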

To determine whether inlining is working, you can scan the ETW output from the JIT as described here. I did the following:

// Run wevtutil only once on your machine. It should work fine forever after. :)
wevtutil im C:\Windows\Microsoft.NET\Framework\v4.0.30319\CLR-ETW.man
// Do the following each time.
logman start clrevents -p {e13c0d23-ccbc-4e12-931b-d9cc2eee27e4} 0x1000 5 -ets
// RUN YOUR PROGRAM IN RELEASE MODE, NOT IN THE DEBUGGER
logman stop clrevents -ets
tracerpt clrevents.etl

Search in dumpfile.xml for MethodJitInliningSucceeded and MethodJitInliningFailed events; each one gives the reason for the decision. To make this a bit easier you might be able to use Reactive Extensions to monitor the logfile as described here.

The goal is to inline and optimize the interpreter directly into the program, thus creating a compiled program. This is the first Futamura projection. This is already pretty good, but to get the 2nd and 3rd projections you’d need to do the inlining at the bytecode level at compile time, not at runtime. That way you can save the optimized assembly and specialize again. This should exist somewhere. I couldn’t find anything comparable in the JVM. Perhaps LLVM? Maybe Roslyn?
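To make the idea concrete, here’s a toy sketch (my own types, not a real interpreter): a stack VM whose opcode handlers are marked AggressiveInlining, so when the JIT compiles the dispatch loop it can fold the handlers into it, which is the raw material for the first projection.

```csharp
using System;
using System.Runtime.CompilerServices;

enum Op { Push, Add, Mul }

struct Instr
{
    public Op Op;
    public int Arg;
    public Instr(Op op, int arg) { Op = op; Arg = arg; }
}

static class Vm
{
    // Each handler is a tiny method the JIT is encouraged to inline
    // into the dispatch loop below.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    static void Push(int[] stack, ref int sp, int v) { stack[sp++] = v; }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    static void Add(int[] stack, ref int sp) { stack[sp - 2] += stack[sp - 1]; sp--; }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    static void Mul(int[] stack, ref int sp) { stack[sp - 2] *= stack[sp - 1]; sp--; }

    public static int Run(Instr[] program)
    {
        var stack = new int[64];
        int sp = 0;
        foreach (var i in program)
        {
            switch (i.Op)
            {
                case Op.Push: Push(stack, ref sp, i.Arg); break;
                case Op.Add:  Add(stack, ref sp);         break;
                case Op.Mul:  Mul(stack, ref sp);         break;
            }
        }
        return stack[0];
    }
}
```

Running `(2 + 3) * 4` as `Push 2; Push 3; Add; Push 4; Mul` yields 20. What the runtime can’t do, and what the 2nd and 3rd projections need, is specialize `Run` to a *fixed* `program` array and save the result.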

Storm w/ Reactive Extensions and Dataflow

The Apache Storm project is a distributed dataflow framework. It’s used by Twitter to process a continuous stream of tweets through a network of machines. Microsoft’s TPL Dataflow library is similar, but works only within a single process. To get an approximation of Storm in .NET, I want to convert MSMQ into a Spout.

One solution is to convert MSMQ into an Observable that pushes elements into a Dataflow block. I packaged this into the constructor for a QueueSourceBlock, which implements ISourceBlock. Here’s the code snippet:

public class QueueSourceBlock<T> : ISourceBlock<T>
{
    private readonly ISourceBlock<T> block;
    private readonly IObservable<Message> queueObservable;
    private readonly IDisposable queueDispose;

    public QueueSourceBlock(string queueAddress)
    {
        var queue = new MessageQueue(queueAddress);
        queue.Formatter = new XmlMessageFormatter(new[] { typeof(T) });
        var tb = new TransformBlock<Message, T>(m => (T) m.Body);
        block = tb;
        var queueObserveOnce =
            Observable.Defer(
                () =>
                    Observable.FromAsync(
                        () => Task<Message>.Factory.FromAsync(queue.BeginReceive(), queue.EndReceive)));

        queueObservable = Observable.While(() => true, queueObserveOnce);
        queueDispose = queueObservable.Subscribe(m => tb.Post(m));
    }

    // ISourceBlock<T> members (LinkTo, Fault, Complete, etc.) delegate to block; omitted here.
}

The FromAsync method receives only one message from the queue. Defer generates a new FromAsync call on demand, and While keeps calling the Defer forever. Whenever a message arrives from the Observable, it is Posted into the TransformBlock, which extracts the data and sends it on to the next node it’s linked to. This code doesn’t handle cancellation. AFAIK, there’s no way to cancel a pending BeginReceive on a queue, but I could support cancellation in other places.

Surprisingly, there doesn’t appear to be a way to split a stream in .NET’s Dataflow. They’ve got a JoinBlock that merges streams, but no dedicated split block (BroadcastBlock copies every message to every target rather than partitioning them). I think if you increase the parallelism in a block it behaves sort of like “shuffleGrouping” in Storm. Still, it’s a weird oversight.
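One way to approximate shuffleGrouping is to link one source to several targets with BoundedCapacity = 1: a busy target postpones the offer, so the linker hands the message to the next target instead. A sketch (my own names; requires the System.Threading.Tasks.Dataflow package):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

static class ShuffleDemo
{
    // Returns how many messages each of the two workers processed.
    public static async Task<Tuple<int, int>> Run(int messages)
    {
        var source = new BufferBlock<int>();
        int c1 = 0, c2 = 0;

        // BoundedCapacity = 1 makes a busy worker decline the next offer,
        // so the source load-balances across the linked targets,
        // roughly like Storm's shuffleGrouping.
        var w1 = new ActionBlock<int>(n => Interlocked.Increment(ref c1),
            new ExecutionDataflowBlockOptions { BoundedCapacity = 1 });
        var w2 = new ActionBlock<int>(n => Interlocked.Increment(ref c2),
            new ExecutionDataflowBlockOptions { BoundedCapacity = 1 });

        source.LinkTo(w1, new DataflowLinkOptions { PropagateCompletion = true });
        source.LinkTo(w2, new DataflowLinkOptions { PropagateCompletion = true });

        for (int i = 0; i < messages; i++)
            await source.SendAsync(i);
        source.Complete();
        await Task.WhenAll(w1.Completion, w2.Completion);
        return Tuple.Create(c1, c2);
    }
}
```

Every message goes to exactly one worker, but the split between the two depends on timing, which is why it only approximates a shuffle rather than guaranteeing an even partition.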

Simple Dropbox client w/ Reactive Extensions

Reactive Extensions is ideally suited to the task of monitoring a directory for changes. The events from FileSystemWatcher are workable, but forwarding them naively isn’t efficient. The goal is to send as little data as possible to the server: if a file is moved or copied, there’s no need to upload its contents again. Instead, the client should recognize the event and simply send a Moved or Copied message pointing to the original file on the server.

Here’s a simple prototype. I left out all the code to track the files with hashes. Tracking directories gets a little trickier: if a directory is deleted, the entire subtree is deleted with it. You can use SQLite with the closure.c extension to track hierarchical data.

Anyway, this proof-of-concept is easy.

    public class DropboxClient
    {
        private readonly FileSystemWatcher watcher;

        public IObservable<DropboxEventArg> FileSystemObservable { get; private set; } 

        public DropboxClient(string home)
        {
            watcher = new FileSystemWatcher
            {
                Path = home,
                EnableRaisingEvents = true,
                IncludeSubdirectories = true
            };

            SetupRx();
        }

        private void SetupRx()
        {
            var changed = Observable.FromEventPattern<FileSystemEventArgs>(watcher, "Changed");
            var created = Observable.FromEventPattern<FileSystemEventArgs>(watcher, "Created");
            var deleted = Observable.FromEventPattern<FileSystemEventArgs>(watcher, "Deleted");
            var renamed = Observable.FromEventPattern<FileSystemEventArgs>(watcher, "Renamed");

            // The watcher often fires several change events for a single edit
            // (timestamp, file size, etc.), so collapse consecutive events for
            // the same path. The key selector matters: EventPattern instances
            // are distinct objects, so a bare DistinctUntilChanged() lets everything through.
            var dbchanged = changed
                .DistinctUntilChanged(fe => fe.EventArgs.FullPath)
                //.Do(UpdateRecord)
                .Select(fe => new DropboxEventArg(fe.EventArgs));

            // This seems to work fine, I think
            var dbrenamed = renamed.Select(fe => new DropboxEventArg(fe.EventArgs));

            // Deleted is ok, too
            var dbdeleted = deleted
                //.Do(DeleteRecord)
                .Select(fe => new DropboxEventArg(fe.EventArgs));

            // If file already exists, then a created file is a copy of another file
            var dbcreated = created
                .Select(fe =>
                {
                    if (FileExists(fe.EventArgs.FullPath))
                        return new DropboxEventArg(fe.EventArgs, DropboxChangeTypes.Copied);
                    else
                    {
                        //CreateRecord(fe);
                        return new DropboxEventArg(fe.EventArgs);
                    }
                });

            FileSystemObservable = dbchanged.Merge(dbrenamed).Merge(dbdeleted).Merge(dbcreated);
        }

        private void CreateRecord(EventPattern<FileSystemEventArgs> fe) {
            // Create row in repo
            throw new NotImplementedException();
        }

        private void UpdateRecord(EventPattern<FileSystemEventArgs> obj)
        {
            // If file size is different, rehash and update
            // If dir, maybe do nothing. Not sure.
            throw new NotImplementedException();
        }

        private void DeleteRecord(EventPattern<FileSystemEventArgs> obj) {
            // Delete file from repository
            // If directory, delete entire subtree from repo
            throw new NotImplementedException();
        }

        private bool FileExists(string fpath)
        {
            // If file, hash and lookup 
            // If dir, maybe do nothing
            return false;
        }
    }

    public enum DropboxChangeTypes
    {
        Changed,
        Created,
        Deleted,
        Renamed,
        Moved,
        Copied
    }

    public class DropboxEventArg
    {
        public DropboxChangeTypes ChangeType;
        public string FullPath;
        public string Name;

        public DropboxEventArg()
        {
        }

        public DropboxEventArg(FileSystemEventArgs fe)
        {
            FullPath = fe.FullPath;
            Name = fe.Name;
            switch (fe.ChangeType)
            {
                case WatcherChangeTypes.Changed:
                    ChangeType = DropboxChangeTypes.Changed; break;
                case WatcherChangeTypes.Created:
                    ChangeType = DropboxChangeTypes.Created; break;
                case WatcherChangeTypes.Deleted:
                    ChangeType = DropboxChangeTypes.Deleted; break;
                case WatcherChangeTypes.Renamed:
                    ChangeType = DropboxChangeTypes.Renamed; break;
            }
        }

        public DropboxEventArg(FileSystemEventArgs fe, DropboxChangeTypes ct)
        {
            FullPath = fe.FullPath;
            Name = fe.Name;
            ChangeType = ct;
        }
    }

Probabilistic Reactive Extensions: ProbablyDistinct

Reactive Extensions has an operator called Distinct. As data streams through, it filters out any items it has already seen, allowing only unique items through to OnNext. The problem is that Distinct stores all the unique items in a HashSet, which means it will consume a lot of memory if the number of unique items is large. The solution is to implement a ProbablyDistinct operator that uses a Bloom filter to track the items seen so far. A Bloom filter is a very compact data structure that tests each item and replies “item is probably in the set” or “item is definitely not in the set”. In this case there will be some false positives (it says an item is in the set when it actually isn’t), so the occasional genuinely new item gets dropped. But it’s a tradeoff some applications might need, particularly long-running server apps that see lots of unique data items.

More broadly, it would be useful to have a few Rx operators that use probabilistic data structures. For example, HyperLogLog can estimate the count of large numbers of distinct items using very little storage. Another useful operator is the opposite of Distinct (Indistinct? SeenItBefore?). This is how most people use a Bloom filter: rather than doing an expensive DB lookup directly, first check the filter to see if the item is probably in the DB, and only then run the query.
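For reference, here’s a minimal Bloom filter sketch (my own code, BCL-only, not tuned for production) of the kind that could back such operators. It uses double hashing to derive k probe positions from one hash code, which is a common simplification:

```csharp
using System;
using System.Collections;

public class BloomFilter<T>
{
    private readonly BitArray bits;
    private readonly int k;

    public BloomFilter(int sizeInBits, int hashes)
    {
        bits = new BitArray(sizeInBits);
        k = hashes;
    }

    // Derive the i-th probe from two hashes (double hashing). Using
    // GetHashCode as the base hash is an assumption good enough for a demo.
    private int Probe(T item, int i)
    {
        int h1 = item.GetHashCode();
        int h2 = h1 ^ (h1 >> 16) | 1;   // force odd so probes spread
        return (int)((uint)(h1 + i * h2) % (uint)bits.Length);
    }

    public void Add(T item)
    {
        for (int i = 0; i < k; i++)
            bits[Probe(item, i)] = true;
    }

    // True means "probably in the set"; false means "definitely not".
    public bool MightContain(T item)
    {
        for (int i = 0; i < k; i++)
            if (!bits[Probe(item, i)]) return false;
        return true;
    }
}
```

The key property for ProbablyDistinct is that there are no false negatives: an added item always reports present, so a true duplicate is never re-emitted; only the false-positive rate (dropped new items) grows as the filter fills.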

Here’s the source for Distinct. All you have to do is replace the HashSet with a Bloom Filter (+ minor code tweaks).

        private static IObservable<TSource> Distinct_<TSource, TKey>(IObservable<TSource> source,
            Func<TSource, TKey> keySelector, IEqualityComparer<TKey> comparer)
        {
            return new AnonymousObservable<TSource>(observer =>
            {
                var hashSet = new HashSet<TKey>(comparer);
                return source.Subscribe(
                    x =>
                    {
                        var key = default(TKey);
                        var hasAdded = false;

                        try
                        {
                            key = keySelector(x);
                            hasAdded = hashSet.Add(key);
                        }
                        catch (Exception exception)
                        {
                            observer.OnError(exception);
                            return;
                        }

                        if (hasAdded)
                            observer.OnNext(x);
                    },
                    observer.OnError,
                    observer.OnCompleted
                    );
            });
        }

Use AppDomain recycling for reliable systems

ASP.NET does it, so why not you? There are two reasons restarting an AppDomain is useful for a reliable .NET server program. First, the only way to change the assemblies loaded into an AppDomain is to restart it. This lets you upgrade a program quickly without taking it completely down, though you should probably buffer the inputs somehow to cover the split second your program is offline. Second, when your program inevitably breaks, it is often better to restart it than to fail outright. If you design your program so major subsystems live in their own AppDomains, then a master program can monitor their health and restart the ones that fail. Erlang does the same thing with its processes.

The way to do this is to use .NET Remoting. The entry point into each subsystem will likely be a class factory that inherits from MarshalByRefObject. Then you write a master program that creates a new AppDomain (AppDomain.CreateDomain) and accesses the factory class (AppDomain.CreateInstanceFromAndUnwrap). When the AppDomain needs to be restarted, unload it (AppDomain.Unload) and reload it. Remote method calls across the AppDomain boundary are very fast. The hard part is designing a program to use this technique effectively.
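A skeleton of the pattern (my own names; this requires the .NET Framework, since .NET Core/5+ dropped support for creating AppDomains):

```csharp
using System;

// The factory crosses the AppDomain boundary by reference, so it must
// inherit from MarshalByRefObject.
public class WorkerFactory : MarshalByRefObject
{
    public string DoWork(string input)
    {
        return "processed: " + input;
    }
}

public static class Supervisor
{
    // Each iteration stands in for one "generation" of a subsystem:
    // create a domain, hand work to it, then recycle it.
    public static void RunGenerations(int generations)
    {
        for (int g = 0; g < generations; g++)
        {
            var domain = AppDomain.CreateDomain("worker-" + g);
            var factory = (WorkerFactory)domain.CreateInstanceAndUnwrap(
                typeof(WorkerFactory).Assembly.FullName,
                typeof(WorkerFactory).FullName);

            Console.WriteLine(factory.DoWork("job " + g));

            // On failure, or to pick up new assemblies, unload and
            // recreate the domain on the next iteration.
            AppDomain.Unload(domain);
        }
    }
}
```

In a real supervisor the loop would be driven by health checks rather than a counter, and the factory would hand back interfaces to the actual subsystem objects.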

Some notes on .NET performance

Rico Mariani gave a talk based on a paper on .NET performance. It’s mostly what you’d expect from a managed runtime. Over here someone took the time to detail the performance of most IL instructions for .NET 1. The exact numbers matter less than their magnitude. I wonder if lambdas in C# 3.0 are still as expensive as delegates (8X the cost of an instance call). I do wish someone would explain when the JIT will inline code; that could go a long way toward writing more efficient libraries. The thing about performance is that it’s more than merely learning to use a language/runtime. It’s also about conforming to the style and expectations of the language/runtime implementors.

F# relies on slow .NET features

I reported earlier that the N-Queens problem in F# was too slow. I translated the benchmark into C# and it ran 2X faster than F#. It was essentially the same program… lots of recursion and list processing. So how can F# be 2X slower than C# for the same code? One simple instruction: isinst. I learned long ago that type checking on the CLR is too slow for dynamic languages. Though F# is statically typed, it still performs runtime type checks on discriminated union types. Specifically, operations on lists must always check whether the list is empty, and F# does this by checking whether the object is the Nil class (“obj is Nil” in C# terms). Type checking is extremely slow and should be avoided whenever possible in hotspots, and list processing is definitely a hotspot.

When I rewrote the N-Queens program I used my own list class, which used null to represent Nil. Checking “obj == null” is extremely fast. I then modified my list to use a Nil class, and performance dropped to F# speed. The solution I used in my Scheme compiler was to use null to represent the end of the list. An alternative is to skip the nil check and wrap the list access in a try-catch: it only fails in the exceptional situation anyway, since properly written code checks for nil before it calls hd. This should speed up list performance dramatically.
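A C# sketch of the two representations the paragraph compares (my own toy types, with the end-of-list tests marked):

```csharp
using System;

// Null-terminated cons list: the end-of-list test is a cheap null compare.
class Cons
{
    public readonly int Head;
    public readonly Cons Tail;   // null means the empty list
    public Cons(int head, Cons tail) { Head = head; Tail = tail; }
}

// F#-style representation: a distinct Nil class, so the end-of-list
// test is an isinst type check ("node is Nil").
abstract class FList { }
sealed class FNil : FList { public static readonly FNil Instance = new FNil(); }
sealed class FCons : FList
{
    public readonly int Head;
    public readonly FList Tail;
    public FCons(int head, FList tail) { Head = head; Tail = tail; }
}

static class ListSum
{
    public static int SumNull(Cons list)
    {
        int sum = 0;
        for (var p = list; p != null; p = p.Tail)   // null check
            sum += p.Head;
        return sum;
    }

    public static int SumIsinst(FList list)
    {
        int sum = 0;
        var p = list;
        while (!(p is FNil))                        // isinst type check
        {
            var c = (FCons)p;
            sum += c.Head;
            p = c.Tail;
        }
        return sum;
    }
}
```

Both sum a list the same way; the only difference is the termination test in the loop, which is exactly where the isinst cost shows up when the loop is hot.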

F# performance on N-Queens problem

I’m writing a bunch of Scheme benchmarks in F#. The N-Queens problem is determining how to place 8 queens on an 8×8 chess board so that no queen attacks another. Computing all 92 solutions does a fair amount of recursive function calls and list processing. Doing this 2000 times with F# takes about 17 seconds. For comparison, it takes 26 seconds with Scheme48 (a Scheme interpreter), while compiled implementations of Scheme can solve this problem in 2 to 3 seconds. Why is F# only 2X faster than Scheme48? I’ll report the rest of the benchmarks later.
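For reference, a compact C# solver (my own code, using bitmasks for attacked columns and diagonals rather than the list-based style the benchmark measures) that confirms the 92-solution count:

```csharp
using System;

static class NQueens
{
    public static int Count(int n)
    {
        return Count(n, 0, 0, 0, 0);
    }

    // cols, d1, d2 are bitmasks of attacked columns and the two
    // diagonal directions; each row places one queen.
    private static int Count(int n, int row, int cols, int d1, int d2)
    {
        if (row == n) return 1;
        int count = 0;
        for (int c = 0; c < n; c++)
        {
            int colBit = 1 << c;
            int d1Bit = 1 << (row + c);
            int d2Bit = 1 << (row - c + n - 1);
            if ((cols & colBit) != 0 || (d1 & d1Bit) != 0 || (d2 & d2Bit) != 0)
                continue;
            count += Count(n, row + 1, cols | colBit, d1 | d1Bit, d2 | d2Bit);
        }
        return count;
    }
}
```

`NQueens.Count(8)` returns 92. A bitmask solver like this is much faster than the cons-list version, but the benchmark deliberately uses recursion and lists because that’s what it is trying to measure.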

Overriding extension methods in C# 3.0

A language that properly supports modularity must deal with name collisions. In C# 3.0, extension methods allow us to dynamically add new methods to another class. So what happens when methods collide? If I add the same method signature (name + parameters) to a class, what will the compiler do?

  • If I include two different namespaces, and both add the same method, the compiler reports that my method call is ambiguous. That’s good. But how can I explicitly specify which method I want?
  • If the method exists in another namespace and I add the same method, the compiler does not report that I’ve modified an existing method. This means my program could change if anyone working in the same namespace accidentally modifies another method. The compiler should warn that I’ve changed an existing method.
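On the first point, there is an escape hatch: an extension method is just a static method, so you can bypass the ambiguity by calling it with ordinary static-method syntax. A sketch (namespaces and method names are mine, for illustration):

```csharp
using System;

namespace LibA
{
    public static class StringExtensions
    {
        public static string Shout(this string s) { return s.ToUpper() + "!"; }
    }
}

namespace LibB
{
    public static class StringExtensions
    {
        public static string Shout(this string s) { return s + "?!"; }
    }
}

namespace App
{
    using LibA;
    using LibB;

    static class Demo
    {
        public static string Disambiguate()
        {
            // "hi".Shout() would not compile here: with both namespaces
            // imported, the call is ambiguous. Static-method syntax
            // names the class explicitly:
            string a = LibA.StringExtensions.Shout("hi");   // "HI!"
            string b = LibB.StringExtensions.Shout("hi");   // "hi?!"
            return a + " " + b;
        }
    }
}
```

It’s verbose, and it forfeits the fluent call syntax that makes extension methods attractive in the first place, but it does let you pick a winner explicitly.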