The Computer Language Benchmarks Game explains the limits of comparing the performance of different programming languages (PL) using simplistic benchmarks. The issue is how to map the performance of real programs to a new language using data from simple benchmarks. How would my Java program perform if written in Haskell? People seem to have given up on this sort of comparison, but I think it’s possible to generate a decent approximation.
The first step is to break down the performance of a real program using detailed profiling data. The cost of all OS and external calls (files, memory, databases, networking, etc.) should be removed from the total, because that time is not controlled by the application. The remaining time is the real cost of the PL. This data can be broken down into the percentage of time spent on object allocation, function calls, int & float math, loops, and other low-level operations. A suite of micro-benchmarks that exercise these same low-level operations can then be used to map between languages. For example, if object allocation takes 5% of total execution time and language B is 2X faster in that benchmark, then rewriting the app in B might improve performance by 2.5% — the 5% slice shrinks to 2.5%, and the rest is unchanged.
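This extrapolation can be sketched in a few lines. The profile fractions and benchmark ratios below are hypothetical, chosen to reproduce the object-allocation example above; a real estimate would plug in measured profiling data and micro-benchmark results.

```python
# Hypothetical profile: fraction of application-controlled time spent in
# each low-level operation (OS/external call time already removed).
profile = {
    "object_allocation": 0.05,
    "function_calls":    0.20,
    "int_float_math":    0.30,
    "loops":             0.25,
    "other":             0.20,
}

# Hypothetical micro-benchmark ratios: time in language B / time in language A
# for the same operation. 0.5 means B is 2X faster at that operation.
ratios = {
    "object_allocation": 0.5,
    "function_calls":    1.0,
    "int_float_math":    1.0,
    "loops":             1.0,
    "other":             1.0,
}

def estimated_time_fraction(profile, ratios):
    """Projected run time in language B as a fraction of the original run time:
    scale each profile slice by B's ratio for that operation and sum."""
    return sum(frac * ratios[op] for op, frac in profile.items())

new_time = estimated_time_fraction(profile, ratios)
print(f"projected time: {new_time:.3f} of original")  # 0.975, i.e. a 2.5% improvement
```

This is just Amdahl-style accounting: only the slice that a benchmark touches can shrink, so a large per-benchmark speedup on a small slice yields a small overall gain.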
Of course, this calculation is extremely crude, but the goal is to get a sense of the difference in scale. I would only act on benchmarks that show significant differences, well over 2X. You shouldn't rewrite an application in another language unless you can easily double performance, so small differences between languages are a wash. For example, the performance difference between OCaml and C++ is usually small enough that it would be acceptable to use OCaml. In fact, if the time taken by your application code is a small fraction of total execution time (most time is spent in IO and the DB), then any slow scripting language will suffice.