It seems obvious that speculators are driving up prices of commodities, particularly oil. However, Paul Krugman doesn’t think so. I still don’t understand the weird chart he drew, nor do I understand any of his argument. Take a look at the chart I drew from data gathered by the Dept. of Energy. From 2000 to today, world oil consumption and production have gone up 19%, whereas prices have gone up about 450%.

A big chunk of that price rise is due to the dollar falling hard in the last few years, like 40% against the Euro. When the dollar goes down, prices on imports go up.

The reason speculators are in the market is that there’s about $70 trillion of global savings looking for something to buy. They tried to put it into global housing markets, but that created a huge bubble that has now burst. Now some of that money has drifted into commodities. If traders bid wildly against each other for oil futures contracts, the price gets driven up just as it did in the Internet bubble. Ultimately the traders have to sell their oil contracts to real oil consumers like gas refineries. The refineries buy the oil at any price, turn it into fuel, and pass the cost down to consumers, who pay $4 a gallon for gas.

The big problem is that demand does not go down in the face of far higher prices, and there’s no substitute for oil. So how high can prices go? They will continue to climb until demand goes down. Once demand edges downward, speculators will dump their contracts and the price of oil will collapse to something more reasonable.

If this is such easy money for traders, why didn’t they jump in before? I don’t know, but a friend at an investment bank says hedge funds and banks didn’t start trading oil heavily until about 2003, which is exactly where the chart below really soars.

Speculators and a falling dollar are the main reasons oil prices are going up. [ed: Barron's cover story agrees with me.]
Archive for May, 2008
I’ve decided to use Bazaar as my distributed version control system for personal projects. Like everyone else, I was also considering Git and Mercurial. Here’s a nice comparison of DVCS systems. My priorities are (1) that it be easily portable across platforms and (2) that it be very easy to use. Git loses on both counts, while the other two are tied. I don’t care at all about performance because my projects are small.

The feature that broke the tie between Bazaar and Mercurial is renaming. Bazaar treats renaming as a primitive operation, whereas Mercurial treats it as a copy and a delete. The result is that Mercurial doesn’t show the log of the “copied” file unless you explicitly say “hg log --follow”. In the beginning of my projects, I rename files and directories a lot to make things more manageable, so I want a system that makes renaming easy and obvious.

Nevertheless, I’ll be forced to use all the other DVCS tools if I want to tweak open source code, so picking Bazaar doesn’t change the fact that I still have to learn the other two. Thankfully, their main operations are so similar that it shouldn’t take long to figure them out. [ed: I'm using etckeeper with Git to track changes in my /etc directory]
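The difference shows up right away on the command line. A sketch, assuming both tools are installed inside an initialized repository (the file name is made up):

```shell
# Mercurial records a rename as a copy plus a delete
hg mv notes.txt TODO.txt && hg commit -m "rename"
hg log TODO.txt            # history stops at the rename
hg log --follow TODO.txt   # full history, but only with the extra flag

# Bazaar treats the rename as a primitive operation
bzr mv notes.txt TODO.txt && bzr commit -m "rename"
bzr log TODO.txt           # full history, no flag needed
```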
Lots of people are commenting on Steve Yegge’s talk “Dynamic Languages Strike Back”, so I’ll add my 2 cents. I’m a hardcore Scheme & Lisp programmer, but even I will reluctantly admit that static typing is important and useful. The primary reason I still use Scheme is that I won’t ever give up macros. On to the talk: Yegge spends too much time arguing that the performance of dynamic languages can match static languages, especially since many static languages (Java & C#) use reflection as a poor man’s dynamism anyway. He makes the classic “sufficiently smart compiler” argument, which rarely matches reality. Regardless, the raw speed of a programming language is not usually a problem for larger systems, because I/O is usually the bottleneck. So the great bulk of his talk on performance is pointless. Yes, dynamic languages could be very fast, but they won’t be, and it usually doesn’t matter anyway.
Performance is the least significant feature of static typing. Instead, static typing is really an automated code review by the compiler. More importantly, it can check very large programs with many developers and lots of evolving components. The fact is that programmers use idioms in dynamic languages to provide type hints to other programmers, but not to the compiler. Furthermore, dynamically typed programs aren’t usually that dynamic. You create structures with expected types on them, and that strong typing influences the rest of your program. If a name is a string, you won’t want to assign a symbol or char there. But when you do make a mistake, a runtime error dumps you into the debugger and you have to dig around to find the offending statement. This is easy when it’s all your code, but hard when it’s a very large system written by lots of different people with different programming styles.
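To make that concrete, here’s a tiny Python sketch (the names are made up): the record is strongly typed by convention, but nothing enforces the convention, so the failure surfaces far from the actual bug.

```python
def make_user(name):
    # convention: name is a string, but nothing checks it
    return {"name": name}

def greeting(user):
    # assumes user["name"] is a string
    return "Hello, " + user["name"].upper()

user = make_user(42)      # the real bug is here...
try:
    print(greeting(user))
except AttributeError as e:
    # ...but the failure surfaces two calls away, at runtime
    print("runtime error:", e)
```

In a small script the offending line is easy to find; in a large system the bad value may cross many modules before anything blows up.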
So if most dynamic programs are mostly strongly typed anyway, why not tell the compiler about it so it can double-check your code? The problems with static type systems are (1) they are complex and (2) the compiler error messages are bizarre. A minor problem is that type systems sometimes can’t express something you’re trying to do, but there’s usually a kludge for those cases. I think deciphering (2) is still easier than chasing the runtime errors you get in a dynamic language, but it does take some time to learn how to read those weird error messages. Shouldn’t someone be working on this? I have some sympathy for (1), but I think programmers need to change their mindset. It’s better to write correct code slowly than to write buggy code quickly. Though it takes longer to figure out the magic incantation to make a Haskell compiler happy, it results in far fewer bugs. At the very least, you can write your prototypes in a dynamically typed language and production code in a statically typed language. Then you get the best of both worlds.
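The “tell the compiler about it” idea can be sketched even in a dynamic language. Python later grew optional type annotations (this post predates them), which a static checker such as mypy can verify before the program runs; the names below are made up:

```python
def make_user(name: str) -> dict:
    # the convention "name is a string" is now written down
    return {"name": name}

def greeting(user: dict) -> str:
    return "Hello, " + user["name"].upper()

print(greeting(make_user("ada")))   # Hello, ADA
# A static checker would flag make_user(42) ("int is not str") before the
# program ever runs; the unannotated version fails only at runtime.
```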
This link describes how to install some modified drivers required to connect to the Canon MP130 printer. Curiously, VMware Player installs a VMware_Virtual_Printer, which directs output through a strange tpvmlp driver. This is briefly described here as a feature for ACE. When I tried to use it, it crashed CUPS and I couldn’t restart it without removing the tpvmlp file in /usr/lib/cups/backend. I should delete it so I’m not tempted to use it in the future.
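For reference, clearing it out is a one-liner. A sketch, assuming the path above (the CUPS restart command varies by distro):

```shell
# remove the broken tpvmlp backend so CUPS can start again
sudo rm /usr/lib/cups/backend/tpvmlp
sudo /etc/init.d/cupsys restart   # Ubuntu of that era; often "cups" elsewhere
```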
I’m probably the last person to realize this, but VMware Player is vastly better than Server for interactive use. I’ve been using Server for a long time and have always been irritated by the slow, stuttering graphics. A comment somewhere explained that the Server window uses remote communication to connect to the VM, i.e. it’s like using VNC or Remote Desktop. Player runs the VM directly, which optimizes the graphics pathway. I still don’t know if I need VMware Tools for better performance; I installed them anyway, but I’ve not noticed any difference. I’ll try the open VMware tools later.