Python Notes

Sunday, February 20, 2005

A curious remark about code structure in Python

Python never ceases to surprise me. Today I noticed something really simple; it's one of those things that can go unnoticed for ages, but makes the code read better.

When coding in several other languages -- C, C++, or Delphi -- there's the need to declare things before using them. As a consequence, class declarations tend to be organized to satisfy this restriction (of course, there are forward declarations, but that's the practice anyway). When debugging, or mentally tracing a sequence of calls, or simply doing a review of the code, more often than not we end up reading the code 'backwards' - the most common entry points are usually located at the end of the source file, and the more specific methods are located before them. After some time, we get used to it, and we don't notice it anymore.

Then today I noticed, by accident, that the ordering of the methods in my Python classes is much more intuitive. Starting with __init__, the methods are organized in a clear 'top-down' approach. The effect is that the source code can be comfortably read from top to bottom, as if it were an article.

I don't know about other programmers' 'mental models' in this regard, but it seems to me that we are able to mentally 'push' the yet-undefined symbols and check them further down the code. Whatever it is, I found it very interesting, and yet another strong point for Python.
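As a small illustration (the class and method names here are invented, not from any real project), a Python class can be laid out so that it reads top-down, entry point first:

```python
class ReportBuilder:
    """Hypothetical example: methods ordered so the class reads top-down."""

    def __init__(self, records):
        self.records = records

    def build(self):
        # The entry point comes first; the helpers it calls follow below.
        return self.format_lines(self.filter_records())

    def filter_records(self):
        # Keep only the records marked as active.
        return [r for r in self.records if r.get('active')]

    def format_lines(self, records):
        # One line per record, name only.
        return [r['name'] for r in records]


report = ReportBuilder([{'name': 'a', 'active': True},
                        {'name': 'b', 'active': False}]).build()
print(report)
```

Reading it from the top, you meet build first and only then the helpers it relies on -- the opposite of the usual C or Delphi layout.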

Friday, February 18, 2005

Trac is great!

Well, I may be exaggerating a little bit, but for people like me who easily get lost in the transition from the big picture to the details of the software, Trac is a great tool. My initial experience with Trac was due to the CherryPy project, which adopted it a while ago. Later, I participated in a private project that also relied on Trac for documentation and ticket management. It's a great tool. It lives up to the promise on its web page: to be an unobtrusive tool for software development management. It's still a 0.8 release, but if it keeps going this way, it's going to be a great success.

That said, I had some troubles setting up Trac in my main development machine. I had a Linux box, running Ubuntu Warty. I tried to download Trac in several formats: the official 0.8 release from SourceForge, the subversion-hosted version, and the Debian package. I installed all dependencies (for example, ClearSilver, a high-performance template package written in C and with bindings for Python). But I couldn't make it work. It always complained about something wrong with the database. I gave up, waiting for a new release to try again.

This week, I had a big problem with my PC, due to some mixups between packages from Debian unstable and Warty (I know, I was not supposed to be doing that, but I needed to use some stuff from unstable, and I had to try). It seemed to me to be a good opportunity to move to Hoary, the upcoming version of Ubuntu. It involved some risk, because I was going to rely on inherently unstable stuff. And the ride was rough, it must be said, but in the end -- and after a good day of downloading fixes and tracking dependencies -- I had everything working in better shape than before. Now I have the newest version of Eric working (one of the limitations of Warty was its support for new Qt versions; the situation improved in Hoary). And at last, I decided to try Trac again.

I resumed working from the 0.8 tar file that I had downloaded a few weeks ago. I installed it with the standard 'setup.py install' invocation, which worked flawlessly. My customizations for Apache were kept in the upgrade, so I didn't have to redo them. But when I tried to run it, it complained about a missing neo_cgi module. It turned out that I only had ClearSilver for Python 2.3, and the new Hoary runs Python 2.4 by default. At this point I had to make up my mind on whether to hunt for new packages in the repository, or to solve it in a practical but less apt-friendly way, by installing such extra packages from the source distribution. I ended up choosing the latter route. So I'm not relying on the official repository for such stuff, which is a shame, but it made things more practical for me at this point.
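In hindsight, the quickest way to see which interpreter is missing a binding is just to try the import under that interpreter. A small sketch (neo_cgi is the module from my case; any module name works):

```python
import sys

def has_module(name):
    # Try the import under the current interpreter and report the result.
    try:
        __import__(name)
        return True
    except ImportError:
        return False

print(sys.version.split()[0])   # which Python is actually running
print(has_module('neo_cgi'))    # the ClearSilver binding from the post
```

Running this under each installed Python makes it obvious which one has the package and which one doesn't.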

I installed ClearSilver from source (which complained quite a lot about some settings -- again because the standard installation script was broken for both Python 2.3 and Python 2.4, which is kind of weird). The fix is simple, and involves a few patches to the configure script:

# include 2.4 in the python_versions string
python_versions="2.4 2.3 2.2 2.1 2.0 1.5 22 21 20 15"
# remove or comment out the line below and substitute the other one
# PYTHON_SITE=`$python_bin -c "import site; print site.sitedirs[0]"`
PYTHON_SITE=`$python_bin -c "import os,sys; print os.path.join(sys.prefix, 'lib', 'python'+sys.version[:3], 'site-packages')"`
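For the record, the substituted one-liner just computes the running interpreter's site-packages directory by hand; spelled out in full (this path layout assumes a typical Linux install):

```python
import os
import sys

# Build <prefix>/lib/pythonX.Y/site-packages for the running interpreter.
version = 'python%d.%d' % sys.version_info[:2]
site_dir = os.path.join(sys.prefix, 'lib', version, 'site-packages')
print(site_dir)
```

The original script asked the site module for the directory, which broke; computing it from sys.prefix sidesteps that.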

The only other package missing was SilverCity, which is used for 'pretty printing' source code in several languages. I did the same: downloaded the source and installed it. It went smoothly. From this point on, Trac was running. But running it under CGI is slow. As I am now running it for private use, I decided to try the standalone tracd daemon. It's much faster. There were a few issues with authentication that I was able to solve (using Apache-style htdigest files), and now it seems to be running pretty reliably.
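For reference, the htdigest files are plain 'user:realm:hash' lines, where the hash is the MD5 of 'user:realm:password'. A little generator (the user and realm names are just examples, not from my setup):

```python
import hashlib

def htdigest_line(user, realm, password):
    # The digest covers 'user:realm:password', per the Apache htdigest format.
    raw = '%s:%s:%s' % (user, realm, password)
    digest = hashlib.md5(raw.encode()).hexdigest()
    return '%s:%s:%s' % (user, realm, digest)

print(htdigest_line('admin', 'trac', 'secret'))
```

Appending the printed line to the digest file is equivalent to running Apache's htdigest utility for that user.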

Thursday, February 17, 2005

PC virtualization is coming of age

Well, this is not exactly a post about Python, but it relates a lot to my development experiences. I've been interested in PC virtualization techniques for a long time. I was an early user of VMWare, back when the license (and the price) were not so restrictive. I followed projects such as Bochs, which is a fine piece of software but too slow for practical use; and later Plex86, which for some reason never managed to make it. For some applications, User Mode Linux is a good solution; it isn't as general as VMWare, but it is already used for a lot of stuff, including the testing of Linux distributions, which requires a clean and controlled environment, and even web hosting.

Over the last week, I came across two projects which raised my interest in the subject once again. The first one was QEMU, which aims to be a VMWare-class virtualization software. It already supports several guest OSs, including some members of the Windows family. However, the project recently found itself in the middle of a big licensing discussion. The project author released the QEMU Accelerator Module -- a special binary-only module that greatly improves the performance of the system, and that seems reminiscent of the techniques used by VMWare itself -- under a free-to-use but still proprietary license. He had gone to great lengths to ensure that the new module could be used as a plug-in, without disturbing the free part of the system; but even these precautions didn't help him. The community seems to be split now, and that's not good news.

The other project came to me via Red Hat, but the project home is hosted at the University of Cambridge: Xen is a virtual machine monitor, and it sits somewhere between the older User Mode Linux and VMWare. The approach is described as paravirtualization; the guest OS needs to know that it's running inside a VM environment. This makes the life of the VM monitor easier and improves its performance, but takes away some of the flexibility of the system. The performance is potentially higher than that of a similar User Mode Linux installation.

In the end, what I found most interesting in these new developments was a small note in the Xen FAQ. On the question about support for Windows as a guest OS, the FAQ says that new developments in x86 chips from Intel and AMD will make this support easier. It seems that both manufacturers have finally awakened to the possibility of full virtualization, and are including all the necessary hooks in the chips themselves. This is definitely good news, and it's a sign that PC virtualization is coming of age.

Friday, February 04, 2005

What's the U in URI?

It's nice when one finds someone else talking about something that has seemingly been evading one's own understanding. Permanent URIs (or URLs, for old hats like myself) are one such beast, at least for me. Roberto de Almeida wrote a good article on URLs for blogs a little while ago. I only found the link today, but it still reads fresh. It helped me to organize my thoughts on this subject in some surprising ways.

The U in URI can mean two things: U as in uniform, or U as in universal. According to the W3C, the standard acronym reads as the former. It means (at least for me) that the format is uniform; in other words, a URI has a definite format that can be uniformly parsed and understood by the agents. It's pretty much computer jargon, an arrangement between two computers on how to talk to each other. However, it's the second interpretation that carries more meaning for us human beings.
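The 'uniform' part is exactly what lets code pick a URI apart mechanically. A quick sketch with Python's urllib.parse (the URL itself is made up):

```python
from urllib.parse import urlparse

uri = urlparse('http://example.com/blog/2005/02/whats-the-u-in-uri?ref=feed')
print(uri.scheme)   # 'http'
print(uri.netloc)   # 'example.com'
print(uri.path)     # '/blog/2005/02/whats-the-u-in-uri'
print(uri.query)    # 'ref=feed'
```

Every agent on the network can slice the string into the same components -- which says nothing, of course, about whether the content behind it stays put.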

A permanent link should be just that: permanent, and also univocally associated with a piece of content. We humans are particularly well equipped to deal with content in multiple -- and quite often seemingly contradictory -- formats. We can easily recognize that the content is the same despite a slightly different format. It seems to me that what matters in a URI for us, human users, is the universality. I wish to be able to enter the same URI, at any point in the network, at any given time, and get to the same content. That's the point. All the rest is (quite probably) computer speak. Not that it doesn't matter -- it does, for practical reasons. But in human terms, universality is what counts.