Originally Published: Wednesday, 15 November 2000 Author: Jason Tackaberry
Published to: develop_articles_tutorials/Development Tutorials

Programming with Python - Part 3: Extending Python

In Part 3, we're going to look closely at Python's extensibility, and in particular the Python/C API. Once you have a firm grasp on the API, embedding Python is not a huge challenge.


Programming with Python series
1. Baby Steps
2. The Real World
3. Extending Python

The first two parts of this series gave you a pretty compelling shove into the world of Python. By now you have a firm grasp on Python fundamentals, and are in a good position to approach any project with Python.

There is another side to Python that we haven't looked at yet, however. Two of Python's characteristics briefly mentioned in Part 1 were extensibility and embeddability. When a language (or any system, for that matter) is extensible, it means it can be modified to perform new tasks not part of the original system, or altered so that existing tasks function differently. If it is embeddable, it can be linked with a separate system to provide the functionality of one system to the other. In Python's case, it is typically embedded in an application in order to offer a scripting language as a convenient way to control the application's behaviour (an IRC scripting language, for example). In Part 3, we're going to look closely at Python's extensibility, and in particular the Python/C API. Once you have a firm grasp on the API, embedding Python is not a huge challenge.

Throughout this series I've been making references and comparisons to Perl. I did, after all, learn Python with a Perl background, and my learning was done by making similar comparisons. In this part, however, I'll make no such comparisons. The Perl equivalent of the Python/C API, called perlxs, is a horrid beast which is in no way friendly to beginners. Sprinkling perlxs examples into this tutorial would only bring out insanity. I'll spare you.

Reference Counting

All Python objects have a reference count. When an object is initialized, its reference count is set to 1. When other objects want to hold a reference to it (that object is added to a list, say), its reference count must be increased. When that other object no longer needs a reference, the reference count is decremented. When an object's reference count reaches 0, it is destroyed. This is reference counting in a nutshell.
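You can watch reference counts change from Python itself with sys.getrefcount, a CPython-specific helper (a small sketch in modern Python; note that getrefcount always reports one extra reference, for its own argument):

```python
import sys

a = []                       # a new list; 'a' holds the only reference
count1 = sys.getrefcount(a)  # one higher than you'd expect: the call
                             # itself temporarily borrows a reference
b = a                        # a second reference to the same list
count2 = sys.getrefcount(a)  # one more than before
del b                        # drop the second reference
count3 = sys.getrefcount(a)  # back where we started
```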

Whereas some languages, such as Java, use full-blown garbage collecting, reference counting is an attractive method because it is easy to implement and light on resources. Reference counting does have its downside, though. Neglecting to decrement an object's reference count will result in leaked memory. Also, it is possible to create circular references, which prevents the objects involved in the circle from ever being deallocated.

It's worth having a closer look at circular references, because if you're not aware of the issue, you'll likely get bitten. In the simplest case, a circular reference involves two objects A and B, where A holds a reference to B and B holds a reference to A. A typical example is a hierarchy of objects where each object has a parent and child. The parent points to its child, and the child points to its parent.

  window = Window()
  widget = Widget()
  window.child = widget
  widget.parent = window

This creates a circular reference between window and widget. Neither deleting these objects nor leaving the scope will deallocate the memory they use, and so the result is a memory leak. The documentation's advice on the issue is, "don't do it." In all fairness, this is pretty good advice. If you have a circular reference, it's possible your design is more complicated than it ought to be. Still, in some situations, circular assignments are warranted, and the only way to prevent leaked memory is to explicitly break the cycle yourself:

  del widget.parent

or:

  del window.child

Then, when the scope is left, the memory will be deallocated properly. Python 2.0 will provide a garbage collector to handle this problem, but its performance is questionable, and it remains to be seen if this will be enabled by default once Python 2.0 is released.
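For what it's worth, modern CPython releases do ship with that cycle collector enabled by default, and the gc module lets you trigger it by hand. A quick sketch of it reclaiming a cycle like the one above:

```python
import gc

class Node:
    pass

a = Node()
b = Node()
a.other = b     # build the cycle: a -> b
b.other = a     #                  b -> a
del a, b        # both names are gone, but the cycle keeps the objects alive

collected = gc.collect()  # force a collection; returns the number of
                          # unreachable objects it found
```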

When you're programming strictly with Python, you don't have to worry about the particulars of reference counting, except to avoid cyclical references. Programming with Python/C is a different story, however. If you forget to decrement an object's reference, memory will be leaked, destructors won't be called, and the result will be a broken mess. Or, decreasing an object's reference too many times is a sure way to make the Python interpreter crash and burn. To make matters worse, sometimes it's not entirely obvious when you're supposed to increase or decrease an object's reference. The good news is that this doesn't remain a mystery forever. It doesn't take long to learn the caveats and discover that things really do make sense in the end.

In Python, the references are owned, not the objects. When a new object is created, some other object (the creator) owns a reference to that object. It is also possible for a function to borrow a reference. Consider the case when an object is passed to some function that does some work on it, say sets an attribute or fetches an element if it's a tuple, and then returns. This function needn't increment and then decrement the reference. It is said to borrow the reference. References may also be stolen. A function steals a reference when the calling function does not explicitly increase the reference count on behalf of the function being called. Not many functions steal references, but you should know that some do.

The two macros used to increase and decrease references are Py_INCREF and Py_DECREF respectively. Py_INCREF increments an object's reference count by one, while Py_DECREF decrements it by one and calls the object's deallocator if it reaches 0. Now's a good time to put the reference counting issue on the backburner. We're going to revisit it from time to time as we take a closer look at the API.

The Basics

This tutorial isn't a complete Python/C API reference. Some issues I briefly gloss over, others I don't cover at all. There will be big gaping holes that only the Python/C API Reference Manual can fill. My job is to ease you into learning how to extend Python, not teach you everything you need to know. I strongly recommend reading the reference manual after you finish this tutorial.

The API itself is very intuitive. If, like me, you have left dents on the wall from banging your head while working with perlxs, you'll find Python/C a breath of fresh air. The API is reasonably object oriented, where each object has a set of functions whose first argument is an instance of that object. For example, PyString_Size() will take as an argument a pointer to a PyStringObject, and return the length of the string contained in that object. Or, PyTuple_GetItem() takes as arguments a pointer to a PyTupleObject and an index indicating the position in the tuple to return.

Okay, I lied just a tiny bit. These functions really take pointers to PyObject objects. PyObject is the base type from which all other Python types are created. This is really a form of polymorphism.

  PyObject *string = PyString_FromString("foobar");
  PyObject *tuple = PyTuple_New(5);
  int string_len = PyString_Size(string);
  int tuple_len = PyTuple_Size(tuple);

The objects string and tuple, while of type PyObject, are really instances of PyStringObject and PyTupleObject. At this point, you may not be surprised to find out that there are several abstract layers, or protocols, that certain types implement. There are four protocols: object, number, sequence, and mapping. Lists and tuples implement the sequence protocol; longs and ints implement the number protocol; dictionaries implement the mapping protocol; instances and classes implement the object protocol; and so on. You can apply any of the methods for a particular protocol on an object as long as the object implements that protocol, otherwise a type exception will be raised. For example, let's look at PySequence_GetItem() given our two objects string and tuple from the last snippet:

  PyObject *result;
  result = PySequence_GetItem(string, 0);
  result = PySequence_GetItem(tuple, 0);

The same function was applied to two different objects and operates differently in each case. In the first case, with a string object, the result is a string object holding the character at index position 0. In the second case, the result is whatever object is at index position 0 in the tuple. All object types have a set of methods specific to that type, such as PyString_AsString(), as well as the set of methods for any protocols they implement, such as PySequence_GetItem().
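At the Python level, PySequence_GetItem is simply indexing, which is why one C function can serve any sequence type. A sketch of the same idea from Python, including the type exception you get when the protocol isn't implemented:

```python
string = "foobar"
tup = (10, 20, 30, 40, 50)

first_char = string[0]   # a one-character string: 'f'
first_item = tup[0]      # whatever object sits at index 0: 10

# an object that doesn't implement the sequence protocol raises
# TypeError, just as PySequence_GetItem fails on it at the C level
number = 5
try:
    number[0]
    raised = False
except TypeError:
    raised = True
```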

Creating Modules

In order to extend Python, the first step is to create a module. Extension modules are created primarily because you need native C performance, or you want to export the functionality of another library to Python (that is, wrap it).

Extension modules are named Foomodule.so, where Foo is the name of the module. A module may also exist in the module path as Foo.py or as a package Foo/__init__.py. When the Foo module is imported (by calling import Foo), the interpreter loads the first occurrence of the module it finds. It searches in the following order:

  • Foo/__init__.py
  • Foomodule.so
  • Foo.py

Assuming Foo/__init__.py doesn't exist and Foomodule.so does, the interpreter will dynamically link to Foomodule.so and call the function initFoo(). This function performs any initialization that needs to be done, and must call Py_InitModule to register the module with Python.

The simplest module is one that doesn't do anything at all. We'll have a look at the code for an empty module Foo that we can then import from Python. Of course, the module will be useless in practice, but it will serve as a skeleton. Our module, in the file foo.c, will look like this:

  #include <Python.h>

  PyMethodDef Foo_methods[] = {
    { NULL }
  };

  void initFoo()
  {
    Py_InitModule("Foo", Foo_methods);
  }

Before we go over this code (I'm sure you've already got it figured out), let's throw together a Makefile that we can use to compile this example.

  CFLAGS=-g -Wall -I/usr/include/python1.5

  Foomodule.so: foo.o
    cc -shared -o Foomodule.so foo.o

  foo.o: foo.c
    cc -fPIC -c foo.c $(CFLAGS)

If this example doesn't compile for you and it squawks about not being able to find Python.h, make sure you have the Python development package installed if you're using a package-based distribution like Red Hat or Debian.

The code for this skeleton module isn't too intimidating at all. Foo_methods is an array of PyMethodDef, which maps C functions to Python functions. Because this array is empty in our example, this module provides no functions that can be called from Python. In the initFoo entry function, we call Py_InitModule and pass the name of the module, and the array of methods offered by this module.

Let's create a simple function that takes an integer, squares it, and returns the result.

  PyObject *square(PyObject *self, PyObject *args)
  {
    int num;
    if (!PyArg_ParseTuple(args, "i", &num))
      return NULL;

    return PyInt_FromLong(num * num);
  }

First, notice how the function square is declared. C functions that are called directly from Python (PyCFunction) will be of this form, taking two PyObjects as arguments, and returning a PyObject. These functions must return a valid Python object, unless an exception has been raised, in which case it should return NULL. One of the returned object's references is given to the caller.

The first parameter, self, points to the instance to which this function belongs. This parameter is NULL unless the function is a built-in method. We'll be looking at that later; for now you can ignore it. The second parameter, args, is a tuple object containing the arguments passed to this function from Python.

Inside our function, the first thing we need to do is retrieve the integer value that is to be squared. We do this using the convenience function PyArg_ParseTuple, which requires the tuple given in the first argument to be formatted according to the second argument (in our case, a single integer). PyArg_ParseTuple is a versatile and very useful function, and worth learning about in detail. If the tuple passed doesn't match the required format, PyArg_ParseTuple will raise an exception and return NULL. In our function, we also return immediately.

Once the integer value of the object is fetched and stored in the num variable, we square it and construct a new integer object using PyInt_FromLong. This function creates a new object with an initial reference count of one, which is given away to the caller. We could also have used Py_BuildValue, which is capable of constructing arbitrary objects based on a format description, much like the way PyArg_ParseTuple works.
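For comparison, the C function behaves much like this pure-Python sketch; the TypeError stands in for the exception PyArg_ParseTuple raises when the argument doesn't match the format:

```python
def square(num):
    # reject anything that isn't an integer, much as PyArg_ParseTuple's
    # "i" format does
    if not isinstance(num, int):
        raise TypeError("an integer is required")
    return num * num
```

Calling square(5) returns 25, while square("five") raises a TypeError.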

We have to tell Python that this function now exists in our module, so we'll need to update the Foo_methods array to look like this:

  PyMethodDef Foo_methods[] = {
    { "square", square, METH_VARARGS },
    { NULL }
  };

The first element in the PyMethodDef structure specifies the name of the function as it appears to the Python program. The second points to the C function that is to be called. And the third specifies the flags for this function. In this case, METH_VARARGS specifies that the function may be given any number of arguments. Even though our function takes one argument, we use this flag because it guarantees the args parameter is never NULL, even when there are zero parameters passed. PyArg_ParseTuple will verify the correct number of arguments were passed for us.

All PyCFunction functions must return some value, but in cases where return values don't make sense, you can return Python's special None object, which is roughly equivalent to C's NULL. From C, the None object is called Py_None. Before returning this value, you must first increase the Py_None object's reference -- remember that references for return values are given to the caller.

Now that we have a Python module that does something (albeit not that useful), let's look at an interactive Python session that uses it:

  >>> import Foo
  >>> Foo.__dict__
  {'__doc__': None, 'square': <built-in function square>, '__file__':
  './Foomodule.so', '__name__': 'Foo'}
  >>> Foo.square(5)
  25

There's nothing overly magical here. Notice that the output of Foo.__dict__ shows that square is a built-in function. And Foo.square behaves as we'd expect.

Wrapping a Library

Perhaps the most common reason for creating a module in Python is to wrap the functionality of a library and provide an interface to it from Python. Another reason you might want to create a Python module is for performance reasons. In Part 2 we looked at an example that used Python's xmllib to read a simple XML description. The xmllib package isn't a great performer, however, and our next Python project requires that several dozen large XML files be parsed as quickly as possible. Having worked with XML in GNOME using gnome-xml, we know that this library meets our performance requirements. What's the next step? You guessed it.

Let's take a moment to consider the requirements of this wrapper module. We don't need to create a general-purpose XML parser module from gnome-xml for our purposes. Instead our module will handle only what's necessary, and offer a few functions to access XML data that is specific to our application. The XML files in our project contain metadata for the files in a directory. For example:

    <metadata>
      <file name="flowers.jpg">
        <meta type="keyword">Nature</meta>
      </file>
      <file name="natalie.jpg">
        <meta type="keyword">Actress</meta>
        <meta type="keyword">Female</meta>
      </file>
    </metadata>

For no other reason than to keep things simple, let's only deal with metadata of type keyword. We will have a function in our Python module to read in an XML file, and return an object that provides a method to search for a filename. This method will return a list of keywords for that file.
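To pin the behaviour down before writing any C, here's a pure-Python sketch of the lookup using xml.etree from today's standard library (which didn't exist when this article was written). The <metadata> root element is an assumption here, since the file needs some root to be well-formed:

```python
import xml.etree.ElementTree as ET

def get_metadata(xml_text, filename):
    """Return a tuple of keyword strings for the named file."""
    root = ET.fromstring(xml_text)
    for node in root.iter("file"):
        if node.get("name") != filename:
            continue
        return tuple(meta.text for meta in node.findall("meta")
                     if meta.get("type") == "keyword")
    return ()

example = """<metadata>
  <file name="flowers.jpg">
    <meta type="keyword">Nature</meta>
  </file>
  <file name="natalie.jpg">
    <meta type="keyword">Actress</meta>
    <meta type="keyword">Female</meta>
  </file>
</metadata>"""

print(get_metadata(example, "natalie.jpg"))  # ('Actress', 'Female')
```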

I'm going to introduce the code to implement this in steps, and piecing together the code may not actually produce something that compiles. The most you'll be missing is function prototypes and a few other odds and ends, however, so it shouldn't take too much effort to make things work.

First let's create the module MetaReader with one function called process_dir which, given a path name, reads the file .metadata in that directory and returns a MetaData object, which we'll create later.

  #include <Python.h>
  #include <gnome-xml/parser.h>
  #include <glib.h>

  PyObject *process_dir(PyObject *self, PyObject *args)
  {
    char *path;
    if (PyArg_ParseTuple(args, "s", &path)) {
      MetaData_PyObject *o = PyObject_NEW(MetaData_PyObject, &MetaData_PyObject_Type);
      char *file = g_strconcat(path, "/", ".metadata", NULL);
      o->doc = xmlParseFile(file);
      g_free(file);  /* free the string g_strconcat allocated */
      return (PyObject *)o;
    }
    return NULL;
  }

  PyMethodDef MetaReader_methods[] = {
    { "process_dir", process_dir, METH_VARARGS },
    { NULL }
  };

  void initMetaReader()
  {
    Py_InitModule("MetaReader", MetaReader_methods);
  }

First we include a couple of additional headers, one for gnome-xml, and one for glib. This of course means we will need to link to these libraries. (You can see the additional linker parameters needed by typing gnome-config --libs xml glib.) You'll notice that this code follows the earlier example very closely. In fact, creating modules often follows a cookbook approach to coding, where you simply flesh out some skeleton code. We parse the XML file using gnome-xml's xmlParseFile and store the result in the object's doc member (which we define later). The only mystical part of this code is where PyObject_NEW is called. PyObject_NEW creates a new object of some specified type. In our case, this is MetaData_PyObject, which hasn't been defined yet.

New Python types are created by filling out a PyTypeObject structure, and implementing the functions that make up its behaviour, minimally the destructor and get-attribute functions. The Python type is associated with a custom structure that will be instantiated when an object of this type is created. Our MetaData object type looks like this:

  typedef struct {
    PyObject_HEAD
    xmlDocPtr doc;
  } MetaData_PyObject;

  PyTypeObject MetaData_PyObject_Type = {
    PyObject_HEAD_INIT(&PyType_Type)
    0, /* ob_size */
    "MetaData", /* tp_name */
    sizeof(MetaData_PyObject), /* tp_basicsize */
    0, /* tp_itemsize */
    /* methods */
    (destructor)MetaData_PyObject__dealloc, /* tp_dealloc */
    0, /* tp_print */
    (getattrfunc)MetaData_PyObject__getattr, /* tp_getattr */
    0, /* tp_setattr */
    0, /* tp_compare */
    0, /* tp_repr */
    0, /* tp_as_number */
    0, /* tp_as_sequence */
    0, /* tp_as_mapping */
    0, /* tp_hash */
  };

In the MetaData_PyObject structure, the PyObject_HEAD macro inserts the data into the structure that's required for each instance of the Python type. The MetaData_PyObject_Type sets up the fields for the new type object, where the field name is shown in the comment beside the value. As you can see, many of these are optional. The tp_dealloc and tp_getattr fields refer to our not-yet-implemented functions that are called when the object gets destroyed, and when an attribute of the object is accessed. If the tp_setattr field isn't defined (i.e. 0), the attributes for this object are read-only.

Now let's implement the dealloc and getattr functions:

  void MetaData_PyObject__dealloc(MetaData_PyObject *self)
  {
    xmlFreeDoc(self->doc);
    PyMem_DEL(self);
  }

  PyMethodDef MetaData_PyObject_methods[] = {
    { "get_metadata", MetaData_PyObject__get_metadata, METH_VARARGS },
    { NULL, NULL }
  };

  PyObject *MetaData_PyObject__getattr(MetaData_PyObject *self, char *name)
  {
    return Py_FindMethod(MetaData_PyObject_methods, (PyObject *)self, name);
  }

The MetaData_PyObject__dealloc function just frees the XML document created when we initialized the object, and then finally frees the memory used by the Python object itself. The MetaData_PyObject_methods array defines the methods that a MetaData object will provide, and is in the same format as MetaReader_methods. (Both are of type PyMethodDef.) We define a function called get_metadata that will look through the XML tree we parsed using xmlParseFile. Py_FindMethod searches this list for the attribute name that was requested. If the name is get_metadata, it returns the corresponding method object, otherwise it raises an attribute exception.
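If Py_FindMethod seems opaque, its job is roughly what __getattr__ does at the Python level. A hypothetical sketch (the class and method names here are illustrative, not part of our module):

```python
class MetaDataSketch:
    def _get_metadata_impl(self, filename):
        return ()  # stand-in for the real XML lookup

    def __getattr__(self, name):
        # map requested attribute names to bound methods, the way
        # Py_FindMethod searches the PyMethodDef table
        methods = {"get_metadata": self._get_metadata_impl}
        try:
            return methods[name]
        except KeyError:
            raise AttributeError(name)
```

Asking for obj.get_metadata returns the bound method; asking for anything else raises AttributeError, just as Py_FindMethod does.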

After all this, we're finally ready to implement the get_metadata function.

  PyObject *MetaData_PyObject__get_metadata(MetaData_PyObject *self, PyObject *args)
  {
    char *file, *xmlfile;
    xmlNodePtr node, node2;
    PyObject *keywords, *tuple;

    if (!PyArg_ParseTuple(args, "s", &file))
      return NULL;

    keywords = PyList_New(0);
    for (node = self->doc->root->childs; node != NULL; node = node->next) {
      /* Skip this node unless it is of type "file" */
      if (strcmp(node->name, "file"))
        continue;

      /* Skip unless the filename for this node is the one we want */
      xmlfile = xmlGetProp(node, "name");
      if (strcmp(xmlfile, file))
        continue;

      /* Loop through all meta keywords defined */
      for (node2 = node->childs; node2 != NULL; node2 = node2->next) {
        char *type, *keyword;
        if (strcmp(node2->name, "meta"))
          continue;
        type = xmlGetProp(node2, "type");
        /* Skip unless this is a keyword metadata type */
        if (strcmp(type, "keyword"))
          continue;
        keyword = node2->childs->content;
        PyList_Append(keywords, PyString_FromString(keyword));
      }
    }
    tuple = PyList_AsTuple(keywords);
    Py_DECREF(keywords);

    return tuple;
  }

Since this isn't a tutorial on gnome-xml, I won't dwell too much on the particulars of the node traversal. Suffice it to say that this code scans through the XML nodes and picks out the keyword definitions for the filename passed as an argument.

The code of interest is the four lines that handle the keywords list. The first constructs a new list object, with an initial size of 0 elements. Since lists are mutable, each time we add a new element, the list will grow. The second appends the keyword to the list: the C character array (string) keyword is converted to a Python object using PyString_FromString, and then appended. The third converts the list to a tuple. We could return a list to the caller, but it makes more sense for the return value to be immutable. Finally, we decrement the reference count on the keywords list (since we're done with it), and return the tuple.

And that's it. This may seem a little complicated, but keep in mind that beginning a new module is a very copy-and-paste operation. The skeleton for modules and type objects will be reusable. I've given you that skeleton; fleshing it out is left to you.

Supposing we have the XML description from above in the current directory, we can test out our new module. Here's an interactive session showing MetaReader in action.

  >>> import MetaReader
  >>> o = MetaReader.process_dir(".")
  >>> o.get_metadata("flowers.jpg")
  ('Nature',)
  >>> o.get_metadata("natalie.jpg")
  ('Actress', 'Female')

The End of the Beginning

The Python/C API is so flexible that I couldn't possibly touch on it all here. Very useful topics such as exceptions and dynamically creating classes weren't discussed at all. If you're ready for more, your next stop should be the Extending and Embedding Python tutorial, followed by the Python/C API Reference Manual.

If you've been following this series from the beginning, the end of Part 3 marks an important step. Assuming you've digested all the material up until now, your knowledge in Python, while not comprehensive, is versatile enough to suit almost any problem. In the tutorials to come, we'll be applying your knowledge of Python to do specific things, such as creating GUI applications using libglade, and CORBA objects using ORBit-Python.

As always, please email me with requests or topic suggestions. These are your tutorials!

Jason Tackaberry (tack@linux.com) works in Ontario, Canada as an Academic Computing Support Specialist. He is the author of ORBit-Python, Python bindings for ORBit, and several soon to be released projects. Having over 12 years of development experience in C and C++, and hacking with Perl for 4 years, he has turned to Python as his new favorite language.
