

Contextual operator overloading

Damn the Python syntax!

For three weeks now I have been struggling with this basic requirement: overloading Python's arithmetic operators without using class methods.

Or, more specifically, delegate the implementation of an infix operator to a third-party object in a specific context. For example, I would like something like the following code to work:

def foo(a, b):
    print "hello ", a, "and", b

__builtins__.__add__ = foo
12 + 13 # use foo since __add__ is overloaded

Unfortunately, this does not work. Python dispatches arithmetic on the operand types themselves (and optimizes operations on built-in types), so `__builtins__` plays no role in operator lookup. So far I have found no other elegant way to overload infix operators without defining specific classes.
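For the record, the only route Python officially supports is defining `__add__` (and `__radd__`) on a class. A minimal sketch of that route, using a hypothetical `Ctx` wrapper to delegate `+` to a user-supplied function:

```python
class Ctx(object):
    """Hypothetical wrapper that delegates '+' to a user-supplied function."""
    def __init__(self, value, op):
        self.value = value
        self.op = op  # the function standing in for __add__

    def __add__(self, other):
        # Handle Ctx(...) + 13 and Ctx(...) + Ctx(...)
        other_value = other.value if isinstance(other, Ctx) else other
        return self.op(self.value, other_value)

    def __radd__(self, other):
        # Handle 13 + Ctx(...): Python falls back to the right operand
        # once int.__add__ returns NotImplemented for the unknown type.
        return self.op(other, self.value)

def foo(a, b):
    return "hello %s and %s" % (a, b)

print(Ctx(12, foo) + 13)   # -> hello 12 and 13
print(12 + Ctx(13, foo))   # -> hello 12 and 13
```

The catch, of course, is exactly the complaint above: at least one operand must be wrapped, so plain `12 + 13` can never be intercepted.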

And no, the well-known trick for faking infix operators is far too ugly to count.

Integer fun facts

Every ten or twenty years, technological advances challenge common knowledge.

Today I was very interested to read a number of facts about the C programming language that tend to escape “common knowledge” over time. For example, did you know that:

  • some processors (especially Digital Signal Processors) cannot efficiently access memory in smaller pieces than the processor's word size. There is at least one DSP [...] where CHAR_BIT is 32. The char types, short, int and long are all 32 bits.
  • Every bit in an object of unsigned character type contributes to its value. There are no unused or padding bits, and every possible combination of bits represents a valid value for an unsigned char. There is no other data type in C or C++ that guarantees this to be true. [no, not even int or long]
  • SCHAR_MIN must be -127 or less (more negative), and SCHAR_MAX must be 127 or greater. [...] many compilers for processors which use a 2's complement representation support SCHAR_MIN of -128, but this is not required by the standards.
  • likewise, the standard requires minimum ranges for short, int and long data types, but implementations can choose any larger size.
  • It is int which causes the greatest confusion. Some people are certain that an int has 16 bits and sizeof(int) is 2. Others are equally sure that an int has 32 bits and sizeof(int) is 4. Who is right? On any given compiler, one or the other could be right. On some compilers, both would be wrong. [there is at least] one compiler for a 24 bit DSP where an int has 24 bits.
  • on 32-bit platforms, using "%d" [with printf] to print either an int or long will usually work, but on LP64 platforms "%ld" must be used to print a long.
  • the relationship between the fundamental data types can be expressed as sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long), with sizeof(char) defined to be exactly 1. [note that the standard does not actually guarantee sizeof(size_t) == sizeof(long)]
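These guarantees are easy to poke at from Python, whose ctypes module exposes the C types of the platform's compiler. A small sketch (the sizes printed are platform-dependent, so only the ordering guarantees are checked):

```python
import ctypes

# sizeof(char) is 1 by definition; the other types only have
# ordering guarantees, not fixed sizes.
for name, ctype in [("char", ctypes.c_char), ("short", ctypes.c_short),
                    ("int", ctypes.c_int), ("long", ctypes.c_long),
                    ("size_t", ctypes.c_size_t)]:
    print("sizeof(%s) = %d" % (name, ctypes.sizeof(ctype)))

assert ctypes.sizeof(ctypes.c_char) == 1
assert (ctypes.sizeof(ctypes.c_short)
        <= ctypes.sizeof(ctypes.c_int)
        <= ctypes.sizeof(ctypes.c_long))
```

On an LP64 Linux box this prints 1, 2, 4, 8, 8; on a typical 32-bit platform (or 64-bit Windows, which is LLP64) the long and size_t lines differ.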

