Tag Archives: Python

Iterators? We Don’t NEED No Stinking Iterators!

Every so often a request for “implementation of iterators for maps” over hashes/maps/dicts or some other K-V data structure appears on the mailing list of a functional language. I’ve spent years making heavy use of iterators in imperative languages, and the way they fit into Python is really great. For Python. I totally understand where some of these folks are coming from; they just don’t realize that functional languages are not where they came from.

So… “Is this post the result of some actual event?” Yeah, you got me. It is. On the erlang-questions mailing list someone asked “Are maps ever going to get an iterator?” Again.

Erlang is definitely not Kansas. What worries me a bit is people thinking either that it is or (more dangerously) that it should be, and then trying to influence the maintainers to make it that way (and the powers-that-be getting in a panic over “market share” and doing horrible things to the language…).

There is no faster way to paint a functional language into a corner than to try making it occasionally imperative. Conversely, consider the syntactic corner C++ and Java have painted themselves into by trying to include functional features as afterthoughts where they really didn’t belong.

(I know, I know, death-by-kitchen-sink is a proud C++ tradition, and it is becoming one for Java. Even though I hate Java, there is no sense in making it worse by cluttering its syntax and littering it with gotchas and newbie-unfriendly readability landmines, all in the interest of providing features few Java coders understand the point of. The whole concept of state management in a bondage-and-discipline OOP language like Java is to keep everything in structs with legs, not in anonymous closures over state that is temporarily in scope. The lack of such problems was previously one of the main points that favored Java over C++… well, that and actual encapsulation. Hopefully Rust and D can resist this temptation.)

This frustrates me. It is almost as if instead of picking a tool that matches a given job, people learn one tool and then try over time to make a super-powered Swiss Army knife of it. This never turns out well. The result is more Frankenstein’s Monster than Swiss Army knife and in the best case it winds up being hard to learn, confusing to document and crap at everything.

What’s worse, people assume that the first tool they learned well is the standard by which everything else should be judged (if so, then why are they learning anything else?). It follows, then, that if a newly studied LangX does not have a feature of previously used LangY then it must be introduced because it is “missing”. (I do admit, though, to wishing other languages had pattern matching in function heads… but I don’t bring this up on mailing lists as if it’s a “missing feature”; I do, however, cackle insanely when overloading is compared with matching.)

Let’s say we did include iterators for maps in Erlang — whatever an “iterator” is supposed to mean in a lists-are-conses functional language. What would that enable?

-spec foreach(fun(), map()) -> ok.

That sort of looks pointless. It’s exactly the same as lists:foreach(Fun, maps:to_list(Map)), or maybe lists:foreach(Fun, maps:values(Map)). Without knowing whether we’re trying to build a new map based on the old one or get some side effect out of Fun, it’s hard to know what the point is.

Maybe:

-spec map(fun(), OldMap :: map()) -> {ok, NewMap :: map()}.

But… wait, isn’t that just maps:map/2 all over again?

I think I know where this is going, though. These people really wish maps were ordered dictionaries, because they want to process the keys in a predictable order. So they want something like this:

-spec side_effects_in_order_dammit(fun(), map()) -> ok.
side_effects_in_order_dammit(F, M) ->
    Ordered = [{K, maps:get(K, M)} || K <- lists:sort(maps:keys(M))],
    ok = lists:foreach(F, Ordered).

But wait, what order should the keys be in, anyway?

This is a slow, steady march to insanity. “Give me iterators” begets “Let’s have ordered maps” begets “Let’s have ordered iterators for maps” and so on, and eventually you wind up with most of the Book of Genesis in the Devil’s Bible of Previously Decent Functional Languages, all the while totally forgetting that these things already exist in another form. There are more data structures than just maps for a reason.

This just gets ridiculous, and it isn’t even what hashes are about to begin with.

Binary Search: Random Windowing Over Large Sets

Yesterday I came across a blog post from 2010 claiming that less than 10% of programmers can write a binary search. At first I thought “ah, what nonsense” and then I realized I probably haven’t written one myself, at least not since BASIC and Pascal were what the cool kids were up to in the ’80s.

So, of course, I had a crack at it. There was an odd stipulation that made the challenge interesting — you couldn’t test the algorithm until you were confident it was correct. In other words, it had to work the first time.

I was wary of fencepost errors (perhaps being self-aware that spending more time in Python and Lisp(s) than C may have made me lazy with array indexes lately) so, on a whim, I decided to use a random window index to guarantee that I was in bounds each iteration. I also wrote it in a recursive style, because it just makes more sense to me that way.

Two things stuck out to me.

Though I was sure what I had written was an accurate representation of what I thought binary search was all about, I couldn’t actually recall ever seeing an implementation, having never taken a programming or algorithm class before (and still owning zero books on the subject, despite a personal resolution to remedy this last year…). So while I was confident that my algorithm would return the index of the target value, I wasn’t at all sure that I was implementing a “binary search” to the snob-standard.

The other thing that made me think twice was simply whether or not I would ever breach the recursion depth limit in Python on really huge sets. Obviously this is possible, but was it likely enough to occur in the span of a few thousand runs over large sets? Sometimes what seems statistically unlikely can pop up as a hamstring slicer in practical application. In particular, were the odds good that a random guess would lead the algorithm down a series of really bad guesses and therefore occasionally blow the stack? Or, on the other hand, were the odds better that random guesses would occasionally be so good that on average a random index beats a halved one (of course, the target itself is always random, so how does this balance out)?

I didn’t do any paperwork on this to figure out the probabilities, I just ran the code several thousand times and averaged the results — which were remarkably uniform.

[Figure: “binsearch” chart of average iteration counts, window halving vs. random windowing]

I split the process of assignment into two different procedures, one that narrows the window to be searched randomly, and another that does it by dividing by two. Then I made it iterate over ever larger random sets (converted to sorted lists) until I ran out of memory — it turns out a list sort needs more than 6GB at around 80,000,000 members or so.

I didn’t spend any time rewriting to clean things up and pursue larger lists (appending guaranteed-larger members instead of sorting would probably permit astronomically huge lists to be searched within 6GB of memory), but the results were pretty interesting when comparing binary search by window halving against binary search by random window narrowing. It turns out that halving is quite consistently better, but not by much, and the gap may narrow at larger values (though I’m not going to write a super huge list generator to test this idea just now).

It seems like something about these results is exploitable. But even if it were, the difference between iterating 24 instead of 34 times over a list of over 60,000,000 members isn’t much in the grand scheme of things. That said, it’s mind-boggling how far from Python’s recursion depth limit one stays, even when searching such a large list: a halving search over 80,000,000 members only probes about 27 levels deep (roughly log2 of the length), nowhere near the default limit of 1,000.

Here is the code (Python 2).

from __future__ import print_function
import random

def byhalf(r):
    # Classic bisection: probe the midpoint of the current window.
    return (r[0] + r[1]) / 2

def byrand(r):
    # Random windowing: probe any index within the current window (inclusive).
    return random.randint(r[0], r[1])

def binsearch(t, l, r=None, z=0, op=byhalf):
    # Recursively narrow the window r around target t in the sorted list l,
    # counting probes in z. op chooses the next index to test.
    if r is None:
        r = (0, len(l) - 1)
    i = op(r)
    z += 1

    if t > l[i]:
        return binsearch(t, l, (i + 1, r[1]), z, op)
    elif t < l[i]:
        return binsearch(t, l, (r[0], i - 1), z, op)
    else:
        return z

def doit(z, x):
    # Build a sorted list of up to x unique random integers below z.
    l = list(set((int(z * random.random()) for i in xrange(x))))
    l.sort()

    res = {'half': [], 'rand': []}
    for i in range(1000):
        # Any index is a valid target, including the last one; randrange(1)
        # is always 0, so single-element lists need no special case.
        target = l[random.randrange(len(l))]
        res['half'].append(binsearch(target, l, op=byhalf))
        res['rand'].append(binsearch(target, l, op=byrand))
    print('length: {0:>12} half:{1:>4} rand:{2:>4}'\
                    .format(len(l),
                            sum(res['half']) / len(res['half']),
                            sum(res['rand']) / len(res['rand'])))

for q in [2 ** x for x in range(27)]:
    doit(1000000000000, q)

Something just smells exploitable about these results, but I can’t put my finger on why just yet, and I don’t have time to think about it further. Anyway, it seems that the damage done by using a random index to make certain you stay within bounds doesn’t actually hurt performance as much as I thought it would. A perhaps useless discovery, but personally interesting nonetheless.

Don’t Get Class Happy

If you find yourself writing a class and you can’t explain the difference between the class and an instance of that class, just stop. You should be writing a function.
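Here is a contrived sketch of what I mean (the names are made up for illustration):

# A class with no meaningful state: an instance is indistinguishable
# from the class itself, so nothing is gained by instantiating it.
class GreetingMaker(object):
    def make_greeting(self, name):
        return 'Hello, {0}!'.format(name)

greeting = GreetingMaker().make_greeting('world')

# The same thing as a plain function: shorter, clearer, no ceremony.
def make_greeting(name):
    return 'Hello, {0}!'.format(name)

greeting = make_greeting('world')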

This antipattern — needless classes everywhere, for everything — is driving me crazy. Lately I see it in Python and Ruby a good bit, where it really shouldn’t occur. I feel like it’s a mental contagion that has jumped species from Java to other languages.

In particular I see classes used as inter-module namespaces, which is odd since it’s not like there is a tendency to run out of meaningful names within a single source file (of reasonable length). Code importing the module can use from foo import *, or import foo, or from foo import filename, or even import foo as bar (ah ha!) and make flexible use of namespacing where it is needed most — in the immediate vicinity of use.
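That last form is all the namespacing you usually need. A quick sketch (module and function names invented for illustration):

# text_statistics.py: a module, not a class, is the namespace
def word_count(text):
    return len(text.split())

# elsewhere: alias the module right where it is used
import text_statistics as ts

print(ts.word_count('no class required'))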

So don’t write useless, meaningless, fluffy, non-state&behavior classes. Any time you can avoid writing a class, just don’t do it. Write a function. It’s good discipline to see how far you can get writing zero classes in an implementation, then write it with some classes and see which is actually less complex and more readable. In any case, writing two solutions will give you a chance to examine the problem from multiple angles, and that’s nearly always an effort that results in better code, since it forces you to value grokness over hacking together a few lines that sort of cover the need of the moment (and then forgetting to ever come back and do it right).

Or maybe I should be admonishing people to seek flatness in namespaces, since this implies not writing classes that are stateless containers of a bunch of function definitions which incidentally get referred to as “methods” even though they don’t belong to a meaningful object in the first place.

Or maybe I should be railing against the oxymoronic concept of “stateless objects/classes”.

Or maybe I should be screaming about keeping our programming vocabulary straight so that we can finally say what we mean and mean what we say. (Sound advice in all walks of life.)

This last idea is perhaps the most indirect yet powerful of all. But it implies “standards” and so far in my experience “standards” and “computing” don’t tend to turn out very well once we get beyond, say, TCP/IP or ANSI and Unicode.

Version String Comparison in Python

Comparing and sorting version strings in Python scripts has come up a few times lately. Here are some simple approaches. These examples assume that the version strings will be numeric representations and not include elements like “rc-1” or “alpha” or whatever. If your problem includes these kinds of elements, don’t worry — they are solvable by applying a touch of regex-fu to the processes below.

First, sorting a bunch of version number strings. The problem is that many version strings reported by various packages are just that: strings. Usually the only way to get version information is to ask for it (“[prog] --version” in a shell script, or “SELECT version();” from a database, or whatever) and interpret whatever gets sent to stdout. So long as they remain strings, they don’t mean the same thing to comparison operators that they mean to us. For example, the string ‘3.5.10’ is greater alphabetically than ‘3.15.1’, but it is a lower version. So we need to convert them to tuples of integers to make comparing them natural. (Again, all these examples assume integer-only version strings, but minor changes to the process allow you to compare anything, and assigning a custom collation order allows you to sort against any arbitrary ordering of arbitrary symbols; that is beyond the scope of the basic problem I’m addressing here.)

>>> vers
['3.5.10', '3.15.1', '2.5.7', '0.20.0', '2.12.5', '10.4.3']
>>> s_vers = [tuple([int(x) for x in n.split('.')]) for n in vers]
>>> s_vers
[(3, 5, 10), (3, 15, 1), (2, 5, 7), (0, 20, 0), (2, 12, 5), (10, 4, 3)]
>>> vers[0] > vers[1]
True
>>> s_vers[0] > s_vers[1]
False
>>> cmp(vers[0], vers[1])
1
>>> cmp(s_vers[0], s_vers[1])
-1
>>> s_vers.sort()
>>> s_vers
[(0, 20, 0), (2, 5, 7), (2, 12, 5), (3, 5, 10), (3, 15, 1), (10, 4, 3)]

The list comprehension assignment to s_vers (actually two nested list comprehensions) is the important part of this. Once that is done you can compare whatever you want. If the version number is buried as an element in a dict or larger list (likely), you can do this conversion in place by adding a new element to the contained structures and then sorting the greater list based on that element:

>>> packages
[{'version': '3.5.10', 'name': 'foo'}, {'version': '3.15.1', 'name': 'foo'}, {'version': '2.5.7', 'name': 'foo'}, {'version': '0.20.0', 'name': 'foo'}, {'version': '2.12.5', 'name': 'foo'}, {'version': '10.4.3', 'name': 'foo'}]

OK, that’s pretty ugly (uglier depending on how your browser renders <pre> type text), so I’ll print them in order so we can watch the list change more easily.

>>> for p in packages:
...   print p
...
{'version': '3.5.10', 'name': 'foo'}
{'version': '3.15.1', 'name': 'foo'}
{'version': '2.5.7', 'name': 'foo'}
{'version': '0.20.0', 'name': 'foo'}
{'version': '2.12.5', 'name': 'foo'}
{'version': '10.4.3', 'name': 'foo'}
>>> for p in packages:
...   p.update({'version_tuple': tuple([int(x) for x in p['version'].split('.')])})
...
>>> for p in packages:
...   print p
...
{'version_tuple': (3, 5, 10), 'version': '3.5.10', 'name': 'foo'}
{'version_tuple': (3, 15, 1), 'version': '3.15.1', 'name': 'foo'}
{'version_tuple': (2, 5, 7), 'version': '2.5.7', 'name': 'foo'}
{'version_tuple': (0, 20, 0), 'version': '0.20.0', 'name': 'foo'}
{'version_tuple': (2, 12, 5), 'version': '2.12.5', 'name': 'foo'}
{'version_tuple': (10, 4, 3), 'version': '10.4.3', 'name': 'foo'}
>>> packages.sort(key = lambda x:x['version_tuple'])
>>> for p in packages:
...   print p
...
{'version_tuple': (0, 20, 0), 'version': '0.20.0', 'name': 'foo'}
{'version_tuple': (2, 5, 7), 'version': '2.5.7', 'name': 'foo'}
{'version_tuple': (2, 12, 5), 'version': '2.12.5', 'name': 'foo'}
{'version_tuple': (3, 5, 10), 'version': '3.5.10', 'name': 'foo'}
{'version_tuple': (3, 15, 1), 'version': '3.15.1', 'name': 'foo'}
{'version_tuple': (10, 4, 3), 'version': '10.4.3', 'name': 'foo'}

We started out with a list of dictionaries, each containing a package name and a version string. The first loop updates each dictionary to include a version tuple, and the sort orders the dictionaries within the list by the tuple values. Voila! We have a list of dictionaries sorted by version number. Of course, if more than one package name is involved you will want to sort on the package name first and the version tuple as a secondary criterion (so you don’t compare versions of package ‘foo’ against versions of package ‘bar’, or sort glibc against firefox, for example).
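With the same packages list as above, that two-level sort is just a compound key (a sketch; here every name happens to be ‘foo’, so the tuples still decide the order):

>>> packages.sort(key = lambda x: (x['name'], x['version_tuple']))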

If lambdas are unfamiliar to you, don’t be scared off by the packages.sort() lines up there — lambdas are perfectly safe, reliable and quite concise once you understand the way they are used.
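In fact, a lambda used this way is nothing more than shorthand for a tiny named function (a hypothetical equivalent):

>>> def version_key(p):
...   return p['version_tuple']
...
>>> packages.sort(key = version_key)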

From here writing a sort function for lists of version strings should be pretty obvious. And… that means that writing a comparison function for two individual elements that works the same way the built-in cmp() function works is trivial:

>>> def ver_tuple(z):
...   return tuple([int(x) for x in z.split('.') if x.isdigit()])
...
>>> def ver_cmp(a, b):
...   return cmp(ver_tuple(a), ver_tuple(b))
...
>>> vers
['3.5.10', '3.15.1', '2.5.7', '0.20.0', '2.12.5', '10.4.3']
>>> ver_cmp(vers[0], vers[1])
-1
>>> ver_cmp(vers[0], vers[0])
0
>>> ver_cmp(vers[3], vers[4])
-1

Nice and easy.

Now I can’t figure out why the comparison functions I’ve seen floating around occupy so much space and are so hard to follow — full of class declarations, exec loops within exec loops (!!!), and other nonsense. At most you will need to add some regular expression matching to extract or split on the correct substrings of the version string. That means importing the re module, and the list comprehension grows by a few (maybe 10) characters.
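As a rough sketch of that regex-fu (this simply pulls out the runs of digits and ignores tags like “rc”; ranking pre-releases below final releases would take a bit more logic):

>>> import re
>>> def ver_tuple(z):
...   return tuple([int(x) for x in re.findall(r'\d+', z)])
...
>>> ver_tuple('3.5.10-rc-2')
(3, 5, 10, 2)
>>> ver_tuple('3.5.10')
(3, 5, 10)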

ODFapper 0.04 for Django on Unix

I’ve pulled the most common bits out of the Django views where I had rendered ODFs before and now have the itty-bitty beginnings of an ODF handling/rendering library. “Library” is a bit of a stretch, since it’s only a few functions and a Bash script, but it already abstracts the most mysterious/tedious parts of ODF handling.

This is just a simple tarball right now. It unpacks like a Django app would, but only contains a copy of odfapper, funcs.py and a templatetags/odf_tags.py. But since we are doing template tag registration, you need to include it in INSTALLED_APPS in settings.py and add a new variable, ODFAPPER_PATH, which must be a string holding the absolute path to /your/project/location/odfapper/odfapper. Importing into whatever views.py (or wherever) you want to render ODFs in is done with from odfapper.funcs import render_odf. I go into a bit more detail in the README included in the tarball; at the moment this post and the stuff in the README (which is just an expansion of internal notes) is all the documentation.
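Pulled together, the wiring looks something like this (a sketch based on the notes above; adjust the path to your own project):

# settings.py
INSTALLED_APPS = (
    # ... your other apps ...
    'odfapper',
)

# Absolute path to the odfapper shell script inside the unpacked app.
ODFAPPER_PATH = '/your/project/location/odfapper/odfapper'

# views.py (or wherever you render ODFs)
from odfapper.funcs import render_odf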

I’ve got a ton more work I’d like to do on this. If there is any interest I could put a GitHub repo up — but other (paying) work calls, and this isn’t central to it, so… let me know if this is a direction anyone wants to head in and I’ll keep it going. There are a bajillion tiny things that are not so hard to do that would make ODF handling enormously more intuitive.

Link to 0.04 archive: odfapper_django-0.04.bz2
Link to just the shell update: odfapper-0.04.bz2

It bears mentioning that this is tested against Django 1.4, but not 1.5 yet. Unless the template loader classes have changed a lot, though, this should still work fine.