A jQuery find that also finds the root element

March 14, 2011 | categories: jQuery, Web, Python, JavaScript, Programming | View Comments

jQuery's find method is arguably the most used method in jQuery applications. Yet when using .find() recently, I discovered that it makes a rather odd exception for the root element(s) of a document: it ignores them.

Consider this example from the comments in the find API:

var el = $('<div id="one"><div id="two"></div></div>').find("#one");

el will be empty here, because <div id="one"> is the root element. It would work if the element were nested inside, say, another div.

In a recent project that uses .find() to apply progressive enhancement to parts of the page that have been updated through Ajax, this became a pain.

Consider this success callback function that

  • replaces parts of the page with updated content from the server
  • re-enables Ajax forms on the updated content
function success(data) {
    var html = $(data);
    $("#content").replaceWith($("#content", html));
    enableAjaxForms($("#content"));
}

function enableAjaxForms(node) {
    node.find("form.ajax").submit(function() {
        $.ajax({url: this.action, type: "POST",
                data: $(this).serialize(),
                success: success});
        return false;
    });
}

As long as the server returns a document with a form that's not the root element, this will work. But when we return HTML that has a form element as the root, our enableAjaxForms function will silently fail to find the form:

<form class="ajax"> <!-- form.ajax is root, .find() won't find it -->
  <div> ... </div>
</form>

What to do? jQuery has another method, .filter(), that does the complementary thing: it matches only the root elements of the collection. So this will work:

var el = $('<div id="one"><div id="two"></div></div>').filter("#one");

To get the results we want, we need to combine .filter() and .find(), since we don't care whether the element we're looking for is at the root or nested inside.

So here's a rather simple implementation of a jQuery.find2 method that'll return both root and child elements as the result of our query:

$.fn.find2 = function(selector) {
    return this.filter(selector).add(this.find(selector));
};

And finally, this is how you would use it. That is, just like you use .find() really:

var html = '<div class="one"><div class="one"></div></div>';
var el = $(html).find2(".one"); // will match both divs

Read and Post Comments

An Ajax page update mini-tutorial

March 13, 2011 | categories: jQuery, Web, Pyramid, Kotti, Python, JavaScript, Programming | View Comments

Most Ajax applications need to update parts of the DOM or perform some other action after they receive a response for their XHR requests.

This tutorial describes an approach that'll allow you to handle these updates in a unified way, minimizing code duplication and the amount of JavaScript you have to write.

(If you're looking at this through your RSS reader, you might want to head over to my blog for some syntax highlighting.)

With jQuery, Ajax functions will typically use a success callback function to give the user some feedback or update parts of the page when the response comes in. A very simple success callback function for an Ajax POST request could look like this:

function success() { alert("Successfully saved."); }

We could then pass this success function to $.post():

$.post('/mypage', {somedata}, success);

Another success handler for a GET request could update parts of the page with HTML sent back from the server:

function success(data) { $("div#content").replaceWith(data) };

In this case, what our server would put in the response would be only the <div> we're interested in:

<div id="content">...some content that's to be updated...</div>

jQuery comes with an Ajax function called .load() that has an implicit success handler which does exactly the same thing:

$('div#content').load('/mynewcontent', {somedata});

Alternatively, and because we're lazy, our server could respond to our XHR request by returning the whole HTML page. We could then extract and update only the bits that we're interested in:

function success(data) {
    var html = $(data);
    $("#message").replaceWith($("#message", html));
    $("#content").replaceWith($("#content", html));
}

This will pick elements #message and #content from the incoming HTML and replace the old contents of those containers on the page. The HTML response for this could look something like this:

  <div id="message">Successfully saved.</div>
  <div id="content">...some content that's to be updated...</div>

So far, so good.


Now imagine that in our application we're using some fancy notifications plug-in that displays messages as a nice pop-up. This is useful when working with Ajax since it'll guarantee that the user actually sees the notification even when they have scrolled way to the bottom of the page. Typically, we'd have some code to turn the contents of <div id="message"> into a pop-up in our document ready handler:

$(document).ready(function() {
    displayNotification($("#message"));
});

What's the problem with this? Well, when we update our <div id="message"> through Ajax, the notifications pop-up won't display. This is because the document ready handler is not triggered for mere updates to the page.

We need to add a call to displayNotification to our Ajax success callback from before to make notifications work for the HTML that we inject dynamically:

function success(data) {
    var html = $(data);
    $("#message").replaceWith($("#message", html));
    $("#content").replaceWith($("#content", html));
    displayNotification($("#message")); // new!
}

Then imagine that you're applying more progressive enhancement to turn <ul> list elements with the class dropdown into dropdowns. Again, we need to add a bit of code to both our success handler and our document ready handler:

$(document).ready(function() {
    displayNotification($("#message"));
    makeDropdowns($("ul.dropdown")); // new!
});

function success(data) {
    var html = $(data);
    $("#message").replaceWith($("#message", html));
    $("#content").replaceWith($("#content", html));
    displayNotification($("#message"));
    makeDropdowns($("ul.dropdown", $("#content"))); // new!
}

Notice how the newly added call to makeDropdowns in function success passes on only ul.dropdown elements inside #content, that is, only the dropdown lists that were injected just now:

makeDropdowns($("ul.dropdown", $("#content")));

By passing only the element that's changed, we avoid having to somehow remember inside makeDropdowns which lists were already turned into dropdowns and which weren't. That takes quite a burden off these functions.

The problem that emerges as we add more and more enhancement functions like makeDropdowns and displayNotification is that we keep adding slightly different code both to our document ready handler and to the various success handlers we might have created in our app. This sort of code duplication is bad. Let's try to generalize a bit more.


Let's first create a unified interface for handlers like displayNotification and makeDropdowns. All of these should have the form:

function handler(node) {
  // do our progressive enhancement here, but only inside node
}

As discussed before, we decide to pass in only the node that has changed. The individual handlers can then apply their enhancements only to the updated parts of the page. We'll then create an array of handlers so that later, when the DOM has been updated, we can call the handlers one by one:

var node_changed_handlers = [];

function node_changed(node) { // call handlers with 'node' one by one
    $.each(node_changed_handlers, function(index, func) { func(node) });
}

We can now go back to our document ready handler and to our success handlers and simplify them substantially:

$(document).ready(function() {
    node_changed($(document.body));
});

function success(data) {
    var html = $(data);
    $("#message").replaceWith($("#message", html));
    $("#content").replaceWith($("#content", html));
    node_changed($("#message"));
    node_changed($("#content"));
}

We no longer need to change either of them whenever we add a new handler to node_changed_handlers. That's good.

Still, writing individual success handlers for all sorts of different actions and Ajax requests, of which there are usually many in a modern web app, is cumbersome. Ideally, our success handler and the server could use some more intelligent protocol that would allow us to reuse the same success callback function for all our application's Ajax requests. In short, we want a success handler that we can use for all our Ajax needs.

In order to achieve this, we add a little bit more information to the HTML that is returned from the server and processed in our handlers. We add a class ajax-replace to all those elements that need to be updated in the page. Here's the example from before with the class added:

  <div id="message" class="ajax-replace">Successfully saved.</div>
  <div id="content" class="ajax-replace"> ... some updated content ... </div>

We can now strip out the bit in our success handler that looks for specific ids and generalize it to this:

function success(data) {
    var html = $(data);
    $(".ajax-replace", html).each(function() {
        var selector = "#" + this.id;
        $(selector).replaceWith(this);
        node_changed($(selector));
    });
}
How does this work exactly? It looks for all elements in the server response's HTML with the class ajax-replace (line 3) and replaces elements with a corresponding id in the current DOM (lines 4 and 5). It then calls all node changed handlers with the newly added element (line 6).

Voila! What we have here is a very powerful and simple success handler that's reusable for all our Ajax requests.


I've recently added Ajax forms and Growl-like notifications (using the jquery-toastmessage-plugin) to the Kotti CMS, a user-friendly light-weight CMS that I'm building on top of Pyramid and jQuery.

Take a look at Kotti's JavaScript code, which implements just the approach I've presented here. In particular, take a look at function messages and function dropdowns which correspond to the two handlers described in this tutorial.

To see this code in action, log in to the Kotti demo server with the username owner and password secret and try the reorder form.


Lotsenprojekt die bruecke

March 01, 2011 | categories: jQuery, Web, Python, JavaScript, Programming, Django, Berlin | View Comments

What is the Lotsenprojekt die brücke?

My sister works as a team leader in the Lotsenprojekt die brücke (German) in Berlin Mitte. The basic idea behind the project is to help less integrated people living in Berlin find their way through what can be an impenetrable public-authorities system. die brücke helps these people with their everyday issues around finances, housing, and health, primarily by acting as a connector between them and the appropriate authority.

Lotsen Moabit

Considering the real-world impact that this project has, it's probably the most meaningful project I've had the pleasure of working on yet.

Documenting every visit

Part of what the multilingual teams (20+ languages) of die brücke do is document every client's visit to their offices. This allows them to analyze and react to the constantly shifting demands of their clients. Among the things they record for every visit are:

  • Client
    • gender
    • age
    • language
  • Issue
    • type
    • date
    • could it be solved?

This is where I came in. 2010 was the first year in which die brücke used a database system for data entry and report generation, replacing hand-filled forms and manual statistics done with tally sheets and Excel. We've also implemented a searchable directory of authorities and departments, which allows the team to increase their service quality.

Public reports

age overview statistics

Recently, we've decided to make the reports public. The reports website itself is in German, but here's a little help with interpreting them:

  • The overview of types of issues is a stacked bar chart that has general issue types on the Y axis, more concrete issue types represented in the stacked bars, and the number of times those issues occurred on the X axis. (The JavaScript on that page is quite intense, so you might need to wait a little for the page to render.)
  • The departments statistics gives an overview of which authorities or departments the clients were brought into contact with, divided by client languages. Here you can see that the Berliner JobCenter was the most contacted authority.
  • The age overview allows you to see that most people seeking help with die brücke are between 30 and 50 years old. It also shows you how many of those people were male versus female.

The tabs on the top of the page allow you to navigate to the other statistics.

Under the hood

The data entry and reports were developed with Python, Django, jQuery, Highcharts, DataTables and with some bits of ExtJS.

Django's admin interface turned out to be a good fit for data entry and a time saver.

ExtJS was a source of much frustration, partly because of its monolithic APIs and its poor docs, which is why I've decided not to use it in future projects. In contrast, the jQuery plug-ins Highcharts and DataTables were a pleasure to work with; they do just what they promise and do it well. (The pie charts that require Flash are the ones that use ExtJS; the stacked bar charts are made with Highcharts, and the tables with DataTables.)


HTTP caching for the masses

February 12, 2011 | categories: Python, Web, Programming, Plone | View Comments

Why HTTP caching?

  • your websites respond faster, your users become happier
  • improves Google ranking
  • reduces running costs, less CPU power needed

How it works

This article describes an alternative approach to HTTP caching with Plone. The module that I'm presenting here:

  • hooks into a post-publication event (IAfterPublicationEvent, to be exact)
  • determines what content type is being published: is it a static image, or a dynamic page?
  • looks up a caching policy that matches the type of content being served
  • applies the caching policy, which is just a bag of HTTP response headers

All in 136 lines of code. No components, no indirection, easy to grasp.

Since HTTP caching works on the HTTP level, it works pretty much the same everywhere: you should be able to quickly adjust this code to suit your Python framework of choice, and your individual caching needs.
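To make that portability concrete, here's a minimal sketch of the same choose-a-policy-then-set-headers pattern as plain WSGI middleware. Everything in it (the policy table, the selection rule, the names) is invented for illustration and is not part of the module presented below:

```python
# A sketch of the pattern as framework-agnostic WSGI middleware.
# Policy names and the selection rule are illustrative assumptions only.
POLICIES = {
    'No Cache': [('Cache-Control', 'max-age=0')],
    'Cache HTML': [('Cache-Control', 'max-age=0, s-maxage=3600')],
}

def choose_policy(path):
    # Illustrative rule: treat HTML pages specially, leave the rest uncached
    return 'Cache HTML' if path.endswith('.html') else 'No Cache'

def caching_middleware(app):
    def middleware(environ, start_response):
        policy = choose_policy(environ.get('PATH_INFO', ''))
        def start(status, headers, exc_info=None):
            # Append the policy's headers, plus a debugging header that
            # records which policy was applied
            headers = list(headers) + POLICIES[policy]
            headers.append(('X-Caching-Policy', policy))
            return start_response(status, headers, exc_info)
        return app(environ, start)
    return middleware
```

The same shape (inspect the request, pick a named policy, apply its headers) carries over to whatever hook your framework offers for outgoing responses.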

But, wait, confused about HTTP caching? Not sure what the difference between max-age and s-maxage is? Then do yourself a favour and head over to the caching tutorial at mnot.net. It's a must-read if you're working with caching. And the good news is it's very well written, and caching is actually easy!


You can download my caching module over at GitHub. What follows is an explanation of what it does in detail. This will allow you to understand the code and adjust it to your needs.

Understand the code

(If at this point you're still reading this in your RSS application, and you're not seeing syntax colouring, you might want to head over to my blog.)

The set_cache_headers function gives a good overview of what's happening:

@component.adapter(Interface, IAfterPublicationEvent)
def set_cache_headers(object, event):
    request = event.request
    response = request.response

    # If no caching policy was previously set, we'll choose one at this point:
    caching_policy = response.headers.get(CACHE_POLICY_HEADER)
    if caching_policy is None:
        caching_policy = _choose_caching_policy(object, request)
    if caching_policy:
        # Set a header on the response with the policy chosen
        response.setHeader(CACHE_POLICY_HEADER, caching_policy)

    # Here's where we actually set the cache headers:
    if caching_policy:
        caching_policies[caching_policy](response)

Note how function _choose_caching_policy is asked to determine a caching policy. This function introspects the object that's being published (we're talking Bobo here), and then employs a simple if ... elif ... elif ... sequence to decide which caching policy is appropriate. An excerpt from _choose_caching_policy:

if portal_type == 'Plone Site' and request.response.status == 302:
    return 'No Cache' # don't cache redirects on the root
elif content_type.startswith('text/html'):
    return 'Cache HTML'
elif ...

If _choose_caching_policy returns a policy name, then in set_cache_headers we look up the policy function in the caching_policies dict and call it:

if caching_policy:
    caching_policies[caching_policy](response)

The caching_policies dict is where we define all our policies:

caching_policies = {
    'Cache HTML':
    lambda response: _set_max_age(response, datetime.timedelta(days=-1),
                                  cache_ctrl={'s-maxage': '3600'}),
    'Cache Media Content':
    lambda response: _set_max_age(response, datetime.timedelta(hours=4)),
    'Cache Resource':
    lambda response: _set_max_age(response, datetime.timedelta(days=32),
                                  cache_ctrl={'public': None}),
    'No Cache':
    lambda response: _set_max_age(response, datetime.timedelta(days=-1)),
}

You can see how each caching policy delegates to another function called _set_max_age. This powerhouse of a caching subroutine computes the actual headers to be used and sets them on the response.

Take a closer look at Cache HTML to understand what this policy does:

_set_max_age(response, timedelta(days=-1), cache_ctrl={'s-maxage': '3600'})

This reads as:

Never cache this in the browser (timedelta(days=-1)), but do cache it in the proxy for one hour ({'s-maxage': '3600'}).

The Cache Resource policy sets the freshness to 32 days, and adds public to the Cache-Control header:

_set_max_age(response, timedelta(days=32), cache_ctrl={'public': None})

How to test it

We want to test two different things here:

  1. The response headers that we want are actually set. Our code works.
  2. The headers have the desired effect. Our theory works.

For (1) we will use functional tests. It's important to get (1) right before moving on to (2).

For (2) you should use tools like Page Speed or the Cacheability Query.

What follows is an example of a functional doctest that uses zope.testbrowser to test that the right headers are set, for (1).

Some convenience functions:

>>> import datetime, time
>>> def parse_expires(date_string):
...     return datetime.datetime(*
...         (time.strptime(date_string,
...          "%a, %d %b %Y %H:%M:%S GMT")[0:6]))
>>> def delta(date_string):
...     now = datetime.datetime.utcnow()
...     return parse_expires(date_string) - now

Check the caching policy and response headers that are set for folders:

>>> browser.open(portal['myfolder'].absolute_url())
>>> browser.headers['X-Caching-Policy']
'Cache HTML'
>>> browser.headers['Cache-Control']
>>> d = delta(browser.headers['Expires'])
>>> (d.days, d.seconds) < (0, 0)
True

What headers are set for images that are stored in the CMS?:

>>> some_image = portal['images']['bar.jpg']
>>> browser.open(some_image.absolute_url())
>>> browser.headers['X-Caching-Policy']
'Cache Media Content'
>>> browser.headers['Cache-Control']
>>> d = delta(browser.headers['Expires'])
>>> (d.days, d.seconds) > (0, 14000)
True

And lastly, we want static resources to be cached for a long time:

>>> browser.open(portal_url + '/++resource++myresources/logo.png')
>>> browser.headers['X-Caching-Policy']
'Cache Resource'
>>> browser.headers['Cache-Control']
>>> d = delta(browser.headers['Expires'])
>>> (d.days, d.seconds) > (30, 0)
True
>>> 'Last-Modified' in browser.headers
True


16 hours into a new CMS with Pyramid

January 25, 2010 | categories: Python, Web, Pyramid, Programming, Kotti | View Comments

I started implementing a new CMS yesterday. (Note that the date of this entry is wrong; it should say 2011/01/25.)

I'm basing it on Pyramid, and it's mostly following Pyramid's default patterns (with traversal and standard security, and SQLAlchemy for persistence). At the same time it tries to be extensible in that you can add new content types and views from within your own packages.


This post is not about the details of the CMS itself, but about how much Pyramid and SQLAlchemy rock for this kind of thing.

I'm really impressed by what I was able to do with these two tools in just two days -- and that includes time discussing and thinking about what I really want, i.e. design.

What I've implemented so far:

  • Traversal with persistent, inheritable ACLs
  • A Node class and an example Document class that inherits from it
  • Nodes are set up to use the adjacency list pattern, and every node knows about its parent.
  • Every Node is a container of other nodes.
  • Every type of Node may have its own view, which is registered as usual.
  • Nodes may have different views on a per-instance basis. That is, the root document can have a different view than my personal homepage, although they're both instances of Document.
  • SQLAlchemy is set up to do polymorphic queries, that is, querying for Nodes will hand you back instances of Document if the Node happens to be a Document.
  • Plug in your own modules via the Paste Deploy INI file to extend the CMS with new content types and views.
  • Unit tests and functional tests are running.
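The adjacency list and polymorphic setup from the list above can be sketched roughly like this (a hedged sketch with invented names, not Kotti's actual schema):

```python
# A sketch of the adjacency list pattern combined with joined-table
# polymorphic inheritance in SQLAlchemy. All names here are invented.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import (backref, declarative_base, relationship,
                            sessionmaker)

Base = declarative_base()

class Node(Base):
    __tablename__ = 'nodes'
    id = Column(Integer, primary_key=True)
    type = Column(String(30))
    parent_id = Column(Integer, ForeignKey('nodes.id'))
    name = Column(String(50))
    # Adjacency list: every node knows its parent and contains its children
    children = relationship(
        'Node', backref=backref('parent', remote_side=[id]))
    __mapper_args__ = {'polymorphic_on': type,
                       'polymorphic_identity': 'node'}

class Document(Node):
    __tablename__ = 'documents'
    id = Column(Integer, ForeignKey('nodes.id'), primary_key=True)
    body = Column(String)
    __mapper_args__ = {'polymorphic_identity': 'document'}
```

With polymorphic_on set on the base class, querying for Node hands back Document instances where the row happens to be a Document, which is the polymorphic behaviour described above.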

What's missing from a minimal, working CMS:

  • View and Edit screens,
  • Theming, template inheritance story,
  • Content type factories, i.e. "which types of content may the user add in this context",
  • and probably a few more things that I forgot.

Update: Kotti's source may be found at its homepage: https://github.com/Kotti/Kotti. Kotti also has a mailing list.

