Benchmark

Introduction

This benchmark is not meant to be exhaustive, nor fair to SQL. It shows how django-cachalot behaves on an unoptimised application. On an application running only perfectly optimised SQL queries, django-cachalot may not be useful. Unfortunately, most Django apps (including Django itself) use unoptimised queries. Of course, they often lack useful indexes (even though adding one requires only about 20 characters…). But what you may not know is that the ORM currently generates totally unoptimised queries [1].
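Adding such an index is usually a one-argument change on the model field. A minimal sketch, assuming a hypothetical Article model:

    from django.db import models

    class Article(models.Model):
        # db_index=True is the roughly 20 characters needed to create the index.
        slug = models.CharField(max_length=100, db_index=True)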

Conditions

In this benchmark, a small database is generated, and each test is executed 20 times under the following conditions:

  • CPU: Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
  • RAM: 12281228 kB
  • Linux distribution: Ubuntu 14.04 trusty
  • Python: 2.7.6
  • Django: 1.7.6
  • cachalot: 1.0.0
  • sqlite: 3.8.2
  • PostgreSQL: 9.4.1
  • MySQL: 5.5.41
  • Redis: 2.8.4
  • memcached: 1.4.14
  • psycopg2: 2.6
  • MySQLdb: 1.3.6

Database results

  • mysql is 2.1× slower, then 0.9× faster
  • postgresql is 1.1× slower, then 13.3× faster
  • sqlite is 1.1× slower, then 8.5× faster

Cache results

  • filebased is 1.2× slower, then 8.2× faster
  • locmem is 1.1× slower, then 9.5× faster
  • memcached is 1.2× slower, then 7.3× faster
  • pylibmc is 1.2× slower, then 6.5× faster
  • redis is 1.1× slower, then 7.5× faster

Cache detailed results

Redis

[1] The ORM fetches far too much data if you don't restrict it using .only() and .defer(). You can divide the execution time of most queries by 2-3 by specifying exactly what you want to fetch, but listing the required fields for every query is tedious and unmaintainable. Automating this using field usage statistics is possible and would drastically improve performance. Other performance issues occur with slicing. You can often optimise a sliced query using a subquery, like YourModel.objects.filter(pk__in=YourModel.objects.filter(…)[10000:10050]).select_related(…) instead of YourModel.objects.filter(…).select_related(…)[10000:10050]. I may work on these issues one day.
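A minimal sketch of both optimisations, assuming a hypothetical Article model whose app name, fields and filter criteria are placeholders:

    from myapp.models import Article  # hypothetical app and model

    # Fetch only the columns the calling code actually uses.
    titles = Article.objects.filter(published=True).only('title', 'pub_date')

    # Slice through a subquery: select the window of primary keys first,
    # then fetch the heavier, joined data for those 50 rows only.
    window = Article.objects.filter(published=True)[10000:10050]
    page = Article.objects.filter(pk__in=window).select_related('author')

The idea is that LIMIT/OFFSET then applies to a narrow primary-key subquery, so the join added by select_related() only touches the 50 rows actually returned. Depending on the database backend, the sliced subquery may need to be evaluated first, e.g. by wrapping it in list().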