Data caching

To speed up recurring queries to the server, pymaid lets you cache data. This behaviour is switched on by default:

import pymaid
rm = pymaid.connect_catmaid()
INFO  : Global CATMAID instance set. (pymaid)

Query for a neuron for the first time:

%time n = pymaid.get_neuron(16)
CPU times: user 146 ms, sys: 10.6 ms, total: 156 ms
Wall time: 1.01 s

Query for the same neuron a second time:

%time n2 = pymaid.get_neuron(16)
INFO  : Cached data used. (pymaid)
INFO  : Cached data used. (pymaid)
CPU times: user 128 ms, sys: 6.77 ms, total: 135 ms
Wall time: 146 ms

For the second query, cached data was used, which gives us roughly a 7x speed-up (1.01 s down to 146 ms).
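The effect of reusing cached results can be illustrated with plain Python memoization via functools.lru_cache. This is only an illustration of the principle, not how pymaid implements its cache; slow_query is a made-up stand-in for a server request:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def slow_query(x):
    # stand-in for a server round-trip
    time.sleep(0.2)
    return x * 2

t0 = time.perf_counter()
slow_query(16)          # first call: hits the "server"
first = time.perf_counter() - t0

t0 = time.perf_counter()
slow_query(16)          # second call: answered from the cache
second = time.perf_counter() - t0

print(second < first)   # True: the cached call is much faster
```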

Fine-tuning the cache

You can restrict the usage of cached data either by size to prevent running out of memory or by time to discard old data:

Caching is a property of the CatmaidInstance you are using. Here, we change the maximum memory used to 256 megabytes (the default is 128mb) and set a maximum age of 15 min (= 900 s; no limit by default):

rm.setup_cache(size_limit=256, time_limit=900)
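To make the two limits concrete, here is a minimal sketch of a cache that evicts entries by size and discards them by age. The class and its names (BoundedCache, set, get) are hypothetical and only loosely mirror what size_limit and time_limit do; they are not pymaid internals:

```python
import sys
import time

class BoundedCache:
    """Toy cache with a byte-size limit and a maximum entry age,
    loosely mirroring setup_cache(size_limit=..., time_limit=...)."""

    def __init__(self, size_limit_mb=128, time_limit_s=None):
        self.size_limit = size_limit_mb * 1024 ** 2
        self.time_limit = time_limit_s
        self._store = {}  # key -> (timestamp, value)

    def set(self, key, value):
        self._store[key] = (time.time(), value)
        # evict the oldest entries until we are under the size limit
        while self._size() > self.size_limit and self._store:
            oldest = min(self._store, key=lambda k: self._store[k][0])
            del self._store[oldest]

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        ts, value = entry
        if self.time_limit is not None and time.time() - ts > self.time_limit:
            del self._store[key]  # entry is too old -> discard it
            return None
        return value

    def _size(self):
        # rough size estimate of all cached values in bytes
        return sum(sys.getsizeof(v) for _, v in self._store.values())

cache = BoundedCache(size_limit_mb=256, time_limit_s=900)
cache.set('neuron-16', {'id': 16})
print(cache.get('neuron-16'))  # {'id': 16}
```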

You can inspect the size of your current cache (in megabytes):

rm.cache_size
0.8

You can also clear the cache:

rm.clear_cache()
rm.cache_size
0.0

Caching can be switched off either when initializing the CatmaidInstance …

rm = pymaid.CatmaidInstance('server_url', 'api_token', 'http_user', 'http_password', caching=False)

… or by changing the corresponding attribute on the fly:

rm.caching = False

Saving cache

Imagine running some analysis: what if you want to preserve the exact data that was used for that analysis? You can save the cache to a separate file and restore it later:

n = pymaid.get_neuron(16)
rm.save_cache('cache.pickle')

rm.clear_cache()
rm.load_cache('cache.pickle')

n = pymaid.get_neuron(16)
INFO  : Cached data used. (pymaid)
INFO  : Cached data used. (pymaid)
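The idea behind save_cache/load_cache can be sketched with Python's pickle module: serialize the cached responses to a file, then deserialize them later. The dictionary layout and the endpoint key below are assumptions for illustration, not pymaid's actual on-disk format:

```python
import os
import pickle
import tempfile

# hypothetical cached server responses, keyed by API endpoint
cache = {'/16/compact-skeleton': {'nodes': [1, 2, 3]}}

path = os.path.join(tempfile.mkdtemp(), 'cache.pickle')

with open(path, 'wb') as f:
    pickle.dump(cache, f)      # analogous to rm.save_cache('cache.pickle')

with open(path, 'rb') as f:
    restored = pickle.load(f)  # analogous to rm.load_cache('cache.pickle')

print(restored == cache)       # True: the exact data is preserved
```

Because the restored cache is byte-for-byte identical to what was saved, any query answered from it returns exactly the data used in the original analysis.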