klepto module documentation

archives module

custom caching dict, which archives results to memory, file, or database

class cache(*args, **kwds)

Bases: dict

dictionary augmented with an archive backend

initialize a dictionary with an archive backend

Parameters:

archive (archive, default=null_archive()) – instance of archive object

__archive(archive)
__get_archive()
__get_class()
__repr__()

Return repr(self).

property __type__
property archive
archived(*on)

check if the cache is archived, or toggle archiving

If on is True, turn on the archive; if on is False, turn off the archive

drop()

set the current archive to NULL

dump(*args)

dump contents to archive

If arguments are given, only dump the specified keys

load(*args)

load archive contents

If arguments are given, only load the specified keys

open(archive)

replace the current archive with the archive provided

popkeys(k[, d]) → v, remove specified keys and return corresponding values.

If key in keys is not found, d is returned if given, otherwise KeyError is raised.

sync(clear=False)

synchronize cache and archive contents

If clear is True, clear all archive contents before synchronizing cache

to_frame()

convert a klepto archive to a pandas DataFrame
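
For example, a minimal sketch of typical cache usage with a file_archive backend (assuming the constructor accepts the archive keyword as documented above, and that 'memo.pkl' is writable in the current directory):

>>> from klepto.archives import cache, file_archive
>>> d = cache(archive=file_archive('memo.pkl'))
>>> d['a'] = 1
>>> d['b'] = 2
>>> d.dump()          # push the cache contents to the file archive
>>> d.clear()         # empty the in-memory cache (the archive is untouched)
>>> d.load('a')       # pull only the specified key back from the archive
>>> d['a']
1
>>> d.archived()      # check whether an archive backend is attached
True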

class dict_archive(name=None, dict=None, cached=True, **kwds)

Bases: dict_archive

initialize a dictionary archive

static __new__(dict_archive, name=None, dict=None, cached=True, **kwds)

initialize a dictionary with an in-memory dictionary archive backend

Parameters:
  • name (str, default=None) – (optional) identifier string

  • dict (dict, default={}) – initial dictionary to seed the archive

  • cached (bool, default=True) – interact through an in-memory cache
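
For example, a small sketch of an in-memory archive round-trip (dump, clear, and load behave as documented for the cache class above):

>>> from klepto.archives import dict_archive
>>> demo = dict_archive('demo', cached=True)
>>> demo['x'] = 123
>>> demo.dump()          # write the cached entries to the archive backend
>>> demo.clear()         # empty the cache
>>> demo.load()          # repopulate the cache from the archive
>>> demo['x']
123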

classmethod from_frame(dataframe)
class dir_archive(name=None, dict=None, cached=True, **kwds)

Bases: dir_archive

initialize a file folder with a synchronized dictionary interface

Parameters:
  • dirname (str, default='memo') – path of the archive root directory

  • serialized (bool, default=True) – save python objects in pickled files

  • compression (int, default=0) – compression level (0 to 9), 0 is None

  • permissions (octal, default=0o775) – read/write permission indicator

  • memmode (str, default=None) – mode, one of {None, 'r+', 'r', 'w+', 'c'}

  • memsize (int, default=100) – size (MB) of cache for in-memory compression

  • protocol (int, default=DEFAULT_PROTOCOL) – pickling protocol

static __new__(dir_archive, name=None, dict=None, cached=True, **kwds)

initialize a dictionary with a file-folder archive backend

Parameters:
  • name (str, default='memo') – path of the archive root directory

  • dict (dict, default={}) – initial dictionary to seed the archive

  • cached (bool, default=True) – interact through an in-memory cache

  • serialized (bool, default=True) – save python objects in pickled files

  • compression (int, default=0) – compression level (0 to 9), 0 is None

  • permissions (octal, default=0o775) – read/write permission indicator

  • memmode (str, default=None) – mode, one of {None, 'r+', 'r', 'w+', 'c'}

  • memsize (int, default=100) – size (MB) of cache for in-memory compression

  • protocol (int, default=DEFAULT_PROTOCOL) – pickling protocol
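
For example, a sketch of a file-folder archive rooted at a hypothetical directory 'memo_dir' (created on disk if it does not exist; with cached=False, entries are written to the directory directly rather than through an in-memory cache):

>>> from klepto.archives import dir_archive
>>> arch = dir_archive('memo_dir', cached=False, compression=0)
>>> arch['key'] = [1, 2, 3]     # written straight to a pickled file under memo_dir
>>> arch['key']
[1, 2, 3]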

classmethod from_frame(dataframe)
class file_archive(name=None, dict=None, cached=True, **kwds)

Bases: file_archive

initialize a file with a synchronized dictionary interface

Parameters:
  • filename (str, default='memo.pkl') – path of the file archive

  • serialized (bool, default=True) – save python objects in pickled file

  • protocol (int, default=DEFAULT_PROTOCOL) – pickling protocol

static __new__(file_archive, name=None, dict=None, cached=True, **kwds)

initialize a dictionary with a single file archive backend

Parameters:
  • name (str, default='memo.pkl') – path of the file archive

  • dict (dict, default={}) – initial dictionary to seed the archive

  • cached (bool, default=True) – interact through an in-memory cache

  • serialized (bool, default=True) – save python objects in pickled file

  • protocol (int, default=DEFAULT_PROTOCOL) – pickling protocol
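
For example, a sketch using a single-file archive (the file 'memo.pkl' is created in the current directory):

>>> from klepto.archives import file_archive
>>> db = file_archive('memo.pkl', cached=True, serialized=True)
>>> db['answer'] = 42
>>> db.dump()            # write the cache to memo.pkl
>>> db2 = file_archive('memo.pkl', cached=True)
>>> db2.load()           # read memo.pkl into a fresh cache
>>> db2['answer']
42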

classmethod from_frame(dataframe)
class hdf_archive(name=None, dict=None, cached=True, **kwds)

Bases: hdf_archive

initialize an archive

static __new__(hdf_archive, name=None, dict=None, cached=True, **kwds)

initialize a dictionary with a single hdf5 file archive backend

Parameters:
  • name (str, default='memo.hdf5') – path of the file archive

  • dict (dict, default={}) – initial dictionary to seed the archive

  • cached (bool, default=True) – interact through an in-memory cache

  • serialized (bool, default=True) – pickle saved python objects

  • protocol (int, default=DEFAULT_PROTOCOL) – pickling protocol

  • meta (bool, default=False) – store in root metadata (not in dataset)

classmethod from_frame(dataframe)
class hdfdir_archive(name=None, dict=None, cached=True, **kwds)

Bases: hdfdir_archive

initialize an archive

static __new__(hdfdir_archive, name=None, dict=None, cached=True, **kwds)

initialize a dictionary with a hdf5 file-folder archive backend

Parameters:
  • name (str, default='memo') – path of the archive root directory

  • dict (dict, default={}) – initial dictionary to seed the archive

  • cached (bool, default=True) – interact through an in-memory cache

  • serialized (bool, default=True) – pickle saved python objects

  • permissions (octal, default=0o775) – read/write permission indicator

  • protocol (int, default=DEFAULT_PROTOCOL) – pickling protocol

  • meta (bool, default=False) – store in root metadata (not in dataset)

classmethod from_frame(dataframe)
class null_archive(name=None, dict=None, cached=True, **kwds)

Bases: null_archive

initialize a permanently-empty dictionary

static __new__(null_archive, name=None, dict=None, cached=True, **kwds)

initialize a dictionary with a permanently-empty archive backend

Parameters:
  • name (str, default=None) – (optional) identifier string

  • dict (dict, default={}) – initial dictionary to seed the archive

  • cached (bool, default=True) – interact through an in-memory cache

classmethod from_frame(dataframe)
class sql_archive(name=None, dict=None, cached=True, **kwds)

Bases: sqltable_archive

initialize a sql database with a synchronized dictionary interface

Connect to an existing database, or initialize a new database, at the selected database url. For example, to use a sqlite database 'foo.db' in the current directory, database='sqlite:///foo.db'. To use a mysql or postgresql database, sqlalchemy must be installed. When connecting to sqlite, the default database is ':memory:'. If sqlalchemy is not installed, storable values are limited to strings, integers, floats, and other basic objects; to store functions, classes, and similar constructs, sqlalchemy must be installed.

Parameters:
  • database (str, default=None) – database url (see above note)

  • table (str, default='memo') – name of the associated database table

static __new__(sql_archive, name=None, dict=None, cached=True, **kwds)

initialize a dictionary with a sql database archive backend

Connect to an existing database, or initialize a new database, at the selected database url. For example, to use a sqlite database 'foo.db' in the current directory, database='sqlite:///foo.db'. To use a mysql database 'foo' on localhost, database='mysql://user:pass@localhost/foo'. For postgresql, use database='postgresql://user:pass@localhost/foo'. When connecting to sqlite, the default database is ':memory:'; otherwise, the default database is 'defaultdb'. If sqlalchemy is not installed, storable values are limited to strings, integers, floats, and other basic objects. If sqlalchemy is installed, additional keyword options can provide database configuration, such as connection pooling. To use a mysql or postgresql database, sqlalchemy must be installed.

Parameters:
  • name (str, default=None) – database url (see above note)

  • dict (dict, default={}) – initial dictionary to seed the archive

  • cached (bool, default=True) – interact through an in-memory cache

  • serialized (bool, default=True) – save objects as pickled strings

  • protocol (int, default=DEFAULT_PROTOCOL) – pickling protocol
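
For example, a sketch using a sqlite database file 'foo.db' in the current directory (the file is created if it does not exist; sqlalchemy is assumed to be installed for anything beyond basic value types):

>>> from klepto.archives import sql_archive
>>> db = sql_archive('sqlite:///foo.db', cached=True)
>>> db['key'] = 'value'
>>> db.dump()            # commit the cached entry to the database
>>> db.load()
>>> db['key']
'value'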

classmethod from_frame(dataframe)
class sqltable_archive(name=None, dict=None, cached=True, **kwds)

Bases: sqltable_archive

initialize a sql database with a synchronized dictionary interface

Connect to an existing database, or initialize a new database, at the selected database url. For example, to use a sqlite database 'foo.db' in the current directory, database='sqlite:///foo.db'. To use a mysql or postgresql database, sqlalchemy must be installed. When connecting to sqlite, the default database is ':memory:'. If sqlalchemy is not installed, storable values are limited to strings, integers, floats, and other basic objects; to store functions, classes, and similar constructs, sqlalchemy must be installed.

Parameters:
  • database (str, default=None) – database url (see above note)

  • table (str, default='memo') – name of the associated database table

static __new__(sqltable_archive, name=None, dict=None, cached=True, **kwds)

initialize a dictionary with a sql database table archive backend

Connect to an existing database, or initialize a new database, at the selected database url. For example, to use a sqlite database 'foo.db' in the current directory, database='sqlite:///foo.db'. To use a mysql database 'foo' on localhost, database='mysql://user:pass@localhost/foo'. For postgresql, use database='postgresql://user:pass@localhost/foo'. When connecting to sqlite, the default database is ':memory:'; otherwise, the default database is 'defaultdb'. Connections should be given as database?table=tablename; for example, name='sqlite:///foo.db?table=bar'. If not provided, the default tablename is 'memo'. If sqlalchemy is not installed, storable values are limited to strings, integers, floats, and other basic objects. If sqlalchemy is installed, additional keyword options can provide database configuration, such as connection pooling. To use a mysql or postgresql database, sqlalchemy must be installed.

Parameters:
  • name (str, default=None) – url for database table (see above note)

  • dict (dict, default={}) – initial dictionary to seed the archive

  • cached (bool, default=True) – interact through an in-memory cache

  • serialized (bool, default=True) – save objects as pickled strings

  • protocol (int, default=DEFAULT_PROTOCOL) – pickling protocol
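
For example, a sketch selecting a specific table 'bar' in a sqlite database, per the url convention above (with cached=False, writes go directly to the table):

>>> from klepto.archives import sqltable_archive
>>> t = sqltable_archive('sqlite:///foo.db?table=bar', cached=False)
>>> t['x'] = 1.5
>>> t['x']
1.5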

classmethod from_frame(dataframe)

crypto module

__hash(obj, /)

Return the hash value for the given object.

Two objects that compare equal must also have the same hash value, but the reverse is not necessarily true.

algorithms()

return a tuple of available hash algorithms

encodings()

return a tuple of available encodings and string-like types

hash(object, algorithm=None)

cryptographic hashing

algorithm: one of (None, 'sha1', 'sha3_512', 'sha256', 'sha512_256', 'blake2b', 'sha384', 'sha3_224', 'md5-sha1', 'md5', 'sha3_384', 'blake2s', 'sha224', 'shake_256', 'sha512', 'sm3', 'sha512_224', 'sha3_256', 'shake_128')
The default is algorithm=None, which uses python's 'hash'.
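
For example (hash(3) follows python's built-in hash; the 32-character result for 'md5' assumes the named algorithms return hex digest strings):

>>> from klepto.crypto import hash, algorithms
>>> hash(3)                      # algorithm=None uses python's built-in hash
3
>>> 'md5' in algorithms()        # the hashlib-backed algorithms listed above
True
>>> len(hash((1, 2, 'a'), algorithm='md5'))
32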

pickle(object, serializer=None, **kwds)

pickle an object (to a string)

serializer: name of pickler module with a 'dumps' method
The default is serializer=None, which uses python's 'repr'.

NOTE: any ‘bad’ kwds will cause all kwds to be ignored.
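
For example (with the default serializer=None the 'pickled' form is just the repr string; serializers() lists the picklers available on the system):

>>> from klepto.crypto import pickle, serializers
>>> pickle([1, 2, 3])            # default serializer=None uses python's repr
'[1, 2, 3]'
>>> 'pickle' in serializers()    # the stdlib pickler is typically available
True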

serializers()

return a tuple of string names of serializers

string(object, encoding=None, strict=True)

encode an object (as a string)

strict: bool or None, for 'strictness' of the encoding
encoding: one of the available string encodings or string-like types

For encodings, such as 'utf-8', strict=True will raise an exception in the case of an encoding error, strict=None will ignore malformed data, and strict=False will replace malformed data with a suitable marker such as '?' or '�'. For string-like types, strict=True restricts the type casting to the list of types in klepto.crypto.encodings().

The default is encoding=None, which uses python’s ‘str’.
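
For example (the bytes result for 'utf-8' is an assumption about how codec-style encodings are applied; the exact names available can be checked with klepto.crypto.encodings()):

>>> from klepto.crypto import string
>>> string(123)                       # default encoding=None uses python's str
'123'
>>> string('abc', encoding='utf-8')   # a named codec yields the encoded bytes
b'abc'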

keymaps module

custom ‘keymaps’ for generating dictionary keys from function input signatures

__chain__(x, y)

chain two keymaps: calls ‘x’ then ‘y’ on object to produce y(x(object))

class hashmap(typed=False, flat=True, sentinel=<NOSENTINEL>, **kwds)

Bases: keymap

tool for converting a function’s input signature to a unique key

This keymap generates a hash for the given object. Not all objects are hashable, and generating a hash incurs some information loss. Hashing is fast; however, there is no method to recover the input signature from a hash.

initialize the key builder

typed: if True, include type information in the key
flat: if True, flatten the key to a sequence; if False, use (args, kwds)
sentinel: marker for separating args and kwds in flattened keys
algorithm: string name of hashing algorithm [default: use python's hash]

This keymap stores function args and kwds as (args, kwds) if flat=False, or a flattened (*args, zip(**kwds)) if flat=True. If typed, then include a tuple of type information (args, kwds, argstypes, kwdstypes) in the generated key. If a sentinel is given, the sentinel will be added to a flattened key to indicate the boundary between args, keys, argstypes, and kwdstypes.

Use klepto.crypto.algorithms() to get the names of available hashing algorithms.
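
For example, a sketch of key generation with a hashmap (python's default hash varies between sessions for some objects, so a fixed algorithm such as 'md5' gives keys that are stable across sessions):

>>> from klepto.keymaps import hashmap
>>> h = hashmap(algorithm='md5')
>>> key = h(1, 2, x=3)           # build a key from an input signature
>>> key == h(1, 2, x=3)          # the same signature always maps to the same key
True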

encode(*args, **kwds)

use a flattened scheme for generating a key

encrypt(*args, **kwds)

use a non-flat scheme for generating a key

class keymap(typed=False, flat=True, sentinel=<NOSENTINEL>, **kwds)

Bases: object

tool for converting a function’s input signature to a unique key

This keymap does not serialize objects, but does perform some formatting. Since the keys are stored as raw objects, there is no information loss, and thus it is easy to recover the original input signature. However, to use an object as a key, the object must be hashable.

initialize the key builder

typed: if True, include type information in the key
flat: if True, flatten the key to a sequence; if False, use (args, kwds)
sentinel: marker for separating args and kwds in flattened keys

This keymap stores function args and kwds as (args, kwds) if flat=False, or a flattened (*args, zip(**kwds)) if flat=True. If typed, then include a tuple of type information (args, kwds, argstypes, kwdstypes) in the generated key. If a sentinel is given, the sentinel will be added to a flattened key to indicate the boundary between args, keys, argstypes, and kwdstypes.
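
For example, a sketch of keymap behavior (the keys below retain the raw objects, so the comparisons hold whatever the exact tuple layout is):

>>> from klepto.keymaps import keymap, hashmap
>>> k = keymap(typed=False, flat=True)
>>> key = k('a', 1, x=2.0)          # raw objects are retained in the key
>>> key == k('a', 1, x=2.0)         # the same signature maps to the same key
True
>>> key == k('a', 1, x=2)           # without typed=True, 2 and 2.0 produce equal keys
True
>>> chained = k + hashmap()         # __add__ composes two keymaps into a new keymap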

__add__(other)

concatenate two keymaps, to produce a new keymap

__call__(*args, **kwds)

generate a key from optionally typed positional and keyword arguments

__chain(map)

create a ‘nested’ keymap

__get_inner()

get ‘nested’ keymap, if one exists

__get_outer()

get ‘outer’ keymap

__get_sentinel()
__repr__()

Return repr(self).

__sentinel(mark)
decode(key)

recover the stored value directly from a generated (flattened) key

decrypt(key)

recover the stored value directly from a generated (non-flat) key

dumps(obj, **kwds)

a more pickle-like interface for encoding a key

encode(*args, **kwds)

use a flattened scheme for generating a key

encrypt(*args, **kwds)

use a non-flat scheme for generating a key

property inner

get ‘nested’ keymap, if one exists

loads(key)

a more pickle-like interface for decoding a key

property outer

get ‘outer’ keymap

property sentinel
class picklemap(typed=False, flat=True, sentinel=<NOSENTINEL>, **kwds)

Bases: keymap

tool for converting a function’s input signature to a unique key

This keymap serializes objects by pickling the object. Serializing an object with pickle is relatively slow; however, it will reliably produce a unique key for all picklable objects. Pickling is also a reversible operation, so the original input signature can be recovered from the generated key.

initialize the key builder

typed: if True, include type information in the key
flat: if True, flatten the key to a sequence; if False, use (args, kwds)
sentinel: marker for separating args and kwds in flattened keys
serializer: string name of pickler [default: use python's repr]

This keymap stores function args and kwds as (args, kwds) if flat=False, or a flattened (*args, zip(**kwds)) if flat=True. If typed, then include a tuple of type information (args, kwds, argstypes, kwdstypes) in the generated key. If a sentinel is given, the sentinel will be added to a flattened key to indicate the boundary between args, keys, argstypes, and kwdstypes.

Use klepto.crypto.serializers() to get the names of available picklers. NOTE: the serializer kwd expects a <module> object, and not a <str>.
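
For example, a sketch using the stdlib pickle module as the serializer (passed as a module object, per the NOTE above):

>>> import pickle
>>> from klepto.keymaps import picklemap
>>> p = picklemap(serializer=pickle)
>>> key = p(1, 2, x=3)
>>> key == p(1, 2, x=3)          # picklable signatures map reliably to the same key
True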

encode(*args, **kwds)

use a flattened scheme for generating a key

encrypt(*args, **kwds)

use a non-flat scheme for generating a key

class stringmap(typed=False, flat=True, sentinel=<NOSENTINEL>, **kwds)

Bases: keymap

tool for converting a function’s input signature to a unique key

This keymap serializes objects by converting the object’s __repr__ to a string. Converting to a string is slower than hashing; however, it will produce a key in all cases. Some forms of archival storage, like a database, may require string keys. There is no general method to recover the input signature from a string key; however, recovery is possible for any object whose __repr__ effectively mimics __init__.

initialize the key builder

typed: if True, include type information in the key
flat: if True, flatten the key to a sequence; if False, use (args, kwds)
sentinel: marker for separating args and kwds in flattened keys
encoding: string name of string encoding [default: use python's str]

This keymap stores function args and kwds as (args, kwds) if flat=False, or a flattened (*args, zip(**kwds)) if flat=True. If typed, then include a tuple of type information (args, kwds, argstypes, kwdstypes) in the generated key. If a sentinel is given, the sentinel will be added to a flattened key to indicate the boundary between args, keys, argstypes, and kwdstypes.

Use klepto.crypto.encodings() to get the names of available string encodings.
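
For example, a sketch of string keys (the isinstance check assumes the default settings produce a single string key, which suits archives, such as a database, that require string keys):

>>> from klepto.keymaps import stringmap
>>> s = stringmap()              # default: keys built with python's str
>>> key = s(1, 2, x=3)
>>> isinstance(key, str)
True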

encode(*args, **kwds)

use a flattened scheme for generating a key

encrypt(*args, **kwds)

use a non-flat scheme for generating a key

rounding module

decorators that provide rounding

class deep_round(tol=0)

Bases: object

decorator for rounding a function’s input argument and keywords to the given precision tol. This decorator always rounds to a floating point number.

Rounding is done recursively for each element of all arguments and keywords.

For example:

>>> @deep_round(tol=1)
... def add(x,y):
...   return x+y
...
>>> add(2.54, 5.47)
8.0
>>>
>>> # rounds each float, regardless of depth in an object
>>> add([2.54, 'x'],[5.47, 'y'])
[2.5, 'x', 5.5, 'y']
>>>
>>> # rounds each float, regardless of depth in an object
>>> add([2.54, 'x'],[5.47, [8.99, 'y']])
[2.5, 'x', 5.5, [9.0, 'y']]

__call__(f)

Call self as a function.

__get__(obj, objtype)
__reduce__()

Helper for pickle.

class shallow_round(tol=0)

Bases: object

decorator for rounding a function’s input argument and keywords to the given precision tol. This decorator always rounds to a floating point number.

Rounding is done recursively for each element of all arguments and keywords, however the rounding is shallow (a max of one level deep into each object).

For example:

>>> @shallow_round(tol=1)
... def add(x,y):
...   return x+y
...
>>> add(2.54, 5.47)
8.0
>>>
>>> # rounds each float, at the top-level or first-level of each object
>>> add([2.54, 'x'],[5.47, 'y'])
[2.5, 'x', 5.5, 'y']
>>>
>>> # rounds each float, at the top-level or first-level of each object
>>> add([2.54, 'x'],[5.47, [8.99, 'y']])
[2.5, 'x', 5.5, [8.9900000000000002, 'y']]

__call__(f)

Call self as a function.

__get__(obj, objtype)
__reduce__()

Helper for pickle.

class simple_round(tol=0)

Bases: object

decorator for rounding a function’s input argument and keywords to the given precision tol. This decorator always rounds to a floating point number.

Rounding is only done for arguments or keywords that are floats.

For example:

>>> @simple_round(tol=1)
... def add(x,y):
...   return x+y
...
>>> add(2.54, 5.47)
8.0
>>>
>>> # does not round elements of iterables, only rounds at the top-level
>>> add([2.54, 'x'],[5.47, 'y'])
[2.54, 'x', 5.4699999999999998, 'y']
>>>
>>> # does not round elements of iterables, only rounds at the top-level
>>> add([2.54, 'x'],[5.47, [8.99, 'y']])
[2.54, 'x', 5.4699999999999998, [8.9900000000000002, 'y']]

__call__(f)

Call self as a function.

__get__(obj, objtype)
__reduce__()

Helper for pickle.

safe module

‘safe’ versions of selected caching decorators

If a hashing error occurs, the cached function will be evaluated.

class inf_cache(maxsize=None, cache=None, keymap=None, ignore=None, tol=None, deep=False, purge=False)

Bases: object

‘safe’ version of the infinitely-growing (INF) cache decorator.

This decorator memoizes a function’s return value each time it is called. If called later with the same arguments, the cached value is returned, and not re-evaluated. This cache will grow without bound. To avoid memory issues, it is suggested to frequently dump and clear the cache. This decorator takes an integer tolerance ‘tol’, equal to the number of decimal places to which it will round off floats, and a bool ‘deep’ for whether the rounding on inputs will be ‘shallow’ or ‘deep’. Note that rounding is not applied to the calculation of new results, but rather as a simple form of cache interpolation. For example, with tol=0 and a cached value for f(3.0), f(3.1) will lookup f(3.0) in the cache while f(3.6) will store a new value; however if tol=1, both f(3.1) and f(3.6) will store new values.

maxsize = maximum cache size [fixed at maxsize=None]
cache = storage hashmap (default is {})
keymap = cache key encoder (default is keymaps.stringmap(flat=False))
ignore = function argument names and indices to 'ignore' (default is None)
tol = integer tolerance for rounding (default is None)
deep = boolean for rounding depth (default is False, i.e. 'shallow')
purge = boolean for purging the cache to the archive at maxsize (fixed at False)

If keymap is given, it will replace the hashing algorithm for generating cache keys. Several hashing algorithms are available in ‘keymaps’. The default keymap does not require arguments to the cached function to be hashable. If a hashing error occurs, the cached function will be evaluated.

If the keymap retains type information, then arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Cache typing has a memory penalty, and may also be ignored by some ‘keymaps’.

If ignore is given, the keymap will ignore the arguments with the names and/or positional indices provided. For example, if ignore=(0,), then the key generated for f(1,2) will be identical to that of f(3,2) or f(4,2). If ignore=('y',), then the key generated for f(x=3,y=4) will be identical to that of f(x=3,y=0) or f(x=3,y=10). If ignore=('*','**'), all varargs and varkwds will be 'ignored'. Ignored arguments never trigger a recalculation (they only trigger cache lookups), and thus are 'ignored'. When caching class methods, it may be useful to use ignore=('self',).

View cache statistics (hit, miss, load, maxsize, size) with f.info(). Clear the cache and statistics with f.clear(). Replace the cache archive with f.archive(obj). Load from the archive with f.load(), and dump from the cache to the archive with f.dump().
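
For example, a minimal sketch of the decorator on a hypothetical function squared, illustrating the tol behavior described above:

>>> from klepto.safe import inf_cache
>>> @inf_cache(tol=0)
... def squared(x):
...   return x**2
...
>>> squared(3.0)        # computed and cached
9.0
>>> squared(3.1)        # with tol=0, 3.1 rounds to 3.0 and returns the cached result
9.0
>>> squared(4.0)        # a different rounded key, so a new value is computed and stored
16.0

Statistics, clearing, loading, and dumping then work through squared.info(), squared.clear(), squared.load(), and squared.dump(), as described above.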

__call__(user_function)

Call self as a function.

__get__(obj, objtype)

support instance methods

__reduce__()

Helper for pickle.

class lfu_cache(*args, **kwds)

Bases: object

‘safe’ version of the least-frequently-used (LFU) cache decorator.

This decorator memoizes a function’s return value each time it is called. If called later with the same arguments, the cached value is returned, and not re-evaluated. To avoid memory issues, a maximum cache size is imposed. For caches with an archive, the full cache dumps to archive upon reaching maxsize. For caches without an archive, the LFU algorithm manages the cache. Caches with an archive will use the latter behavior when ‘purge’ is False. This decorator takes an integer tolerance ‘tol’, equal to the number of decimal places to which it will round off floats, and a bool ‘deep’ for whether the rounding on inputs will be ‘shallow’ or ‘deep’. Note that rounding is not applied to the calculation of new results, but rather as a simple form of cache interpolation. For example, with tol=0 and a cached value for f(3.0), f(3.1) will lookup f(3.0) in the cache while f(3.6) will store a new value; however if tol=1, both f(3.1) and f(3.6) will store new values.

maxsize = maximum cache size
cache = storage hashmap (default is {})
keymap = cache key encoder (default is keymaps.stringmap(flat=False))
ignore = function argument names and indices to 'ignore' (default is None)
tol = integer tolerance for rounding (default is None)
deep = boolean for rounding depth (default is False, i.e. 'shallow')
purge = boolean for purging the cache to the archive at maxsize (default is False)

If maxsize is None, this cache will grow without bound.

If keymap is given, it will replace the hashing algorithm for generating cache keys. Several hashing algorithms are available in ‘keymaps’. The default keymap does not require arguments to the cached function to be hashable. If a hashing error occurs, the cached function will be evaluated.

If the keymap retains type information, then arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Cache typing has a memory penalty, and may also be ignored by some ‘keymaps’.

If ignore is given, the keymap will ignore the arguments with the names and/or positional indices provided. For example, if ignore=(0,), then the key generated for f(1,2) will be identical to that of f(3,2) or f(4,2). If ignore=('y',), then the key generated for f(x=3,y=4) will be identical to that of f(x=3,y=0) or f(x=3,y=10). If ignore=('*','**'), all varargs and varkwds will be 'ignored'. Ignored arguments never trigger a recalculation (they only trigger cache lookups), and thus are 'ignored'. When caching class methods, it may be useful to use ignore=('self',).

View cache statistics (hit, miss, load, maxsize, size) with f.info(). Clear the cache and statistics with f.clear(). Replace the cache archive with f.archive(obj). Load from the archive with f.load(), and dump from the cache to the archive with f.dump().

See: http://en.wikipedia.org/wiki/Cache_algorithms#Least_Frequently_Used

__call__(user_function)

Call self as a function.

__get__(obj, objtype)

support instance methods

static __new__(cls, *args, **kwds)
__reduce__()

Helper for pickle.

class lru_cache(*args, **kwds)

Bases: object

‘safe’ version of the least-recently-used (LRU) cache decorator.

This decorator memoizes a function’s return value each time it is called. If called later with the same arguments, the cached value is returned, and not re-evaluated. To avoid memory issues, a maximum cache size is imposed. For caches with an archive, the full cache dumps to archive upon reaching maxsize. For caches without an archive, the LRU algorithm manages the cache. Caches with an archive will use the latter behavior when ‘purge’ is False. This decorator takes an integer tolerance ‘tol’, equal to the number of decimal places to which it will round off floats, and a bool ‘deep’ for whether the rounding on inputs will be ‘shallow’ or ‘deep’. Note that rounding is not applied to the calculation of new results, but rather as a simple form of cache interpolation. For example, with tol=0 and a cached value for f(3.0), f(3.1) will lookup f(3.0) in the cache while f(3.6) will store a new value; however if tol=1, both f(3.1) and f(3.6) will store new values.

maxsize = maximum cache size
cache = storage hashmap (default is {})
keymap = cache key encoder (default is keymaps.stringmap(flat=False))
ignore = function argument names and indices to 'ignore' (default is None)
tol = integer tolerance for rounding (default is None)
deep = boolean for rounding depth (default is False, i.e. 'shallow')
purge = boolean for purging the cache to the archive at maxsize (default is False)

If maxsize is None, this cache will grow without bound.

If keymap is given, it will replace the hashing algorithm for generating cache keys. Several hashing algorithms are available in ‘keymaps’. The default keymap does not require arguments to the cached function to be hashable. If a hashing error occurs, the cached function will be evaluated.

If the keymap retains type information, then arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Cache typing has a memory penalty, and may also be ignored by some ‘keymaps’.

If ignore is given, the keymap will ignore the arguments with the names and/or positional indices provided. For example, if ignore=(0,), then the key generated for f(1,2) will be identical to that of f(3,2) or f(4,2). If ignore=('y',), then the key generated for f(x=3,y=4) will be identical to that of f(x=3,y=0) or f(x=3,y=10). If ignore=('*','**'), all varargs and varkwds will be 'ignored'. Ignored arguments never trigger a recalculation (they only trigger cache lookups), and thus are 'ignored'. When caching class methods, it may be useful to use ignore=('self',).

View cache statistics (hit, miss, load, maxsize, size) with f.info(). Clear the cache and statistics with f.clear(). Replace the cache archive with f.archive(obj). Load from the archive with f.load(), and dump from the cache to the archive with f.dump().
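
For example, a sketch of a bounded cache on a hypothetical function double (when no archive is attached, eviction follows the LRU algorithm described above; the returned values are correct whether served from the cache or recomputed):

>>> from klepto.safe import lru_cache
>>> @lru_cache(maxsize=2)
... def double(x):
...   return 2*x
...
>>> double(1), double(2), double(3)   # the third distinct call exceeds maxsize=2
(2, 4, 6)
>>> double(2)
4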

See: http://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used

__call__(user_function)

Call self as a function.

__get__(obj, objtype)

support instance methods

static __new__(cls, *args, **kwds)
__reduce__()

Helper for pickle.

class mru_cache(*args, **kwds)

Bases: object

‘safe’ version of the most-recently-used (MRU) cache decorator.

This decorator memoizes a function’s return value each time it is called. If called later with the same arguments, the cached value is returned, and not re-evaluated. To avoid memory issues, a maximum cache size is imposed. For caches with an archive, the full cache dumps to archive upon reaching maxsize. For caches without an archive, the MRU algorithm manages the cache. Caches with an archive will use the latter behavior when ‘purge’ is False. This decorator takes an integer tolerance ‘tol’, equal to the number of decimal places to which it will round off floats, and a bool ‘deep’ for whether the rounding on inputs will be ‘shallow’ or ‘deep’. Note that rounding is not applied to the calculation of new results, but rather as a simple form of cache interpolation. For example, with tol=0 and a cached value for f(3.0), f(3.1) will lookup f(3.0) in the cache while f(3.6) will store a new value; however if tol=1, both f(3.1) and f(3.6) will store new values.

maxsize = maximum cache size
cache = storage hashmap (default is {})
keymap = cache key encoder (default is keymaps.stringmap(flat=False))
ignore = function argument names and indices to 'ignore' (default is None)
tol = integer tolerance for rounding (default is None)
deep = boolean for rounding depth (default is False, i.e. 'shallow')
purge = boolean for purging the cache to the archive at maxsize (default is False)

If maxsize is None, this cache will grow without bound.

If keymap is given, it will replace the hashing algorithm for generating cache keys. Several hashing algorithms are available in ‘keymaps’. The default keymap does not require arguments to the cached function to be hashable. If a hashing error occurs, the cached function will be evaluated.

If the keymap retains type information, then arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Cache typing has a memory penalty, and may also be ignored by some ‘keymaps’.

If ignore is given, the keymap will ignore the arguments with the names and/or positional indices provided. For example, if ignore=(0,), then the key generated for f(1,2) will be identical to that of f(3,2) or f(4,2). If ignore=('y',), then the key generated for f(x=3,y=4) will be identical to that of f(x=3,y=0) or f(x=3,y=10). If ignore=('*','**'), all varargs and varkwds will be 'ignored'. Ignored arguments never trigger a recalculation (they only trigger cache lookups), and thus are 'ignored'. When caching class methods, it may be useful to use ignore=('self',).

View cache statistics (hit, miss, load, maxsize, size) with f.info(). Clear the cache and statistics with f.clear(). Replace the cache archive with f.archive(obj). Load from the archive with f.load(), and dump from the cache to the archive with f.dump().

See: http://en.wikipedia.org/wiki/Cache_algorithms#Most_Recently_Used

__call__(user_function)

Call self as a function.

__get__(obj, objtype)

support instance methods

static __new__(cls, *args, **kwds)
__reduce__()

Helper for pickle.

class no_cache(maxsize=0, cache=None, keymap=None, ignore=None, tol=None, deep=False, purge=True)

Bases: object

‘safe’ version of the empty (NO) cache decorator.

Unlike other cache decorators, this decorator does not cache. It is a dummy that collects statistics and conforms to the caching interface. This decorator takes an integer tolerance ‘tol’, equal to the number of decimal places to which it will round off floats, and a bool ‘deep’ for whether the rounding on inputs will be ‘shallow’ or ‘deep’. Note that rounding is not applied to the calculation of new results, but rather as a simple form of cache interpolation. For example, with tol=0 and a cached value for f(3.0), f(3.1) will lookup f(3.0) in the cache while f(3.6) will store a new value; however if tol=1, both f(3.1) and f(3.6) will store new values.

maxsize = maximum cache size [fixed at maxsize=0]
cache = storage hashmap (default is {})
keymap = cache key encoder (default is keymaps.stringmap(flat=False))
ignore = function argument names and indices to 'ignore' (default is None)
tol = integer tolerance for rounding (default is None)
deep = boolean for rounding depth (default is False, i.e. 'shallow')
purge = boolean for purging the cache to the archive at maxsize (fixed at True)

If keymap is given, it will replace the hashing algorithm for generating cache keys. Several hashing algorithms are available in ‘keymaps’. The default keymap does not require arguments to the cached function to be hashable. If a hashing error occurs, the cached function will be evaluated.

If the keymap retains type information, then arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Cache typing has a memory penalty, and may also be ignored by some ‘keymaps’. Here, the keymap is only used to look up keys in an associated archive.

If ignore is given, the keymap will ignore the arguments with the names and/or positional indices provided. For example, if ignore=(0,), then the key generated for f(1,2) will be identical to that of f(3,2) or f(4,2). If ignore=('y',), then the key generated for f(x=3,y=4) will be identical to that of f(x=3,y=0) or f(x=3,y=10). If ignore=('*','**'), all varargs and varkwds will be 'ignored'. Ignored arguments never trigger a recalculation (they only trigger cache lookups), and thus are 'ignored'. When caching class methods, it may be useful to use ignore=('self',).

View cache statistics (hit, miss, load, maxsize, size) with f.info(). Clear the cache and statistics with f.clear(). Replace the cache archive with f.archive(obj). Load from the archive with f.load(), and dump from the cache to the archive with f.dump().

__call__(user_function)

Call self as a function.

__get__(obj, objtype)

support instance methods

__reduce__()

Helper for pickle.

class rr_cache(*args, **kwds)

Bases: object

‘safe’ version of the random-replacement (RR) cache decorator.

This decorator memoizes a function’s return value each time it is called. If called later with the same arguments, the cached value is returned, and not re-evaluated. To avoid memory issues, a maximum cache size is imposed. For caches with an archive, the full cache dumps to archive upon reaching maxsize. For caches without an archive, the RR algorithm manages the cache. Caches with an archive will use the latter behavior when ‘purge’ is False. This decorator takes an integer tolerance ‘tol’, equal to the number of decimal places to which it will round off floats, and a bool ‘deep’ for whether the rounding on inputs will be ‘shallow’ or ‘deep’. Note that rounding is not applied to the calculation of new results, but rather as a simple form of cache interpolation. For example, with tol=0 and a cached value for f(3.0), f(3.1) will lookup f(3.0) in the cache while f(3.6) will store a new value; however if tol=1, both f(3.1) and f(3.6) will store new values.

maxsize = maximum cache size
cache = storage hashmap (default is {})
keymap = cache key encoder (default is keymaps.stringmap(flat=False))
ignore = function argument names and indices to 'ignore' (default is None)
tol = integer tolerance for rounding (default is None)
deep = boolean for rounding depth (default is False, i.e. 'shallow')
purge = boolean for purging the cache to the archive at maxsize (default is False)

If maxsize is None, this cache will grow without bound.

If keymap is given, it will replace the hashing algorithm for generating cache keys. Several hashing algorithms are available in ‘keymaps’. The default keymap does not require arguments to the cached function to be hashable. If a hashing error occurs, the cached function will be evaluated.

If the keymap retains type information, then arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Cache typing has a memory penalty, and may also be ignored by some ‘keymaps’.

If ignore is given, the keymap will ignore the arguments with the names and/or positional indices provided. For example, if ignore=(0,), then the key generated for f(1,2) will be identical to that of f(3,2) or f(4,2). If ignore=('y',), then the key generated for f(x=3,y=4) will be identical to that of f(x=3,y=0) or f(x=3,y=10). If ignore=('*','**'), all varargs and varkwds will be 'ignored'. Ignored arguments never trigger a recalculation (they only trigger cache lookups), and thus are 'ignored'. When caching class methods, it may be useful to use ignore=('self',).

View cache statistics (hit, miss, load, maxsize, size) with f.info(). Clear the cache and statistics with f.clear(). Replace the cache archive with f.archive(obj). Load from the archive with f.load(), and dump from the cache to the archive with f.dump().

See: http://en.wikipedia.org/wiki/Cache_algorithms#Random_Replacement

__call__(user_function)

Call self as a function.

__get__(obj, objtype)

support instance methods

static __new__(cls, *args, **kwds)
__reduce__()

Helper for pickle.

tools module

Assorted python tools

Main functions exported are:
  • isiterable: check if an object is iterable

isiterable(x)

check if an object is iterable
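
For example:

>>> from klepto.tools import isiterable
>>> isiterable([1, 2, 3])
True
>>> isiterable(3)
False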