evennia.utils.idmapper.models

Django ID mapper

Modified for Evennia by making sure that no model references leave caching unexpectedly (no use of WeakRefs).

Also adds cache_size() for monitoring the size of the cache.

class evennia.utils.idmapper.models.SharedMemoryModelBase(name, bases, attrs)[source]

Bases: django.db.models.base.ModelBase

class evennia.utils.idmapper.models.SharedMemoryModel(*args, **kwargs)[source]

Bases: django.db.models.base.Model

Base class for idmapped objects. Inherit from this.

objects
class Meta[source]

Bases: object

abstract = False
classmethod get_cached_instance(id)[source]

Method to retrieve a cached instance by pk value. Returns None when not found (which will always be the case when caching is disabled for this class). Note that the lookup is performed even when instance caching is disabled.
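The identity-map behavior these classmethods describe can be illustrated with a minimal, self-contained sketch. This is plain Python, not Evennia's actual implementation; the `SimpleIdMapper` class and its `_cache` attribute are illustrative names only:

```python
class SimpleIdMapper:
    """Minimal identity-map sketch: one shared cache of instances, keyed by pk."""
    _cache = {}

    def __init__(self, pk):
        self.pk = pk

    @classmethod
    def get_cached_instance(cls, pk):
        # Return the cached instance for this pk, or None on a cache miss.
        return cls._cache.get(pk)

    @classmethod
    def cache_instance(cls, instance, new=False):
        # Store the instance under its pk; `new` would distinguish first-time
        # caching from a re-cache after a db save.
        if instance.pk is not None:
            cls._cache[instance.pk] = instance

obj = SimpleIdMapper(1)
SimpleIdMapper.cache_instance(obj, new=True)
assert SimpleIdMapper.get_cached_instance(1) is obj   # same object returned
assert SimpleIdMapper.get_cached_instance(2) is None  # miss -> None
```

The point of the pattern is that two lookups of the same pk yield the *same* Python object, so in-memory state attached to an instance is never silently duplicated.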

classmethod cache_instance(instance, new=False)[source]

Method to store an instance in the cache.

Parameters
  • instance (Class instance) – the instance to cache.

  • new (bool, optional) – this is the first time this instance is cached (i.e. this is not an update operation like after a db save).

classmethod get_all_cached_instances()[source]

Return the objects so far cached by idmapper for this class.

classmethod flush_cached_instance(instance, force=True)[source]

Method to flush an instance from the cache. The instance will always be flushed from the cache, since this is most likely called from delete(), and we want to make sure we don’t cache dead objects.

classmethod flush_instance_cache(force=False)[source]

This will clean safe objects from the cache. Use force keyword to remove all objects, safe or not.

at_idmapper_flush()[source]

This is called when the idmapper cache is flushed and allows customized actions when this happens.

Returns

do_flush (bool) – If True, flush this object as normal. If False, don't flush and expect this object to handle the flushing on its own.
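A sketch of how a flush loop can honor this hook, in plain Python with illustrative names (not Evennia's actual flushing code):

```python
class Cached:
    def at_idmapper_flush(self):
        # Default behavior: agree to be flushed.
        return True

class KeepAlive(Cached):
    def at_idmapper_flush(self):
        # Opt out: this object handles its own flushing.
        return False

def flush_all(cache, force=False):
    # Drop every entry whose hook returns True; `force` overrides the hook.
    for pk, obj in list(cache.items()):
        if force or obj.at_idmapper_flush():
            del cache[pk]

cache = {1: Cached(), 2: KeepAlive()}
flush_all(cache)
assert list(cache) == [2]   # the opt-out object survived the flush
flush_all(cache, force=True)
assert not cache            # force removes everything
```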

flush_from_cache(force=False)[source]

Flush this instance from the instance cache. Use force to override the result of at_idmapper_flush() for the object.

delete(*args, **kwargs)[source]

Delete the object, clearing cache.

save(*args, **kwargs)[source]

Central database save operation.

Notes

Arguments as per Django documentation. Calls self.at_<fieldname>_postsave(new) (this is a wrapper set by oobhandler: self._oob_at_<fieldname>_postsave())

path = 'evennia.utils.idmapper.models.SharedMemoryModel'
typename = 'SharedMemoryModelBase'
class evennia.utils.idmapper.models.WeakSharedMemoryModelBase(name, bases, attrs)[source]

Bases: evennia.utils.idmapper.models.SharedMemoryModelBase

Uses a WeakValue dictionary for caching instead of a regular one.

class evennia.utils.idmapper.models.WeakSharedMemoryModel(*args, **kwargs)[source]

Bases: evennia.utils.idmapper.models.SharedMemoryModel

Uses a WeakValue dictionary for caching instead of a regular one.
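The weak-caching variant rests on Python's standard weakref.WeakValueDictionary, which drops an entry automatically once no strong reference to the value remains. A minimal demonstration of that standard-library behavior (not Evennia code):

```python
import weakref

class Obj:
    pass

cache = weakref.WeakValueDictionary()
obj = Obj()
cache[1] = obj
assert cache.get(1) is obj   # cached while a strong reference exists

del obj                      # drop the last strong reference
# In CPython, refcounting removes the entry immediately.
assert cache.get(1) is None
```

The trade-off versus SharedMemoryModel's regular dict is that cached objects can vanish between lookups, which is exactly what the non-weak base class was modified to avoid.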

class Meta[source]

Bases: object

abstract = False
path = 'evennia.utils.idmapper.models.WeakSharedMemoryModel'
typename = 'WeakSharedMemoryModelBase'
evennia.utils.idmapper.models.flush_cache(**kwargs)[source]

Flush the idmapper cache. When doing so, the at_idmapper_flush hook is fired on each cached object, allowing it to optionally handle its own flushing.

Uses a signal so we make sure to catch cascades.

evennia.utils.idmapper.models.flush_cached_instance(sender, instance, **kwargs)[source]

Flush the idmapper cache only for a given instance.

evennia.utils.idmapper.models.update_cached_instance(sender, instance, **kwargs)[source]

Re-cache the given instance in the idmapper cache.

evennia.utils.idmapper.models.conditional_flush(max_rmem, force=False)[source]

Flush the cache if the estimated memory usage exceeds max_rmem.

The flusher has a timeout to avoid flushing over and over in rapid succession. This means that in some setups memory usage may stay above the threshold between flushes; such a game probably needs a server with more memory.

Parameters
  • max_rmem (int) – memory-usage estimation threshold after which the cache is flushed.

  • force (bool, optional) – forces a flush, regardless of timeout. Defaults to False.
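The threshold-plus-timeout logic can be sketched as follows. This is a hedged illustration in plain Python; the estimate_rmem_mb stand-in, the FLUSH_TIMEOUT value, and the _now test hook are all inventions for the example, not Evennia's API:

```python
import time

_LAST_FLUSH = 0.0
FLUSH_TIMEOUT = 120.0  # illustrative cooldown, in seconds

def estimate_rmem_mb():
    # Stand-in for a real memory estimate (e.g. via /proc or psutil).
    return 512

def conditional_flush(max_rmem, force=False, _now=None):
    """Flush if estimated memory exceeds max_rmem (MB), at most once per
    FLUSH_TIMEOUT seconds unless force=True. Returns True if flushed."""
    global _LAST_FLUSH
    now = time.time() if _now is None else _now
    if not force and now - _LAST_FLUSH < FLUSH_TIMEOUT:
        return False                      # still inside the cooldown window
    if force or estimate_rmem_mb() > max_rmem:
        _LAST_FLUSH = now
        # ... the actual cache flush would happen here ...
        return True
    return False

assert conditional_flush(1024, _now=1000.0) is False  # under threshold
assert conditional_flush(256, _now=1000.0) is True    # over threshold: flush
assert conditional_flush(256, _now=1050.0) is False   # blocked by cooldown
assert conditional_flush(256, force=True, _now=1050.0) is True  # force wins
```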

evennia.utils.idmapper.models.cache_size(mb=True)[source]

Calculate statistics about the cache.

Note: we cannot get reliable memory statistics from the cache. While we could call getsizeof on each object in the cache, the result is highly imprecise; for a large number of objects it comes out many times larger than the actual memory usage of the entire server. Python is clearly reusing memory behind the scenes in ways that are hard to measure here. Ideas are appreciated. /Griatch

Returns

total_num, {objclass – total_num, …}
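The shape of that return value (a grand total plus per-class counts) can be sketched over a plain pk-to-instance dict. This is an illustration, not Evennia's implementation, and the Room/Exit classes are hypothetical stand-ins:

```python
def cache_size(cache):
    """Return (total, {class name: count}) for a pk->instance cache.
    Sketch only: mimics the shape of the documented return value."""
    by_class = {}
    for obj in cache.values():
        name = type(obj).__name__
        by_class[name] = by_class.get(name, 0) + 1
    return sum(by_class.values()), by_class

class Room: pass
class Exit: pass

cache = {1: Room(), 2: Room(), 3: Exit()}
total, stats = cache_size(cache)
assert total == 3
assert stats == {"Room": 2, "Exit": 1}
```

Counting objects rather than bytes sidesteps the getsizeof imprecision described above while still giving a useful per-class view of cache growth.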