libervia-backend: doc/developer.rst @ 4044:3900626bc100
plugin XEP-0166: refactoring, and various improvements:

- add models for transport and application handlers and linked data
- split models into a separate file
- some type hints
- some documentation comments
- add actions to prepare confirmation, useful to do initial parsing of all contents
- application args/kwargs and some transport data can be initialised during the Jingle
  `initiate` call; this is notably useful when a call is made with transport data (this
  is the case for A/V calls, where codecs and ICE candidates can be specified when
  starting a call)
- session data can be specified during the Jingle `initiate` call
- new `store_in_session` argument in `_parse_elements`, which can be used to avoid a
  race condition when a context element (<description> or <transport>) is being parsed
  for an action while another action happens (like `transport-info`)
- don't set `sid` in `transport_elt` during a `transport-info` action anymore in
  `build_action`: this is specific to Jingle File Transfer and has been moved there

rel 419
author:   Goffi <goffi@goffi.org>
date:     Mon, 15 May 2023 16:23:11 +0200
parents:  524856bd7b19
children: d6837db456fd
.. _developer:

=======================
Developer Documentation
=======================

This documentation is intended for people who want to contribute to the project or work
with its internals; it is not aimed at end-users.

Storage
=======

Since version 0.9, Libervia uses SQLAlchemy_ with its Object-Relational Mapping as a
backend to store persistent data, and Alembic_ is used to handle schema and data
migrations.

SQLite_ is currently the only supported database, but support for other ones (notably
PostgreSQL) is planned, probably during the development of the 0.9 version.

The mapping is done in ``sat.memory.sqla_mapping``, and working with the database is
done through the high level methods found in ``sat.memory.sqla``.

Before the move to SQLAlchemy, there was a strict separation between the database
implementation and the rest of the code. With 0.9, objects mapped to the database can be
used and manipulated directly outside of ``sat.memory.sqla`` to take advantage of
SQLAlchemy's possibilities.

Database state is detected when the backend starts, and the database will be created or
migrated automatically if necessary.

To create a new migration script, ``Alembic`` may be used directly. To do so, be sure to
have an up-to-date database (and a backup in case of trouble), then activate the virtual
environment where Libervia is installed (Alembic needs to access the ORM mapping), go to
the ``sat/memory/migration`` directory, and enter the following command::

    alembic revision --autogenerate -m "some revision message"

This will create a base migration file in the ``versions`` directory. Adapt it to your
needs, try to create both ``upgrade`` and ``downgrade`` methods whenever possible, and
be sure to test the migration in both directions (``alembic upgrade head`` and
``alembic downgrade <previous_revision>``). Please check the Alembic documentation for
more details. A hedged sketch of such a revision file is given at the end of this
document.

.. _SQLAlchemy: https://www.sqlalchemy.org/
.. _Alembic: https://alembic.sqlalchemy.org/
.. _SQLite: https://sqlite.org

Pubsub Cache
============

There is an internal cache for pubsub nodes and items, which is handled in
``plugin_pubsub_cache``. The ``PubsubNode`` and ``PubsubItem`` classes are the ones
mapping the database.

The cache is operated transparently for the end-user: when a pubsub request is made, a
trigger checks if the requested node is cached or must be cached and, if possible,
returns the result directly from the database; otherwise it lets the normal workflow
continue and query the pubsub service.

To save resources, not all nodes are fully cached. When a node is checked, a series of
analysers is evaluated, and the first one matching determines whether the node must be
synchronised or not. Analysers can be registered by any plugin using the
``register_analyser`` method:

.. automethod:: sat.plugins.plugin_pubsub_cache.PubsubCache.register_analyser

If no analyser is found, if ``to_sync`` is false, or if an error happens during caching,
the node won't be synchronised and the pubsub service will always be requested.

Specifying an optional **parser** will store parsed data in addition to the raw XML of
the items. This uses more space, but may be desired for the following reasons:

* the parsing is resource-consuming (a network call or some CPU-intensive operations are
  done);
* it is desirable to do queries on parsed data: the parsed data are stored in a JSON_
  field whose keys may be queried individually.

The raw XML is kept because the cache operates transparently: a plugin may need the raw
data, or a user may be doing a low-level pubsub request.
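For illustration, here is a minimal sketch of how a query on the parsed data mentioned
above could be built with SQLAlchemy. The ``parsed`` attribute and the ``"title"`` key
are assumptions made for the example and must be checked against the actual mapping in
``sat.memory.sqla_mapping``::

    from sqlalchemy import select

    from sat.memory.sqla_mapping import PubsubItem


    def items_with_title(title: str):
        """Build a SELECT for cached items whose parsed data has a matching title.

        The resulting statement is meant to be executed with the session
        handled by ``sat.memory.sqla``.
        """
        return select(PubsubItem).where(
            # ``parsed`` is assumed to be the JSON column holding parsed data;
            # indexing it by key and casting with ``as_string()`` lets the
            # value be compared with a Python string.
            PubsubItem.parsed["title"].as_string() == title
        )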
.. _JSON: https://docs.sqlalchemy.org/en/14/core/type_basics.html#sqlalchemy.types.JSON
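Likewise, for the migration workflow described in the Storage section, the following is
a hedged sketch of what a revision file created by ``alembic revision --autogenerate``
might look like once adapted by hand. The ``items`` table and ``pinned`` column are
purely hypothetical and do not refer to Libervia's actual schema, and the revision
identifiers normally generated by Alembic are omitted::

    from alembic import op
    import sqlalchemy as sa


    def upgrade():
        # Forward migration: add the hypothetical "pinned" column with a
        # server-side default so that existing rows remain valid.
        op.add_column(
            "items",
            sa.Column(
                "pinned", sa.Boolean(), nullable=False, server_default=sa.false()
            ),
        )


    def downgrade():
        # Reverse migration, so that ``alembic downgrade`` can be tested too.
        op.drop_column("items", "pinned")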