# HG changeset patch
# User Sebastien Jodogne
# Date 1587032064 -7200
# Node ID cc5e79450374d8137475d164f93c4745841d5ac3
# Parent e4b0a4d69f4289ce5a44cbc23b29f8f7dbb79ca5
# Parent ef0e45a9b08a330fb458268aa013a6c1441c4126
merge

diff -r ef0e45a9b08a -r cc5e79450374 Sphinx/source/faq/scalability.rst
--- a/Sphinx/source/faq/scalability.rst Tue Apr 14 12:37:30 2020 +0200
+++ b/Sphinx/source/faq/scalability.rst Thu Apr 16 12:14:24 2020 +0200
@@ -3,6 +3,11 @@
 Scalability of Orthanc
 ======================
 
+.. contents::
+
+Overview
+--------
+
 One of the most common question about Orthanc is: *"How many DICOM
 instances can be stored by Orthanc?"*
 
@@ -40,6 +45,12 @@
 especially when it comes to the speed of :ref:`DICOM C-FIND
 <dicom-find>`.
 
+
+.. _scalability-setup:
+
+Recommended setup for best performance
+--------------------------------------
+
 Here is a generic setup that should provide best performance in
 the presence of large databases:
 
@@ -102,3 +113,52 @@
 * If using the :ref:`DICOMweb server plugin <dicomweb>`, consider
   setting configuration option ``StudiesMetadata`` to
   ``MainDicomTags``.
+
+
+.. _scalability-memory:
+
+Controlling memory usage
+------------------------
+
+The absence of memory leaks in Orthanc is verified thanks to `valgrind
+<http://valgrind.org/>`__.
+
+On GNU/Linux systems, you might however `observe a large memory
+consumption
+`__
+in the "resident set size" (VmRSS) of the application, notably if you
+upload multiple large DICOM files using the REST API.
+
+This large memory consumption comes from the fact that the embedded
+HTTP server is heavily multi-threaded, and that many so-called `memory
+arenas <https://sourceware.org/glibc/wiki/MallocInternals>`__ are
+created by the glibc standard library (up to one per thread). As a
+consequence, if each one of the 50 threads in the HTTP server of
+Orthanc (this was the default value in Orthanc <= 1.6.0) allocates at
+some point, say, 50MB, the total memory usage reported as "VmRSS" can
+grow up to 50 threads x 50MB = 2.5GB, even if the Orthanc threads
+properly free all the buffers.
+
+.. highlight:: bash
+
+A possible solution to reduce this memory usage is to ask glibc to
+limit the number of "memory arenas" that are used by the Orthanc
+process. On GNU/Linux, this can be controlled by setting the
+environment variable ``MALLOC_ARENA_MAX``. For instance, the following
+bash command-line would use a single arena that is shared by all the
+threads in Orthanc::
+
+  $ MALLOC_ARENA_MAX=1 ./Orthanc
+
+Obviously, this restrictive setting will use minimal memory, but will
+result in contention among the threads. A good compromise might be to
+use 5 arenas::
+
+  $ MALLOC_ARENA_MAX=5 ./Orthanc
+
+Memory allocation on GNU/Linux is a complex topic. There are other
+options available as environment variables that could also reduce
+memory consumption (for instance, ``MALLOC_MMAP_THRESHOLD_`` would
+bypass arenas for large memory blocks such as DICOM files). Check out
+the `manpage <http://man7.org/linux/man-pages/man3/mallopt.3.html>`__
+of ``mallopt()`` for more information.
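+
+For instance, both variables can be combined on a single
+command-line. This is only a sketch: the threshold of ``65536`` bytes
+(64KB) is an arbitrary illustration that should be tuned for your own
+workload::
+
+  $ MALLOC_ARENA_MAX=5 MALLOC_MMAP_THRESHOLD_=65536 ./Orthanc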
diff -r ef0e45a9b08a -r cc5e79450374 Sphinx/source/users/docker.rst
--- a/Sphinx/source/users/docker.rst Tue Apr 14 12:37:30 2020 +0200
+++ b/Sphinx/source/users/docker.rst Thu Apr 16 12:14:24 2020 +0200
@@ -117,6 +117,12 @@
 
 $ docker run -p 4242:4242 -p 8042:8042 --rm -v /tmp/orthanc.json:/etc/orthanc/orthanc.json:ro jodogne/orthanc
 
+*Remark:* These Docker images automatically set the environment
+variable ``MALLOC_ARENA_MAX`` to ``5`` in order to :ref:`control
+memory usage <scalability-memory>`. This default setting can be
+overridden by providing the option ``-e MALLOC_ARENA_MAX=1`` when
+invoking ``docker run``.
+
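+For instance, the following command-line (a sketch reusing the
+``docker run`` invocation shown above) would override the default and
+start Orthanc with a single memory arena::
+
+  $ docker run -p 4242:4242 -p 8042:8042 --rm -e MALLOC_ARENA_MAX=1 jodogne/orthanc
+
 
 
 .. _docker-compose: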