changeset 385:cc5e79450374

merge
author Sebastien Jodogne <s.jodogne@gmail.com>
date Thu, 16 Apr 2020 12:14:24 +0200
parents e4b0a4d69f42 (diff) ef0e45a9b08a (current diff)
children 801db4e1828c
files
diffstat 2 files changed, 66 insertions(+), 0 deletions(-) [+]
--- a/Sphinx/source/faq/scalability.rst	Tue Apr 14 12:37:30 2020 +0200
+++ b/Sphinx/source/faq/scalability.rst	Thu Apr 16 12:14:24 2020 +0200
@@ -3,6 +3,11 @@
 Scalability of Orthanc
 ======================
 
+.. contents::
+  
+Overview
+--------
+
 One of the most common questions about Orthanc is: *"How many DICOM
 instances can be stored by Orthanc?"*
 
@@ -40,6 +45,12 @@
 especially when it comes to the speed of :ref:`DICOM C-FIND
 <dicom-find>`.
 
+
+.. _scalability-setup:
+
+Recommended setup for best performance
+--------------------------------------
+
 Here is a generic setup that should provide the best performance in
 the presence of large databases:
 
@@ -102,3 +113,52 @@
 * If using the :ref:`DICOMweb server plugin <dicomweb-server-config>`,
   consider setting configuration option ``StudiesMetadata`` to
   ``MainDicomTags``.
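+
+As an illustration, here is a minimal sketch of the corresponding
+configuration fragment for the DICOMweb server plugin, to be merged
+into your Orthanc configuration file (the ``DicomWeb`` section name
+is the one used by the plugin)::
+
+  {
+    "DicomWeb" : {
+      "StudiesMetadata" : "MainDicomTags"
+    }
+  }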
+
+
+.. _scalability-memory:
+
+Controlling memory usage
+------------------------
+
+The absence of memory leaks in Orthanc is verified using `valgrind
+<https://valgrind.org/>`__.
+
+On GNU/Linux systems, you might however `observe a large memory
+consumption
+<https://groups.google.com/d/msg/orthanc-users/qWqxpvCPv8g/47wnYyhOCAAJ>`__
+in the "resident set size" (VmRSS) of the application, notably if you
+upload multiple large DICOM files using the REST API.
+
+This large memory consumption comes from the fact that the embedded
+HTTP server is heavily multi-threaded, and that many so-called `memory
+arenas <https://sourceware.org/glibc/wiki/MallocInternals>`__ are
+created by the glibc standard library (up to one per thread). As a
+consequence, if each one of the 50 threads in the HTTP server of
+Orthanc (this was the default value in Orthanc <= 1.6.0) allocates at
+some point, say, 50MB, the total memory usage reported as "VmRSS" can
+grow up to 50 threads x 50MB = 2.5GB, even if the Orthanc threads
+properly free all the buffers.
+
+.. highlight:: bash
+               
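+As an illustration (assuming a single process named ``Orthanc`` is
+running), the resident set size can be inspected from another
+terminal::
+
+  $ grep VmRSS "/proc/$(pidof Orthanc)/status"
+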
+A possible way to reduce this memory usage is to ask glibc to limit
+the number of "memory arenas" used by the Orthanc process. On
+GNU/Linux, this can be controlled by setting the environment variable
+``MALLOC_ARENA_MAX``. For instance, the following bash command line
+starts Orthanc with a single arena that is shared by all its
+threads::
+
+  $ MALLOC_ARENA_MAX=1 ./Orthanc
+
+Obviously, this restrictive setting minimizes memory usage, but
+results in lock contention among the threads when they allocate
+memory. A good compromise might be to use 5 arenas::
+
+  $ MALLOC_ARENA_MAX=5 ./Orthanc
+
+Memory allocation on GNU/Linux is a complex topic. There are other
+options available as environment variables that could also reduce
+memory consumption (for instance, ``MALLOC_MMAP_THRESHOLD_`` would
+bypass arenas for large memory blocks such as DICOM files). Check out
+the `manpage <http://man7.org/linux/man-pages/man3/mallopt.3.html>`__
+of ``mallopt()`` for more information.
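+
+As an illustration, combining a limit of 5 arenas with a hypothetical
+mmap threshold of 128KB (131072 bytes; the exact value should be
+tuned to your workload)::
+
+  $ MALLOC_ARENA_MAX=5 MALLOC_MMAP_THRESHOLD_=131072 ./Orthanc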
--- a/Sphinx/source/users/docker.rst	Tue Apr 14 12:37:30 2020 +0200
+++ b/Sphinx/source/users/docker.rst	Thu Apr 16 12:14:24 2020 +0200
@@ -117,6 +117,12 @@
 
   $ docker run -p 4242:4242 -p 8042:8042 --rm -v /tmp/orthanc.json:/etc/orthanc/orthanc.json:ro jodogne/orthanc
 
+*Remark:* These Docker images automatically set the environment
+variable ``MALLOC_ARENA_MAX`` to ``5`` in order to :ref:`control
+memory usage <scalability-memory>`. This default setting can be
+overridden by providing the option ``-e MALLOC_ARENA_MAX=1`` when
+invoking ``docker run``.
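+
+For instance, adapting the ``docker run`` command shown above::
+
+  $ docker run -p 4242:4242 -p 8042:8042 --rm -e MALLOC_ARENA_MAX=1 jodogne/orthanc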
+
 
 .. _docker-compose: