
* upgrade to SDK 1.12.4 to benefit from new logging primitives for plugins


* https://orthanc.uclouvain.be/book/plugins/dicomweb.html#retrieving-dicom-resources-from-a-wado-rs-server
  Retrieve shall return the list of Orthanc IDs -> it currently does not!
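
  A hedged sketch of the route (check the exact body against the page above); the answer shown
  is the desired behavior, with a hypothetical response shape:

  curl http://localhost:8043/dicom-web/servers/sample/retrieve -d '{"Uri": "/studies/1.2.3.4"}'

  expected answer: the Orthanc identifiers of the retrieved resources, e.g.
  { "Instances" : [ "a7aec17a-e296e51f-2abe8ad8-bc95d57b-4de269d0" ] }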

* when retrieving frames from multi-frame instances, we should not transcode the whole instance for each frame!
  For a 90MB instance with 88 frames, OHIF is unusable because of this!
  Sample request:
  curl -H "Accept: multipart/related; type=application/octet-stream" http://localhost:8043/dicom-web/studies/1.2.156.112536.1.2143.25015081191207.14610300430.5/series/1.2.156.112536.1.2143.25015081191207.14610300430.6/instances/1.2.156.112536.1.2143.25015081191207.14610309990.44/frames/3 --output /tmp/out.bin
  check for these logs: DICOMweb RetrieveFrames: Transcoding instance a7aec17a-e296e51f-2abe8ad8-bc95d57b-4de269d0 to transfer syntax 1.2.840.10008.1.2.1

  Note: this has been partially handled in 1.18: if no transcoding is needed, we avoid downloading the full instance from Orthanc

  We should very likely implement a cache in the DicomWEB plugin and make sure that, if 3 clients
  request the same instance at the same time, we only trigger one transcoding (see the sketch below).

  No such issue in StoneViewer since Stone downloads the whole file.  However, Stone uses 2-3 workers,
  so the file is read 2-3 times concurrently before it ends up in the Orthanc cache -> we should
  introduce an "is_being_loaded" state in caches and have other consumers wait for it to become
  available (same sketch below).
  https://discourse.orthanc-server.org/t/possible-memory-leak-with-multiframe-dicom-orthanc-ohif/3988/12
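
  A minimal C++ sketch of that idea (hypothetical class, not the plugin's actual code): the first
  consumer of a key performs the expensive work (download + transcoding) while concurrent consumers
  wait on a condition variable; eviction and error handling are deliberately left out:

  #include <condition_variable>
  #include <functional>
  #include <map>
  #include <memory>
  #include <mutex>
  #include <string>

  class SingleFlightCache
  {
  private:
    struct Entry
    {
      bool        ready = false;  // false <=> "is_being_loaded"
      std::string content;        // e.g. the transcoded instance
    };

    std::mutex                                     mutex_;
    std::condition_variable                        loaded_;
    std::map<std::string, std::shared_ptr<Entry> > entries_;

  public:
    // "load" is the expensive operation; it runs at most once per key,
    // whatever the number of concurrent callers.
    std::string Access(const std::string& key,
                       const std::function<std::string()>& load)
    {
      std::shared_ptr<Entry> entry;
      bool isLoader = false;

      {
        std::unique_lock<std::mutex> lock(mutex_);
        auto found = entries_.find(key);
        if (found == entries_.end())
        {
          entry = std::make_shared<Entry>();
          entries_[key] = entry;
          isLoader = true;  // this caller will do the work
        }
        else
        {
          entry = found->second;
          // "is_being_loaded": block until the loader flags the entry as ready
          loaded_.wait(lock, [&]() { return entry->ready; });
        }
      }

      if (isLoader)
      {
        std::string content = load();  // expensive, done outside the lock

        std::unique_lock<std::mutex> lock(mutex_);
        entry->content = content;
        entry->ready = true;
        loaded_.notify_all();
      }

      return entry->content;
    }
  };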



* Implement capabilities: https://www.dicomstandard.org/using/dicomweb/capabilities/
  from https://groups.google.com/d/msgid/orthanc-users/c60227f2-c6da-4fd9-9b03-3ce9bf7d1af5n%40googlegroups.com?utm_medium=email&utm_source=footer
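
  For reference, the "Retrieve Capabilities" transaction of DICOM PS3.18 answers an OPTIONS request
  on the service root with a WADL document describing the supported transactions, i.e. something
  like (hypothetical port):

  curl -X OPTIONS -H "Accept: application/vnd.sun.wadl+xml" http://localhost:8043/dicom-web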

* /rendered at study level shall return all instances, not only one (https://groups.google.com/g/orthanc-users/c/uFWanYhV8Fs/m/ezi1iXCXCAAJ)
  Check /rendered at series level too.
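
  Quick reproduction (hypothetical study UID; every instance of the study should come back as one
  part of the multipart answer, currently only one does):

  curl -H "Accept: multipart/related; type=image/jpeg" http://localhost:8043/dicom-web/studies/1.2.3.4/rendered --output /tmp/rendered.bin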

* Implement serialization of DicomWeb jobs

* Add support for application/zip in /dicom-web/studies/ (aka sup 211: https://www.dicomstandard.org/docs/librariesprovider2/dicomdocuments/news/ftsup/docs/sups/sup211.pdf?sfvrsn=9fe9edae_2)
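
  Once implemented, something like this (hypothetical study UID) should answer with a ZIP archive
  of the study:

  curl -H "Accept: application/zip" http://localhost:8043/dicom-web/studies/1.2.3.4 --output /tmp/study.zip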

* Support private tags in search fields:
  https://discourse.orthanc-server.org/t/dicomweb-plugin-exception-of-unknown-dicom-tag-for-private-data-element-tags-while-using-query-parameters/3998
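
  The failing case is a QIDO-RS query filtering on a private tag, e.g. (hypothetical private
  tag (0009,1001)):

  curl "http://localhost:8043/dicom-web/studies?00091001=SOMEVALUE"

  -> currently raises an exception about an unknown DICOM tag instead of filtering.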
  

* Based on this discussion: https://discourse.orthanc-server.org/t/series-metadata-retrieval-is-very-long-even-with-configuration-optimization/3389
  optimize the studies/.../series/.../metadata route when "SeriesMetadata" is set
  to "MainDicomTags" and "ExtraMainDicomTags" is configured according to the recommendation
  (from this setup: https://bitbucket.org/osimis/orthanc-setup-samples/src/master/docker/stone-viewer/docker-compose.yml).
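
  As a reminder, the configuration meant here is roughly the following (a hedged sketch; the
  authoritative tag list is in the stone-viewer setup linked above):

  {
    "DicomWeb" : {
      "SeriesMetadata" : "MainDicomTags"
    },
    "ExtraMainDicomTags" : {
      "Instance" : [ "SOPClassUID", "Rows", "Columns", "BitsAllocated", "PixelSpacing" ]
    }
  }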
  
  with a 600-instance series with SQLite - all timings are performed without verbose logs!:
  - time curl http://localhost:8043/dicom-web/studies/1.2.276.0.7230010.3.1.2.1215942821.4756.1664826045.3529/series/1.2.276.0.7230010.3.1.3.1215942821.4756.1664833048.11984/metadata > /dev/null 
    -> 883ms in Full mode
    -> 565ms in MainDicomTags mode     

    -> 545ms in Full mode with 1 worker
    -> 335ms in Full mode with 2 workers
    -> 267ms in Full mode with 3 workers
    -> 270ms in Full mode with 4 workers
    -> 270ms in Full mode with 8 workers


  - note that all measurements have been performed on a DB with a single series!  We should repeat
    them with a more realistic DB

  with a 3-series study (11 + 1233 + 598 instances)
  time curl http://localhost:8043/dicom-web/studies/1.2.276.0.7230010.3.1.2.1215942821.4756.1664826045.3529/metadata > /dev/null
    -> 1,355 ms in Full mode before using worker threads
    ->   700 ms in Full mode with 4 workers