.. _object-storage:


Cloud Object Storage plugins
============================

.. contents::

Release notes
-------------

Release notes are available `here
<https://hg.orthanc-server.com/orthanc-object-storage/file/default/NEWS>`__.

Introduction
------------

Osimis freely provides the `source code
<https://hg.orthanc-server.com/orthanc-object-storage/file/default/>`__ of 3 plugins
to store the Orthanc files in `Object Storage <https://en.wikipedia.org/wiki/Object_storage>`__
offered by the 3 main providers: `AWS <https://aws.amazon.com/s3/>`__,
`Azure <https://azure.microsoft.com/en-us/services/storage/blobs/>`__ &
`Google Cloud <https://cloud.google.com/storage>`__.

Storing the Orthanc files in object storage and the SQL index in a
managed database allows you to run a stateless Orthanc that does
not store any data in its local file system, which is highly recommended
when deploying an application in the cloud.


Pre-compiled binaries
---------------------

These plugins are used to interface Orthanc with commercial and
proprietary cloud services that you agree to pay for. As a consequence,
the Orthanc project doesn't freely provide pre-compiled binaries for
Docker, Windows, Linux or OS X. These pre-compiled binaries do exist,
but are reserved to the companies that have subscribed to a
`professional support contract
<https://www.osimis.io/en/services.html#cloud-plugins>`__ with
Osimis. Although you are obviously free to compile these plugins by
yourself (instructions are given below), purchasing such support
contracts makes the Orthanc project sustainable in the long term, to
the benefit of the worldwide community of medical imaging.


Compilation
-----------

.. highlight:: text

The procedure to compile the plugins is quite similar to that for the
:ref:`core of Orthanc <compiling>`, although they usually require
some prerequisites. The documented procedure has been tested only
on a Debian Buster machine.

The compilation of each plugin produces a shared library that contains
the plugin.
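
The build commands below assume a layout in which the plugin source code and the
out-of-source build directories are siblings (a minimal sketch; the paths are
arbitrary and can be adapted to your setup)::

  $ hg clone https://hg.orthanc-server.com/orthanc-object-storage/   # plugin source code
  $ mkdir -p build                                                   # out-of-source builds go below
  $ ls
  build  orthanc-object-storage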


AWS S3 plugin
^^^^^^^^^^^^^

Prerequisites: Compile the AWS C++ SDK::

  $ mkdir ~/aws
  $ cd ~/aws
  $ git clone https://github.com/aws/aws-sdk-cpp.git
  $
  $ mkdir -p ~/aws/builds/aws-sdk-cpp
  $ cd ~/aws/builds/aws-sdk-cpp
  $ cmake -DBUILD_ONLY="s3;transfer" ~/aws/aws-sdk-cpp
  $ make -j 4
  $ make install

Prerequisites: Install `vcpkg <https://github.com/Microsoft/vcpkg>`__ dependencies::

  $ ./vcpkg install cryptopp

Compile::

  $ mkdir -p build/aws
  $ cd build/aws
  $ cmake -DCMAKE_TOOLCHAIN_FILE=[vcpkg root]/scripts/buildsystems/vcpkg.cmake ../../orthanc-object-storage/Aws
  $ make

**NB:** If you don't want to use vcpkg, you can use the following
command (this syntax is not compatible with Ninja yet)::

  $ cmake -DCMAKE_BUILD_TYPE=Debug -DUSE_VCPKG_PACKAGES=OFF -DUSE_SYSTEM_GOOGLE_TEST=OFF ../../orthanc-object-storage/Aws
  $ make

Crypto++ must be installed (on Ubuntu, run ``sudo apt install libcrypto++-dev``).

Azure Blob Storage plugin
^^^^^^^^^^^^^^^^^^^^^^^^^

Prerequisites: Install `vcpkg <https://github.com/Microsoft/vcpkg>`__ dependencies::

  $ ./vcpkg install cpprestsdk

Compile::

  $ mkdir -p build/azure
  $ cd build/azure
  $ cmake -DCMAKE_TOOLCHAIN_FILE=[vcpkg root]/scripts/buildsystems/vcpkg.cmake ../../orthanc-object-storage/Azure
  $ make

Google Storage plugin
^^^^^^^^^^^^^^^^^^^^^

Prerequisites: Install `vcpkg <https://github.com/Microsoft/vcpkg>`__ dependencies::

  $ ./vcpkg install google-cloud-cpp
  $ ./vcpkg install cryptopp

Compile::

  $ mkdir -p build/google
  $ cd build/google
  $ cmake -DCMAKE_TOOLCHAIN_FILE=[vcpkg root]/scripts/buildsystems/vcpkg.cmake ../../orthanc-object-storage/google
  $ make


Configuration
-------------

.. highlight:: json

AWS S3 plugin
^^^^^^^^^^^^^

Sample configuration::

  "AwsS3Storage" : {
    "BucketName": "test-orthanc-s3-plugin",
    "Region" : "eu-central-1",
    "AccessKey" : "AKXXX",
    "SecretKey" : "RhYYYY",
    "Endpoint": "",                           // custom endpoint
    "ConnectionTimeout": 30,                  // connection timeout in seconds
    "RequestTimeout": 1200,                   // request timeout in seconds (max time to upload/download a file)
    "RootPath": "",                           // see below
    "MigrationFromFileSystemEnabled": false,  // see below
    "StorageStructure": "flat",               // see below
    "VirtualAddressing": true                 // see the section related to MinIO
  }

The **Endpoint** configuration is used when accessing an S3-compatible cloud provider.
For instance, here is a configuration to store data on Scaleway::

  "AwsS3Storage" : {
    "BucketName": "test-orthanc",
    "Region": "fr-par",
    "AccessKey": "XXX",
    "SecretKey": "YYY",
    "Endpoint": "s3.fr-par.scw.cloud"
  }
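
.. highlight:: bash

Before starting Orthanc, you may check that the credentials and the custom endpoint
are valid by listing the bucket with the AWS CLI (a minimal sketch, assuming the AWS
CLI is installed and configured with the same credentials; the bucket, region and
endpoint are those of the sample above)::

  $ aws s3 ls s3://test-orthanc --region fr-par --endpoint-url https://s3.fr-par.scw.cloud

.. highlight:: json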


Emulation of AWS S3 using MinIO
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. highlight:: bash

The `MinIO project <https://min.io/>`__ can be used to emulate AWS S3
for local testing/prototyping. Here is a sample command to start a
MinIO server on your local computer using Docker (of course, make sure
to set different credentials)::

  $ docker run -p 9000:9000 \
    -e "MINIO_REGION=eu-west-1" \
    -e "MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE" \
    -e "MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MNG/bPxRfiCYEXAMPLEKEY" \
    minio/minio server /data

.. highlight:: json

Note that ``MINIO_REGION`` must be set to an arbitrary region that
is supported by AWS S3.

You can then open the URL `http://localhost:9000/
<http://localhost:9000/>`__ with your Web browser to create a bucket,
say ``my-sample-bucket``.
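
.. highlight:: bash

The bucket can also be created from the command line with the MinIO client
`mc <https://github.com/minio/mc>`__ (a minimal sketch, assuming ``mc`` is
installed and the server above is running with the sample credentials)::

  # register the local MinIO server under an arbitrary alias
  $ mc alias set local http://localhost:9000 AKIAIOSFODNN7EXAMPLE wJalrXUtnFEMI/K7MNG/bPxRfiCYEXAMPLEKEY

  # create the bucket that will be referenced in the Orthanc configuration
  $ mc mb local/my-sample-bucket

.. highlight:: json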

Here is a corresponding full configuration for Orthanc::

  {
    "Plugins" : [ <...> ],
    "AwsS3Storage" : {
      "BucketName": "my-sample-bucket",
      "Region" : "eu-west-1",
      "Endpoint": "http://localhost:9000/",
      "AccessKey": "AKIAIOSFODNN7EXAMPLE",
      "SecretKey": "wJalrXUtnFEMI/K7MNG/bPxRfiCYEXAMPLEKEY",
      "VirtualAddressing" : false
    }
  }

Note that the ``VirtualAddressing`` option must be set to ``false``
for such a `local setup with MinIO to work
<https://github.com/aws/aws-sdk-cpp/issues/1425>`__. This option is
**not** available in releases <= 1.1.0 of the AWS S3 plugin.

**Important:** If you get the cryptic error message
``SignatureDoesNotMatch The request signature we calculated does not
match the signature you provided. Check your key and signing
method.``, this most probably indicates that your access key or your
secret key doesn't match the credentials that were used while starting
the MinIO server.


Azure Blob Storage plugin
^^^^^^^^^^^^^^^^^^^^^^^^^

Sample configuration::

  "AzureBlobStorage" : {
    "ConnectionString": "DefaultEndpointsProtocol=https;AccountName=xxxxxxxxx;AccountKey=yyyyyyyy===;EndpointSuffix=core.windows.net",
    "ContainerName" : "test-orthanc-storage-plugin",
    "RootPath": "",                           // see below
    "MigrationFromFileSystemEnabled": false,  // see below
    "StorageStructure": "flat"                // see below
  }
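
.. highlight:: bash

The value of ``ConnectionString`` can for instance be retrieved with the Azure CLI
(a minimal sketch; the account name matches the sample above and the resource group
is a placeholder to adapt to your deployment)::

  $ az storage account show-connection-string \
      --name xxxxxxxxx \
      --resource-group my-resource-group

.. highlight:: json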


Google Storage plugin
^^^^^^^^^^^^^^^^^^^^^

Sample configuration::

  "GoogleCloudStorage" : {
    "ServiceAccountFile": "/path/to/googleServiceAccountFile.json",
    "BucketName": "test-orthanc-storage-plugin",
    "RootPath": "",                           // see below
    "MigrationFromFileSystemEnabled": false,  // see below
    "StorageStructure": "flat"                // see below
  }
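
.. highlight:: bash

The ``ServiceAccountFile`` is the JSON key of a service account that has access to the
bucket. Such a key can for instance be generated with the ``gcloud`` CLI (a minimal
sketch; the service account and project names are placeholders)::

  $ gcloud iam service-accounts keys create /path/to/googleServiceAccountFile.json \
      --iam-account=my-orthanc-sa@my-project.iam.gserviceaccount.com

.. highlight:: json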


Migration & Storage structure
-----------------------------

The **StorageStructure** configuration allows you to select the way objects are organized
within the storage (``flat`` or ``legacy``).
Unlike the traditional file system, in which Orthanc uses 2 levels
of folders, object storages usually have no limit on the number of files per folder, and
therefore all objects are stored at the root level of the object storage. This is the
default ``flat`` behaviour. Note that, in the ``flat`` mode, an extension ``.dcm`` or ``.json``
is added to the filename, which is not the case in the ``legacy`` mode.

The ``legacy`` behaviour mimics the Orthanc file-system convention. This is actually helpful
when migrating your data from a file system to an object storage, since you can copy the whole file
hierarchy as is.

The **RootPath** allows you to store the files in a specific folder instead of at the root level of the
object storage. Note: it shall not start with a ``/``.

Note that you cannot change these configurations once you've uploaded the first files in Orthanc.

The **MigrationFromFileSystemEnabled** configuration has been introduced to allow Orthanc to continue working
while you're migrating your data from the file system to the object storage. While this option is enabled,
Orthanc will store all new files into the object storage but will try to read/delete files
from both the file system and the object storage.

This option can be disabled as soon as all files have been copied from the file system to the
object storage. Note that Orthanc does not copy the files from one storage to the other; you'll
have to use a standard ``sync`` command from the object-storage provider.
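
.. highlight:: bash

For instance, with the AWS CLI and the ``legacy`` storage structure, the existing Orthanc
file hierarchy can be copied as is to the bucket (a minimal sketch, assuming the AWS CLI
is configured and that ``/var/lib/orthanc/db`` is the current ``StorageDirectory`` of
Orthanc)::

  # copy the existing attachments to the bucket (empty RootPath, legacy structure)
  $ aws s3 sync /var/lib/orthanc/db s3://test-orthanc-s3-plugin

.. highlight:: json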

A migration script from the file system to Azure Blob Storage is available courtesy of `Steve Hawes
<https://github.com/jodogne/OrthancContributed/blob/master/Scripts/Migration/2020-09-08-TransferToAzure.sh>`__.


Sample setups
-------------

You'll find sample deployments and more info in the `Orthanc Setup Samples repository
<https://bitbucket.org/osimis/orthanc-setup-samples/src/master/#markdown-header-for-osimisorthanc-pro-image-users>`__.

Performance
-----------

You'll find some performance comparisons between VM SSDs and object storage `here
<https://bitbucket.org/osimis/orthanc-setup-samples/src/master/docker/performance-tests/>`__.


Client-side encryption
----------------------

Although all cloud providers already provide encryption at rest, the plugins provide
an optional layer of client-side encryption. It is very important that you understand
the scope and benefits of this additional layer of encryption.

Rationale
^^^^^^^^^

The encryption at rest provided by cloud providers is basically comparable to file-system disk encryption:
if someone has access to the disk, they won't have access to your data without the encryption key.

With cloud encryption at rest only, if someone has access to the "api-key" of your storage, or if one
of your admins inadvertently makes your storage public, `PHI <https://en.wikipedia.org/wiki/Protected_health_information>`__ will leak.

Once you use client-side encryption, you'll basically store packets of meaningless bytes on the cloud infrastructure.
So, if an "api-key" leaks or if the storage is misconfigured, packets of bytes will leak, but not PHI, since
no one will be able to decrypt them.

Another advantage is that these packets of bytes might not be considered as PHI anymore, which might
help you meet your local regulations (please check your local regulations).

However, note that, if you're running entirely in a cloud environment, your decryption keys will still
be stored on the cloud infrastructure (VM disks, process RAM) and an attacker could still eventually gain access to these keys.

If Orthanc is running in your infrastructure with the index DB on your infrastructure, and the files are stored in the cloud,
the master keys will remain on your infrastructure only, and there's no way the data stored in the cloud could be decrypted outside your infrastructure.

Also note that, although the cloud providers also provide client-side encryption, we, as an open-source project,
wanted to provide our own implementation over which you'll have full control and extension capabilities.
This also allows us to implement the same logic on all cloud providers.

Our encryption is based on well-known standards (see below). Since it is documented and the source code is open-source,
feel free to have your security expert review it before using it in a production environment.

Technical details
^^^^^^^^^^^^^^^^^

Orthanc saves 2 kinds of files: DICOM files and JSON summaries of DICOM files. Both contain PHI.

When configuring the plugin, you'll have to provide a **Master Key**, which we can also call the **Key Encryption Key (KEK)**.

For each file being saved, the plugin will generate a new **Data Encryption Key (DEK)**. This DEK, encrypted with the KEK, will be prepended to the file.

If, at any point, your KEK leaks or you want to rotate your KEKs, you'll be able to use a new one to encrypt new files that are being added,
and still use the old ones to decrypt data. You could then eventually start a side script to remove usages of the leaked/obsolete KEKs.

To summarize:

- We use `Crypto++ <https://www.cryptopp.com/>`__ to perform all encryptions.
- All keys (KEK and DEK) are AES-256 keys.
- DEKs and IVs are encrypted with the KEK using a CTR block cipher with a null IV.
- Data is encrypted with the DEK using a GCM block cipher that also performs an integrity check on the whole file.

The format of the data stored on disk is therefore the following:

- **VERSION HEADER**: 2 bytes: identifies the structure of the following data (currently ``A1``)
- **MASTER KEY ID**: 4 bytes: a numerical ID of the KEK that was used to encrypt the DEK
- **EIV**: 32 bytes: IV used by the DEK for data encryption; encrypted by the KEK
- **EDEK**: 32 bytes: the DEK encrypted by the KEK
- **CIPHER TEXT**: variable length: the DICOM/JSON file encrypted by the DEK
- **TAG**: 16 bytes: integrity check performed on the whole encrypted file (including header, master key id, EIV and EDEK)
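
.. highlight:: bash

The fixed-size fields therefore add 2 + 4 + 32 + 32 = 70 bytes in front of the cipher
text, plus a 16-byte tag after it. As an illustration, the version header and the master
key ID of a stored object can be checked by dumping its first 6 bytes (a sketch, assuming
the encrypted object has been downloaded to a local file)::

  $ xxd -l 6 encrypted-object   # 2-byte version header followed by the 4-byte master key id

.. highlight:: json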

Configuration
^^^^^^^^^^^^^

.. highlight:: text

AES keys shall be 32 bytes long (256 bits) and encoded in base64. Here's a sample OpenSSL command to generate such a key::

  openssl rand -base64 -out /tmp/test.key 32

Each key must have a unique id that is a uint32 number.
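
For instance, the three key files referenced in the sample configuration below can be
generated as follows (a sketch; the paths and the key ids 1, 2 and 3 are arbitrary)::

  openssl rand -base64 -out /path/to/previous1.key 32
  openssl rand -base64 -out /path/to/previous2.key 32
  openssl rand -base64 -out /path/to/master.key 32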

.. highlight:: json

Here's a sample configuration of the ``StorageEncryption`` section of the plugins::

  {
    "GoogleCloudStorage" : {
      "StorageEncryption" : {
        "Enable": true,
        "MasterKey": [3, "/path/to/master.key"],  // key id - path to the base64-encoded key
        "PreviousMasterKeys" : [
          [1, "/path/to/previous1.key"],
          [2, "/path/to/previous2.key"]
        ],
        "MaxConcurrentInputSize" : 1024           // size in MB
      }
    }
  }

**MaxConcurrentInputSize**: Since the memory used during encryption/decryption can grow up to a bit more
than 2 times the input, we want to limit the number of threads doing concurrent processing according
to the available memory instead of the number of concurrent threads. Therefore, if you're currently
ingesting small files, you can have a lot of threads working together while, if you're ingesting large
files, threads might have to wait before receiving a "slot" to access the encryption module.