comparison Sphinx/source/plugins/object-storage.rst @ 499:d255e02eb89d

updated object-storage doc for 1.0.0
author Alain Mazy <alain@mazy.be>
date Tue, 01 Sep 2020 18:08:45 +0200
parents 5ea70331c0be
children 4481882d9c83
Cloud Object Storage plugins
============================

.. contents::

Release notes
-------------

Release notes are available `here
<https://hg.orthanc-server.com/orthanc-object-storage/file/default/NEWS>`__

Introduction
------------

Osimis freely provides the `source code
  "AwsS3Storage" : {
    "BucketName": "test-orthanc-s3-plugin",
    "Region" : "eu-central-1",
    "AccessKey" : "AKXXX",
    "SecretKey" : "RhYYYY",
    "Endpoint": "",                           // custom endpoint
    "ConnectionTimeout": 30,                  // connection timeout in seconds
    "RequestTimeout": 1200,                   // request timeout in seconds (max time to upload/download a file)
    "MigrationFromFileSystemEnabled": false,  // see below
    "StorageStructure": "flat"                // see below
  }

The **Endpoint** configuration is used when accessing an S3-compatible cloud provider. For instance, here is a configuration to store data on Scaleway::

  "AwsS3Storage" : {

Sample configuration::

  "AzureBlobStorage" : {
    "ConnectionString": "DefaultEndpointsProtocol=https;AccountName=xxxxxxxxx;AccountKey=yyyyyyyy===;EndpointSuffix=core.windows.net",
    "ContainerName" : "test-orthanc-storage-plugin",
    "MigrationFromFileSystemEnabled": false,  // see below
    "StorageStructure": "flat"                // see below
  }


Google Storage plugin
^^^^^^^^^^^^^^^^^^^^^

Sample configuration::

  "GoogleCloudStorage" : {
    "ServiceAccountFile": "/path/to/googleServiceAccountFile.json",
    "BucketName": "test-orthanc-storage-plugin",
    "MigrationFromFileSystemEnabled": false,  // see below
    "StorageStructure": "flat"                // see below
  }


Migration & Storage structure
-----------------------------

The **StorageStructure** configuration allows you to select the way objects are organized
within the storage (``flat`` or ``legacy``).
Unlike a traditional file system, in which Orthanc uses two levels
of folders, object storages usually have no limit on the number of files per folder;
therefore, all objects are stored at the root level of the object storage. This is the
default ``flat`` behaviour. Note that, in ``flat`` mode, an extension ``.dcm`` or ``.json``
is added to the filename, which is not the case in ``legacy`` mode.

The ``legacy`` behaviour mimics the Orthanc file-system convention. This is helpful
when migrating your data from a file system to an object storage, since you can copy the whole
file hierarchy as-is.

Note that you cannot change this configuration once the first files have been uploaded to Orthanc.
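
As a rough illustration (using a hypothetical attachment UUID, and assuming the folder names are
derived from the leading characters of the UUID as in the standard Orthanc file-system layout),
the same attachment could end up under object keys such as::

  flat:    0a64b1de-8f2a-4c3b-9e11-2d7c5a9f3b42.dcm
  legacy:  0a/64/0a64b1de-8f2a-4c3b-9e11-2d7c5a9f3b42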

The **MigrationFromFileSystemEnabled** configuration has been introduced to allow Orthanc to
continue working while you're migrating your data from the file system to the object storage.
While this option is enabled, Orthanc will store all new files in the object storage but will
try to read/delete files from both the file system and the object storage.

This option can be disabled as soon as all files have been copied from the file system to the
object storage. Note that Orthanc does not copy the files from one storage to the other; you'll
have to use a standard ``sync`` command from the object-storage provider.
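
For example, during such a migration, a configuration along these lines could be used (a minimal
sketch with placeholder values; only the options documented above are assumed)::

  "AwsS3Storage" : {
    "BucketName": "my-orthanc-bucket",        // placeholder
    "Region" : "eu-central-1",
    "AccessKey" : "AKXXX",                    // placeholder
    "SecretKey" : "RhYYYY",                   // placeholder
    "MigrationFromFileSystemEnabled": true,   // read/delete from both storages while the copy is running
    "StorageStructure": "legacy"              // keep the same layout as the existing file-system storage
  }

Once the copy has completed (e.g. with the provider's ``sync`` tool) and has been verified,
``MigrationFromFileSystemEnabled`` can be switched back to ``false``.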


Sample setups
-------------

.. highlight:: json

Here's a sample configuration of the ``StorageEncryption`` section of the plugins::

  {
    "GoogleCloudStorage" : {
      "StorageEncryption" : {
        "Enable": true,
        "MasterKey": [3, "/path/to/master.key"],  // key id - path to the base64 encoded key
        "PreviousMasterKeys" : [
          [1, "/path/to/previous1.key"],
          [2, "/path/to/previous2.key"]
        ],
        "MaxConcurrentInputSize" : 1024           // size in MB
      }
    }
  }

**MaxConcurrentInputSize**: Since the memory used during encryption/decryption can grow to a bit more
than twice the size of the input, we want to limit the number of threads doing concurrent processing according