orthanc-databases: NOTES @ 469:302f3c2b1c34
merge pg-transactions -> mainline
author:   Alain Mazy <am@osimis.io>
date:     Mon, 05 Feb 2024 09:48:11 +0100
parents:  0a8b34e3a337
children: bf4b9c7cf338

Resources:
*********
- PG transaction modes explained: https://www.postgresql.org/files/developer/concurrency.pdf
- Isolation level explained (PG + MySQL): https://amirsoleimani.medium.com/understanding-database-isolation-level-via-examples-mysql-and-postgres-a86b5502d404
- Message queuing in PG: https://www.crunchydata.com/blog/message-queuing-using-native-postgresql
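
The benchmarks below contrast the old serializable mode with the new read-committed mode. As a quick reminder, selecting the isolation level is plain standard PostgreSQL, nothing Orthanc-specific; a minimal sketch:

```sql
-- Select the isolation level for a single transaction:
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- ... statements ...
COMMIT;

-- Or make it the default for the whole session:
SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Inspect the level currently in effect:
SHOW transaction_isolation;
```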

Create and delete instances: internals
**************************************

isNewInstance = CreateInstance(...)

if (!isNewInstance && overwriteInstances)
    DeleteResource(instance)
    -> ClearDeletedFiles(manager);
           DELETE FROM DeletedFiles       -- not a temporary table in PostgreSQL, but expected to be empty before and after each transaction -> it appears empty in a snapshot of the DB taken in READ COMMITTED mode!
       ClearDeletedResources(manager);
           DELETE FROM DeletedResources   -- not a temporary table in PostgreSQL, but expected to be empty before and after each transaction!

       DELETE FROM RemainingAncestor      -- not a temporary table in PostgreSQL, but expected to be empty before and after each transaction!
       DELETE FROM Resources WHERE internalId=..
       -> the delete cascades to the MainDicomTags, the Metadata and the AttachedFiles
          -> this fires the SQL trigger AttachedFileDeletedFunc:
                 INSERT INTO DeletedFiles VALUES
                    (old.uuid, old.filetype, old.compressedSize,
                     old.uncompressedSize, old.compressionType,
                     old.uncompressedHash, old.compressedHash);
                 RETURN NULL;
          -> this fires the SQL trigger ResourceDeletedFunc:
                 INSERT INTO DeletedResources VALUES (old.resourceType, old.publicId);
                 IF EXISTS (SELECT 1 FROM Resources WHERE parentId = old.parentId) THEN
                     -- Signal that the deleted resource has a remaining parent
                     -- (a parent that must not be deleted but whose LastUpdate must be updated)
                     INSERT INTO RemainingAncestor
                         SELECT resourceType, publicId FROM Resources WHERE internalId = old.parentId;
                 ELSE
                     -- Delete a parent resource when its unique child is deleted
                     DELETE FROM Resources WHERE internalId = old.parentId;
                 END IF;

    SELECT * FROM RemainingAncestor
    -> SignalRemainingAncestor()   // there is at most 1 remaining ancestor
       -> ServerIndex::TransactionContext::SignalRemainingAncestor()
          -> stores remainingType and remainingPublicId (used in StatelessDatabaseOperations::DeleteResource to build the REST response of /delete,
             and to update the LastUpdate of all parents (only when deleted through /delete))

    SignalDeletedFiles(output, manager);
        SELECT * FROM DeletedFiles
        -> SignalDeletedAttachment()
           -> ServerIndex::TransactionContext::SignalAttachmentDeleted()
              -> pendingFilesToRemove_.push_back(FileToRemove(info))  (the files themselves are deleted in CommitFilesToRemove, called from ServerIndex::TransactionContext::Commit)

    SignalDeletedResources(output, manager);
        SELECT resourceType, publicId FROM DeletedResources
        -> SignalDeletedResource()
           -> emits the DeletedResource event (Lua)


if (!CreateInstance(...))
    Error: "No new instance while overwriting; this should not happen"

if isNewInstance -> LogChange
if isNewSeries -> LogChange
....

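The trigger functions above are wired to their tables with standard CREATE TRIGGER statements; a hedged sketch of that wiring (the trigger names are assumed, the authoritative DDL lives in the plugin's SQL install scripts):

```sql
-- Sketch only. AttachedFileDeletedFunc fires for each AttachedFiles row
-- removed by the cascading delete and records the file so that it can be
-- removed from the storage area after commit.
CREATE TRIGGER AttachedFileDeleted
AFTER DELETE ON AttachedFiles
FOR EACH ROW
EXECUTE PROCEDURE AttachedFileDeletedFunc();

-- ResourceDeletedFunc records the deleted resource and either registers the
-- remaining ancestor or deletes a parent left without any child.
CREATE TRIGGER ResourceDeleted
AFTER DELETE ON Resources
FOR EACH ROW
EXECUTE PROCEDURE ResourceDeletedFunc();
```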
Sample SQL code that you can execute in DBeaver to test new functions/procedures:

CreateInstance
************************************************************************

CREATE OR REPLACE FUNCTION CreateInstance(
    IN patient_public_id TEXT,
    IN study_public_id TEXT,
    IN series_public_id TEXT,
    IN instance_public_id TEXT,
    OUT is_new_patient BIGINT,
    OUT is_new_study BIGINT,
    OUT is_new_series BIGINT,
    OUT is_new_instance BIGINT,
    OUT patient_internal_id BIGINT,
    OUT study_internal_id BIGINT,
    OUT series_internal_id BIGINT,
    OUT instance_internal_id BIGINT) AS $body$

BEGIN
    is_new_patient := 1;
    is_new_study := 1;
    is_new_series := 1;
    is_new_instance := 1;

    BEGIN
        INSERT INTO "resources" VALUES (DEFAULT, 0, patient_public_id, NULL);
    EXCEPTION
        WHEN unique_violation THEN
            is_new_patient := 0;
    END;
    -- also locks the resource and its parent to prevent deletion while we complete this transaction
    SELECT internalid INTO patient_internal_id FROM "resources"
        WHERE publicId = patient_public_id AND resourcetype = 0 FOR UPDATE;

    BEGIN
        INSERT INTO "resources" VALUES (DEFAULT, 1, study_public_id, patient_internal_id);
    EXCEPTION
        WHEN unique_violation THEN
            is_new_study := 0;
    END;
    -- also locks the resource and its parent to prevent deletion while we complete this transaction
    SELECT internalid INTO study_internal_id FROM "resources"
        WHERE publicId = study_public_id AND resourcetype = 1 FOR UPDATE;

    BEGIN
        INSERT INTO "resources" VALUES (DEFAULT, 2, series_public_id, study_internal_id);
    EXCEPTION
        WHEN unique_violation THEN
            is_new_series := 0;
    END;
    -- also locks the resource and its parent to prevent deletion while we complete this transaction
    SELECT internalid INTO series_internal_id FROM "resources"
        WHERE publicId = series_public_id AND resourcetype = 2 FOR UPDATE;

    BEGIN
        INSERT INTO "resources" VALUES (DEFAULT, 3, instance_public_id, series_internal_id);
    EXCEPTION
        WHEN unique_violation THEN
            is_new_instance := 0;
    END;
    -- also locks the resource and its parent to prevent deletion while we complete this transaction
    SELECT internalid INTO instance_internal_id FROM "resources"
        WHERE publicId = instance_public_id AND resourcetype = 3 FOR UPDATE;

    IF is_new_instance > 0 THEN
        -- Move the patient to the end of the recycling order.
        PERFORM PatientAddedOrUpdated(patient_internal_id, 1);
    END IF;
END;

$body$ LANGUAGE plpgsql;
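
Each BEGIN/EXCEPTION sub-block above opens a subtransaction (a savepoint) just to swallow a possible unique_violation. On PostgreSQL >= 9.5 the same idea can be sketched without subtransactions using INSERT ... ON CONFLICT; this is only an illustration (it assumes "resources" has a unique constraint covering the public id), not the plugin's actual code:

```sql
-- Illustration only, shown for the patient level.
INSERT INTO "resources" VALUES (DEFAULT, 0, patient_public_id, NULL)
    ON CONFLICT DO NOTHING
    RETURNING internalid INTO patient_internal_id;

IF patient_internal_id IS NULL THEN
    is_new_patient := 0;  -- the row already existed, the INSERT was skipped
END IF;

-- The locking SELECT is still needed in both variants:
SELECT internalid INTO patient_internal_id FROM "resources"
    WHERE publicId = patient_public_id AND resourcetype = 0 FOR UPDATE;
```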


DO $$
DECLARE
    result record;
BEGIN
    DELETE FROM "resources";

    SELECT * INTO result FROM CreateInstance('patient1', 'study1', 'series1', 'instance1');

    RAISE NOTICE 'Value patientInternalId: %, is_new: %', result.patient_internal_id, result.is_new_patient;
    RAISE NOTICE 'Value studyInternalId: %, is_new: %', result.study_internal_id, result.is_new_study;
    RAISE NOTICE 'Value seriesInternalId: %, is_new: %', result.series_internal_id, result.is_new_series;
    RAISE NOTICE 'Value instanceInternalId: %, is_new: %', result.instance_internal_id, result.is_new_instance;
    RAISE NOTICE '--------------';

    -- second call: everything already exists except the new instance
    SELECT * INTO result FROM CreateInstance('patient1', 'study1', 'series1', 'instance2');

    RAISE NOTICE 'Value patientInternalId: %, is_new: %', result.patient_internal_id, result.is_new_patient;
    RAISE NOTICE 'Value studyInternalId: %, is_new: %', result.study_internal_id, result.is_new_study;
    RAISE NOTICE 'Value seriesInternalId: %, is_new: %', result.series_internal_id, result.is_new_series;
    RAISE NOTICE 'Value instanceInternalId: %, is_new: %', result.instance_internal_id, result.is_new_instance;
    RAISE NOTICE '--------------';

    -- third call: nothing is new
    SELECT * INTO result FROM CreateInstance('patient1', 'study1', 'series1', 'instance2');

    RAISE NOTICE 'Value patientInternalId: %, is_new: %', result.patient_internal_id, result.is_new_patient;
    RAISE NOTICE 'Value studyInternalId: %, is_new: %', result.study_internal_id, result.is_new_study;
    RAISE NOTICE 'Value seriesInternalId: %, is_new: %', result.series_internal_id, result.is_new_series;
    RAISE NOTICE 'Value instanceInternalId: %, is_new: %', result.instance_internal_id, result.is_new_instance;
    RAISE NOTICE '--------------';

END $$;


-- \set patient_key 'patient_key'
-- SELECT CreateInstance('patient', 'study', 'series', 'instance', patient_key) ;

-- drop function CreateInstance
-- select * from "resources";
-- delete from "resources";
-- INSERT INTO "resources" VALUES (DEFAULT, 0, 'patient', NULL)



************************************************************************

In debug build, no verbose logs, 10 connections:
Orthanc 1.12.1 + PG 5.1 (serializable mode)          : test_concurrent_anonymize_same_study with 4 workers and 10x repeat: 43.957 s
Orthanc mainline + PG mainline (read-committed mode) : test_concurrent_anonymize_same_study with 4 workers and 10x repeat: 15.744 s
                                                       test_concurrent_anonymize_same_study deletion took: 18.8 s

Orthanc 1.12.1 + PG 5.1 (serializable mode)          : test_concurrent_uploads_same_study with 20 workers and 1x repeat: 21.214 s
Orthanc mainline + PG mainline (read-committed mode) : test_concurrent_uploads_same_study with 20 workers and 1x repeat: 9.514 s

Orthanc 1.12.1 + PG 5.1 (serializable mode)          : test_upload_delete_same_study_from_multiple_threads with 5 workers and 3x repeat (10x): 23.016 s
Orthanc mainline + PG mainline (read-committed mode) : test_upload_delete_same_study_from_multiple_threads with 5 workers and 3x repeat (10x): 7.129 s

Orthanc mainline + PG mainline (read-committed mode) : test_upload_multiple_studies_from_multiple_threads with 10 workers and 25 files and repeat 3x: 6.454 s

With Docker, with 10 SQL connections:
osimis/orthanc:24.1.2  : test_concurrent_anonymize_same_study with 4 workers and 10x repeat: 18.956 s FAIL!
                         test_concurrent_anonymize_same_study deletion took: NA
osimis/orthanc:current : test_concurrent_anonymize_same_study with 4 workers and 10x repeat: 6.867 s
                         test_concurrent_anonymize_same_study deletion took: 9.095 s

osimis/orthanc:24.1.2  : test_concurrent_uploads_same_study with 20 workers and 1x repeat: 9.822 s
osimis/orthanc:current : test_concurrent_uploads_same_study with 20 workers and 1x repeat: 16.027 s, up to 38 s (slower, but the test is not representative of a real-life scenario)

osimis/orthanc:24.1.2  : test_upload_delete_same_study_from_multiple_threads with 5 workers and 3x repeat (10x): 12.966 s
osimis/orthanc:current : test_upload_delete_same_study_from_multiple_threads with 5 workers and 3x repeat (10x): 4.196 s

osimis/orthanc:24.1.2  : test_upload_multiple_studies_from_multiple_threads with 10 workers and 25 files and repeat 3x: 8.957 s
osimis/orthanc:current : test_upload_multiple_studies_from_multiple_threads with 10 workers and 25 files and repeat 3x: 2.671 s

Testing the number of connections (note: Orthanc and the PG server running on the same machine):
10 connections : test_concurrent_anonymize_same_study with 4 workers and 10x repeat: 15.744 s
1 connection   : test_concurrent_anonymize_same_study with 4 workers and 10x repeat: 21.341 s
10 connections : test_concurrent_uploads_same_study with 20 workers and 1x repeat: 6.57 s
1 connection   : test_concurrent_uploads_same_study with 20 workers and 1x repeat: 10.223 s
10 connections : test_upload_delete_same_study_from_multiple_threads with 5 workers and 3x repeat (10x): 7.129 s
1 connection   : test_upload_delete_same_study_from_multiple_threads with 5 workers and 3x repeat (10x): 11.172 s


TODO:
- have a separate "thread" to UpdateStatistics?
- check https://discourse.orthanc-server.org/t/image-insert-are-too-slow-databse-performance-too-poor-when-using-mysql-mariadb/3820

DONE:
- implement a downgrade script? And test it in the PostgresUpgrades integration tests
- test the transfer plugin
- perf tests: upload generated data (different studies)
- in Docker images, re-enable the MySQL & ODBC plugins + tests
- re-enable PatientRecyclingOrder
- force the create/update DB transaction to be serializable (not needed: this is handled by POSTGRESQL_LOCK_DATABASE_SETUP)
- PatientAddedFunc contains an IF (check whether other IF/THEN/ELSE patterns remain)
- validate DB upgrade from a previous Orthanc and from scratch
- check the minimal version of PG (9.5 or 9.6? for CREATE INDEX IF NOT EXISTS): seems to work with 9.5, cf. the PostgresUpgrades integration tests
- test events generation StableSeries... (count the NewSeries, NewInstances events and make sure they match the numbers)