Hah my dicom.ini is too long to attach
It needs a lot of sprucing up but I haven't got the time at the moment. This ini is unchanged from 1.4.1x other than attempting to use the DelayedForwarderThreads.
Hey Marcel, long time! I'm finally getting our cluster worked up for a migration to 1.5.0c, on Linux of course. In my testing I'm finding that my DelayedForwarderThreads seem to be working on the same object at the same time. I'm assuming this is related to the discussion above... I'd really love a code fix so we can use this sweet, sweet parallelization if that is the case. If not, any ideas?
This is with:
ForwardAssociationLevel = Series (I've also tried Image and Study; Study had some initial success, but it would first fail to send and the speed it then achieved was less than a single thread.)
DelayedForwarderThreads = 2
Fri Mar 17 18:13:47 2023 Queue: retrying processing of file /conquest/data/20162809/1.3.12.2.1107.5.2.50.176059.30000023011712250368900000012_0006_000003_16790975700000.dcm
Fri Mar 17 18:13:47 2023 Exportconverter0.0 executes: sh /conquest/scripts/cqStats/cqExportNotifier.sh PACSSERVER MR CONQUESTSRV1 exporting PACSQC-CC 1.3.12.2.1107.5.2.50.176059.30000023011712250368900000016 20162809 10006833
Fri Mar 17 18:13:47 2023 Queue: retrying processing of file /conquest/data/20162809/1.3.12.2.1107.5.2.50.176059.30000023011712250368900000012_0006_000003_16790975700000.dcm
Fri Mar 17 18:13:47 2023 Exportconverter0.0 executes: sh /conquest/scripts/cqStats/cqExportNotifier.sh PACSSERVER MR CONQUESTSRV1 exporting PACSQC-CC 1.3.12.2.1107.5.2.50.176059.30000023011712250368900000016 20162809 10006833
/edit: I'm using the precompiled binary with SQLite
hammad, this is a collection of (my own) poorly-scripted tools that require specific import/export converter configurations and a Grafana/InfluxDB environment. I can make them available as a package, but they were never intended for distribution and you will need to edit them to make them work. That, and you're going to need Grafana/InfluxDB running. If you can handle all of this, I'll start thinking about making it downloadable.
I've created some shell scripts, linked in my converters, which push data over to a TSDB I have running, with Grafana visualizing it.
You can do a lot with shell scripts and Lua, and of course with Grafana how you visualize the information is up to you.
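The shape of it is roughly this. A minimal sketch of the notifier, assuming an InfluxDB 1.x /write endpoint and that the converter passes its fields positionally, as in the log lines above; the host, database, and measurement names here are made up:

#!/bin/sh
# cqExportNotifier.sh SERVER MODALITY SOURCE_AE ACTION DEST_AE UID PATID ACC
# Push one counter point to InfluxDB using the 1.x line protocol.
SERVER="$1"; MODALITY="$2"; DEST="$5"
curl -s -XPOST "http://influxhost:8086/write?db=conquest" \
  --data-binary "cq_exports,server=$SERVER,modality=$MODALITY,dest=$DEST value=1"

Grafana then just queries the cq_exports measurement.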
I'm starting to see some very large ultrasound cine clips and have begun running into issues with 32-bit memory spaces. While the vendor of the imaging device is looking to resolve this at the source, I'm hoping to come up with a tactic to split the multiframe into single frames. Curious if anyone here has already woven this into a Lua script or other ExportConverter.
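The fallback I'm considering is shelling out from an ExportConverter to an external splitter. A rough sketch, assuming dcm4che's emf2sf tool is installed; the paths are illustrative and the flags should be checked against emf2sf's own help:

#!/bin/sh
# split-frames.sh -- explode a multiframe object into single-frame objects
IN="$1"                          # multiframe file handed over by the converter
OUTDIR=/conquest/incoming/split
mkdir -p "$OUTDIR"
# emf2sf (dcm4che) writes each frame out as its own single-frame SOP instance
emf2sf --out-dir "$OUTDIR" "$IN"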
Thanks Marcel!
If it works for the end-user that's a win.
It sounded like your database had the right information and the filesystem was wrong. I was recommending updating the filesystem.
Hi,
NEVER use V2 with modern images. Your only salvage would be to look in the images, and make sure that all sequences in there are known in dgate.dic.
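A quick way to audit that, assuming dcmtk's dcmdump is on the box (the grep is crude and the dictionary path is illustrative):

# list every sequence (SQ) tag present in a problem image
dcmdump problem.dcm | awk '$2 == "SQ" {print $1}' | sort -u
# then confirm each tag has an entry in ConQuest's dictionary
grep -i '1140' /conquest/dgate.dic    # example element; adapt the pattern to dgate.dic's layout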
Marcel
What do you recommend, Marcel? We're beginning to handle very large ultrasound content and I'm running into compression bottlenecks. Our cluster is 4 hosts for HA and throughput, but they are only 3 CPUs / 3 GB of memory each, so we are going to need to bump that up. We generally prefer JPEG 2000, and we have not yet moved to 1.5, though I'm trying to make some progress on that today.
hammad, I've had a similar challenge in the past and was never quite able to come to a workable solution. Marcel has a stickied post on the forum about not changing patient IDs in this manner. I didn't like the UID regeneration, so I never actually went through with it.
Why not to change patientID of a dicom image
Adding to the complexity, depending on your use case, you also end up with the object in memory carrying metadata different from the object on disk.
I really would like the ability to control the data the way you describe, but CQ's architecture doesn't support it.
You could update the Lua (or call another) and have it rename the folders as it updates the objects.
I've not tried it, and the idea only came to me while writing this, but you may be able to have the ImportConverter do something a bit tricky.
Pseudo:
ImportConverter0 = Update Patient ID, Delete stored study(not destroy!), forward study-level to ImportConverter1
ImportConverter1 = Whatever your normal importconverter rule would be
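If anyone wants to experiment, the first rule might look like this. Purely a hedged sketch: the lua one-liner follows the manual's ImportConverter examples, but the 'NEW-' mapping and the delete-and-resubmit plumbing are hypothetical and untested:

# dicom.ini -- rewrite the ID on the way in; the mapping here is made up
ImportConverter0 = lua "Data.PatientID = 'NEW-' .. Data.PatientID"
# deleting the stored study and re-running the normal rule would still need
# real plumbing (dgate maintenance switches or a Lua servercommand); untested
ImportConverter1 = forward to PACSSERVER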
As for modifying your saved data, I've got no experience there, so I'd probably be hacking it together with an OS-hosted script.
Some filesystems also have issues with the total length of the file path. That's a very long path and looking critically, it's just 0-9 repeated 12 times. What's the purpose of building a path like that?
Look into using filesystem links if you can't fix your path. You should be able to work around this without needing a CQ change.
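For example (a sketch with made-up paths): keep the data somewhere shallow and let the long logical path be a link.

#!/bin/sh
mkdir -p /data/cq
# the application keeps using the long path; the filesystem resolves the link
ln -s /data/cq /your/very/long/legacy/path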
I swore I tried that, but I'll give it another attempt.
/edit- apparently I didn't try hard enough
Thanks Marcel.
Could the launching switch be an option in the future? It would definitely make this cleaner. I've spent the last several minutes trying to hack it into dgate but haven't had any success.
Looks like putparam updates the file but the in-memory AEtitle remains what was in dicom.ini at the time of launch.
/edit: if I throw a --read_ini: at the instance, it does not adopt the new MyACRNema value. I'm guessing the dgate instance has to be respawned when that changes. It would be excellent if I were able to invoke dgate with MyACRNema in the launch string.
We run a cluster of 4 nodes and would like to enable query/retrieve through them. Currently they each go by the same AE title, as we have replicated the dicom.ini to each server. We enforce config versioning through git: all servers in the cluster pull their dicom.ini from a single git project. This way we only have to change one location, and we get proper replication with historic versioning available for rollback.
The problem is that MyACRNema is the same for all app servers. While Query works, Retrieve most often results in the store half of the move being load-balanced to the wrong node.
Can we launch dgate from the command line with parameters, including MyACRNema? Does this parameter update dynamically, and would putparam be a viable option?
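If the launch switch isn't possible, one git-friendly workaround would be templating the single per-node value at deploy time. A sketch; the template name and AE naming scheme are made up (mind the 16-character AE title limit):

#!/bin/sh
# keep dicom.ini.template in git with a generic MyACRNema, render per node
sed "s/^MyACRNema.*/MyACRNema = CQ_$(hostname -s | tr 'a-z' 'A-Z')/" \
    dicom.ini.template > dicom.ini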
This is somewhat common with our instances of dgate as well. For our service start and restart, we allow something like 100 tries before failing. If you do a netstat -aln you will see there are still living sockets on 5678.
I've found that smaller instances with less work terminate dgate rapidly and reliably.
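Our restart wrapper amounts to something like this (a sketch; the start command and limits are illustrative):

#!/bin/sh
# wait for the old dgate listener to actually leave 5678 before starting anew
tries=0
while netstat -aln | grep -q ':5678 '; do
  tries=$((tries + 1))
  [ "$tries" -ge 100 ] && { echo "port 5678 never freed" >&2; exit 1; }
  sleep 1
done
/etc/init.d/dgate start    # or: systemctl start dgate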
Built on Ubuntu 18.04 and 16.04 boxes and did not see the same problem. I'm guessing it's something to do with the glibc libraries that ship with the different distributions. We're probably going to migrate the cluster to Ubuntu based on this.
/edit: for fun I changed the pdu parameter to 256k without the if/else. Builds still work even with that!
The test I did was with 1419d, which was built on that box, and also 1419b, which was built on another box and was functioning properly there. Wouldn't the cloned install have the proper parameters?
I've built a new environment for a new project. It's running CentOS 7, as that is our standard flavor given our scoping towards enterprise roles.
I've now run 1419d and 1419b (copied from a known-functional host) on this new box and am seeing the same odd image interleaving I've never seen CQ do before. When I retrieve these images from any number of different viewers, the problem remains. I'm about to blow away the VM and take a clone of a previous environment; I was hoping to start fresh and create some new standards for our implementations moving forward. Any thoughts, Marcel?
This occurred today. I was able to use the webserver to move the object to the destination successfully, so it's got something to do with the object in memory in the ExportConverter.
I wonder if this could be resolved by adding some sort of time buffer on the ImportConverter, so that cine clips aren't released to the ExportConverters for X number of seconds.
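There may already be a knob close to this; hedged, since I haven't verified it against 1.5.0c, but I recall the manual describing a collect delay on the forwarding queue:

# dicom.ini -- seconds the forwarder waits for further images of a
# series/study before releasing it; the value here is illustrative
ForwardCollectDelay = 120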
We have 4 nodes in a balanced cluster, each with its own DB. Would the changeUID function result in differing UIDs on each environment if repeated sends were balanced to other nodes?
We haven't really considered using shared storage or a shared DB for these systems, but doing so could possibly help with tasks like this.
We only want to generate a new SeriesUID.
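One balance-proof idea, purely a hedged and untested sketch: derive the new SeriesInstanceUID deterministically from the old one, so every node computes the same value no matter where the resend lands. Mind the 64-character UID limit:

# dicom.ini -- deterministic rewrite; the '.999' suffix is arbitrary
ImportConverter0 = lua "Data.SeriesInstanceUID = Data.SeriesInstanceUID .. '.999'"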
I'm going to keep cracking at this and see if I can discover the reason for failure. Will report back.