Posts by mparamas

    Yes, I was able to compile 150e on a test system with a cold install of RHEL 7.6, though I needed to install a few more rpms. I was then able to deploy dgate 150e with MariaDB on the production system. Thank you for your suggestions.


    Also found that the previous 0x110 DIMSE code was due to a StudyInstanceUID conflict from the Siemens GHVISION (PET/CT) scanner. Siemens assigned the same StudyInstanceUID to an exam done in December 2024 for one subject and to an exam done in February 2025 for a different subject, so dgate simply refused to store the second exam because of the StudyInstanceUID conflict. We will contact Siemens, as this should never have happened.
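
    For anyone checking for the same thing: the conflict can be confirmed on the database side by looking up which subject the StudyInstanceUID is already stored under. A rough sketch (column names are from my conquest.DICOMStudies table and should be verified with DESCRIBE):

    -- untested sketch: see which subject a given StudyInstanceUID already belongs to
    SELECT PatientID, StudyDate
    FROM conquest.DICOMStudies
    WHERE StudyInsta = '<StudyInstanceUID of the February 2025 exam>';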

    (continued ..) The DB differences between 1.4.17d and 1.5.0b are as follows:

    conquest.DICOMImages:
        field 15: Rows   -> QRows
        field 16: Colums -> QColums

    conquest.UIDMODS (5th and 6th fields are new; they did not exist in 1.4.17d):
        field 5: Stage      varchar(32), Null: YES, Key: MUL, Default: NULL
        field 6: Annotation varchar(64), Null: YES, Default: NULL

    The columns of the other tables match fine. Given these DB differences, how will the old 1.4.17d MariaDB tables work with the new dgate (150e)?

    The 1.4.17d SQL table conquest.DICOMImages uses the field names Rows (#15) and Colums (#16); the 150e version uses QRows (#15) and QColums (#16). The field names were changed, so this will throw SQL errors due to the lack of backward compatibility. Is a regen (dgate -v -r) the only option, or should I tweak the source before compiling on a test machine? If so, do you have pointers?
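
    One approach I am considering instead of a full regen is to rename the two columns in MariaDB so the existing data matches the new schema, and to add the two new UIDMODS columns. A rough, untested sketch (the column definitions must be copied from SHOW CREATE TABLE output; the varchar(5) below is only a guess):

    SHOW CREATE TABLE conquest.DICOMImages;
    -- rename the 1.4.17d columns to the 1.5.0 names, repeating the existing definitions
    ALTER TABLE conquest.DICOMImages CHANGE COLUMN `Rows`   `QRows`   varchar(5);
    ALTER TABLE conquest.DICOMImages CHANGE COLUMN `Colums` `QColums` varchar(5);
    -- add the two UIDMODS columns that are new in 1.5.0
    ALTER TABLE conquest.UIDMODS ADD COLUMN Stage      varchar(32) NULL DEFAULT NULL;
    ALTER TABLE conquest.UIDMODS ADD COLUMN Annotation varchar(64) NULL DEFAULT NULL;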

    Hi Marcel, thanks! I already have dgate 1.4.17d running with ~6 TB of data and MariaDB as the database. Lately, for one data set, 1.4.17d issued "Unknown response" with DIMSE code 0x110 and did not store the images, but the same data set was stored successfully under dgate 1.5.0b with MariaDB. So I plan to upgrade from 1.4.17d to the latest stable version, 150e. This brings up a quick question on compiling and installing: can I just install the precompiled binary and then change dicom.ini with the SQL information (database, username, password, etc.), or do I have to select the mysql/mariadb choice during installation? I do not want to wipe out the current DB contents, and I do not want to re-generate (dgate -v -r) as it would take a lot of time. I will back up /var and the dgate areas before the 150e install. Please advise. Thanks.
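
    For reference, these are the SQL-related dicom.ini entries I expect to carry over from 1.4.17d (key names as in my current file; the values below are placeholders):

    # SQL settings carried over from the 1.4.17d dicom.ini (placeholder values)
    SQLHost   = localhost
    SQLServer = conquest
    UserName  = conquestuser
    Password  = xxxxxxxx
    MySQL     = 1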

    Hello,


    We have a Siemens Vision PET/CT scanner, and I am looking for a dgate version that supports both Enhanced PET Image Storage (SOP Class UID 1.2.840.10008.5.1.4.1.1.130) and Enhanced CT Image Storage (SOP Class UID 1.2.840.10008.5.1.4.1.1.2.1). If dgate supports both, please indicate the earliest version that supports these formats and whether that version will compile under gcc 4.8.5. Thanks.
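
    In case it is relevant: my understanding (not verified against the manual) is that the accepted SOP classes are listed in dgatesop.lst, so if a given version does not already list these two UIDs they would need entries roughly like the following (the names are just my own labels):

    EnhancedCTImageStorage    1.2.840.10008.5.1.4.1.1.2.1    sop
    EnhancedPETImageStorage   1.2.840.10008.5.1.4.1.1.130    sop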

    Sorry for the earlier post; I just had the statements in the wrong section. It worked with the following setup in dicom.ini. Now I can modify the DICOM header, forward, and not keep a local copy. No ExportConverter is defined.


    ForwardAssociationLevel = GLOBAL


    ImportConverters=3

    ImportConverter2 = forward to RECVROUTER; destroy


    [lua]

    ImportConverter0 = Data.PatientID = "KEY1" .. Data.PatientID

    ImportConverter1 = Data.AccessionNumber = "KEY1" .. Data.AccessionNumber

    Commented-out export section:

    #ForwardAssociationLevel = GLOBAL

    #ExportConverters = 1

    #ExportConverter0 = forward to RECVROUTER



    [lua]

    ImportConverters=3

    ImportConverter0 = Data.PatientID = "KEY1" .. Data.PatientID

    ImportConverter1 = Data.AccessionNumber = "KEY1" .. Data.AccessionNumber

    ImportConverter2 = forward to RECVROUTER; destroy


    This did not work. I even tried "ImportConverter2 = forward to RECVROUTER; destroy; "


    Error:

    *** lua syntax error [string "forward to RECVROUTER; destroy"]:1: '=' expected near 'to' in 'forward to RECVROUTER; destroy'


    I tried:

    ImportConverter2 = forward to RECVROUTER

    ImportConverter3 = destroy


    Error:

    *** lua syntax error [string "forward to RECVROUTER"]:1: '=' expected near 'to' in 'forward to RECVROUTER'

    *** lua syntax error [string "destroy"]:1: '=' expected near '<eof>' in 'destroy'


    Is anything wrong with the syntax? forward seems to work only with ExportConverter.

    Forwarding works fine for me, but Conquest also keeps a local copy under MAGDevice0, which is not needed.


    FYI, we modify the header with the ImportConverters shown below for our integration purposes.


    dicom.ini (trimmed):


    MAGDevice0 = /data

    ForwardAssociationLevel = GLOBAL

    ExportConverters = 1

    ExportConverter0 = forward to RECVROUTER



    [lua]

    ImportConverters=2

    ImportConverter0 = Data.PatientID = "KEY1" .. Data.PatientID

    ImportConverter1 = Data.AccessionNumber = "KEY1" .. Data.AccessionNumber


    DICOM header modification works. Forwarding works. The destroy command does not work with either ExportConverter or ImportConverter. How do I configure dicom.ini to automatically remove the local copy in MAGDevice0 after successfully pushing the data out to the receiver (RECVROUTER)?


    Conquest info:

    DGATE (1.5.0d, build Mon Aug 19 16:39:34 2024, bits 64) is running as threaded server

    Database type: NULL driver (black hole)

    This (UINT) is exactly the problem. The file size is given above: 4305879424 bytes, which exceeds 4294967295 (the 32-bit unsigned maximum) by 10912129 bytes. Vendors are now switching to the Enhanced Image Storage formats, resulting in larger files, at least for research data sets. Clinical whole-body dynamic scans and functional MRI scans may also face this problem in the future.
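
    To illustrate the arithmetic only (a minimal standalone sketch, not taken from the dgate source):

    /* 32-bit size limit illustration; not dgate code */
    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        unsigned long long file_size = 4305879424ULL;   /* the Bruker file above */
        uint32_t wrapped = (uint32_t)file_size;          /* what a 32-bit unsigned field would hold */
        printf("file size : %llu\n", file_size);
        printf("UINT max  : %u\n", UINT32_MAX);
        printf("excess    : %llu bytes\n", file_size - (unsigned long long)UINT32_MAX);
        printf("wrapped   : %u\n", wrapped);             /* 4305879424 mod 2^32 = 10912128 */
        return 0;
    }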

    There is an option to take the Enhanced Image Storage object and split it into individual smaller DICOM files before sending, but that is inefficient. It looks like I will have to change the receiver from dgate to storescp for now. Please let us know if the code changes in the future. Thanks.

    I have a "4.1 GB" single DICOM file (no-PHI) from Bruker animal scanner to send to Conquest/dgate. This is a dynamic scan with EnhancedPETImageStorage. Conquest dgate(64-bit) fails to receive this file. But DCMTK's storescp received this file fine on the same commandline.

    I have dgate 1.5.0a 64-bit on Linux (RHEL 7), with the DB as the NULL driver. dgate's purpose is only to receive and store. There is nothing odd in the dicom.ini file; it stores the data as is.


    dgate: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=72f8f701116ba429eb8b09134f049c9f1cb8092d, not stripped


    This machine has 8 GB RAM.

    I also tried the 1.5.0b 64-bit dgate on a different machine with 16 GB RAM; still the same problem. Is there a fix for this in dgate?


    The culprit seems to be "Protocol error 217 in PDU:Read". Details below.


    File:

    -rw-r--r-- 1 user group 4305879424 Feb 22 16:23 EnIm1_1.dcm


    Sending:

    storescu -R -d -aec CTP localhost 4143 EnIm1_1.dcm



    Sender with verbose <trimmed>:

    ..............................................................................................................................................................................................................................

    E: Store Failed, file: EnIm1_1.dcm:

    E: 0006:0317 Peer aborted Association (or never connected)

    I: Peer Aborted Association



    Sender with debug <trimmed>:

    D: ======================= END A-ASSOCIATE-AC ======================

    I: Association Accepted (Max Send PDV: 4084)

    I: Sending file: EnIm1_1.dcm

    D: DcmMetaInfo::checkAndReadPreamble() TransferSyntax="Little Endian Explicit"

    D: DcmDataset::read() TransferSyntax="Little Endian Explicit"

    I: Converting transfer syntax: Little Endian Explicit -> Little Endian Explicit

    I: Sending Store Request (MsgID 1, PIe)

    D: ===================== OUTGOING DIMSE MESSAGE ====================

    D: Message Type : C-STORE RQ

    D: Message ID : 1

    D: Affected SOP Class UID : EnhancedPETImageStorage

    D: Affected SOP Instance UID : 2.16.756.5.5.100.8323328.43296.1644938863.35.0

    D: Data Set : present

    D: Priority : medium

    D: ======================= END DIMSE MESSAGE =======================

    E: Store Failed, file: EnIm1_1.dcm:

    E: 0006:0317 Peer aborted Association (or never connected)

    I: Peer Aborted Association


    Conquest (--debuglevel:4 on screen):


    Before sending data:

    Arena 0:

    system bytes = 1056768

    in use bytes = 738864

    Arena 1:

    system bytes = 516096

    in use bytes = 379840

    Total (incl. mmap):

    system bytes = 1708032

    in use bytes = 1253872

    max mmap regions = 3

    max mmap bytes = 16936960


    At failure when sending:

    Arena 0:

    system bytes = 1056768

    in use bytes = 738864

    Arena 1:

    system bytes = 111824896

    in use bytes = 96233744

    Total (incl. mmap):

    system bytes = 113016832

    in use bytes = 97107776

    max mmap regions = 3

    max mmap bytes = 16936960


    debug log from file <trimmed>:


    UPACS THREAD 4: STARTED AT: Fri Feb 25 11:49:42 2022

    A-ASSOCIATE-RQ Packet Dump

    Calling Application Title : "STORESCU "

    Called Application Title : "CTP "

    Application Context : "1.2.840.10008.3.1.1.1", PDU length: 16384

    Number of Proposed Presentation Contexts: 2

    Presentation Context 0 "1.2.840.10008.5.1.4.1.1.130" 1

    Presentation Context 1 "1.2.840.10008.5.1.4.1.1.130" 1

    Server Command := 0001

    Message ID := 0001

    0000,0002 28 UI AffectedSOPClassUID "1.2.840.10008.5.1.4.1.1.130"

    0000,0100 2 US CommandField 1

    0000,0110 2 US MessageID 1

    0000,0700 2 US Priority 0

    0000,0800 2 US DataSetType 1

    0000,1000 46 UI AffectedSOPInstanceU "2.16.756.5.5.100.8323328.43296.1644938863.35.0"

    0002,0010 19 UI TransferSyntaxUID "1.2.840.10008.1.2.1"

    Protocol error 217 in PDU:Read <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< Culprit?

    Failed STORAGE

    Link.Connected false in PDU:Read

    UPACS THREAD 4: ENDED AT: Fri Feb 25 11:49:58 2022

    UPACS THREAD 4: TOTAL RUNNING TIME: 16 SECONDS


    DCMTK's storescp receives fine:


    D: ======================= END A-ASSOCIATE-AC ======================

    D: DcmDataset::read() TransferSyntax="Little Endian Implicit"

    I: Received Store Request

    D: ===================== INCOMING DIMSE MESSAGE ====================

    D: Message Type : C-STORE RQ

    D: Presentation Context ID : 1

    D: Message ID : 1

    D: Affected SOP Class UID : EnhancedPETImageStorage

    D: Affected SOP Instance UID : 2.16.756.5.5.100.8323328.43296.1644938863.35.0

    D: Data Set : present

    D: Priority : medium

    D: ======================= END DIMSE MESSAGE =======================

    D: DcmDataset::read() TransferSyntax="Little Endian Explicit"

    I: storing DICOM file: ./PIe.2.16.756.5.5.100.8323328.43296.1644938863.35.0

    D: DcmFileFormat::checkMetaHeaderValue() Version of MetaHeader is ok: 0x0001

    D: DcmFileFormat::checkMetaHeaderValue() use SOPClassUID [1.2.840.10008.5.1.4.1.1.130] from Dataset

    D: DcmFileFormat::checkMetaHeaderValue() use SOPInstanceUID [2.16.756.5.5.100.8323328.43296.1644938863.35.0] from Dataset

    D: DcmFileFormat::checkMetaHeaderValue() use new TransferSyntaxUID [Little Endian Explicit] on writing following Dataset

    D: DcmFileFormat::validateMetaInfo() found 8 Elements in DcmMetaInfo 'metinf'

    I: Association Release


    Sender command:

    storescu -R -d -aec CTP localhost 4112 EnIm1_1.dcm


    Sender's debug:


    D: ======================= END A-ASSOCIATE-AC ======================

    I: Association Accepted (Max Send PDV: 16372)

    I: Sending file: EnIm1_1.dcm

    D: DcmMetaInfo::checkAndReadPreamble() TransferSyntax="Little Endian Explicit"

    D: DcmDataset::read() TransferSyntax="Little Endian Explicit"

    I: Converting transfer syntax: Little Endian Explicit -> Little Endian Explicit

    I: Sending Store Request (MsgID 1, PIe)

    D: ===================== OUTGOING DIMSE MESSAGE ====================

    D: Message Type : C-STORE RQ

    D: Message ID : 1

    D: Affected SOP Class UID : EnhancedPETImageStorage

    D: Affected SOP Instance UID : 2.16.756.5.5.100.8323328.43296.1644938863.35.0

    D: Data Set : present

    D: Priority : medium

    D: ======================= END DIMSE MESSAGE =======================

    D: DcmDataset::read() TransferSyntax="Little Endian Implicit"

    I: Received Store Response

    D: ===================== INCOMING DIMSE MESSAGE ====================

    D: Message Type : C-STORE RSP

    D: Presentation Context ID : 1

    D: Message ID Being Responded To : 1

    D: Affected SOP Class UID : EnhancedPETImageStorage

    D: Affected SOP Instance UID : 2.16.756.5.5.100.8323328.43296.1644938863.35.0

    D: Data Set : none

    D: DIMSE Status : 0x0000: Success

    D: ======================= END DIMSE MESSAGE =======================

    I: Releasing Association


    Receiver command:

    storescp -d +xa 4112





    Received file with DCMTK's storescp:

    -rw-rw-r-- 1 user group 4304155776 Feb 25 13:46 PIe.2.16.756.5.5.100.8323328.43296.1644938863.35.0


    I prefer to use Conquest's dgate rather than DCMTK's storescp. Please suggest a fix. Thanks.


    Sundar

    dgate compilation failed on Linux (RHEL 7.7) for 1.5.0b and also for the latest master from GitHub.


    I followed linuxmanual.pdf, and did

    chmod 777 maklinux

    ./maklinux

    choose option 3 or 5


    <trimmed>

    [sudo] password for xxxxx:

    /usr/bin/install -c cjpeg /usr/local/bin/cjpeg

    /usr/bin/install -c djpeg /usr/local/bin/djpeg

    /usr/bin/install -c jpegtran /usr/local/bin/jpegtran

    /usr/bin/install -c rdjpgcom /usr/local/bin/rdjpgcom

    /usr/bin/install -c wrjpgcom /usr/local/bin/wrjpgcom

    /usr/bin/install -c -m 644 ./cjpeg.1 /usr/local/man/man1/cjpeg.1

    /usr/bin/install -c -m 644 ./djpeg.1 /usr/local/man/man1/djpeg.1

    /usr/bin/install -c -m 644 ./jpegtran.1 /usr/local/man/man1/jpegtran.1

    /usr/bin/install -c -m 644 ./rdjpgcom.1 /usr/local/man/man1/rdjpgcom.1

    /usr/bin/install -c -m 644 ./wrjpgcom.1 /usr/local/man/man1/wrjpgcom.1

    Please ignore the errors above

    Please choose DB type

    1) mariadb 3) sqlite 5) precompiled

    2) postgres 4) dbase 6) Quit

    #? 3

    /usr/bin/ld: cannot find -llua5.1

    collect2: error: ld returned 1 exit status

    cp: cannot stat ‘./dgate’: No such file or directory

    cp: cannot stat ‘./dgate’: No such file or directory

    cp: cannot stat ‘./dgate’: No such file or directory

    cp: cannot stat ‘./dgate’: No such file or directory

    Regenerate the database?


    $ rpm -aq | grep lua

    lua-5.1.4-15.el7.x86_64

    lua-devel-5.1.4-15.el7.x86_64


    Pre-compiled (option 5) from GitHub's latest master worked; it was a 64-bit binary. The precompiled dgate under 1.5.0b was a 32-bit binary.


    I really would like to compile. Why do I get "/usr/bin/ld: cannot find -llua5.1"? Am I missing another library, or is there a fix in the maklinux file?
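
    My guess (unverified) is that -llua5.1 is the Debian/Ubuntu library name (liblua5.1.so), while the RHEL lua-devel package installs the library as liblua-5.1.so / liblua.so, so the linker never finds it. If so, something like this might work:

    # check what the library is actually called on this box
    ls -l /usr/lib64/liblua*
    # then either add a compatibility symlink ...
    sudo ln -s /usr/lib64/liblua-5.1.so /usr/lib64/liblua5.1.so
    # ... or change "-llua5.1" to "-llua" in maklinux and re-run it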


    Please let me know the fix. Thanks.

    CTP's DICOM export sends data directly to Conquest's AE title and port.


    >>The safest fix would be to set patient ID 0000000 to empty there
    Conquest adds the 0000000 automatically when it sees data sets with no PatientID. And, as I mentioned before, I would rather not see a PatientID tag at all in anonymized data sets. We already anonymize PatientName.


    If the PatientID tag is mandatory in Conquest, I can simply copy the PatientName value into the PatientID tag as a workaround.
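
    Something along these lines is what I have in mind, written in the same style as my other converters (untested sketch):

    [lua]
    ImportConverters = 1
    # untested: copy PatientName into PatientID only when PatientID is missing or empty
    ImportConverter0 = if Data.PatientID == nil or Data.PatientID == '' then Data.PatientID = Data.PatientName end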

    I have data sets that are passed through the MIRC CTP anonymizer (http://mircwiki.rsna.org/index.php?title=MIRC_CTP) and end up on a Conquest server. These data sets have the PatientID removed intentionally. Both the CTP and Conquest servers are only stages in the anonymization pipeline; the anonymized data gets sent on to a Horos PACS.


    I use Conquest instead of DCMTK's storescp since storescp tends to create multiple directories for the same StudyInstanceUID. The data sets are CT images, in the thousands.


    The problem is that the Conquest server re-introduces the PatientID tag into the data sets, with 0000000 as its value. Since I only want to use the Conquest server to receive the data sets, not to store them permanently, I have to do one extra step of removing these PatientID tags from several thousands of images using DCMTK's dcmodify. I use FileNameSyntax = %name\%sopuid.dcm in dicom.ini.
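
    The extra step is roughly the following (the path is a placeholder):

    # strip the re-introduced PatientID (0010,0020) from the received files, without backups
    find /path/to/received -name '*.dcm' -exec dcmodify -nb -e "(0010,0020)" {} +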


    Perhaps having PatientID as mandatory is the default behavior of any PACS. I am just wondering whether it would ever be possible not to introduce a PatientID into data sets when it is not present in them. Please let me know. Thanks.

    The OS and MySQL were sitting on a 7200 rpm SATA drive. The dd read/write speeds were not the issue on this disk; I was getting ~160+ MB/s write speeds, which should have yielded much more than 8 MR images per second (though the images themselves were being written to the mdadm array). It had to do with an FC17/MySQL-related issue. Instead of spending too much time troubleshooting the cause of the slowness on an unsupported OS, I figured that trying the latest stable platform would be the best and fastest option. The Conquest data itself resides on the mdadm array as mentioned above; I just took the OS install (upgrade) opportunity to pull out the spinning disk and replace it with an SSD.


    Now performance is not an issue. Thanks.

    Thank you both.


    Before making any changes to the MySQL database, I was getting only 4 MR images per second in storage. After the database setting changes shown below, it improved to only 8 MR images per second.
    innodb_buffer_pool_size=10G
    innodb_log_file_size = 512M
    max_connections was not an issue.


    Clearly, a regen was not an option, and the OS/database issue had to be dealt with. On a test machine I installed RHEL 7.2 with MariaDB, imported the conquest database, and compiled and installed dgate 1.4.17d. The operating system disk was an SSD. This setup gave a speed of 65 MR images per second in storage. With this result, I switched the operating system from Fedora Core 17 to RHEL 7.2 and also replaced the spinning OS disk with an SSD. Now I am getting 65 images per second in storage, and I do have the above database settings in /etc/my.cnf.


    Thanks again!

    Hi,


    Conquest's data storage is on a RAID array (mdadm), and the stored Conquest data is now at 4.6 TB out of 12 TB of space. Conquest, which was performing fine before, is now very slow in storing and retrieving data. The RAID is just fine, and I get pretty good read/write speeds via dd tests. I tried to regen from the stored data: I issued "dgate -v -r > /tmp/log 2>&1" last Friday evening at 4:30 PM and it was still running at 1:40 PM on Tuesday (today). It would be nice if the Linux version of the dgate regen showed % completed in verbose mode. I aborted the regen and restored the conquest database from the /var/lib/mysql backup that I made before the regen. I am still faced with slow performance.


    I would like to know what causes the slowness in dgate's storing and retrieving, and also how to make the regen process faster. I guess I could also simply run "./dgate -r", that is, without verbose. The machine has an 8-core 3.6 GHz CPU and 16 GB RAM. Other processes such as storescp work faster on the same machine.


    I would also like to know if there is any MySQL-specific configuration that people are using with Conquest.


    OS: 3.9.10-100.fc17.x86_64


    ./dgate -v
    DGATE (1.4.17alpha, build Mon Dec 3 10:54:01 2012, bits 64) is running as threaded server
    Database type: native MySQL connection


    I can also take this to dgate 1.4.17d if that helps.


    Please let me know. Thanks.

    MySQL 5.5.32-1.fc17.x86_64. Meanwhile, the RAM was upgraded to 20 GB, and I still run into the slow query the first time.
    /etc/my.cnf has these:
    [mysqld]
    innodb_buffer_pool_size=14G
    query_cache_type = 1
    query_cache_size = 256M


    The first run of this query took 46 min. While it was running, the top command showed:
    CPU usage at 99% and RAM usage at 10%


    The second run took only 2 seconds, thanks to the settings above. The memory upgrade did help for repeated queries.


    If there is a dgate setting to improve the performance of queries involving NumberOfStudyRelatedInstances, please let me know, since this is more of a CPU-intensive task. Thanks.
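
    For context, my understanding is that this count boils down to something like the following on the database side, i.e. a join and group over the whole 2M-row image table (table and column names are from my 1.4.17 schema and should be verified with DESCRIBE):

    -- rough sketch of what the per-study instance count amounts to
    SELECT s.StudyInsta, COUNT(*) AS NumInstances
    FROM DICOMSeries s
    JOIN DICOMImages i ON i.SeriesInst = s.SeriesInst
    GROUP BY s.StudyInsta;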


    Sundar

    I am running into extremely slow query responses with dgate 1.4.17alpha and 2M images (46 min), especially when doing an image count on all studies. My machine is a Dell T5300, 64-bit, dual-core 2.6 GHz with only 4 GB RAM, and I am just about to upgrade to 20 GB RAM. I have external 16 TB software RAID-6 storage via eSATA for the data, and its performance is pretty good. The OS is Fedora 17. Performance started to slow down as the number of images increased, due to hardware resources.


    Image count (NumberOfStudyRelatedInstances) query used (with DCMTK's findscu):
    time findscu -aet MYAET -aec DGATE -S -k 0008,0052=STUDY -k 0020,000D -k 0020,1208 localhost 1234


    With the same OS, same dgate version, 2M images, and same external RAID storage, but with an 8-core CPU and 16 GB RAM, the same query responds in about 6 seconds.


    I would like to know what hardware resources are adequate for Conquest with 2M images and up. Is there a hardware/DB/OS table for Conquest, put together by users, that I could look at from a performance standpoint?


    Sundar

    I have 1.4.17alpha running on Linux (Fedora 17) with ~2M images and MySQL. I want to cold install Fedora 20 with MySQL and run the latest 1.4.17d, with the same data mount points (from the external software RAID array), after a full backup.


    1.4.17d compiled and tested fine on a Fedora 20 test machine.


    Since the MySQL tables will be blank after the cold install, I plan to run "dgate -r" to re-initialize the database tables, which could take a while. Is this the only recommended option?
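
    The alternative I am considering, instead of a regen, is to dump and restore the MySQL database itself, e.g. (sketch; 'conquest' is my database name):

    # on the Fedora 17 machine, before the cold install
    mysqldump -u root -p conquest > conquest_backup.sql
    # on the Fedora 20 machine, after creating an empty 'conquest' database
    mysql -u root -p conquest < conquest_backup.sql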


    Sundar