510(k) Data Aggregation
Re: K243859
Trade/Device Name: PRAEVAorta®2
Regulation Number: 21 CFR 892.2050
Attribute | Value |
---|---|
Regulation Name | Automated Radiological Image Processing Software |
Regulation Number | 21 CFR 892.2050 |
Regulatory Class | Class II |
Product Code | QIH |
PRAEVAorta®2 is a software intended to be run on its own or as part of another medical device to automatically calculate maximum diameters of anatomical zones from a DICOM CT image containing blood vessels.
PRAEVAorta®2 is designed to measure the maximal transverse diameter of vessels and determine the maximal general diameter using a non-adaptive machine learning algorithm.
The software is intended for clinical specialists, physicians, and other licensed practitioners in healthcare institutions such as clinics, hospitals, healthcare facilities, residential care facilities, and long-term care services. Any results obtained from the software by an intended user other than a physician must be validated by the physician responsible for the patient.
The system is suitable for adults. Its results are not intended to be used on a standalone basis for clinical decision making or otherwise preclude clinical assessment of any disease.
PRAEVAorta®2 is a decision-making support software for diagnosis and follow-up of vascular diseases. It is intended for automatic segmentation and geometric analysis of vessels.
It is a companion software whose purpose is to support the physician in a first assessment of several indicators from CT scan images.
The software automatically reconstructs vascular structures from CT (computed tomography) scan images and automatically segments aneurysms and associated thrombus.
From this reconstruction, the software provides diameters, volumes, angles, and distances between anatomic points.
This software is cloud based or can be installed on premises. PRAEVAorta®2 is server software used through APIs; however, it is highly recommended to use it via a client application. The client provides a user interface to send images and receive the analysis results; it can be a web client, a gateway/PACS client, an integrating solution, or a marketplace.
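Because the summary describes only a generic API-driven workflow, the following client sketch is purely illustrative: the base URL, endpoint paths, JSON fields, and bearer-token authentication are all hypothetical and are not the vendor's documented API.

```python
# Hypothetical client sketch for a server-side analysis API.
# Endpoint paths, field names, and auth scheme are assumptions.
import time
import requests

BASE_URL = "https://praevaorta.example.com/api"  # placeholder server
TOKEN = "REPLACE_ME"                             # assumed bearer token

def analyze_ct_series(dicom_paths):
    headers = {"Authorization": f"Bearer {TOKEN}"}
    # Upload the DICOM files that make up one CT series.
    files = [("files", open(p, "rb")) for p in dicom_paths]
    resp = requests.post(f"{BASE_URL}/analyses", headers=headers, files=files)
    resp.raise_for_status()
    job_id = resp.json()["id"]
    # Poll until the segmentation/measurement job finishes.
    while True:
        job = requests.get(f"{BASE_URL}/analyses/{job_id}", headers=headers).json()
        if job["state"] in ("done", "failed"):
            return job  # diameters, volumes, angles, distances when done
        time.sleep(5)
```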
Based on the provided FDA 510(k) clearance letter for PRAEVAorta®2 (K243859), here's a description of the acceptance criteria and the study that proves the device meets those criteria:
Acceptance Criteria and Device Performance Study for PRAEVAorta®2
PRAEVAorta®2 is a software intended to automatically calculate maximum diameters of anatomical zones from DICOM CT images containing blood vessels, specifically focusing on the aorta and iliac arteries. The device utilizes a non-adaptive machine learning algorithm to measure maximal transverse diameters of vessels and determine the maximal general diameter.
1. Table of Acceptance Criteria and Reported Device Performance
The primary performance validation criterion for PRAEVAorta®2 was based on the measurement accuracy of the total maximum orthogonal aorta diameter compared to a ground truth established by expert manual measurements.
Variable | Acceptance Criteria | Reported Device Performance |
---|---|---|
Total Maximum Orthogonal Aorta Diameter | | |
Mean Absolute Error (MAE) | ≤ 5 mm | 2.04 mm (95% CI: [1.75 mm; 2.34 mm]) |
Percentage of values within the ≤ 5 mm limit | At least 96% of cases | 96.9% of values within the ≤ 5 mm limit |
Pearson Correlation Coefficient | Greater than 0.90 (defined as a very strong correlation) | 0.97 |
Bias (Mean Difference) | (Implied acceptance: close to zero, within reasonable limits) | -0.75 mm (95% CI: [-1.17 mm; -0.33 mm]) |
Percentage of values within 95% Limits of Agreement (Bland-Altman) | (Implied acceptance: high percentage) | 96.9% (limits of agreement: -6.01 mm to +4.51 mm) |
Conclusion: The device successfully met all defined acceptance criteria based on its reported performance.
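As a worked illustration of the statistics in this table, the sketch below computes MAE, bias, Bland-Altman 95% limits of agreement, the percentage of cases within 5 mm, and the Pearson correlation from paired diameter measurements. The synthetic arrays stand in for the study's actual device and ground-truth values.

```python
# Sketch: the accuracy statistics reported above, computed from paired
# diameter measurements. The synthetic data below are NOT study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
truth = rng.uniform(20, 60, size=159)              # expert diameters (mm)
device = truth + rng.normal(-0.75, 2.6, size=159)  # simulated device output

diff = device - truth
mae = np.abs(diff).mean()                          # Mean Absolute Error
bias = diff.mean()                                 # Bland-Altman bias
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)         # 95% limits of agreement
within_5mm = 100 * (np.abs(diff) <= 5).mean()      # % of cases within 5 mm
r, _ = stats.pearsonr(device, truth)               # Pearson correlation

print(f"MAE={mae:.2f} mm, bias={bias:.2f} mm, "
      f"LoA=[{loa[0]:.2f}, {loa[1]:.2f}] mm, "
      f"within 5 mm: {within_5mm:.1f}%, r={r:.2f}")
```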
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: 159 unique cases (patients).
- Data Provenance: The dataset included both contrast-enhanced and non-contrast-enhanced CT scans from:
- United States (81 CT scans)
- France (40 CT Scans)
- Canada (38 CT Scans)
- Retrospective/Prospective: The document does not explicitly state whether the data was retrospective or prospective, but the description "The selected CT scans were not used for AI training" and "Information collected on the dataset included patient demographics... imaging characteristics... and clinical management details" suggests these were pre-existing, retrospectively collected CT scans.
- The dataset included images from numerous scanner manufacturers (e.g., GE Medical System, Siemens, Philips, Toshiba) and comprised 95 preoperative and 64 postoperative CT scans (62 with an aortic stent graft). The patients were aged over 18 years, including 130 males, 28 females, and one patient of unknown sex.
3. Number of Experts Used to Establish Ground Truth and Qualifications
- Number of Experts: Four (4)
- Qualifications of Experts: All experts were vascular surgeons with at least five years of clinical experience in vascular diseases following board certification. They had no financial conflicts of interest and received adequate training from NUREA.
4. Adjudication Method for the Test Set
The ground truth was established by manual measurements performed by the four vascular surgeons. The document states, "The measurements performed by these professionals showed no discrepancy greater than 5 mm at the end of the collected data process." This suggests that a form of consensus or high agreement among experts was achieved, though a specific adjudication method (e.g., 2+1 tie-breaker, majority rule) for resolving discrepancies if they exceeded 5mm is not explicitly detailed. The statement implies that no significant discrepancies requiring formal adjudication arose.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No Multi-Reader Multi-Case (MRMC) comparative effectiveness study was explicitly described in the provided text. The study focused on the standalone performance of the PRAEVAorta®2 algorithm against expert-established ground truth manual measurements, rather than comparing human reader performance with and without AI assistance.
6. Standalone (Algorithm Only) Performance Study
Yes, a standalone performance study was conducted. The "Performance assessment" section details a technical performance assessment of PRAEVAorta®2 to validate its accuracy against measurements provided by the Ground Truth using manual measurement tools. This means the algorithm's measurements were directly compared to expert manual measurements to determine its accuracy and reliability.
7. Type of Ground Truth Used
The ground truth used was expert consensus (or high agreement) based on manual measurements performed by a panel of four qualified vascular surgeons. This is explicitly stated as the "reference standard" and "ground truth."
8. Sample Size for the Training Set
The document explicitly states, "The selected CT scans were not used for AI training." However, the exact sample size for the training set is not provided in this document. The focus of this section is on the validation/test dataset used for performance assessment.
9. How the Ground Truth for the Training Set Was Established
The document states that the testing dataset was explicitly not used for AI training, but it does not describe how the ground truth for the training set was established. This information would typically be detailed in the development methodology rather than the performance validation section of a submission summary.
Ultrasonic Pulsed Doppler Imaging System
Regulation Number: 21 CFR 892.1550, 892.1560, 892.1570, 892.2050
The Sonosite LX and Sonosite PX ultrasound systems are general purpose ultrasound systems intended for use by qualified physicians and healthcare professionals for evaluation by ultrasound imaging or fluid flow analysis of the human body. Specific clinical applications and exam types include:
- Abdominal
- Adult Cephalic
- Neonatal Cephalic
- Cardiac Adult
- Cardiac Pediatric
- Fetal - OB/GYN
- Musculo-skeletal (Conventional)
- Musculo-skeletal (Superficial)
- Ophthalmic
- Pediatric
- Peripheral vessel
- Small Organ (breast, thyroid, testicles, prostate)
- Transesophageal (cardiac)
- Transrectal
- Transvaginal
- Needle Guidance
Modes of operation include: B Mode (B), M-Mode (M) (including simultaneous M-Mode and anatomical M-Mode), PW Doppler (PWD) (including High Pulse Repetition Frequency (HPRF) and simultaneous PWD for certain exam types), Tissue Doppler Imaging (TDI), Continuous Wave Doppler (CWD), Color Power Doppler, Velocity Color Doppler, Color Variance, Tissue Harmonic Imaging (THI), Multi-beam imaging, Steep Needle Profiling, Trapezoid, and combined modes, including duplex and triplex imaging: B+M, B+PWD, B+CWD, B+C, (B+C)+PWD, (B+C)+CWD.
This device is indicated for Prescription Use Only.
The Sonosite LX and Sonosite PX ultrasound systems are intended to be used in medical practices, clinical environments, including Healthcare facilities, Hospitals, Clinics, and clinical point-of-care for diagnosis of patients.
The systems are used with a transducer attached, and they are powered either by battery or by AC electrical power. The clinician is positioned next to the patient and places the transducer onto the patient's body where needed to obtain the desired ultrasound image.
The Sonosite LX and Sonosite PX Ultrasound Systems are full featured, general purpose, software controlled diagnostic ultrasound systems used to acquire and display high-resolution, real-time ultrasound data in 2D, M-Mode, Pulsed Wave (PW) Doppler, Continuous Wave (CW) Doppler, Color Power Doppler (CPD), and Color Doppler (Color) or in a combination of these modes.
The systems include a variety of accessories including needle guide starter kits. The systems include USB host support for peripherals such as input devices, storage devices and Ethernet port. Input devices include wired and wireless devices. The systems also include an ECG-specific port to support the ECG feature. The non-diagnostic ECG module provides ECG tracing of the cardiac signal synchronized with the ultrasound image.
The provided document details the 510(k) clearance for the Sonosite LX and Sonosite PX Ultrasound Systems, specifically focusing on the addition of the PIV Assist AI feature. Here's a breakdown of the acceptance criteria and the study proving the device meets them:
Acceptance Criteria and Reported Device Performance
The document presents performance metrics for the PIV Assist AI algorithm, which serve as the acceptance criteria for its functionality. Specifically, the metrics cover "Vessel Precision," "Vessel Recall," "Vein Classification Accuracy," and "Artery Classification Accuracy" for two different transducers (L19-5 and L12-3). Additionally, average depth and diameter errors are reported.
Here's a table summarizing the reported device performance against implicitly defined acceptance criteria (as these are the results presented to demonstrate performance). No explicit "desired" or "threshold" values are given for acceptance criteria; rather, the reported performance is the demonstration of meeting the criteria.
Metric | Transducer | Reported Performance (95% CI) |
---|---|---|
Vessel Precision | L19-5 | 97.32% (97%-98%) |
Vessel Precision | L12-3 | 95.58% (95%-96%) |
Vessel Recall | L19-5 | 97.07% (96%-98%) |
Vessel Recall | L12-3 | 94.49% (93%-95%) |
Vessel Classification for Veins | L19-5 | 96.01% (95%-97%) |
Vessel Classification for Veins | L12-3 | 94.54% (93%-96%) |
Vessel Classification for Arteries | L19-5 | 89.71% (87%-92%) |
Vessel Classification for Arteries | L12-3 | 86.06% (83%-89%) |
Average Depth Error | L19-5 | 0.065 mm (0.062-0.068 mm) |
Average Depth Error | L12-3 | 0.105 mm (0.103-0.108 mm) |
Average Diameter Error | L19-5 | 6.2% (5.5-7.1%), 0.203 mm (0.186-0.219 mm) |
Average Diameter Error | L12-3 | 5.6% (5.0-6.2%), 0.19 mm (0.18-0.21 mm) |
Note: While specific acceptance thresholds are not explicitly stated, the reported high percentages and low error margins demonstrate the device's acceptable performance for its intended use.
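For reference, the sketch below shows how precision, recall, and a percentile-bootstrap 95% confidence interval of the kind reported above can be computed. The counts are illustrative, and the document does not state which CI method the sponsor actually used.

```python
# Sketch: precision/recall with a percentile-bootstrap 95% CI.
# The detection counts are illustrative, not study data.
import numpy as np

rng = np.random.default_rng(0)
tp, fp, fn = 940, 26, 28                   # illustrative detection counts
precision = tp / (tp + fp)
recall = tp / (tp + fn)

# Bootstrap the precision by resampling detection outcomes
# (per-subject resampling would be more faithful to clustered data).
outcomes = np.array([1] * tp + [0] * fp)   # 1 = true detection, 0 = false
boot = [rng.choice(outcomes, size=outcomes.size, replace=True).mean()
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"precision={precision:.4f} (95% CI {lo:.4f}-{hi:.4f}), recall={recall:.4f}")
```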
Study Details for PIV Assist AI Algorithm Performance Testing
Here's a breakdown of the study details as provided in the document:
- Sample Size used for the test set and the data provenance:
- Sample Size: 584 ultrasound clips from 292 subjects.
- Data Provenance: The data was collected prospectively from 3 hospitals within the United States. The document explicitly states that the validation dataset was collected at a "much later timeframe" and at "different sites from the training and tuning data" to ensure independence.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The document states that "certified Clinical Sonographers" independently labeled the data.
- For adjudication, an "Interventional Radiologist" evaluated the labeled data.
- Specific numbers of sonographers or the interventional radiologist are not provided. Their general qualifications (certified Clinical Sonographers, Interventional Radiologist) are mentioned, but specific experience (e.g., "10 years of experience") is not detailed.
- Adjudication method for the test set:
- An adjudication process took place "in cases where there was disagreement between the images."
- The adjudication was performed by an "Interventional Radiologist" who evaluated the labeled data to establish the final ground truth. This suggests a form of expert consensus with a tie-breaker, though specific methods like "2+1" or "3+1" are not explicitly stated. It implies a process where disagreements among sonographers were resolved by a higher-level expert.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- No, an MRMC comparative effectiveness study was not done. The document focuses solely on the "AI algorithm performance testing" in a standalone manner, evaluating its accuracy against ground truth. There is no mention of comparing human reader performance with and without AI assistance, nor any effect size.
- If a standalone (i.e., algorithm only without human-in-the-loop) performance assessment was done:
- Yes, a standalone study was done. The "Summary of PIV Assist AI Algorithm Performance Testing" section details metrics like precision, recall, and classification accuracy, which are inherent to the algorithm's performance when processing and interpreting ultrasound data. There's no indication of human interaction during the measurement of these performance metrics.
- The type of ground truth used:
- The ground truth was established through expert consensus and adjudication of ultrasound clips and frames. "Certified Clinical Sonographers" independently labeled the data, and an "Interventional Radiologist" adjudicated disagreements to establish the "final ground truth."
- The sample size for the training set:
- The document states that the validation dataset was collected independently from the "training and tuning data," but it does not provide the sample size of the training set.
- How the ground truth for the training set was established:
- The document mentions "training and tuning data" but does not explicitly detail how the ground truth for the training set was established. It can be inferred that a similar process of expert labeling, potentially with adjudication, was used given the rigorous approach for the validation set, but this is not explicitly stated.
Re: K250290
Trade/Device Name: SurgiTwin
Regulation Number: 21 CFR 892.2050
Classification Name: Medical image management and processing system (21 C.F.R. § 892.2050)
SurgiTwin is a web-based platform designed to help healthcare professionals carry out pre-operative planning for knee reconstruction procedures, based on their patients' imported imaging studies. Usage experience and clinical assessment are necessary for proper use of the system when reviewing and approving the planning output.
The system works with a database of digital representations related to surgical materials supplied by their manufacturers. SurgiTwin generates a PDF report as an output. End users of the generated SurgiTwin reports are trained healthcare professionals. SurgiTwin does not provide a diagnosis or surgical recommendation.
SurgiTwin is a semi-automated Software as a Medical Device (SaMD) that assists health care professionals in the pre-operative planning of total knee replacement surgery. Using a series of algorithms, the software creates 2D segmented images, a 3D model, and relevant measurements derived from the patient's pre-dimensioned medical images. The software interface allows the user to adjust the plan manually to verify the accuracy of the model and achieve the desired clinical targets. SurgiTwin generates a PDF report as an output. SurgiTwin does not provide a diagnosis or surgical recommendation.
The intended patient population is patients over 22 undergoing total knee replacement surgery without any existing material in the operated lower limb.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) clearance letter for SurgiTwin:
1. Acceptance Criteria and Reported Device Performance
The provided document specifically details acceptance criteria for the segmentation ML model. Other functions (automatic landmark function, metric generation, implant placement, osteophyte removal) are mentioned as having "predefined clinical acceptance criteria" and "all acceptance criteria were met," but the specific numeric criteria are not listed.
Table of Acceptance Criteria (for the Segmentation ML Model) and Reported Device Performance:
Metric | Acceptance Criteria | Reported Device Performance |
---|---|---|
Mean DSC (Dice Similarity Coefficient) | > 0.95 | Met (> 0.95, implied by "met the acceptance criteria") |
Mean voxel-based AHD (Average Hausdorff Distance) | 0.9 | Met (> 0.9, implied by "met the acceptance criteria") |
95th percentile of the boundary-based HD (HD95) | | |
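For reference, the two segmentation metric types named in this table can be computed from binary masks as in the following sketch (plain NumPy/SciPy; the submission's own tooling is not described).

```python
# Sketch: Dice similarity coefficient and 95th-percentile Hausdorff
# distance for two binary 3D masks (the metric types in the table above).
import numpy as np
from scipy import ndimage

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    """Distances from the boundary voxels of mask a to mask b."""
    a, b = a.astype(bool), b.astype(bool)
    boundary_a = a ^ ndimage.binary_erosion(a)
    dist_to_b = ndimage.distance_transform_edt(~b, sampling=spacing)
    return dist_to_b[boundary_a]

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    d = np.concatenate([surface_distances(a, b, spacing),
                        surface_distances(b, a, spacing)])
    return np.percentile(d, 95)

a = np.zeros((32, 32, 32), bool); a[8:20, 8:20, 8:20] = True
b = np.zeros((32, 32, 32), bool); b[9:21, 9:21, 9:21] = True
print(f"DSC={dice(a, b):.3f}, HD95={hd95(a, b):.2f} (voxel units)")
```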
Re: K252030
Trade/Device Name: RadiForce GX570; RadiForce GX570-AR
Regulation Number: 21 CFR 892.2050
Monochrome LCD Monitor
Classification Name: Medical Image Management and Processing System (21 CFR 892.2050)
This product is intended for use in clinical radiological images (including full-field digital mammography and digital breast tomosynthesis) for review, analysis, and diagnosis by trained medical practitioners.
RadiForce GX570 is a monochrome LCD monitor for viewing medical images including those of mammography. The monochrome panel employs in-plane switching (IPS) technology allowing wide viewing angles and the matrix size (or resolution) is 2,048 x 2,560 pixels (5MP) with a pixel pitch of 0.165 mm.
Factory-calibrated display modes, each characterized by a specific tone curve (including DICOM GSDF), a specific luminance range, and a specific color temperature, are stored in lookup tables within the monitor, so the tone curve remains compliant (e.g., with DICOM GSDF) regardless of the display controller used. This helps ensure consistent tone curves even if a display controller or workstation must be replaced or serviced.
The Digital Uniformity Equalizer function compensates luminance non-uniformity, one of the inherent characteristics of LCD panel modules, to the levels required by various QC standards and guidelines.
The Sharpness Recovery function compensates for sharpness degradation caused by the inherent characteristics of LCD panel modules (user selectable).
There are two model variations, GX570 and GX570-AR. The difference of the two variations is the surface treatment of the GX570 is Anti-Glare (AG) treatment and that of the GX570-AR is Anti-Reflection (AR) coating.
A configuration with two GX570 monitors mounted on a single stand is available, identified by an "MD" suffix (e.g., GX570-MD and GX570-AR-MD).
RadiCS is application software to be installed in each workstation offering worry-free quality control of diagnostic monitors including the RadiForce GX570 based on the QC standards and guidelines and is capable of quantitative tests and visual tests defined by them. The RadiCS is included in this 510(k) submission as an accessory to the RadiForce GX570.
RadiCS is of Basic Documentation Level and is used unchanged from the predicate software. RadiCS supports the functions of the RadiForce GX570 monitor and is not itself medical imaging software.
The provided FDA 510(k) clearance letter and summary are for a medical display monitor (RadiForce GX570), not an AI device or a diagnostic algorithm. Therefore, the information requested regarding acceptance criteria and a study proving an AI device meets those criteria cannot be extracted from this document.
The document discusses the technical performance of a display monitor, such as:
- Spatial resolution (MTF)
- Pixel defects
- Luminance and chromaticity (including DICOM GSDF conformance)
- Temporal response
- Noise (NPS)
- Display reflections
- Small-spot contrast ratio
These are physical and optical performance characteristics of a display hardware, not the diagnostic performance of a software algorithm.
Therefore, I cannot populate the requested table or answer the questions related to AI device performance, sample sizes for test/training sets, expert adjudication, MRMC studies, or ground truth establishment, as this information is not relevant to a medical display monitor clearance.
The document does state that the device is intended for use with "clinical radiological images (including full-field digital mammography and digital breast tomosynthesis) for review, analysis, and diagnosis by trained medical practitioners." However, the studies described are bench tests to assure the display hardware meets performance standards for displaying these images, not for interpreting them with AI.
Re: K250947
Trade/Device Name: VistaSoft 4.0 and VisionX 4.0
Regulation Number: 21 CFR 892.2050
Product Code(s): QIH
Legally Marketed Predicate Device: VisionX 3.0
VistaSoft 4.0 and VisionX 4.0 imaging software is an image management system that allows dentists to acquire, display, edit, view, store, print, and distribute medical images. VisionX 4.0 / VistaSoft 4.0 runs on user provided PC compatible computers and utilize previously cleared digital image capture devices for image acquisition.
The software must only be used by authorized healthcare professionals in dental areas for the following tasks:
- Filter optimization of the display of 2D and 3D images for improved diagnosis
- Acquisition, storage, management, display, analysis, editing and supporting diagnosis of digital/digitised 2D and 3D images and videos
- Forwarding of images and additional data to external software (third-party software)
The software is not intended for mammography use.
Additional information: The software is intended for the viewing and diagnosis of image data in relation to dental issues. Its proper use is documented in the operating instructions of the corresponding image-generating systems. Image-generating systems that can be used with the software include optical video cameras, digital X-ray cameras, phosphor storage plate scanners, extraoral X-ray devices, intraoral scanners, and TWAIN-compatible image sources.
The provided document is a 510(k) clearance letter for VistaSoft 4.0 and VisionX 4.0. It does not contain any information regarding acceptance criteria or a study proving the device meets those criteria, specifically concerning AI performance or clinical efficacy.
The document primarily focuses on regulatory compliance, outlining:
- The device's classification and regulation.
- Its intended use and indications for use.
- Comparison with a predicate device (VisionX 3.0), highlighting new features (image filter operations, annotations, cloud interface, cybersecurity enhancements).
- Compliance with FDA recognized consensus standards and guidance documents for software development and cybersecurity (e.g., ISO 14971, IEC 62304, IEC 82304-1, IEC 81001-5-1, IEC 62366-1).
- Statement that "Software verification and validation were conducted."
However, there is no specific information presented that describes:
- Quantitative acceptance criteria for device performance (e.g., sensitivity, specificity, accuracy).
- Details of a clinical or analytical study to demonstrate meeting these criteria.
- Sample sizes for test sets or training sets.
- Data provenance.
- Number or qualifications of experts for ground truth establishment.
- Adjudication methods.
- MRMC study results or effect sizes.
- Standalone algorithm performance.
- Type of ground truth used (e.g., pathology, expert consensus).
- How ground truth was established for training data.
The FDA 510(k) clearance process for this type of device (Medical image management and processing system, Product Code QIH) often relies on demonstrating substantial equivalence to a predicate device based on similar technological characteristics and performance, rather than requiring extensive clinical studies with specific performance metrics like those for AI-driven diagnostic aids. The "new features" listed (filter optimization, acquisition/storage/etc., forwarding data) appear to be enhancements to image management and display, not necessarily new diagnostic algorithms that would typically necessitate rigorous performance studies with specific acceptance criteria.
Therefore, based solely on the provided text, I cannot complete the requested tables and details regarding acceptance criteria and study results, as this information is not present in the document.
If such information were available, it would typically be found in a separate section of the 510(k) submission, often within the "Non-Clinical and/or Clinical Tests Summary & Conclusions" section in more detail than what is provided here, or in a specific performance study report referenced by the submission. The current document only states that "Software verification and validation were conducted" and lists the standards used for software development and cybersecurity, but not the outcomes of performance testing against specific acceptance criteria.
Re: K252526
Trade/Device Name: Rapid DeltaFuse
Regulation Number: 21 CFR 892.2050
Classification Name: Medical image management and processing system
Classification: II
Product Code: LLZ
Regulation No: 21 C.F.R. §892.2050
Predicate Device: Rapid DeltaFuse's predicate device is Rapid (K213165). Both the subject and predicate devices fall under regulation 21 CFR §892.2050, Medical image management and processing system.
Rapid DeltaFuse is an image processing software package to be used by trained professionals, including but not limited to physicians and medical technicians.
The software runs on a standard off-the-shelf computer or a virtual platform, such as VMware, and can be used to perform image viewing, processing, and analysis of images.
Data and images are acquired through DICOM compliant imaging devices.
Rapid DeltaFuse provides both viewing and analysis capabilities for imaging datasets acquired with Non-Contrast CT (NCCT) images.
The CT analysis includes NCCT maps showing areas of hypodense and hyperdense tissue including overlays of time differentiated scans of the same patient.
Rapid DeltaFuse is intended for use for adults.
Rapid DeltaFuse (DF) is a Software as a Medical Device (SaMD) image processing module and is part of the Rapid Platform. It provides visualization of time differentiated neuro hyperdense and hypodense tissue from Non-Contrast CT (NCCT) images.
Rapid DF is integrated into the Rapid Platform which provides common functions and services to support image processing modules such as DICOM filtering and job and interface management along with external facing cyber security controls. The Integrated Module and Platform can be installed on-premises within customer's infrastructure behind their firewall or in a hybrid on-premises/cloud configuration. The Rapid Platform accepts DICOM images and, upon processing, returns the processed DICOM images to the source imaging modality or PACS.
The provided FDA 510(k) clearance letter for Rapid DeltaFuse describes the acceptance criteria and the study that proves the device meets those criteria, though some details are absent.
Here's a breakdown of the information found in the document, structured according to your request:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are not explicitly stated in a quantified manner as a target. Instead, the document describes the type of performance evaluated and the result obtained.
Acceptance Criteria (Implied/Description of Test) | Reported Device Performance |
---|---|
Co-registration accuracy for slice overlays | DICE coefficient of 0.94 (Lower Bound 0.93) |
Software performance meeting design requirements and specifications | "Software performance testing demonstrated that the device performance met all design requirements and specifications." |
Reliability of processing and analysis of NCCT medical images for visualization of change | "Verification and validation testing confirms the software reliably processes and supports analysis of NCCT medical images for visualization of change." |
Performance of Hyperdensity and Hypodensity display with image overlay | "The Rapid DF performance has been validated with a 0.95 DICE coefficient for the overlay addition to validate the overlay performance..." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: 14 cases were used for the co-registration analysis. The sample size for other verification and validation testing is not specified.
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective or prospective).
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- This information is not provided in the document. The document refers to "performance validation testing" and "software verification and validation testing" but does not detail the involvement of human experts or their qualifications for establishing ground truth.
4. Adjudication Method for the Test Set
- This information is not provided in the document.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study was reported. The document focuses on the software's performance (e.g., DICE coefficient for co-registration) rather than its impact on human reader performance.
6. Standalone (Algorithm Only) Performance Study
- Yes, a standalone performance study was done. The reported DICE coefficients (0.94 and 0.95) are measures of the algorithm's performance in co-registration and overlay addition, independent of human interaction.
7. Type of Ground Truth Used
- The document implies that the ground truth for co-registration and overlay performance was likely established through a reference standard based on accurate image alignment and feature identification, against which the algorithm's output (DICOM images with overlays) was compared. The exact method of establishing this reference standard (e.g., manual expert annotation, a different validated algorithm output) is not explicitly stated.
8. Sample Size for the Training Set
- The document does not specify the sample size used for training the Rapid DeltaFuse algorithm.
9. How Ground Truth for the Training Set Was Established
- The document does not specify how the ground truth for the training set was established.
Re: K243680
Trade/Device Name: Neurovascular Insight V1.0
Regulation Number: 21 CFR 892.2050
| Regulation Description | Medical image management and processing system |
| Classification Name | System, Image Processing, Radiological |
| Regulation Number | 892.2050 |
Neurovascular Insight V1.0 is an optional user interface for use on a compatible technical integration environment and designed to be used by trained professionals with medical imaging education including, but not limited to, physicians. Neurovascular Insight V1.0 is intended to:
- Display and, if necessary, export neurological DICOM series and outputs provided by compatible processing docker applications, through the technical integration environment.
- Allow the user to edit and modify parameters that are optional inputs of aforementioned applications. These modified parameters are provided by the technical integration environment as inputs to the docker application to reprocess the outputs. When available, Neurovascular Insight V1.0 display can be updated with the reprocessed outputs.
- If requested by an application, allow the user to confirm information before displaying associated outputs and export them.
The device does not alter the original image information and is not intended to be used as a diagnostic device. The outputs of each compatible application must be interpreted by the predefined intended users, as specified in the application's own labeling. Moreover, the information displayed is intended to be used in conjunction with other patient information and based on professional judgment, to assist the clinician in the medical imaging assessment. It is not intended to be used in lieu of the standard care imaging.
Trained professionals are responsible for viewing the full set of native images per the standard of care.
Neurovascular Insight V1.0 is an optional user interface for use on a compatible technical integration environment and designed to be used by trained professionals with medical imaging education including, but not limited to, physicians and medical technicians.
It is worth noting that Neurovascular Insight V1.0 is an evolution of the FDA cleared medical device Olea S.I.A. Neurovascular V1.0 (K223532).
Neurovascular Insight V1.0 does not contain any calculation feature or any algorithm (deterministic or AI).
The provided FDA 510(k) clearance letter for Neurovascular Insight V1.0 states that the device "does not contain any calculation feature or any algorithm (deterministic or AI)." Furthermore, it explicitly mentions, "Neurovascular Insight V1.0 provides no output. Therefore, the comparison to predicate was based on the comparison of features available within both devices. No performance feature requires a qualitative or quantitative comparison and validation."
Based on this, it's clear that the device is a user interface and does not include AI algorithms or generate outputs that would require a study involving acceptance criteria for AI performance (e.g., sensitivity, specificity, accuracy). Therefore, the questions related to AI-specific performance criteria, ground truth establishment, training sets, and MRMC studies are not applicable to this particular device.
The "study" conducted for this device was a series of software verification and validation tests to ensure its functionality as a user interface and its substantial equivalence to its predicate.
Here's a breakdown of the requested information based on the provided document, highlighting where the requested information is not applicable due to the device's nature:
1. A table of acceptance criteria and the reported device performance
Note: As the device is a user interface without AI or output generation, there are no quantitative performance metrics like sensitivity, specificity, or accuracy that would typically be associated with AI algorithms. The acceptance criteria relate to the successful execution of software functionalities.
Acceptance Criteria (Based on information provided) | Reported Device Performance |
---|---|
Product risk assessment successfully completed | Confirmed |
Software modules verification tests successfully completed | Confirmed |
Software validation test successfully completed | Confirmed |
System provides all capabilities necessary to operate according to its intended use | Confirmed |
System operates in a manner substantially equivalent to the predicate device | Confirmed |
All features tested during verification phases (Software Test Description) | Successfully performed as reported in Software Test Report (STR) |
Specific features highlighted by risk analysis tested during usability process (human factor considered) | User Guide followed, no clinically blocking bugs, no incidents during processing |
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: Not explicitly stated as a number of patient cases or images, as the testing was focused on software functionality rather than AI performance on a dataset. The testing refers to "software modules verification tests" and "software validation test."
- Data Provenance: Not applicable in the context of clinical data for AI development/validation, as the device doesn't use or produce clinical outputs requiring such data. The testing was internal software validation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Not Applicable: Given that the device is a user interface and does not utilize AI or produce diagnostic outputs, there was no need to establish clinical ground truth for a test set by medical experts in the traditional sense. The "ground truth" for its functionality would be the design specifications and successful execution of intended features. The document mentions "operators" who "reported no issue" during usability testing, but these are likely system testers/engineers, not clinical experts establishing diagnostic ground truth.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not Applicable: No clinical ground truth was established, so no adjudication method was required.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- No: The document explicitly states, "Neurovascular Insight V1.0 does not contain any calculation feature or any algorithm (deterministic or AI)." Therefore, an MRMC study comparing human readers with and without AI assistance was not performed.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- No: The device does not contain an algorithm, only a user interface. Standalone algorithm performance testing is not applicable.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not Applicable: No clinical ground truth was established, as the device is a user interface without AI or diagnostic output generation. The "ground truth" for its validation was adherence to software specifications and intended functionalities.
8. The sample size for the training set
- Not Applicable: The device does not contain any AI algorithms, therefore, no training set was used.
9. How the ground truth for the training set was established
- Not Applicable: No training set was used.
Re: K243685
Trade/Device Name: MammoScreen BD
Regulation Number: 21 CFR 892.2050
Classification Name: Automated radiological image processing software
Regulation Number: 21 CFR §892.2050
| | MammoScreen BD (Subject Device, K243685) | Predicate Device |
|---|---|---|
| Manufacturer | Therapixel | Therapixel |
| Regulation number | 892.2050 | 892.2050 |
| Product Code | QIH | QIH |
| Medical Device Class | Class II | Class II |
MammoScreen® BD is a software application intended for use with compatible full-field digital mammography and digital breast tomosynthesis systems. MammoScreen BD evaluates the breast tissue composition to provide an ACR BI-RADS 5th Edition breast density category. The device is intended to be used in the population of asymptomatic women undergoing screening mammography who are at least 40 years old.
MammoScreen BD only produces adjunctive information to aid interpreting physicians in the assessment of breast tissue composition. It is not a diagnostic software.
Patient management decisions should not be made solely based on analysis by MammoScreen BD.
MammoScreen BD is a software-only device (SaMD) using artificial intelligence to assist radiologists in the interpretation of mammograms. The purpose of the MammoScreen BD software is to automatically process a mammogram to assess the density of the breasts.
MammoScreen BD processes the standard 2D mammography views (CC and/or MLO of FFDM and/or the 2DSM from DBT) to assess breast density.
For each examination, MammoScreen BD outputs the breast density following the ACR BI-RADS 5th Edition breast density category.
MammoScreen BD outputs can be integrated with compatible third-party software such as MammoScreen Suite. Results may be displayed in a web UI, as a DICOM Structured Report, a DICOM Secondary Capture Image, or within patient worklists by the third-party software.
MammoScreen BD takes as input a folder with images in DICOM formats and outputs breast density assessment in a form of a JSON file.
Note that the MammoScreen BD outputs should be used as complementary information by radiologists while interpreting breast density. Patient management decisions should not be made solely on the basis of analysis by MammoScreen BD, the medical professional interpreting the mammogram remains the sole decision-maker.
Here's a breakdown of the acceptance criteria and the study that proves MammoScreen BD meets them, based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Device Performance Study
The study primarily focuses on the standalone performance of MammoScreen BD in assessing breast density against an expert consensus Ground Truth. The key metric for performance is the quadratically weighted Cohen's Kappa (${\kappa}$).
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Primary Objective: Superiority in standalone performance for density assignment of MammoScreen BD compared to a pre-determined reference value (${\kappa_{\text{reference}} = 0.85}$). Statistical criterion: the one-sided p-value for the test $H_0: \kappa \leq 0.85$ is less than the significance level ($\alpha=0.05$) AND the lower bound of the 95% confidence interval for Kappa is $> 0.85$, indicating that the observed weighted Kappa is statistically significantly greater than 0.85. | Hologic: ${\kappa_{\text{quadratic}} = 89.03}$ [95% CI: 87.43 – 90.56]; Hologic Envision: ${\kappa_{\text{quadratic}} = 89.54}$ [95% CI: 86.88 – 91.69]; GE: ${\kappa_{\text{quadratic}} = 93.19}$ [95% CI: 90.50 – 94.92] |
All reported Kappa values exceed the reference value of 0.85, and their 95% confidence intervals' lower bounds are also above 0.85, satisfying the acceptance criteria.
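For reference, the quadratically weighted Cohen's kappa can be computed as in the sketch below. Note that the submission reports kappa scaled by 100 (e.g., 89.03 rather than 0.8903), and the labels here are illustrative, not study data.

```python
# Sketch: quadratically weighted Cohen's kappa between algorithm output
# and the reader consensus. Labels below are illustrative only.
from sklearn.metrics import cohen_kappa_score

algorithm = ["A", "B", "B", "C", "D", "C", "B", "A"]  # device BI-RADS density
consensus = ["A", "B", "C", "C", "D", "C", "B", "B"]  # 5-reader majority
kappa = cohen_kappa_score(algorithm, consensus, weights="quadratic")
print(f"quadratically weighted kappa = {kappa:.4f}")
```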
2. Sample Size and Data Provenance
Test Set:
- Hologic (original dataset): 922 patients / 1,155 studies
- Hologic Envision (new system for subject device): 500 patients / 500 studies
- GE (new system for subject device): 376 patients / 490 studies
Data Provenance:
- Hologic (original dataset):
- USA: 658 studies (distributed as A:85, B:269, C:241, D:63)
- EU: 447 studies (distributed as A:28, B:169, C:214, D:86)
- Hologic Envision: USA: 500 studies (distributed as A:50, B:200, C:200, D:50)
- GE:
- USA: 359 studies (distributed as A:38, B:155, C:139, D:31)
- EU: 129 studies (distributed as A:4, B:45, C:61, D:19)
All data for the test sets appears to be retrospective, as it's stated that the "Data used for the standalone performance testing only belongs to the test group" and is distinct from the training data.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: 5 breast radiologists
- Qualifications: At least 10 years of experience in breast imaging interpretation.
4. Adjudication Method for the Test Set
The ground truth was established by majority rule among the assessment of the 5 breast radiologists. This implies a 3-out-of-5 or more agreement for a given breast density category to be assigned as ground truth.
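A minimal sketch of such a majority rule, assuming a simple 3-of-5 quorum (the document does not describe how ties or sub-quorum cases were handled):

```python
# Sketch: majority-rule ground truth from five reader assessments.
from collections import Counter

def majority_label(reader_labels, quorum=3):
    label, count = Counter(reader_labels).most_common(1)[0]
    return label if count >= quorum else None  # None = unresolved

print(majority_label(["B", "B", "C", "B", "A"]))  # -> "B"
```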
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
There is no mention of an MRMC comparative effectiveness study being performed to assess how much human readers improve with AI vs. without AI assistance. The study focuses solely on the standalone performance of the AI algorithm. The device is described as "adjunctive information to aid interpreting physicians," but its effect on radiologist performance isn't quantified in this document.
6. Standalone Performance (Algorithm Only)
Yes, a standalone performance study was explicitly conducted. The results for the quadratically weighted Cohen's Kappa presented in the table above (89.03 for Hologic, 89.54 for Hologic Envision, and 93.19 for GE) are all for the algorithm's performance only ("MammoScreen BD against the radiologist consensus assessment").
7. Type of Ground Truth Used
The ground truth used was expert consensus based on the visual assessment of 5 breast radiologists.
8. Sample Size for the Training Set
- Total number of studies: 108,775
- Total number of patients: 32,368
9. How the Ground Truth for the Training Set was Established
The document states that the training modules are "trained with very large databases of annotated mammograms." While "annotated" implies ground truth was established, the specific method for establishing ground truth for the training set is not detailed in the provided text. It only specifies the ground truth establishment method for the test set (majority rule of 5 radiologists). It's common for training data to use various methods for annotation, which might differ from the rigorous expert consensus used for the test set.
Re: K252362
Trade/Device Name: GBrain MRI
Regulation Number: 21 CFR 892.2050
Predicate device: K250416 (Galileo CDS Inc), Product Code QIH, LLZ, Regulation Number 21 CFR 892.2050. The indications are equivalent to the scope of regulation 21 CFR 892.2050.
| | Subject Device | Predicate Device | Comparison |
|---|---|---|---|
| Manufacturer | Galileo CDS Inc | Galileo CDS Inc | - |
| Classification | Class II | Class II | Same |
| Regulation Number | 21 CFR 892.2050 | 21 CFR 892.2050 | Same |
| Regulation Description | Medical image management and processing system | Medical image management and processing system | Same |
GBrain MRI is a post processing medical device software intended for analyzing and quantitatively reporting signal hyperintensities in the brain on T2w FLAIR MR images and T1w post contrast images in the context of diagnostic radiology.
GBrain MRI is intended to provide automatic segmentation, quantification, and reporting of derived image metrics. It is not intended for detection or specific diagnosis of any disease nor for the detection of signal hyperintensities.
GBrain MRI should not be used in-lieu of a full evaluation of the patient's MRI scans. The physician retains the ultimate responsibility for making the final patient management and treatment decisions.
GBrain MRI is a non-invasive MR imaging post-processing medical device software that aids in the volumetric quantification of hyperintensities in T2-weighted Fluid Attenuated Inversion Recovery (T2w FLAIR), and in post contrast T1-weighted (T1c) brain MR images. It is intended to aid the trained radiologist in quantitative measurements.
The input to the software are the T2w FLAIR and the T1w post contrast brain MR images.
The outputs are volume measurements in Secondary Capture DICOM format, a DICOM Encapsulated pdf file, as well as a DICOM SR. More specifically, the total volume of hyperintensities in the input T2w FLAIR and the T1c are shown in two new secondary capture image series, called GBrain T2 FLAIR & GBrain T1 CE respectively, with a segmentation overlay on the hyperintensities that were used to measure the total volumes. These volume measurements are summarized in the DICOM encapsulated pdf and DICOM SR files.
The outputs are provided in standard DICOM format that can be displayed on most third-party DICOM workstations and Picture Archive and Communications Systems (PACS).
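As an illustration of consuming these outputs, a downstream script might extract the DICOM-encapsulated PDF report with pydicom as below. The file names are placeholders, and this workflow is not part of the cleared device.

```python
# Sketch: extracting a DICOM-encapsulated PDF report with pydicom.
import pydicom

ds = pydicom.dcmread("gbrain_report.dcm")       # hypothetical output file
if "EncapsulatedDocument" in ds:                # tag (0042,0011)
    with open("gbrain_report.pdf", "wb") as f:
        f.write(ds.EncapsulatedDocument)        # raw PDF bytes
```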
The software is suitable for use in routine patient care as a support tool for radiologists in assessment of structural adult brain MRIs, by providing them with complementary quantitative information.
The GBrain MRI processing architecture includes a proprietary automated internal pipeline that performs skull stripping, signal normalization, segmentations, volume calculations, and report generation.
From a workflow perspective, GBrain MRI is packaged as a computing appliance that is capable of supporting DICOM file transfer for input, and output of results. The software is designed without the need for a user interface after installation. Any processing errors are reported either in the output series report, or in the system log files.
GBrain MRI software is intended to be used by trained personnel only and is to be installed by trained technical personnel.
Quantitative reports and derived image data sets are intended to be used as complementary information in the review of a case.
The GBrain MRI software does not have any accessories or patient contacting components.
The GBrain MRI device is intended to be used for the adult population only.
Here's a summary of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) Clearance Letter.
Acceptance Criteria and Device Performance
1. Table of Acceptance Criteria and Reported Device Performance
Metric | Acceptance Criteria (Lower Bound of 95% CI) | Reported Device Performance (Lower Bound of 95% CI) |
---|---|---|
Volume Measurement (R²) | N/A (explicit value not stated, but implied by "passed planned acceptance criteria") | 0.94 (Contrast Enhancement) |
Segmentation Overlap (Dice Similarity Coefficient) | N/A (explicit value not stated, but implied by "passed planned acceptance criteria") | 0.81 (Contrast Enhancement) |
Reproducibility (R²) | N/A (explicit value not stated, but implied by "passed planned acceptance criteria") | 0.92 |
Note: While explicit acceptance values for R² and Dice were not provided in the document, the statement "passed the planned acceptance criteria" indicates that the reported performance values met the internal thresholds set by the manufacturer.
Study Details
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size (Test Set): 131 patient cases for Contrast Enhancement measurements.
- Data Provenance:
- Country of Origin: United States (collected from four separate hospital systems in Alabama, Florida, Kentucky, and California).
- Retrospective/Prospective: Not explicitly stated, but "collected from four separate hospital systems" and "external dataset used for validation was independent from the internal training datasets" typically implies a retrospective collection of existing data.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Three independent experts.
- Qualifications: US board-certified, experienced neuroradiologists.
4. Adjudication Method for the Test Set
- Method: Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm was used to generate a consensus ground truth from the three expert-labeled segmentations. This effectively acts as an automated adjudication method.
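For context, STAPLE estimates a consensus segmentation together with each rater's sensitivity and specificity by expectation-maximization. The sketch below is a minimal binary variant with a fixed global prior, for illustration only; a validated implementation (e.g., ITK's STAPLE filter) would be used in practice.

```python
# Sketch: minimal binary STAPLE EM loop (fixed global prior).
import numpy as np

def staple(votes, n_iter=50):
    """votes: (n_raters, n_voxels) binary array -> consensus probabilities."""
    d = np.asarray(votes, dtype=float)
    prior = d.mean()                    # fixed P(foreground)
    p = np.full(d.shape[0], 0.9)        # per-rater sensitivity
    q = np.full(d.shape[0], 0.9)        # per-rater specificity
    for _ in range(n_iter):
        # E-step: posterior that each voxel is truly foreground.
        fg = prior * np.prod(np.where(d == 1, p[:, None], 1 - p[:, None]), axis=0)
        bg = (1 - prior) * np.prod(np.where(d == 1, 1 - q[:, None], q[:, None]), axis=0)
        post = fg / (fg + bg + 1e-12)
        # M-step: re-estimate each rater's sensitivity and specificity.
        p = (d * post).sum(axis=1) / (post.sum() + 1e-12)
        q = ((1 - d) * (1 - post)).sum(axis=1) / ((1 - post).sum() + 1e-12)
    return post  # threshold at 0.5 for a consensus mask
```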
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
- No. The document describes a "standalone" performance evaluation of the algorithm against expert-derived ground truth, not a comparative effectiveness study involving human readers with and without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done
- Yes. The performance testing described focuses on comparing the software's segmentations to expert segmentations, indicating a standalone evaluation of the algorithm's accuracy.
7. The Type of Ground Truth Used
- Type: Expert consensus, specifically using the STAPLE algorithm to combine three independent expert-labeled segmentations.
8. The Sample Size for the Training Set
- Not explicitly stated. The document mentions the validation dataset was "independent from the internal training datasets" but does not specify the size of the training datasets.
9. How the Ground Truth for the Training Set Was Established
- Not explicitly stated. The document mentions "internal training datasets" but does not detail the method for establishing their ground truth. Given the validation approach, it's highly probable that similar expert-derived ground truth methods were used for training data, but this is an inference rather than a direct statement.
Re: K251837
Trade/Device Name: Salix Coronary Plaque (V1.0.0)
Regulation Number: 21 CFR 892.2050
Classification Name: System, Image Processing, Radiological
Classification: Class II Medical Device
Regulation Number: 21 CFR 892.2050
Salix Coronary Plaque (V1.0.0) is a web-based, non-invasive software application that is intended to be used for viewing, post-processing, and analyzing cardiac computed tomography (CT) images acquired from a CT scanner in a Digital Imaging and Communications in Medicine (DICOM) Standard format.
This software provides cardiologists and radiologists with interactive tools that can be used for viewing and analyzing cardiac computed tomography (CT) data for quantification and characterization of coronary plaques (i.e., atherosclerosis) and stenosis, and for performing calcium scoring on non-contrast cardiac CT.
Salix Coronary Plaque (V1.0.0) is intended to complement standard care as an adjunctive tool and is not intended as a replacement to a medical professional's comprehensive diagnostic decision-making process. The software's semi-automated features are intended for an adult population and should only be used by qualified medical professionals experienced in examining and evaluating cardiac CT images.
Users should be aware that certain views make use of interpolated data. These data are created by the software based on the original data set. Interpolated data may give the appearance of healthy tissue in situations where pathology that is near or smaller than the scanning resolution may be present.
Salix Coronary Plaque (K251837) is a web-based software application, hosted on Amazon Web Services cloud computing services, delivered using a SaaS model. The software provides interactive, post-processing tools for trained radiologists or cardiologists for viewing, analyzing, and characterizing cardiac computed tomography (CT) image data obtained from a CT scanner. The physician-driven coronary analysis is used to review CT image data to prepare a standard coronary report that may include the presence and extent of physician-identified coronary plaques (i.e., atherosclerosis) and stenosis, and assessment of calcium score performed on a non-contrast cardiac CT scan. The Cardiac CT image data are physician-ordered and typically obtained from patients who underwent CCTA or CAC CT for evaluation of CAD or suspected CAD.
Here's a detailed breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) Clearance Letter for Salix Coronary Plaque (V1.0.0):
Acceptance Criteria and Device Performance
| Salix Coronary Plaque Output | Statistic | Reported Device Performance (Estimate [95% CI]) | Acceptance Criteria | Result |
|---|---|---|---|---|
| Vessel Level Stenosis | Percentage within one CAD-RADS category | 95.8% [94.1%, 97.3%] | ≥ 90% | Pass |
| Total plaque | ICC3¹ | 0.96 [0.94, 0.98] | ≥ 0.70 | Pass |
| Calcified plaque | ICC3¹ | 0.96 [0.90, 0.99] | ≥ 0.80 | Pass |
| Noncalcified plaque | ICC3¹ | 0.91 [0.84, 0.95] | ≥ 0.55 | Pass |
| Low attenuating plaque | ICC3¹ | 0.61 [0.41, 0.93] | ≥ 0.30 | Pass |
| Calcium Scoring | Pearson Correlation | 0.958 [0.947, 0.966] | ≥ 0.90 | Pass |
| Centerline Extraction | Overlap score | 0.8604 [0.8445, 0.8750] | ≥ 0.80 | Pass |
| Vessel Labelling | F1 Score | 0.8264 [0.8047, 0.8479] | ≥ 0.70 | Pass |
| Lumen Wall Segmentation | Dice Score | 0.8996 [0.8938, 0.9055] | ≥ 0.80 | Pass |
| Vessel Wall Segmentation | Dice Score | 0.9016 [0.8962, 0.9070] | ≥ 0.80 | Pass |

¹ The intraclass correlation coefficient from a two-way mixed model, ICC(3,1), was used.
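For readers unfamiliar with these statistics, the sketch below shows how each metric in the table could be computed from device outputs and reference annotations. The formulas are standard definitions; the array shapes and inputs are placeholders, and none of this code comes from the submission.

```python
# Standard metric definitions only; a rough sketch, not submission code.
import numpy as np

def icc3_1(ratings):
    """ICC(3,1): two-way mixed effects, consistency, single measurement,
    for an (n_subjects x k_raters) array, e.g. column 0 = device plaque
    volume and column 1 = expert ground-truth volume."""
    y = np.asarray(ratings, dtype=float)
    n, k = y.shape
    grand = y.mean()
    msr = k * np.sum((y.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    msc = n * np.sum((y.mean(axis=0) - grand) ** 2) / (k - 1)  # raters
    sse = np.sum((y - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

def dice(a, b):
    """Dice overlap between two binary segmentation masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def within_one_category(pred, ref):
    """Fraction of cases whose predicted CAD-RADS category is within one
    category of the reference (the vessel-level stenosis metric)."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(ref)) <= 1))

def pearson_r(x, y):
    """Pearson correlation, e.g. device vs. reference calcium scores."""
    return float(np.corrcoef(x, y)[0, 1])

def f1(tp, fp, fn):
    """F1 score from true-positive / false-positive / false-negative
    counts, e.g. per-segment vessel-label matches."""
    return 2 * tp / (2 * tp + fp + fn)
```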
Study Details
1. A table of acceptance criteria and the reported device performance:
See table above.
2. Sample size used for the test set and the data provenance:
- Multi-reader Multi-case (MRMC) study for Plaque Volumes and CAD-RADS:
- Sample Size: 103 adult patients (58 women, 45 men; mean 61 ± 12 years, range 23–84).
- Data Provenance: Retrospective data from seven geographically diverse U.S. centers across four states (Wisconsin, New York, Arizona, and Alabama). Self-reported race was 57% White, 22% Black or African American, 12% Asian, 2% American Indian/Alaska Native; 7% declined/unknown. 13% identified as Hispanic or Latino. Scans were acquired on contemporary 64-detector-row or newer systems from Canon, GE, Philips, and Siemens, ensuring vendor diversity.
- Standalone Performance Validation for ML-enabled Outputs (Calcium Scoring, Centerline Extraction, Vessel Labelling, Lumen and Vessel Wall Segmentation):
- Sample Size:
- 302 non-contrast series for calcium scoring.
- 107 contrast-enhanced series for centerline extraction, vessel labeling, and wall segmentation.
- Data Provenance: Sourced from multiple unique centers in the USA that did not contribute any data to the training datasets for any Salix Central algorithm. The validation dataset consisted of de-identified cardiac CT studies from seven (7) centers across four (4) US states. Included representation of multiple scanner manufacturers (Canon, GE, Philips, and Siemens) and disease severity based on calcium score and maximum stenosis (CAD-RADS classification) based on source clinical radiology reports.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- For Plaque Volumes and CAD-RADS (MRMC study):
- Number of Experts: Multiple (implied: at least two initial experts plus a third adjudicator).
- Qualifications: Independent Level III-qualified (or equivalent experience) experts.
- For ML-enabled Outputs (Standalone Performance Validation):
- Number of Experts: Multiple (implied by the plural "board certified cardiologists and radiologists").
- Qualifications: Board certified cardiologists and radiologists with SCCT Level III certification (or equivalent experience).
4. Adjudication method for the test set:
- For Plaque Volumes and CAD-RADS (MRMC study): Discrepancies between the initial expert readers were resolved by a third independent adjudicator with Level III qualifications or equivalent experience. This is a "2+1" adjudication method (a schematic follows this list).
- For ML-enabled Outputs (Standalone Performance Validation): The ground truth was "independently established" by the experts from the source clinical image interpretation. The document does not specify an adjudication method for these specific tasks if there were multiple ground truth annotations.
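As a schematic of the "2+1" rule described above, the snippet below accepts a case when the two primary readers agree and consults a third, independent adjudicator only for discordant cases. The function and its inputs are illustrative assumptions, not taken from the clearance letter.

```python
# Illustrative "2+1" adjudication logic; not from the submission.
def two_plus_one(reader_a, reader_b, adjudicate):
    """Per-case labels from two primary readers; `adjudicate` is a
    callable invoked only for cases where the readers disagree."""
    return [a if a == b else adjudicate(i)
            for i, (a, b) in enumerate(zip(reader_a, reader_b))]

# e.g. two_plus_one([2, 3, 1], [2, 4, 1], lambda i: expert3_labels[i])
# (expert3_labels is a hypothetical list of adjudicator labels)
```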
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- Yes, an MRMC study was done.
- The study was not designed to measure the improvement of human readers with AI vs without AI assistance (i.e., a comparative effectiveness study of reader performance with and without the device).
- Instead, the MRMC study evaluated the performance of human readers using the Salix Coronary Plaque device compared to an expert ground truth. It states, "Eight U.S.-licensed radiologists and cardiologists... acted as SCP readers... They began with the device's standalone automated output and made any refinements they deemed necessary."
- The conclusion states: "This data supports our claim that qualified clinicians with minimal SCP specific training can achieve SCCT expert-level performance with SCP without the support of a core laboratory or specialized technician pre-read." This implies that the device enables a standard clinical user to achieve expert-level performance, but it does not quantify an effect size for improvement over readers' unaided performance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, a standalone performance validation was done for "ML-enabled Salix Coronary Plaque outputs for calcium scoring, centerline extraction, vessel labelling, and lumen and vessel wall segmentation against reference ground truth." These results are presented in the "Reported Device Performance" table and were shown to meet or exceed acceptance criteria.
- The MRMC study also started with the "device's standalone automated output," suggesting that the algorithm's initial automated output was part of the workflow, though readers could refine it.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
- Expert consensus/annotation:
- For the MRMC study (plaque volumes and CAD-RADS), ground truth was established by "Independent Level III-qualified (or equivalent experience) experts [who] produced vessel-wall and lumen segmentations and assigned CAD-RADS stenosis categories." Discrepancies were adjudicated by a third expert.
- For the standalone ML-enabled outputs, ground truth was established by "board certified cardiologists and radiologists with SCCT Level III certification (or equivalent experience) using manual annotation and segmentation tools."
8. The sample size for the training set:
- The document states that the validation data was "sourced from multiple unique centers in the USA that did not contribute any data to the training datasets for any Salix Central algorithm."
- However, the actual sample size used for the training set for Salix Coronary Plaque (V1.0.0) is not provided in the given text.
9. How the ground truth for the training set was established:
- This information is not provided in the given text for the Salix Coronary Plaque device. The letter describes how ground truth for the test set was established, but it does not detail the process (or the sample size) for the training data.