aPROMISE X is intended to be used by healthcare professionals and researchers for acceptance, transfer, storage, image display, manipulation, quantification and reporting of digital medical images. The system is intended to be used with images acquired using nuclear medicine (NM) imaging with PSMA PET/CT. The device provides general Picture Archiving and Communications System (PACS) tools as well as a clinical application for oncology, including marking of regions of interest and quantitative analysis.
aPROMISE (automated PROstate specific Membrane Antigen Imaging SEgmentation) X is an updated version of the previously cleared device, aPROMISE v1.2.1 (K211655), with a web interface where users can upload body scans of PSMA PET/CT image data in the form of DICOM files, review patient studies, and share study assessments within a team. The software platform has two installation configurations: it can be deployed either to a cloud infrastructure or to a local server at a clinical facility. The platform can communicate via the HTTP protocol as described in Part 18 of the DICOM standard, which enables direct transmission of DICOM data from a PACS to aPROMISE X; manual upload via the aPROMISE X user interface is also supported. The software complies with the Digital Imaging and Communications in Medicine (DICOM) 3 standard.
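To make the Part 18 transmission path concrete, the following is a minimal sketch of a STOW-RS upload of a single DICOM file over HTTP, which is the kind of exchange DICOM PS3.18 describes. The endpoint URL, authentication scheme, and file path are hypothetical; the summary does not describe the actual aPROMISE X interface details.

```python
import requests

# Hypothetical DICOMweb STOW-RS endpoint; the real aPROMISE X URL and
# authentication scheme are not specified in the 510(k) summary.
STOW_URL = "https://apromise.example.com/dicomweb/studies"
BOUNDARY = "DICOM_BOUNDARY"

def stow_rs_upload(dicom_path: str, token: str) -> requests.Response:
    """POST one DICOM Part 10 file using the STOW-RS multipart format."""
    with open(dicom_path, "rb") as f:
        part10_bytes = f.read()

    # STOW-RS wraps each instance in a multipart/related body (DICOM PS3.18).
    body = (
        f"--{BOUNDARY}\r\n"
        "Content-Type: application/dicom\r\n\r\n"
    ).encode("ascii") + part10_bytes + f"\r\n--{BOUNDARY}--\r\n".encode("ascii")

    headers = {
        "Content-Type": f'multipart/related; type="application/dicom"; boundary={BOUNDARY}',
        "Accept": "application/dicom+json",
        "Authorization": f"Bearer {token}",
    }
    return requests.post(STOW_URL, data=body, headers=headers, timeout=60)
```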
Multiple scans can be uploaded for each patient, and the system provides a separate review for each study. The review page displays studies in a 4-panel view showing PET, CT, PET/CT fusion and maximum intensity projection (MIP) simultaneously, and includes the option to display each view separately. The device is used to review entire patient studies, using image visualization and analysis tools that let users identify and mark regions of interest (ROIs). While reviewing image data, users can mark ROIs by selecting from pre-defined hotspots that are highlighted when hovering with the mouse pointer over the segmented region, or by manual drawing, i.e., selecting individual voxels in the image slices to include as hotspots. Selected or drawn hotspots are subject to automatic quantitative analysis. The user can review the results of this quantitative analysis and determine which hotspots should be reported as suspicious lesions.
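As an illustration of the quantitative-analysis step, the sketch below computes the standard body-weight SUV statistics (SUVmax, SUVmean) and lesion volume for one hotspot given a boolean voxel mask. The summary does not disclose which statistics aPROMISE X reports or how they are implemented; the formulas here are the conventional SUV definitions, and all names are illustrative.

```python
import numpy as np

def hotspot_statistics(
    activity_bqml: np.ndarray,   # PET voxel values in Bq/mL
    mask: np.ndarray,            # boolean hotspot mask, same shape as the volume
    voxel_spacing_mm: tuple[float, float, float],
    injected_dose_bq: float,
    body_weight_g: float,
) -> dict[str, float]:
    """Conventional body-weight SUV statistics for one segmented hotspot.

    SUV = activity concentration / (injected dose / body weight); it is
    dimensionless when activity is in Bq/mL and weight in g, assuming
    1 mL of tissue weighs ~1 g.
    """
    suv = activity_bqml / (injected_dose_bq / body_weight_g)
    voxels = suv[mask]

    # Volume = number of masked voxels x volume of one voxel (mm^3 -> mL).
    voxel_volume_ml = float(np.prod(voxel_spacing_mm)) / 1000.0
    return {
        "suv_max": float(voxels.max()),
        "suv_mean": float(voxels.mean()),
        "volume_ml": float(mask.sum()) * voxel_volume_ml,
    }
```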
To create a report, the signing user is required to confirm quality control and electronically sign the report preview. Signed reports are saved in the device and can be exported as a JPG or DICOM file.
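The summary does not describe how the DICOM export is produced. A conventional way to store a rendered report page in a PACS-compatible form is as a DICOM Secondary Capture object; the sketch below illustrates that general idea with pydicom 2.x, under that assumption. All identifiers and the function name are placeholders, not the device's implementation.

```python
import datetime
import numpy as np
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

SC_SOP_CLASS_UID = "1.2.840.10008.5.1.4.1.1.7"  # Secondary Capture Image Storage

def save_report_as_secondary_capture(
    pixels: np.ndarray,   # 8-bit grayscale rendering of the signed report page
    patient_id: str,
    out_path: str,
) -> None:
    """Wrap a rendered report image in a DICOM Secondary Capture object."""
    meta = FileMetaDataset()
    meta.MediaStorageSOPClassUID = SC_SOP_CLASS_UID
    meta.MediaStorageSOPInstanceUID = generate_uid()
    meta.TransferSyntaxUID = ExplicitVRLittleEndian

    ds = Dataset()
    ds.file_meta = meta
    ds.is_little_endian = True
    ds.is_implicit_VR = False
    ds.SOPClassUID = SC_SOP_CLASS_UID
    ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
    ds.Modality = "OT"  # "other": typical for secondary-capture objects
    ds.PatientID = patient_id
    ds.StudyInstanceUID = generate_uid()
    ds.SeriesInstanceUID = generate_uid()
    ds.ContentDate = datetime.date.today().strftime("%Y%m%d")

    # Single-frame monochrome pixel module.
    ds.SamplesPerPixel = 1
    ds.PhotometricInterpretation = "MONOCHROME2"
    ds.Rows, ds.Columns = pixels.shape
    ds.BitsAllocated = 8
    ds.BitsStored = 8
    ds.HighBit = 7
    ds.PixelRepresentation = 0
    ds.PixelData = pixels.astype(np.uint8).tobytes()

    ds.save_as(out_path, write_like_original=False)
```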
The provided text, a 510(k) summary for the aPROMISE X device, details its comparison to a predicate device and includes information about performance testing. However, it does not contain specific acceptance criteria, reported device performance metrics (e.g., sensitivity, specificity, AUC), or a detailed description of a clinical study that proves the device meets specific acceptance criteria related to its diagnostic performance.
The document states that the Analytical Performance in Clinical Study "demonstrated that aPROMISE X enables the automated quantification of tracer uptake [to be] more reproducible, and efficient than those obtained manually. The study demonstrated that aPROMISE X enables the reader to achieve a higher efficiency and quantitative consistency, while maintaining the diagnostic performance of the physicians." This implies some form of performance evaluation, but the specific metrics and methodology required to answer the detailed questions below are not provided in this 510(k) summary.
Therefore, I cannot populate the table or answer most of your questions with the information available in the provided text.
Here is what I can infer or state based on the document:
1. A table of acceptance criteria and the reported device performance:
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Specific diagnostic performance metrics (e.g., sensitivity, specificity, AUC for lesion detection) | Not provided in this document. The document states that the Analytical Performance in Clinical Study demonstrated "maintaining the diagnostic performance of the physicians," but no specific metrics or associated acceptance criteria are listed. |
| Accuracy, linearity, and limit of detection of SUV and volume quantification (from the Digital Phantom Validation Study; a minimal sketch of such a check appears after this table) | "All SUV and volume quantification tests of aPROMISE X met their predetermined acceptance criteria." (Specific numerical criteria and results are not provided.) |
| Equivalent performance of aPROMISE X vs. the predicate for standard functions in marking and quantitative assessment of user-defined ROIs | "This study demonstrated the equivalent performance of aPROMISE X as compared to the previous version, predicate aPROMISE v1.2.1 (K211655) for standard functions in marking and quantitative assessments of user defined region of interest in PSMA PET/CT." (Specific metrics are not provided.) |
| Reproducibility and efficiency of automated vs. manual quantification | "aPROMISE X enables the automated quantification of tracer uptake [to be] more reproducible, and efficient than those obtained manually." (No numerical metrics provided.) |
| Reader efficiency and quantitative consistency with vs. without aPROMISE X | "aPROMISE X enables the reader to achieve a higher efficiency and quantitative consistency..." (No numerical metrics provided.) |
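The summary does not give the phantom study's numerical criteria, but accuracy and linearity checks of this kind are conventionally done by recovering SUVs from phantom regions with known ground-truth values and fitting a line. The sketch below, with invented tolerances and names, only illustrates the shape of such a test; it is not the validation protocol used for aPROMISE X.

```python
import numpy as np

def check_suv_linearity(
    true_suv: np.ndarray,          # ground-truth SUVs built into the digital phantom
    measured_suv: np.ndarray,      # SUVs recovered by the quantification pipeline
    max_rel_error: float = 0.05,   # illustrative tolerance, not the device's criterion
    min_r_squared: float = 0.99,   # illustrative tolerance, not the device's criterion
) -> bool:
    """Accuracy: per-region relative error; linearity: R^2 of a linear fit."""
    rel_error = np.abs(measured_suv - true_suv) / true_suv

    # Fit measured = slope * true + intercept and compute goodness of fit.
    slope, intercept = np.polyfit(true_suv, measured_suv, deg=1)
    fitted = slope * true_suv + intercept
    ss_res = np.sum((measured_suv - fitted) ** 2)
    ss_tot = np.sum((measured_suv - measured_suv.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot

    return bool(rel_error.max() <= max_rel_error and r_squared >= min_r_squared)
```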
2. Sample size used for the test set and the data provenance:
- Sample Size: Not specified for the "Analytical Performance in Clinical Study."
- Data Provenance: Not specified (e.g., country of origin, retrospective or prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not specified. The study compared performance to "clinicians" and "physicians" but did not detail their role in ground truth establishment or their qualifications.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- A study comparing performance was done ("compared the performance of aPROMISE X to that of clinicians"), and it suggested improvement in "efficiency and quantitative consistency" while "maintaining diagnostic performance." However, it's not explicitly stated as a formal MRMC study, and no specific effect sizes or quantitative improvements in diagnostic performance (e.g., AUC increase, sensitivity/specificity changes) with AI assistance versus without are provided.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- The phrasing "aPROMISE X enables the automated quantification of tracer uptake" suggests the algorithm performs quantification. However, the study description focuses on comparing "aPROMISE X" (a system used by "healthcare professionals") "to that of clinicians," implying it may have evaluated the system as a whole rather than a purely standalone algorithm's diagnostic capability. The section primarily highlights where the AI assists the user, such as in hotspot detection for user selection: "Pre-definition of Hotspot: Algorithm, using a machine-learning model to detect high local intensity regions of interest in the PET series." (A minimal sketch of this hotspot pre-definition concept appears after this list.)
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not specified. The study is described as comparing to "clinicians" and "physicians," which implies clinical judgment, but the gold standard (e.g., biopsy results, long-term follow-up) for determining the true presence or absence of a clinically significant lesion is not mentioned.
8. The sample size for the training set:
- Not specified. The document mentions a "machine-learning model" for hotspot detection, implying a training set was used, but its size is not disclosed.
9. How the ground truth for the training set was established:
- Not specified.
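For context on the "Pre-definition of Hotspot" quote in item 6: the device uses a machine-learning model whose details are not disclosed. As a purely illustrative stand-in, a classical pipeline that flags high-local-intensity regions in a PET volume might look like the following (SUV threshold plus connected components; all parameters are invented and are not the device's method):

```python
import numpy as np
from scipy import ndimage

def predefine_hotspots(
    suv: np.ndarray,                # PET volume converted to SUV
    suv_threshold: float = 3.0,     # invented threshold; not the device's model
    min_voxels: int = 5,            # invented size filter
) -> list[np.ndarray]:
    """Return boolean masks of candidate high-uptake regions.

    The actual device uses an undisclosed machine-learning model; this
    threshold-plus-connected-components pipeline only illustrates the idea
    of pre-computing candidate hotspots for the reader to accept or reject.
    """
    labeled, n_regions = ndimage.label(suv >= suv_threshold)
    masks = []
    for region_id in range(1, n_regions + 1):
        mask = labeled == region_id
        if mask.sum() >= min_voxels:
            masks.append(mask)
    return masks
```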
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).