3D-MSPECT
The ADAC 3D-MSPECT application is intended to provide processing and 3-dimensional display of reconstructed Cardiac SPECT studies.
3D-MSPECT is a software application designed to review and quantitatively analyze cardiac SPECT perfusion nuclear medicine patient studies. 3D-MSPECT operates as an independent application on the ADAC Pegasys system. The application provides tools for viewing standard and gated cardiac SPECT images on both a slice-by-slice basis and as a three-dimensional rendered image. Additionally, it provides a quantitative assessment of heart function by computing and displaying left ventricular chamber volume, ejection fraction, and transient ischemic dilation (TID) values and provides an assessment of the data set in comparison to similar patients. Physicians use this information to assess the anatomical and physiological functionality of the heart and analyze the presence of myocardial defects.
3D-MSPECT can be used to display the left ventricular endocardial and epicardial surfaces; polar maps indicating perfusion, wall thickening, wall motion, and reversibility; a 3D rendered image of the cardiac surfaces; and the short-axis, vertical long-axis, and horizontal long-axis slice data. These can be displayed for a single data set or as a comparison of related data sets (i.e., stress, rest, delay, or Vantage). Physicians can also use this application to create, modify, and review Normals files from patient data available in the Pegasys database.
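The quantitative values described above (ejection fraction and TID ratio) follow standard nuclear-cardiology formulas. As an illustration only, and not ADAC's actual implementation, these two metrics can be computed from left ventricular volume estimates as:

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left ventricular ejection fraction (%) from end-diastolic (EDV)
    and end-systolic (ESV) volumes: EF = 100 * (EDV - ESV) / EDV.
    Standard formula; the ADAC implementation is not described here."""
    if edv_ml <= 0:
        raise ValueError("end-diastolic volume must be positive")
    return 100.0 * (edv_ml - esv_ml) / edv_ml

def tid_ratio(stress_volume_ml: float, rest_volume_ml: float) -> float:
    """Transient ischemic dilation (TID) ratio: the LV cavity volume on
    the stress study divided by the volume on the rest study."""
    if rest_volume_ml <= 0:
        raise ValueError("rest volume must be positive")
    return stress_volume_ml / rest_volume_ml

# Example: EDV 120 mL, ESV 50 mL -> EF = 58.3%
print(round(ejection_fraction(120.0, 50.0), 1))  # 58.3
# Example: stress volume 60 mL, rest volume 50 mL -> TID = 1.2
print(round(tid_ratio(60.0, 50.0), 2))  # 1.2
```

These helper names and the volume inputs are hypothetical; 3D-MSPECT derives the underlying volumes from the reconstructed gated SPECT surfaces.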
The provided text does not contain acceptance criteria for the 3D-MSPECT device, nor does it describe a study that uses such criteria to prove the device's performance.
Instead, the document is a 510(k) Premarket Notification summary and an FDA clearance letter. It mainly focuses on:
- Device Description: What the 3D-MSPECT software does (review and quantitatively analyze cardiac SPECT perfusion nuclear medicine patient studies, providing tools for viewing images and assessing heart function).
- Indications for Use: The intended purpose of the device (processing and 3-dimensional display of reconstructed Cardiac SPECT studies).
- Technological Comparison: States that the 3D-MSPECT has similar indications for use and utilizes similar data types as predicate devices (ADAC CEQUAL and Sopha Sophy NXT), performing similar quantifications.
- Testing (briefly mentioned): "Images were generated using a prototype of the application. The quality of the images produced was verified to be similar to the quality of images produced by the predicate devices." This is the only mention of "testing" and it's a very high-level statement without specific metrics or a detailed study description.
- FDA Clearance: A letter from the FDA stating that the device is substantially equivalent to predicate devices and can be marketed.
Therefore, I cannot provide the requested information in the table or answer most of the specific questions because the necessary details are not present in the provided document.
Here's what can be inferred or explicitly stated based only on the provided text, with most items marked as "Not provided":
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Not provided | "quality of the images produced was verified to be similar to the quality of images produced by the predicate devices." (No specific metrics) |
2. Sample size used for the test set and the data provenance
- Sample Size: Not provided.
- Data Provenance: Not provided (e.g., country of origin, retrospective or prospective). The text only mentions "Images were generated using a prototype of the application."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Not provided.
4. Adjudication method for the test set
- Not provided.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it
- An MRMC study is not mentioned. The closest statement is that the "quality of the images produced was verified to be similar to the quality of images produced by the predicate devices," but this doesn't describe a comparative effectiveness study with human readers.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- The testing description is too vague to confirm if standalone performance was specifically assessed as an algorithm only without human-in-the-loop. It focuses on image quality being similar to predicate devices.
7. The type of ground truth used
- Not provided. The text only mentions "verified to be similar to the quality of images produced by the predicate devices," implying a comparison to existing, accepted methods or images, but it doesn't describe how the truth was established for the specific prototype images.
8. The sample size for the training set
- Not provided. The document primarily discusses a prototype application and its comparison to predicate devices, not the specifics of an AI model's training.
9. How the ground truth for the training set was established
- Not provided. The document does not mention a training set or how ground truth was established for it.