510(k) Data Aggregation (269 days)
VIVIX-M
VIVIX-M detectors are flat panel detectors used in mammographic applications to acquire digital images for screening and diagnosis.
VIVIX-M is a series of flat panel detector models, FXMD-2430S and FXMD-1008S, with imaging areas of 24 cm x 30 cm and 18 cm x 24 cm, respectively. The device intercepts X-ray photons, and its scintillator emits visible-spectrum photons that illuminate an array of amorphous-silicon (a-Si) photodetectors, generating electrical signals. These signals are converted to digital values, which the software acquires and processes; the resulting digital images are displayed on monitors. The detectors are intended to be integrated with an operating PC and an X-ray generator, and can be used to digitize X-ray images and transfer them for radiographic diagnosis.
VXvue Mammo is a digital X-ray imaging software designed for mammography applications. It acquires, processes, and transmits digital images from VIVIX-M detectors (FXMD-1008S and FXMD-2430S) while ensuring compliance with DICOM standards for seamless integration with PACS and other medical systems.
The provided document is a 510(k) summary for the VIVIX-M Digital X-ray detectors (models FXMD-2430S and FXMD-1008S). It states that the device is substantially equivalent to a predicate device (RSM 2430C, K170930).
However, the document does not contain the detailed acceptance criteria and a comprehensive study that proves the device meets those acceptance criteria in the format requested. Specifically:
- No explicit table of acceptance criteria and reported device performance for clinical endpoints. The document provides a table comparing the subject device to the predicate device for technical parameters (MTF, DQE) and states "Substantially Equivalent" but does not define numerical acceptance thresholds for these or for clinical performance.
- No details on sample size for the test set or data provenance for a clinical study. It mentions a "clinical image evaluation" was conducted but provides no specifics on the number of cases, patient demographics, or whether the data was retrospective or prospective, or its country of origin.
- No information on the number or qualifications of experts used for ground truth.
- No adjudication method specified for the test set.
- No Multi-Reader Multi-Case (MRMC) comparative effectiveness study details. There is no mention of human readers improving with AI assistance because this is a detector, not an AI-enabled device for interpretation.
- No standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done for clinical endpoints. This is a hardware device; its performance is assessed through image quality metrics (MTF, DQE, NPS, etc.) and a clinical image evaluation of diagnostic capability, not through an algorithm's standalone performance.
- The type of ground truth for clinical evaluation is not specified beyond "equivalent diagnostic capability."
- No training set size or ground truth establishment method for a training set. This device is a digital X-ray detector, not an AI/ML algorithm that is trained on a dataset. The performance evaluation is based on non-clinical technical measurements and a clinical image evaluation for diagnostic capability comparison to a predicate, not on a trained AI model.
The document primarily focuses on demonstrating substantial equivalence through:
- Technical performance metrics: MTF, DQE, NPS, Dynamic Range, Image Erasure, AEC Performance, and Phantom Testing. Acceptance for these is generally implied by being "equivalent to the predicate" or exceeding "specified thresholds."
- Clinical image evaluation: A general statement is made that a clinical image evaluation was conducted, confirming "equivalent diagnostic capability to the predicate device."
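The technical metrics above (MTF, DQE, NPS) are standard detector characterizations rather than clinical endpoints. As an illustration only (not the manufacturer's method), MTF is commonly estimated by Fourier-transforming a measured line spread function (LSF); the Gaussian LSF and pixel pitch below are assumed values for the sketch:

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_pitch_mm):
    """Estimate the presampled MTF from a 1-D line spread function.

    lsf: 1-D array, detector response across a line profile.
    pixel_pitch_mm: sampling interval in mm (e.g. 0.076 for a 76 um pitch).
    Returns (spatial frequencies in lp/mm, MTF normalized to 1 at f=0).
    """
    lsf = lsf - lsf.min()                # remove baseline offset
    mag = np.abs(np.fft.rfft(lsf))       # modulus of the Fourier transform
    mag /= mag[0]                        # normalize to 1 at zero frequency
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)  # cycles/mm = lp/mm
    return freqs, mag

# Assumed example: a Gaussian LSF; a broader LSF yields a lower MTF
# at any given spatial frequency.
x = np.arange(-64, 64) * 0.076           # sample positions in mm
lsf = np.exp(-x**2 / (2 * 0.1**2))       # sigma = 0.1 mm (illustrative)
freqs, mtf = mtf_from_lsf(lsf, 0.076)
```

This is why the summary can report MTF as a percentage at specific frequencies (e.g., 54.9% at 3 lp/mm): it is the value of this normalized curve at that frequency.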
Therefore, based only on the provided text, I cannot complete the table or provide the requested details about a study proving clinical acceptance criteria. The information is limited to substantiating the device's equivalence to a predicate, not demonstrating it meets specific, predefined clinical acceptance criteria through a detailed study design as might be seen for a novel AI diagnostic device.
If this were an AI device, the "acceptance criteria" would be specific performance metrics (e.g., sensitivity, specificity, AUC) with predefined thresholds derived from clinical needs. For this detector, "acceptance" is framed in terms of substantial equivalence to a legally marketed predicate device.
What is present in the document relevant to performance and equivalence:
The document outlines a non-clinical testing summary and mentions a clinical image evaluation to demonstrate substantial equivalence to a predicate device, rather than meeting specific, numerical acceptance criteria for clinical performance that would typically be associated with an AI diagnostic study.
Here's what can be extracted based on the provided text, and where information is missing:

1. Acceptance Criteria & Reported Performance Table: No explicit table for clinical endpoints is provided. The document implicitly defines "acceptance" as demonstrated substantial equivalence to the predicate device (K170930) in both non-clinical and clinical performance.
Non-clinical Performance Comparisons (as presented in the 510(k) summary, specifically for the FXMD-2430S model):

| Parameter | Predicate Device (RSM 2430C) | Subject Device (VIVIX-M FXMD-2430S) | Equivalence | Acceptance (implied) |
|---|---|---|---|---|
| MTF | 70% at 2 lp/mm, 30% at 5 lp/mm | 54.9% at 3 lp/mm, 33.1% at 5 lp/mm | Substantially Equivalent | Spatial resolution demonstrated equivalent to the predicate, exceeding specified thresholds |
| DQE | 43% at 2 lp/mm, 30% at 5 lp/mm | 59.0% at 3 lp/mm, 42.5% at 5 lp/mm | Substantially Equivalent | Comparable to the predicate, confirming equivalent imaging performance |
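The two metrics in the table are related: DQE is conventionally computed from the MTF, the normalized noise power spectrum (NNPS), and the incident photon fluence q, as DQE(f) = MTF(f)^2 / (q * NNPS(f)). A minimal sketch of that relation follows; all numeric values are illustrative assumptions, not figures from the 510(k) summary:

```python
import numpy as np

def dqe(mtf, nnps, q):
    """Detective quantum efficiency from MTF, normalized NPS, and fluence.

    mtf:  array, presampled MTF at each spatial frequency (unitless).
    nnps: array, normalized noise power spectrum at the same frequencies (mm^2).
    q:    incident photon fluence (photons/mm^2).
    """
    return mtf**2 / (q * nnps)

# Illustrative numbers only (not taken from the 510(k) summary):
mtf = np.array([1.0, 0.8, 0.55])         # e.g. at 0, 2, 3 lp/mm
nnps = np.array([2e-5, 1.6e-5, 1.1e-5])  # mm^2
q = 80000.0                              # photons per mm^2
print(dqe(mtf, nnps, q))
```

Because DQE folds noise into the resolution measurement, a detector can have a lower MTF than a predicate yet a higher DQE, as the subject device does here.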
Other Non-Clinical Tests (Results are qualitative in the summary):
- Noise Power Spectrum (NPS): "confirmed consistent noise performance across tested exposure levels."
- Dynamic Range Testing: "exhibited a wide dynamic range suitable for mammographic imaging."
- Image Erasure and Fading Test (Ghosting): "No significant ghosting or residual artifacts were observed."
- Automatic Exposure Control (AEC) Performance: "met manufacturer-defined limits, ensuring consistent image quality."
- Phantom Testing (ACR, TE, CDMAM Phantoms): "All tests demonstrated diagnostic image quality equivalent to or better than the predicate device."
Clinical Performance:
- Overall Diagnostic Capability: "the study confirmed that the new x-ray detectors VIVIX-M provide images of equivalent diagnostic capability to the predicate device."
2. Sample Size (Test Set) & Data Provenance: Sample size not specified; the document only states "A clinical image evaluation... was conducted." Data provenance (e.g., country of origin, retrospective/prospective) is also not specified.
3. Number of Experts & Qualifications for Ground Truth: Not specified.
4. Adjudication Method for Test Set: Not specified.
5. MRMC Comparative Effectiveness Study Effect Size: Not applicable. This is a medical imaging detector, not an AI-assisted diagnostic tool for human readers; there is no AI component that assists human readers. The clinical evaluation verifies the equivalent diagnostic capability of the images produced by the detector.
6. Standalone (Algorithm-Only) Performance: Not applicable in the context of an AI algorithm. This is a hardware device (an X-ray detector); its "standalone" performance is measured by its physical and image quality characteristics (MTF, DQE, NPS) and its ability to produce images comparable to the predicate for diagnostic purposes. These non-clinical tests were conducted.
7. Type of Ground Truth Used (Clinical Evaluation): The general statement is "equivalent diagnostic capability" to the predicate. The method for establishing this "diagnostic capability" as ground truth (e.g., expert consensus on clinical cases, pathological confirmation, long-term follow-up) is not explicitly detailed; it is likely based on radiologists' interpretation of the images produced by the device compared with the predicate in clinical scenarios.
8. Sample Size for Training Set: Not applicable. This device is a digital X-ray detector, not an AI/ML algorithm that requires a training set.
9. How Ground Truth for Training Set Was Established: Not applicable (see #8).