AVIEW provides CT values for pulmonary tissue from CT thoracic datasets. This software can be used to support the physician quantitatively in the diagnosis and follow-up evaluation of CT lung tissue images by providing image segmentation of sub-structures in the left and right lung (e.g., the five lobes and airway), volumetric and structural analysis, density evaluations, and reporting tools. AVIEW is also used to store, transfer, inquire and display CT data sets. AVIEW is not meant for primary image interpretation in mammography.
AVIEW is a software product that can be installed on a PC. It displays images retrieved through its interface from various storage devices using DICOM 3.0, the digital imaging and communications standard in medicine, and provides software tools for reading, manipulating, analyzing, post-processing, saving, and sending images.
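The summary does not describe how AVIEW is implemented, but the "CT values" it refers to are Hounsfield units (HU) derived from DICOM pixel data. The sketch below only illustrates how a DICOM 3.0 CT series could be read and rescaled to HU; the pydicom and numpy libraries, the function name, the `.dcm` extension, and the directory path are assumptions made for the example, not part of the AVIEW product.

```python
# Illustrative only: read a CT series from DICOM 3.0 files and convert raw
# stored pixel values to Hounsfield units. Library choice (pydicom/numpy),
# the .dcm file extension, and the example path are assumptions, not AVIEW's.
from pathlib import Path

import numpy as np
import pydicom


def load_ct_series_hu(series_dir: str) -> np.ndarray:
    """Return a z-sorted CT volume in Hounsfield units."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    # Order slices along the patient z-axis so the volume is spatially coherent.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])
    # DICOM stores raw values; RescaleSlope/RescaleIntercept map them to HU.
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept


if __name__ == "__main__":
    hu = load_ct_series_hu("chest_ct_series/")  # hypothetical directory
    print(hu.shape, float(hu.min()), float(hu.max()))
```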
The provided text describes the AVIEW software, a medical device for processing CT thoracic datasets, and its substantial equivalence to predicate devices. However, the document does not contain the specific details required to fully address your request regarding acceptance criteria and the study that proves the device meets them.
Here's a breakdown of what information is available and what is missing:
Information Available:
- Indications for Use: AVIEW "provides CT values for pulmonary tissue from CT thoracic datasets. This software can be used to support the physician quantitatively in the diagnosis, followup evaluation of CT lung tissue images by providing image segmentation of sub-structures in the left and right lung (e.g., the five lobes and airway), volumetric and structural analysis, density evaluations and reporting tools. AVIEW is also used to store, transfer, inquire and display CT data sets. AVIEW is not meant for primary image Interpretation in mammography."
- Performance Data: "Verification, validation and testing activities were conducted to establish the performance, functionality and reliability characteristics of the modified devices. The device passed all of the tests based on pre-determined Pass/Fail criteria."
- Tests Conducted:
- Unit test
- System test
- DICOM test
- LAA analysis test (see the illustrative sketch after this list)
- LAA size analysis test
- Airway wall measurement test
- Reliability test
- CT image compatibility test
- Conclusion: The device is deemed "substantially equivalent to the predicate device" based on "technical characteristics, general functions, application, and intended use," and "nonclinical tests demonstrate that the device is safe and effective."
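The nonclinical tests above are listed by name only; the summary does not define them or give pass/fail thresholds. For orientation, LAA (low attenuation area) analysis conventionally reports the fraction of lung voxels below a density threshold such as -950 HU on inspiratory CT. The sketch below is a minimal illustration under that assumption; the threshold, the `lung_mask` input (assumed to come from a separate segmentation step), and the function name are hypothetical and are not taken from the AVIEW documentation.

```python
import numpy as np


def laa_percent(hu_volume: np.ndarray, lung_mask: np.ndarray,
                threshold_hu: float = -950.0) -> float:
    """Percentage of lung voxels below an attenuation threshold (LAA%).

    hu_volume: CT volume in Hounsfield units.
    lung_mask: boolean array of the same shape marking lung voxels,
               assumed to be produced by a prior segmentation step.
    threshold_hu: -950 HU is a commonly used emphysema threshold;
                  the threshold AVIEW applies is not stated in the summary.
    """
    lung_voxels = hu_volume[lung_mask]
    if lung_voxels.size == 0:
        raise ValueError("lung_mask selects no voxels")
    below = np.count_nonzero(lung_voxels < threshold_hu)
    return 100.0 * below / lung_voxels.size
```

An "LAA size analysis" would typically go one step further and group the below-threshold voxels into connected clusters to report their size distribution, but the summary gives no detail on how AVIEW performs either analysis.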
Information Missing (and why based on the document):
- A table of acceptance criteria and the reported device performance: While various tests are listed (e.g., LAA analysis test, Airway wall measurement test), the document explicitly states these are "nonclinical tests." It does not provide specific quantitative acceptance criteria or corresponding reported device performance values for these tests. The nature of these tests appears to be functional and reliability-focused rather than clinical performance metrics. For example, it doesn't state "AVIEW achieved X% accuracy for LAA analysis against ground truth Y" or "Airway wall measurement deviation was within Z mm."
- Sample size used for the test set and the data provenance: The document mentions "CT thoracic datasets" but does not specify the sample size for any test set or the provenance (e.g., country of origin, retrospective/prospective nature) of the data used for testing.
- Number of experts used to establish the ground truth for the test set and their qualifications: The document states, "Results produced by the software tools are dependent on the interpretation of trained and licensed radiologists, clinicians and referring physicians as an adjunctive to standard radiology practices for diagnosis." However, it does not specify how many, if any, experts were used to establish ground truth for the test set, nor their specific qualifications, for the performance testing cited.
- Adjudication method for the test set: No information is provided regarding any adjudication methods (e.g., 2+1, 3+1) used for establishing ground truth for the test set.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance: The document explicitly states that AVIEW is "not meant for primary image Interpretation in mammography" and that its results "are dependent on the interpretation of trained and licensed radiologists, clinicians and referring physicians as an adjunctive to standard radiology practices for diagnosis." This suggests it's an assistive tool, but no MRMC study comparing human readers with and without AI assistance, or any effect size, is mentioned. The "Performance Data" section focuses on "nonclinical tests" for software functionality.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done: The listing of "LAA analysis test," "LAA size analysis test," and "Airway wall measurement test" implies standalone algorithmic performance was assessed in these nonclinical tests. However, specific performance metrics (e.g., accuracy, precision, recall) from a standalone evaluation are not provided.
- The type of ground truth used: For the mentioned performance tests (e.g., LAA, airway wall measurement), the type of ground truth used is not explicitly specified. It's implied that these are technical validations against known values or established methods, but whether this involved expert consensus on clinical cases, pathology, or outcomes data is not detailed (a hedged example of such a technical comparison is sketched after this list).
- The sample size for the training set: No information is provided about a training set or its size, as the document refers to "Verification, validation and testing activities" as "nonclinical tests" demonstrating substantial equivalence, not a machine learning model's development.
- How the ground truth for the training set was established: Since no training set information is provided, this cannot be answered.
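For context only: when segmentation or measurement outputs are verified against known values or established methods, a common nonclinical metric is spatial overlap with a reference mask, e.g., the Dice coefficient. The summary does not state that AVIEW was evaluated this way; the sketch below merely illustrates what such a technical comparison could look like, and the function and variable names are hypothetical.

```python
import numpy as np


def dice_coefficient(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Dice overlap between an algorithm's binary mask and a reference mask."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * float(np.logical_and(pred, ref).sum()) / float(denom)
```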
In summary, the document serves as an FDA 510(k) clearance letter and summary, which primarily focuses on demonstrating "substantial equivalence" to predicate devices through technical characteristics and "nonclinical tests" for functionality and reliability. It does not provide the detailed clinical performance study data that would include specific acceptance criteria, sample sizes (for test or training sets), expert qualifications, or ground truth establishment methods typical for AI-based diagnostic/assistive tools evaluated for quantitative clinical outcomes.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).