K Number
K192040
Device Name
AVIEW Modeler
Date Cleared
2019-12-20

(142 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

The AVIEW Modeler is intended for use as an image review and segmentation system that operates on DICOM imaging information obtained from a medical scanner. It is also used as a pre-operative software for surgical planning. 3D printed models generated from the output file are for visualization and educational purposes only and not for diagnostic use.

Device Description

The AVIEW Modeler is a software product that can be installed on a separate PC. It displays patient medical images acquired from an image acquisition device; the displayed images can be reviewed, edited, saved, and received.

  • Various display functions
    • Thickness MPR, oblique MPR, shaded volume rendering, and shaded surface rendering
    • Hybrid rendering of simultaneous volume rendering and surface rendering
  • Easy-to-use manual and semi-automatic segmentation methods (sketched in the example after this list)
    • Brush, paint-bucket, sculpting, thresholding, and region growing
    • Magic cut (based on the random walk algorithm)
  • Morphological and Boolean operations for mask generation
  • Mesh generation and manipulation algorithms
    • Mesh smoothing, cutting, fixing, and Boolean operations
  • Exports 3D-printable models in open formats, such as STL
  • DICOM 3.0 compliant (C-STORE, C-FIND)
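Taken together, the feature list amounts to an image-to-model pipeline: load a DICOM series, segment a region of interest, clean the mask morphologically, generate a mesh, and export it as STL. As an illustration only, here is a minimal sketch of that kind of pipeline using open-source Python packages (SimpleITK, scikit-image, and numpy-stl). The file path, seed point, and threshold values are hypothetical, and this is not AVIEW Modeler's actual implementation.

```python
import numpy as np
import SimpleITK as sitk
from skimage import measure
from stl import mesh as stl_mesh  # numpy-stl

# 1. Load a DICOM series (the DICOM 3.0 input described above).
reader = sitk.ImageSeriesReader()
files = reader.GetGDCMSeriesFileNames("/path/to/dicom_series")  # hypothetical path
reader.SetFileNames(files)
image = reader.Execute()

# 2. Semi-automatic segmentation: seeded region growing bounded by
#    intensity thresholds (seed voxel and bounds are hypothetical).
seed = (256, 256, 40)
mask_img = sitk.ConnectedThreshold(image, seedList=[seed], lower=300, upper=3000)

# 3. Morphological cleanup: a binary closing fills small holes in the mask.
mask_img = sitk.BinaryMorphologicalClosing(mask_img, [2, 2, 2])

# 4. Mesh generation via marching cubes, then STL export
#    (voxel spacing/scaling omitted for brevity).
mask = sitk.GetArrayFromImage(mask_img).astype(np.uint8)  # (z, y, x) order
verts, faces, _, _ = measure.marching_cubes(mask, level=0.5)
model = stl_mesh.Mesh(np.zeros(faces.shape[0], dtype=stl_mesh.Mesh.dtype))
for i, face in enumerate(faces):
    model.vectors[i] = verts[face]
model.save("segmented_model.stl")
```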
AI/ML Overview

The provided text is the 510(k) summary for the AVIEW Modeler; it focuses on substantial equivalence to predicate devices rather than on a detailed performance study addressing specific acceptance criteria. The document emphasizes software verification and validation activities.

Therefore, I cannot fully complete all sections of your request concerning acceptance criteria and device performance based solely on the provided text. However, I can extract information related to software testing and general conclusions.

Here's an attempt to answer your questions based on the available information:

1. A table of acceptance criteria and the reported device performance

The document does not provide a quantitative table of acceptance criteria with corresponding performance metrics like accuracy, sensitivity, or specificity for the segmentation features. Instead, it discusses the successful completion of various software tests.

| Acceptance Criteria (Implied) | Reported Device Performance |
| --- | --- |
| Functional Adequacy | The device "passed all of the tests based on pre-determined Pass/Fail criteria." |
| Performance Adequacy | Performance tests were conducted "according to the performance evaluation standard and method that has been determined with prior consultation between software development team and testing team" to check non-functional requirements. |
| Reliability | System tests concluded without finding any "Major" or "Moderate" defects. |
| Compatibility | STL data created by AVIEW Modeler was imported into the Stratasys printing software, Objet Studio, to validate the STL before 3D printing on an Objet260 Connex3 (implying successful validation for 3D printing). |
| Safety and Effectiveness | "The new device does not introduce a fundamentally new scientific technology, and the nonclinical tests demonstrate that the device is safe and effective." |

2. Sample sizes used for the test set and the data provenance

The document does not specify the sample size (number of images or patients) used for any of the tests (Unit, System, Performance, Compatibility). It also does not explicitly state the country of origin of the data or whether the data was retrospective or prospective.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

The document does not provide any information about the number or qualifications of experts used to establish ground truth for a test set. The focus is on internal software validation and comparison to a predicate device.

4. Adjudication method for the test set

The document does not mention any adjudication method for a test set, as it does not describe a clinical performance study involving human readers.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance

No, the provided text does not describe an MRMC comparative effectiveness study involving human readers with or without AI assistance. The study described is a software verification and validation, concluding substantial equivalence to a predicate device.

6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

The document describes various software tests (Unit, System, Performance, Compatibility) which could be considered forms of standalone testing for the algorithm's functionality and performance. However, it does not present quantitative standalone performance metrics typical of an algorithm-only study (e.g., precision, recall, Dice score for segmentation). It focuses on internal software quality and compatibility.
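For reference, a standalone segmentation study would typically report an overlap metric such as the Dice coefficient against a reference mask. A minimal sketch of that computation follows; the masks are hypothetical placeholders, since no such metrics appear in the submission.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0
```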

7. The type of ground truth used

The type of "ground truth" used is not explicitly defined in terms of clinical outcomes or pathology. For the software validation, the "ground truth" would likely refer to pre-defined correct outputs or expected behavior of the software components, established by the software development and test teams. For example, for segmentation, it would be the expected segmented regions based on the algorithm's design and previous validation efforts (likely through comparison to expert manual segmentations or another validated method, though not detailed here).

8. The sample size for the training set

The document does not mention a training set or its sample size. This is a 510(k) summary for medical image processing software (AVIEW Modeler), and while it mentions "Magic cut (based on the random walk algorithm)," it does not describe an AI model that underwent a separate training phase with a specific dataset, nor does it classify the device as having "machine learning" capabilities in the context of FDA regulation. The focus is on traditional software validation.
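To make that distinction concrete: random-walk segmentation is a classical graph-based algorithm with no learned weights, so there is nothing to train. Here is a minimal sketch using scikit-image's random_walker (an open-source implementation, not AVIEW's); the volume and seed coordinates are hypothetical stand-ins for an image and user-placed seeds.

```python
import numpy as np
from skimage.segmentation import random_walker

volume = np.random.rand(64, 64, 64)  # stand-in for image intensities

# 0 = unlabeled; 1 and 2 mark user-placed foreground/background seeds.
labels = np.zeros_like(volume, dtype=np.uint8)
labels[32, 32, 32] = 1  # foreground seed
labels[2, 2, 2] = 2     # background seed

# The solver propagates seed labels along a graph weighted by intensity
# gradients -- a deterministic computation, not a trained model.
segmentation = random_walker(volume, labels, beta=130)
```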

9. How the ground truth for the training set was established

As no training set is mentioned (see point 8), there is no information on how its ground truth would have been established.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).