Search Results

Found 3 results

510(k) Data Aggregation

    K Number
    K060840
    Device Name
    SVIEW
    Date Cleared
    2006-04-07

    (10 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K990241

    Intended Use

    SView 1.0 is a medical image management device intended for viewing, manipulating and analyzing DICOM-compliant medical images acquired from MRI scanners and other DICOM-compliant imaging devices. SView™ 1.0 can be used for real-time image viewing, manipulation and analysis that aid a cardiologist or radiologist in their diagnosis. In addition, it facilitates the physician's reporting and reviewing of patient studies.

    Device Description

    SView™ 1.0 is a tool for 2D (two-dimensional) viewing and manipulation of DICOM-compliant MR images and other DICOM-compliant images. The proposed device provides real-time image viewing, manipulation, analysis and reporting.
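DICOM compliance, mentioned throughout the summary, is at minimum detectable at the file level: DICOM Part 10 files carry a 128-byte preamble followed by the ASCII magic string "DICM". A minimal sketch of that check, purely illustrative and not part of the SView submission:

```python
def looks_like_dicom(header_bytes):
    """DICOM Part 10 files start with a 128-byte preamble
    followed by the 4-byte magic string b'DICM'."""
    return len(header_bytes) >= 132 and header_bytes[128:132] == b"DICM"

# Synthetic header standing in for the first bytes of a .dcm file.
print(looks_like_dicom(b"\x00" * 128 + b"DICM"))  # True
print(looks_like_dicom(b"JFIF" * 40))             # False
```

Real viewers would go on to parse the data set's tagged elements; the preamble check is only the entry point.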

    AI/ML Overview

    The provided 510(k) summary for the SView™ Version 1.0 PACS/Medical Image Management Device does not describe acceptance criteria or a study demonstrating that the device meets such criteria in the way typically associated with algorithms or AI models.

    This device is classified as a "PACS / Medical Image Management Device" and a "System, Image Processing." The 510(k) summary explicitly states:

    "The SView™ medical image management device contains no image digitizers and uses only lossless compression. On this basis, MRI Cardiac Services, Inc. believes that clinical investigation is not necessary."
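The lossless-compression claim is a verifiable property in principle: decompressing must reproduce the original pixel data bit-for-bit. A minimal illustration using zlib as a stand-in codec (the submission does not name the algorithm SView actually uses):

```python
import zlib

# Hypothetical pixel buffer standing in for DICOM pixel data.
pixels = bytes(range(256)) * 64

compressed = zlib.compress(pixels, level=9)
restored = zlib.decompress(compressed)

# A lossless codec reproduces the input exactly: no information is lost.
assert restored == pixels
print(len(pixels), "->", len(compressed), "bytes")
```

A lossy codec (e.g., standard JPEG) would fail this byte-for-byte round-trip, which is the distinction the submission leans on.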

    Instead of a clinical study, the submission focuses on demonstrating substantial equivalence to a predicate device (AccuView Diagnostic Imaging Workstation Software Package, K990241) by comparing features and specifications. The "Laboratory and Clinical Testing" section describes general software testing rather than a performance study against clinical endpoints or ground truth.

    Therefore, many of the requested details about acceptance criteria, performance, sample sizes, and ground truth are not applicable to the information provided in this 510(k) summary.

    Here's an analysis based on the information available:

    1. Table of Acceptance Criteria and Reported Device Performance:

    Not applicable. The submission does not define specific performance metrics or acceptance criteria for an algorithm's diagnostic performance. The substantial equivalence argument rests on functional equivalency with the predicate device for image viewing and manipulation.

    2. Sample size used for the test set and the data provenance:

    • Test Set Sample Size: Not applicable. No clinical test set for algorithmic performance evaluation is described.
    • Data Provenance: Not applicable.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    Not applicable. No ground truth establishment for a diagnostic algorithm is described.

    4. Adjudication method for the test set:

    Not applicable. No test set requiring adjudication is described.

    5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

    No. The document explicitly states that "clinical investigation is not necessary." Therefore, no MRMC study was conducted.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    Not applicable. This device is described as a "tool for 2D (two-dimensional) viewing and manipulation of DICOM compliant MR images" and not a standalone diagnostic algorithm. Its function is to aid a cardiologist or radiologist, implying a human-in-the-loop scenario, but no performance study is detailed.

    7. The type of ground truth used:

    Not applicable. No ground truth for an algorithm's diagnostic performance is mentioned.

    8. The sample size for the training set:

    Not applicable. This device is a PACS/image management system, not an AI/ML algorithm that requires a training set.

    9. How the ground truth for the training set was established:

    Not applicable. As above, no training set or ground truth for training is described.

    Summary of what the document does say about testing:

    • Laboratory and Clinical Testing: "SView 1.0 is intended for use with existing MRI images for real-time image viewing, image manipulation, and analysis that aid a cardiologist or radiologist in their diagnosis. In addition it facilitates the physician's reporting and reviewing of patient studies. The SView™ medical image management device contains no image digitizers and uses only lossless compression. On this basis, MRI Cardiac Services, Inc. believes that clinical investigation is not necessary."
    • Software Testing: "Extensive testing of the device will be performed by programmers, by nonprogrammers, quality assurance staff and by potential customers prior to commercial release." This refers to software validation and verification, not clinical performance studies.
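Verification of this kind checks behavior against the functional specification rather than clinical endpoints. As a hypothetical illustration (not from the submission), a spec-level test of a linear window/level display transform, a basic operation in any image viewer, might read:

```python
def window_level(pixel, center, width):
    """Map a raw pixel value to the 0-255 display range with a
    linear window/level transform; values outside the window
    are clamped to black or white."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    if pixel <= lo:
        return 0
    if pixel >= hi:
        return 255
    return round((pixel - lo) / (hi - lo) * 255)

# Specification-style checks, as a verification test might encode them:
assert window_level(-1000, 40, 400) == 0    # below window -> black
assert window_level(1000, 40, 400) == 255   # above window -> white
assert window_level(40, 40, 400) == 128     # window center -> mid-gray
```

Such tests confirm the software does what its specification says; they say nothing about diagnostic performance, which is exactly the distinction this summary turns on.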

    In conclusion, this 510(k) summary for SView™ 1.0 explicitly states that clinical investigation (and by extension, the detailed performance studies, ground truth establishment, and sample sizes you've asked about) was deemed unnecessary because the device is a medical image management system that does not perform diagnostic algorithms, uses lossless compression, and is claiming substantial equivalence based on functional similarity to an existing predicate device.


    K Number
    K040782
    Date Cleared
    2004-07-13

    (109 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K972903, K990241, K891931

    Intended Use

    Calcium scoring of calibrated CT images to provide quantitative measures of calcium content in the coronary arteries and aorta. The software is intended to be used under the supervision of trained physicians to monitor progression of vascular calcium, which may be useful in the prognosis of cardiovascular disease.

    N-Vivo™ Calcium Score is a calibrated software program for non-invasive identification and quantification of calcified plaques in the coronary arteries and aorta using CT images. The software runs on a standard PC with basic image processing, database and reporting functions.

    N-Vivo™ may be used to monitor progression of calcium which may be useful in the prognosis of cardiovascular disease.

    Device Description

    A software package operating on a PC which facilitates non-invasive measurements of vascular calcified plaques. The device provides Agaston, Volume and Mass Scores using Phantom Calibrated CT images.

    The key features include:

    • Hybrid calibration with external phantom and in-vivo blood pool reference.
    • Plaque definition, which includes statistical and calibrated thresholds for a calcium mass measurement.
    • Artery trace, which segments regions of the heart that include the coronary arteries and aorta.
    • Automated identification, quantification, and scoring of vascular calcium.
    • PC workstation with web-browser-based interface, which includes database, DICOM reports, serial graphs and a QA module.

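The Agatston score named in the device description (spelled "Agaston" in the summary) follows a published convention: each calcified lesion's area is weighted by its peak attenuation above a 130 HU threshold. A minimal sketch, assuming lesions are already segmented into (area, peak HU) pairs; the submission does not describe N-Vivo's actual implementation:

```python
def agatston_weight(max_hu):
    """Standard Agatston density weight for a lesion's peak HU."""
    if max_hu < 130:
        return 0  # below the conventional 130 HU calcium threshold
    if max_hu < 200:
        return 1
    if max_hu < 300:
        return 2
    if max_hu < 400:
        return 3
    return 4

def agatston_score(lesions):
    """lesions: iterable of (area_mm2, max_hu) per calcified lesion.
    Score = sum over lesions of area times density weight."""
    return sum(area * agatston_weight(hu) for area, hu in lesions)

print(agatston_score([(10.0, 250), (4.0, 450)]))  # 10*2 + 4*4 = 36.0
```

The volume and mass scores listed alongside it use the same segmentation but different weighting; mass scoring in particular is where the phantom calibration matters.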
    AI/ML Overview

    The provided text is a 510(k) summary for the N-Vivo Calcium Score device, a software package for calcium scoring from CT images. It does not contain a detailed study description with specific acceptance criteria or an explicit comparative effectiveness study. The document focuses on establishing substantial equivalence to previously marketed devices.

    However, based on the information provided, we can infer some details and highlight what is missing:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state acceptance criteria or provide a table of performance metrics. It mainly focuses on the device description and its intended use. The closest statement regarding performance is: "The calibration and software analysis improves the performance of calcium measurements from reconstructed CT images." However, no specific data demonstrating this improvement or against a defined acceptance criterion is provided.

    2. Sample Size Used for the Test Set and Data Provenance

    This information is not provided in the document.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    This information is not provided in the document.

    4. Adjudication Method for the Test Set

    This information is not provided in the document.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    A multi-reader multi-case (MRMC) comparative effectiveness study, which would involve human readers improving with AI vs. without AI assistance, is not mentioned or described in this document. The focus is on the device's ability to provide quantitative measures, not on its impact on human reader performance.

    6. Standalone Performance Study

    While the device's function is standalone (algorithm-only), a detailed standalone performance study with specific metrics and results against a ground truth is not explicitly described in this document. The statement "The calibration and software analysis improves the performance of calcium measurements" implies a standalone capability, but without specific study details.

    7. Type of Ground Truth Used

    The document states that the device provides "Agaston, Volume and Mass Scores using Phantom Calibrated CT images" and uses "Hybrid calibration with external phantom and invivo blood pool reference." This suggests that phantom-based calibration and potentially in-vivo biological references are used to establish a form of ground truth for the measurements. However, the exact nature of the ground truth for an independent test set (e.g., pathology, clinical outcomes) is not detailed. It implies a focus on accuracy of measurement against calibrated standards rather than diagnostic accuracy against a clinical outcome.
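Phantom calibration of this kind typically fits a linear map from measured HU to the known density of phantom inserts, which is then applied to patient voxels. A hedged sketch with hypothetical insert values; the actual N-Vivo calibration procedure is not described in the document:

```python
def fit_calibration(measured_hu, known_density):
    """Ordinary least-squares fit of density = a*HU + b to phantom
    inserts of known calcium density (illustrative values only)."""
    n = len(measured_hu)
    mx = sum(measured_hu) / n
    my = sum(known_density) / n
    sxx = sum((x - mx) ** 2 for x in measured_hu)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(measured_hu, known_density))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical phantom inserts: measured HU vs. known mg/cc of calcium.
hu = [50.0, 100.0, 200.0, 400.0]
density = [25.0, 50.0, 100.0, 200.0]
a, b = fit_calibration(hu, density)
print(a, b)  # slope and intercept of the calibration line
```

This is measurement calibration against a physical standard, which is consistent with the document's focus on measurement accuracy rather than diagnostic accuracy against clinical outcomes.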

    8. Sample Size for the Training Set

    This information is not provided in the document.

    9. How the Ground Truth for the Training Set Was Established

    This information is not provided in the document. However, similar to point 7, it can be inferred that phantom-based calibration and in-vivo blood pool references played a role in the development and refinement (training) of the algorithms.

    In summary, the 510(k) summary provided is a high-level overview establishing substantial equivalence. It lacks the detailed study information typically found in a clinical validation report, including specific acceptance criteria, sample sizes, expert qualifications, adjudication methods, and detailed performance metrics.


    K Number
    K024149
    Device Name
    PRIMELUNG
    Date Cleared
    2003-02-21

    (67 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K990241, K012106

    Intended Use

    Comprehensive software package for visualization and analysis of thoracic CT datasets, which is intended to help the reading physician to analyze regions of the lung, such as nodules and other lung parameters, and to generate an automatic report.

    Device Description

    The AccuImage PrimeLung software module is an additional software option to K990241, the AccuView Diagnostic Imaging Workstation with AccuScore, AccuAnalyze, AccuShade, AccuVRT and AccuMIP plug-ins. The AccuShade plug-in is not currently marketed; the AccuMIP plug-in is currently marketed under the name ProjectorPro. The AccuImage PrimeLung plug-in provides visualization and analysis tools for viewing regions of the lung and for generating reports with patient information, images, results and recommendations.

    AI/ML Overview

    Here's an analysis of the provided 510(k) summary regarding the acceptance criteria and the study that proves the device meets those criteria:

    Evaluation of Acceptance Criteria and Device Performance for PrimeLung (K024149)

    Based on the provided 510(k) summary for PrimeLung (K024149), the information regarding acceptance criteria and performance studies is limited and primarily focuses on functional verification rather than clinical accuracy for diagnostic tasks.

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document does not explicitly state quantitative acceptance criteria for diagnostic performance (e.g., sensitivity, specificity for nodule detection or characterization). Instead, the "test results" section describes functional verification.

    | Acceptance Criteria (Inferred/Stated) | Reported Device Performance |
    | --- | --- |
    | Graphic User Interface (GUI) conforms to functional specification | GUI, menus, and buttons conform as per the PrimeLung functional specification. |
    | All functionality works as described in the functional specification | All functionality works as described in the PrimeLung Functional Specification. |
    | Auto-matching comparison tool provides reliable results | 100% matching accuracy on the specified data sets. |
    | Report generator can be created and results printed | Report generator can be created and results can be printed. |
    | Segmentation of lung nodules | Yes (per comparison table, but no performance metrics provided) |
    | Volume measurements and comparator tool for nodule matching | Yes (per comparison table, but no performance metrics provided) |
    | Visualization tools (MIP, MPR, 3D volume rendering) | Yes (per comparison table, but no performance metrics provided) |
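One of the listed visualization tools, maximum intensity projection (MIP), reduces to taking a per-pixel maximum across slices of the CT volume. A minimal sketch of the general technique, not of PrimeLung's implementation:

```python
def mip(volume):
    """Maximum intensity projection along the slice axis.
    volume is a list of 2D slices (lists of rows); each output
    pixel is the maximum value at that position across slices."""
    rows, cols = len(volume[0]), len(volume[0][0])
    return [
        [max(sl[r][c] for sl in volume) for c in range(cols)]
        for r in range(rows)
    ]

# Two tiny 2x2 slices standing in for a CT stack.
slices = [
    [[1, 2], [3, 4]],
    [[5, 0], [2, 9]],
]
print(mip(slices))  # [[5, 2], [3, 9]]
```

MPR and 3D volume rendering are likewise deterministic geometric operations, which is consistent with the submission verifying them functionally rather than clinically.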

    2. Sample Size Used for the Test Set and Data Provenance

    The document states: "The testing performed showed that auto-matching comparison tool provides very reliable results with 100% of matching accuracy on the specified data sets."

    • Sample Size: The exact sample size used for the "specified data sets" is not provided.
    • Data Provenance: The country of origin and whether the data was retrospective or prospective is not specified.
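The auto-matching tool's task, pairing nodules between a baseline and a follow-up scan, is commonly approached by centroid proximity. A hedged sketch of one such approach (greedy nearest-centroid matching with a hypothetical distance tolerance; the actual PrimeLung method is not described):

```python
import math

def match_nodules(baseline, followup, tol_mm=5.0):
    """Greedy nearest-centroid matching between two nodule lists,
    each a list of (x, y, z) centroids in mm. A baseline nodule
    matches the closest unclaimed follow-up nodule within tol_mm."""
    matches = {}
    taken = set()
    for i, p in enumerate(baseline):
        best, best_d = None, tol_mm
        for j, q in enumerate(followup):
            if j in taken:
                continue
            d = math.dist(p, q)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
            taken.add(best)
    return matches

# Baseline nodule 0 pairs with follow-up nodule 1, and vice versa.
print(match_nodules([(0, 0, 0), (20, 0, 0)], [(21, 1, 0), (1, 0, 0)]))
```

On well-separated nodules such matching is essentially exact, which may explain a 100% figure on a small curated data set without implying perfect performance in general.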

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • The document does not specify the number of experts or their qualifications used to establish ground truth for any of the testing. The "100% matching accuracy" for the comparison tool implies that there was a reference standard, but how that standard was derived is not detailed.

    4. Adjudication Method for the Test Set

    • The document does not mention any adjudication method (e.g., 2+1, 3+1, none) for the test set.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    • No, an MRMC comparative effectiveness study that assesses human reader improvement with AI assistance versus without AI assistance was not mentioned or reported in this 510(k) summary. The summary focuses on the device's functional equivalence to predicate devices and its own functional performance.

    6. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • The reported performance for the "auto-matching comparison tool" achieving "100% matching accuracy on the specified data sets" seems to be a standalone algorithm performance metric. However, this is a very specific function and not a comprehensive diagnostic standalone claim (e.g., for nodule detection or characterization).
    • The overall context of the device as "intended to help the user to analyze lung nodules" suggests it's an assistive tool, not a standalone diagnostic. Therefore, a comprehensive standalone performance study for diagnostic tasks was not explicitly stated or reported.

    7. The Type of Ground Truth Used

    • For the "auto-matching comparison tool," the ground truth was likely established by manual matching performed by a human expert or a pre-defined reference, against which the automated matching was compared. The exact nature of this ground truth (e.g., consensus, pathology, follow-up) for general nodule analysis is not specified.
    • For other functionalities like GUI conformity and report generation, the "ground truth" is simply adherence to the functional specification.

    8. The Sample Size for the Training Set

    • The document does not mention any training set or its sample size. This is consistent with the era of the submission (2002-2003) where deep learning and large-scale training datasets were not standard practice for medical device submissions of this type. The device's functionality appears to be based on more traditional image processing algorithms rather than machine learning requiring a distinct training phase.

    9. How the Ground Truth for the Training Set Was Established

    • As no training set is mentioned (see point 8), the method for establishing its ground truth is also not applicable/provided.

    Summary of Study Limitations and Nature:

    The "study" described in the 510(k) summary is primarily a functional verification test (also known as verification and validation testing) rather than a clinical performance study. It confirms that the software's user interface works as designed, its features perform their intended actions, and a specific "auto-matching comparison tool" achieves high accuracy for matching tasks on limited, unspecified data. It does not provide clinical performance metrics like sensitivity, specificity, or reader agreement for diagnostic tasks such as nodule detection or characterization, which are common in more recent AI/CADe submissions. The submission relies on substantial equivalence to predicate devices that also provided visualization and analysis tools.

