Search Results

Found 3 results

510(k) Data Aggregation

    K Number
    K993574
    Date Cleared
    2000-01-18

    (89 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Predicate For
    Why did this record match?
    Reference Devices:

    K962933, K983110, K962138, K946244/A1, K933018/S1

    Intended Use

    Imaging of:

    • The Whole Body (including head, abdomen, pelvis, limbs and extremities, spine, neck, TMJ, heart, blood vessels). [Application terms include MRCP (MR Cholangiopancreatography), MR Urography, MR Myelography, MR Fluoroscopy, SAS (Surface Anatomy Scan), Dynamic Scan, and Cine Imaging.]
    • Fluid Visualization
    • 2D/3D Imaging
    • MR Angiography/MR Vascular Imaging (additional indication for v3.1, v3.2, and v3.3 only)
    • Water/Fat Imaging (additional indication for v3.2 and v3.3 only)
    • Perfusion/Diffusion Imaging (additional indication for v3.3 only)

    Device Description

    Versions v3.0/v3.1/v3.2/v3.3 software are a combination of modifications and the addition of new sequences to the existing software, which facilitate the acquisition and reconstruction of MR images. The four versions have the same base software features with certain additional features available in each subsequent version (see Comparison Table, Appendix B, for detailed description). A brief description follows:

    • v3.0: Based on v2.5 (K990260) with MR Angio and FASE sequences removed
    • v3.1: Based on v2.5 (K990260)
    • v3.2: Based on v2.6 (K990260)
    • v3.3: Based on v2.6 (K990260) with addition of Perfusion/Diffusion imaging

    AI/ML Overview

    This 510(k) premarket notification describes an upgrade to an existing Magnetic Resonance Diagnostic Device, the OPART™ (Model MRT-600), to software versions v3.0, v3.1, v3.2, and v3.3, along with optional hardware items. The core of the submission is to demonstrate substantial equivalence to previously cleared versions and to justify the new functionality and the increased SAR limit.

    Here's an analysis based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The submission focuses on safety parameters and imaging performance. The "acceptance criteria" appear to be specified maximums for safety and a general "specification volume" for imaging. The "reported device performance" is essentially that the device operates within these stated limits and produces sample images.

    | Parameter | Acceptance Criteria | Reported Device Performance |
    |---|---|---|
    | Maximum static magnetic field strength | Specified (0.35 Tesla) | 0.35 Tesla (inherent to the device model) |
    | Rate of change of magnetic field | Specified (19 T/second) | 19 T/second (inherent to the device model) |
    | Maximum radio frequency power deposition (SAR) | < 1.5 W/kg | < 1.5 W/kg (increase from < 0.4 W/kg for previous versions) |
    | Acoustic noise levels (maximum) | 98.4 dB(A) | 98.4 dB(A) (inherent to the device model) |
    | Specification volume (Head) | 10 cm dsv | Sample phantom images and clinical images presented (Appendices K & L) |
    | Specification volume (Body) | 20 cm dsv | Sample phantom images and clinical images presented (Appendices K & L) |
    | New software functionality | Equivalent to predicate devices and performs as intended | Sample phantom images and clinical images presented (Appendices K & L) |
    | Optional hardware items | Equivalent to predicate devices | Demonstrated equivalence to cleared predicate devices |
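
    To make the pass/fail logic of this table concrete, the sketch below checks the reported safety values against the stated maximums. It is purely illustrative and not part of the submission: the parameter names and dictionary layout are invented here, while the numeric limits come from the table above.

```python
# Illustrative only: a simple check of the reported safety parameters against the
# acceptance limits listed in the K993574 table above. Not part of the submission.
ACCEPTANCE_LIMITS = {
    "static_field_T": 0.35,          # maximum static magnetic field strength (Tesla)
    "dB_dt_T_per_s": 19.0,           # rate of change of magnetic field (T/second)
    "whole_body_sar_W_per_kg": 1.5,  # maximum RF power deposition (SAR)
    "acoustic_noise_dBA": 98.4,      # maximum acoustic noise level, dB(A)
}

REPORTED = {
    "static_field_T": 0.35,
    "dB_dt_T_per_s": 19.0,
    "whole_body_sar_W_per_kg": 1.5,  # reported as "< 1.5 W/kg", i.e. at or below the limit
    "acoustic_noise_dBA": 98.4,
}


def within_limits(reported: dict, limits: dict) -> dict:
    """Per-parameter pass/fail: reported value does not exceed the stated maximum."""
    return {name: reported[name] <= limit for name, limit in limits.items()}


for name, ok in within_limits(REPORTED, ACCEPTANCE_LIMITS).items():
    print(f"{name}: {'within limit' if ok else 'EXCEEDS limit'}")
```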

    2. Sample Size Used for the Test Set and Data Provenance

    The document explicitly states: "Sample phantom images and clinical images are presented for new sequences (see Appendices K & L)."

    • Sample Size: Not explicitly stated as a number of patients or images. The term "sample" suggests a limited number, likely a qualitative representation rather than a statistically powered quantitative study.
    • Data Provenance: Not explicitly stated (e.g., country of origin). The submission is from Toshiba America MRI, Inc., headquartered in South San Francisco, CA, which might imply U.S. data, but this is not confirmed.
    • Retrospective or Prospective: Not explicitly stated. Given the context of a 510(k) for software upgrades, it's possible that both retrospective clinical images (to showcase existing capabilities with new software) and prospective images (to demonstrate new sequences) were used, but the document does not specify.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    This information is not provided in the document. The submission focuses on technical specifications, safety, and substantial equivalence to predicate devices, not on the diagnostic performance validation by experts.

    4. Adjudication Method for the Test Set

    This information is not provided in the document. As there's no mention of expert review or diagnostic accuracy studies, an adjudication method is not applicable to the reported data.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement With vs. Without AI Assistance

    An MRMC comparative effectiveness study was not performed. This submission is for an upgrade to an MRI device's software and optional hardware, not for an AI-powered diagnostic tool. Therefore, there's no "AI assistance" component or improvement effect size to report.

    6. If a Standalone Performance Study (i.e., Algorithm Only, Without Human-in-the-Loop) Was Done

    A standalone algorithm performance study was not described. The device is an MRI diagnostic system, which inherently requires human operation and interpretation. The "software functionality" refers to image acquisition and reconstruction, not autonomous diagnostic algorithms.

    7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)

    The document does not describe the establishment of ground truth in the context of diagnostic accuracy. The "ground truth" for this submission appears to be:

    • Technical Specifications: Measured values for safety parameters (e.g., SAR, acoustic noise).
    • Image Quality: Qualitative assessment based on "sample phantom images and clinical images" (Appendices K & L) to demonstrate that the new sequences produce recognizable and potentially diagnostically useful images, implicitly compared to expected image quality from existing MRI systems.
    • Substantial Equivalence: The primary "ground truth" for the entire submission is the demonstration that the modified device is as safe and effective as predicate devices, which implies the predicate devices already met certain performance standards.

    8. The Sample Size for the Training Set

    This information is not provided and is not applicable. The software described (v3.0/v3.1/v3.2/v3.3) facilitates image acquisition and reconstruction, and adds new imaging sequences. It is not an AI/ML algorithm that would require a "training set" in the conventional sense of machine learning.

    9. How the Ground Truth for the Training Set Was Established

    This information is not provided and is not applicable, as there is no "training set" for an AI/ML algorithm mentioned in this submission.


    Summary of the Study Proving Acceptance Criteria:

    The study proving the device meets the acceptance criteria is primarily an analysis demonstrating substantial equivalence to previously cleared predicate devices (K990260, K981475, K983110, K962933, K962138, K946244/A1, K933018/S1).

    Specific evidence includes:

    • Technical Specifications Compliance: The document lists safety parameters (static field strength, rate of change of magnetic field, SAR, acoustic noise) and states that the device operates within these specified limits. The increase in SAR limit from <0.4 W/kg to <1.5 W/kg is specifically justified in Appendix J, indicating a technical evaluation was performed to ensure safety at this higher limit.
    • Qualitative Image Review: "Sample phantom images and clinical images are presented for new sequences (see Appendices K & L)." This implicitly demonstrates that the device, with its new software features, can acquire and reconstruct images that are visually acceptable and consistent with traditional MRI output for diagnostic purposes. This is a qualitative assessment rather than a quantitative diagnostic accuracy study.
    • Functional Equivalence: The new software functionalities (e.g., multi-phase/multi-slice for cardiac gating, dual-channel RF coil array, Perfusion/Diffusion imaging) and optional hardware items are described and asserted to be substantially equivalent to capabilities already cleared in other predicate devices.

    In essence, the "study" is a compilation of engineering specifications, safety analyses, and qualitative imaging demonstrations, all framed to show that the upgraded device maintains its safety and effectiveness characteristics and that the new features are comparable to those of already-cleared devices. No detailed clinical trials or diagnostic performance studies are described in this summary that would demonstrate, in a statistical sense, that diagnostic-accuracy acceptance criteria were met.
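
    For context on the SAR limit increase noted above (from < 0.4 W/kg to < 1.5 W/kg), whole-body SAR is conventionally defined as the RF power absorbed by the patient, averaged over body mass and over a specified exposure window. The expression below is a general textbook definition, not one taken from the submission:

```latex
% General (illustrative) definition of whole-body average SAR: RF power absorbed
% by the patient, averaged over body mass m and an exposure window T.
\[
\mathrm{SAR}_{\text{whole-body}}
  \;=\; \frac{1}{m}\,\frac{1}{T}\int_{0}^{T} P_{\mathrm{abs}}(t)\,\mathrm{d}t
  \qquad \left[\mathrm{W/kg}\right]
\]
```

    Under a definition of this form, the upgrade raises the permitted whole-body average from below 0.4 W/kg to below 1.5 W/kg, which is why the submission points to the safety justification in Appendix J.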


    K Number
    K983110
    Date Cleared
    1999-02-25

    (174 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Predicate For
    Why did this record match?
    Reference Devices:

    K973799, K962138

    Intended Use

    Imaging of:

    • The Whole Body (including head, abdomen, breast, heart, pelvis, joints, neck, TMJ, spine, blood vessels, limbs, and extremities). [Application terms include MRCP (MR Cholangiopancreatography), MR Urography, MR Myelography, MR Fluoroscopy, SAS (Surface Anatomy Scan), Dynamic Scan, Cine Imaging, and Cardiac tagging.]
    • Fluid Visualization
    • 2D/3D Imaging
    • MR Angiography/MR Vascular Imaging
    • Blood Oxygenation Level Dependent (BOLD) Imaging
    Device Description

    This submission consists of a software upgrade to the MRT-50GP/E2 (FLEXART™), MRT-50GP/H2 (FLEXART™/Hyper), MRT-150/F1 (VISART™), and MRT-150/F2 (VISART™/Hyper) systems.

    AI/ML Overview

    Here's an analysis of the provided 510(k) summary relating to acceptance criteria and the study conducted:

    Disclaimer: The provided document (K983110) is a 510(k) Premarket Notification summary from 1998 for a software upgrade to existing Magnetic Resonance Diagnostic Devices (FLEXART™ and VISART™). It focuses on demonstrating substantial equivalence to previously cleared devices. It primarily discusses safety parameters and imaging performance specifications rather than a typical clinical study with acceptance criteria for a new AI/CAD device.

    This document predates widespread AI in medical imaging and the standard AI/CAD study structure. Therefore, many of the requested fields (like sample size for test/training sets, ground truth establishment methods, MRMC studies, effect sizes, and standalone performance) are not directly addressed in the provided text as they pertain to a different type of device evaluation.


    1. Table of Acceptance Criteria and Reported Device Performance

    Given the nature of the document, the "acceptance criteria" are more akin to specifications that the software upgrade maintains, and the "reported device performance" indicates that these specifications are met or comparable to the predicate devices.

    | Parameter/Criteria | Acceptance Criteria (V3.5 s/w) | Reported Device Performance (V4.0 s/w) | Outcome/Met? |
    |---|---|---|---|
    | **Safety Parameters** | | | |
    | Maximum static field strength (FLEXART™) | 0.5 T | 0.5 T | Met |
    | Maximum static field strength (VISART™) | 1.5 T | 1.5 T | Met |
    | Rate of change of magnetic field (FLEXART™) | 11 T/sec | 11 T/sec | Met |
    | Rate of change of magnetic field (FLEXART™/Hyper) | 13.3 T/sec | 13.3 T/sec | Met |
    | Rate of change of magnetic field (VISART™) | 13.3 T/sec | 13.3 T/sec | Met |
    | Rate of change of magnetic field (VISART™/Hyper) | 19.5 T/sec | 19.5 T/sec | Met |
    | Maximum RF power deposition (FLEXART™) | < 0.4 W/kg | < 0.4 W/kg | Met |
    | Maximum RF power deposition (VISART™) | < 1.0 W/kg | < 1.0 W/kg | Met |
    | Acoustic noise levels (FLEXART™) | 100.2 dB(A) | 100.2 dB(A) | Met |
    | Acoustic noise levels (FLEXART™/Hyper) | 98.5 dB(A) | 98.5 dB(A) | Met |
    | Acoustic noise levels (VISART™) | 105.3 dB | 105.3 dB | Met |
    | Acoustic noise levels (VISART™/Hyper) | 105.1 dB | 105.1 dB | Met |
    | **Imaging Performance Parameters** | | | |
    | Specification volume: Head | 16 cm dsv | 16 cm dsv | Met |
    | Specification volume: Body | 28 cm dsv | 28 cm dsv | Met |
    | Functionality: New sequences (e.g., cardiac tagging, Cine imaging) | Not explicitly listed as "acceptance criteria" but included as new features | "Sample clinical images are presented for new sequences" and substantial equivalence claimed | Implied Met |
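
    The "Outcome/Met?" column amounts to verifying that each V4.0 specification is unchanged from the corresponding, previously cleared V3.5 specification. The snippet below is a hypothetical illustration of that comparison for the FLEXART™ rows only; the dictionary names are invented, while the values come from the table above.

```python
# Illustrative only: "Met" in the table reduces to each V4.0 specification matching
# the previously cleared V3.5 specification. Names are hypothetical; values are the
# FLEXART(TM) rows from the table above.
V3_5_SPECS = {
    "max_static_field_T": 0.5,
    "rate_of_change_T_per_s": 11.0,
    "max_rf_power_W_per_kg": 0.4,
    "acoustic_noise_dBA": 100.2,
}
V4_0_SPECS = dict(V3_5_SPECS)  # the upgrade reports identical safety specifications


def outcomes(cleared: dict, upgraded: dict) -> dict:
    """Return 'Met' where the upgraded value equals the previously cleared value."""
    return {k: ("Met" if upgraded[k] == cleared[k] else "Review") for k in cleared}


print(outcomes(V3_5_SPECS, V4_0_SPECS))
# {'max_static_field_T': 'Met', 'rate_of_change_T_per_s': 'Met', ...}
```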

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not specify a distinct "test set" in the context of an algorithmic performance evaluation. The evaluation primarily relies on:

    • Engineering specifications and measurements: For safety and imaging performance parameters (e.g., static field strength, SAR, acoustic noise, specification volume).
    • Demonstration of "Sample clinical images": For new sequences. The number of images or patients is not specified.
    • Comparison to predicate devices: The core of a 510(k) submission is to show substantial equivalence.

    Data Provenance: Not explicitly stated; however, the manufacturing site is "Toshiba Corporation, Japan". The context suggests the data likely came from internal testing and validation, potentially supplemented by clinical data from relevant medical sites if "sample clinical images" implies actual patient scans. The evaluation is retrospective in the sense that it compares against existing cleared versions.


    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    This information is not provided in the document. The evaluation focuses on engineering specifications and "sample clinical images" which are presented, but details on expert review or ground truth establishment are absent.


    4. Adjudication Method for the Test Set

    This information is not provided in the document. Given the type of submission (software upgrade, substantial equivalence), a formal adjudication process for a clinical test set is not explicitly mentioned as being part of the presented summary.


    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not reported or described in the provided document. The submission is a 510(k) for a software upgrade, demonstrating substantial equivalence, not a comparative effectiveness study comparing human readers with and without AI assistance.


    6. If a Standalone Performance Study Was Done

    A standalone performance study in the context of evaluating an algorithm only without human-in-the-loop performance was not described or reported. This submission concerns a software upgrade to an MRI device, not an AI or CAD algorithm.


    7. The Type of Ground Truth Used

    The "ground truth" for this submission primarily consists of:

    • Engineering measurements and specifications: For safety and scanner performance parameters (e.g., measured static field strength, SAR, acoustic noise, image volume).
    • Clinical observation/demonstration: "Sample clinical images" are presented to show the functionality and quality of new sequences. The specific type of "ground truth" for these images (e.g., pathology, clinical follow-up) is not specified.

    8. The Sample Size for the Training Set

    The concept of a "training set" in the context of machine learning is not applicable to this document. This submission does not describe an AI or machine learning device that requires a training set. It's a software upgrade to an existing MRI system.


    9. How the Ground Truth for the Training Set Was Established

    As stated in point 8, the concept of a "training set" is not applicable to this submission.


    K Number
    K964608
    Device Name
    MRT-35A V9.0
    Date Cleared
    1997-02-04

    (78 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    Reference Devices:

    K961842, K962138

    Intended Use

    Imaging of the whole body (including the head, abdomen, heart, pelvis, spine, blood vessels, limbs and extremities), fluid visualization, 2D/3D Imaging, MR Angiography, MR Fluoroscopy.

    Device Description

    The MRT-35A v9.0 is a modification of the MRT-35A system which uses a 0.35T superconducting magnet. The MRT-35A v9.0 is an incremental upgrade to an MRT-35A system which is configured with version 8 hardware and software. The 9.0 upgrade replaces the version 8 computer system. The computer architecture, operational characteristics and user software follow the same design considerations cleared by Flexart™ and Visart™ systems. This 9.0 upgrade makes no change to the MRT-35A version 8 series magnet, gradient system or coil set.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the MRT-35A v9.0 device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the MRT-35A v9.0 are implicitly defined by its equivalency to previously cleared devices (MRT-35A v8.0 and Flexart™) and its demonstrated adherence to "consensus standards requirements" for image quality. The performance is reported by comparing key safety and image performance parameters to these predicate devices.

    | Acceptance Criteria Category | Specific Parameter | Acceptance Criterion (Implicitly Based on Predicate Device Performance) | Reported MRT-35A v9.0 Performance |
    |---|---|---|---|
    | Safety Parameters | Maximum static field strength | 0.35 T (MRT-35A v8.0) / 0.5 T (Flexart™) | 0.35 T |
    | Safety Parameters | Rate of change of magnetic field | 6.83 T/sec (MRT-35A v8.0) / 11 T/sec (Flexart™) | 6.83 T/sec |
    | Safety Parameters | Max. radio frequency power deposition | 0.34 W/kg (MRT-35A v8.0) / < 0.256 W/kg (Flexart™) | 0.3 W/kg |
    | Safety Parameters | Acoustic noise levels | 101.6 dB (max) (MRT-35A v8.0) / 100.2 dB (max) (Flexart™) | 110.4 dB (max) |
    | Image Performance | Specification Volume: Head | 10 cm dsv (MRT-35A v8.0) / 10.4 cm dsv (Flexart™) | 10 cm dsv |
    | Image Performance | Specification Volume: Body | 20 cm dsv (MRT-35A v8.0) / 10.4 cm dsv (Flexart™) | 20 cm dsv |
    | Image Performance | Signal-to-Noise ratio | Conformance with consensus standards requirements | Demonstrated conformance |
    | Image Performance | Uniformity | Conformance with consensus standards requirements | Demonstrated conformance |
    | Image Performance | Slice Profiles | Conformance with consensus standards requirements | Demonstrated conformance |
    | Image Performance | Geometric Distortion | Conformance with consensus standards requirements | Demonstrated conformance |
    | Image Performance | Slice Thickness/Interslice Spacing | Conformance with consensus standards requirements | Demonstrated conformance |

    2. Sample size used for the test set and the data provenance

    The document does not explicitly state the sample size for a test set in terms of clinical cases or patient numbers. It mentions that "Sample phantom images and clinical images were presented for all new sequences." This implies a qualitative assessment rather than a quantitative, numerically defined test set.

    The data provenance (country of origin of the data, retrospective or prospective) is not mentioned. Given the nature of a 510(k) submission for a device upgrade, it's likely previous clinical data from the predicate devices or internal testing data were leveraged, but this is not definitively stated.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document does not specify the number of experts or their qualifications used to establish ground truth for the test set. The assessment appears to be based on "consensus standards requirements" for image quality, which implies an understanding of established medical imaging quality metrics by the reviewers.

    4. Adjudication method for the test set

    The document does not describe any specific adjudication method (e.g., 2+1, 3+1) for the test set. The evaluation seems to be based on a comparison to "consensus standards requirements" and visual assessment of "sample phantom images and clinical images."

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with vs. without AI assistance

    No MRMC comparative effectiveness study is mentioned. This device is an upgrade to an existing MRI system and does not involve AI for interpretation or improvement of human reader performance.

    6. If a standalone performance study (i.e., algorithm only, without human-in-the-loop) was done

    This section is not applicable. The device is an MRI system, not an algorithm, and its performance is inherently tied to human operation and interpretation of images.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The ground truth for image performance appears to be based on "consensus standards requirements" for various image quality metrics (Signal-to-Noise ratio, Uniformity, Slice Profiles, Geometric Distortion, and Slice Thickness/Interslice Spacing). For the clinical images, the implicit ground truth would be the expected visual quality and diagnostic utility that a radiologist would expect from an MRI system. There's no mention of pathology or outcomes data being used for the performance evaluation in this summary.
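
    As an illustration of what "conformance with consensus standards requirements" for signal-to-noise ratio typically involves, the sketch below implements one commonly used phantom SNR measurement, the two-acquisition difference method described in NEMA MS 1. It is background only and hypothetical; the summary does not state which measurement procedure was actually used.

```python
import numpy as np


def two_acquisition_snr(img_a: np.ndarray, img_b: np.ndarray, roi) -> float:
    """Phantom SNR from two identically acquired images (difference method).

    Signal: mean of the first image inside a central ROI. Noise: standard
    deviation of the difference image in the same ROI divided by sqrt(2),
    because subtracting two independent acquisitions doubles the noise variance.
    """
    ys, xs = roi
    signal = img_a[ys, xs].mean()
    noise = (img_a[ys, xs] - img_b[ys, xs]).std(ddof=1) / np.sqrt(2)
    return float(signal / noise)


# Synthetic stand-in for two repeated uniform-phantom acquisitions.
rng = np.random.default_rng(0)
truth = np.full((256, 256), 1000.0)
scan1 = truth + rng.normal(0.0, 20.0, truth.shape)
scan2 = truth + rng.normal(0.0, 20.0, truth.shape)
roi = (slice(96, 160), slice(96, 160))
print(f"Estimated SNR: {two_acquisition_snr(scan1, scan2, roi):.1f}")  # ~50 for sigma = 20
```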

    8. The sample size for the training set

    The document does not describe a training set. As this is an upgrade to an MRI system, the "training" would have been the development and optimization of the underlying software and hardware components, rather than a machine learning training set in the modern sense.

    9. How the ground truth for the training set was established

    Not applicable, as no training set (in the machine learning context) is described. The development of the system's "software technology" and "applications software" would have been guided by established engineering principles and medical imaging requirements, rather than a specific "ground truth" derived for a training set.
