Found 5 results

510(k) Data Aggregation

    K Number
    K120207
    Device Name
    VISIA
    Date Cleared
    2012-04-23

    (90 days)

    Regulation Number
    892.2050
    Intended Use

    Visia™ is a medical image processing software application intended for the visualization of images from various sources (e.g., Computed Tomography (CT), Magnetic Resonance (MR), etc). The system provides viewing, quantification, manipulation, communication, and printing of medical images. Visia™ is not meant for primary diagnostic interpretation of mammography.

    Device Description

    Visia™ is a medical imaging software platform that allows processing, review, and analysis of multidimensional digital images acquired from a variety of medical imaging modalities. Visia™ offers flexible workflow options to aid clinicians in the evaluation of patient anatomy and pathology. The Visia™ system integrates within typical clinical workflow patterns through receiving and transferring medical images over a computer network. The software can be loaded on a standard off-the-shelf personal computer (PC) and can operate as a stand-alone workstation or in a distributed server client configuration across a computer network. Images can be displayed based on physician preferences using configurable viewing options or hanging protocols. Visia™ provides the clinician with a broad set of viewing and analysis tools to annotate, measure, and output selected image views or reports.
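For readers unfamiliar with what "configurable viewing options" involve technically, the core display operation in viewers of this kind is window/level mapping of raw scanner intensities to screen grayscale. The sketch below is a generic illustration using an assumed soft-tissue CT window, not code from the submission:

```python
import numpy as np

def window_level(image: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map raw scanner intensities to 8-bit display values.

    Values below (center - width/2) clip to black (0), values above
    (center + width/2) clip to white (255), with linear scaling between.
    """
    lo = center - width / 2.0
    hi = center + width / 2.0
    scaled = (image.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(scaled, 0.0, 1.0) * 255.0).round().astype(np.uint8)

# Hypothetical CT values: air, soft tissue, enhancing vessel (HU),
# displayed with a typical soft-tissue window (center 40, width 400).
ct_slice = np.array([[-1000.0, 40.0, 240.0]])
print(window_level(ct_slice, center=40.0, width=400.0).tolist())  # [[0, 128, 255]]
```

The same transform underlies interactive window/level adjustment: dragging changes `center` and `width`, and the mapping is reapplied to the unchanged source pixels.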

    AI/ML Overview

    The provided 510(k) summary for MeVis Medical Solutions AG's Visia™ device primarily focuses on demonstrating substantial equivalence to a predicate device (Vitrea® K071331) and discusses general nonclinical testing.

    Based on the provided text, there is no detailed information about specific acceptance criteria, a dedicated study proving performance against these criteria, or the methodology typically found in studies for AI/CADe devices. The device, Visia™, is described as a medical image processing software platform for visualization, quantification, manipulation, and printing, rather than an AI/CADe device that performs diagnostic interpretations or provides automated analysis results for which detailed performance metrics would be assessed. The submission explicitly states: "Diagnosis is not performed by the software but by Radiologists, Clinicians and referring Physicians." and "Visia™ is not meant for primary diagnostic interpretation of mammography."

    Therefore, I cannot populate most of the requested fields as the information is not present in the provided document.

    Here's an assessment based on the available information:

    1. Table of acceptance criteria and the reported device performance:

    The acceptance criteria are not stated explicitly or quantitatively; the reported performance consists of general statements from the submission:

    • Functional Equivalence to Predicate Device: The submission states: "Nonclinical and performance testing results are provided in the 510(k) and demonstrate that the predetermined acceptance criteria are met." It also claims "The design, function, and specifications of Visia™ are similar to the identified legally marketed predicate device." And "The new device and predicate devices are substantially equivalent in the areas of technical characteristics, general function, and intended use. The new device does not raise any new potential safety risks and is equivalent in performance to the existing legally marketed devices." The validation test plan "was designed to evaluate all input functions, output functions, and actions performed by the software in each operational mode" and "passed all in-house testing criteria including validating design, function, and specifications."
    • Safety and Effectiveness: "The Visia™ labeling contains instructions for use and necessary cautions, warnings and notes to provide for safe and effective use of the device." "Risk Management is ensured via MeVis Medical Solutions AG's Risk Management procedure, which is used to identify potential hazards." "These potential hazards are controlled via software development and verification testing." And "Nonclinical tests demonstrate that the device is safe, effective, and is substantially equivalent to the predicate device."

    2. Sample size used for the test set and the data provenance:

    • Sample Size: Not specified. The document refers generally to "nonclinical and performance testing" and a "Validation Test Plan" but does not provide details on the number of cases or datasets used.
    • Data Provenance: Not specified (e.g., country of origin, retrospective/prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not applicable/Not specified. The device is a viewing/processing platform, not a diagnostic aid that establishes "ground truth." The submission states "Diagnosis is not performed by the software but by Radiologists, Clinicians and referring Physicians."

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Not applicable/Not specified.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

    • No, an MRMC study was not described or implied. The device is explicitly stated as "not meant for primary diagnostic interpretation of mammography" and does not perform diagnosis; therefore, a study on human reader improvement with AI assistance would not be relevant to its intended use as described.

    6. If a standalone (i.e., algorithm only without human-in-the-loop) performance assessment was done:

    • Not applicable. The device is a "medical imaging software platform" that provides "viewing, quantification, manipulation, and printing." It is a tool for clinicians, not a standalone diagnostic algorithm.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):

    • Not applicable. As a viewing and processing software, the concept of "ground truth" for its performance isn't in the same vein as for a diagnostic AI. The "ground truth" for showing it works would likely relate to the accuracy of its display, measurements, and functional operations compared to predefined specifications or those of the predicate device, which isn't detailed in terms of a specific "ground truth" type here.

    8. The sample size for the training set:

    • Not applicable. This device is described as general medical image processing software, not an AI/ML algorithm that requires a training set in the conventional sense.

    9. How the ground truth for the training set was established:

    • Not applicable.

    Summary of Device and Evidence provided:

    The Visia™ device is a medical image processing software platform. Its 510(k) submission primarily relies on demonstrating substantial equivalence to a predicate device (Vitrea® K071331) and on internal "nonclinical and performance testing" to validate its design, function, and specifications. The criteria for acceptance appear to be primarily functional verification against design requirements and comparison to the predicate device's capabilities, rather than quantitative performance metrics for a diagnostic aid that would analyze medical images or perform AI tasks. The submission emphasizes that the software does not perform diagnosis and is not intended for primary diagnostic interpretation.


    K Number
    K120484
    Device Name
    VISIA ONCOLOGY
    Date Cleared
    2012-03-27

    (39 days)

    Regulation Number
    892.2050
    Intended Use

    Visia Oncology is a medical software application intended for the visualization of images from a variety of image devices. The system provides viewing, quantification, manipulation, and printing of medical images. Visia Oncology is a noninvasive image analysis software package designed to support the physician in routine diagnostic oncology, staging and follow-up. Flexible layouts and automated image registration facilitate the synchronous display and navigation of multiple datasets for viewing data and easy follow-up comparison. The application provides a range of interactive tools specifically designed for segmentation and volumetric analysis of findings. The integrated reporting helps the user to track findings and note changes, such as shape or size, over time.

    Device Description

    Visia™ Oncology is a noninvasive medical image processing software application intended for the visualization of images from various sources such as Computed Tomography systems or from image archives. The system provides viewing, quantification, manipulation, and printing of medical images. Visia™ Oncology integrates within typical clinical workflow patterns through receiving and transferring medical images over a computer network. The software can be loaded on a standard off-the-shelf personal computer (PC) and can operate as a stand-alone workstation or in a distributed server-client configuration across a computer network. Visia™ Oncology is designed to support the physician in routine diagnostic oncology, staging and follow-up. Flexible layouts and automated image registration facilitate the synchronous display and navigation of multiple datasets for viewing data and easy follow-up comparison. The application provides a range of interactive tools specifically designed for segmentation and volumetric analysis of findings. The integrated reporting helps the user to track findings and note changes, such as shape or size, over time.
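To make concrete what "volumetric analysis of findings" and follow-up size comparison generally entail, here is a generic sketch with hypothetical numbers, not Visia™ Oncology's actual algorithm: a lesion volume is the voxel count of a segmentation mask scaled by voxel size, and change over time is reported as a percentage.

```python
import numpy as np

def lesion_volume_ml(mask: np.ndarray, voxel_spacing_mm: tuple) -> float:
    """Volume of a binary segmentation mask in millilitres.

    mask: boolean array, True inside the segmented finding.
    voxel_spacing_mm: (dz, dy, dx) voxel dimensions in millimetres.
    """
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0  # mm^3 -> mL

def percent_change(baseline_ml: float, followup_ml: float) -> float:
    """Relative size change between two timepoints, in percent."""
    return 100.0 * (followup_ml - baseline_ml) / baseline_ml

# Hypothetical follow-up: a 10x10x10-voxel lesion at 1 mm isotropic spacing
baseline = np.zeros((32, 32, 32), dtype=bool)
baseline[:10, :10, :10] = True
followup = np.zeros((32, 32, 32), dtype=bool)
followup[:10, :10, :5] = True  # lesion shrank by half

v0 = lesion_volume_ml(baseline, (1.0, 1.0, 1.0))  # 1.0 mL
v1 = lesion_volume_ml(followup, (1.0, 1.0, 1.0))  # 0.5 mL
print(round(percent_change(v0, v1), 1))  # -50.0
```

In follow-up reporting of this kind, the segmentation masks come from the interactive segmentation tools, and the percent change is what the integrated report would track over time.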

    AI/ML Overview

    The provided text indicates that "Visia™ Oncology" is a medical image processing software. However, the document does not contain specific acceptance criteria, a detailed study description, or performance metrics for the device. Instead, it focuses on demonstrating substantial equivalence to predicate devices for regulatory clearance.

    Therefore, I cannot provide a table of acceptance criteria and reported device performance, nor details about a specific study proving the device meets acceptance criteria, an MRMC study, standalone performance, or training/test set details based on this document.

    Here's what can be extracted based on the information provided, assuming the "nonclinical testing" mentioned broadly refers to the evaluation of the device:

    1. Table of Acceptance Criteria and Reported Device Performance:

    Acceptance Criteria: Not specified
    Reported Device Performance: Not specified

    Explanation: The document states that "Validation testing indicated that as required by the risk analysis, designated individuals performed all verification and validation activities and that the results demonstrated that the predetermined acceptance criteria were met." However, the specific acceptance criteria themselves (e.g., minimum accuracy for a particular task, specific tolerance for volumetric measurements, success rate for image registration) and the actual reported performance metrics against those criteria are not detailed in this 510(k) summary.

    2. Sample size used for the test set and the data provenance:

    • Sample size for the test set: Not specified.
    • Data provenance: Not specified (e.g., country of origin, retrospective or prospective). The document only mentions "images from various sources such as Computed Tomography systems or from image archives."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of experts: Not specified.
    • Qualifications of experts: Not specified beyond the general statement that "Diagnosis is not performed by the software but by Radiologists, Clinicians and referring Physicians." There's no mention of specific experience levels or board certifications for anyone involved in establishing ground truth for testing.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Not specified.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

    • MRMC study done: No, not mentioned in the document. The document describes the software as a tool to "support the physician" and provides "interactive tools," but it doesn't detail a study measuring improvement in human reader performance with or without the AI assistance.
    • Effect size of improvement: Not applicable, as no MRMC study is detailed.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • The document implies the software is a standalone application but doesn't describe a standalone performance study of the algorithm itself in isolation from human interpretation. It emphasizes that "Diagnosis is not performed by the software but by Radiologists, Clinicians and referring Physicians." So, if "standalone" refers to the algorithm making independent diagnoses or interpretations without human oversight, then no such study is described, as that is explicitly not its intended use.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Not specified.

    8. The sample size for the training set:

    • Not specified. The document does not describe a machine learning training process or a training set.

    9. How the ground truth for the training set was established:

    • Not applicable, as no training set or machine learning model requiring ground truth for training is described. The device is characterized as "medical image processing software" that provides "viewing, quantification, manipulation, and printing." While it has "automated image registration" and "interactive tools specifically designed for segmentation and volumetric analysis," the underlying methods are not detailed as AI/ML that would require a distinct training phase.

    K Number
    K113701
    Device Name
    VISIA NEURO
    Date Cleared
    2012-02-16

    (62 days)

    Regulation Number
    892.2050
    Intended Use

    Visia™ Neuro is a medical image processing software application intended for the visualization of images from various sources such as Magnetic Resonance Imaging systems or from image archives. The system provides viewing, quantification, manipulation, and printing of medical images. Visia™ Neuro provides both analysis and viewing capabilities for anatomical and physiologic/functional imaging datasets, including blood oxygen dependent (BOLD) fMRI, diffusion, fiber tracking, dynamic review, and vessel visualization. Data can be visualized in both 2D and 3D views.

    BOLD fMRI Review: The BOLD fMRI feature is useful in identifying small susceptibility changes arising from neuronal activity during performance of a specific task.

    Diffusion Review: The diffusion review feature is intended for visualization and analysis of the diffusion of water molecules through brain tissue.

    Fiber Tracking Review: The fiber tracking feature uses the directional portion of the diffusion vector to track and visualize white matter structures within the brain.

    Dynamic Review: Dynamic review feature is intended for visualization and analysis of MRI dynamic studies, showing changes in contrast over time, where such techniques are useful or necessary.

    Vessel Visualization: The vessel feature is used to identify and visualize the vascular structures of the brain.

    3D Visualization: The 3D visualization feature allows image data to be reconstructed as 3D objects that are visualized and manipulated on a 2D screen.

    Device Description

    Visia™ Neuro is a medical image processing software application intended for the visualization of images from various sources such as Magnetic Resonance Imaging systems or from image archives. The system provides viewing, quantification, manipulation, and printing of medical images.

    Visia™ Neuro integrates within typical clinical workflow patterns through receiving and transferring medical images over a computer network. The software can be loaded on a standard off-the-shelf personal computer (PC) and can operate as a stand-alone workstation or in a distributed server-client configuration across a computer network.

    The software provides functionality for processing and analyzing both anatomical and physiologic/functional imaging datasets. Specifically, the software includes user defined processing modules for image registration, blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI), diffusion imaging, fiber tracking, dynamic imaging, and vessel imaging. Processed images are stored as separate files from the original data such that the original data is preserved.

    Images may be displayed based on physician preferences using configurable layouts, or hangings. Visia™ Neuro provides the clinician with a broad set of viewing and analysis tools in both 2D and 3D. The software includes tools to annotate, measure, and output selected image views or user defined reports.
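The diffusion and fiber-tracking features described above are built on diffusion-tensor quantities. As a generic illustration (not MeVis's implementation), fractional anisotropy (FA), the scalar that fiber-tracking methods typically threshold on, is computed from the three eigenvalues of the diffusion tensor:

```python
import numpy as np

def fractional_anisotropy(eigenvalues) -> float:
    """FA from the three eigenvalues of a diffusion tensor.

    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||, in [0, 1]:
    0 for isotropic diffusion, approaching 1 for strongly directional
    diffusion (e.g. along white-matter fiber bundles).
    """
    lam = np.asarray(eigenvalues, dtype=np.float64)
    md = lam.mean()
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    if den == 0.0:
        return 0.0
    return float(np.sqrt(1.5) * num / den)

# Hypothetical eigenvalues in mm^2/s: isotropic vs. fiber-like diffusion
print(fractional_anisotropy([1.0e-3, 1.0e-3, 1.0e-3]))  # 0.0 (isotropic)
print(round(fractional_anisotropy([1.7e-3, 0.2e-3, 0.2e-3]), 2))  # 0.87
```

Fiber tracking then follows the principal eigenvector from voxel to voxel, usually stopping where FA falls below a user-set threshold.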

    AI/ML Overview

    The provided documentation for Visia™ Neuro does not contain specific acceptance criteria or a detailed study that proves the device meets such criteria in terms of quantitative performance metrics for medical diagnosis or image interpretation.

    Instead, the submission focuses on demonstrating substantial equivalence to a predicate device (DC Neuro, K081262) through non-clinical testing and verification/validation activities of the software itself. The document states that the software passed "all in-house testing criteria" and that "the results demonstrated that the predetermined acceptance criteria were met." However, these acceptance criteria are not explicitly defined in terms of clinical performance (e.g., accuracy, sensitivity, specificity for identifying pathologies).

    Here's a breakdown of the information that is available in the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance:

    No specific clinical acceptance criteria for diagnostic performance (e.g., sensitivity, specificity, AUC) are mentioned. The document primarily discusses functional and technical acceptance criteria related to software performance and safety.

    • Software Functionality: Passed all in-house testing criteria for input functions, output functions, and actions in each operational mode.
    • Safety and Effectiveness: Risk management procedures identified potential hazards, which were controlled via software development and verification & validation testing.
    • Technological Characteristics (Substantial Equivalence): Substantially equivalent to the predicate device (DC Neuro, K081262) in technical characteristics, general function, application, and intended use. Does not raise new safety risks.

    2. Sample size used for the test set and the data provenance:

    • Test Set Description: The document refers to "the complete system configuration" being "assessed and tested at the manufacturer's facility." It also mentions "Validation Test Plan" results.
    • Sample Size: Not specified. It only refers to "all verification activities."
    • Data Provenance: Not specified, but given it was "in-house testing," it's likely internal, potentially simulated or based on historical data readily available to the manufacturer. It doesn't specify if it's retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • The document states: "Diagnosis is not performed by the software but by Radiologists, Clinicians and referring Physicians." and "A physician, providing ample opportunity for competent human intervention, interprets images and information being displayed and printed."
    • However, for the validation testing of the software itself, there's no mention of experts establishing ground truth for evaluating diagnostic performance. The validation appears to be focused on software functionality and technical aspects rather than clinical outcome.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    Not applicable or not mentioned, as the validation described is for software functionality and not for diagnostic accuracy requiring expert adjudication.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

    Not applicable. The document describes a software medical device for visualization and analysis, not an AI-assisted diagnostic tool requiring MRMC studies to assess human reader improvement.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    The device is explicitly described as a tool for visualization and analysis, with diagnoses made by physicians. Therefore, a "standalone algorithm only" performance study in a diagnostic context is not relevant to its intended use as described. The software's performance was evaluated in terms of its functions, not its diagnostic accuracy.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    No ground truth type is specified for evaluating the device's clinical performance because the validation described is centered on software functionality and technical equivalence.

    8. The sample size for the training set:

    Not applicable. This is a medical image processing software application, not a machine learning or AI algorithm that requires a "training set" in the context of learning to perform a diagnostic task.

    9. How the ground truth for the training set was established:

    Not applicable, as there is no mention of a training set for an AI/ML algorithm.


    K Number
    K113337
    Date Cleared
    2011-12-30

    (46 days)

    Regulation Number
    892.1000
    Device Name
    VISIA DYNAMIC REVIEW
    Intended Use

    Visia™ Dynamic Review is a software package intended for use in viewing and analyzing magnetic resonance imaging (MRI) studies. Visia™ Dynamic Review supports evaluation of dynamic MR data.

    Visia™ Dynamic Review automatically registers serial patient motion to minimize the impact of patient motion and visualizes different enhancement characteristics (parametric image maps). Furthermore, it performs other user-defined post-processing functions such as image subtractions; multi-planar reformats and maximum intensity projections. The resulting information can be displayed in a variety of formats, including a parametric image overlaid onto the source image. Visia™ Dynamic Review can also be used to provide measurements for diameters, areas and volumes. Furthermore, Visia™ Dynamic Review can evaluate the uptake characteristics of segmented tissues.

    Visia™ Dynamic Review also displays images from a number of other imaging modalities; however, these images must not be used for primary diagnostic interpretation.

    When interpreted by a skilled physician, Visia™ Dynamic Review provides information that may be useful in diagnosis. Patient management decisions should not be made based solely on the results of Visia™ Dynamic Review analysis.
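A common example of the "parametric image maps" of enhancement characteristics described above, shown here only as a generic sketch with hypothetical values rather than the device's actual method, is a per-voxel percent-enhancement map computed from a baseline frame and a post-contrast frame of a dynamic series:

```python
import numpy as np

def percent_enhancement_map(baseline: np.ndarray, peak: np.ndarray) -> np.ndarray:
    """Per-voxel percent signal enhancement: 100 * (peak - baseline) / baseline.

    Voxels with zero baseline signal are mapped to 0 to avoid division by zero.
    """
    baseline = baseline.astype(np.float64)
    peak = peak.astype(np.float64)
    out = np.zeros_like(baseline)
    nz = baseline > 0
    out[nz] = 100.0 * (peak[nz] - baseline[nz]) / baseline[nz]
    return out

# Hypothetical 2x2 slice: pre-contrast vs. peak-enhancement signal
base = np.array([[100.0, 0.0], [50.0, 200.0]])
pk = np.array([[180.0, 5.0], [50.0, 260.0]])
print(percent_enhancement_map(base, pk).tolist())  # [[80.0, 0.0], [0.0, 30.0]]
```

A viewer would typically color-code this map and overlay it on the source image, which is the display format the intended-use statement describes.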

    Device Description

    Visia™ Dynamic Review is a software package intended for use in viewing and analyzing magnetic resonance imaging (MRI) studies. Visia™ Dynamic Review supports evaluation of dynamic MR data.

    Visia™ Dynamic Review integrates within typical clinical workflow patterns through receiving and transferring medical images over a computer network. The software can be loaded on a standard off-the-shelf personal computer (PC) and can operate as a stand-alone workstation or in a distributed server-client configuration across a computer network.

    Visia™ Dynamic Review automatically registers serial patient motion to minimize the impact of patient motion and visualizes different enhancement characteristics (parametric image maps). Furthermore, it performs other user-defined post-processing functions such as image subtractions; multi-planar reformats and maximum intensity projections.

    The resulting information can be displayed in a variety of formats, including a parametric image overlaid onto the source image. Images can also be displayed based on physician preferences using configurable viewing options or hanging protocols.

    Visia™ Dynamic Review provides the clinician with a broad set of viewing and analysis tools to annotate, measure, and output selected image views or user defined reports. Furthermore, Visia™ Dynamic Review can evaluate the uptake characteristics of segmented tissues.

    Visia™ Dynamic Review also displays images from a number of other imaging modalities; however, these images must not be used for primary diagnostic interpretation.
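Two of the user-defined post-processing functions named above, image subtraction and maximum intensity projection (MIP), can be sketched generically as follows (illustrative numpy code with made-up data, not the device's implementation):

```python
import numpy as np

def subtraction_image(pre: np.ndarray, post: np.ndarray) -> np.ndarray:
    """Enhancement image: post-contrast minus pre-contrast, clipped at zero."""
    return np.clip(post.astype(np.int32) - pre.astype(np.int32), 0, None)

def max_intensity_projection(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Collapse a 3D volume to 2D by keeping the brightest voxel along `axis`."""
    return volume.max(axis=axis)

# Hypothetical 2x2x2 pre- and post-contrast volumes
pre = np.array([[[10, 20], [30, 40]], [[10, 20], [30, 40]]])
post = np.array([[[15, 20], [30, 90]], [[10, 25], [35, 40]]])

enh = subtraction_image(pre, post)
print(max_intensity_projection(enh, axis=0).tolist())  # [[5, 5], [5, 50]]
```

Subtraction suppresses non-enhancing background so that contrast uptake stands out, and the MIP then condenses the enhancing structures into a single projection image.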

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the Visia™ Dynamic Review device, structured according to your requested information:

    1. A table of acceptance criteria and the reported device performance

    Based on the provided K113337 510(k) Summary, specific quantitative acceptance criteria and corresponding reported device performance values are not explicitly detailed. The document focuses on the regulatory submission process and general affirmations of safety and effectiveness through nonclinical testing.

    Acceptance Criteria: Not explicitly defined in the provided document. The document states: "The Validation Test Plan was designed to evaluate all input functions, output functions, and actions performed by the software in each operational mode and followed the process documented in the Validation Test Plan." And "Validation testing indicated that as required by the risk analysis, designated individuals performed all verification activities and that the results demonstrated that the predetermined acceptance criteria were met."
    Reported Device Performance: No specific quantitative performance metrics are provided. The document states: "The complete system configuration has been assessed and tested at the manufacturer's facility and has passed all in-house testing criteria." And "Nonclinical tests demonstrate that the device is safe, effective, and is substantially equivalent to the predicate device."

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The provided K113337 510(k) Summary does not specify the sample size used for any test set or the data provenance (country of origin, retrospective/prospective). It only mentions "in-house testing criteria" and "Validation Test Plan."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    The provided K113337 510(k) Summary does not provide information on the number or qualifications of experts used to establish ground truth for any test set.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The provided K113337 510(k) Summary does not describe any adjudication method used for a test set.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    A multi-reader multi-case (MRMC) comparative effectiveness study was not described or performed in the provided K113337 510(k) Summary. The device is software for viewing and analyzing MRI studies, and the document explicitly states, "Diagnosis is not performed by the software but by Radiologists, Clinicians and referring Physicians." There is no mention of AI assistance or human reader improvement with or without AI.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    The K113337 510(k) Summary does not explicitly describe a standalone performance study in terms of quantitative metrics. It states that the "complete system configuration has been assessed and tested at the manufacturer's facility and has passed all in-house testing criteria." However, it clarifies that the software's role is to provide information for a skilled physician's interpretation, rather than performing diagnosis independently: "Diagnosis is not performed by the software but by Radiologists, Clinicians and referring Physicians." and "A physician, providing ample opportunity for competent human intervention interprets images and information being displayed and printed."

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The provided K113337 510(k) Summary does not specify the type of ground truth used for its internal testing or validation studies.

    8. The sample size for the training set

    The provided K113337 510(k) Summary does not mention a training set sample size. This is consistent with the nature of the device, which is described as "Medical Image Processing Software" that performs functions like motion registration, parametric image mapping, subtractions, and multi-planar reformats. These are generally rule-based or algorithmic image processing tasks rather than machine learning models that require explicit training sets in the typical sense.

    9. How the ground truth for the training set was established

    As no training set is mentioned or implied for the core functionalities of the device, the document does not describe how ground truth for a training set was established.


    K Number
    K101134
    Date Cleared
    2010-08-09

    (109 days)

    Regulation Number
    886.4300
    Device Name
    VISIAN NANOPOINT 2.0 INJECTOR MODEL LP604430
    Intended Use

    The Visian® nanoPOINT™ 2.0 Injector System is a device intended to fold and insert STAAR Surgical Collamer® Phakic One Piece Intraocular Lenses, Model Visian® ICL, for surgical placement in the human eye.

    Device Description

    The Visian® nanoPOINT™ 2.0 Injector System is a sterile, single-use device intended to fold and insert a STAAR Surgical Collamer® Phakic One Piece Intraocular Lens, Model Visian® ICL, through a surgical procedure in a human eye. The system provides a tubular pathway through a corneal incision allowing delivery of a phakic IOL into the human eye. This device has 3 basic components: a syringe-type injector with a silicone cushion tip plunger, a 33° bevel-down cartridge tip, and a loading block.

    AI/ML Overview

    This submission describes a medical device, the Visian® nanoPOINT™ 2.0 Injector System, and its substantial equivalence to a previously cleared device. A traditional acceptance-criteria-and-study section, as seen with AI/ML devices or novel technologies, is therefore not fully applicable. The relevant information from the provided text is nonetheless organized below according to the requested categories, as they apply to a 510(k) submission of this kind.

    Nature of the Device and Submission:

    The Visian® nanoPOINT™ 2.0 Injector System is an intraocular lens injector system, a Class I medical device. This 510(k) submission (K101134) is a declaration of substantial equivalence to a predicate device (K092023, Naviject Sub2-1P Injector Set) rather than a novel device requiring extensive de novo clinical trials to prove efficacy against specific acceptance criteria. The key argument for substantial equivalence is that the new device is identical to the predicate device in its technological characteristics, design, materials, and operating principle, with only a modification to the indications for use to specifically name the lens model it is intended for.

    Given this, the "acceptance criteria" and "study" are framed around demonstrating that the new device is as safe and effective as the predicate device because it is essentially the same device.


    1. A table of acceptance criteria and the reported device performance

    The "acceptance criteria" for the Visian® nanoPOINT™ 2.0 Injector System were met by demonstrating its substantial equivalence to the predicate device (K092023, Naviject Sub2-1P IOL Injector Set). The performance reported is that the device is identical to the predicate device in all critical aspects.

    | Feature / Acceptance Criteria Category | Predicate Device Specification (K092023) | Reported Device Performance (Visian® nanoPOINT™ 2.0 Injector System) | Meets Criteria? |
    |---|---|---|---|
    | Product Description | Sterile, single-use device intended to fold and insert a STAAR Surgical Collamer® phakic one piece intraocular lens. The system provides a tubular pathway through a corneal incision. | Identical | Yes |
    | Intended Use | Insertion of intraocular lenses as per approved labeling. | Folding and insertion of STAAR Surgical Collamer® Phakic One Piece Intraocular Lenses, Model Visian® ICL. | Yes (only difference is the specific lens name) |
    | Design | 3 basic components: syringe-type injector with silicone cushion tip plunger, 33º bevel-down cartridge tip, loading block. | Identical | Yes |
    | Materials | ABS, Polydimethylsiloxane (Silicone), Polypropylene with GMS additive. | Identical | Yes |
    | Mechanical Safety | Validated for specific IOL models. | Validated for STAAR Surgical Collamer® Phakic One Piece Intraocular Lenses, Model Visian® ICL (V08-138). | Yes |
    | Manufacturing | Per internal operating procedures. | Identical | Yes |
    | Operating Principle | Phakic IOL loaded into cartridge, pushed through and delivered into the human eye through a 2.2 mm surgical incision. | Identical | Yes |
    | Packaging | Blister trays with labeled Tyvek material lids and labeled boxes. | Blister trays with pre-printed labeling on Tyvek material lids and boxes. | Yes |
    | Biocompatibility | Biocompatibility tests for K092023, substantially equivalent to K070669. | Injector materials are the same as the previously cleared K070669 IOL injector; biocompatibility tests for that injector were acceptable. | Yes |
    | Shelf Life | Three years. | Identical (three years) | Yes |
    | Sterility | Sterile (EO). | Identical (sterile, EO) | Yes |
    | Manufacturer | Medicel AG. | Identical (Medicel AG) | Yes |

    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: Not applicable in the traditional sense of a clinical or performance test set. The submission relies on the established performance and safety of the legally marketed predicate device (K092023).
    • Data Provenance: The "data" primarily comes from the regulatory clearance of the predicate device (K092023) and an earlier device (K070669) from which the biocompatibility data was carried forward. This is retrospective in nature, drawing from previous approvals and pre-existing validation reports. The country of origin for the earlier validation studies is not specified in this document, but Medicel AG is based in Switzerland.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not applicable. The ground truth for this type of submission is based on regulatory precedent and the established safety and effectiveness of a predicate device, as determined by the FDA. It does not involve expert consensus on new data for this specific submission, beyond the FDA's regulatory review process itself.


    4. Adjudication method for the test set

    Not applicable. There was no "test set" in the context of clinical or performance testing requiring adjudication. The adjudication was the FDA's regulatory review process determining substantial equivalence.


    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    Not applicable. This device is not an AI/ML diagnostic tool, nor does it involve "human readers" or "AI assistance." It is a mechanical device for implanting intraocular lenses.


    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Not applicable. This is a mechanical device, not an algorithm.


    7. The type of ground truth used

    The "ground truth" for this substantial equivalence determination is regulatory precedent and established safety/effectiveness data of the predicate device. Specifically:

    • The Visian® nanoPOINT™ 2.0 Injector System is declared "identical" to the K092023 Naviject Sub2-1P IOL Injector Set.
    • Biocompatibility data was inherited from the previously cleared K070669 IOL injector, which was deemed substantially equivalent to K092023.
    • Mechanical safety was validated for the specific lens model, building on validations done for the predicate device.

    8. The sample size for the training set

    Not applicable. This is not an AI/ML device requiring a training set.


    9. How the ground truth for the training set was established

    Not applicable. This is not an AI/ML device requiring a training set with established ground truth.

