Search Results
Found 42 results
510(k) Data Aggregation
(90 days)
Stereotaxic Instrument (21 CFR 882.4560, product code OLO); Medical Image Communications Device (21 CFR 892.2020)
The ExactechGPS is intended for use during preoperative planning and during stereotaxic surgical procedures to aid the orthopedic surgeon in locating anatomical structures and aligning the endoprosthesis with the anatomical structures, provided that the required anatomical landmarks can be identified on the patient's preoperative CT scan.
The ExactechGPS Total Ankle Case Manager Application is specifically indicated for creating Total Ankle Navigation cases, uploading CT scans for reconstruction, downloading the reconstruction, and exporting the reconstructed case to the Total Ankle Navigation Application.
The ExactechGPS Total Ankle Navigation is specifically indicated for Total Ankle Arthroplasty using the Vantage ankle system to aid the surgeon in locating anatomical structures and aligning the tibial and talar components with the anatomical structures.
The ExactechGPS Total Ankle Application proposed by Blue Ortho in this submission is a new clinical software application.
The ExactechGPS Total Ankle Application (TAA) system is intended to be used during stereotaxic surgical procedures to aid the surgeon in locating anatomical structures and aligning the endoprosthesis with the patient's bony anatomy, provided that the required anatomical landmarks can be identified on the patient's preoperative CT scan.
The ExactechGPS Total Ankle Application is an Image Guided Surgery, or Navigation, system designed to guide surgeons during the preparation of the tibia and talar bones as part of a total ankle arthroplasty procedure. The ExactechGPS Total Ankle Application requires patient CT-scan data to undergo segmentation prior to being imported into the software, as part of reconstructing the bone model.
It encompasses two software applications:
- The ExactechGPS Total Ankle Case Manager Application is specifically indicated for creating Total Ankle Navigation cases, uploading CT scans for reconstruction, downloading the reconstruction, and exporting the reconstructed case to the Total Ankle Navigation Application.
- The ExactechGPS Total Ankle Navigation Application is specifically indicated for Total Ankle Arthroplasty using the Vantage ankle system to aid the surgeon in locating anatomical structures and aligning the tibial and talar components with the anatomical structures.
The ExactechGPS Total Ankle Navigation runs on the Blue Ortho ExactechGPS System cleared per 510(k) #K213877. The system is composed of ExactechGPS Trackers, which are rigidly fixed to the patient's bony anatomy, and other components (ExactechGPS Station and ExactechGPS camera) that are not in contact with the patient.
The provided text is a 510(k) FDA clearance letter and a summary of safety and effectiveness for a medical device (ExactechGPS® Total Ankle Application). While it discusses indications for use, device description, and comparison to a predicate device, it explicitly states, "This submission includes or references the following non-clinical testing: Software verification testing to ensure all design outputs meet all specified requirements; Software validation to ensure software specifications conform to user needs and intended uses."
Crucially, the document does NOT contain detailed performance data to establish acceptance criteria or describe a specific study proving the device meets those criteria with statistical rigor, using a defined test set, expert ground truth, or adjudication methods. It mentions "non-clinical testing" for software verification and validation, but these are general statements about software development processes, not specific clinical or even simulated performance studies with quantitative results against defined acceptance criteria.
Therefore, I cannot fulfill your request for the following information based on the provided text:
- A table of acceptance criteria and the reported device performance: The document does not define specific performance acceptance criteria beyond general software quality statements, nor does it report quantitative performance.
- Sample size used for the test set and the data provenance: No test set or data provenance is detailed.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: No information on ground truth establishment or experts is provided.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set: Not available.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance: This is a navigation system, not an AI-assisted diagnostic tool. An MRMC study would be irrelevant. The document does not describe any human-in-the-loop performance studies or comparative effectiveness studies.
- If a standalone (i.e., algorithm only without human-in-the-loop performance) was done: The document describes a surgical navigation system, which inherently involves human interaction. There's no mention of a "standalone" algorithm performance.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.): No ground truth definition is provided.
- The sample size for the training set: No training set or machine learning model is explicitly described as part of the performance evaluation. The device uses patient CT-scan data for reconstruction, but this is not delineated as a "training set" in the context of AI model development that would typically require such detail.
- How the ground truth for the training set was established: Not applicable, as no training set for an AI model is described.
In summary, the provided document is a regulatory approval letter and a summary of the device's characteristics and intended use, not a detailed study report on device performance against specific, quantifiable acceptance criteria. It states that safety and effectiveness are supported by "non-clinical testing" for software verification and validation, but these are high-level statements without the specifics requested in your prompt.
(252 days)
in a technical environment, which incorporates a Medical Image Communications Device (MICD) (21 CFR 892.2020)
CT Perfusion V1.0 is an automatic calculation tool indicated for use in radiology. The device is an image processing software allowing computation of parametric maps from CT Perfusion data and extraction of volumes of interest based on numerical thresholds applied to the aforementioned maps. Computation of mismatch between extracted volumes is automatically provided.
The device is intended to be used by trained professionals with medical imaging education including but not limited to, physicians and medical technicians in the imaging assessment workflow by extraction and communication of metrics from CT Perfusion dataset.
The results of CT Perfusion V1.0 are intended to be used in conjunction with other patient information and, based on professional judgment, to assist the clinician in the medical imaging assessment. Trained professionals are responsible for viewing the full set of native images per the standard of care.
The device does not alter the original image. CT Perfusion V1.0 is not intended to be used as a standalone diagnostic device and shall not be used to take decisions with diagnosis or therapeutic purposes. Patient management decisions should not solely be based on CT Perfusion V1.0 results.
CT Perfusion V1.0 can be integrated and deployed through technical platforms, which are responsible for transferring, storing, converting formats, notifying of detected image variations, and displaying DICOM imaging data.
The CT perfusion V1.0 application can be used to automatically compute qualitative as well as quantitative perfusion maps based on the dynamic (first-pass) effect of a contrast agent (CA). The perfusion application assumes that the input data describes a well-defined and transient signal response following rapid administration of a contrast agent.
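As an illustration of the simpler parametric maps this summary names (tMIP and TTP), the per-voxel computation over a 4D CTP series might look like the sketch below. This is not Olea Medical's implementation; function and variable names are assumptions, and the clinically important CBF/CBV/MTT maps additionally require deconvolution with an arterial input function, which is out of scope here.

```python
import numpy as np

def simple_ctp_maps(ctp: np.ndarray, frame_times: np.ndarray) -> dict:
    """Illustrative tMIP and TTP maps from a 4D CTP series.

    ctp         : array shaped (time, z, y, x) of enhancement values
    frame_times : 1D array of acquisition times (seconds), one per frame

    tMIP (time-resolved maximum intensity projection) is the per-voxel
    maximum over time; TTP (time to peak) is the acquisition time at
    which each voxel's enhancement peaks.
    """
    tmip = ctp.max(axis=0)                      # (z, y, x)
    ttp = frame_times[ctp.argmax(axis=0)]       # fancy-index times by peak frame
    return {"tMIP": tmip, "TTP": ttp}
```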
Olea Medical proposes CT Perfusion V1.0 as an image processing application, Picture Archiving Communications System (PACS) software module that is intended for use in a technical environment, which incorporates a Medical Image Communications Device (MICD) (21 CFR 892.2020) as its technical platform.
CT Perfusion V1.0 image processing application is designed as a docker installed on a technical platform, a Medical Image Communications Device.
The CT Perfusion V1.0 application takes as input a full CT perfusion (CTP) sequence acquired following the injection of an iodine contrast agent.
By processing these input image series, the application provides the following outputs:
- Parametric maps.
- Volume 1 and Volume 2 segmentation in DICOM format. Fusion of the segmented Volume 1 and 2 and the CTP map could be provided in PNG and DICOM secondary captures.
- Mismatch computation:
  - Mismatch volume = Volume 2 - Volume 1
  - Mismatch ratio = Volume 2 / Volume 1
  - Relative mismatch = (Volume 2 - Volume 1) / Volume 2 * 100
CT Perfusion V1.0 offers automatic volume segmentations based on a set of maps and thresholds. The user can tune or adjust these thresholds, and the maps associated with them, in the configuration files.
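A minimal sketch of the thresholded segmentation and the three mismatch formulas above (names are illustrative; the actual implementation is not disclosed):

```python
import numpy as np

def segmented_volume_ml(param_map: np.ndarray, threshold: float,
                        voxel_ml: float) -> float:
    """Volume (mL) of the region where the map exceeds a configurable threshold."""
    return float((param_map > threshold).sum()) * voxel_ml

def mismatch_metrics(volume1_ml: float, volume2_ml: float) -> dict:
    """The three mismatch outputs listed above, with Volume 1 < Volume 2."""
    return {
        "mismatch_volume": volume2_ml - volume1_ml,
        "mismatch_ratio": volume2_ml / volume1_ml,
        "relative_mismatch_pct": (volume2_ml - volume1_ml) / volume2_ml * 100.0,
    }
```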
Here's an analysis of the acceptance criteria and study for the CT Perfusion V1.0 device, based on the provided FDA 510(k) summary:
Acceptance Criteria and Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Parametric maps result comparison: All parametric maps (CBF, CBV, MTT, TTP, Delay, tMIP) computed with CT Perfusion V1.0 and Olea Sphere® V3.0 predicate device were identical. | Value differences voxel-by-voxel were equal to zero. Pearson and Spearman correlation coefficients were equal to 1. |
Volumes result comparison: Segmentations (hypoperfused areas) derived from thresholds should be similar between CT Perfusion V1.0 and the predicate device. | Mean DICE index (similarity coefficient) was equal to 1 between CT Perfusion V1.0 and Olea Sphere® V3.0 predicate device segmentations. For all cases, no difference was found. |
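The DICE similarity index used for the volume comparison can be computed from two binary segmentation masks as follows (a generic sketch, not the manufacturer's code):

```python
import numpy as np

def dice_index(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).

    A value of 1.0 means the two segmentations are identical, as was
    reported for the device-vs-predicate comparison above.
    """
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```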
Study Details
- Sample size used for the test set and the data provenance: Not explicitly stated. The document mentions "all cases" for the volume comparison, implying a dataset was used, but the specific number of cases is not provided. The provenance (country of origin, retrospective/prospective) is also not stated.
- Number of experts used to establish the ground truth for the test set and their qualifications: Not applicable. The study compares the performance of the new device to a predicate device, Olea Sphere V3.0, not to expert-derived ground truth.
- Adjudication method for the test set: Not applicable, as the comparison is against a predicate device's output rather than an expert-adjudicated ground truth.
- Multi-reader multi-case (MRMC) comparative effectiveness study: No. This was not an MRMC comparative effectiveness study involving human readers with and without AI assistance. The study focuses on comparing the new device's output to the predicate device's output.
- Standalone (algorithm-only, without human-in-the-loop) performance: Yes, this was a standalone comparison of the CT Perfusion V1.0 algorithm's outputs against the Olea Sphere V3.0 algorithm's outputs.
- Type of ground truth used: The "ground truth" for this study was the output of the predicate device, Olea Sphere V3.0.
- Sample size for the training set: Not applicable. The document states that "CT Perfusion V1.0 does not contain any AI-based algorithms. All calculations are based on deterministic algorithms." There is therefore no training set in the machine learning sense.
- How the ground truth for the training set was established: Not applicable, as there is no training set for a deterministic algorithm.
(97 days)
is a Class I medical device exempt per 21 CFR § 892.2010 (Medical image storage device) and 21 CFR § 892.2020
PowerLook® Tomo Detection V2 Software is a computer-assisted detection and diagnosis (CAD) software device intended to be used concurrently by interpreting physicians while reading digital breast tomosynthesis (DBT) exams from compatible DBT systems. The system detects soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in the 3D DBT slices. The detections and Certainty of Finding and Case Scores assist interpreting physicians in identifying soft tissue densities and calcifications that may be confirmed or dismissed by the interpreting physician.
PLTD V2 detects malignant soft-tissue densities and calcifications in digital breast tomosynthesis (DBT) images. The PLTD V2 software allows an interpreting physician to quickly identify suspicious soft tissue densities and calcifications by marking the detected areas in the tomosynthesis images. When the PLTD V2 marks are displayed by a user, the marks appear as overlays on the tomosynthesis images. The PLTD V2 marks also serve as a navigation tool for users, because each mark is linked to the tomosynthesis plane where the detection was identified. Users can navigate to the plane associated with each mark by clicking on the detection mark. Each detected region is also assigned a "score" that corresponds to the PLTD V2 algorithm's confidence that the detected region is a cancer (Certainty of Finding Score). Certainty of Finding scores are relative scores assigned to each detected region, and a Case Score is assigned to each case regardless of the number of detected regions. Certainty of Finding and Case Scores are computed by the PLTD V2 algorithm and represent the algorithm's confidence that a specific finding or case is malignant. The scores are represented on a 0% to 100% scale. Higher scores represent a higher algorithm confidence that a finding or case is malignant; lower scores represent a lower confidence.
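The data model implied by this description (a mark tied to a DBT plane, carrying a 0-100 finding score, with one case-level score per exam) can be sketched as below. The names are hypothetical, and the max-aggregation in `case_score` is an assumption for illustration; the summary does not state how the Case Score is actually derived from finding scores.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One CAD mark: an overlay linked to the tomosynthesis plane where
    it was found, with a 0-100 Certainty of Finding score."""
    x: int
    y: int
    plane_index: int             # DBT slice the mark navigates to on click
    certainty_of_finding: float  # 0-100; higher = more algorithm confidence

def case_score(detections: list[Detection]) -> float:
    """One plausible case-level aggregation (assumed, not stated in the
    summary): the maximum finding score; an empty case scores 0."""
    return max((d.certainty_of_finding for d in detections), default=0.0)
```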
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text.
1. Acceptance Criteria and Reported Device Performance
The device is a Computer-Assisted Detection and Diagnosis (CAD) software for digital breast tomosynthesis (DBT) exams. The acceptance criteria are largely demonstrated through the multi-reader multi-case (MRMC) pivotal reader study and standalone performance evaluations.
Table of Acceptance Criteria and Reported Device Performance:
Criteria Category | Metric | Acceptance Criteria (Implied / Stated) | Reported Device Performance (with CAD vs. without CAD) |
---|---|---|---|
Pivotal Reader Study (Human-in-the-Loop) | | | |
Radiologist Performance | Case-level area under the ROC curve (AUC) | Non-inferiority to radiologist performance without CAD; implicit superiority is also a desirable outcome. | AUC with CAD: 0.852; AUC without CAD: 0.795; average difference: 0.057 (95% CI: 0.028, 0.087); p |
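The case-level AUC endpoint reported here is the probability that a randomly chosen cancer case receives a higher reader score than a randomly chosen non-cancer case (ties counted half). A minimal empirical computation, purely illustrative and not the study's actual analysis software:

```python
def auc_from_scores(scores_pos, scores_neg):
    """Empirical ROC AUC via pairwise comparison of positive and
    negative case scores; ties contribute 0.5."""
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```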
(79 days)
is a Class I medical device exempt per 21 CFR § 892.2010 (Medical image storage device) and 21 CFR § 892.2020
PowerLook Density Assessment is a software application intended for use with digital breast tomosynthesis synthesized 2D images from tomosynthesis exams. PowerLook Density Assessment provides an ACR BI-RADS Atlas 5th Edition breast density category to aid health care professionals in the assessment of breast tissue composition. PowerLook Density Assessment produces adjunctive information. It is not a diagnostic aid.
The PowerLook Density Assessment Software analyzes digital breast tomosynthesis 2D synthetic images to calculate the dense tissue area of each breast. The measured dense tissue area is then used to provide a category of 1-4 consistent with ACR BI-RADS 5th edition a-d. The top-level design sub-systems are as follows: Initialization, Breast Segmentation, Breast Thickness Correction, and Breast Density Computation. The assessment results in a final density map which, in conjunction with its pixel size (in square cm), is used to compute the area of the dense tissue (square cm). The area of the breast (square cm) is computed by counting the total number of pixels in the valid regions of the breast segmentation mask. The ratio of the dense area to the total breast area gives the percent breast density (PBD) for the given view. The dense areas, breast areas, percent breast densities, and dispersion for the CC and MLO views are averaged in order to report measurements for each breast. The average PBD and the average dispersion are then mapped to a density category from 1 through 4, consistent with ACR BI-RADS 5th edition a-d, for each breast, using a set of calibrated boundaries. The higher category of the two breasts is reported as the overall case score. PowerLook Density Assessment is designed as a stand-alone executable operating within the larger software framework provided by PowerLook AMP. As such, the PowerLook Density Assessment software is purely focused on processing tomosynthesis 2D synthetic images and is not concerned with system issues such as managing DICOM image inputs or managing system outputs to a printer, PACS, or mammography workstation. The PowerLook Density Assessment software is automatically invoked by PowerLook AMP. The results of PowerLook Density Assessment are designed to display on a mammography workstation, high resolution monitor, or in a printed case report.
PowerLook Density Assessment is designed to process approximately 60-120 cases per hour.
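The computation described above reduces to a ratio of segmented areas plus a calibrated category mapping. A hedged sketch follows: the function names are illustrative, and the boundary values are invented placeholders, since the real calibrated boundaries are proprietary and not disclosed in the summary.

```python
import numpy as np

# Hypothetical PBD cut-points (%); the actual calibrated boundaries are
# not disclosed in the 510(k) summary.
CATEGORY_BOUNDS = [7.5, 25.0, 57.5]

def percent_breast_density(dense_mask: np.ndarray, breast_mask: np.ndarray,
                           pixel_cm2: float) -> float:
    """PBD = 100 * dense area / total breast area, areas in square cm."""
    dense_area = dense_mask.sum() * pixel_cm2
    breast_area = breast_mask.sum() * pixel_cm2
    return 100.0 * dense_area / breast_area

def density_category(avg_pbd: float) -> int:
    """Map average PBD to a category 1-4 (BI-RADS 5th ed. a-d analogue)
    via the placeholder boundaries."""
    for category, bound in enumerate(CATEGORY_BOUNDS, start=1):
        if avg_pbd < bound:
            return category
    return 4
```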
Here's a summary of the acceptance criteria and the study details for the PowerLook Density Assessment Software, based on the provided FDA 510(k) document:
Acceptance Criteria and Device Performance
The document states that the PowerLook Density Assessment Software performed "substantially equivalent to the predicate device." While specific numerical acceptance criteria (e.g., minimum kappa score, percentage agreement) are not explicitly listed in the provided text as pass/fail thresholds, the performance was assessed based on:
- Kappa score: A statistical measure of inter-rater agreement, commonly used for categorical data.
- Percent correct in each BI-RADS category: Measures the accuracy of the software's classification into each of the four BI-RADS density categories (1, 2, 3, 4).
- Combined A/B and C/D BI-RADS categories: Assesses performance when categories are grouped (e.g., non-dense vs. dense).
The document states: "PowerLook Density Assessment performed substantially equivalent to the predicate device." This implies that the device's performance metrics were within an acceptable range compared to the already cleared predicate.
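Cohen's kappa, the agreement statistic named above, can be computed from two raters' categorical labels (e.g., BI-RADS categories 1-4). This is a generic sketch of the standard formula, not the study's analysis code:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance).

    labels_a, labels_b : equal-length sequences of category labels from
    two raters (here, e.g., software vs. radiologist BI-RADS categories).
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a = Counter(labels_a)
    counts_b = Counter(labels_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    if expected == 1.0:
        return 1.0  # degenerate case: both raters use a single category
    return (observed - expected) / (1.0 - expected)
```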
Table of Acceptance Criteria and Reported Device Performance
Performance Metric | Acceptance Criteria (Implied) | Reported Device Performance |
---|---|---|
Kappa Score | Substantially equivalent to predicate device | Performance substantially equivalent to predicate device |
Percent Correct (Each BI-RADS Category) | Substantially equivalent to predicate device | Performance substantially equivalent to predicate device |
Combined A/B and C/D BI-RADS Categories | Substantially equivalent to predicate device | Performance substantially equivalent to predicate device |
Study Details
- Sample size used for the test set and the data provenance:
  - Test Set Sample Size: Not explicitly stated in the provided text. The document mentions "a set of digital breast tomosynthesis synthesized 2D images."
  - Data Provenance: Not explicitly stated (e.g., country of origin, specific clinics). The study used "digital breast tomosynthesis synthesized 2D images from." It is retrospective, as it refers to images for which BI-RADS scores "were obtained from radiologists."
- Number of experts used to establish the ground truth for the test set and their qualifications:
  - Number of Experts: Not explicitly stated. The document mentions "BI-RADS scores were obtained from radiologists" but does not specify whether this was a single radiologist or multiple.
  - Qualifications of Experts: The experts are identified only as "radiologists." No specific experience level (e.g., "10 years of experience") is provided.
- Adjudication method for the test set: The document states that BI-RADS scores "were obtained from radiologists," but it does not specify an adjudication method (such as 2+1, 3+1, or none) for determining a consensus ground truth if multiple radiologists were involved.
- Multi-reader multi-case (MRMC) comparative effectiveness study: No, an MRMC comparative effectiveness study was not mentioned or described. The study focused on the standalone performance of the PowerLook Density Assessment Software against radiologist assessments (ground truth); it did not compare human readers' performance with and without AI assistance.
- Standalone (algorithm-only, without human-in-the-loop) performance: Yes, a standalone study was performed. The described validation involved running the "PowerLook Density Assessment Software... followed by a comparison of the results between the predicate results, desired results, and observed performance..." This indicates the algorithm's performance was evaluated independently against the established ground truth.
- Type of ground truth used: Expert consensus (radiologist BI-RADS scores). The ground truth was established by "BI-RADS scores... obtained from radiologists," whose interpretations served as the reference standard for breast density categorization.
- Sample size for the training set: Not specified. The document provides no information about the training set's size or characteristics.
- How the ground truth for the training set was established: Not specified. Since information about the training set is absent, the method for establishing its ground truth is also not provided.
(110 days)
Class I per 21 CFR §892.2010
Device, Communication, Images, Ophthalmic
Class I per 21 CFR §892.2020
The Navilas Laser System 577s is indicated for use:
- In Retinal Photocoagulation for the treatment of Clinically Significant Diabetic Macular Edema (Focal or Grid Laser), Proliferative Diabetic Retinopathy (Panretinal Photocoagulation), Sub-retinal (Choroidal) Neovascularization (Focal Laser), Central and Branch Retinal Vein Occlusion (Scatter Laser Photocoagulation, Focal or Grid Laser), Lattice Degeneration, Retinal Tears and Detachments (Laser Retinopexy).
- For the imaging (capture, display, storage and manipulation) of the retina of the eye, including via color and infrared imaging; and for aiding in the diagnosis and treatment of ocular pathology in the posterior segment of the eye.
- In Laser Trabeculoplasty for Primary Open Angle Glaucoma, as well as Iridotomy and Iridoplasty for Closed Angle Glaucoma.
The Navilas Laser 577s is a laser photocoagulator with an integrated digital fundus camera. The Navilas 577s Laser System combines imaging technologies (fundus live imaging, and infra-red imaging) with established laser photocoagulation treatment methods by providing the doctor a system for imaging and treatment planning prior to the photocoagulation.
The Navilas Laser 577s is comprised of:
- A semiconductor laser source that operates at 577 nm. The semiconductor laser source for the Navilas 577s is identical to the laser source used with the Navilas 577+ cleared under K141851.
- An integrated delivery system that directs the laser beam through an ophthalmoscope using motorized mirrors.
- A digital camera that provides continuous real-time imaging in color with white light illumination of the fundus, or in monochrome using infrared illumination.
- A software platform intended to be used to capture, display, store and manipulate images captured by the fundus camera.
The Navilas Laser System 577s supports the user during multiple steps of a laser treatment procedure with digital imaging, image storage, planning and laser treatment options including:
Digital imaging is provided by a color image with white light, supporting mydriatic and non-mydriatic image acquisition (with and without dilated pupils), or by a monochrome IR image. Images are presented using a digital display. An illumination mode is selected, and images are acquired and either stored or discarded after viewing on the touch-sensitive digital display.
Image Storage - Captured images can be digitally stored in the Navilas Laser System 577s database along with other patient related data to create a complete patient record for future reference. Images from other devices may also be imported and stored.
Planning - Areas identified on acquired or imported images by the user that are selected for future treatment consideration can be marked through the use of treatment planning tools available. The physician has the ability to highlight areas on acquired images (called Points of Interest). These locations are created and manipulated using the touch sensitive digital display.
Laser Treatment - Treatment options are unchanged from the predicate device, with Single Pulse Mode, Repeat Mode and Scanned Pattern Mode available on all Navilas laser models. Pre-positioning of the aiming beam onto locations selected by the physician during planning is also facilitated. The position of the aiming beam can be monitored on the real-time image shown on the touch-sensitive digital display.
Report generation - Information collected in the database includes images obtained before, during and after treatment. This information may be used for the generation of patient reports for documentation purposes.
The provided text describes a 510(k) premarket notification for the "Navilas® Laser System 577s." This document primarily focuses on demonstrating substantial equivalence to a predicate device (Navilas Laser System 577+), rather than presenting a detailed independent study with specific acceptance criteria and performance data for a new AI/algorithm-driven device.
Therefore, many of the requested elements (e.g., acceptance criteria for a new clinical study, sample size for test set with data provenance, number of experts for ground truth, adjudication methods, MRMC study, standalone performance, training set details) are not present in this document because it is a submission for a device change that is functionally identical to the predicate with some hardware improvements and elimination of a feature. The "Performance Data" section discusses engineering and software testing, not clinical performance against specific metrics for diagnostic accuracy or efficacy.
However, I can extract and present the information that is available:
Summary of Device and Context:
- Device Name: Navilas® Laser System 577s
- Device Type: Laser photocoagulator with an integrated digital fundus camera.
- Purpose of Submission: 510(k) premarket notification to demonstrate substantial equivalence to the Navilas Laser System 577+ (K141851).
- Key Differences from Predicate: Elimination of fluorescein angiography imaging capability; hardware design improvements (new GUI, relocation of scanner controls, conversion to manual base height adjustment, designation of table as optional accessory, inclusion of combination objective element). The core laser and imaging technology are stated as the same.
- Indications for Use: (Identical to predicate, except for the removed angiography feature)
- Retinal Photocoagulation for various conditions (Diabetic Macular Edema, Proliferative Diabetic Retinopathy, Sub-retinal Neovascularization, Retinal Vein Occlusion, Lattice Degeneration, Retinal Tears and Detachments).
- Imaging (capture, display, storage, manipulation) of the retina (color, infrared) for aiding diagnosis and treatment of ocular pathology in the posterior segment.
- Laser Trabeculoplasty for Primary Open Angle Glaucoma, and Iridotomy/Iridoplasty for Closed Angle Glaucoma.
Regarding the Requested Information:
Since this document describes a 510(k) submission for substantial equivalence based on functional identity and engineering testing rather than a new clinical performance study for an AI/algorithm, most of the specific questions about acceptance criteria for clinical performance, ground truth, expert consensus, and reader studies are not applicable or not detailed in this document.
Here's what can be inferred or directly stated from the provided text, with a clear note when information is absent:
-
A table of acceptance criteria and the reported device performance
- Acceptance Criteria: For substantial equivalence, the primary acceptance criterion is that the new device (Navilas 577s) is as safe and effective as the predicate device (Navilas 577+). This is demonstrated through engineering and software verification and validation, showing that the minor changes do not negatively impact performance or safety. No specific quantitative clinical performance acceptance criteria (e.g., accuracy thresholds) are listed, as this isn't a de novo clinical study establishing such criteria. Instead, it's about meeting specifications and requirements.
- Reported Device Performance: The document states: "All criteria for this testing were met and results demonstrate that laser photocoagulation performed with the Navilas Laser 577s meets all performance specifications and requirements." This is a qualitative statement of success in the engineering and software tests.
Criterion Type Description / Test Performed Reported Performance / Outcome Substantial Equivalence Device maintains functional identity to predicate (Navilas 577+) with minor changes (no fluorescein angiography, hardware updates). Determined to be substantially equivalent by FDA. Illumination Safety ISO 15004-2 Ophthalmic Instruments - Fundamental Requirements and Test Methods - Part 2: Light Hazard Protection Criteria met. Software Life Cycle Process IEC 62304 Medical Device Software Software Life Cycle Process (Software LOC "Major") Criteria met. (Implies adherence to process adequate for software with potential for serious injury/death) Human Factors & Usability IEC 62366 (Usability Engineering), IEC 60601-1-6 (General requirements for basic safety and essential performance - Usability) Criteria met. Laser Product Safety IEC 60601-1 (General safety), IEC 60601-1-2 (EMC), IEC 60601-2-22 (Therapeutic Laser Equipment), IEC 60825-1 (Laser classification) Criteria met. Laser Bench Testing Verify spot and pattern placement accuracy. Criteria met. Software Verification & Validation General V&V for the "Major" LOC software. Criteria met. -
Sample sized used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Not Applicable / Not Provided: This document describes engineering and software testing ("bench testing") rather than a clinical study with a patient "test set". There is no mention of patient data (images or otherwise) used for testing in this summary.
-
Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not Applicable / Not Provided: No clinical test set or ground truth establishment by experts is described in this submission summary.
-
Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not Applicable / Not Provided: No clinical test set or adjudication method is described.
If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance
- No: This document does not describe an MRMC study or any study involving human readers with or without AI assistance. The device is a laser system with imaging capability, not an AI diagnostic assistant.
If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not Applicable: While the device has an imaging and software component ("aiding in the diagnosis"), the focus of this submission is on the laser safety, performance, and the integrated system's function, not a standalone diagnostic algorithm. No such standalone performance study is mentioned.
The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Not Applicable / Not Provided: For the engineering and software tests, "ground truth" would likely refer to established engineering specifications and requirements (e.g., laser power, spot size accuracy, software functionality conforming to design), rather than clinical ground truth like pathology or expert consensus on disease states.
The sample size for the training set
- Not Applicable / Not Provided: This is not a submission for a machine learning or AI device that would typically require a distinct "training set" for an algorithm. The software is described as a platform for image capture, display, storage, and manipulation, and planning tools, not a learning algorithm.
How the ground truth for the training set was established
- Not Applicable / Not Provided: As noted above, training sets and their associated ground truth methodology are not discussed in this substantial equivalence submission.
(177 days)
Device, Communication, Images, Ophthalmic (Class I per 21 CFR §892.2020)
The Navilas Laser System/Navilas Laser System 532+/ Navilas Laser System 577+ are indicated for use:
· In Retinal Photocoagulation for the treatment of Clinically Significant Diabetic Macular Edema (Focal or Grid Laser), Proliferative Diabetic Retinopathy (Panretinal Photocoagulation), Sub-retinal (Choroidal) Neovascularization (Focal Laser), Central and Branch Retinal Vein Occlusion (Scatter Laser Photocoagulation, Focal or Grid Laser), Lattice Degeneration, Retinal Tears and Detachments (Laser Retinopexy).
· For the imaging (capture, display, storage and manipulation) of the eye, including via color, fluorescein angiography and infrared imaging; and for aiding in the diagnosis and treatment of ocular pathology in the posterior segment of the eye.
· In Laser Trabeculoplasty for Primary Open Angle Glaucoma, as well as Iridotomy and Iridoplasty for Closed Angle Glaucoma.
The NAVILAS Laser System combines imaging technologies (fundus live imaging, infra-red imaging and fluorescein angiography) with established retinal laser photocoagulation treatment methods, providing the doctor with a system for image capture, display, storage and manipulation for treatment planning and documentation.
The primary components of the Navilas Laser System include:
- One of three optional ophthalmic laser sources:
    - a frequency-doubled Nd:YVO4 laser source that operates at 532 nm, or
    - an optically-pumped semiconductor laser source that also operates at 532 nm, or
    - an optically-pumped semiconductor laser source that operates at 577 nm
- An integrated delivery system that directs the laser beam through ophthalmoscope optics using motorized mirrors,
- A digital camera and computer hardware that provide continuous real-time imaging using slit illumination that is projected through the ophthalmoscope optics and panned automatically at a rapid rate of 25 Hz across the subject area using the motorized mirrors. Imaging can be in color (using white-light illumination) or in monochrome (using infrared or blue-light illumination).
Laser photocoagulation with the NAVILAS can be performed using single shot (Single Spot Mode), repeated shots (Repeat Mode), and scanned patterns (Pattern Mode).
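Pattern Mode, as described above, delivers a pre-planned array of spots rather than a single shot. As a minimal sketch of how a rectangular grid of target coordinates might be generated (the function, its units, and its argument convention are hypothetical illustrations, not Navilas code):

```python
def grid_pattern(rows, cols, spacing_um, center=(0.0, 0.0)):
    """Return (x, y) target coordinates (micrometres) for a rectangular
    grid of laser spots centred on `center`, row by row.

    Hypothetical illustration of building a Pattern Mode target list;
    a real system would also apply safety limits and registration to
    the live fundus image."""
    cx, cy = center
    # Offset so the grid is symmetric about the requested centre.
    x0 = cx - spacing_um * (cols - 1) / 2.0
    y0 = cy - spacing_um * (rows - 1) / 2.0
    return [(x0 + c * spacing_um, y0 + r * spacing_um)
            for r in range(rows) for c in range(cols)]
```

For example, a 3×3 grid at 100 µm spacing yields nine targets with the middle spot at the centre point.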
The provided text is a 510(k) summary for the Navilas Laser System. It focuses on demonstrating substantial equivalence to predicate devices and adherence to performance standards, rather than detailing a specific clinical study with acceptance criteria for device performance in terms of diagnostic or therapeutic accuracy.
Therefore, the document does not contain the acceptance criteria and study details as requested in the input prompt, particularly regarding a study that proves the device meets specific performance metrics for an AI/algorithm-driven application.
The sections that would contain such information (4.8 SUMMARY OF PERFORMANCE TEST RESULTS and 4.9 CONCLUSIONS) are very general. They state that "Performance verification and validation testing was completed to demonstrate that the device performance complies with specifications and requirements identified for the Navilas Laser System" and "All criteria for this testing were met and results demonstrate that the Navilas Laser System meets all performance specifications and requirements." However, they do not provide the specific acceptance criteria or the study details (sample size, ground truth, expert qualifications, etc.) for any performance evaluation in the context of an AI-driven component.
To directly answer your prompt based only on the provided text, the information requested is largely absent.
Here's a breakdown of what is and isn't available based on your requested structure:
1. A table of acceptance criteria and the reported device performance
- Not available in the provided text. The document states that performance testing was completed and criteria were met, but it does not list the specific acceptance criteria or the quantitative results of the device's performance against those criteria.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Not available in the provided text. There is no mention of a test set sample size or data provenance for any performance evaluation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not available in the provided text. The document does not describe the establishment of ground truth by experts, as it does not detail a study involving such a process.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not available in the provided text. No adjudication method is described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance
- Not available in the provided text. The document does not describe an MRMC study or the use of AI assistance for human readers. The Navilas Laser System, as described, is a surgical laser system with imaging, planning, and documentation capabilities, not an AI diagnostic assistant.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not available in the provided text. No standalone algorithm performance study is described.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not available in the provided text. As no study requiring ground truth is detailed, the type of ground truth is not mentioned.
8. The sample size for the training set
- Not available in the provided text. There is no mention of a training set, indicating that this submission is not about an AI/ML algorithm requiring such data.
9. How the ground truth for the training set was established
- Not available in the provided text. As no training set is mentioned, the method for establishing its ground truth is not provided.
Conclusion: The provided 510(k) summary (K141851) for the Navilas Laser System is focused on establishing substantial equivalence based on indications for use, technological characteristics, and compliance with general safety and performance standards (like IEC and ISO). It does not present data from a clinical or performance study that would typically include acceptance criteria, sample sizes, ground truth establishment, or expert evaluations as requested for an AI/algorithm-based device.
(174 days)
Lava Digital Models, Marketed by 3M Unitek, Class I Exempt, Product Code: LMD (21 CFR §892.2020)
- OrthoCAD
Insignia Digicast is a computer-aided system intended for use as an aid in orthodontic diagnostics by dental professionals trained in orthodontic treatment, including radiographic analyses and diagnostics.
Insignia Digicast is a software product and service that creates digital models of patients' teeth, which are used primarily to record the status of a patient's dentition prior to treatment. Clinicians may also use the digital model to support their diagnosis. The Insignia Digicast system scans a traditional impression, processes the scan, and electronically delivers a digital study model to the dental professional. The dental professional may view, measure, and analyze the model using the Insignia Digicast three-dimensional viewer software. The main analysis tools include T-J Moyers and Bolton analyses, ABO scoring, and Arch and Overbite/Overjet measurements. There are no accessories or patient-contacting components of Insignia Digicast.
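The Bolton analysis mentioned above compares summed mesiodistal tooth widths between the mandibular and maxillary arches. As a hedged sketch of the standard textbook calculation (the function name, argument ordering, and tooth-position convention are assumptions for illustration, not Ormco's implementation):

```python
def bolton_ratios(maxillary, mandibular):
    """Compute Bolton overall and anterior ratios (percent) from
    mesiodistal tooth widths in millimetres, first molar to first
    molar (12 teeth per arch, listed in arch order).

    Illustrative only. Classic Bolton norms are roughly 91.3% for the
    overall ratio and 77.2% for the anterior (canine-to-canine) ratio.
    """
    if len(maxillary) != 12 or len(mandibular) != 12:
        raise ValueError("expected 12 tooth widths per arch")
    overall = 100.0 * sum(mandibular) / sum(maxillary)
    # Assume the 6 anterior teeth occupy the middle 6 list positions.
    anterior = 100.0 * sum(mandibular[3:9]) / sum(maxillary[3:9])
    return overall, anterior
```

A digital-model tool like the one described would derive the width inputs from landmark measurements on the scanned model rather than from manual caliper readings.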
The provided document describes the Insignia Digicast, a software product and service that creates digital models of patients' teeth for orthodontic diagnostics.
Here's an analysis of the acceptance criteria and study data:
Table of Acceptance Criteria and Reported Device Performance:
The document does not explicitly state numerical acceptance criteria (e.g., specific accuracy thresholds for measurements). Instead, it relies on demonstrating substantial equivalence to predicate devices (OrthoCAD iQ and Lava Digital Models) through qualitative comparisons of features and mode of use, and quantitative comparisons of measurement accuracy.
| Feature/Measurement | Acceptance Criteria (Implied by Substantial Equivalence) | Insignia Digicast Performance (Reported) |
|---|---|---|
| Teeth Width | Functionally equivalent to predicates | Bench tested, successfully validated |
| Space | Functionally equivalent to predicates | Bench tested, successfully validated |
| T-J Moyers Analysis | Functionally equivalent to predicates | Bench tested, successfully validated |
| Bolton Analysis | Functionally equivalent to predicates | Bench tested, successfully validated |
| Arch Measurements | Functionally equivalent to predicates | Bench tested, successfully validated |
| Overbite/Overjet | Functionally equivalent to predicates | Bench tested, successfully validated |
| Overall Performance | Substantially equivalent to predicate devices | Deemed substantially equivalent |
Sample Size Used for the Test Set and Data Provenance:
- Sample Size: Not specified in the provided document. The document states "data from bench testing" was used, but does not quantify the number of cases or models tested.
- Data Provenance: Not explicitly stated. Given it's bench testing, it's likely synthetic data, cadaver models, or a collection of patient impressions, but the origin (e.g., country) is not mentioned. It is a retrospective evaluation against existing (presumably traditional) measurements.
Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
- Not specified. The document indicates that bench testing was used to "evaluate the performance characteristics... compared to the predicate device." It doesn't mention expert involvement in establishing a separate ground truth for the test set beyond the comparisons made with the predicate device's established performance.
Adjudication Method for the Test Set:
- Not specified. This information is typically relevant for human-reader evaluations, which did not occur for this device.
If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No MRMC comparative effectiveness study was done. The document explicitly states: "Clinical testing has not been conducted on this product."
If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Yes, a standalone evaluation was performed. The "Non-Clinical Performance Data" section describes "bench testing" to evaluate the "performance characteristics of Insignia Digicast," which is an algorithm-only assessment. The software was "successfully validated to confirm the performance of the device."
The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The "ground truth" for the non-clinical performance evaluation was based on comparisons to the predicate device, OrthoCAD iQ, for characteristics such as teeth width, space, and various analyses (T-J Moyers, Bolton, Arch, Overbite/Overjet). This implies the predicate device's measurements served as the reference or accepted standard.
The sample size for the training set:
- Not specified. The document does not provide any details regarding the training data or its size.
How the ground truth for the training set was established:
- Not specified. Since no information on the training set is provided, the method for establishing its ground truth is also unknown.
(34 days)
(Class I exempt per 21 CFR §892.2010 and 21 CFR §892.2020).
Device Description:
Mammography Prior Enhancement (MPE) is a software application intended to enhance the appearance of prior non-Hologic digital mammography x-ray images so that they more closely resemble Hologic digital mammography images. MPE processed images are intended for comparison purposes only and cannot be used for primary diagnosis.
MPE runs on a Windows-based computer. Results can be displayed on a workstation capable of displaying mammography x-ray images, such as Hologic's SecurView® DX workstation (Product Code LLZ, 21 CFR 892.2050, K103385).
MPE is a software application that runs on a Windows server or softcopy display workstation, such as SecurView DX (K103385). MPE processes (manipulates) prior GE digital mammography images so that they will appear similar to Hologic digital mammography images. The image processing consists of various steps to improve visualization of structures in the breast including, logarithmic conversion, skin line correction and contrast enhancement. These are standard methods used to allow optimal display and review of mammography images with minimal window/leveling operation.
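The processing steps named above, logarithmic conversion and contrast enhancement, are standard operations in digital mammography display pipelines. A generic sketch of the two steps, assuming a simple percentile-based contrast window (this is not Hologic's MPE algorithm, and the skin-line correction step is omitted):

```python
import numpy as np

def log_convert(raw, eps=1.0):
    """Map raw detector counts to a log scale so multiplicative
    attenuation differences become additive. Generic sketch only."""
    return np.log(raw.astype(np.float64) + eps)

def window_contrast(img, low_pct=1.0, high_pct=99.0):
    """Stretch the intensity range between two percentiles to [0, 1],
    clipping outliers. Stands in for the 'contrast enhancement' step;
    the percentile choices here are illustrative assumptions."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
```

Chaining the two (`window_contrast(log_convert(raw))`) gives a display-ready image that needs minimal window/leveling, which is the stated goal of this kind of processing.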
The provided text describes a 510(k) submission for the Mammography Prior Enhancement (MPE) device and its substantial equivalence to a predicate device. However, it does not contain the detailed acceptance criteria or a specific study that proves the device meets those criteria in a quantitative manner.
Here's a breakdown of the information available and what is missing based on your request:
1. Table of Acceptance Criteria and Reported Device Performance:
- Acceptance Criteria: Not explicitly stated in the provided text in a quantitative or measurable format. The document focuses on demonstrating substantial equivalence based on intended use, technological characteristics, and safety/effectiveness concerns addressed by adherence to standards and risk management.
- Reported Device Performance: The text states, "The MPE software further processes and displays prior digital mammography images for physicians or trained medical personnel to use as a historical image reference when reviewing current Hologic digital mammography images. The MPE processed images will appear similar to Hologic digital images." This is a qualitative description of its function but lacks specific performance metrics (e.g., image quality scores, similarity metrics).
2. Sample Size Used for the Test Set and Data Provenance:
- Not provided. The document does not mention any specific test set size, data provenance (country of origin), or whether the data was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications:
- Not provided. There is no mention of experts, ground truth establishment, or their qualifications.
4. Adjudication Method for the Test Set:
- Not applicable/Not provided. Since no specific test set or ground truth establishment method is described, there is no mention of an adjudication method.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and the effect size:
- Not performed or reported. The 510(k) summary does not describe any MRMC study comparing human reader performance with and without AI assistance. The focus is on image processing for comparison, not diagnostic performance improvement.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Not performed or reported as a performance study. The device's function is standalone in the sense that it processes images automatically. However, there's no standalone performance study reported with specific metrics. Its output is explicitly stated as "for comparison purposes only and cannot be used for primary diagnosis," which inherently avoids standalone diagnostic performance claims.
7. The Type of Ground Truth Used:
- Not applicable/Not provided. As no performance study with a defined ground truth is described, this information is not present.
8. The Sample Size for the Training Set:
- Not provided. There is no mention of a training set or its size.
9. How the Ground Truth for the Training Set was Established:
- Not applicable/Not provided. Since no training set or ground truth establishment is described, this information is not present.
Summary of available information related to "acceptance criteria" and "study":
The "study" or justification for the device's acceptance presented in this 510(k) is primarily based on:
- Demonstration of Substantial Equivalence: The MPE software shares the same intended use, technological characteristics, and performance standards as its predicate device (DigitalNow HD, K091368).
- Adherence to Standards: The device is designed and manufactured according to ISO 13485, ISO 14971, IEC 62304, and 21 CFR Part 820.
- Risk Management: Potential hazards are controlled via risk management processes and verification and validation testing, ensuring "no risk of data loss" and that "MPE processed images are not intended for diagnosis."
- Intended Use Limitations: The key acceptance criterion implicitly stated is that the processed images are "for comparison purposes only and cannot be used for primary diagnosis."
The document focuses on regulatory compliance and safe operation within its limited intended use, rather than a quantitative clinical performance study with specific acceptance metrics. For a device intended "for comparison purposes only" and not for primary diagnosis, a full clinical performance study as might be expected for an AI diagnostic aid is often not required for 510(k) clearance, as the primary risk is misuse (i.e., using it for diagnosis), which is mitigated by labeling and intended use restrictions.
(159 days)
Medical Image Storage (Section 892.2010, product code LMB) and Medical Image Communications (Section 892.2020
The ThinkingNet is a Medical Image Management and Review System, commonly known as PACS. ThinkingNet, made by Thinking Systems Corporation, Florida, USA, is indicated for acceptance, transmission, storage, archival, reading, interpretation, clinical review, analysis, annotation, distribution, printing, editing and processing of digital images and data acquired from DICOM-compatible diagnostic devices, by healthcare professionals, including radiologists, cardiologists, physicians, technologists and clinicians.
- With a ThinkingWeb option it can be used to access diagnostic information remotely with all workstation functionality or to collaborate with other users. The client device is cross platform for all but the thick-client ThinkingNet.Net option.
- With the molecular imaging option it can be used for processing and interpreting nuclear medicine and other molecular imaging studies.
- With image co-registration and fusion option it can be used for processing and interpreting PET-CT, PET-MRI, SPECT-CT and other hybrid imaging studies.
- With the Mammography option it can be used for screening and diagnosis (with MG "For presentation" images only) from FDA approved modalities in softcopy (using FDA cleared displays for mammography) and printed formats.
- With the cardiology option it can be used for reading, interpreting and reporting cardiac studies, such as nuclear cardiac, PET cardiac, echocardiographic, X-ray angiographic and CTA studies.
- With the Orthopedic option it can be used to perform common orthopedic measurements of the hip, knee, spine, etc.
- With the 3D/MPR option it can be used for volumetric image data visualization: MIP, MPR, VR and triangulation.
- With the Quality Assurance option it can be used by PACS administrators or clinicians to perform quality control activities related to patient and images data.
ThinkingNet is a multi-modality PACS/RIS with applications optimized for each individual imaging modality. The image data and applications can be accessed locally or remotely. ThinkingNet workstation software is designed as diagnostic reading and processing software packages, which may be marketed as software only, as well as packaged with standard off-the-shelf computer hardware.
The base functions include receiving, transmission, storage, archival, display images from all imaging modalities. When enabled, the system allows remote access of image data and applications over a local or wide area network, using a Web browser, thick-client, thin-client or cloud-based remote application deployment method.
Options allow for additional capability, including modality specific applications, quantitative postprocessing, modality specific measurements, multi-planar reformatting and 3D visualization.
ThinkingNet Molecular Imaging modules offer the image processing functionality, through MDStation and ThinkingWeb, that have the same indication as the predicate modality workstations. It delivers image processing and review tools for applications used in functional imaging modalities, such as nuclear medicine, PET, PET/CT, SPECT/CT and PET/MRI.
ThinkingNet Mammo module is a diagnostic softcopy breast imaging workstation with diagnostic print capability.
- It displays and prints regionally approved DICOM DR Digital Mammography Images (MG SOP class) with a default or user-defined mammography hanging protocol.
- It displays and prints regionally approved DICOM CR Digital Mammography Images (CR SOP class) with a default or user-defined mammography hanging protocol.
- It displays adjunct breast imaging modality studies (i.e., Breast MR, Breast PET and Breast gamma camera) for comparison.
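A PACS module like this typically decides how to display an incoming object from its DICOM SOP Class UID. The UIDs below are the standard identifiers for mammography image storage as registered in the DICOM standard; the routing helper itself is a hypothetical illustration, not ThinkingNet code:

```python
# SOP Class UIDs per the DICOM standard's UID registry (assumed here).
MAMMO_SOP_CLASSES = {
    "1.2.840.10008.5.1.4.1.1.1.2":   "Digital Mammography X-Ray Image - For Presentation",
    "1.2.840.10008.5.1.4.1.1.1.2.1": "Digital Mammography X-Ray Image - For Processing",
}

def route_to_mammo_module(sop_class_uid, presentation_only=True):
    """Return True if the object should open in the mammography reading
    module. With presentation_only=True, 'For Processing' images are
    excluded, mirroring the '"For presentation" images only' restriction
    stated in the indications above. Hypothetical routing logic."""
    if sop_class_uid not in MAMMO_SOP_CLASSES:
        return False
    if presentation_only and sop_class_uid.endswith(".2.1"):
        return False
    return True
```

In a real system this check would sit in the DICOM receive/association layer, alongside similar rules for the CR SOP class and the adjunct breast modalities.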
ThinkingWeb modules offer comprehensive remote image and application access methods to allow clinicians to review and process images remotely. It has the following modules:
- ThinkingWebLite: Clientless image distribution via a simple Web browser (see NuWEB in K010271). It is primarily a referring physician's portal, not intended for primary reading.
- ThinkingNet.Net: A thick-client implementation using an existing image review module (NuFILM) with a proprietary image streaming mechanism.
- ThinkingWeb: Cross-platform thin-client remote application access based on the existing MDStation software and off-the-shelf remote computing technology.
- ThinkingWeb Extreme: A cloud-based remote application deployment implementation based on the existing MDStation software and off-the-shelf cloud computing technologies.
Besides ThinkingNet.Net, all ThinkingWeb products support cross-platform client computer devices; ThinkingNet.Net requires a Windows-based client computer.
The provided 510(k) summary for "ThinkingNet Modality Applications and Web Extensions" does not contain specific acceptance criteria or performance study results in the typical format of a clinical or technical validation study.
Instead, the submission primarily focuses on demonstrating substantial equivalence to predicate devices. This means that, for this type of submission, the manufacturer asserts that their device is as safe and effective as existing legally marketed devices, rather than needing to prove new performance metrics against predefined acceptance criteria. The "Performance testing" mentioned refers to internal verification and validation activities to ensure the new features (like Web options and modality-specific modules) still perform as expected and maintain equivalence to the predicate devices.
Therefore, many of the requested details about acceptance criteria, specific performance numbers, sample sizes for test sets, expert qualifications, and ground truth methodologies for a new performance claim are not present in the provided document.
Here's an analysis of the available information:
1. A table of acceptance criteria and the reported device performance
- Acceptance Criteria: Not explicitly stated as quantifiable metrics for a new claim. The "acceptance criteria" here are implicitly linked to demonstrating substantial equivalence with the listed predicate devices. This means the device must function similarly in terms of image management, communication, archiving, and processing capabilities.
- Reported Device Performance: No specific numerical performance metrics (e.g., sensitivity, specificity, accuracy) are reported for the device in the context of a clinical study designed to establish new performance claims. The document states that "Performance testing was conducted to show that ThinkingNet is safe and effective," but does not provide details of these tests or their results.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size: Not specified.
- Data Provenance: Not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not specified.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- MRMC Study: No MRMC study is mentioned. This device is a PACS/image management system, not an AI-assisted diagnostic tool for which such studies are typically conducted.
- Effect Size: Not applicable as no MRMC study was performed.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Performance: Not applicable/not specified. The device is an image management and review system, inherently designed for human interaction.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: Not specified. For a PACS, the "ground truth" would generally relate to the accurate display, storage, and retrieval of medical images and data, ensuring data integrity and functionality mirroring predicate devices.
8. The sample size for the training set
- Sample Size for Training Set: Not applicable/not specified. This is a PACS system, not a machine learning model that typically requires a discrete training set.
9. How the ground truth for the training set was established
- Ground Truth for Training Set: Not applicable.
Summary of available information regarding the "study" (internal V&V) and acceptance criteria:
The "study" referenced in the document is the internal verification and validation (V&V) process conducted by Thinking Systems, following their ISO 13485 and FDA 21 CFR Part 820 compliant Quality System.
- Acceptance Criteria (Implicit): The primary acceptance criterion is that the "ThinkingNet with its Web options and modality specific modules to be as safe, as effective, and performance is substantially equivalent to the predicate device(s)." This means the device must meet the functional and performance characteristics established by the legally marketed predicate devices.
- Proof of Meeting Criteria:
- Performance Testing: "Performance testing was conducted to show that ThinkingNet is safe and effective." This would have involved internal tests to ensure the system's various functions (receiving, transmission, storage, display, advanced processing, remote access) operate correctly and reliably. These tests would validate that the device's features, especially the new Web options and modality-specific applications, perform comparably to or within the established safety and effectiveness parameters of the predicate devices.
- Quality Assurance Measures: The document lists several QA measures applied to the development, including Requirements Specification, Design Specification, Hazard and Risk Analysis, Modular Testing, Verification Testing, Validation Testing, and Integration Testing. These processes ensure that the device was designed and built to meet its intended purpose and functions properly.
- Substantial Equivalence Argument: The core of the submission is the detailed argument for substantial equivalence, comparing ThinkingNet's intended use, indications, target populations, and technical characteristics with multiple predicate devices across various modalities (PACS, molecular imaging, mammography, cardiology, RECIST, echocardiography). The differences identified (e.g., more built-in modality-specific applications, server-side processing for Web clients, cross-platform support) are then argued not to affect safety and effectiveness, supported by the internal performance testing.
In essence, for this 510(k) submission, the "acceptance criteria" were met by demonstrating that the device functions comparably to existing cleared devices, and the "study" was the manufacturer's well-documented internal verification and validation process designed to ensure this equivalence and the device's safety and effectiveness. No independent clinical efficacy study with specific numerical performance targets was required or presented for this type of device and submission pathway.
(169 days)
§886.1120, Class II
Product Code: HKI
Subsequent Classification: 21 CFR 892.2010 class I; 21 CFR 892.2020
The Non-Mydriatic Auto Fundus Camera AFC-330 with Image Filing Software NAVIS-EX is intended to capture, display, store and manipulate images of the retina and the anterior segment of the eye, to aid in diagnosing or monitoring diseases of the eye that may be observed and photographed.
The Non-Mydriatic Auto Fundus Camera AFC-330 with Image Filing Software NAVIS-EX ("AFC-330 with NAVIS-EX") is a conventional non-mydriatic auto fundus camera. The AFC-330 with NAVIS-EX captures fundus images using a built-in colour CCD camera without the use of mydriatic agents. With this single device, registration of patient information, image capture, and viewing of captured images are possible. By connecting a personal computer (PC) to the device via a LAN and installing the NAVIS-EX image filing system software, images captured by this device can be transferred to the PC and viewed and managed on the PC.
The provided text is a 510(k) summary for the Nidek Non-Mydriatic Auto Fundus Camera AFC-330 with Image Filing Software NAVIS-EX. It describes the device, its intended use, and substantial equivalence to predicate devices, but it does not contain information about acceptance criteria or a specific study proving the device meets those criteria, as typically found in clinical performance studies of AI/CADe devices.
The document states:
- Testing in support of substantial equivalence determination: "All necessary bench testing was conducted on the AFC-330 with NAVIS-EX to support a determination of substantial equivalence to the predicate devices. The performance testing included the following tests:
- Electrical and mechanical safety testing
- Electromagnetic compatibility testing
- Light burden testing
- Verification and validation testing"
- Summary: "The collective performance testing results demonstrate that AFC-330 with NAVIS-EX is substantially equivalent to the predicate devices."
This indicates that the submission relied on bench testing to demonstrate performance characteristics related to safety and fundamental functionality, rather than a clinical study evaluating diagnostic accuracy against specific performance metrics and acceptance criteria for an AI or CADe component. The device appears to be a traditional fundus camera with image filing software, not a product that performs automated diagnostic interpretations requiring acceptance criteria like sensitivity, specificity, or AUC based on expert reads.
Therefore, I cannot provide the requested information regarding acceptance criteria and studies proving the device meets them because such information is not present in the provided text. The submission focuses on demonstrating substantial equivalence through standard device testing (safety, EMC, light burden, verification/validation) for a medical imaging acquisition and management system.