Search Results
Found 5 results
510(k) Data Aggregation
(412 days)
The VS800 system is an automated digital slide creation, management, and viewing system. It is intended for in vitro diagnostic use as an aid to the pathologist in the display, detection, counting and classification of tissues and cells of clinical interest based on particular color, intensity, size, pattern and shape.
The VS800HER2 Manual Read (MR) of digital slide application is intended for use as an aid to the pathologist in the detection and semi-quantitative measurement of HER2 by manual examination of the digital slide of formalin-fixed, paraffin-embedded and neoplastic tissue IHC stained for HER2 receptors on a computer monitor. HER2 results are indicated for use as an aid in the management, prognosis and prediction of therapy outcomes of breast cancer.
The VS800HER2 MR of digital slide application is intended for use as an accessory to the Dako HercepTest to aid the pathologist in the detection and semi-quantitative measurement of HER2 by manual examination of the digital slide of formalin-fixed, paraffin-embedded and neoplastic tissue immunohistochemically stained for HER2 receptors on a computer monitor. When used with the Dako HercepTest, it is indicated for use as an aid in the assessment of breast cancer patients for whom HERCEPTIN® (Trastuzumab) treatment is being considered.
Note: The actual correlation of the Dako HercepTest to the Herceptin® clinical outcome has not been established.
The VS800 System is an automated digital slide creation, management and viewing system. The VS800 System components consist of an automated digital microscope slide scanner (VS800-SS) which include a computer, keyboard and mouse, operating monitor (VS800-MTR) and VS Viewer software (VS2-ASW-IDB). The system capabilities include digitizing microscope slides at high resolution, storing and managing the resulting digital slide images, retrieving and displaying digital slides, including support for remote access over wide-area networks, providing facilities for annotating digital slides and editing metadata associated with digital slides, and facilities for image analysis of digital slides. The remote digital slide viewing capabilities of the system support reading digital slides on a computer monitor, enabling Pathologists to make clinically relevant decisions analogous to those they make using a conventional microscope. Specifically, the system supports the pathologist in the detection of HER2/neu by manual examination of the digital slide of formalin-fixed, paraffin-embedded normal and neoplastic tissue immunohistochemically stained for HER2 receptors on a computer monitor.
The VS800-SS (an automated digital microscope slide scanner) creates high-resolution, color digital slide images of entire glass slides in a matter of minutes. High numerical aperture 20x objectives, specially designed for the VS800-SS optical system, and a real-time contrast autofocus (AF) system are used to produce high-quality images. The VS800-SS employs a 2D CCD imager for fine image acquisition, the same technology used in conventional microscope imaging systems, and the image captured by the VS800-SS is the same as a conventional microscope image.
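The summary does not explain how the real-time contrast autofocus works internally; the snippet below is only a generic illustration of contrast-based autofocus (score each candidate focal plane by a sharpness metric and keep the sharpest), and the `acquire_frame(z)` camera/stage callback is hypothetical:

```python
import numpy as np
from scipy import ndimage

def focus_score(frame: np.ndarray) -> float:
    """Variance of the Laplacian: a common contrast-based sharpness metric."""
    return float(ndimage.laplace(frame.astype(float)).var())

def best_focus_z(acquire_frame, z_positions):
    """Acquire a frame at each candidate z position and return the z of the sharpest one."""
    scores = {z: focus_score(acquire_frame(z)) for z in z_positions}
    return max(scores, key=scores.get)
```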
The VS-ASW-IDB (VS Viewer software) is a full-featured digital pathology information management system. The software runs on a server computer, which stores digital slide images on disk storage such as a RAID array and hosts an SQL database that contains digital slide metadata. The VS-ASW-IDB includes a web application and services that encapsulate database and digital slide image access for other computers. The VS-ASW-IDB also includes support for locally or remotely connected Image Servers, which run digital slide viewing software provided as part of the VS-ASW-IDB.
The laboratory technician or operator of the VS800-SS loads glass microscope slides into a specially designed slide carrier with a capacity of up to 100 slides per carrier (300 total). The scanning process begins when the operator starts the VS800-SS scanner and finishes when the scanner has completed scanning all loaded slides. As each glass slide is processed, the system automatically stores the stitched images as a single digital slide image, which represents a histological reconstruction of the entire tissue section. When slide scanning is finished, the operator confirms the image quality and records the images to the database. Once the images are recorded, pathologists or other authorized persons can access the VS-ASW-IDB to view them.
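The summary states only that the server hosts an SQL database containing digital slide metadata and that the operator records quality-checked images to it; the schema and workflow below are a purely hypothetical sketch (using SQLite as a stand-in), not the actual VS-ASW-IDB design:

```python
import sqlite3

conn = sqlite3.connect("digital_slides.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS digital_slide (
        slide_id      INTEGER PRIMARY KEY,
        carrier_id    TEXT NOT NULL,      -- slide carrier the glass slide was loaded in
        image_path    TEXT NOT NULL,      -- stitched whole-slide image on RAID storage
        scan_finished TEXT NOT NULL,      -- ISO-8601 timestamp of scan completion
        qc_passed     INTEGER NOT NULL,   -- operator's image-quality confirmation (0/1)
        stain         TEXT                -- e.g. 'HER2 IHC'
    )
""")

def record_slide(carrier_id, image_path, scan_finished, qc_passed, stain=None):
    """Record a scanned, quality-checked slide so authorized users can retrieve it."""
    conn.execute(
        "INSERT INTO digital_slide (carrier_id, image_path, scan_finished, qc_passed, stain) "
        "VALUES (?, ?, ?, ?, ?)",
        (carrier_id, image_path, scan_finished, int(qc_passed), stain),
    )
    conn.commit()
```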
Here's a summary of the acceptance criteria and study details for the Olympus VS800HER2 MR Application, based on the provided 510(k) summary:
Acceptance Criteria and Device Performance
Acceptance Criteria Category | Acceptance Criteria | Reported Device Performance (% Agreement) | 95% Confidence Interval |
---|---|---|---|
Agreement with Manual Microscopy Reads (Trichotomous HER2 Scores: 0, 1+; 2+; 3+) | Individual pathologist % agreements are shown in the rows below. | | |
Site 1, Pathologist 1 | | | |
HER2 0, 1+ | N/A (comparison study, not a specific threshold for acceptance) | 90.91% | (75.67%, 98.08%) |
HER2 2+ | N/A | 88.24% | (72.55%, 96.70%) |
HER2 3+ | N/A | 96.97% | (84.24%, 99.92%) |
Site 1, Pathologist 2 | | | |
HER2 0, 1+ | N/A | 91.18% | (76.32%, 98.14%) |
HER2 2+ | N/A | 90.91% | (75.67%, 98.08%) |
HER2 3+ | N/A | 96.97% | (84.24%, 99.92%) |
Site 1, Pathologist 3 | | | |
HER2 0, 1+ | N/A | 60.00% | (38.67%, 78.87%) |
HER2 2+ | N/A | 97.22% | (85.47%, 99.93%) |
HER2 3+ | N/A | 87.18% | (72.57%, 95.70%) |
Site 2, Pathologist 1 | | | |
HER2 0, 1+ | N/A | 85.19% | (66.27%, 95.81%) |
HER2 2+ | N/A | 80.95% | (65.88%, 91.40%) |
HER2 3+ | N/A | 100% | (88.78%, 100%) |
Site 2, Pathologist 2 | | | |
HER2 0, 1+ | N/A | 96.67% | (82.78%, 99.92%) |
HER2 2+ | N/A | 78.38% | (61.79%, 90.17%) |
HER2 3+ | N/A | 100% | (89.42%, 100%) |
Site 2, Pathologist 3 | | | |
HER2 0, 1+ | N/A | 63.89% | (46.22%, 79.18%) |
HER2 2+ | N/A | 80.65% | (62.53%, 92.55%) |
HER2 3+ | N/A | 93.94% | (79.77%, 99.26%) |
Precision Study (Overall Agreements for Manual Digital Reads) | | | |
Intra-Instrument (Intra-Pathologist) | N/A (comparison study, not a specific threshold for acceptance) | 100% | (95.98%, 100%) |
Inter-Instrument (Intra-Pathologist) | N/A | 95.6% | (89.01%, 98.78%) |
Note: The document reports percent agreement without predefined acceptance thresholds for substantial equivalence. The statistical analysis is presented as Percent Agreement (PA) with a 95% Confidence Interval (CI) between manual microscopy reads and manual digital reads, and likewise for the precision studies.
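The summary does not give the underlying read counts for each cell, but the tabulated confidence limits are consistent with exact (Clopper-Pearson) binomial intervals on the agreement proportion. The following is a minimal Python sketch under that assumption; the example counts (30 concordant reads out of 33) are hypothetical values chosen to reproduce the 90.91% / (75.67%, 98.08%) entry for Site 1, Pathologist 1, HER2 0, 1+:

```python
from scipy.stats import beta  # exact binomial (Clopper-Pearson) limits via the beta distribution

def percent_agreement_ci(agree: int, total: int, alpha: float = 0.05):
    """Percent agreement with an exact (Clopper-Pearson) binomial confidence interval."""
    pa = agree / total
    lower = beta.ppf(alpha / 2, agree, total - agree + 1) if agree > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, agree + 1, total - agree) if agree < total else 1.0
    return pa, lower, upper

# Hypothetical counts: 30 of 33 concordant reads.
pa, lo, hi = percent_agreement_ci(30, 33)
print(f"PA = {pa:.2%}, 95% CI = ({lo:.2%}, {hi:.2%})")  # PA = 90.91%, 95% CI = (75.67%, 98.08%)
```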
Study Details for the Olympus VS800HER2 MR Application
- Sample Size Used for the Test Set and Data Provenance:
- Sample Size: 100 slides per clinical site, so a total of 200 slides were used for the comparison study (100 slides at Site 1, 100 slides at Site 2).
- For the precision study, a subset of 30 slides from the comparison study was used.
- Data Provenance: Retrospective. The slides were "selected from archive." The country of origin is not explicitly stated, but the study was conducted at "two clinical sites," implying local (likely within the US, given FDA submission context) data.
- Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
- Number of Experts: Six pathologists in total for the comparison study (three at each of two clinical sites), and one pathologist for the precision study, who performed the repeated reads.
- Qualifications of Experts: Described as "Pathologists." Specific years of experience or sub-specialty certifications are not provided in the summary.
- Adjudication Method for the Test Set:
- Adjudication Method: None for establishing a single "ground truth." The study compared each pathologist's "manual digital reads" against their own "manual microscopy reads." This is a paired comparison, where the conventional microscopy read by the same pathologist is considered the reference for that pathologist's digital read. The summary doesn't describe an external or consensus ground truth for the comparison study itself; rather, it assesses agreement between two reading methods by the same individual.
- For the precision study, there was also no external adjudication; it assessed agreement of repeated reads by a single pathologist.
- If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- This was a Multi-Reader Multi-Case (MRMC) comparative effectiveness study of digital pathology reads vs. conventional microscopy reads, but without AI assistance. The VS800HER2 MR Application is a "Manual Read" application, meaning the pathologist manually interprets the digital image.
- Therefore, there is no AI component, and no effect size regarding human readers improving with AI assistance is reported or applicable to this specific application. The study focuses on the agreement between conventional microscopy and manual reading of digital slides.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- No standalone algorithm-only performance study was done. The device (VS800HER2 MR Application) is explicitly described as for "Manual Read (MR) of digital slide application," intended "as an aid to the pathologist in the detection and semi-quantitative measurement of HER2 by manual examination of the digital slide." It is a display and management system for pathologists to manually review digital slides, not an automated AI-driven diagnostic algorithm.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the comparison study: The "ground truth" or reference standard for each pathologist's digital read was their own prior manual microscopy read of the physical glass slide. This is a paired comparison, where the same pathologist acts as their own control.
- For the precision study: The reference was the pathologist's own repeated manual digital reads from the same or different instruments.
- The sample size for the training set:
- The 510(k) summary does not mention a training set for an algorithm, as the VS800HER2 MR Application is a manual read application. The study described is entirely a clinical validation/test set.
- How the ground truth for the training set was established:
- Not applicable, as there is no mention of a training set or an algorithm being developed (which would typically require a training set with established ground truth). The device acts as a digital visualization and management system for manual pathologist interpretation.
(73 days)
EndoClear™ Laparoscopes Accessory is intended to be used by qualified physicians to provide endoscope lens cleaning for uninterrupted visualization of internal structures in a wide variety of diagnostic and therapeutic laparoscopic procedures.
The Virtual Ports EndoClear™ system is a sterile, single patient use system consisting of: EndoClear™ Lens Cleaner and the EndoClear™ Introducer. The EndoClear Lens Cleaner is an internally anchored hands-free, laparoscope lens cleaning device which is attached to the internal abdominal cavity wall and remains in position until completion of the surgery, enabling the surgeon to effectively clean the lens of blood, fat, fog, and secretions without removing it from the cavity. The Virtual Ports EndoClear™ Lens Cleaner is introduced via a cannula using the EndoClear™ Introducer, which also removes the EndoClear™ Lens Cleaner at the end of the surgical procedure.
The Virtual Ports EndoClear™ system is a sterile, single-patient use system designed to clean the lens of a laparoscope internally during surgery without removing it from the abdominal cavity. This allows for uninterrupted visualization of internal structures during diagnostic and therapeutic laparoscopic procedures.
Here's an analysis of the acceptance criteria and study data based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Safety: Device performs safely. Materials in contact with the human body are biocompatible. | The animal study demonstrated that "no safety and effectiveness questions were raised." All materials in contact with the human body are biocompatible in accordance with ISO 10993-1. |
Effectiveness: Device effectively cleans the laparoscope lens of blood, fat, fog, and secretions without removal from the cavity, enabling uninterrupted visualization. | Bench tests and an animal study were performed. The document states: "All testing results demonstrated satisfactory performance." The animal study showed that "the EndoClear™ system performs as intended." The overall conclusion is that "the device performs safely and efficiently in accordance with its intended use." |
Substantial Equivalence: Device is substantially equivalent to predicate devices. | Preclinical and bench performance data were supplied to demonstrate that the EndoClear™ meets its labeled performance claims and to establish substantial equivalence to the predicate devices (Laparoscope and Monopolar laparoscopic instruments; Instrumed International, Inc. K040855 and g-Lix™ Tissue Grasper; USGI Medical K061268). |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size:
- Bench Tests: The document does not specify a numerical sample size for the bench tests. It states "Series of bench tests were performed."
- Animal Study: The document does not specify a numerical sample size (e.g., number of animals) for the animal study. It just states "An animal study was performed."
- Data Provenance: The document does not explicitly state the country of origin for the studies. Given the applicant's address (Israel), it's highly probable the studies were conducted in Israel or a location collaborating with the applicant. The studies described are prospective in nature, as they involve testing the device to evaluate its performance.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not provide information regarding the number of experts, their qualifications, or the method used to establish ground truth for the test set in either the bench tests or the animal study. The evaluation of performance in the animal study would typically involve veterinary surgeons or researchers, but this is not detailed.
4. Adjudication Method for the Test Set
The document does not describe any adjudication method (e.g., 2+1, 3+1, none) for the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No multi-reader multi-case (MRMC) comparative effectiveness study was mentioned or conducted, as this device is a physical tool for cleaning a laparoscope lens, not an imaging or diagnostic algorithm used by multiple human readers.
6. Standalone (Algorithm Only) Performance Study
No standalone (algorithm only) performance study was conducted. The EndoClear™ system is a physical medical device, not an algorithm, and it is intended for human-in-the-loop use by a surgeon during laparoscopic procedures.
7. Type of Ground Truth Used
- Bench Tests: The ground truth for bench tests would be defined by pre-established engineering specifications or performance metrics for lens cleaning effectiveness (e.g., clarity measurements after cleaning, removal of specific contaminants).
- Animal Study: The ground truth for the animal study would likely be a combination of direct observation by the surgical team/researchers on the safety (e.g., lack of device-related injury) and effectiveness (e.g., visual clarity through the endoscope, ability to complete the intended surgical task). The document implies a qualitative assessment ("performs as intended," "no safety and effectiveness questions were raised").
8. Sample Size for the Training Set
This information is not applicable as the EndoClear™ system is a physical device and does not involve a "training set" in the context of machine learning algorithms.
9. How the Ground Truth for the Training Set Was Established
This information is not applicable for the same reason as point 8.
(108 days)
Intended to be used for making accurate dental impressions. The resulting impressions are used to make plaster models of the teeth.
Not Found
The provided text is a 510(k) clearance letter from the FDA for a dental impression material named "Virtual 380". This document focuses on regulatory approval based on demonstrating substantial equivalence to a predicate device, rather than detailed performance studies and acceptance criteria typically found in a clinical study report or a more comprehensive premarket submission.
Therefore, the requested information regarding acceptance criteria, study details, sample sizes, expert qualifications, adjudication methods, MRMC studies, standalone performance, and ground truth establishment cannot be extracted from the provided text.
The document states: "We have reviewed your Section 510(k) premarket notification of intent to market the device referenced above and have determined the device is substantially equivalent (for the indications for use stated in the enclosure) to legally marketed predicate devices marketed in interstate commerce prior to May 28, 1976..." This indicates that the approval is based on demonstrating equivalence, not necessarily on a new clinical study with specific acceptance criteria as you've outlined.
To answer your questions, one would need to review the actual 510(k) submission document (K060891) itself, which is not provided here.
(44 days)
This device employs previously scanned DICOM CT images in a software tool which serves as an aid to visualizing and pre-planning of dental implant surgery.
Virtual Implant Placement, or simply VIP, is a software program that allows dental implant clinicians to pre-plan their implant surgeries and/or to design surgical appliances that will be used during surgery. The program presents the clinician with various reformatted CT images of the patient's jaw(s), allows the placement and manipulation of virtual implants, and provides measurement and other tools to assist the clinician. In typical usage, a dentist evaluating a patient for dental implant surgery will often refer the patient for a CT scan to better visualize the patient's anatomy and check the amount and density of the bone for its suitability for placing implants. The CT scan site will return the axial images from the CT scan on a CD in industry-standard DICOM format. Upon receipt of the CD, the doctor will "process" the case using VIP. Axial images are well known to radiologists, but foreign to dentists. Processing involves the removal of unnecessary images which are outside the region of interest, and drawing a curve which will be used for the later reformatting of the data to produce images more familiar to dentists. After opening a disk of images, VIP will display the axial images and thumbnails of these, along with a scout view and a checklist of steps to follow in processing the case. After the case has been processed, the axial data will be processed to make panoramic images, which are parallel to the curve that was drawn during processing, and cross-sectional images, which are perpendicular to the panoramic images. Both types of images are normally generated by the Panorex machines dentists are familiar with. Since the primary purpose of VIP is to aid in the planning of implant surgeries, VIP will allow the surgeon to place simulated implants on the image and to gauge their size and position relative to the surrounding anatomy. The simulated implants will be generic models of standard dental implants, which range from cylindrical to conical. When the data becomes available from various implant manufacturers, VIP will allow the user to pick from specific, currently manufactured implants to approximately model any of their favorite implants.
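The summary describes the reformatting only at a high level: a panoramic image parallel to the user-drawn curve, plus cross-sections perpendicular to it. As a rough illustration of that kind of curved planar reformat, the sketch below uses pydicom and NumPy; the function names, nearest-neighbor sampling, and curve representation are assumptions for illustration, not VIP's actual implementation:

```python
import numpy as np
import pydicom  # reads the industry-standard DICOM axial images returned by the CT site

def load_axial_volume(paths):
    """Stack axial DICOM slices into a 3D volume ordered by table (z) position."""
    slices = sorted((pydicom.dcmread(p) for p in paths),
                    key=lambda s: float(s.ImagePositionPatient[2]))
    return np.stack([s.pixel_array for s in slices])  # shape (z, y, x)

def panoramic_reformat(volume, curve_xy, n_samples=512):
    """
    Curved planar reformat: resample the volume along a user-drawn curve in axial
    (x, y) voxel coordinates. Columns of the output follow the curve; rows follow z.
    """
    curve = np.asarray(curve_xy, dtype=float)            # (k, 2) control points
    t = np.linspace(0, len(curve) - 1, n_samples)        # positions along the polyline
    xs = np.interp(t, np.arange(len(curve)), curve[:, 0])
    ys = np.interp(t, np.arange(len(curve)), curve[:, 1])
    # Nearest-neighbor sampling; cross-sections would be sampled perpendicular to the curve.
    return volume[:, np.round(ys).astype(int), np.round(xs).astype(int)]  # (z, n_samples)

# Hypothetical usage:
# vol = load_axial_volume(dicom_paths)
# pano = panoramic_reformat(vol, curve_xy=[(120, 260), (256, 210), (390, 260)])
```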
The provided text is a 510(k) summary for the Virtual Implant Placement™ (VIP) Dental Implant Surgery Planning Software. It details the device's intended use and compares it to legally marketed predicate devices to establish substantial equivalence.
Based on the provided text, here's an analysis of the requested information:
1. Table of acceptance criteria and the reported device performance:
The document does not explicitly state "acceptance criteria" in a quantitative, measurable form for the device's performance. Instead, it focuses on establishing substantial equivalence to predicate devices through a qualitative comparison of features and intended use. The device's performance is implicitly judged by its ability to perform similar functions as the predicate devices.
Feature / Criterion | Predicate Device 1: SimPlant system, K033849 (Materialise) | Predicate Device 2: ImplantMaster, K042212 (I-Dent Ltd.) | Virtual Implant Placement™ (VIP) - Reported Performance |
---|---|---|---|
Image Source | CT Scanner | DICOM CT | DICOM CT |
Main Indication / Purpose | Medical front-end software for visualizing gray value images, image segmentation, transfer of imaging information, planning and simulation for dental implant placement and surgical treatment. | Uses DICOM CT data for visualization, diagnosis, and treatment planning for dental implant surgery. | Employs previously scanned DICOM CT images as an aid to visualizing and pre-planning of dental implant surgery. |
Tools | Visualization, Implant placement, measurement of distances, angles, and density. | Visualization, Implant placement. | Visualization, Implant placement, Distance measurement, Angle measurement, Rectangular measurement, Elliptical measurement. |
Conclusion of Equivalence | N/A (Predicate) | N/A (Predicate) | "In all important respects, the VIP is substantially equivalent to one or more predicate systems." |
No specific quantitative performance metrics (e.g., accuracy, precision, sensitivity, specificity) or corresponding acceptance criteria are provided in this document. The "device performance" is described through its functionalities and comparison to existing devices.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
The document does not mention any specific sample size for a test set or the provenance of any data used for testing. The submission is focused on establishing substantial equivalence based on a comparison of features and indications for use.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
The document does not mention the use of experts to establish ground truth for a test set. No details are provided regarding any clinical validation studies with expert review.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
Since no test set or expert ground truth establishment is mentioned, there is no information on adjudication methods.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
The document does not indicate that an MRMC comparative effectiveness study was done. The device itself is described as a "software tool which serves as an aid," implying human-in-the-loop, but no data on human performance improvement with or without the software is provided.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
The device is explicitly described as "an aid to visualizing and pre-planning," meaning it's intended to be used with human involvement. Therefore, a standalone (algorithm only) performance assessment would not be directly relevant to its intended use, and no mention of such a study is made.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
The document does not describe the type of ground truth used because it does not refer to any formal performance study that would require ground truth. The basis for substantial equivalence is primarily a functional and indications-for-use comparison with predicate devices.
8. The sample size for the training set:
The document does not mention any sample size for a training set. As the application describes software for planning based on existing DICOM CT images, it's not clear if a machine learning model requiring a traditional "training set" (in the AI/ML sense) was used or if it's primarily rule-based or image processing software.
9. How the ground truth for the training set was established:
Since no training set is mentioned, no information is provided on how ground truth for a training set was established.
(13 days)
The VT 3000 is designed to conduct a range of tests including Nerve Conduction Studies (NCS) and Evoked Potentials (EP).
Not Found
This document appears to be an FDA 510(k) clearance letter for a medical device (Virtual Medical Systems VT 3000, a nerve conduction velocity measurement device). It does not contain the kind of detailed study information (acceptance criteria, performance data tables, sample sizes, ground truth establishment, expert qualifications, etc.) that would be part of a submission to demonstrate clinical effectiveness or safety as you've requested.
The document primarily focuses on:
- Confirming substantial equivalence to a predicate device.
- Indications for use (Nerve Conduction Studies and Evoked Potentials).
- Regulatory classification (Class II).
- General regulatory requirements for the manufacturer.
Therefore, based solely on the provided text, it is not possible to describe the acceptance criteria and the study proving the device meets them. The information you've requested typically comes from the detailed technical sections of the 510(k) submission itself, which are not included here.
The provided text does not contain any of the following information:
- A table of acceptance criteria and the reported device performance
- Sample size used for the test set and the data provenance
- Number of experts used to establish the ground truth for the test set and their qualifications
- Adjudication method for the test set
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done
- If a standalone performance analysis was done
- The type of ground truth used
- The sample size for the training set
- How the ground truth for the training set was established