Search Results
Found 124 results
510(k) Data Aggregation
(33 days)
Hybrid Viewer (00859873006240)
(155 days)
Segmentron Viewer
(174 days)
AV Viewer
The AV Viewer is an advanced visualization software intended to process and display images and associated data in a clinical setting.
The software displays images of different modalities and timepoints, and performs digital image processing, measurement, manipulation, quantification and communication.
The AV Viewer is not to be used for mammography.
AV Viewer is an advanced visualization software which processes and displays clinical images from the following modality types: CT, CBCT – CT format, Spectral CT, MR, EMR, NM, PET, SPECT, US, XA (iXR, DXR), DX, CR and RF.
The main features of the AV Viewer are:
• Viewing of current and prior studies
• Basic image manipulation functions such as real-time zooming, scrolling, panning, windowing, and rolling/rotating.
• Advanced processing tools assisting in the interpretation of clinical images, such as 2D slice view, 2D and 3D measurements, user-defined regions of interest (ROIs), 3D segmentation and editing, 3D model visualization, MPR (multi-planar reconstruction) generation, image fusion, and more.
• A findings dashboard used for capturing and displaying patient findings as an overview.
• Customizable workflows, allowing users to create their own workflows.
• Tools to export customizable reports to the Radiology Information System (RIS) or PACS (Picture Archiving and Communication System) in different formats.
AV Viewer is based on the AV Framework, an infrastructure that provides the basis for the AV Viewer and common functionalities such as image viewing, image editing tools, measurement tools, and the findings dashboard.
The AV Viewer can be hosted on multiple platforms and devices, such as the Philips AVW, a Philips CT/MR scanner console, or in the cloud.
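The feature list above mentions MPR (multi-planar reconstruction) generation. As a minimal sketch of the general idea only, not the AV Viewer's actual implementation, the following assumes a CT volume has already been loaded into a NumPy array ordered (slice, row, column); the `volume` variable and its contents are illustrative assumptions.

```python
import numpy as np

# Hypothetical CT volume ordered as (axial slice z, row y, column x),
# filled with made-up Hounsfield-unit values for illustration only.
volume = np.random.randint(-1000, 1000, size=(120, 256, 256)).astype(np.int16)

def orthogonal_mpr(vol, z=None, y=None, x=None):
    """Return the three orthogonal multi-planar reconstructions through a point
    (defaults to the volume centre when an index is not supplied)."""
    z = vol.shape[0] // 2 if z is None else z
    y = vol.shape[1] // 2 if y is None else y
    x = vol.shape[2] // 2 if x is None else x
    axial = vol[z, :, :]      # native slice orientation
    coronal = vol[:, y, :]    # reconstructed front-to-back plane
    sagittal = vol[:, :, x]   # reconstructed left-to-right plane
    return axial, coronal, sagittal

axial, coronal, sagittal = orthogonal_mpr(volume)
print(axial.shape, coronal.shape, sagittal.shape)  # (256, 256) (120, 256) (120, 256)
```

A clinical MPR additionally resamples for anisotropic voxel spacing and supports oblique planes with interpolation; those steps are omitted here.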
The provided FDA 510(k) clearance letter for the AV Viewer device indicates that the device has met its acceptance criteria through various verification and validation activities. However, the document does not include detailed quantitative acceptance criteria (e.g., specific thresholds for accuracy, sensitivity, specificity, or measurement error) or comprehensive performance data that would typically be presented in a clinical study report. The submission focuses on demonstrating "substantial equivalence" to a predicate device rather than presenting detailed performance efficacy of the algorithm itself.
Therefore, much of the requested information regarding specific performance metrics, sample sizes for test and training sets, expert qualifications, and detailed study methodologies is not explicitly stated in this 510(k) summary. I will extract and infer what is present and explicitly state when information is missing.
Here's a breakdown based on the provided document:
Acceptance Criteria and Device Performance
The document describes comprehensive verification and validation activities, including "Bench Testing" for measurements and segmentation algorithms. However, specific quantitative acceptance criteria (e.g., "accuracy > 95%") and the reported performance values are not detailed in this summary. The general statement is that "Product requirement specifications were tested and found to meet the requirements" and "The validation objectives have been fulfilled, and the validation results provide evidence that the product meets its intended use and user requirements."
Table of Acceptance Criteria and Reported Device Performance
Feature/Metric | Acceptance Criteria (Quantified) | Reported Device Performance (Quantified) | Supporting Study Type mentioned |
---|---|---|---|
General Functionality | Meets product requirement specifications | Meets product requirements | Verification, Validation |
Clinical Use Simulation | Successful performance in use case scenarios | Passed successfully by clinical expert | Expert Test |
Measurement Accuracy | Not explicitly stated | "Correctness of the various measurement functions" | Bench Testing |
Segmentation Accuracy | Not explicitly stated | "Performance" validated for segmentation algorithms | Bench Testing |
User Requirements | Meets user requirement specifications | Meets user requirements | Validation |
Safety and Effectiveness | Equivalent to predicate device | Safe and effective; substantially equivalent to predicate | Verification, Validation, Substantial Equivalence Comparison |
Note: The quantitative details for the "Acceptance Criteria" and "Reported Device Performance" for measurement accuracy and segmentation accuracy are missing from this 510(k) summary. The document only confirms that these tests were performed and the results were positive.
Study Details Based on the Provided Document:
2. Sample sizes used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Test Set Sample Size: Not explicitly stated. The document mentions "Verification," "Validation," "Expert Test," and "Bench Testing" were performed, implying the use of test data, but no specific number of cases or images in the test set is provided.
- Data Provenance: Not explicitly stated. The document does not specify the country of origin of the data used for testing, nor does it explicitly state whether the data was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not explicitly stated. The "Expert Test" mentions "a clinical expert" (singular) was used to test use case scenarios. It does not mention experts establishing ground truth for broader validation.
- Qualifications of Experts: The "Expert Test" mentions "a clinical expert." For intended users, the document states "trained professionals, including but not limited to, physicians and medical technicians" (Subject Device), and "trained professionals, including but not limited to radiologists" (Predicate Device). It can be inferred that the "clinical expert" would hold one of these qualifications, but specific details (e.g., years of experience, subspecialty) are not provided.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Adjudication Method: Not explicitly stated. The document refers to "Expert test" where "a clinical expert" tested scenarios, implying individual assessment rather than a multi-reader adjudication process for establishing ground truth for a test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Comparative Effectiveness Study: Not explicitly stated or implied. The document focuses on the device's substantial equivalence to a predicate device and its internal verification and validation. There is no mention of a human-in-the-loop MRMC study to compare reader performance with and without AV Viewer assistance. The AV Viewer is described as an "advanced visualization software" and not specifically an AI-driven diagnostic aid that would typically warrant such a study for demonstrating improved reader performance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Standalone Performance Study: The "Bench Testing" section states that it "was performed on the measurements and segmentation algorithms to validate their performance and the correctness of the various measurement functions." This implies a standalone evaluation of these specific algorithms. However, the quantitative results (e.g., accuracy, precision) of this standalone performance are not provided.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: For the "Bench Testing" of measurement and segmentation algorithms, the ground truth would likely be based on reference measurements/segmentations, possibly done manually by experts or using highly accurate, non-clinical methods. For other verification/validation activities, the ground truth would be against the pre-defined product and user requirements. However, explicit details about how this ground truth was established (e.g., expert consensus, comparison to gold standard devices/methods) are not specified.
8. The sample size for the training set
- Training Set Sample Size: Not explicitly stated. The document does not mention details about the training data used to develop the AV Viewer's algorithms. The focus is on validation for regulatory clearance. Since the product is primarily an "advanced visualization software" with general image processing tools, much of its functionality might not rely on deep learning requiring large, distinct training sets in the same way a specific AI diagnostic algorithm would.
9. How the ground truth for the training set was established
- Ground Truth for Training Set: Not explicitly stated. As no training set details are provided, the method for establishing its ground truth is also not mentioned.
Summary of Missing Information:
This 510(k) summary provides a high-level overview of the device's intended use, comparison to a predicate, and the types of verification and validation activities conducted. It largely focuses on demonstrating "substantial equivalence" based on similar indications for use and technological characteristics. Critical quantitative details about the performance of specific algorithms (measurements, segmentation), the size and characteristics of the datasets used for testing, and the methodology for establishing ground truth are not included in this public summary. Such detailed performance data is typically found in the full 510(k) submission, which is not publicly released in its entirety.
(272 days)
OPXION Optical Skin Viewer (OPXSV1-01F)
OPXION Optical Skin Viewer is a non-invasive imaging system intended to be used for real-time visualization of the external tissues of the human body. The two-dimensional, cross-sectional, three-dimensional, and en-face images of tissue microstructures can be obtained.
OPXION Optical Skin Viewer is composed of two parts consisting of a handheld probe and a mainframe, connected by an optical fiber cable. The device comes with three accessories: a USB 3.0 cable, a power adapter, and a power cord. The Optical Skin Viewer needs to be connected to a laptop or a personal computer. The device uses Optical Coherence Tomography (OCT) technology with a Superluminescent diode, 840 nm, 6 mW light source.
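As a rough illustration of how the cross-sectional and en-face views described above relate to a 3D OCT dataset, the following sketch assumes the acquired volume is available as a NumPy array with axes (depth, fast scan, slow scan); the array name, axis order, and sizes are assumptions rather than details from the submission.

```python
import numpy as np

# Hypothetical OCT volume: axes (depth z, fast-scan x, slow-scan y).
volume = np.random.rand(512, 256, 256).astype(np.float32)

b_scan = volume[:, :, 100]                 # cross-sectional image (depth vs. lateral position)
en_face_slice = volume[200, :, :]          # en-face image at one fixed depth
en_face_projection = volume.mean(axis=0)   # en-face projection averaged over depth

print(b_scan.shape, en_face_slice.shape, en_face_projection.shape)
```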
Based on the provided FDA 510(k) clearance letter for the OPXION Optical Skin Viewer, an optical device that visualizes external tissue and is not an AI/ML powered device, the document does not contain the specific information requested about acceptance criteria and a study that proves the device meets the acceptance criteria in the context of AI/ML performance.
The 510(k) summary focuses on demonstrating substantial equivalence to a predicate device (VivoSight Topical OCT System) primarily based on intended use, technology (Optical Coherence Tomography), and general performance (image quality accepted by a qualified medical professional for visualization).
Therefore, I cannot provide a table of acceptance criteria, sample sizes for test sets, number of experts for ground truth, adjudication methods, MRMC studies, standalone performance, or details about training sets, as these specific details are not present in the provided document, nor are they typically required for a Class II medical imaging device like this one unless it incorporates AI/ML for diagnostic or interpretive functions.
However, I can extract the general acceptance criteria and the type of study conducted for this device based on the provided text:
Acceptance Criteria and Study:
The document describes the device's performance in terms of its ability to produce images for visualization, rather than offering specific quantitative metrics for diagnostic accuracy, sensitivity, or specificity that would be typical for an AI/ML driven device.
Here's an interpretation based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria (General) | Reported Device Performance (General) |
---|---|
Image quality confirmed and accepted by a qualified medical professional. | The OPXION Optical Skin Viewer demonstrated consistent performance in producing images of a quality that is substantially equivalent to that produced by the cited predicate device. The device successfully displayed anatomical features of skin. |
No adverse events or safety concerns were reported. The scanning process was well-tolerated by all subjects. | |
Safe and effective clinical imaging device capable of generating two-dimensional, cross-sectional, three-dimensional, and en-face images of external tissue microstructure. | |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The study included three subjects with healthy skin and five subjects with diseased skin conditions.
- Data Provenance: Not explicitly stated, but implies a prospective study given the "Study Design" description of scanning "each target area in three sessions." The country of origin of the data is not specified in the 510(k) summary.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: The document states that "Image quality was confirmed and accepted by a qualified medical professional." This implies at least one, but the exact number is not specified.
- Qualifications of Experts: Described as "a qualified medical professional." No specific specialty (e.g., dermatologist, radiologist) or years of experience are provided.
4. Adjudication Method for the Test Set
- Adjudication Method: Not explicitly stated as an adjudication method in the context of multiple readers reaching consensus. The acceptance criterion notes "Image quality was confirmed and accepted by a qualified medical professional," which suggests a single reviewer or possibly an internal review process where consensus was reached without a formal adjudication method described.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
- MRMC Study: No, an MRMC comparative effectiveness study was not conducted or described in the provided document. The study focuses on the device's ability to produce images and its substantial equivalence to a predicate, not on how human readers perform with or without the device.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Standalone Performance: This device is an imaging system for visualization, not an AI/ML algorithm that provides diagnostic outputs. Therefore, the concept of "standalone performance" of an algorithm is not applicable or described. Its "performance" is its ability to acquire and display images.
7. The Type of Ground Truth Used
- Type of Ground Truth: The ground truth for this device's performance evaluation was the visual assessment and acceptance of image quality by a qualified medical professional, based on the successful display of "anatomical features of skin" for both healthy and diseased conditions. This is a form of expert consensus/acceptance on display quality.
8. The Sample Size for the Training Set
- Training Set Sample Size: Not applicable. This is an optical imaging device, not an AI/ML algorithm that undergoes a training phase with a specific dataset.
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: Not applicable, as there is no mention of an AI/ML training set.
In summary, the provided FDA 510(k) letter describes a traditional medical imaging device focused on visualization, not an AI/ML-powered device. Therefore, the detailed criteria and study designs typically associated with AI/ML device validation are absent from this document.
(81 days)
PathPresenter Clinical Viewer
For In Vitro Diagnostic Use
The PathPresenter Clinical Viewer is a software intended for viewing and managing whole slide images of scanned glass slides derived from formalin fixed paraffin embedded (FFPE) tissue. It is an aid to pathologists to review and render a diagnosis using the digital images for the purposes of primary diagnosis. PathPresenter Clinical is not intended for use with frozen sections, cytology specimens, or non-FFPE specimens. It is the responsibility of the pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images using PathPresenter Clinical software. PathPresenter Clinical Viewer is intended for use with Hamamatsu NanoZoomer S360MD Slide scanner NDPI image formats viewed on the Barco NV MDPC-8127 display device.
The PathPresenter Clinical Viewer (version V1.0.1) is a web-based software application designed for viewing and managing whole slide images generated from scanned glass slides of formalin-fixed, paraffin-embedded (FFPE) surgical pathology tissue. It serves as a diagnostic aid, enabling pathologists to review digital images and render a primary pathology diagnosis. Functions of the viewer include zooming and panning the image, annotating the image, measuring distances and areas in the image and retrieving multiple images from the slide tray including prior cases and deprecated slides.
Here's a breakdown of the acceptance criteria and study information for the PathPresenter Clinical Viewer based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Device Performance for PathPresenter Clinical Viewer
1. Table of Acceptance Criteria and Reported Device Performance
Test | Acceptance Criteria | Reported Device Performance |
---|---|---|
Pixelwise Comparison | The 95th percentile of the pixel-wise color difference in any image pair is less than 3 CIEDE2000 | |
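As a hedged sketch of how a pixel-wise CIEDE2000 comparison against the 95th-percentile-below-3 criterion above could be computed, the snippet below uses scikit-image; it is an illustration only, not PathPresenter's actual test procedure, and the file names are placeholders.

```python
import numpy as np
from skimage import io
from skimage.util import img_as_float
from skimage.color import rgb2lab, deltaE_ciede2000

# Placeholder file names: the same slide region rendered by the subject viewer
# and by the reference viewer, aligned pixel-for-pixel.
img_a = img_as_float(io.imread("subject_viewer_render.png"))[..., :3]
img_b = img_as_float(io.imread("reference_viewer_render.png"))[..., :3]

# Convert sRGB to CIELAB and compute the per-pixel CIEDE2000 colour difference.
delta_e = deltaE_ciede2000(rgb2lab(img_a), rgb2lab(img_b))

p95 = np.percentile(delta_e, 95)
print(f"95th percentile CIEDE2000: {p95:.2f}")
print("PASS" if p95 < 3 else "FAIL")
```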
(73 days)
Merge Universal Viewer (MUV)
Merge Universal Viewer (MUV) is intended to provide internet access to multi-modality softcopy medical images, medical data, reports, and other patient-related information to conduct diagnostic review, processing, analyzing, reporting and sharing of DICOM-compliant medical images and relevant digital data.
MUV provides functionality that allows for creating and outputting digital files suitable for the fabrication of physical replicas, such as 3D printing, using DICOM files as inputs. Physical/3D printed models generated from the digital output files are not for diagnostic use.
MUV is intended to be used by trained healthcare professionals.
MUV can be configured to provide either lossless or lossy compressed images for display. The medical professional user must determine the appropriate level of image data compression that is suitable for their purpose.
Lossy compressed mammographic images and digitized film screen images must not be reviewed for primary image interpretations. Mammographic images may only be interpreted using an FDA approved monitor that offers at least 5 MP resolution and meets other technical specifications reviewed and accepted by FDA.
Display monitors used for reading medical images for diagnostic purposes must comply with applicable regulatory approvals and with quality control requirements for their use and maintenance. Use of MUV on mobile devices such as iPhones and iPads is not intended for diagnostic use.
Merge Universal Viewer (formerly known as IBM iConnect Access) is a software application that is intended to provide internet access to multi-modality softcopy medical images, medical data, reports, and other patient-related information to conduct diagnostic review, processing, analyzing, reporting and sharing of DICOM-compliant medical images and relevant digital data. Merge Universal Viewer provides healthcare professionals with access to diagnostic quality images, reports, and various types of patient data over conventional TCP/IP networks.
Merge Universal Viewer was designed with an easy and convenient workflow providing image viewing and manipulation capabilities including but not limited to zoom, pan, window/level, scroll, CINE, link series, and MPR. Additionally, the existing Merge Universal Viewer offers measurement and analysis tools such as line measurement, cross reference lines, rectangle, ellipse, perfect circle, freehand ROI, angle, Cobb angle, calibration, pixel value, plumb lines and cardiac calcium scoring.
A high-level overview of the modifications to the subject device being introduced as part of this 510(k) is as follows:
- Ability to display Mammography CAD SR
- Addition of the Volumetric SUV (Standard Uptake Value) to the measurement tools (a sketch of the SUV formula follows this list)
- Addition of a DICOM Structured Report (SR) ingestion panel, the "Findings Panel":
  - Display of lung nodule detection and characteristics
  - Generalized lesion tracking (for CT and MR studies)
- Addition of cardiology measurement tools (for cardiac ultrasound studies)
- Miscellaneous updates such as:
  - Cybersecurity improvements to ensure full compliance with FDA's Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions Guidance issued September 27, 2023.
  - The ability to display mammography images in full resolution using a keyboard shortcut
  - Bug fixes
  - Labeling update, i.e., revised Indications for Use statement to reflect the new branding as well as to align with the current industry standards to consolidate the information
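As context for the Volumetric SUV tool in the list above: the body-weight SUV is conventionally the decay-corrected activity concentration in a region divided by the injected dose per unit body weight. The sketch below shows that generic formula only; the function name and numbers are illustrative assumptions, not details from the MUV submission.

```python
import numpy as np

def suv_bw(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight SUV: tissue activity concentration (Bq/mL) divided by the
    injected dose per gram of body weight, treating 1 g of tissue as ~1 mL."""
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

# Illustrative numbers: decay-corrected voxel values from a PET ROI (Bq/mL),
# a 370 MBq injection, and a 75 kg patient.
roi_voxels = np.array([7.5e3, 8.2e3, 8.1e3, 7.9e3])
suv = suv_bw(roi_voxels, injected_dose_bq=370e6, body_weight_g=75_000)
print(f"SUVmean = {suv.mean():.2f}, SUVmax = {suv.max():.2f}")
```

A volumetric SUV tool would typically report such statistics (e.g., SUVmean, SUVmax) over a user-defined 3D region rather than a single voxel.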
This FDA 510(k) clearance letter and summary for Merge Universal Viewer (MUV) K250301 primarily focuses on administrative and functional updates, and the "device" in question is medical image management and processing software (MUV). The provided document does not contain explicit acceptance criteria or details of a rigorous clinical study demonstrating the performance of the MUV in a diagnostic context against specific metrics like sensitivity, specificity, or reader agreement improvement.
The document states:
- "No clinical testing was performed as part of performance testing for MUV 9.0." (Page 9)
- The modifications are primarily a "branding update," "consolidation of information," and addition of features (e.g., display of Mammography CAD SR, Volumetric SUV, DICOM SR ingestion panel, cardiology measurement tools, cybersecurity improvements, full-resolution mammography display with keyboard shortcut, bug fixes). (Pages 7-8)
- The comparison is primarily focused on "technological characteristics" and "intended use" relative to predicate devices, and internal software verification and validation. (Page 8-9)
Therefore, based solely on the provided text, it is not possible to fill out the requested information regarding acceptance criteria and a study proving "the device meets the acceptance criteria" in terms of clinical diagnostic performance. The acceptance criteria described are internal to software development and regulatory compliance, not clinical diagnostic accuracy.
However, I will address what can be inferred or directly stated from the provided document regarding the requested categories:
Based on the provided FDA 510(k) Clearance Letter and Summary for Merge Universal Viewer (MUV) K250301, the "acceptance criteria" and "study proving the device meets the acceptance criteria" are focused on software functionality, safety, and equivalence to predicate devices, rather than a clinical performance study with specific diagnostic accuracy metrics.
Here's a breakdown of the information available:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Specific Criteria (Inferred/Stated) | Reported Device Performance |
---|---|---|
Software Functionality | All planned verification tests met their acceptance criteria. | All planned verification tests were performed and met their acceptance criteria. (Page 9) |
Cybersecurity | Compliance with FDA guidance "Cybersecurity in Medical Devices..." and all planned tests met their acceptance criteria. | All planned tests were performed and met their acceptance criteria. (Page 9) |
Usability | Acceptability of user interactions; no new use errors or use-related risks identified that could lead to patient or user harm. | Results demonstrated MUV 9.0 met the acceptance criteria and no new use errors or use-related risks were identified. (Page 9) |
Design Validation | Coverage of clinical workflow scenarios and user needs (including new features); all planned tests met their acceptance criteria. | All planned tests were performed and met their acceptance criteria. (Page 9) |
Substantial Equivalence | Device features, design, safety, and effectiveness are comparable to legally marketed predicate devices. | Non-clinical testing confirmed differences did not adversely affect safety/effectiveness and demonstrated substantial equivalence. (Page 9) |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not specified for any of the described tests (software verification, cybersecurity, usability, design validation). The document describes the types of tests performed on the software, not the number of specific cases or data points used.
- Data Provenance: Not specified. Given the nature of the tests (software verification, cybersecurity, usability, design validation), the "data" would be test results and logs generated during internal development and validation, rather than patient imaging data used in a clinical performance study. The document states "No clinical testing was performed." (Page 9)
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified.
- For software verification, cybersecurity, and design validation, these would likely be internal software engineers, quality assurance personnel, and subject matter experts on medical imaging systems.
- For usability testing, "trained healthcare professionals" are mentioned as the intended users, but the specific qualifications of those who participated in usability testing are not detailed.
4. Adjudication Method for the Test Set
- Adjudication Method: Not applicable or not specified. The described tests are about software functionality, usability, and security, not clinical diagnostic interpretation requiring adjudication of reader opinions.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- MRMC Study: No, an MRMC comparative effectiveness study was not performed. The document explicitly states: "No clinical testing was performed as part of performance testing for MUV 9.0." (Page 9)
- Effect Size: Not applicable.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
- Standalone Performance: No, a standalone performance study of an algorithm in a diagnostic context was not performed. The device is a viewer, not an AI diagnostic algorithm meant to be used standalone. It displays data, including "Mammography CAD SR" and "DICOM Structured Report (SR) ingestion panel," which implies it can display outputs from other algorithms, but it is not itself an algorithm generating diagnostic interpretations.
7. The Type of Ground Truth Used
- Type of Ground Truth: For the software validation and verification, the "ground truth" would be the expected functionality, design specifications, and security requirements laid out by the developers and in regulatory guidance. It is not clinical ground truth (e.g., pathology, expert consensus, outcomes data) as no clinical performance study was conducted.
8. The Sample Size for the Training Set
- Training Set Sample Size: Not applicable. MUV is an image viewer and management system, not a machine learning model that undergoes a "training" phase with a dataset in the typical sense of AI/ML development for diagnostic tasks.
9. How the Ground Truth for the Training Set Was Established
- Training Set Ground Truth Establishment: Not applicable, as MUV is not an AI/ML model for diagnostic training.
In summary, the provided FDA document focuses on the safety and effectiveness of the Merge Universal Viewer 9.0 primarily through demonstrating:
- Its functional integrity through software verification and design validation.
- Its cybersecurity resilience.
- Its usability for trained healthcare professionals.
- Its substantial equivalence to previously cleared predicate devices for image management and viewing, including the display of information from other diagnostic tools (like CAD SR or structured reports), but not its own diagnostic performance or improvement in human reader accuracy.
(226 days)
Viewer+
For In Vitro Diagnostic Use
Viewer+ is a software only device intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret and manage digital images of pathology slides for primary diagnosis. Viewer+ is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. Viewer+ is intended for use with Hamamatsu NanoZoomer S360MD Slide scanner and BARCO MDPC-8127 display.
Viewer+, version 1.0.1, is a web-based software device that facilitates the viewing and navigating of digitized pathology images of slides prepared from FFPE-tissue specimens acquired from Hamamatsu NanoZoomer S360MD Slide scanner and viewed on BARCO MDPC-8127 display. Viewer+ renders these digitized pathology images for review, management, and navigation for pathology primary diagnosis.
Viewer+ is operated as follows:
- Image acquisition is performed using the NanoZoomer S360MD Slide scanner according to its Instructions for Use. The operator performs quality control of the digital slides per the instructions of the NanoZoomer and lab specifications to determine if re-scans are necessary.
- Once image acquisition is complete and the image becomes available in the scanner's database file system, a separate medical image communications software (not part of the device) automatically uploads the image and its corresponding metadata to persistent cloud storage. Image and data integrity checks are performed during the upload to ensure data accuracy.
- The subject device enables the reading pathologist to open a patient case, view the images, and perform actions such as zooming, panning, measuring distances and areas, and annotating images as needed. After reviewing all images for a case, the pathologist will render a diagnosis.
Here's a breakdown of the acceptance criteria and the study details for the Viewer+ device, based on the provided FDA 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criterion | Reported Device Performance |
---|---|
Pixel-wise comparison (of images reproduced by Viewer+ and NZViewMD for the same file generated from NanoZoomer S360MD Slide Scanner) | The 95th percentile of pixel-wise differences between Viewer+ and NZViewMD was less than 3 CIEDE2000, indicating their output images are pixel-wise identical and visually adequate. |
Turnaround time (for opening, panning, and zooming an image) | Found to be adequate for the intended use of the device. |
Measurement accuracy (using scanned images of biological slides) | Viewer+ was found to perform accurate measurements with respect to its intended use. |
Usability testing | Demonstrated that the subject device is safe and effective for the intended users, uses, and use environments. |
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state the specific sample size of images or cases used for the "Test Set" in the performance studies. It mentions "scanned images of the biological slides" for measurement accuracy and "images reproduced by Viewer+ and NZViewMD for the same file" for pixel-wise comparison.
The data provenance (country of origin, retrospective/prospective) is also not specified in the provided text.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not specify the number of experts or their qualifications used to establish ground truth for the test set. It mentions that the device is "an aid to the pathologist" and that "It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision." However, this relates to the intended use and not a specific part of the performance testing described.
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1) used for establishing ground truth or evaluating the test set results. The pixel-wise comparison relies on quantitative color differences, and usability is assessed according to FDA guidance.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance is mentioned or implied in the provided text. The device is a "viewer" and not an AI-assisted diagnostic tool that would typically involve such a study.
6. Standalone Performance (Algorithm Only without Human-in-the-Loop)
The performance tests described (pixel-wise comparison, turnaround time, measurements) primarily relate to the technical functionality of the Viewer+ software itself, which is a viewing and management tool. These tests can be interpreted as standalone assessments of the software's performance in rendering images and providing basic functions like measurements. However, it's crucial to note that Viewer+ is an "aid to the pathologist" and not intended to provide automated diagnoses without human intervention. The "standalone" performance here refers to its core functionalities as a viewer, not as an autonomous diagnostic algorithm.
7. Type of Ground Truth Used
- Pixel-wise comparison: The ground truth for this test was the image reproduced by the predicate device's software (NZViewMD) for the same scanned file. The comparison was quantitative (CIEDE2000).
- Measurements: The ground truth would likely be established by known physical dimensions on the biological slides, verified by other means, or through precise calibration. The document states "Measurement accuracy has been verified using scanned images of the biological slides."
- Usability testing: The ground truth here is the fulfillment of usability requirements and user satisfaction/safety criteria, as assessed against FDA guidance.
8. Sample Size for the Training Set
The document does not mention the existence of a "training set" in the context of the Viewer+ device. This is a software-only device for viewing and managing images, not an AI/ML algorithm that typically requires a training set for model development.
9. How the Ground Truth for the Training Set Was Established
As no training set is mentioned for this device, information on how its ground truth was established is not applicable.
(271 days)
aiCockpit AI Viewer
AI Viewer is software for clinical review intended for use in General Radiology and other healthcare imaging applications.
AI Viewer is intended to be used with off-the-shelf computing devices. AI Viewer supports major desktop browsers such as Microsoft Edge, Chrome, and Safari.
AI Viewer can display annotations and measurements as an overlay on images from DICOM objects, and from AI software. The viewer can perform 3D Multi-Planar Reformatting (MPR), 3D Maximum Intensity Projection (MIP) and 3D Volume Rendering (VR). AI Viewer is purposed to aid in reviewing findings through its ability to display clinical documents and reports side by side with the images.
AI Viewer is a stand-alone software as a medical device (stand-alone SaMD) for clinicians able to use web browsers at client stations to view DICOM image data. It is intended to provide image and related information to render and interact with AI findings and annotations from Lung Nodule detection AI algorithms, but it does not directly generate any potential findings or diagnosis.
AI Viewer allows trained medical professionals to perform 2D, MPR & 3D image manipulations using the Adjustment Tools, including window level, rotate, flip, pan, stack scroll, and magnify. AI Viewer is also capable of organizing all the images and presenting them in a zero-footprint, web user interface, allowing the users to view images in their preferred layout.
AI Viewer allows trained medical professionals to act on the AI findings, e.g., accept/reject/edit. Editing tools include selecting descriptive features for findings as well as allowing the physician to control editing of quantifiable measurements first computed by AI, such as lengths and volumes.
The provided text is a 510(k) summary for the Fovia, Inc. aiCockpit AI Viewer. However, it does not contain the specific details required to fully describe the acceptance criteria and the study that proves the device meets them, as requested in your prompt.
Specifically, the document states: "Software Validation and Verification Testing was conducted, and documentation was provided as recommended by FDA's Guidance for Industry and FDA Staff..." and then lists general testing steps like "Full regression tests..." and "Bug fix verification...". It also explicitly states "Not Applicable" for more detailed information when describing "Non-Clinical and/or Clinical Tests Summary & Conclusions 21 CFR 807.92(b)".
This means the key information you're asking for, such as a table of acceptance criteria, reported device performance, sample sizes, ground truth establishment, and details of comparative effectiveness studies (MRMC or standalone), is not present in the provided text.
Based on the available information, I can only provide a summary of what the document does state about testing:
No specific acceptance criteria or detailed study results are provided in this document. The 510(k) summary indicates that software validation and verification testing was conducted according to FDA guidance, but it does not disclose the acceptance criteria, the reported performance, or the methodologies of these tests in detail.
Here's what can be extracted, acknowledging the significant gaps in information for your request:
1. A table of acceptance criteria and the reported device performance:
- Information Not Provided: The document does not specify any quantitative or qualitative acceptance criteria for the aiCockpit AI Viewer's performance. It also does not report specific device performance metrics against any criteria.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
- Information Not Provided: The document does not specify the sample size of any test set used, nor does it provide details about the data provenance (country of origin, retrospective/prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):
- Information Not Provided: The document does not mention the use of experts to establish a ground truth for a test set, nor does it specify their number or qualifications.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Information Not Provided: No adjudication method is mentioned in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- Information Not Provided: The document does not describe any MRMC comparative effectiveness study. The AI Viewer is described as a tool to "render and interact with AI findings and annotations from Lung Nodule detection AI algorithms" and to "display annotations and measurements as an overlay on images from DICOM objects, and from AI software," rather than an AI algorithm itself that would augment human reading for a direct performance comparison.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Information Not Provided: The document describes the aiCockpit AI Viewer as a "stand-alone software as medical device (Stand-alone SaMD)" for clinicians to view DICOM data and interact with AI findings. It explicitly states it "does not directly generate any potential findings or diagnosis." Therefore, a standalone performance of an algorithm itself is not relevant to this specific device's function as a viewer.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Information Not Provided: No ground truth information is provided in the document.
8. The sample size for the training set:
- Information Not Provided: The document does not discuss any training set, as the device is a viewer for AI findings, not an AI algorithm itself that requires training.
9. How the ground truth for the training set was established:
- Information Not Provided: Not applicable, as no training set or ground truth for a training set is mentioned for this device.
What the document does say about testing:
- Software Validation and Verification Testing Summary:
- "The verifications of features were performed by members of the product." (No specific roles mentioned for these "members")
- Test Steps Included:
- Full regression tests executed on the build with all feature implementations to uncover bugs.
- Bug fix verification for reported issues and regression tests on related features.
- End-to-end full tests executed on the candidate build to verify all functionalities and fixes.
- Guidance Followed: FDA's Guidance for Industry and FDA Staff, "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" and "Content of Premarket Submission for Management of Cybersecurity in Medical Devices."
In conclusion, while the document confirms that software validation and verification testing took place to support the substantial equivalence claim, it does not provide the specific performance data, acceptance criteria, sample sizes, or details about ground truth and expert involvement that your request asks for. This type of detailed information is typically found in the full 510(k) submission and associated test reports, which are not part of this summary document.
(133 days)
Lightning Viewer
Lightning Viewer is software for diagnostic and clinical review intended for use in General Radiology (images from modalities including CR/DR/XT, CT, FL, MR, MG, NM, US/US Echo, PET, VL) and other healthcare imaging applications.
Lightning Viewer is intended to be used with off-the-shelf computing devices. Display monitors used for reading medical images for diagnostic purposes must comply with applicable clinical requirements, regulatory approvals, and with quality control requirements for their use and maintenance. With appropriate display monitors, lighting, image quality, and level of lossy image compression, Lightning Viewer is intended for diagnostic purposes to be used by trained healthcare professionals. Display calibration and lighting conditions should be verified by viewing a test pattern prior to use for diagnostic purposes.
Lightning Viewer supports major desktop browsers such as Microsoft Edge, Chrome, Safari, and Windows. Lightning Viewer displays both lossless and lossy compressed images. Each healthcare professional must ensure that they have the necessary environment to ensure the appropriate image quality for their clinical purpose and determine the level of lossy image compression acceptable for their purpose. Lossy image compression should not be used for primary reading in mammography.
Lightning Viewer can be utilized for image manipulation by radiology technologists or other healthcare professionals. Lightning Viewer can be used to verify that images captured in a medical imaging system have a diagnostic quality: to correct viewing characteristics of the image such as orientation, rotation, and contrast; as well as to add annotations to mark significant findings or provide guidance for radiologists.
Lightning Viewer can store annotations and measurements as DICOM presentation states without changing the original image data. Lightning Viewer can display annotations and measurements as an overlay on images from DICOM objects, and from Computer-Aided Diagnosis (CAD) and AI software. The viewer can perform 3D Multi-Planar Reformatting (MPR), 3D Maximum Intensity Projection (MIP) and 3D Volume Rendering (VR). Lightning Viewer is purposed to aid in reviewing findings through its ability to display clinical documents and reports side by side with the images. This can be used side by side with a reporting tool to create diagnostic reports.
Lightning Viewer is a web-based DICOM medical image viewer that allows for diagnostic viewing, measuring, manipulating, and annotation of radiological series and images. The Lightning Viewer is part of a server, providing healthcare professionals with a comprehensive tool for accessing and interacting with medical images. Key features and functions of the Medweb Lightning Viewer include:
- A web browser-based interface for easy access without client software installation
- Support for multiple imaging modalities including CR/DR/XR, CT, MR, US/US Echo, MG (Mammo), FL (Fluoro), VL (visual light), NM (Nuclear medicine), PET
- Advanced image manipulation tools such as window/level adjustment, zooming, panning, and image rotation
- Measurement and annotation capabilities including length, angle, area, and freehand ROI tools
- Specialized tools for specific imaging modalities, such as mammography tools and spine labeling
- MPR (Multiplanar Reconstruction) mode with crosshairs tool for 3D visualization and MIP (Maximum Intensity Projection)
- Synchronized scrolling across series and studies for efficient comparison
- Customizable layouts and hanging protocols for various study types
- DICOM-compliant image viewing and data management
The Medweb Lightning Viewer consists of a user-friendly workspace with a customizable toolbar, providing quick access to commonly used tools. The viewer supports various layout options, including side-by-side comparison of series and prior studies. Users can easily navigate through image stacks, adjust window/level settings, and apply advanced visualization techniques like MPR.
Medweb Lightning Viewer is designed to integrate seamlessly with existing PACS installations. It allows authorized users to access patient studies from multiple sources, facilitating efficient workflow for radiologists, physicians, and other healthcare professionals.
Lightning Viewer provides a comprehensive set of tools for image review and analysis. It supports various image formats and modalities, making it a versatile solution for healthcare facilities of all sizes.
The system employs user authentication and access controls to ensure that only authorized personnel can access sensitive patient information. This makes it suitable for use both within healthcare facilities and for remote access by referring physicians or specialists, promoting efficient telemedicine and teleradiology practices.
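The feature list above includes window/level adjustment. Conventionally this is a linear mapping from stored pixel values to display intensities (the DICOM linear VOI LUT). The sketch below illustrates that generic mapping under those assumptions; it is not Medweb's implementation, and the example window values are arbitrary.

```python
import numpy as np

def apply_window_level(pixels, center, width, out_max=255):
    """Linear window/level mapping of stored values to display intensities,
    following the DICOM linear VOI LUT convention."""
    pixels = pixels.astype(np.float32)
    out = (pixels - (center - 0.5)) / (width - 1.0) + 0.5
    return (np.clip(out, 0.0, 1.0) * out_max).astype(np.uint8)

# Example: a typical soft-tissue CT window (center 40 HU, width 400 HU)
# applied to a made-up slice of Hounsfield-unit values.
ct_slice = np.random.randint(-1000, 1000, size=(512, 512)).astype(np.int16)
display = apply_window_level(ct_slice, center=40, width=400)
print(display.min(), display.max())
```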
The provided document, a 510(k) Summary for the Lightning Viewer, describes the device's acceptance criteria and the study that proves it meets those criteria.
1. Table of Acceptance Criteria and Reported Device Performance:
The document states that the Lightning Viewer's performance was evaluated through "Software Verification and Validation Testing" and "Non-Clinical Performance Testing". The acceptance criteria are implicitly that the device performs equivalently to the predicate device and accurately calculates measurements.
Acceptance Criterion | Reported Device Performance |
---|---|
Software Functionality & Performance | Verified that design requirements were successfully met and intended use/user needs were validated. Device performs in an equivalent manner to the predicate device (OmegaAI Image Viewer). |
Measurement Accuracy | Phantom study demonstrated that the device accurately calculates distance and angle measurements based on user-defined inputs. |
Safety & Effectiveness (comparative) | No differences between Lightning Viewer and predicate device (OmegaAI Image Viewer) raise new questions of safety and effectiveness. |
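The phantom study row above concerns distance and angle measurements computed from user-defined points on a dataset with known voxel dimensions (see item 7 below). The following is a hedged sketch of how such measurements are commonly derived from voxel indices and voxel spacing; the coordinates and spacing values are made up, and this is not Medweb's test code.

```python
import numpy as np

def point_mm(index_zyx, spacing_zyx_mm):
    """Convert a (z, y, x) voxel index to physical millimetre coordinates."""
    return np.asarray(index_zyx) * np.asarray(spacing_zyx_mm)

def distance_mm(p1, p2, spacing):
    """Euclidean distance in millimetres between two voxel-index points."""
    return float(np.linalg.norm(point_mm(p1, spacing) - point_mm(p2, spacing)))

def angle_deg(vertex, p1, p2, spacing):
    """Angle at `vertex` formed by the rays vertex->p1 and vertex->p2."""
    v1 = point_mm(p1, spacing) - point_mm(vertex, spacing)
    v2 = point_mm(p2, spacing) - point_mm(vertex, spacing)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

spacing = (2.0, 0.5, 0.5)  # hypothetical slice thickness / pixel spacing in mm
print(distance_mm((10, 100, 100), (10, 100, 180), spacing))                   # 40.0 mm
print(angle_deg((10, 100, 100), (10, 100, 180), (10, 180, 100), spacing))     # 90.0 degrees
```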
2. Sample Size for the Test Set and Data Provenance:
- Sample Size for Test Set: Not explicitly stated for either the software verification/validation or the phantom study. The document only mentions "a dataset with known voxel dimensions" for the phantom study.
- Data Provenance: Not explicitly stated. Given the nature of the tests (software verification/validation, phantom study), the data would likely be internally generated and controlled rather than from external patient sources or specific countries. It's a non-clinical evaluation.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications:
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified. For a phantom study, ground truth is typically established by the known physical properties of the phantom and the measurements taken from it, rather than expert interpretation. For software verification/validation, ground truth is typically against predefined functional specifications and requirements.
4. Adjudication Method for the Test Set:
- Adjudication Method: Not applicable or not specified. Given the nature of the studies (software verification/validation and phantom study), expert adjudication in the medical sense (e.g., 2+1, 3+1 for clinical endpoints) is not described. Ground truth for these types of studies is usually based on predefined specifications or known physical properties.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance:
- MRMC Study: No, an MRMC comparative effectiveness study was not done.
- Effect Size: Not applicable as no MRMC study was performed. The device is a medical image management and processing system, not an AI-assisted diagnostic tool for human readers in the context of improving their performance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:
- The document implies that standalone performance was assessed for measurement accuracy in the phantom study. The "device accurately calculates distance and angle measurements" without mention of human intervention in the calculation process. However, the primary purpose of the device is a viewer that facilitates human review, not an autonomous diagnostic algorithm. The software verification and validation would also evaluate the standalone functionality of the viewer.
7. The Type of Ground Truth Used:
- Software Verification and Validation: Ground truth was established against predefined "design requirements" and "intended use and user needs." This implies a set of functional and performance specifications.
- Non-Clinical Performance Testing (Phantom Study): Ground truth was established by a "dataset with known voxel dimensions" which allows for precise calculation of true distances and angles against which the device's measurements are compared.
8. The Sample Size for the Training Set:
- Training Set Sample Size: Not applicable. The Lightning Viewer is a DICOM viewer and image manipulation system, not a machine learning or AI-driven diagnostic algorithm that requires a training set in the conventional sense. Its functionality is based on established image processing algorithms and display standards.
9. How the Ground Truth for the Training Set Was Established:
- Ground Truth for Training Set: Not applicable, as there is no mention of a training set for machine learning.
(178 days)
Hybrid Viewer (00859873006189)
Hybrid Viewer is a software application for nuclear medicine and radiology. Based on user input, Hybrid Viewer processes, displays and analyzes nuclear medicine and radiology imaging data and presents the result to the user. The result can be stored for future analysis.
Hybrid Viewer is equipped with dedicated workflows which have predefined settings and layouts optimized for specific nuclear medicine investigations.
The software application can be configured based on user needs.
The investigation of physiological or pathological states using measurement and analysis functionality provided by Hybrid Viewer is not intended to replace visual assessment. The information obtained from viewing and/or performing quantitative analysis on the images is used, in conjunction with other patient related data, to inform clinical management.
Hybrid Viewer provides general tools for viewing and processing nuclear medicine and radiology images. It includes software for nuclear medicine (NM) processing studies for specific parts of the body and specific diseases using predefined workflows. Hybrid Viewer 7.0 includes the following additional clinical features compared to Hybrid Viewer 2.8:
• Additional DICOM file support for Segmentation (SEG), RT Dose and CT Fluoroscopy
· Three energy window (TEW) correction for whole body studies
• Automatic motion correction and additional motion correction option for dual isotope
• Display quantitative SPECT studies in SUV units
• Additional NM processing workflows for (a sketch of two of the underlying count-ratio formulas follows this list):
  - Assessment of the percentage of activity which is shunted to the lung prior to Y90 microsphere treatment planning.
  - Assessment of the ratio of activity in the heart compared to the mediastinum. The workflow contains specific options for the GE Healthcare product AdreView™, a radiopharmaceutical agent used in the detection of primary or secondary pheochromocytoma or neuroblastoma as an adjunct to other diagnostic tests.
  - Assessment of the relative uptake in right, left and duplex kidneys in DMSA™ (dimercaptosuccinic acid) SPECT studies. This is an extension of an existing workflow for planar DMSA studies.
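Two of the workflows above rest on simple count-ratio formulas that are standard in the nuclear medicine literature: the lung shunt fraction assessed before Y90 microsphere treatment planning, and the heart-to-mediastinum ratio used with AdreView (MIBG) imaging. The sketch below shows those generic formulas only; it is not Hermes' implementation, and the ROI counts are invented.

```python
def lung_shunt_fraction(lung_counts, liver_counts):
    """Fraction of administered activity shunted to the lungs, as a percentage,
    computed from background-corrected lung and liver ROI counts."""
    return 100.0 * lung_counts / (lung_counts + liver_counts)

def heart_to_mediastinum_ratio(heart_mean_counts, mediastinum_mean_counts):
    """H/M ratio: mean counts per pixel in a heart ROI divided by mean counts
    per pixel in a mediastinal reference ROI."""
    return heart_mean_counts / mediastinum_mean_counts

# Invented example values.
print(f"LSF = {lung_shunt_fraction(4.2e4, 5.6e5):.1f}%")       # ~7.0%
print(f"H/M = {heart_to_mediastinum_ratio(185.0, 92.0):.2f}")  # ~2.01
```

In planar studies the ROI counts are usually background-corrected and, for anterior/posterior acquisitions, combined as a geometric mean; those details are omitted here.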
This document describes the premarket notification (510(k)) for the Hermes Medical Solutions AB Hybrid Viewer (Premarket Notification Number K241364).
1. Table of Acceptance Criteria and Reported Device Performance
The provided text does not contain a specific table of acceptance criteria or reported device performance in the traditional sense of a clinical study with quantitative metrics. However, it outlines the validation approach for new clinical features and indicates that they met acceptance criteria.
Feature Category | Acceptance Criteria / Validation Method | Reported Performance |
---|---|---|
New NM Processing Workflows: | ||
DMSA SPECT | Validated using analytical verification, where results were based on relevant publications. | "All analytical verifications concluded that the new workflows introduced since the predicate device fulfill the acceptance criteria and are therefore safe to use." |
Lung Liver Shunt | Validated using analytical verification, where results were based on relevant publications. | "All analytical verifications concluded that the new workflows introduced since the predicate device fulfill the acceptance criteria and are therefore safe to use." |
DICOM SEG file support | Validated using analytical verification, where results were based on relevant publications. | "All analytical verifications concluded that the new workflows introduced since the predicate device fulfill the acceptance criteria and are therefore safe to use." |
New Features (non-advanced calculations): | ||
RT Dose and CT Fluoroscopy support | Verified by comparing acceptance criteria against test results from scripted verification testing during the development process. | (Implicitly met as no issues or non-conformance are reported) |
New motion correction options | Verified by comparing acceptance criteria against test results from scripted verification testing during the development process. | (Implicitly met as no issues or non-conformance are reported) |
SUV display | Verified by comparing acceptance criteria against test results from scripted verification testing during the development process. | (Implicitly met as no issues or non-conformance are reported) |
TEW correction | Verified by comparing acceptance criteria against test results from scripted verification testing during the development process. | (Implicitly met as no issues or non-conformance are reported) |
Heart Mediastinum | Verified by comparing acceptance criteria against test results from scripted verification testing during the development process. | (Implicitly met as no issues or non-conformance are reported) |
Overall Safety and Effectiveness | Comparison to predicate device (Hybrid Viewer 2.8), including software design, principles of operation, critical performance, and compliance with the Quality System (QS) regulation. Usability testing and validation. | "There is no change in the overall safety and effectiveness of Hybrid Viewer version 7.0 compared to predicate Hybrid Viewer version 2.8." |
Cybersecurity | Updating Software of Unknown Provenance (SOUPs) to the latest versions. | "In 7.0 they have been updated to the latest versions for cybersecurity." |
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify a "test set" in terms of patient cases or images for a clinical performance evaluation. The validation described is primarily analytical verification and scripted verification testing of software features. Therefore, information on sample size and data provenance (e.g., country of origin, retrospective/prospective) for a clinical test set is not provided.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not provided. The validation relied on "relevant publications" for analytical verification and scripted software testing, rather than expert-established ground truth on a specific clinical test set.
4. Adjudication Method for the Test Set
This information is not provided, as the validation method did not involve an adjudication process on a clinical test set with experts.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
An MRMC comparative effectiveness study is not mentioned as part of this submission. The device is described as a software application for viewing, processing, displaying, and analyzing imaging data, and its "measurement and analysis functionality...is not intended to replace visual assessment. The information obtained from viewing and/or performing quantitative analysis on the images is used, in conjunction with other patient related data, to inform clinical management." This implies an assistance role rather than a standalone diagnostic or primary reader device that would typically warrant an MRMC study for AI improvement.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
While the device performs calculations and analyses, the document emphasizes that it is "not intended to replace visual assessment" and its output is used "in conjunction with other patient related data." This suggests that a standalone, algorithm-only performance evaluation, typically associated with AI algorithms making diagnostic decisions without human intervention, was not the focus of this submission given its intended use as an assistive tool for image processing and analysis. The validation focused on the accuracy of the software's processing and computational features.
7. The Type of Ground Truth Used
For the new clinical workflows employing advanced calculations (DMSA SPECT, Lung Liver Shunt, DICOM SEG file support), the ground truth for validation was based on "relevant publications" referenced in the verification documents. This suggests that established scientific and clinical literature provided the reference for the expected outcomes of these calculations.
For new features without advanced calculations, the "ground truth" was essentially the pre-defined acceptance criteria used in the scripted verification testing.
8. The Sample Size for the Training Set
No information regarding a "training set" or machine learning (ML)/AI model development is provided in the document. The description of the device's validation focuses on analytical verification and scripted testing of its processing and display functionalities, which are characteristic of traditional software development and verification rather than ML model training.
9. How the Ground Truth for the Training Set was Established
As no training set is mentioned, this information is not applicable.