510(k) Data Aggregation
(194 days)
Vital Images, Inc.
Vitrea CT Brain Perfusion is a non-invasive post-processing application designed to evaluate areas of brain perfusion. The software can calculate cerebral blood flow (CBF), cerebral blood volume (CBV), local bolus timing (i.e., delay of tissue response, time to peak), and mean transit time (MTT) from dynamic CT image data acquired after the injection of contrast media. The package also allows the calculation of regions of interest and mirrored regions, as well as the visual inspection of time density curves. Vitrea CT Brain Perfusion supports the physician in visualizing the apparent blood perfusion in brain tissue affected by acute stroke. Areas of decreased perfusion, as is observed in acute cerebral infarcts, appear as areas of changed signal intensity (lower for both CBF and CBV and higher for time to peak and MTT).
Vitrea CT Brain Perfusion is a noninvasive post-processing software that calculates cerebral blood flow (CBF), cerebral blood volume (CBV), local bolus timing (i.e., delay of tissue response, time to peak), and mean transit time (MTT) from dynamic CT image data. It displays time density curves, perfusion characteristics in perfusion and summary maps, as well as regions of interest and mirrored regions.
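The parameters listed above are all derived from each voxel's time-density curve. As a minimal, illustrative sketch (not the vendor's SVD+ or Bayesian deconvolution, which requires an arterial input function), time to peak can be read directly off a sampled curve, and MTT relates to CBV and CBF through the central volume principle, MTT = CBV / CBF. The sampled curve and parameter values below are hypothetical.

```python
# Illustrative only: real perfusion software (SVD+ / Bayesian) deconvolves the
# tissue curve against an arterial input function; this sketch shows only the
# simplest curve-derived quantities on hypothetical sampled data.

def time_to_peak(times, densities):
    """Return the acquisition time at which enhancement peaks (TTP)."""
    peak_index = max(range(len(densities)), key=lambda i: densities[i])
    return times[peak_index]

def mean_transit_time(cbv_ml_per_100g, cbf_ml_per_100g_min):
    """Central volume principle: MTT (seconds) = CBV / CBF."""
    return 60.0 * cbv_ml_per_100g / cbf_ml_per_100g_min

# Hypothetical time-density curve for one voxel (seconds, Hounsfield units)
t = [0, 2, 4, 6, 8, 10, 12]
hu = [35, 36, 48, 72, 65, 50, 40]

print(time_to_peak(t, hu))           # peak enhancement at t = 6 s
print(mean_transit_time(4.0, 60.0))  # 4 ml/100g over 60 ml/100g/min -> 4.0 s
```

This also motivates the signal pattern described above: in ischemic tissue the bolus arrives late and washes out slowly, so TTP and MTT rise while CBF and CBV fall.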
Here's an analysis of the acceptance criteria and study information for the Vitrea CT Brain Perfusion device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document does not explicitly state quantitative acceptance criteria or a direct performance table for the Vitrea CT Brain Perfusion device with the Bayesian algorithm. Instead, the document focuses on demonstrating substantial equivalence to a previously cleared predicate device (Vitrea CT Brain Perfusion with SVD+ algorithm, K121213) and reference device (Olea Sphere V3.0, K152602) due to the addition of a Bayesian algorithm.
The core "acceptance" is based on the conclusion that the new device is "as safe and effective" as the predicate. This is supported by:
- "Algorithm Testing": "The Vitrea CT Brain Perfusion Bayesian algorithm has passed all the verification and validation and is therefore considered validated and acceptable."
- "External Validation": "Based on the scores provided by the physicians, Vital concluded the Brain Perfusion with Bayesian algorithm is as safe and effective as the already cleared Brain Perfusion with SVD+ algorithm and fulfills its intended use."
While direct numerical performance metrics are not given, the implicit acceptance criteria are that the device's output (CBF, CBV, TTP, MTT maps and calculations) is comparable and clinically acceptable to that generated by the predicate device, especially in its ability to support the physician in visualizing perfusion in acute stroke.
2. Sample Size Used for the Test Set and Data Provenance
The document mentions "External Validation" where "physicians evaluated if the Brain Perfusion with Bayesian algorithm (subject device) was substantially equivalent with the Brain Perfusion with SVD+ algorithm (K121213, predicate device)." However, it does not specify the details of the test set:
- Sample size: Not explicitly stated (e.g., number of patients/cases).
- Data provenance (country of origin, retrospective/prospective): Not explicitly stated.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of experts: Not explicitly stated how many "physicians" were involved in the "External Validation."
- Qualifications of experts: The document only refers to them as "physicians." Specific qualifications (e.g., "radiologist with 10 years of experience") are not provided.
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1) for establishing ground truth or evaluating the physician scores in the "External Validation." It simply states that "Based on the scores provided by the physicians, Vital concluded..."
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? While an "External Validation" involving "physicians" was performed, the document does not explicitly label it as a formal MRMC comparative effectiveness study in the way this term is typically used for AI-assisted workflows (i.e., comparing human readers with and without AI assistance). The focus was on comparing the Bayesian algorithm's outputs to the predicate's SVD+ algorithm's outputs, as judged by physicians.
- Effect size of human improvement with AI vs. without AI assistance: Not reported, as the study was not framed as a human-in-the-loop improvement study.
6. Standalone (Algorithm Only) Performance
- Was a standalone study done? Effectively, yes. The "Algorithm Testing" and subsequent "External Validation" focused on the standalone performance of the Bayesian algorithm in generating perfusion maps and calculations. The "External Validation" specifically assessed whether the "Brain Perfusion with Bayesian algorithm (subject device) was substantially equivalent with the Brain Perfusion with SVD+ algorithm (K121213, predicate device)," indicating an evaluation of the algorithm's output itself.
7. Type of Ground Truth Used
The "ground truth" for the external validation appears to be the expert consensus/clinical judgment of the participating "physicians" who evaluated the outputs of the Bayesian algorithm compared to the SVD+ algorithm of the predicate device. There is no mention of pathology or outcomes data being used as ground truth for this particular evaluation.
8. Sample Size for the Training Set
The document does not provide any information regarding the sample size or characteristics of the training set used for the Bayesian algorithm. As this is a 510(k) for a software update (addition of a new algorithm) to an already cleared device, the submission focuses on demonstrating the safety and effectiveness of the change relative to the predicate, rather than fully detailing the original algorithm development.
9. How Ground Truth for the Training Set Was Established
Since information on the training set itself is not provided, the method for establishing its ground truth is also not described in this document.
(21 days)
Vital Images, Inc.
Multi Modality Viewer is an option within Vitrea that allows the examination of a series of medical images obtained from MRI, CT, CR, DX, RG, RF, US, XA, PET, and PET/CT scanners. The option also enables clinicians to compare multiple series for the same patient, side-by-side, and switch to other integrated applications to further examine the data.
Multi Modality Viewer is an option within Vitrea that allows the examination and manipulation of a series of medical images obtained from MRI, CT, CR, DX, RG, RF, US, XA, PET, and PET/CT scanners. The option also enables clinicians to compare multiple series for the same patient, side-by-side, and switch to other integrated applications to further examine the data.
The Multi Modality Viewer provides an overview of the study, facilitates side-by-side comparison including priors, allows reformatting of image data, enables clinicians to record evidence and return to previous evidence, and provides easy access to other Vitrea applications for further analysis.
Here's a breakdown of the acceptance criteria and study information for the Multi Modality Viewer, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of numerical "acceptance criteria" for performance metrics in the typical sense (e.g., sensitivity, specificity, accuracy thresholds). Instead, it focuses on functional capabilities and states that verification and validation testing confirmed the software functions according to requirements and that "no negative feedback was received," and "Multi Modality Viewer was rated as equal to or better than the reference devices."
The acceptance is primarily based on establishing substantial equivalence to predicate and reference devices, demonstrating that the new features function as intended and do not raise new questions of safety or effectiveness.
Feature/Criterion | Acceptance Standard (Implied) | Reported Device Performance/Conclusion |
---|---|---|
Overall Safety & Effectiveness | Safe and effective for its intended use, comparable to predicate and reference devices. | Clinical validations demonstrated clinical safety and effectiveness. |
Functional Equivalence | New features operate according to defined requirements and functions similarly to or better than features in reference devices. | Verification testing confirmed software functions according to requirements. External validation evaluators confirmed sufficiency of software to read images and rated it "equal to or better than" reference devices. |
No Negative Feedback | No negative feedback from clinical evaluators regarding functionality or image quality of new features. | "No negative feedback received from the evaluators." |
Substantial Equivalence | Device is substantially equivalent to predicate and reference devices regarding intended use, clinical effectiveness, and safety. | "This validation demonstrates substantial equivalence between Multi Modality Viewer and its predicate and reference devices with regards to intended use, clinical effectiveness and safety." |
Risk Management | All risks reduced as low as possible; overall residual risk acceptable; benefits outweigh risks. | "All risks have been reduced as low as possible. The overall residual risk for the software product is deemed acceptable. The medical benefits of the device outweigh the residual risk..." |
Software Verification (Internal) | Software fully satisfies all expected system requirements and features; all risk mitigations function properly. | "Verification testing confirmed the software functions according to its requirements and all risk mitigations are functioning properly." |
Software Validation (Internal) | Software conforms to user needs and intended use; system requirements and features implemented properly. | "Workflow testing... provided evidence that the system requirements and features were implemented properly to conform to the intended use." |
Cybersecurity | Follows FDA guidance for cybersecurity in medical devices, including hazard analysis, mitigations, controls, and update plan. | Follows internal documentation based on FDA Guidance: "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices." |
Compliance with Standards | Complies with relevant voluntary consensus standards (DICOM, ISO 14971, IEC 62304). | The device "complies with the following voluntary recognized consensus standards" (DICOM, ISO 14971, IEC 62304 listed). |
New features don't raise new safety/effectiveness questions | New features are similar enough to existing cleared features in predicate/reference devices that they don't introduce new concerns. | For each new feature (Full volume MIP, Volume image rendering, 3D Cutting Tool, Clip Plane Box, bone/base segmentation tools, 1 Click Visible Seed, Automatic table segmentation, Automatic bone segmentation, US 2D Cine Viewer, Automatic Rigid Registration), the document states that the added feature "does not raise different questions of safety and effectiveness" due to similarity with a cleared reference device. |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document repeatedly mentions "anonymized datasets" but does not specify the number of cases or images used in the external validation studies.
- Data Provenance: The data used for the external validation studies were "anonymized datasets." The country of origin is not explicitly stated, but the evaluators were from "three different clinical locations." Given Vital Images, Inc. is located in Minnetonka, MN, USA, it's highly probable the data and clinical locations are from the United States. The studies were likely retrospective as they involved reviewing "anonymized datasets" rather than ongoing patient enrollment.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: Three evaluators.
- Qualifications of Experts: The evaluators were "from three different clinical locations" and are described as "experienced professionals" in the context of simulated usability testing and clinical review. Their specific medical qualifications (e.g., radiologist, specific years of experience) are not explicitly detailed in the provided text.
4. Adjudication Method for the Test Set
The document does not describe an explicit "adjudication method" for establishing ground truth or resolving discrepancies between experts in the traditional sense. The phrase "no negative feedback received from the evaluators" and "Multi Modality Viewer was rated as equal to or better than the reference devices" suggests a consensus or individual evaluation model, but not a specific adjudication protocol like 2+1 or 3+1. It appears the evaluations focused on confirming functionality and subjective quality rather than comparing against a pre-established ground truth for a diagnostic task.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- No, an MRMC comparative effectiveness study was not explicitly stated to have been done in the context of measuring improvement with AI vs. without AI assistance.
- The "Substantial Equivalence Validation" involved three evaluators comparing the subject device against its predicate and reference devices. However, this comparison focused on functionality and image quality and aimed to show the equivalence or non-inferiority of the new device and its features, rather than quantifying performance gains due to AI assistance in human readers. The new features mentioned (like automatic segmentation or rigid registration) are components that might assist, but the study design wasn't an MRMC to measure the effect size of this assistance on human performance.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
- The document describes "software verification testing" which confirms "the software functions according to its requirements." This implies a form of standalone testing for the algorithms and features. For example, "Automatic table segmentation" and "Automatic bone segmentation" are algorithms, and their functionality would have been tested independently.
- However, no specific performance metrics (e.g., accuracy, precision) for these algorithms in a standalone capacity are provided from these tests. The external validation was a human-in-the-loop setting where evaluators used the software.
7. The Type of Ground Truth Used
The external validation involved "clinical review of anonymized datasets" where evaluators assessed "functionality and image quality." For new features like segmentation or registration, the "ground truth" would likely be based on the expert consensus or judgment of the evaluators during their review of the anonymized datasets, confirming if the segmentation was accurate or if the registration was correct and useful. There is no mention of pathology, direct clinical outcomes data, or a separate "ground truth" panel.
8. The Sample Size for the Training Set
The document does not specify the sample size for the training set. It details verification and validation steps for the software but does not provide information about the development or training of any AI/ML components within the software. While features like "Automatic table segmentation" and "Automatic bone segmentation" likely involve machine learning, the document does not elaborate on their training data.
9. How the Ground Truth for the Training Set Was Established
Since the document does not specify the training set or imply explicit AI/ML development in the detail often seen for deep learning algorithms, it does not describe how the ground truth for the training set was established.
(15 days)
Vital Images, Inc.
Vitrea is a medical diagnostic system that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired from a variety of imaging devices. Vitrea is not meant for primary image interpretation in mammography.
The Vitrea Advanced Visualization software is a medical diagnostic system that allows the processing, review, analysis, communication, and media interchange of multi-dimensional digital images acquired from a variety of imaging devices.
The Vitrea Advanced Visualization system provides multi-dimensional visualization of digital images to aid clinicians in their analysis of anatomy and pathology. The Vitrea Advanced Visualization user interface follows typical clinical workflow patterns to process, review, and analyze digital images, including:
- Retrieve image data over the network via DICOM
- Display of images in dedicated protocols which are automatically adapted based on exam type
- Select images for closer examination from a gallery of 2D or 3D views
- Interactively manipulate an image in real-time to visualize anatomy and pathology
- Annotate, measure, and record selected views
- Output selected views to standard film or paper printers, or post a report to an intranet web server or export views to another DICOM device
- Retrieve reports that are archived on a Web server
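The DICOM interchange steps above rest on a simple file-format convention. As a minimal, illustrative sketch (not Vitrea's actual implementation, which uses a full DICOM toolkit), a DICOM Part 10 file can be recognized by its 128-byte preamble followed by the four ASCII bytes "DICM":

```python
# Minimal DICOM Part 10 check: a conforming file begins with a 128-byte
# preamble followed by the ASCII marker "DICM" (per PS3.10 of the DICOM
# standard). This only sniffs the header; it does not parse any data elements.

def looks_like_dicom(path):
    with open(path, "rb") as f:
        header = f.read(132)
    return len(header) == 132 and header[128:132] == b"DICM"

# Hypothetical usage with a synthetic file:
import tempfile, os
fd, name = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\x00" * 128 + b"DICM")
print(looks_like_dicom(name))  # True
os.remove(name)
```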
The provided document is a 510(k) summary for Vitrea Advanced Visualization, Version 7.6. This documentation focuses on demonstrating substantial equivalence to a predicate device (Vitrea, Version 7.0 (K150258)) by detailing software changes, intended use, and technological similarities.
Crucially, this document does not describe acceptance criteria for device performance based on clinical outcomes or a study that specifically proves the device meets such criteria. Instead, it discusses:
- Software Verification Testing: This ensures that new features operate according to defined requirements (functional and performance specifications internal to the company for the software itself).
- Risk Management: Assessment of potential harms associated with software modifications and implementation of controls to reduce these risks.
- Compliance with Standards: Adherence to recognized consensus standards like DICOM, ISO 14971, and IEC 62304.
The document explicitly states: "The subject of this 510(k) notification, Vitrea Advanced Visualization software, did not require clinical studies to support safety and effectiveness of the software." This indicates that there are no clinical acceptance criteria or studies providing device performance metrics in the way you've outlined.
Given this, I cannot provide details on your specific requests for acceptance criteria, device performance, sample sizes for test sets, expert-established ground truth, adjudication methods, MRMC studies, standalone performance, or training set ground truth. These are typically associated with clinical performance studies, which were not conducted or required for this 510(k) submission.
The "acceptance criteria" and "study that proves the device meets the acceptance criteria" in this context refer to the software's functional and technical requirements and the verification testing performed to confirm these requirements are met.
Below is a table summarizing the information that is available regarding the device's assessment, which relates to software functionality and risk, rather than clinical performance metrics.
1. Table of Acceptance Criteria and Reported Device Performance
Note: The acceptance criteria and performance reported here are for software functionality and safety features, not clinical diagnostic performance, as clinical studies were not required or performed for this 510(k) submission.
Acceptance Criteria Category | "Acceptance Criteria" (as implied by document) | Reported Device Performance (as described in document) |
---|---|---|
Software Functionality | New features operate according to defined requirements. | Software verification testing confirmed the software functions according to its requirements. |
Risk Management | Potential risks are assessed, benefits outweigh residual risk, risks are as low as possible. | Each risk assessed individually; benefits outweigh risk; all risks reduced "as low as possible"; overall residual risk deemed acceptable. |
Cybersecurity | Adherence to cybersecurity guidance (FDA Guidance: Content of Premarket Submissions for Management of Cybersecurity in Medical Devices). | Follows internal documentation based on FDA Guidance; includes hazard analysis, mitigations, controls, traceability, software update plan, integrity controls, and IFU recommendations. |
Compliance with Standards | Compliance with specified voluntary recognized consensus standards. | Complies with NEMA DICOM (PS 3.1-3.20), AAMI/ANSI/ISO 14971:2007 (Risk Management), and AAMI/ANSI/IEC 62304:2006 (Software Life Cycle Processes). |
Intended Use Equivalence | Similar intended use to predicate device. | Identical Indications for Use statement as the predicate device (Vitrea, Version 7.0 K150258). |
Technological Equivalence | Similar principle of operation and technological characteristics to predicate device. | Detailed comparison table shows "Same" across numerous software functionalities (e.g., selection/loading study, visualization, analysis, reporting, DICOM compliance, data security). |
Safety and Effectiveness Equivalence | Does not raise different questions of safety or effectiveness compared to the predicate device. | Verification/validation testing, risk management, and labeling demonstrate safety and efficacy. Changes do not alter fundamental scientific technology, safety, or intended use. |
2. Sample size used for the test set and the data provenance
Not applicable for clinical performance. For software verification, the "test set" and "data provenance" would refer to the specific test cases and data used for software testing, which are internal to the manufacturer and not detailed in this 510(k) summary.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable, as no clinical test set for diagnostic performance was used.
4. Adjudication method for the test set
Not applicable, as no clinical test set for diagnostic performance was used.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
Not applicable. The document explicitly states no clinical studies were performed. This device is described as "Radiological Image Processing Software" and not an AI-assisted diagnostic tool that would typically undergo such studies.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
Not applicable, as this is software for processing and visualization, not a standalone diagnostic algorithm requiring performance evaluation. No clinical studies were done.
7. The type of ground truth used
Not applicable, as no clinical outcome-based ground truth was established. For software verification, "ground truth" implies the expected output or behavior according to the software requirements.
8. The sample size for the training set
Not applicable, as no clinical studies or machine learning model training are described in this 510(k) summary.
9. How the ground truth for the training set was established
Not applicable, as no training set for a machine learning model is mentioned or elaborated upon.
(246 days)
Vital Images, Inc
Vitrea View software is a medical image viewing and information distribution solution that provides access, through the internet and within the enterprise, to multi-modality softcopy medical images (including mammography and digital breast tomosynthesis), reports, and other patient-related information. This data is hosted within disparate archives and repositories for diagnosis, review, communication, and reporting of DICOM and non-DICOM data.
Lossy compressed mammography images and digitized film screen images must not be reviewed for primary image interpretations. Mammographic images may only be interpreted using an FDA cleared display that meets technical specifications reviewed and accepted by FDA or displays accepted by the appropriate regulatory agency for the country in which it is used.
Display monitors used for reading medical images for diagnostic purposes must comply with the applicable regulatory approvals and quality control requirements for their use and maintenance.
Vitrea View software is indicated for use by qualified healthcare professionals including, but not restricted to, radiologists, non-radiology specialists, physicians and technologists.
When accessing Vitrea View software from a mobile device, images viewed are for informational purposes only and not intended for diagnostic use.
The Vitrea View software is a web-based, cross-platform, zero-footprint enterprise image viewer solution capable of displaying both DICOM and non-DICOM medical images. The Vitrea View software enables clinicians and other medical professionals to access patients' medical images with integrations into a variety of medical record systems, such as Electronic Health Record (EHR), Electronic Medical Record (EMR), Health Information Exchange (HIE), Personal Health Record (PHR), and image exchange systems. The Vitrea View software is a communication tool, which supports the physician in the treatment and planning process by delivering access to images at the point of care.
The Vitrea View software offers medical professionals an enterprise viewer for accessing imaging data in context with reports from enterprise patient health information databases, fosters collaboration, and provides workflows and interfaces appropriate for referring physicians and clinicians. IT departments will not have to install client systems, due to the web-based zero-footprint nature of the Vitrea View software. The Vitrea View software offers scalability to add new users as demand grows, and may be deployed in a virtualized environment. Some of the general features include:
- Fast time-to-first-image
- Contextual launch integration with single-sign-on
- Easy study navigation and search capability
- Supports multi-modality vendor-neutral DICOM images
- Supports non-DICOM images
- Images display at full diagnostic quality (with appropriate hardware)
- Basic 2D review tools (zoom, pan, measure)
- Basic 3D and MPR viewing
- Radiology key images
- Comparative side-by-side review, regardless of image types
- Collaboration tools
- Leverages traditional DICOM as well as next-generation DICOMweb image transfer protocols
- Enables federated access across multiple data sources and sites
- Web-based zero-footprint architecture
- Secure access on various Windows® and Mac computers through standard internet browsers
- Secure access on various iOS®, Android™, and Windows® tablet devices through the device's internet browser
- Secure access on various iOS and Android smartphones through the device's internet browser
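The "next-generation DICOMweb" transfer mentioned in the feature list is plain HTTPS: with QIDO-RS, studies are searched by passing standard DICOM attribute names as URL query parameters. A minimal sketch of building such a query follows; the server base URL is hypothetical, and this shows only URL construction, not Vitrea View's implementation.

```python
# Sketch of a DICOMweb QIDO-RS study search URL (PS3.18 of the DICOM standard).
# Only builds the URL; an actual client would issue an HTTPS GET with an
# "Accept: application/dicom+json" header. The base URL below is hypothetical.
from urllib.parse import urlencode

def qido_study_query(base_url, patient_id, modality=None):
    """Build a QIDO-RS /studies search URL for one patient."""
    params = {"PatientID": patient_id}
    if modality:
        params["ModalitiesInStudy"] = modality
    return f"{base_url}/studies?{urlencode(params)}"

url = qido_study_query("https://pacs.example.org/dicomweb", "12345", modality="CT")
print(url)
# https://pacs.example.org/dicomweb/studies?PatientID=12345&ModalitiesInStudy=CT
```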
Here's the analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Device Name: Vitrea View
510(k) Number: K163232
1. Table of Acceptance Criteria and Reported Device Performance:
Acceptance Criteria (from Radiologists' Rating) | Reported Device Performance (Vitrea View vs. Reference System) |
---|---|
Visualization of the adipose and fibroglandular tissue | Met clinical equivalence for diagnostic quality |
Visualization of the breast tissue and underlying pectoralis muscle | Met clinical equivalence for diagnostic quality |
Image contrast for differentiation of subtle tissue density differences | Met clinical equivalence for diagnostic quality |
Sharpness, assessment of the edges of fine linear structures, tissue borders and benign calcifications | Met clinical equivalence for diagnostic quality |
Tissue visibility at the skin line | Met clinical equivalence for diagnostic quality |
Artifacts due to image processing, detector failure and other external factors to the breast | Met clinical equivalence for diagnostic quality |
Overall clinical image quality | Met clinical equivalence for diagnostic quality |
2. Sample Size Used for the Test Set and Data Provenance:
- Mammography Image Quality Validation: 50 studies.
- Data Provenance: Studies were "chosen randomly from existing patient studies obtained over a two-day time-frame at the designated Breast Imaging center." This indicates retrospective data from a specific imaging center.
- Digital Breast Tomosynthesis Image Quality Validation: 50 studies.
- Data Provenance: Studies were "chosen randomly from existing patient studies obtained at the designated Breast Imaging center." This also indicates retrospective data from a specific imaging center.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts:
- Mammography Image Quality Validation: Four experienced radiologists. No further specific qualifications (e.g., years of experience) are provided.
- Digital Breast Tomosynthesis Image Quality Validation: Three experienced radiologists. No further specific qualifications are provided.
4. Adjudication Method for the Test Set:
The studies were multi-reader, multi-case tests where radiologists were asked to rate image quality equivalence. The text states:
- "The radiologists found all of the images displayed met the clinical equivalence for diagnostic quality when displayed using the Vitrea View software as compared to the same studies displayed using the McKesson system."
- "The radiologists found all of the images displayed met the clinical equivalence for diagnostic quality when displayed using Vitrea View as compared to the same studies using the sites existing Phillips Radiology system."
This suggests a consensus or agreement among the radiologists was reached, rather than a formal adjudication method like a 2+1 or 3+1 rule. The criteria were rating the image quality on a scale of 1 to 3, but the specifics of how these individual ratings were combined or adjudicated to reach the overall "met clinical equivalence" conclusion are not detailed.
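An aggregation of the kind implied here (per-criterion ratings on a 1-to-3 scale from several readers, with some threshold defining "clinical equivalence") can be sketched as follows. The scale direction, the 2.0 threshold, and the mean-based rule are all assumptions for illustration; the 510(k) summary does not define how individual ratings were combined.

```python
# Hypothetical aggregation of reader image-quality scores. Assumes a 1-3 scale
# where higher is better and a mean of >= 2.0 per criterion counts as
# "clinically equivalent"; the actual rules are not given in the 510(k) summary.
from statistics import mean

def criteria_equivalent(scores_by_criterion, threshold=2.0):
    """Return {criterion: True/False} based on the mean reader score."""
    return {c: mean(s) >= threshold for c, s in scores_by_criterion.items()}

ratings = {
    "Overall clinical image quality": [3, 2, 3, 2],  # four readers (mammography arm)
    "Sharpness": [2, 2, 3, 3],
}
print(criteria_equivalent(ratings))
# {'Overall clinical image quality': True, 'Sharpness': True}
```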
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance:
Yes, MRMC studies were done for both mammography and digital breast tomosynthesis. However, these were not comparative effectiveness studies evaluating human reader improvement with AI assistance. Instead, they were image quality equivalence studies comparing the device's display quality against a cleared predicate/reference device. The purpose was to show that images displayed by "Vitrea View software... met the clinical equivalence for diagnostic quality" when compared to a reference system displaying the same images. Therefore, an effect size of human improvement with AI vs. without AI is not applicable to these studies.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:
No, the studies described were focused on the display quality of the Vitrea View software in conjunction with diagnostic monitors, for review by human radiologists. It was not a standalone algorithmic performance evaluation.
7. The Type of Ground Truth Used:
The ground truth was established by the subjective rating of "experienced radiologists" on the "overall clinical image quality" and other specific image quality parameters. This is effectively expert consensus on image quality and clinical equivalence for diagnostic use. It does not appear to involve pathology or outcomes data to define disease presence or absence.
8. The Sample Size for the Training Set:
The document does not specify a training set or its size. The studies described are verification and validation of the device's performance, not the training of a machine learning model.
9. How the Ground Truth for the Training Set Was Established:
Not applicable, as no training set or ground truth establishment for a training set is mentioned in the provided text.
(38 days)
Vital Images, Inc.
Multi Modality Viewer is an option within Vitrea that allows the examination and manipulation of a series of medical images obtained from MRI, CT, CR, DX, RG, RF, XA, PET, and PET/CT scanners. The option also enables clinicians to compare multiple series for the same patient, side-by-side, and switch to other integrated applications to further examine the data.
Multi Modality Viewer is a medical image viewer software application, available on the Vitrea software platform cleared by K150258. The application allows qualified clinicians, including physicians, radiologists and technologists, to display, navigate, manipulate and quantify medical images obtained from MRI, CT, CR, DX, RG, RF, XA, PET, and PET/CT modalities.
The Multi-Modality Viewer provides an overview of the study, facilitates side-by-side comparison including priors, allows clinicians to record evidence and return to previous evidence, and provides easy access to other Vitrea applications for further analysis.
The provided document, a 510(k) summary for the "Multi Modality Viewer" software, primarily focuses on demonstrating substantial equivalence to a predicate device rather than detailing specific acceptance criteria and a study proving those criteria are met for new features that would typically involve a performance study.
The device is an update to an existing medical image viewer, and the clearance is based on the argument that new features (support for additional modalities like PET, PET/CT, CR, DX, RG, RF, XA) do not raise new questions of safety and effectiveness because similar functionalities exist in previously cleared reference devices.
Therefore, the document explicitly states: "The subject of this 510(k) notification, Multi Modality Viewer software, did not require clinical studies to support safety and effectiveness of the software."
However, it does mention "Verification and Validation" activities and "Performance Standards" in a general sense. Based on the provided text, I can extract the following information about general testing and the device's performance, as much as is available:
1. Table of Acceptance Criteria and Reported Device Performance:
Since no specific quantitative acceptance criteria or detailed performance metrics are provided for the new features in the context of a dedicated performance study, the "acceptance criteria" here are inferred from the general statements about software development and verification. The device's "performance" is reported as meeting these general standards.
Acceptance Criteria (Inferred from General Software Practices) | Reported Device Performance (as stated in the document) |
---|---|
Feature functions according to its requirements. | "Verification confirmed that the feature functions according to its requirements." |
Operates on the Vitrea software platform without degrading existing functionality. | "Software testing was completed to ensure the Multi Modality Viewer software functions according to its requirements and operates on the Vitrea software platform without degrading the existing functionality of the Vitrea software platform." |
Meets all product release criteria. | "The Multi Modality Viewer software has achieved all product release criteria." |
Risks are reduced as low as possible and probability of occurrence of harm is "Improbable." | "Every risk has been reduced as low as possible and has been evaluated to have a probability of occurrence of harm of at least 'Improbable.'" |
Unresolved defects do not compromise safety and effectiveness. | "Of the unresolved defects remaining in the released application, each has been carefully evaluated and it has been determined that the software can be used safely and effectively." |
Medical benefits outweigh residual risk. | "The medical benefits of the device outweigh the residual risk for each individual risk and all risks together." |
Compliance with DICOM standard for transfer and storage of data. | "The Vitrea software platform complies with the DICOM standard for transfer and storage of this data and does not modify the contents of DICOM instances." |
Compliance with recognized consensus standards (IEC 62304, ISO 14971, NEMA PS 3.1-3.20). | "The Multi Modality Viewer software complies with the following voluntary recognized consensus standards: PS 3.1- 3.20 (2011), ISO 14971:2007, IEC 62304:2006." |
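The DICOM non-modification claim in the table above implies a simple integrity property: a stored-and-retrieved instance should be byte-identical to the one received. A minimal sketch of such a check, assuming byte-level round-tripping is the property being verified (the actual test procedure is not described in the summary; the instance bytes below are placeholders):

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Hash the raw bytes of a DICOM instance for before/after comparison."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical instance bytes: a 128-byte preamble followed by the "DICM"
# magic marker, as the DICOM file format prescribes.
received = b"\x00" * 128 + b"DICM" + b"<data set bytes>"
stored = received  # what a non-modifying archive should hand back

assert sha256_digest(received) == sha256_digest(stored)
```

A digest mismatch would flag any rewrite of the data set, including changes that are invisible in the rendered image.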
2. Sample Size Used for the Test Set and Data Provenance:
The document does not specify a distinct "test set" with a particular sample size for a performance study related to the new features. The testing mentioned is "internal verification and external validation" which included "simulated usability testing by experienced professionals." No details about the number of cases or data provenance (country of origin, retrospective/prospective) for these tests are provided.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
No specific number of experts or their detailed qualifications are provided for establishing ground truth for a test set. The document refers to "experienced professionals" for "simulated usability testing."
4. Adjudication Method for the Test Set:
No adjudication method (e.g., 2+1, 3+1, none) is mentioned as no specific performance study test set is described.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done:
No MRMC comparative effectiveness study is mentioned. The document explicitly states: "The subject of this 510(k) notification, Multi Modality Viewer software, did not require clinical studies to support safety and effectiveness of the software."
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:
The device is a medical image viewer software. Its function is to display, navigate, manipulate, and quantify medical images for clinicians. It is not an algorithm that performs automated detection or diagnosis. Therefore, a standalone (algorithm only) performance study as typically understood for AI algorithms would not be applicable and is not mentioned. Its performance is intrinsically tied to human interaction.
7. The Type of Ground Truth Used:
As no specific performance study with a dedicated test set is described, there is no mention of the type of ground truth used (e.g., expert consensus, pathology, outcomes data). The validation focuses on the software's ability to function as intended and display data from new modalities accurately, which relies on adherence to standards and internal software verification.
8. The Sample Size for the Training Set:
This device is image viewer software, not a machine learning or AI algorithm that requires a "training set" in the conventional sense. Therefore, no training set size is mentioned.
9. How the Ground Truth for the Training Set Was Established:
As there is no training set for this type of device, this information is not applicable.
(192 days)
VITAL IMAGES, INC.
The CT Dual Energy Image View application accepts CT images acquired using different tube voltages and/or tube currents of the same anatomical location. The material composition of body regions may be determined using the energy dependence of the attenuation coefficients of different materials. This approach enables images to be generated at multiple energies within the available spectrum to visualize and analyze information about anatomical and pathological structures.
The Vitrea® CT Dual Energy Image View is a post-processing software application which functions on the Vitrea® Platform, cleared by K150258. This application allows you to load and combine two CT images, acquired from a Toshiba CT scanner with a Dual Energy protocol, of the same anatomical location; one obtained with low kV tube voltage and one obtained with high kV tube voltage. Vitrea® computes a Blended and an Enhanced image with adjustable virtual energy levels, a Virtual Non-Contrast (VNC) image, an Iodine Map overlaid on one of the original or derived images, a CT graph, and a best Contrast-to-Noise Ratio (CNR) image. For the Raw Data Analysis workflow, Vitrea® calculates a Monochromatic image.
The software supports Toshiba dual-energy studies containing image-based and projection-based reconstruction techniques.
- Image-based reconstructed data consists of a standard image reconstruction of the two scans. Use the Dual Energy Image Domain (Two Volumes) workflow for these datasets.
- Projection-based reconstructed data consists of special material image reconstruction where the two scans are combined to create two sets of images where a predefined material is defined, typically Iodine/Water and/or Calcium/Water. Use the Dual Energy Raw Data Analysis (Four or Six Volumes) workflow for these datasets.
- NOTE: It is important that a description of the material is contained in the DICOM Image Comments tag (0020, 4000) for each dataset at the time of the scan. For Iodine/Water projection-based reconstruction, the Iodine material image should be labeled "I/H2O," and the corresponding Water material image should be labeled "H2O/I." For the Calcium/Water projection-based reconstruction, the Calcium material image should be labeled "Ca/H2O," and the corresponding Water material image should be labeled "H2O/Ca."
NOTE: The dual-energy datasets must be coincident with the same frame of reference UID.
NOTE: Available with Toshiba datasets only.
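The labeling convention in the notes above implies a routing step: volumes are matched to the Raw Data Analysis workflow by their Image Comments values, while image-domain volumes are combined into blended output. A minimal sketch with hypothetical label strings, array shapes, and an illustrative linear blend (the commercial blending algorithm is not disclosed in the summary):

```python
import numpy as np

# Hypothetical material images keyed by their DICOM Image Comments
# (0020, 4000) values, per the labeling convention described above.
datasets = {
    "I/H2O": np.random.rand(4, 8, 8),  # iodine material image
    "H2O/I": np.random.rand(4, 8, 8),  # paired water material image
}

def pick_material_pair(datasets, material_label, water_label):
    """Select a material/water volume pair for the Raw Data Analysis workflow."""
    try:
        return datasets[material_label], datasets[water_label]
    except KeyError as missing:
        raise ValueError(f"dataset missing required label {missing}")

iodine_vol, water_vol = pick_material_pair(datasets, "I/H2O", "H2O/I")

# Image-domain workflow: an illustrative linear blend of the low- and
# high-kV volumes; the weight stands in for the adjustable energy level.
low_kv = np.random.rand(4, 8, 8)
high_kv = np.random.rand(4, 8, 8)
weight = 0.6
blended = weight * low_kv + (1 - weight) * high_kv
```

If either label is missing, the pairing step fails loudly, which mirrors why the summary stresses that the comment labels must be set at scan time.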
Here's a summary of the acceptance criteria and the study conducted for the Vital Images, Inc. Vitrea® CT Dual Energy Image View device, based on the provided text:
Important Note: The provided document is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device rather than presenting a detailed independent study with specific numerical acceptance criteria and performance metrics for the new device. Therefore, the "acceptance criteria" discussed are largely qualitative and relate to the device functioning as intended and being comparable to the predicate, rather than precise quantitative thresholds. Similarly, "reported device performance" is described qualitatively as meeting those intentions.
Acceptance Criteria and Reported Device Performance
Acceptance Criteria (Qualitative) | Reported Device Performance (Qualitative) |
---|---|
Software operates according to defined requirements (functional, performance, and safety). | All external validation testing passed with each expert confirming the Vitrea® CT Dual Energy Image View software fulfills the intended use and meets the needs of the user. |
Functionality (e.g., Iodine maps, Virtual Non-Contrast, Monochromatic images, Best CNR, Blended images, Enhanced images at different energy levels) is present and works as intended. | The clinical radiologist rated the quality and performance of each feature as establishing equivalence to the predicate device. |
User interface and usability are acceptable to trained professionals. | Two Radiological Technologists provided positive feedback regarding performance and usability of each feature. |
Safety risks are reduced as low as possible, and benefits outweigh residual risks. | All risks for this feature were collectively reviewed to determine if the benefits outweigh the risk, and it was assessed that the benefits do outweigh the risks. |
Substantial equivalence to the predicate device (K132813) in terms of intended use, indications for use, principle of operation, and technological characteristics. | The device was found substantially equivalent to the predicate device across all these criteria, with noted minor differences not raising new safety/effectiveness questions. |
Study Details
2. Sample size used for the test set and the data provenance:
- Test Set (External Validation):
- Clinical Radiologist: Evaluated "multiple test cases." The exact number of cases is not specified.
- Radiological Technologists (2): Provided feedback on "a series of questions."
- Data Provenance: The document does not explicitly state the country of origin or whether the data was retrospective or prospective. Given the context of comparing to a predicate device, it's likely existing data or a simulated environment was used, but this is not confirmed.
- Test Set (Internal Validation / Phantom Testing):
- Sample Size: "Various phantoms." The exact number or types are not specified.
- Data Provenance: Not specified, but likely generated internally for phantom testing.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Three experienced clinical experts were involved in the external validation of the software's features. Specifically, one clinical radiologist and two radiological technologists.
- Qualifications: "Experienced clinical experts," "clinical radiologist," and "Radiological Technologists." Specific years of experience or board certifications are not provided.
4. Adjudication method for the test set:
- The document does not describe a formal adjudication method (like 2+1 or 3+1 consensus).
- Instead, it states that the clinical radiologist evaluated test cases comparing the subject device to the predicate device and rated "quality and performance." The two radiological technologists provided feedback on performance and usability.
- The conclusion states that "All external validation testing passed with each expert confirming the Vitrea® CT Dual Energy Image View software fulfills the intended use and meets the needs of the user," implying individual expert agreement rather than a formal consensus process between them for ground truth.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC comparative effectiveness study was done in the sense of evaluating human reader performance with and without AI assistance to quantify improvement.
- The study involved comparing the new device's output and usability to a predicate device by expert users. It was a comparison of tool equivalence rather than an assessment of human performance augmentation.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- The device is a "post-processing software application."
- "Internal Validation (Phantom Testing)" included "side-by-side visual comparisons of dual energy algorithm outputs with Toshiba Display Console as the predicate device using phantom data." This indicates some form of standalone evaluation against a reference/predicate system, but it's not explicitly framed as an "algorithm-only" performance study in isolation from any human interpretation. However, the direct comparison of "algorithm outputs" suggests an assessment of the algorithm's results themselves.
7. The type of ground truth used:
- For the external validation, the ground truth was effectively the expert opinion/comparison of the clinical radiologist against the output of the predicate device (Toshiba Dual Energy System Package K132813). The experts assessed whether the new device's features produced images of comparable "quality and performance" and fulfilled the "intended use."
- For internal validation (phantom testing), the "Toshiba Display Console" served as the reference for "side-by-side visual comparisons of dual energy algorithm outputs," implying its output was considered the reference or "ground truth" for comparison.
8. The sample size for the training set:
- The document does not mention a training set or any machine learning/AI model training. The Vitrea® CT Dual Energy Image View is described as a "post-processing software application" that "computes" various images based on acquired CT data. It does not appear to be an AI/ML device that requires a training set in the conventional sense. The "algorithm" mentioned refers to the computational methods for generating the dual-energy derived images, not a learned model.
9. How the ground truth for the training set was established:
- As no training set is mentioned (see point 8), this information is not applicable.
(51 days)
VITAL IMAGES, INC.
Multi Modality Viewer is a software application within Vitrea® that allows the examination and manipulation of a series of medical images obtained from MRI and CT scanners.
The option also enables clinicians to compare multiple series for the same patient, side-by-side, and switch to other integrated applications to further examine the data.
Multi Modality Viewer is a software application which functions on the Vitrea Platform, cleared by K150258. This application allows intuitive navigation, and manipulation of medical images obtained from MRI and CT scanners. This application enables clinicians to compare multiple series of the same patient, side-by-side, and switch to other integrated applications to further examine the data.
It provides clinical tools to review images to help qualified physicians provide efficient and effective patient care.
Key features:
General Viewing:
- Linked 2D, MPR and 4D viewers for single and multi-study comparison
- Creation of retrievable evidence and snapshots
- User-defined flexible display protocols
Access to Advanced Applications and Workflows:
- In-application access to MR Stitching application
- Evidence creation and sharing across workflows
General Image Display, Manipulation, and Analysis Tools:
- Maximum and Minimum Intensity Projection (MIP/MinIP)
- Identification and Display of Regions of Interest (ROIs)
- CINE image display
- Multi-frame display
- Color image display
- Simultaneous multiple studies review
- Cross-reference lines support
- Display of selected images, series, or entire study
- Comparison of multiple series or studies
- Scroll
- Pan
- Zoom
- Focus
- Flip (vertically, horizontally)
- Invert
- Rotate (clockwise, counter-clockwise)
- Arrow
- Adjust Registration
- Auto window level/width setting
- Text/Arrow annotation (Label)
- Measurement of distance (Ruler), Angle, Cobb Angle, Ellipse ROI, and Freehand ROI
Specialized Tools:
- Image subtraction of two series/datasets
- Access to semi-automated image stitching
- Study and series linking
- Register two different series or groups that do not share a frame of reference to link them spatially
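Several of the listed tools reduce to small geometric computations. A minimal sketch of the Ruler, Cobb Angle, and MIP/MinIP operations, using hypothetical point coordinates and pixel spacing (the viewer's actual implementation is not described in the summary):

```python
import numpy as np

def ruler_mm(p1, p2, spacing):
    """Distance between two pixel-index points, scaled by pixel spacing (mm)."""
    delta = (np.asarray(p2, float) - np.asarray(p1, float)) * np.asarray(spacing, float)
    return float(np.linalg.norm(delta))

def cobb_angle_deg(line_a, line_b):
    """Angle in degrees between two user-drawn lines, as in a Cobb angle tool."""
    va = np.asarray(line_a[1], float) - np.asarray(line_a[0], float)
    vb = np.asarray(line_b[1], float) - np.asarray(line_b[0], float)
    cos = abs(np.dot(va, vb)) / (np.linalg.norm(va) * np.linalg.norm(vb))
    return float(np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))))

# Maximum/Minimum Intensity Projection along the slice axis of a volume.
volume = np.random.rand(16, 64, 64)
mip = volume.max(axis=0)
minip = volume.min(axis=0)
```

For example, `ruler_mm((0, 0), (3, 4), (1.0, 1.0))` returns 5.0 mm, and two perpendicular lines yield a Cobb angle of 90 degrees.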
The Multi Modality Viewer is a software application for examining and manipulating medical images from MRI and CT scanners.
It is considered substantially equivalent to its predicate device, the MR Core Software (K151115), which only handled MRI images, and a reference device, Softread Software (K040305), which supports both CT and MRI.
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The submission does not explicitly state quantitative acceptance criteria or a specific performance study comparison table for the Multi Modality Viewer beyond feature-by-feature comparison with predicate/reference devices. Instead, it focuses on demonstrating that the device has similar technological characteristics, intended use, and indications for use, and that verification and validation testing confirms its safety and effectiveness.
The document primarily performs a feature-by-feature comparison to establish substantial equivalence.
Criteria (for comparison/equivalence) | Subject Device: Multi Modality Viewer | Predicate Device: MR Core Software (K151115) | Reference Device: Softread (K040305) | Reported Performance (or comparison outcome) |
---|---|---|---|---|
Classification Name | System, Image Processing, Radiological | System, Image Processing, Radiological | N/A | Same |
Regulatory Number | 892.2050 | 892.2050 | N/A | Same |
Product Code | LLZ | LLZ | N/A | Same |
Classification | Class II | Class II | N/A | Same |
Review Panel | Radiology | Radiology | N/A | Same |
Indications for Use | Examines/manipulates MRI and CT images; compares multiple series side-by-side. | Examines/manipulates MRI images; compares multiple series side-by-side. | N/A | Added CT support compared to predicate; similar to reference. |
Intended Users | Radiologists, Clinicians, Technologists | Radiologists, Clinicians, Technologists | N/A | Same |
Patient Population | Not applicable (viewer software) | Not applicable (viewer software) | N/A | Same |
Modality Support | CT and MRI | MRI | CT and MRI | Added CT support compared to predicate; same as reference. |
DICOM Image Communication | Yes | Yes | Yes | Same |
2D Image Review | Yes | Yes | Yes | Same |
2D Comparative Review | Yes | Yes | Yes | Same |
Multi-Planner Reformatting | Yes | Yes | Yes | Same |
MIP/MinIP | Yes | Yes | Yes | Same |
Image Editing, Setting, Saving | Yes | Yes | Yes | Same |
Annotation & Tagging Tools | Yes | Yes | Yes | Same |
Display Options (e.g., thickness) | Yes | Yes | Yes | Same |
Quantitative Measurements | Yes | Yes | Yes | Same |
Snapshot | Yes | Yes | Yes | Same |
Cine Image Display | Yes | Yes | Yes | Same |
Multi-frame Display | Yes | Yes | Yes | Same |
Color Image Display | Yes | Yes | Yes | Same |
Simultaneous Multiple Studies Review | Yes | Yes | Yes | Same |
Cross-reference Lines Support | Yes | Yes | Yes | Same |
Display of Selected Images/Series/Study | Yes | Yes | Yes | Same |
Comparison of Multiple Series/Studies | Yes | Yes | Yes | Same |
Scroll Image | Yes | Yes | Yes | Same |
Zoom Image | Yes | Yes | Yes | Same |
Pan Image | Yes | Yes | Yes | Same |
Focus Image | Yes | Yes | Yes | Same |
Rotate Image | Yes | Yes | Yes | Same |
Flip Image - Vertical | Yes | Yes | Yes | Same |
Flip Image - Horizontal | Yes | Yes | Yes | Same |
Rotate Image - Clockwise | Yes | Yes | Yes | Same |
Rotate Image - Counter-clockwise | Yes | Yes | Yes | Same |
Invert Image | Yes | Yes | Yes | Same |
Arrow | Yes | Yes | Yes | Same |
Auto Window Level/Width Setting | Yes | Yes | Yes | Same |
Measurement of Distance | Yes | Yes | Yes | Same |
Measurement of Angle | Yes | Yes | Yes | Same |
Measurement of Cobb Angle | Yes | Yes | Yes | Same |
Identification & Display of Ellipse ROIs | Yes | Yes | Yes | Same |
Identification & Display of Freehand ROIs | Yes | Yes | Yes | Same |
Manual Registration | Yes | Yes | Yes | Same |
Image Subtraction of two series/datasets | Yes | Yes | Yes | Same |
Study and Series Linking | Yes | Yes | Yes | Same |
Semi-automated Image Stitching | Yes | Yes | Yes | Same |
Time Intensity Analysis | Yes | Yes | N/A (not listed for Softread) | Same (comparison with predicate) |
Batch Save of MPR reformats | Yes | Yes | N/A (not listed for Softread) | Same (comparison with predicate) |
Overall Conclusion: The device is considered substantially equivalent because the added CT modality feature is similar to the reference device and does not raise different questions of safety and effectiveness. The verification and validation testing performed "demonstrate the subject device is as safe and effective as the predicate and reference devices."
2. Sample Size for Test Set and Data Provenance
The document mentions "Verification of the software that included performance and safety testing" and "Validation of the software that included simulated usability testing by experienced professionals." However, it does not specify the sample size used for any test set or the country of origin of the data, nor whether it was retrospective or prospective. The information provided is high-level about the testing processes.
3. Number of Experts Used to Establish Ground Truth and Qualifications
For the "External Validation" mentioned under the "Validation" section, "experienced medical professionals evaluated the application." However, the exact number of experts and their specific qualifications (e.g., radiologist with 10 years of experience) are not specified.
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1) for establishing ground truth or evaluating the test set. It only states that "All validators confirmed that the Multi Modality Viewer software fulfills its intended use."
5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study
The document does not mention a Multi Reader Multi Case (MRMC) comparative effectiveness study and therefore, there is no information on the effect size of human readers improving with AI vs. without AI assistance. This device is a viewer, not an AI-assisted diagnostic tool in the sense of providing automated interpretations.
6. Standalone (Algorithm Only) Performance Study
The primary evaluation appears to be of the software's functionality as a viewer, not as an algorithm performing a specific diagnostic task in standalone mode. Verification and validation tests were conducted to confirm proper function of the device's features, but no standalone performance study measuring diagnostic accuracy or similar metrics for an algorithm without human-in-the-loop was reported.
7. Type of Ground Truth Used
The document indicates that "simulation usability testing by experienced professionals" and "workflow testing" were conducted, with "validators confirmed that the Multi Modality Viewer software fulfills its intended use." This suggests "expert consensus" on usability and fulfillment of intended use as the form of "ground truth" or validation outcome, rather than pathology, outcomes data, or a specific diagnostic ground truth, as the device is a viewing platform.
8. Sample Size for the Training Set
The document does not refer to a "training set" because the Multi Modality Viewer is described as a software application for viewing and manipulating images, not an AI/ML algorithm that requires a training set for model development.
9. How Ground Truth for Training Set Was Established
Since there is no mention of a training set, the method for establishing ground truth for a training set is not applicable or discussed.
(55 days)
Vital Images, Inc.
The separately-licensed CT Colonography option is intended for closely examining the lumen of the colon using features such as auto-segmentation, axial imaging, multi-planar reformatting, fly-through, simultaneous display of prone and supine images, and transparent wall view.
Vitrea® CT Colon Analysis software generates 2D and 3D images of the colon to allow close examination of the lumen of the colon, thereby increasing the speed and ease of locating and analyzing suspected polyps, masses and lesions. Vitrea® CT Colon Analysis software has the following features:
- Auto-segmentation of the colon
- Segmentation editing
- Axial imaging, multi-planar reformatting and 3D views
- Manual and automatic endoluminal fly-through of the colon
- Eye-based navigation for performing fly-through and target-based navigation for examining POI and Reverse View mode
- MPR eye placement to adjust view direction down lumen
- Transparent wall view of the colon with a field-of-view cone to act as a reference during fly-through
- Dual Volume Viewer window format for side-by-side comparison of prone and supine studies
- Ability to mark points in the colon with arrows; arrows are hidden from view when you fly too close
- SPACE BAR to step through images containing arrows
- Prone/supine registration
- Polyp Probe tool to select and characterize polyps
- Polyp assessment using the C-RADS guidelines
- Fly-through image batches and digital movies
- Special report template that contains an anatomically-labeled diagram of the colon for easier documentation of findings, as well as a C-RADS report template
- Fly-through keyboard shortcuts
- Automatic fluid/stool tagging and subtraction
Vitrea® CT Colon Analysis software deploys from the Vitrea® Platform, cleared under K150258, Vitrea®, Version 7.0 Medical Image Processing Software. The software provides imaging information as an assistance to the physician. The software does not provide diagnosis or determine the recommended medical care.
Here's an analysis of the acceptance criteria and study information for the Vitrea® CT Colon Analysis software, based on the provided document:
Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of "acceptance criteria" with quantitative metrics directly linked to the device's performance for functions like auto-segmentation accuracy, polyp detection sensitivity, or specificity. Instead, the document focuses on demonstrating substantial equivalence to a predicate device and a reference device, primarily through feature comparison and qualitative assessment of enhancements.
The core "acceptance criteria" from the document's perspective appear to be:
- The software successfully performs its described functions (auto-segmentation, multi-planar reformatting, fly-through, simultaneous display of prone and supine images, transparent wall view, Electronic Bowel Cleansing).
- The new Electronic Bowel Cleansing (EBC) feature does not negatively impact the device's safety or effectiveness compared to the predicate.
- The device adheres to relevant standards (DICOM, IEC 62304, ISO 14971).
- The medical benefits outweigh the residual risks.
Given this, the "reported device performance" is largely qualitative and focused on functionality and safety rather than specific quantitative metrics.
Acceptance Criteria (inferred from document):
- Device successfully generates 2D and 3D images of the colon.
- Device successfully performs features such as auto-segmentation, axial imaging, multi-planar reformatting, fly-through, simultaneous display of prone and supine images, and transparent wall view.
- Device successfully performs Electronic Bowel Cleansing (EBC), i.e., automatic fluid/stool tagging and subtraction.
- The EBC feature does not affect the intended use, indications for use, or fundamental scientific technology of the cleared predicate.
- The EBC feature does not raise new questions of safety or effectiveness.
- The risk of incorrectly removing polyps when hiding tagged material in 3D is acceptably low (observed 0% during validation).
- The device is as safe and effective as the predicate device.
- The device complies with voluntary recognized consensus standards (DICOM, ISO 14971, IEC 62304).

Reported Device Performance:
- The software generates 2D and 3D images as described.
- All listed features (auto-segmentation, axial imaging, multi-planar reformatting, fly-through, simultaneous display of prone and supine images, transparent wall view) are successfully implemented and function.
- Electronic Bowel Cleansing functions by electronically removing residual fecal material tagged with an agent.
- The added EBC feature does not alter the intended use, indications for use, or fundamental scientific technology.
- Verification and validation testing demonstrated the modified device is as safe and effective as the predicate, raising no different questions of safety and effectiveness.
- "The risk of incorrectly removing polyps when hiding tagged material in 3D (which was observed 0% of the time during our validation) has been mitigated because the polyps can still be seen in 2D."
- The device operates according to defined requirements and fulfills the applicable standards.
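The Electronic Bowel Cleansing behavior described above (contrast-tagged residual material is electronically removed) can be approximated by a threshold rule. A minimal sketch, assuming tagged material is bright on CT; the HU threshold and replacement value are hypothetical, and the commercial algorithm is not disclosed in the summary:

```python
import numpy as np

TAGGED_HU = 200    # assumed minimum HU of contrast-tagged fluid/stool
AIR_HU = -1000.0   # replacement value so the cleansed lumen reads as air

def electronic_cleansing(volume_hu):
    """Naive cleansing: replace voxels at or above the tagging threshold with air."""
    cleansed = volume_hu.copy()
    cleansed[volume_hu >= TAGGED_HU] = AIR_HU
    return cleansed

# A tiny 2x2 HU slice: air, soft tissue, and two tagged (bright) voxels.
slice_hu = np.array([[-1000.0, 50.0],
                     [300.0, 250.0]])
cleansed = electronic_cleansing(slice_hu)
```

A plain threshold ignores partial-volume boundaries between tagged fluid and the colon wall, which is one reason the summary stresses that polyps hidden in the 3D view remain visible in 2D.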
Study Details for Vitrea® CT Colon Analysis software (focus on Electronic Bowel Cleansing feature):
- Sample size used for the test set and the data provenance:
- Test Set Sample Size: "datasets based on real patient data and phantoms." The specific number of real patient datasets or phantoms is not provided in this summary.
- Data Provenance: "real patient data and phantoms." The country of origin is not specified. The studies were likely retrospective as they refer to "real patient data" being used for testing.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The document mentions "experienced professionals" for usability testing as part of internal validation and "experienced users" for external validation. However, it does not specify the exact number or qualifications of experts used to establish the ground truth for the test set (e.g., diagnosis of polyps, accurate segmentation). The ground truth establishment for the EBC feature likely involved comparing the "cleansed" images to the original data and potentially to expert assessments, but the details are not provided.
- Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- No adjudication method details are provided in this summary.
- If a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done:
- No MRMC comparative effectiveness study is explicitly mentioned for the enhancement (Electronic Bowel Cleansing). The general clinical performance tests "demonstrate performance, safety, and effectiveness," but it's not described as a comparative MRMC study measuring human reader improvement with/without AI assistance. The statement "The risk of incorrectly removing polyps when hiding tagged material in 3D (which was observed 0% of the time during our validation) has been mitigated because the polyps can still be seen in 2D" suggests some form of evaluation but not necessarily a formal MRMC study on reader performance.
- If a standalone (i.e. algorithm-only, without human-in-the-loop performance) study was done:
- Yes, implicitly. The "Verification" section confirms "Test cases were executed against the system features and requirements" and "Validation of the software that included phantom testing." The "Internal Validation and Phantom Testing" section also states "Internal validation included internal user acceptance testing using real scans as well as synthetic phantoms. The validation criteria covered image qualitative comparison of segmentation to previous releases, quantitative evaluation of polyp diameter accuracy using phantom datasets, and run time performance evaluation." This indicates algorithm-only performance assessment.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the Electronic Bowel Cleansing feature, the ground truth appears to be based on "real patient data and phantoms." For "quantitative evaluation of polyp diameter accuracy using phantom datasets," it would be the known dimensions within the phantoms. For "image qualitative comparison of segmentation," it would be comparison to previous releases or potentially expert-reviewed segmentation. Explicit pathology or outcomes data as ground truth for the EBC feature's "acceptance" is not specified here, though the general product's goal is to aid in detecting "suspected polyps, masses and lesions."
- The sample size for the training set:
- The document does not provide information on the sample size used for training the algorithms, nor does it explicitly state if machine learning was used for the EBC feature (though the term "algorithm" is used). This document focuses on the validation of the enhanced feature rather than the initial development of the core CT Colon Analysis algorithms.
- How the ground truth for the training set was established:
- Not specified, as information regarding a dedicated training set is not provided in this summary.
(169 days)
Vital Images, Inc.
The separately-licensed Lung Analysis option is intended for the review and analysis of thoracic CT images for the purposes of characterizing nodules in the lung in a single study, or over the time course of several thoracic studies. Characterizations include diameter, volume over time. The system automatically performs the measurements, allowing lung nodules and measurements to be displayed.
Lung Analysis aids in measuring and characterizing lung nodules. The interface and automated tools help to efficiently determine growth patterns and compose comparative reviews. Lung Analysis is intended for the review and analysis of thoracic CT images for the purposes of characterizing nodules in the lung in a single study, or over the time course of several thoracic studies. Characterizations include diameter, volume over time. The system automatically performs the measurements, allowing lung nodules and measurements to be displayed. The Lung Analysis Software requires the user to identify a nodule and to determine whether it is a GGO or solid nodule in order to use the appropriate characterization tool.
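The diameter and volume-over-time characterizations described above reduce, in the simplest case, to counting segmented voxels and fitting exponential growth. A generic sketch under those assumptions (the helper names are illustrative, not Vital's API):

```python
import numpy as np

def nodule_volume_mm3(mask, spacing_mm):
    """Volume of a segmented nodule: voxel count times voxel volume.

    `mask` is a boolean 3-D segmentation array and `spacing_mm` the
    (z, y, x) voxel spacing taken from the CT header.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return np.count_nonzero(mask) * voxel_mm3

def volume_doubling_time_days(v1_mm3, v2_mm3, days_between):
    """Volume doubling time assuming exponential growth between scans."""
    return days_between * np.log(2) / np.log(v2_mm3 / v1_mm3)
```

A follow-up review would then compare doubling times against clinical thresholds; the submission does not state which, so none are hard-coded here.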
The Lung Analysis software from Vital Images, Inc. (K151283) had its performance evaluated through non-clinical tests including phantom testing and external validation.
1. Table of Acceptance Criteria and Reported Device Performance:
The document provided details on phantom testing results for GGO volume, diameter, and mean CT attenuation measurements. Specific acceptance criteria values were not explicitly stated for all tests. However, for GGO volume measurements, the results were reported as "within acceptable" ranges.
Test Case Name | Parameters (Reported) | Type | Status |
---|---|---|---|
GGO volume accuracy & precision: Siemens 64-CT scanner | GGO volume: Bias (Absolute error, APNE), Precision (PRC) | Phantom dataset | Passed |
GGO volume accuracy & precision: Philips 16-CT scanner | GGO volume: Bias (Absolute error, APNE), Precision (PRC) | Phantom dataset | Passed |
GGO volume accuracy & precision: Toshiba AQ1 320-CT scanner | GGO volume: Bias (Absolute error, APNE), Precision (PRC) | Phantom dataset | Passed |
Intra-reader & inter-reader variability of GGO volume measurements (patient-based) | Intra-reader agreement & inter-reader agreement: CCCinter, TDI, Coverage Probability | Patient-based dataset | Passed |
GGO longest diameter measurement accuracy & precision: Toshiba AQ1 320-CT scanner | Longest diameter: Bias (Absolute error), Absolute percentage normalized error (APNE), Precision (within-nodule standard deviation, wSD) | Phantom dataset | Passed |
GGO mean attenuation measurement accuracy & precision: Toshiba AQ-One CT scanner | Mean attenuation: Bias (Absolute error), Absolute percentage normalized error (APNE), Precision (within-nodule standard deviation, wSD) | Phantom dataset | Passed |
Summary of Measurement Accuracy for GGO Volume on Toshiba CT scanner (100 mAs):
GGO Ref. diameter | Bias APE% | Precision PRC% |
---|---|---|
5mm | 15.04% | 24.74% |
8mm | 16.15% | 11.85% |
10mm | 5.49% | 3.24% |
12mm | 2.98% | 3.38% |
Note: The document states these results "may vary for other CT scanner types or CT manufacturers and for other acquisition conditions."
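The document does not define how the bias and precision columns were computed. A hedged sketch, assuming APE% is the absolute percentage error of the mean measured volume against the phantom's known reference, and PRC% a percentage coefficient of variation across repeated measurements (both are assumptions, not definitions from the submission):

```python
import numpy as np

def ape_percent(measured_volumes, reference_volume):
    """Absolute percentage error of the mean measurement against a
    known phantom reference (assumed reading of the 'Bias APE%' column)."""
    m = np.asarray(measured_volumes, dtype=float)
    return abs(m.mean() - reference_volume) / reference_volume * 100.0

def prc_percent(measured_volumes):
    """Percentage coefficient of variation across repeated measurements
    (assumed stand-in for the 'Precision PRC%' column)."""
    m = np.asarray(measured_volumes, dtype=float)
    return m.std(ddof=1) / m.mean() * 100.0
```

Under these definitions the 5 mm row's larger errors follow naturally: the smaller the nodule, the more a fixed absolute segmentation error inflates the percentage figures.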
2. Sample Size Used for the Test Set and Data Provenance:
The document mentions "phantom datasets" and "patient-based datasets" for testing. However, it does not specify the exact sample sizes (number of phantoms or patients/cases) for these test sets. The provenance of the data (country of origin, retrospective or prospective) is not provided.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
The document mentions "experienced medical professionals" for external validation but does not specify the exact number of experts or their detailed qualifications (e.g., "radiologist with 10 years of experience").
4. Adjudication Method for the Test Set:
The document does not specify any adjudication method (e.g., 2+1, 3+1). It only mentions that "intra-reader and inter-reader agreement" was evaluated for GGO volume measurements on patient-based datasets.
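The CCCinter figure cited for the inter-reader agreement analysis is presumably Lin's concordance correlation coefficient between two readers' volume measurements; the submission does not define the metric, so the standard formula is shown here as an assumption:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two readers.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2),
    using population (biased) moments per Lin's original definition.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cov = ((x - x.mean()) * (y - y.mean())).mean()
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```

Unlike plain Pearson correlation, CCC penalizes systematic offsets between readers, which is why it is favored for agreement studies.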
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:
A MRMC comparative effectiveness study was not performed to evaluate how much human readers improve with AI vs. without AI assistance. The document focuses on the performance of the software itself and its validation against existing datasets and phantom models.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:
Yes, a standalone performance evaluation of the algorithm appears to have been conducted through the phantom testing. The "accuracy and precision of the GGO volume measurements after semi-automatic segmentation and before manual editing" indicates an algorithm-only measurement phase.
7. The Type of Ground Truth Used:
The ground truth used for phantom testing was based on "anthropomorphic lung phantoms with synthetic nodules," which provides a controlled and known ground truth for volume and dimensional measurements. For patient-based datasets, the ground truth for reproducibility (intra-reader and inter-reader agreement) appears to have been established by human experts, as indicated by "User-edited GGOs" in the table. However, the ultimate ground truth for lesion characterization (e.g., pathology, outcomes data) is not specified.
8. The Sample Size for the Training Set:
The document does not provide information on the sample size used for the training set.
9. How the Ground Truth for the Training Set Was Established:
The document does not provide information on how the ground truth for the training set was established.
(89 days)
VITAL IMAGES, INC.
The Vitrea Lung Density Analysis software provides CT values for the pulmonary tissue from CT thoracic datasets. Three-dimensional (3D) segmentation of the left lung and right lung, volumetric analysis, density evaluations and reporting tools are integrated in a specific workflow to offer the physician a quantitative support for diagnosis and follow-up evaluation of lung tissue images.
Vitrea CT Lung Density Analysis assists in analyzing lung densities and volumes. It semiautomatically segments lung tissues with quantifiable controls and renderings to aid communication with the pulmonologist.
The key features are:
- Semi-automatic right lung, left lung, and airway segmentation
- Visualization of lung density with color-defined Hounsfield Unit (HU) ranges
- Lung density result quantification with HU density range, volume measurements, lung density index, and the PD15% measurement
- Density graph/histogram of the classified lung voxels' relative frequencies
- Comparison of upper and lower lung density index ratios
- Adjustable density thresholds for refining and optimizing HU ranges
- Overlay of density quantification results and density graph histogram for reporting
- Export of density values and curves to CSV tables or copy to clipboard for insertion into a report
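The density metrics in the list above follow standard emphysema-quantification conventions. A minimal sketch, assuming PD15% is the HU value below which 15% of the classified lung voxels fall and taking -950 HU as one plausible low-attenuation cutoff (both are common conventions in the literature, not confirmed details of the Vitrea implementation):

```python
import numpy as np

def pd15(lung_hu):
    """15th-percentile density: the HU value below which 15% of the
    classified lung voxels fall (lower values suggest emphysema)."""
    return float(np.percentile(lung_hu, 15))

def low_attenuation_fraction(lung_hu, threshold_hu=-950):
    """Fraction of lung voxels below a density threshold; -950 HU is a
    commonly used emphysema cutoff (assumed, not Vitrea-specific)."""
    hu = np.asarray(lung_hu, dtype=float)
    return float(np.mean(hu < threshold_hu))
```

The "adjustable density thresholds" feature would correspond to letting the user vary `threshold_hu`, with the histogram view showing how the classified voxel fractions shift.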
Here's an analysis of the provided text to extract the acceptance criteria and study details. Please note that the document is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device rather than providing a detailed clinical trial report with specific acceptance criteria and performance metrics against those criteria. Therefore, some information, particularly quantitative acceptance criteria and specific performance measures, is not explicitly stated in this document.
The document primarily relies on demonstrating equivalence in intended use, technological characteristics, and safety and effectiveness management (design controls, risk management, software verification and validation).
Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria in terms of performance metrics (e.g., sensitivity, specificity, accuracy thresholds for lung density measurements or segmentation). Instead, it implicitly defines "acceptance" as meeting functional requirements, user needs, and demonstrating substantial equivalence to the predicate device.
The reported device performance is largely qualitative, focusing on whether the software functions as designed and meets user expectations.
Acceptance Criteria (Implicit from document) | Reported Device Performance |
---|---|
Functional Requirements Met | "Software testing was completed to ensure the new features operate according to defined requirements." |
User Needs and Intended Use Conformance | "The validation team conducted workflow testing that provided evidence that the system requirements and features were implemented, reviewed and met." "During external validation of the CT Lung Density Analysis software, experienced users evaluated the visualization, axial plane location, quantification of density, and snapshots among other features. Each user felt that the Vitrea CT Lung Density Analysis software enables the user to assess and quantify lung density." |
Safety and Risk Mitigation | "Each risk pertaining to these features have been individually assessed to determine if the benefits outweigh the risk. Every risk has been reduced as low as possible and has been evaluated to have a probability of occurrence of harm of 'Improbable.'" "The overall residual risk for the project is deemed acceptable." |
Equivalence to Predicate Device | The entire "Substantial Equivalence Comparison" section details how the subject device is similar in regulatory classification, intended use (with one noted difference that is deemed not to raise new questions of safety/effectiveness), and numerous technological features for data loading, viewing, segmentation, lung volume analysis, lung density analysis, and data export. |
Numerical Quantity Verification | For internal validation, "Results of numerical quantities calculated by CT Lung Density Analysis were verified using CT semi-synthetic phantoms and patient based CT datasets." (No specific metrics or thresholds are provided). |
Study Details:
The document combines internal and external validation for its non-clinical testing. It explicitly states that "The subject of this 510(k) notification, Vitrea CT Lung Density Analysis software, did not require clinical studies to support safety and effectiveness of the software."
- Sample size used for the test set and the data provenance:
- Internal Validation (Phantom Testing): "various phantoms and patient based CT datasets." No specific number is given for either the phantoms or patients.
- External Validation: No specific number of cases or datasets is explicitly mentioned. The focus is on user evaluation of features.
- Data Provenance: Not specified, but "patient based CT datasets" implies retrospective patient data. Given the company is US-based (Minnetonka, MN), it's likely US data or data from a similar regulated environment.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- For Internal Validation: "Results of numerical quantities calculated by CT Lung Density Analysis were verified using CT semi-synthetic phantoms and patient based CT datasets." It doesn't explicitly state the number or qualifications of experts establishing ground truth for the patient datasets. For phantoms, the ground truth is often inherent in the phantom's design or known physical properties.
- For External Validation: "experienced users evaluated the visualization, axial plane location, quantification of density, and snapshots among other features." The number of experienced users is not specified, nor are their exact qualifications (e.g., "radiologist with X years of experience").
- Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not specified. The document describes "verification" and "validation," including internal testing and external user acceptance, but does not detail a specific adjudication method for ground truth establishment.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No. The document explicitly states that "The subject of this 510(k) notification... did not require clinical studies to support safety and effectiveness of the software." Therefore, no MRMC study comparing human readers with and without AI assistance was conducted or reported.
- If a standalone (i.e. algorithm-only, without human-in-the-loop performance) study was done:
- The document implies a standalone evaluation was performed during internal validation, where "Results of numerical quantities calculated by CT Lung Density Analysis were verified using CT semi-synthetic phantoms and patient based CT datasets." This focuses on the algorithmic output against a known or established truth without direct human interpretation as part of the primary performance metric. However, the subsequent "External Validation" involves human interaction with the software.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For Internal Validation:
- Semi-synthetic phantoms: Ground truth is inherent in the phantom's known physical properties or generated data.
- Patient-based CT datasets: The type of ground truth is not explicitly stated (e.g., expert consensus on manual measurements, pathology reference standard). It mentions "verified," implying a reference standard was used, but not its nature.
- For the external validation, "ground truth" was more about user acceptance and functionality, rather than specific quantitative medical accuracy against a clinical reference.
- The sample size for the training set:
- Not specified. The document details software development and testing, but not the specifics of algorithm training if machine learning was used (which is not explicitly stated but implied for segmentation/analysis).
- How the ground truth for the training set was established:
- Not specified. As the sample size for the training set and the specific methods of AI/ML are not disclosed, the method for establishing ground truth for training data is also not provided.