Search Results
Found 4 results
510(k) Data Aggregation
Aline Ablation Intelligence (29 days)
Aline Ablation Intelligence is a Computed Tomography (CT) and Magnetic Resonance (MR) image processing software package available for use with ablation procedures.
Aline Ablation Intelligence is controlled by the user via a user interface on a workstation.
Aline Ablation Intelligence imports images from CT and MR scanners and facility PACS systems for display and processing during ablation procedures.
Aline Ablation Intelligence is used to assist physicians in planning ablation procedures, including identifying ablation targets and virtual ablation needle placement. Aline Ablation Intelligence is used to assist physicians in confirming ablation zones.
The software is not intended for diagnosis. The software is not intended to predict ablation volumes or predict ablation success.
Aline Ablation Intelligence 1.0.0 is a stand-alone desktop software application with tools and features designed to assist users in planning ablation procedures, as well as tools for evaluating the outcome of ablation procedures.
The use environment for Aline Ablation Intelligence is the Operating Room and the hospital healthcare environment such as interventional radiology control room.
Aline Ablation Intelligence has five distinct workflow steps:
- Data assignment
- Tumor segmentation
- Needle planning
- Ablation zone segmentation
- Treatment confirmation
Of these workflow steps, two (Tumor Segmentation and Needle Planning) make use of the planning image volume; they contain features and tools designed to support the planning of ablation procedures. Two others (Ablation Zone Segmentation and Treatment Confirmation) make use of the confirmation image volume; they contain features and tools designed to support the evaluation of the ablation procedure's outcome in the confirmation image volume.
Key features of the Aline Ablation Intelligence Software include:
- Workflow steps availability
- Manual and Automated tools for target tissue and ablation zone segmentation
- Overlaying and positioning virtual ablation needles and user-selected estimates of the ablation regions onto the medical images
- Multimodal image fusion and registration
- Computation of achieved margins and missed volumes to help the user assess the coverage of the target tissue by the ablation zone (illustrated in the sketch after this list)
- Data saving and secondary capture generation
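The document does not say how margins and missed volumes are computed, but the idea can be illustrated on binary masks: the missed volume is the part of the tumor not covered by the ablation zone, and the minimal margin is the closest approach of any tumor voxel to the ablation-zone boundary. The following is a minimal sketch under those assumptions; the function name and signed-distance formulation are illustrative, not Aline's actual implementation.

```python
import numpy as np
from scipy import ndimage

def margin_and_missed_volume(tumor, ablation, spacing_mm):
    """tumor, ablation: boolean 3-D masks on the same grid;
    spacing_mm: (z, y, x) voxel size in millimeters."""
    # Missed volume: tumor voxels not covered by the ablation zone.
    missed = tumor & ~ablation
    missed_cm3 = missed.sum() * np.prod(spacing_mm) / 1000.0  # mm^3 -> cm^3

    # Signed distance to the ablation-zone boundary (positive inside the zone).
    dist_in = ndimage.distance_transform_edt(ablation, sampling=spacing_mm)
    dist_out = ndimage.distance_transform_edt(~ablation, sampling=spacing_mm)
    signed = dist_in - dist_out

    # Minimal ablative margin: closest approach of the tumor to the boundary;
    # negative when part of the tumor lies outside the ablation zone.
    min_margin_mm = signed[tumor].min()
    return min_margin_mm, missed_cm3
```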
The software components provide functions for performing operations related to image display, manipulation, analysis, and quantification, including features designed to facilitate segmentation of the target tissues and ablation zones.
The software system runs on a dedicated workstation and is intended for the display and processing of Computed Tomography (CT) and/or Magnetic Resonance (MR) images, including contrast-enhanced images.
The system can be used on patient data for any patient demographic chosen to undergo the ablation treatment.
Aline Ablation Intelligence uses several algorithms to present information with which the user can evaluate the planned and post-ablation zones. These include (the post-processing step is sketched after this list):
- Segmentation post-processing
- Automatic ROI definition for Local Rigid Registration
- Measurement and Quantification
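The summary only names the post-processing behaviour (removing 2D holes and disconnected 3D regions), so the following is a minimal sketch of that behaviour with scipy, not Mirada's code; the function name and the choice to keep only the largest component are assumptions.

```python
import numpy as np
from scipy import ndimage

def postprocess_segmentation(mask):
    """mask: boolean 3-D array ordered (slice, row, col)."""
    # Fill 2-D holes independently on each slice.
    filled = np.stack([ndimage.binary_fill_holes(sl) for sl in mask])

    # Remove disconnected 3-D regions by keeping only the largest component.
    labels, n = ndimage.label(filled)
    if n > 1:
        sizes = np.bincount(labels.ravel())[1:]  # per-component voxel counts
        filled = labels == (np.argmax(sizes) + 1)
    return filled
```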
Here's a breakdown of the acceptance criteria and study information for the Mirada Medical Ltd. Aline Ablation Intelligence, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document does not explicitly list quantitative acceptance criteria in a table format that would typically be seen in a performance study. Instead, it describes various functionalities and their expected performance characteristics. However, we can infer some criteria and reported performance based on the "Performance" section:
Feature/Functionality | Acceptance Criteria (Inferred) | Reported Device Performance |
---|---|---|
Overall Device Performance | Meet user needs and requirements; substantially equivalent to predicate device; ensures safety and effectiveness. | "Aline Ablation Intelligence meets the user needs and requirements of the device, which are considered to be substantially equivalent to those of the listed predicate device." "Performance testing demonstrates that Aline Ablation Intelligence is substantially equivalent to, and performs at least as safely and effectively as, the listed predicate device. Aline Ablation Intelligence meets requirements for safety and effectiveness and does not introduce any new potential safety risks." |
Segmentation Tools | Provide manual and semi-automated segmentation; system post-processing (remove 2D-holes, disconnected 3D regions). | "Segmentation tools provided within Aline Ablation Intelligence 1.0.0 include manual and semi-automated segmentation, and system post-processing of segmentations to remove 2D-holes and/or disconnected 3D regions present." (Note: Clinical accuracy is user responsibility) |
Registration Tools | Provide automated local rigid registration within ROI; allow user assessment and manual modification. | "Registration tools provided within Aline Ablation Intelligence 1.0.0 include automated local rigid registration within a region of interest around user-segmentations of tumors and ablation zones. Final accuracy of registration is dependent on user assessment and manual modification of the registration prior to acceptance..." |
Linear Distance Measurements | Accurate given image resolution (see the sketch after this table). | "linear distance measures calculated by Aline Ablation Intelligence 1.0.0 are dependent on the image resolution; these are accurate to ¼ of a voxel width and are reported to 0.1mm precision." |
Volumetric Measurements | Accurate given image resolution; whole-voxel resolution; voxel inclusion/exclusion determined by voxel center. | "Volume calculations by Aline Ablation Intelligence 1.0.0 are dependent on the image resolution; these are at whole-voxel resolution and voxel inclusion/exclusion is determined by whether the voxel center is inside or outside the displayed contour. Volume is reported to 0.001cm3 precision." |
Human Factors | Intended to be used safely and effectively; adherence to IEC 62366-1:2015. | "Human factors testing has been performed in line with Applying Human Factors and Usability Engineering to Medical Devices, February 3, 2016 and IEC 62366-1:2015." "Intended to be used safely and effectively by trained physicians and a human factors engineering process has been undertaken, adhering to IEC 62366-1:2015." |
Image Visualization (General) | User satisfaction for accurate use of functions. | "It is the responsibility of the user to determine if the results of image visualization are satisfactory and allow the accurate use of the functions provided." |
Data Output (PACS/DICOM) | Key images can be saved to PACS or DICOM nodes. | "Key images can be acquired which may be saved back to PACS or any DICOM nodes." |
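The two measurement conventions quoted above (whole-voxel volumes decided by voxel centers, and sub-voxel linear distances) are concrete enough to illustrate. This is a sketch of the stated conventions only, not the vendor's code; it assumes a boolean mask whose voxels are exactly those whose centers fall inside the contour.

```python
import numpy as np

def contour_volume_cm3(mask, spacing_mm):
    """Whole-voxel volume: a voxel is counted iff its center lies inside the
    displayed contour, which is what a rasterized boolean mask encodes."""
    volume_cm3 = mask.sum() * float(np.prod(spacing_mm)) / 1000.0
    return round(volume_cm3, 3)             # reported to 0.001 cm^3

def linear_distance_mm(p, q, spacing_mm):
    """Linear distance between two voxel-index points on the image grid."""
    d = np.linalg.norm((np.asarray(p, float) - np.asarray(q, float)) * spacing_mm)
    return round(d, 1)                      # reported to 0.1 mm
```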
2. Sample Size Used for the Test Set and Data Provenance
The document states that "Performance testing (Bench) was performed, including on the following features, to ensure that performance and accuracy was as expected: Segmentation post-processing Testing, Automatic ROI definition for Local Rigid Registration Testing, Measurement and Quantification Testing."
However, it does not specify the sample size used for this test set nor the data provenance (e.g., country of origin, retrospective or prospective nature of the data).
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not specify the number of experts used to establish ground truth or their qualifications for the test set. It mentions that clinical accuracy of segmentation and registration are user responsibilities, implying that a formal expert-driven ground truth process for specific clinical metrics in a test set is not explicitly detailed at this level.
4. Adjudication Method for the Test Set
The document does not specify any adjudication method (e.g., 2+1, 3+1, none) for the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
An MRMC comparative effectiveness study was not explicitly mentioned or described in the provided text. The document focuses on the device's standalone performance and its substantial equivalence to a predicate device based on features and technical characteristics, rather than on a study evaluating human reader improvement with AI assistance.
6. Standalone (i.e., algorithm only without human-in-the-loop performance) Study
Yes, a standalone (bench) performance assessment of the algorithms' output was conducted. The "Performance" section describes several "Performance testing (Bench)" activities for specific algorithms:
- Segmentation post-processing Testing
- Automatic ROI definition for Local Rigid Registration Testing
- Measurement and Quantification Testing
While the software itself has human-in-the-loop components (user responsibilities for clinical accuracy of segmentation and registration, user determination of satisfactory visualization), the testing mentioned for these specific features (like accuracy of linear and volumetric measurements relative to image resolution) indicates an evaluation of the algorithm's core output under specific conditions.
7. Type of Ground Truth Used
The document implies that the ground truth for features like linear and volumetric measurements is based on the image resolution and voxel characteristics for the algorithms. For segmentation and registration, the "clinical accuracy... is the responsibility of the user," suggesting that the ground truth for those tasks is ultimately based on user assessment and manual modification when applying the tool. There is no mention of pathology, expert consensus, or outcomes data being independently used to establish ground truth for the performance testing.
8. Sample Size for the Training Set
The document does not specify the sample size used for any training set. It focuses on the validation and verification aspects of the device.
9. How the Ground Truth for the Training Set Was Established
The document does not specify how the ground truth for any potential training set was established. The focus is on the performance testing of the final device.
Simplicit90Y (132 days)
Simplicit90Y is a standalone software device that is used by trained medical professionals as a tool to aid in evaluation and information management of digital medical images.
Simplicit90Y supports the reading and display of a range of DICOM compliant imaging and related formats including but not limited to CT, PT, NM, SPECT, MR, SC, RTSS. Simplicit90Y enables the saving of sessions in a proprietary format as well as the export of formats including CSV and PDF files.
Simplicit90Y is indicated, as an accessory to TheraSphere®, to provide pre-treatment dosimetry planning support including Lung Shunt Fraction estimation (based on planar scintigraphy) and liver single-compartment MIRD schema dosimetry, in accordance with TheraSphere® labelling. Simplicit90Y provides tools to create, transform, and modify contours/Regions of Interest for calculation of Lung Shunt Fraction and Perfused Volume. Simplicit90Y includes features to aid in TheraSphere® dose vial selection, dose vial ordering and creation of customizable reports.
Simplicit90Y is indicated for post-treatment dosimetry and evaluation following Yttrium-90 (Y-90) microsphere treatment. Simplicit90Y provides tools to create, transform, and modify contours/Regions of Interest for the user to define objects in medical image volumes to support TheraSphere® post-Y-90 treatment calculation. The objects include, but are not limited to, tumors and normal tissues, and liver volumes.
Simplicit90Y is indicated for registration, fusion display and review of medical images, allowing medical professionals to incorporate images, such as CT, MRI, PET, CBCT and SPECT, in TheraSphere® Yttrium-90 (Y-90) microspheres pre-treatment planning and post-Y-90 treatment evaluation.
For post-Yttrium-90 (Y-90) treatment, Simplicit90Y should only be used for the retrospective determination of dose and should not be used to prospectively calculate dose or for the case where there is a need for retreatment using Y-90 microspheres.
Simplicit90Y is a software device which provides features and tools for use in pre-treatment dosimetry planning of TheraSphere® Y-90 microspheres treatment and post-treatment evaluation of Y-90 microspheres treatment, and operates on Windows computer systems.
Simplicit90Y pre-treatment dosimetry planning features include Lung Shunt Fraction estimation (based on planar scintigraphy) and liver single-compartment MIRD schema dosimetry in accordance with TheraSphere® labelling. Simplicit90Y additionally provides tools to support TheraSphere® dose vial selection and dose vial ordering, and the creation of customizable reports.
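Both quantities named here have well-known textbook forms, which the sketch below illustrates: LSF from the geometric mean of anterior/posterior planar counts, and a single-compartment MIRD dose using the commonly cited ~50 J per GBq for Y-90. The constant, the residual-fraction parameter, and the function names are assumptions for illustration only; actual calculations must follow TheraSphere® labelling.

```python
import math

def lung_shunt_fraction(lung_ant, lung_post, liver_ant, liver_post):
    """LSF from anterior/posterior planar counts via the geometric mean."""
    lung = math.sqrt(lung_ant * lung_post)
    liver = math.sqrt(liver_ant * liver_post)
    return lung / (lung + liver)

def mird_liver_dose_gy(activity_gbq, liver_mass_kg, lsf, residual_fraction=0.0):
    """Single-compartment MIRD estimate for Y-90 microspheres: absorbed dose =
    energy delivered to the compartment / compartment mass. 50 J/GBq is the
    commonly cited Y-90 value (assumption; defer to TheraSphere labelling)."""
    delivered_gbq = activity_gbq * (1.0 - lsf) * (1.0 - residual_fraction)
    return 50.0 * delivered_gbq / liver_mass_kg
```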
After administration of Y-90 microspheres, Simplicit90Y provides post-treatment dosimetry evaluation with multi-compartment and voxel-wise MIRD techniques applying the Local Deposition Model, with scaling for known injected activity, to assist the clinician in assessing treatment efficacy and quality assurance, including assessment of absorbed dose to structures such as liver, lung and tumors.
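The Local Deposition Model has a simple standard form: each voxel is assumed to absorb locally all the energy of the activity within it, with the post-treatment image scaled so that total activity equals the known injected activity. A hedged sketch of that form (the energy constant and tissue density are illustrative assumptions, not the device's values):

```python
import numpy as np

def ldm_dose_map_gy(counts, injected_gbq, spacing_mm, density_g_cm3=1.05):
    """Voxel-wise Local Deposition Model: image counts are scaled to the known
    injected activity and all emitted energy is deposited in the source voxel."""
    activity_gbq = injected_gbq * counts / counts.sum()
    voxel_mass_kg = np.prod(spacing_mm) / 1000.0 * density_g_cm3 / 1000.0
    return 50.0 * activity_gbq / voxel_mass_kg   # ~50 J/GBq for Y-90 (assumption)
```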
Simplicit90Y provides 2D and 3D image display and advanced dosimetry analysis tools, including isodose contour lines and dose volume histograms.
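A dose-volume histogram is straightforward to illustrate. The sketch below computes a cumulative DVH (fraction of an ROI receiving at least each dose level) from a dose map and a boolean mask; it is a generic illustration, not the vendor's implementation.

```python
import numpy as np

def cumulative_dvh(dose_gy, roi_mask, bin_gy=1.0):
    """Fraction of the ROI volume receiving at least each dose level."""
    d = dose_gy[roi_mask]
    levels = np.arange(0.0, d.max() + bin_gy, bin_gy)
    fraction = np.array([(d >= lv).mean() for lv in levels])
    return levels, fraction
```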
Simplicit90Y provides tools for multi-modal image fusion using rigid and deformable registration capable of manual, semi-automated and fully automated operation. Simplicit90Y includes evaluation tools for assessment of registration quality.
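The summary does not say which registration framework Simplicit90Y uses. As one generic illustration, rigid registration restricted to a region of interest can be set up in open-source SimpleITK by masking the similarity metric, which also matches the "local rigid registration" idea in the Aline summary above. Everything below is an assumption-laden sketch, not the device's algorithm.

```python
import SimpleITK as sitk

def local_rigid_register(fixed, moving, fixed_roi_mask):
    """Rigid registration driven only by voxels inside an ROI mask
    (mask must be an 8-bit label image on the fixed-image grid)."""
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricFixedMask(fixed_roi_mask)      # restrict the metric to the ROI
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(
        sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY),
        inPlace=False)
    return reg.Execute(fixed, moving)           # fitted rigid transform
```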
Simplicit90Y provides semi-automated and automated tools for segmentation of Region/Volume Of Interest on multi-modal images.
Simplicit90Y supports the reading, rendering and display of a range of DICOM compliant imaging and related formats including but not limited to CT, PT, NM, SC, RTSS. Simplicit90Y enables the saving of sessions in a proprietary format as well as the export of RTSS, CSV and PDF files.
Simplicit90Y should not be used to change a treatment plan after treatment has been delivered with Yttrium-90 (Y-90) microsphere implants.
The provided text does not contain detailed acceptance criteria or a specific study proving the device meets those criteria with numerical results. It focuses on regulatory approval based on substantial equivalence to a predicate device.
However, based on the information provided, here's a breakdown of what can be inferred and what is missing:
1. Table of Acceptance Criteria and Reported Device Performance
This information is not explicitly provided in the document. The text states: "The results of performance, functional and algorithmic testing demonstrate that Simplicit90Y meets the user needs and requirements of the device, which are demonstrated to be substantially equivalent to those of the listed predicate device." This is a general statement of compliance, not a table of specific criteria and corresponding performance metrics.
2. Sample Size Used for the Test Set and Data Provenance
This information is not explicitly provided. The document makes general statements about "performance, functional and algorithmic testing" but does not detail the size or nature of the test sets used.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not explicitly provided. The document mentions general validation and verification but does not detail the process of establishing ground truth with experts.
4. Adjudication Method for the Test Set
This information is not explicitly provided.
5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with vs. Without AI Assistance
This information is not explicitly provided. The document describes Simplicit90Y as a "standalone software device" and "a tool to aid in evaluation and information management of digital medical images," but it does not mention MRMC studies comparing human readers with and without AI assistance.
6. Whether a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Assessment Was Done
Yes, in the limited sense that the device is described as a "standalone software device" and the document implies that "performance, functional and algorithmic testing" was conducted on the device itself; however, no algorithm-only performance metrics are reported.
7. The Type of Ground Truth Used
This information is not explicitly provided. While the device aids in dosimetry planning and evaluation, the method for establishing "ground truth" for the testing mentioned generally is not detailed.
8. The Sample Size for the Training Set
This information is not explicitly provided. The document does not discuss the machine learning aspects of the software, and therefore, does not mention training sets.
9. How the Ground Truth for the Training Set was Established
This information is not explicitly provided.
Summary of what is present in the document:
The provided text is a 510(k) summary for the Simplicit90Y device. The core argument for its acceptance is substantial equivalence to an existing predicate device (MIM – Y90 Dosimetry, K172218) and several reference devices (Mirada RT, Mirada RTx, Mirada XD).
The study that proves the device meets (implicit) acceptance criteria is described as:
- Testing: "Simplicit90Y is validated and verified against its user needs and intended use by the successful execution of planned performance, functional and algorithmic testing included in this submission. The results of performance, functional and algorithmic testing demonstrate that Simplicit90Y meets the user needs and requirements of the device, which are demonstrated to be substantially equivalent to those of the listed predicate device."
- Compliance: "Verification and Validation for Simplicit90Y has been carried out in compliance with the requirements of CFR 21 Part 820 and in adherence to the DICOM standard."
- Conclusion: "In conclusion, performance testing demonstrates that Simplicit90Y is substantially equivalent to, and performs at least as safely and effectively as the listed predicate device. Simplicit90Y meets requirements for safety and effectiveness and does not introduce any new potential safety risks."
Essentially, the "study" is a set of "performance, functional and algorithmic testing" designed to show that Simplicit90Y performs similarly to its predicate device for its intended use, particularly for:
- Pre-treatment dosimetry planning (Lung Shunt Fraction, liver MIRD schema)
- Post-treatment dosimetry and evaluation (multi-compartment and voxel-wise MIRD techniques)
- Image registration, fusion, and review
- Contouring/Segmentation for regions of interest.
The document highlights that the proposed device has a narrower scope of indications compared to the predicate, focusing specifically on Y-90 dosimetry. This narrower scope, coupled with similar technology, is used to argue that it does not raise new safety or effectiveness issues.
Multi Modality Viewer (21 days)
Multi Modality Viewer is an option within Vitrea that allows the examination of a series of medical images obtained from MRI, CT, CR, DX, RG, RF, US, XA, PET, and PET/CT scanners. The option also enables clinicians to compare multiple series for the same patient, side-by-side, and switch to other integrated applications to further examine the data.
Multi Modality Viewer is an option within Vitrea that allows the examination and manipulation of a series of medical images obtained from MRI, CT, CR, DX, RG, RF, US, XA, PET, and PET/CT scanners. The option also enables clinicians to compare multiple series for the same patient, side-by-side, and switch to other integrated applications to further examine the data.
The Multi Modality Viewer provides an overview of the study, facilitates side-by-side comparison including priors, allows reformatting of image data, enables clinicians to record evidence and return to previous evidence, and provides easy access to other Vitrea applications for further analysis.
Here's a breakdown of the acceptance criteria and study information for the Multi Modality Viewer, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of numerical "acceptance criteria" for performance metrics in the typical sense (e.g., sensitivity, specificity, accuracy thresholds). Instead, it focuses on functional capabilities and states that verification and validation testing confirmed the software functions according to requirements, that "no negative feedback was received," and that "Multi Modality Viewer was rated as equal to or better than the reference devices."
The acceptance is primarily based on establishing substantial equivalence to predicate and reference devices, demonstrating that the new features function as intended and do not raise new questions of safety or effectiveness.
Feature/Criterion | Acceptance Standard (Implied) | Reported Device Performance/Conclusion |
---|---|---|
Overall Safety & Effectiveness | Safe and effective for its intended use, comparable to predicate and reference devices. | Clinical validations demonstrated clinical safety and effectiveness. |
Functional Equivalence | New features operate according to defined requirements and functions similarly to or better than features in reference devices. | Verification testing confirmed software functions according to requirements. External validation evaluators confirmed sufficiency of software to read images and rated it "equal to or better than" reference devices. |
No Negative Feedback | No negative feedback from clinical evaluators regarding functionality or image quality of new features. | "No negative feedback received from the evaluators." |
Substantial Equivalence | Device is substantially equivalent to predicate and reference devices regarding intended use, clinical effectiveness, and safety. | "This validation demonstrates substantial equivalence between Multi Modality Viewer and its predicate and reference devices with regards to intended use, clinical effectiveness and safety." |
Risk Management | All risks reduced as low as possible; overall residual risk acceptable; benefits outweigh risks. | "All risks have been reduced as low as possible. The overall residual risk for the software product is deemed acceptable. The medical benefits of the device outweigh the residual risk..." |
Software Verification (Internal) | Software fully satisfies all expected system requirements and features; all risk mitigations function properly. | "Verification testing confirmed the software functions according to its requirements and all risk mitigations are functioning properly." |
Software Validation (Internal) | Software conforms to user needs and intended use; system requirements and features implemented properly. | "Workflow testing... provided evidence that the system requirements and features were implemented properly to conform to the intended use." |
Cybersecurity | Follows FDA guidance for cybersecurity in medical devices, including hazard analysis, mitigations, controls, and update plan. | Follows internal documentation based on FDA Guidance: "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices." |
Compliance with Standards | Complies with relevant voluntary consensus standards (DICOM, ISO 14971, IEC 62304). | The device "complies with the following voluntary recognized consensus standards" (DICOM, ISO 14971, IEC 62304 listed). |
New features don't raise new safety/effectiveness questions | New features are similar enough to existing cleared features in predicate/reference devices that they don't introduce new concerns. | For each new feature (Full volume MIP, Volume image rendering, 3D Cutting Tool, Clip Plane Box, bone/base segmentation tools, 1 Click Visible Seed, Automatic table segmentation, Automatic bone segmentation, US 2D Cine Viewer, Automatic Rigid Registration), the document states that the added feature "does not raise different questions of safety and effectiveness" due to similarity with a cleared reference device. |
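Of the new features listed in the last row, full-volume MIP is simple enough to illustrate: a maximum intensity projection takes the per-ray maximum, which along an image axis reduces to an array maximum. A toy sketch (function names and the thin-slab variant are illustrative, not Vital Images' code):

```python
import numpy as np

def full_volume_mip(volume, axis=0):
    """Maximum intensity projection over the entire volume along one axis."""
    return volume.max(axis=axis)

def slab_mip(volume, center, thickness, axis=0):
    """MIP over a thin slab of slices centered on one slice index."""
    lo = max(center - thickness // 2, 0)
    hi = min(center + thickness // 2 + 1, volume.shape[axis])
    return np.take(volume, range(lo, hi), axis=axis).max(axis=axis)
```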
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document repeatedly mentions "anonymized datasets" but does not specify the number of cases or images used in the external validation studies.
- Data Provenance: The data used for the external validation studies were "anonymized datasets." The country of origin is not explicitly stated, but the evaluators were from "three different clinical locations." Given that Vital Images, Inc. is located in Minnetonka, MN, USA, it is highly probable that the data and clinical locations are from the United States. The studies were likely retrospective, as they involved reviewing "anonymized datasets" rather than ongoing patient enrollment.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: Three evaluators.
- Qualifications of Experts: The evaluators were "from three different clinical locations" and are described as "experienced professionals" in the context of simulated usability testing and clinical review. Their specific medical qualifications (e.g., radiologist, specific years of experience) are not explicitly detailed in the provided text.
4. Adjudication Method for the Test Set
The document does not describe an explicit "adjudication method" for establishing ground truth or resolving discrepancies between experts in the traditional sense. The phrase "no negative feedback received from the evaluators" and "Multi Modality Viewer was rated as equal to or better than the reference devices" suggests a consensus or individual evaluation model, but not a specific adjudication protocol like 2+1 or 3+1. It appears the evaluations focused on confirming functionality and subjective quality rather than comparing against a pre-established ground truth for a diagnostic task.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- No, an MRMC comparative effectiveness study was not explicitly stated to have been done in the context of measuring improvement with AI vs. without AI assistance.
- The "Substantial Equivalence Validation" involved three evaluators comparing the subject device against its predicate and reference devices. However, this comparison focused on functionality and image quality and aimed to show the equivalence or non-inferiority of the new device and its features, rather than quantifying performance gains due to AI assistance in human readers. The new features mentioned (like automatic segmentation or rigid registration) are components that might assist, but the study design wasn't an MRMC to measure the effect size of this assistance on human performance.
6. If a Standalone (Algorithm-Only, Without Human-in-the-Loop) Performance Assessment Was Done
- The document describes "software verification testing" which confirms "the software functions according to its requirements." This implies a form of standalone testing for the algorithms and features. For example, "Automatic table segmentation" and "Automatic bone segmentation" are algorithms, and their functionality would have been tested independently.
- However, no specific performance metrics (e.g., accuracy, precision) for these algorithms in a standalone capacity are provided from these tests. The external validation was a human-in-the-loop setting where evaluators used the software.
7. The Type of Ground Truth Used
The external validation involved "clinical review of anonymized datasets" where evaluators assessed "functionality and image quality." For new features like segmentation or registration, the "ground truth" would likely be based on the expert consensus or judgment of the evaluators during their review of the anonymized datasets, confirming if the segmentation was accurate or if the registration was correct and useful. There is no mention of pathology, direct clinical outcomes data, or a separate "ground truth" panel.
8. The Sample Size for the Training Set
The document does not specify the sample size for the training set. It details verification and validation steps for the software but does not provide information about the development or training of any AI/ML components within the software. While features like "Automatic table segmentation" and "Automatic bone segmentation" may involve machine learning, the document does not elaborate on their training data.
9. How the Ground Truth for the Training Set Was Established
Since the document does not specify the training set or imply explicit AI/ML development in the detail often seen for deep learning algorithms, it does not describe how the ground truth for the training set was established.
Keosys Medical Imaging Suite (KSWVWR) (134 days)
Keosys Medical Imaging Suite (KSWVWR) is intended to be used by trained medical professionals including, but not limited to, radiologists, nuclear medicine physicians and physicists.
Keosys Medical Imaging Suite is a software application intended to aid in the diagnosis and evaluation of medical image data. Although this device allows the visualization of mammography images, it is not intended as a tool for primary diagnosis in mammography.
Keosys Medical Imaging Suite can be used to display, process, temporarily store, and print 2D and 3D multimodal DICOM medical image data, and to create and print reports from it. The imaging data can be Computed Tomography (CT), Magnetic Resonance (MR), Radiography X (CR, DX, XRF, MG), Nuclear Medicine (NM) including planar imaging (Static, Whole body, Dynamic, Gated) and tomographic imaging (SPECT, Gated SPECT), Positron Emission Tomography (PT), Ultrasound (US).
Keosys Medical Imaging Suite provides tools like rulers, markers, or regions of interest (e.g., it can be used in an oncology clinical workflow for tumor burden assessment or therapeutic response evaluation; a simplified example follows).
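The document does not say which response criteria these tools support. As one common example, therapeutic response is often scored RECIST 1.1-style from the change in the sum of lesions' longest diameters; the sketch below is a deliberately simplified illustration of that idea (it omits complete response, nadir tracking, and new-lesion rules, and is not Keosys' implementation).

```python
def response_category(baseline_diams_mm, followup_diams_mm):
    """Simplified RECIST 1.1-style call from sums of longest diameters (mm)."""
    sld0, sld1 = sum(baseline_diams_mm), sum(followup_diams_mm)
    change_pct = 100.0 * (sld1 - sld0) / sld0
    if change_pct <= -30.0:
        return change_pct, "partial response"
    if change_pct >= 20.0 and (sld1 - sld0) >= 5.0:  # 20% and >= 5 mm absolute
        return change_pct, "progressive disease"
    return change_pct, "stable disease"
```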
It is the user's responsibility to check that the ambient luminosity conditions, the image compression ratio, and the interpretation monitor specifications are consistent with clinical diagnostic use of the data.
Keosys' Advanced Medical Imaging Software Suite (aka Viewer, aka KSWVWR) is a multimodality diagnostic workstation for visualization and 3D post-processing of Radiological and Nuclear Medicine images. It includes dedicated applications for the therapeutic response evaluation process in a multi-vendor, multi-modal and multi-time-point context. The solution also implements the latest recommendations for SUV calculation.
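SUV itself has a standard body-weight form: tissue activity concentration divided by decay-corrected injected activity per gram of body weight. A minimal sketch of that form, assuming an F-18 tracer (half-life 109.77 min) and body-weight normalization; which specific recommendations Keosys implements is not stated in the document.

```python
import math

def suv_bw(activity_bq_ml, injected_bq, weight_kg,
           minutes_post_injection, half_life_min=109.77):
    """Body-weight SUV; 109.77 min is the F-18 half-life (assumed tracer)."""
    decayed_bq = injected_bq * math.exp(
        -math.log(2.0) * minutes_post_injection / half_life_min)
    return activity_bq_ml / (decayed_bq / (weight_kg * 1000.0))
```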
Here's an analysis of the provided text to extract the acceptance criteria and details about the study, as requested.
Note: The provided document is a 510(k) summary for the "Advanced Medical Imaging Software Suite (KSWVWR)". It outlines the device's indications for use and compares it to predicate devices. However, it does not contain detailed acceptance criteria, specific study results, or information about sample sizes, ground truth establishment methods, or expert qualifications for a performance study. The document primarily focuses on demonstrating substantial equivalence to previously cleared devices rather than providing a standalone performance study report. Therefore, many of your requested points cannot be directly addressed from this text.
1. Table of Acceptance Criteria and Reported Device Performance
As noted, the document does not explicitly state quantitative acceptance criteria or detailed reported device performance in a study. The focus is on functionality and equivalence.
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Ability to display, process, temporarily store, print, and create reports from 2D and 3D multimodal DICOM medical image data. | Stated as a core function and intention of the device. |
Support for various imaging modalities (CT, MR, Radiography X, NM, PET, US). | Stated as compatible with these modalities. |
Provision of tools like rulers, markers, or regions of interest. | Stated as a feature (e.g., for tumor burden assessment). |
Software functionality and performance as described in specifications. | "Performance and functional testing are an integral part of Keosys's software development process." (No specific results provided). |
Substantial equivalence to predicate devices regarding intended use, diagnostic aid, display/manipulation/fuse tools, and multi-modality support. | Claimed substantial equivalence based on a comparison of technical characteristics. |
2. Sample size used for the test set and the data provenance
The document does not provide details on a specific "test set" with sample sizes or data provenance (e.g., country of origin, retrospective/prospective) for a performance study. The testing mentioned is part of the general software development process.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not provided in the document.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
This information is not provided in the document.
5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with vs. Without AI Assistance
The document does not describe an MRMC comparative effectiveness study where human readers used the AI. The device is a "viewer" and "software application intended to aid in diagnostic and evaluation." It provides tools but is not explicitly an "AI" in the sense of providing automated diagnostic suggestions or classifications to be compared with human performance with/without its assistance. It enables the display and manipulation of images and provides measurement tools.
6. Whether a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Assessment Was Done
The document describes the device as a "software application intended to aid in diagnostic and evaluation of medical image data" and that it "provides tools like rulers, markers or region of interests." It is a diagnostic workstation for visualization and 3D post-processing, and its use is by "trained medical professionals." This implies a human-in-the-loop device. There is no mention of a standalone algorithm performance study.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
This information is not provided in the document.
8. The sample size for the training set
The document does not describe any machine learning or AI components that would historically require a "training set" in the context of supervised learning, nor does it mention a sample size for such.
9. How the ground truth for the training set was established
Not applicable, as no training set is described.