Search Results
Found 5 results
510(k) Data Aggregation
(74 days)
Ceevra Reveal 3+ is intended as a medical imaging system that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired from CT or MR imaging devices and that such processing may include the generation of preliminary segmentations of normal anatomy using software that employs machine learning and other computer vision algorithms. It is also intended as software for preoperative surgical planning, and as software for the intraoperative display of the aforementioned multi-dimensional digital images. Ceevra Reveal 3+ is designed for use by health care professionals and is intended to assist the clinician who is responsible for making all final patient management decisions.
The machine learning algorithms in use by Ceevra Reveal 3+ are for use only for adult patients (22 and over). Three-dimensional images for patients under the age of 22 or of unknown age will be generated without the use of any machine learning algorithms.
Ceevra Reveal 3+, as modified, ("Modified Reveal 3+"), manufactured by Ceevra, Inc. (the "Company"), is a software as a medical device with two main functions: (1) it is used by Company personnel to generate three-dimensional (3D) images from existing patient CT and MR imaging, and (2) it is used by clinicians to view and interact with the 3D images during preoperative planning and intraoperatively.
Clinicians view 3D images via the Mobile Image Viewer software application which runs on compatible mobile devices, and the Desktop Image Viewer software application which runs on compatible computers. The 3D images may also be displayed on compatible external displays, or in virtual reality (VR) format with a compatible off-the-shelf VR headset.
Modified Reveal 3+ includes features that enable clinicians to interact with the 3D images including rotating, zooming, panning, selectively showing or hiding individual anatomical structures, and viewing measurements of or between anatomical structures.
Here's a breakdown of the acceptance criteria and study details for the Ceevra Reveal 3+ device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implied by the reported performance metrics. The study evaluated the accuracy of segmentations generated by the machine learning models using two metrics: the Sørensen-Dice coefficient (DSC) for volume-overlap accuracy and the Hausdorff distance at the 95th percentile (HD-95) for surface-distance accuracy. A computational sketch of both metrics follows the table.
| Anatomical Structure | Imaging Modality | Metric | Reported Device Performance |
|---|---|---|---|
| Prostate | MR prostate imaging | DSC | 0.90 |
| Bladder | MR prostate imaging | DSC | 0.93 |
| Neurovascular bundles | MR prostate imaging | HD-95 | 6.6 mm |
| Kidney | CT abdomen imaging | DSC | 0.92 |
| Kidney | MR abdomen imaging | DSC | 0.89 |
| Artery | CT abdomen imaging | DSC | 0.90 |
| Artery | MR abdomen imaging | DSC | 0.87 |
| Vein | CT abdomen imaging | DSC | 0.88 |
| Vein | MR abdomen imaging | DSC | 0.82 |
| Pulmonary artery | CT chest imaging | DSC | 0.82 |
| Pulmonary vein | CT chest imaging | DSC | 0.83 |
| Airways | CT chest imaging | DSC | 0.82 |
| Bronchopulmonary segments | CT chest imaging | DSC | 0.86 |
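For readers unfamiliar with these metrics, below is a minimal computational sketch of DSC and HD-95, assuming binary segmentation masks stored as 3-D NumPy arrays on a common voxel grid. The function names and the foreground-voxel simplification are illustrative assumptions, not Ceevra's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def dice_coefficient(pred, truth):
    """Sørensen-Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def hd95(pred, truth, spacing=(1.0, 1.0, 1.0)):
    """Symmetric 95th-percentile Hausdorff distance in mm.

    For simplicity every foreground voxel is treated as a surface sample;
    a production implementation would extract boundary voxels first.
    """
    p = np.argwhere(pred) * np.asarray(spacing)   # predicted points, in mm
    t = np.argwhere(truth) * np.asarray(spacing)  # reference points, in mm
    d_pt = cKDTree(t).query(p)[0]  # nearest-reference distance per predicted point
    d_tp = cKDTree(p).query(t)[0]  # nearest-predicted distance per reference point
    return max(np.percentile(d_pt, 95), np.percentile(d_tp, 95))
```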
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: A total of 133 imaging studies were used to evaluate the device.
- Data Provenance: The text does not explicitly state the country of origin, but it notes that "Ethnicity of patients in the datasets was reasonably correlated to the overall US population," implying the data are from, or at least representative of, the United States. The data were retrospective and sourced from multiple scanning institutions, with independence of training and testing data enforced at the institution level (a grouped-split sketch follows this list).
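Enforcing independence at the institution level amounts to a grouped train/test split. Below is a hedged sketch using scikit-learn's GroupShuffleSplit; the DataFrame and its columns are illustrative placeholders, not the submission's actual data handling.

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical study table; "institution" is the grouping key.
studies = pd.DataFrame({
    "study_id":    [f"s{i}" for i in range(8)],
    "institution": ["A", "A", "B", "B", "C", "C", "D", "D"],
})

splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(studies, groups=studies["institution"]))

# No institution contributes studies to both sides of the split.
assert not (set(studies.loc[train_idx, "institution"])
            & set(studies.loc[test_idx, "institution"]))
```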
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The text states: "Performance was verified by comparing segmentations generated by the machine learning models against segmentations generated by medical professionals from the same imaging study."
The specific number of experts is not mentioned.
Their qualifications are broadly described as "medical professionals," without further detail on their experience level or subspecialty (e.g., radiologist with X years of experience).
4. Adjudication Method for the Test Set
The text implies a direct comparison between the AI's segmentation and the "medical professionals'" segmentation. It does not specify an adjudication method (e.g., 2+1, 3+1 consensus with multiple readers) for establishing the ground truth if there were discrepancies among medical professionals. It simply states "segmentations generated by medical professionals." This might imply a single expert's ground truth, or a pre-established consensus for each case, but no specific method is detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No. An MRMC comparative effectiveness study is not described. The study evaluated the standalone performance of the AI model against human-generated ground truth; there is no comparison of human readers with AI assistance versus without, and therefore no effect size for reader improvement is provided.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance evaluation was done. The study specifically verified the performance of the "machine learning models" by comparing their generated segmentations directly against ground truth established by medical professionals.
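Conceptually, a standalone evaluation scores the algorithm's output directly against the expert reference, with no human in the loop. A minimal sketch under assumed interfaces (the model.predict call and the study fields are hypothetical; the metric argument could be the dice_coefficient or hd95 sketch above):

```python
import numpy as np

def evaluate_standalone(test_studies, model, metric):
    """Score each algorithm-generated segmentation against its expert reference."""
    scores = [metric(model.predict(study["image"]), study["expert_mask"])
              for study in test_studies]
    return float(np.mean(scores))
```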
7. The Type of Ground Truth Used
The ground truth used was expert consensus / expert-generated segmentations. The text states it was established by "segmentations generated by medical professionals."
8. The Sample Size for the Training Set
The document does not provide the exact sample size for the training set. It only states that "No imaging study used to verify performance was used for training; independence of training and testing data were enforced at the level of the scanning institution, namely, studies sourced from a specific institution were used for either training or testing but could not be used for both." It also mentions that "The data used in the device validation ensured diversity in patient population and scanner manufacturers."
9. How the Ground Truth for the Training Set Was Established
The document does not explicitly state how the ground truth for the training set was established. However, given that the evaluation for the test set used segmentations generated by "medical professionals," it is highly probable that the ground truth for the training set was established in a similar manner, likely through manual segmentation by medical experts.
(29 days)
Ceevra Reveal 3+ is intended as a medical imaging system that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired from CT or MR imaging devices and that such processing may include the generation of preliminary segmentations of normal anatomy using software that employs machine learning and other computer vision algorithms. It is also intended as software for preoperative surgical planning, and as software for the intraoperative display of the aforementioned multi-dimensional digital images. Ceevra Reveal 3+ is designed for use by health care professionals and is intended to assist the clinician who is responsible for making all final patient management decisions.
The machine learning algorithms in use by Ceevra Reveal 3+ are for use only for adult patients (22 and over). Three-dimensional images for patients under the age of 22 or of unknown age will be generated without the use of any machine learning algorithms.
Ceevra Reveal 3+ ("Reveal 3+"), manufactured by Ceevra, Inc. (the "Company"), is a software as a medical device with two main functions: (1) it is used by Company personnel to generate three-dimensional (3D) images from existing patient CT and MR imaging, and (2) it is used by clinicians to view and interact with the 3D images during preoperative planning and intraoperatively.
Clinicians view 3D images via the Reveal 3+ Mobile Image Viewer software application which runs on compatible mobile devices, and the Reveal 3+ Desktop Image Viewer software application which runs on compatible computers. The 3D images may also be displayed on compatible external displays, or in virtual reality (VR) format with a compatible off-the-shelf VR headset.
Reveal 3+ includes features that enable clinicians to interact with the 3D images including rotating, zooming, panning, selectively showing or hiding individual anatomical structures, and viewing measurements of or between anatomical structures.
Here's a detailed breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Acceptance Criteria and Device Performance
| Acceptance Criteria (Metric) | Reported Device Performance |
|---|---|
| Machine Learning Model Performance | |
| Prostate (MR prostate imaging) | 0.87 Sørensen-Dice coefficient (DSC) |
| Bladder (MR prostate imaging) | 0.90 Sørensen-Dice coefficient (DSC) |
| Neurovascular bundles (MR prostate imaging) | 7.8 mm Hausdorff distance metric at the 95th percentile (HD-95) |
| Kidney (CT abdomen imaging) | 0.89 Sørensen-Dice coefficient (DSC) |
| Kidney (MR abdomen imaging) | 0.87 Sørensen-Dice coefficient (DSC) |
| Artery (CT abdomen imaging) | 0.87 Sørensen-Dice coefficient (DSC) |
| Artery (MR abdomen imaging) | 0.83 Sørensen-Dice coefficient (DSC) |
| Vein (CT abdomen imaging) | 0.86 Sørensen-Dice coefficient (DSC) |
| Vein (MR abdomen imaging) | 0.81 Sørensen-Dice coefficient (DSC) |
| Artery (CT chest imaging) | 0.85 Sørensen-Dice coefficient (DSC) |
| Vein (CT chest imaging) | 0.81 Sørensen-Dice coefficient (DSC) |
| Measurement Features Accuracy | All three types of measurements (volumes of structures, diameter of structure, distance between two points) produced by Ceevra Reveal 3+ were verified to be accurate within a mean difference of +/- 10%. |
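The measurement-accuracy criterion in the last row is a bound on the mean percent difference between device measurements and reference measurements. A hedged sketch of such a check, with placeholder values rather than data from the submission:

```python
import numpy as np

def mean_percent_difference(measured, reference):
    """Mean signed difference between measured and reference values, in percent."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.mean((measured - reference) / reference) * 100.0)

# Placeholder example: structure volumes in mL.
measured = [101.0, 97.5, 104.2]
reference = [100.0, 100.0, 100.0]
assert abs(mean_percent_difference(measured, reference)) <= 10.0
```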
Study Details:
2. Sample size used for the test set and the data provenance:
- Sample Size: A total of 141 imaging studies were used to evaluate the device's machine learning models.
- Data Provenance: The studies were actual CT or MR imaging studies of patients. No dataset contained more than one imaging study from any particular patient. The data ensured diversity in patient population and scanner manufacturers. Subgroup analysis was performed for patient age, patient sex, and scanner manufacturers.
- Patient Demographics: Non-prostate datasets comprised 40% female and 60% male patients. Across all datasets, 32% of patients were under 60 years old, 32% were 60 to 70 years old, 30% were over 70 years old, and 6% were of unknown age.
- Scanner Manufacturers: Included GE Medical Systems, Siemens, Toshiba, and Philips Medical Systems.
- Ethnicity: Reasonably correlated to the overall US population.
- Retrospective/Prospective: The text does not explicitly state whether the data was retrospective or prospective, but it refers to "existing patient CT and MR imaging" and "datasets of actual CT or MR imaging studies of patients," which typically implies retrospective use of previously acquired data.
- Country of Origin: Not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: The text states "segmentations generated by medical professionals," but does not explicitly quantify the number of individual experts or medical professionals involved in creating the ground truth for the test set.
- Qualifications of Experts: The experts are broadly described as "medical professionals." No further specific qualifications (e.g., years of experience, subspecialty) are provided.
4. Adjudication method for the test set:
- Adjudication Method: The text does not specify an adjudication method like "2+1" or "3+1." It only states that performance was verified by comparing model-generated segmentations against segmentations generated by medical professionals. This implies a direct comparison rather than a specific multi-expert adjudication workflow.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- MRMC Study: No, an MRMC comparative effectiveness study involving human readers with and without AI assistance was not explicitly described or reported in the provided text. The study focused on the standalone performance of the machine learning models.
- Effect Size: Not applicable, as no MRMC study was described.
6. If a standalone (i.e., algorithm-only without human-in-the-loop performance) was done:
- Standalone Performance: Yes, the described study evaluates the standalone performance of the machine learning algorithms. The performance metrics (DSC, HD-95) directly assess how well the algorithms' segmentations compare to the ground truth established by medical professionals.
7. The type of ground truth used:
- Ground Truth Type: Expert consensus/segmentation. The ground truth was established by "segmentations generated by medical professionals from the same imaging study."
8. The sample size for the training set:
- Training Set Sample Size: The text states, "No imaging study used to verify performance was used for training; independence of training and testing data were enforced at the level of the scanning institution, namely, studies sourced from a specific institution were used for either training or testing but could not be used for both." However, the specific sample size of the training set is not provided. It is only implied that it was distinct from the 141-study test set.
9. How the ground truth for the training set was established:
- Training Set Ground Truth: The text does not explicitly detail how the ground truth for the training set was established. It only emphasizes the independence of training and testing data and that the test set's ground truth was created by "medical professionals." It is reasonable to infer that the training set ground truth was similarly established by medical professionals, consistent with standard machine learning practices for supervised learning in medical imaging, but this is not explicitly stated.
(231 days)
Ceevra Reveal 3 is intended as a medical imaging system that allows the processing, review, and media interchange of multi-dimensional digital images acquired from CT or MR imaging devices and that such processing may include the generation of preliminary segmentations of normal anatomy using software that employs machine learning and other computer vision algorithms. It is also intended as software for preoperative surgical planning, and as software for the intraoperative display of the aforementioned multi-dimensional digital images. Ceevra Reveal 3 is designed for use by health care professionals and is intended to assist the clinician who is responsible for making all final patient management decisions.
The machine learning algorithms in use by Ceevra Reveal 3 are for use only for adult patients (22 and over). Three-dimensional images for patients under the age of 22 or of unknown age will be generated without the use of any machine learning algorithms.
Ceevra Reveal 3 ("Reveal 3"), manufactured by Ceevra, Inc. (the "Company"), is a software as a medical device with two main functions: (1) it is used by Company personnel to generate three-dimensional (3D) images from existing patient CT and MR imaging, and (2) it is used by clinicians to view and interact with the 3D images during preoperative planning and intraoperatively.
Clinicians view 3D images via the Reveal 3 Mobile Image Viewer software application which runs on compatible mobile devices, and the Reveal 3 Desktop Image Viewer software application which runs on compatible computers. The 3D images may also be displayed on compatible external displays, or in virtual reality (VR) format with a compatible off-the-shelf VR headset.
Reveal 3 includes additional features that enable clinicians to interact with the 3D images including rotating, zooming, panning, and selectively showing or hiding individual anatomical structures.
Here's a breakdown of the acceptance criteria and study details for the Ceevra Reveal 3, based on the provided FDA 510(k) summary:
1. Acceptance Criteria and Device Performance
| Acceptance Criteria (Metric) | Reported Device Performance |
|---|---|
| Prostate (from MR prostate imaging) | 0.87 Sørensen-Dice coefficient (DSC) |
| Bladder (from MR prostate imaging) | 0.90 Sørensen-Dice coefficient (DSC) |
| Neurovascular bundles (from MR prostate imaging) | 7.8 mm Hausdorff distance at 95th percentile (HD-95) |
| Kidney (from CT abdomen imaging) | 0.89 Sørensen-Dice coefficient (DSC) |
| Kidney (from MR abdomen imaging) | 0.87 Sørensen-Dice coefficient (DSC) |
| Artery (from CT abdomen imaging) | 0.87 Sørensen-Dice coefficient (DSC) |
| Artery (from MR abdomen imaging) | 0.83 Sørensen-Dice coefficient (DSC) |
| Vein (from CT abdomen imaging) | 0.86 Sørensen-Dice coefficient (DSC) |
| Vein (from MR abdomen imaging) | 0.81 Sørensen-Dice coefficient (DSC) |
| Artery (from CT chest imaging) | 0.85 Sørensen-Dice coefficient (DSC) |
| Vein (from CT chest imaging) | 0.81 Sørensen-Dice coefficient (DSC) |
Note: The document states that "Performance was verified by comparing segmentations generated by the machine learning models against segmentations generated by medical professionals from the same imaging study." However, it does not state explicit numerical acceptance thresholds (e.g., "DSC must be ≥ 0.85"); only the achieved performance values are reported. The table above therefore treats the reported values themselves as the basis for demonstrating compliance.
Study Details:
2. Sample size used for the test set and the data provenance:
- Sample Size: 141 imaging studies.
- Data Provenance: Actual CT or MR imaging studies of patients.
- No dataset contained more than one imaging study from any particular patient.
- Independence of training and testing data was enforced at the level of the scanning institution (studies from a specific institution were used for either training or testing but not both).
- Diversity in patient population was ensured across patient age, patient sex, and scanner manufacturers.
- Subgroup analysis was performed for patient age, patient sex, and scanner manufacturer (see the sketch after this list).
- Non-prostate related datasets: 40% female, 60% male.
- Across all datasets by age: 32% under 60, 32% 60-70, 30% over 70, 6% unknown age.
- Scanner manufacturers included GE Medical Systems, Siemens, Toshiba, and Philips Medical Systems.
- Ethnicity of patients was generally correlated to the overall US population.
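Subgroup analysis of this kind amounts to stratifying per-study scores by a demographic or scanner variable. An illustrative sketch with hypothetical Dice scores (none of these values come from the submission):

```python
import pandas as pd

# Hypothetical per-study results.
results = pd.DataFrame({
    "dsc":          [0.91, 0.88, 0.90, 0.86, 0.89, 0.92],
    "sex":          ["F", "M", "F", "M", "M", "F"],
    "age_band":     ["<60", "60-70", ">70", "<60", "60-70", ">70"],
    "manufacturer": ["GE", "Siemens", "Toshiba", "Philips", "GE", "Siemens"],
})

for column in ("sex", "age_band", "manufacturer"):
    print(results.groupby(column)["dsc"].agg(["mean", "count"]))
```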
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The document states "segmentations generated by medical professionals." It does not specify the number of medical professionals or their specific qualifications (e.g., radiologist with X years of experience).
4. Adjudication method for the test set:
- The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1) for resolving disagreements among medical professionals if multiple experts were used to create the ground truth. It simply states "segmentations generated by medical professionals."
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- No, an MRMC comparative effectiveness study comparing human readers with and without AI assistance was not mentioned or described. The study focused on the performance of the machine learning models in comparison to ground truth established by medical professionals.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Yes, performance was verified by comparing the segmentations generated by the machine learning models against the ground truth. This indicates a standalone performance evaluation of the algorithm.
7. The type of ground truth used:
- Expert consensus/manual segmentation by medical professionals. The document states: "Performance was verified by comparing segmentations generated by the machine learning models against segmentations generated by medical professionals from the same imaging study."
8. The sample size for the training set:
- The exact sample size for the training set is not explicitly stated. It only mentions that "No imaging study used to verify performance was used for training; independence of training and testing data were enforced at the level of the scanning institution, namely, studies sourced from a specific institution were used for either training or testing but could not be used for both."
9. How the ground truth for the training set was established:
- The document does not explicitly detail how the ground truth for the training set was established. However, given that the ground truth for the test set was established by "medical professionals," it is highly probable that the training set also used ground truth established by medical professionals or similar expert annotations.
(271 days)
Ceevra Reveal 2.0 is intended as a medical imaging system that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired from CT or MR imaging devices. It is also intended as software for preoperative surgical planning, and as software for the intraoperative display of the aforementioned multi-dimensional digital images. Ceevra Reveal 2.0 is designed for use by health care professionals and is intended to assist the clinician who is responsible for making all final patient management decisions.
Ceevra Reveal 2.0 is a software-only device that allows clinicians to review CT and MR image data in three-dimensional (3D) format and/or stereoscopic 3D format (commonly known as virtual reality, or VR). The 3D and VR images are accessible through the Ceevra Reveal 2.0 mobile application which is used by clinicians for preoperative surgical planning and for the intraoperative display of the aforementioned 3D and VR images.
Ceevra Reveal 2.0 includes two main software-based user interface components, the Processing Interface and Viewer Interface. The Processing Interface is hosted on a cloud-based, virtual workstation and only accessed by authorized personnel, such as an imaging technician. The Processing Interface contains a graphical user interface where an imaging technician can select DICOM-compatible medical images, segment such images, and initiate processing into a 3D format. The Viewer Interface is a mobile application that is accessible via a compatible, touchscreen enabled, off-the-shelf mobile device to allow for clinicians to review the medical images in 3D and/or VR formats. Only when the compatible mobile device is used in conjunction with a compatible off-the-shelf VR headset can the surgeon view medical images in the VR format.
The product is intended to be used by trained medical professionals, including imaging technicians and clinicians/surgeons, and is used to assist in clinical decision making.
The 3D images generated using Ceevra Reveal 2.0 are intended to be used in connection with surgical operations in which CT or MR images are used for preoperative planning and/or reviewed intraoperatively.
The manner in which the 3D images are viewed and used does not vary between surgery types. The 3D images are viewed solely from the clinicians' compatible mobile devices, and are not viewed through or otherwise integrated with surgical navigation systems.
The provided document, a 510(k) premarket notification for Ceevra Reveal 2.0, does not contain the detailed information necessary to fully address all aspects of the request regarding acceptance criteria and the study that proves the device meets them. The document focuses on establishing substantial equivalence to a predicate device (Clarity Reveal 1.0, K171356), rather than presenting a performance study with detailed acceptance criteria and validation results against ground truth.
Here's an attempt to extract and infer information based on the provided text, and to explain why certain sections cannot be fully completed:
Missing Information: It's important to note that this 510(k) summary primarily focuses on demonstrating substantial equivalence to a predicate device. For devices seeking substantial equivalence as a "picture archiving and communication system" (PACS) with features like 3D visualization, the FDA often emphasizes software verification and validation, and occasionally clinical validation to demonstrate that the device performs as intended and is as safe and effective as the predicate. However, detailed studies with specific performance metrics against a defined ground truth, like those required for diagnostic AI algorithms, are typically not a mandatory part of a 510(k) for such a device unless it introduces a new intended use or technology that raises new questions of safety or effectiveness.
The document indicates: "Safety and performance of Ceevra Reveal 2.0 has been evaluated and verified in accordance with software specifications and applicable performance standards through software verification and validation testing." And it refers to IEC 62304 and FDA Guidance documents for software. This suggests that the "study" proving the device meets acceptance criteria was primarily software V&V, not a clinical performance study with human readers or standalone AI performance metrics against a specific ground truth.
1. A table of acceptance criteria and the reported device performance
Based on the provided document, there are no explicit quantitative acceptance criteria or reported device performance metrics in the format of a clinical study or diagnostic accuracy study. The primary "performance" is implicitly tied to its function as a medical imaging system for processing, review, analysis, communication, and media interchange, as well as for surgical planning and intraoperative display. The acceptance is based on demonstrating that it performs these functions adequately and is substantially equivalent to the predicate.
| Acceptance Criteria (Inferred from functionality and SE claims) | Reported Device Performance (Inferred from documentation) |
|---|---|
| Functional Equivalence: Ability to process, review, analyze, communicate, and interchange multi-dimensional digital images from CT/MR. | Stated to perform these functions, comparable to the predicate device. |
| Image Quality / Fidelity: Produce 3D and VR images suitable for preoperative surgical planning and intraoperative display. | Images are accessible through the mobile application, and viewable in 3D/VR, suggesting visual fidelity is acceptable for intended use. |
| Software Reliability & Safety: Software operates without critical errors and adheres to medical device software standards (IEC 62304). | "Safety and performance of Ceevra Reveal 2.0 has been evaluated and verified in accordance with software specifications and applicable performance standards through software verification and validation testing." Compliance with IEC 62304 and FDA guidance on software and cybersecurity noted. |
| User Interface & Experience: Intuitive and effective interfaces for imaging technicians (Processing Interface) and clinicians/surgeons (Viewer Interface). | Implied through description of interfaces and intended use by medical professionals. |
| Intraoperative Use Capability (Delta from Predicate): Ability to display images intraoperatively. | Explicitly stated as a new feature for Ceevra Reveal 2.0 and compared against the predicate (which does not have this feature), indicating it was tested for this capability. |
2. Sample size used for the test set and the data provenance
The document does not detail a "test set" in the context of a clinical performance study with patient data and ground truth labels. The "testing" referred to is primarily software verification and validation. Therefore, there is no information on:
- Sample size for a test set (e.g., number of cases or patients).
- Data provenance (e.g., country of origin, retrospective or prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Again, given that the document does not describe a clinical performance study against a specific ground truth for diagnostic accuracy, this information is not available. The ground truth for this type of device, which is a display and processing system, typically relates to the accuracy of the image reconstruction, segmentation, and visualization, rather than a diagnostic outcome. These are validated through engineering and software testing.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable, as no external clinical test set requiring adjudication is described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC study is mentioned. This device is described as a "medical imaging system" for processing and display, and "software for preoperative surgical planning" and "intraoperative display." It is not described as an AI/CAD (Computer-Aided Detection/Diagnosis) device, and therefore comparative effectiveness studies demonstrating human improvement with AI assistance are not applicable or described in this document.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
The device's function is described as providing 3D/VR visualizations for human clinicians. It is explicitly stated that it "is intended to assist the clinician who is responsible for making all final patient management decisions." As such, standalone diagnostic performance in the sense of an algorithm making a decision is not the device's intended function or claimed capability. The "standalone" performance would be related to the accuracy of its 3D reconstruction and segmentation algorithms, which would be validated through internal software testing, not typically reported in detail in the 510(k) summary.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For a device primarily focused on image processing and visualization (like a PACS), the "ground truth" for validation typically refers to the accuracy and fidelity of the 3D reconstructions to the original DICOM data and the anatomical structures within them. This would be established through:
- Reference standard imaging (DICOM): The input CT/MR scans are the "ground truth" regarding the anatomy captured.
- Known segmentation accuracy: If segmentation is performed, its accuracy against manually segmented or expert-reviewed ground truth models.
- Visual inspection and clinical utility assessment: Review by qualified clinicians to ensure the 3D/VR representations are accurate, useful, and do not introduce artifacts or distortions that could mislead.
The document does not specify the exact methods for establishing this ground truth for the "software verification and validation testing."
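As a concrete illustration of the first two bullets, one plausible fidelity check compares the volume enclosed by a reconstructed surface mesh against the voxel volume of the source segmentation. This is a hypothetical sketch of such a test, not Ceevra's documented V&V protocol:

```python
import numpy as np
from skimage import measure

# Toy segmentation: a 32-voxel cube inside a 64^3 volume.
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[16:48, 16:48, 16:48] = 1
spacing = (1.0, 1.0, 1.0)  # mm per voxel, e.g., taken from DICOM metadata

voxel_volume = mask.sum() * float(np.prod(spacing))

# Reconstruct a surface mesh, then compute the enclosed volume via the
# divergence theorem summed over the triangle faces.
verts, faces, _, _ = measure.marching_cubes(mask, level=0.5, spacing=spacing)
v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
mesh_volume = abs(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum()) / 6.0

# Require the reconstruction to preserve volume within a chosen tolerance.
assert abs(mesh_volume - voxel_volume) / voxel_volume < 0.05
```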
8. The sample size for the training set
The document does not mention "training sets" as would be relevant for a machine learning or AI-based device. Since it seems to be a rules-based or traditional image processing software, a "training set" in the AI sense is unlikely to have been used or described.
9. How the ground truth for the training set was established
Not applicable, as no training set is indicated.
(86 days)
Clarity Reveal is intended as a medical imaging system that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired from CT or MR imaging devices. It is also intended as software for preoperative surgical planning. Clarity Reveal is designed for use by healthcare professionals and is intended to assist the clinician who is responsible for making all final patient management decisions.
Clarity Reveal is a software-only device that allows clinicians to review CT and MR image data in three-dimensional (3D) format and/or stereoscopic 3D format (commonly known as virtual reality, or VR). The 3D and VR images are accessible through the Clarity Reveal mobile application which is used by clinicians for preoperative planning.
Clarity Reveal includes two main software-based user interface components, the Processing Interface and Viewer Interface. The Processing Interface is hosted on a cloud-based, virtual workstation and only accessed by authorized personnel, such as an imaging technician. The Processing Interface contains a graphical user interface where an imaging technician can select DICOM-compatible medical images, segment such images, and initiate processing into a 3D format. The Viewer Interface is a mobile application that is accessible via a compatible, touchscreen enabled, off-the-shelf mobile device to allow for clinicians to review the medical images in 3D and/or VR formats. Only when the compatible mobile device is used in conjunction with a compatible off-the-shelf VR headset can the surgeon view medical images in the VR format.
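As background on the first step of this workflow, here is a hedged sketch of how a DICOM series is commonly assembled into a 3-D volume in Python with pydicom; the directory name is a placeholder, and this is not Ceevra's code.

```python
from pathlib import Path

import numpy as np
import pydicom

# Read every slice in a (placeholder) series directory, then sort the
# slices by their position along the scan axis.
slices = [pydicom.dcmread(p) for p in Path("series_dir").glob("*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

# Stack the pixel arrays into a 3-D volume (rows x cols x slices) and
# recover voxel spacing from standard DICOM attributes.
volume = np.stack([s.pixel_array for s in slices], axis=-1)
slice_thickness = abs(float(slices[1].ImagePositionPatient[2])
                      - float(slices[0].ImagePositionPatient[2]))
spacing = (*(float(v) for v in slices[0].PixelSpacing), slice_thickness)
```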
The product is intended to be used by trained medical professionals, including imaging technicians and clinicians/surgeons, and is used to assist in clinical decision making.
This document describes the Ceevra Clarity Reveal device, a software-only medical imaging system. The provided text is a 510(k) summary submitted to the FDA, detailing the device's intended use, description, and comparison to a predicate device. However, it does not contain specific acceptance criteria or the study data proving the device meets those criteria.
Therefore, I cannot fulfill your request for:
- A table of acceptance criteria and the reported device performance
- Sample size used for the test set and the data provenance
- Number of experts used to establish the ground truth for the test set and the qualifications
- Adjudication method
- MRMC comparative effectiveness study details
- Standalone performance details
- Type of ground truth used
- Sample size for the training set
- How the ground truth for the training set was established
The document only states the following regarding performance:
- Performance Data (Section 7): "Safety and performance of Clarity Reveal has been evaluated and verified in accordance with software specifications and applicable performance standards through software verification and validation testing. Additionally, the software validation activities were performed in accordance with IEC 62304:2006/AC: 2008- Medical device software - Software life cycle processes, in addition to the FDA Guidance documents, 'Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices' and 'Content of Premarket Submission for Management of Cybersecurity in Medical Devices.'"
This statement indicates that verification and validation were performed to ensure the device met software specifications and applicable performance standards, and that the process followed relevant international standards and FDA guidance. However, it does not disclose the specific performance metrics, acceptance criteria, or the results of those tests.
The 510(k) summary focuses on demonstrating "substantial equivalence" to a predicate device (EchoPixel True3D Viewer K142107) based on intended use, technological characteristics, and general safety/performance verification, rather than providing detailed clinical study results or quantitative performance criteria.