Search Results

Found 3 results

510(k) Data Aggregation

    K Number
    K243933
    Device Name
    Ceevra Reveal 3+
    Manufacturer
    Date Cleared
    2025-03-04

    (74 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Intended Use

    Ceevra Reveal 3+ is intended as a medical imaging system that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired from CT or MR imaging devices and that such processing may include the generation of preliminary segmentations of normal anatomy using software that employs machine learning and other computer vision algorithms. It is also intended as software for preoperative surgical planning, and as software for the intraoperative display of the aforementioned multi-dimensional digital images. Ceevra Reveal 3+ is designed for use by health care professionals and is intended to assist the clinician who is responsible for making all final patient management decisions.

    The machine learning algorithms in use by Ceevra Reveal 3+ are for use only for adult patients (22 and over). Three-dimensional images for patients under the age of 22 or of unknown age will be generated without the use of any machine learning algorithms.

    Device Description

    Ceevra Reveal 3+, as modified ("Modified Reveal 3+"), manufactured by Ceevra, Inc. (the "Company"), is a software as a medical device with two main functions: (1) it is used by Company personnel to generate three-dimensional (3D) images from existing patient CT and MR imaging, and (2) it is used by clinicians to view and interact with the 3D images during preoperative planning and intraoperatively.

    Clinicians view 3D images via the Mobile Image Viewer software application which runs on compatible mobile devices, and the Desktop Image Viewer software application which runs on compatible computers. The 3D images may also be displayed on compatible external displays, or in virtual reality (VR) format with a compatible off-the-shelf VR headset.

    Modified Reveal 3+ includes features that enable clinicians to interact with the 3D images including rotating, zooming, panning, selectively showing or hiding individual anatomical structures, and viewing measurements of or between anatomical structures.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Ceevra Reveal 3+ device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implied by the reported performance metrics. The study evaluated the accuracy of segmentations generated by the machine learning models. The performance metrics reported are the Sørensen-Dice coefficient (DSC) for volume-based segmentation accuracy and the Hausdorff distance metric at the 95th percentile (HD-95) for surface distance accuracy.

    | Anatomical Structure | Imaging Modality | Metric | Reported Device Performance |
    |---|---|---|---|
    | Prostate | MR prostate imaging | DSC | 0.90 |
    | Bladder | MR prostate imaging | DSC | 0.93 |
    | Neurovascular bundles | MR prostate imaging | HD-95 | 6.6 mm |
    | Kidney | CT abdomen imaging | DSC | 0.92 |
    | Kidney | MR abdomen imaging | DSC | 0.89 |
    | Artery | CT abdomen imaging | DSC | 0.90 |
    | Artery | MR abdomen imaging | DSC | 0.87 |
    | Vein | CT abdomen imaging | DSC | 0.88 |
    | Vein | MR abdomen imaging | DSC | 0.82 |
    | Pulmonary artery | CT chest imaging | DSC | 0.82 |
    | Pulmonary vein | CT chest imaging | DSC | 0.83 |
    | Airways | CT chest imaging | DSC | 0.82 |
    | Bronchopulmonary segments | CT chest imaging | DSC | 0.86 |
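    The two metrics reported above, the Sørensen-Dice coefficient and HD-95, can be made concrete with a short sketch. This is illustrative code, not from the 510(k) filing: the brute-force distance computation runs over all foreground voxels for clarity, whereas real pipelines restrict to surface voxels and use the scan's physical voxel spacing.

    ```python
    import numpy as np

    def dice(a, b):
        """Sørensen-Dice coefficient (DSC) for two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    def hd95(a, b, spacing=1.0):
        """Hausdorff distance at the 95th percentile (HD-95).

        Brute-force over all foreground voxels for clarity; production code
        would restrict to surface voxels and use physical voxel spacing.
        """
        pa = np.argwhere(a) * spacing
        pb = np.argwhere(b) * spacing
        d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
        return max(np.percentile(d.min(axis=1), 95),   # each point of a to nearest b
                   np.percentile(d.min(axis=0), 95))   # each point of b to nearest a

    # Toy 2-D masks: the "prediction" is the ground-truth square shifted one voxel.
    gt = np.zeros((8, 8), dtype=bool);   gt[2:6, 2:6] = True
    pred = np.zeros((8, 8), dtype=bool); pred[2:6, 3:7] = True
    print(dice(gt, pred), hd95(gt, pred))  # → 0.75 1.0
    ```

    On the toy masks, a one-voxel shift yields DSC 0.75 and an HD-95 of one voxel, illustrating that DSC measures volume overlap while HD-95 measures the 95th-percentile boundary error.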

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: A total of 133 imaging studies were used to evaluate the device.
    • Data Provenance: The text does not explicitly state the country of origin. However, it indicates that the device's machine learning algorithms are for use with adults (22 and over) and that "Ethnicity of patients in the datasets was reasonably correlated to the overall US population," implying the data is likely from the United States or at least representative of the US population. It was retrospective data, sourced from various scanning institutions. Independence of training and testing data was enforced at the institution level.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The text states: "Performance was verified by comparing segmentations generated by the machine learning models against segmentations generated by medical professionals from the same imaging study."
    The specific number of experts is not mentioned.
    Their qualifications are broadly described as "medical professionals," without further detail on their experience level or subspecialty (e.g., radiologist with X years of experience).

    4. Adjudication Method for the Test Set

    The text implies a direct comparison between the AI's segmentation and the "medical professionals'" segmentation. It does not specify an adjudication method (e.g., 2+1, 3+1 consensus with multiple readers) for establishing the ground truth if there were discrepancies among medical professionals. It simply states "segmentations generated by medical professionals." This might imply a single expert's ground truth, or a pre-established consensus for each case, but no specific method is detailed.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, an MRMC comparative effectiveness study was not described. The study evaluated the standalone performance of the AI model against human-generated ground truth; there is no mention of comparing human readers with versus without AI assistance, so no effect size for reader improvement with AI assistance is provided.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, a standalone performance evaluation was done. The study specifically verified the performance of the "machine learning models" by comparing their generated segmentations directly against ground truth established by medical professionals.

    7. The Type of Ground Truth Used

    The ground truth used was expert consensus / expert-generated segmentations. The text states it was established by "segmentations generated by medical professionals."

    8. The Sample Size for the Training Set

    The document does not provide the exact sample size for the training set. It only states that "No imaging study used to verify performance was used for training; independence of training and testing data were enforced at the level of the scanning institution, namely, studies sourced from a specific institution were used for either training or testing but could not be used for both." It also mentions that "The data used in the device validation ensured diversity in patient population and scanner manufacturers."
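    The institution-level independence described in the quoted text can be sketched as a grouped split. The helper below is hypothetical (function name, fields, and institution labels are illustrative, not from the submission); it assigns whole institutions to one side of the split so no institution contributes studies to both training and testing.

    ```python
    import random

    def institution_split(studies, test_fraction=0.3, seed=0):
        """Assign whole institutions to either train or test, so that no
        institution contributes imaging studies to both sides."""
        institutions = sorted({s["institution"] for s in studies})
        random.Random(seed).shuffle(institutions)
        n_test = max(1, round(test_fraction * len(institutions)))
        test_inst = set(institutions[:n_test])
        train = [s for s in studies if s["institution"] not in test_inst]
        test  = [s for s in studies if s["institution"] in test_inst]
        return train, test

    # Hypothetical study records; institution labels are illustrative.
    studies = [{"id": i, "institution": inst}
               for i, inst in enumerate(["A", "A", "B", "C", "C", "D"])]
    train, test = institution_split(studies)
    # Disjoint by construction: no institution appears in both splits.
    assert not ({s["institution"] for s in train}
                & {s["institution"] for s in test})
    ```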

    9. How the Ground Truth for the Training Set Was Established

    The document does not explicitly state how the ground truth for the training set was established. However, given that the evaluation for the test set used segmentations generated by "medical professionals," it is highly probable that the ground truth for the training set was established in a similar manner, likely through manual segmentation by medical experts.


    K Number
    K233568
    Device Name
    Ceevra Reveal 3+
    Manufacturer
    Date Cleared
    2023-12-05

    (29 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Intended Use

    Ceevra Reveal 3+ is intended as a medical imaging system that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired from CT or MR imaging devices and that such processing may include the generation of preliminary segmentations of normal anatomy using software that employs machine learning and other computer vision algorithms. It is also intended as software for preoperative surgical planning, and as software for the intraoperative display of the aforementioned multi-dimensional digital images. Ceevra Reveal 3+ is designed for use by health care professionals and is intended to assist the clinician who is responsible for making all final patient management decisions.

    The machine learning algorithms in use by Ceevra Reveal 3+ are for use only for adult patients (22 and over). Three-dimensional images for patients under the age of 22 or of unknown age will be generated without the use of any machine learning algorithms.

    Device Description

    Ceevra Reveal 3+ ("Reveal 3+"), manufactured by Ceevra, Inc. (the "Company"), is a software as a medical device with two main functions: (1) it is used by Company personnel to generate three-dimensional (3D) images from existing patient CT and MR imaging, and (2) it is used by clinicians to view and interact with the 3D images during preoperative planning and intraoperatively.

    Clinicians view 3D images via the Reveal 3+ Mobile Image Viewer software application which runs on compatible mobile devices, and the Reveal 3+ Desktop Image Viewer software application which runs on compatible computers. The 3D images may also be displayed on compatible external displays, or in virtual reality (VR) format with a compatible off-the-shelf VR headset.

    Reveal 3+ includes features that enable clinicians to interact with the 3D images including rotating, zooming, panning, selectively showing or hiding individual anatomical structures, and viewing measurements of or between anatomical structures.

    AI/ML Overview

    Here's a detailed breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance

    Machine Learning Model Performance

    | Acceptance Criteria (Metric) | Reported Device Performance |
    |---|---|
    | Prostate (MR prostate imaging) | 0.87 Sørensen-Dice coefficient (DSC) |
    | Bladder (MR prostate imaging) | 0.90 Sørensen-Dice coefficient (DSC) |
    | Neurovascular bundles (MR prostate imaging) | 7.8 mm Hausdorff distance at the 95th percentile (HD-95) |
    | Kidney (CT abdomen imaging) | 0.89 Sørensen-Dice coefficient (DSC) |
    | Kidney (MR abdomen imaging) | 0.87 Sørensen-Dice coefficient (DSC) |
    | Artery (CT abdomen imaging) | 0.87 Sørensen-Dice coefficient (DSC) |
    | Artery (MR abdomen imaging) | 0.83 Sørensen-Dice coefficient (DSC) |
    | Vein (CT abdomen imaging) | 0.86 Sørensen-Dice coefficient (DSC) |
    | Vein (MR abdomen imaging) | 0.81 Sørensen-Dice coefficient (DSC) |
    | Artery (CT chest imaging) | 0.85 Sørensen-Dice coefficient (DSC) |
    | Vein (CT chest imaging) | 0.81 Sørensen-Dice coefficient (DSC) |
    | Measurement Features Accuracy | All three types of measurements (volumes of structures, diameter of structure, distance between two points) produced by Ceevra Reveal 3+ were verified to be accurate within a mean difference of +/- 10%. |
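    The measurement-accuracy criterion (mean difference within +/- 10%) can be sketched as a simple check. The reference and measured values below are invented for illustration; they are not data from the submission.

    ```python
    def mean_percent_difference(measured, reference):
        """Mean signed percent difference of device measurements vs. reference."""
        diffs = [100.0 * (m - r) / r for m, r in zip(measured, reference)]
        return sum(diffs) / len(diffs)

    # Invented values: e.g. volume (mL), distance (mm), diameter (mm).
    reference = [50.0, 120.0, 8.5]
    measured  = [52.0, 115.0, 8.9]
    mpd = mean_percent_difference(measured, reference)
    assert abs(mpd) <= 10.0  # acceptance: mean difference within +/- 10%
    ```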

    Study Details:

    2. Sample size used for the test set and the data provenance:

    • Sample Size: A total of 141 imaging studies were used to evaluate the device's machine learning models.
    • Data Provenance: The studies were actual CT or MR imaging studies of patients. No dataset contained more than one imaging study from any particular patient. The data ensured diversity in patient population and scanner manufacturers. Subgroup analysis was performed for patient age, patient sex, and scanner manufacturers.
      • Patient Demographics: For non-prostate related datasets, 40% female patients and 60% male patients. Across all datasets, 32% of patients were under 60 years old, 32% were 60 to 70 years old, 30% were over 70 years old, and 6% were of unknown age.
      • Scanner Manufacturers: Included GE Medical Systems, Siemens, Toshiba, and Philips Medical Systems.
      • Ethnicity: Reasonably correlated to the overall US population.
      • Retrospective/Prospective: The text does not explicitly state whether the data was retrospective or prospective, but it refers to "existing patient CT and MR imaging" and "datasets of actual CT or MR imaging studies of patients," which typically implies retrospective use of previously acquired data.
      • Country of Origin: Not specified.
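    The subgroup analysis described above (by patient age, patient sex, and scanner manufacturer) can be sketched as a grouping over per-study results. The records and DSC values below are illustrative only, not data from the submission.

    ```python
    from collections import defaultdict
    from statistics import mean

    # Illustrative per-study results; fields mirror the subgroups named above.
    results = [
        {"age": "<60",   "sex": "F", "scanner": "Siemens", "dsc": 0.91},
        {"age": "60-70", "sex": "M", "scanner": "GE",      "dsc": 0.88},
        {"age": ">70",   "sex": "M", "scanner": "Philips", "dsc": 0.90},
        {"age": "<60",   "sex": "F", "scanner": "GE",      "dsc": 0.89},
    ]

    def subgroup_means(results, key):
        """Mean DSC per level of one stratification variable."""
        groups = defaultdict(list)
        for r in results:
            groups[r[key]].append(r["dsc"])
        return {k: mean(v) for k, v in groups.items()}

    for key in ("age", "sex", "scanner"):
        print(key, subgroup_means(results, key))
    ```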

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: The text states "segmentations generated by medical professionals," but does not explicitly quantify the number of individual experts or medical professionals involved in creating the ground truth for the test set.
    • Qualifications of Experts: The experts are broadly described as "medical professionals." No further specific qualifications (e.g., years of experience, subspecialty) are provided.

    4. Adjudication method for the test set:

    • Adjudication Method: The text does not specify an adjudication method like "2+1" or "3+1." It only states that performance was verified by comparing model-generated segmentations against segmentations generated by medical professionals. This implies a direct comparison rather than a specific multi-expert adjudication workflow.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:

    • MRMC Study: No, an MRMC comparative effectiveness study involving human readers with and without AI assistance was not explicitly described or reported in the provided text. The study focused on the standalone performance of the machine learning models.
    • Effect Size: Not applicable, as no MRMC study was described.

    6. If a standalone (i.e., algorithm-only without human-in-the-loop performance) was done:

    • Standalone Performance: Yes, the described study evaluates the standalone performance of the machine learning algorithms. The performance metrics (DSC, HD-95) directly assess how well the algorithms' segmentations compare to the ground truth established by medical professionals.

    7. The type of ground truth used:

    • Ground Truth Type: Expert consensus/segmentation. The ground truth was established by "segmentations generated by medical professionals from the same imaging study."

    8. The sample size for the training set:

    • Training Set Sample Size: The text states, "No imaging study used to verify performance was used for training; independence of training and testing data were enforced at the level of the scanning institution, namely, studies sourced from a specific institution were used for either training or testing but could not be used for both." However, the specific sample size of the training set is not provided. It is only implied that it was distinct from the 141-study test set.

    9. How the ground truth for the training set was established:

    • Training Set Ground Truth: The text does not explicitly detail how the ground truth for the training set was established. It only emphasizes the independence of training and testing data and that the test set's ground truth was created by "medical professionals." It is reasonable to infer that the training set ground truth was similarly established by medical professionals, consistent with standard machine learning practices for supervised learning in medical imaging, but this is not explicitly stated.

    K Number
    K222676
    Device Name
    Ceevra Reveal 3
    Manufacturer
    Date Cleared
    2023-04-25

    (231 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Intended Use

    Ceevra Reveal 3 is intended as a medical imaging system that allows the processing, review, and media interchange of multi-dimensional digital images acquired from CT or MR imaging devices and that such processing may include the generation of preliminary segmentations of normal anatomy using software that employs machine learning and other computer vision algorithms. It is also intended as software for preoperative surgical planning, and as software for the intraoperative display of the aforementioned multi-dimensional digital images. Ceevra Reveal 3 is designed for use by health care professionals and is intended to assist the clinician who is responsible for making all final patient management decisions.

    The machine learning algorithms in use by Ceevra Reveal 3 are for use only for adult patients (22 and over). Three-dimensional images for patients under the age of 22 or of unknown age will be generated without the use of any machine learning algorithms.

    Device Description

    Ceevra Reveal 3 ("Reveal 3"), manufactured by Ceevra, Inc. (the "Company"), is a software as a medical device with two main functions: (1) it is used by Company personnel to generate three-dimensional (3D) images from existing patient CT and MR imaging, and (2) it is used by clinicians to view and interact with the 3D images during preoperative planning and intraoperatively.

    Clinicians view 3D images via the Reveal 3 Mobile Image Viewer software application which runs on compatible mobile devices, and the Reveal 3 Desktop Image Viewer software application which runs on compatible computers. The 3D images may also be displayed on compatible external displays, or in virtual reality (VR) format with a compatible off-the-shelf VR headset.

    Reveal 3 includes additional features that enable clinicians to interact with the 3D images including rotating, zooming, panning, and selectively showing or hiding individual anatomical structures.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Ceevra Reveal 3, based on the provided FDA 510(k) summary:

    Acceptance Criteria and Device Performance

    | Acceptance Criteria (Metric) | Reported Device Performance |
    |---|---|
    | Prostate (from MR prostate imaging) | 0.87 Sørensen-Dice coefficient (DSC) |
    | Bladder (from MR prostate imaging) | 0.90 Sørensen-Dice coefficient (DSC) |
    | Neurovascular bundles (from MR prostate imaging) | 7.8 mm Hausdorff distance at 95th percentile (HD-95) |
    | Kidney (from CT abdomen imaging) | 0.89 Sørensen-Dice coefficient (DSC) |
    | Kidney (from MR abdomen imaging) | 0.87 Sørensen-Dice coefficient (DSC) |
    | Artery (from CT abdomen imaging) | 0.87 Sørensen-Dice coefficient (DSC) |
    | Artery (from MR abdomen imaging) | 0.83 Sørensen-Dice coefficient (DSC) |
    | Vein (from CT abdomen imaging) | 0.86 Sørensen-Dice coefficient (DSC) |
    | Vein (from MR abdomen imaging) | 0.81 Sørensen-Dice coefficient (DSC) |
    | Artery (from CT chest imaging) | 0.85 Sørensen-Dice coefficient (DSC) |
    | Vein (from CT chest imaging) | 0.81 Sørensen-Dice coefficient (DSC) |

    Note: The document states that "Performance was verified by comparing segmentations generated by the machine learning models against segmentations generated by medical professionals from the same imaging study." Specific numerical acceptance thresholds (e.g., "must be ≥ 0.85 DSC") are not given in the provided text, so the table lists the reported performance values themselves as the basis for demonstrating compliance.

    Study Details:

    1. Sample Size used for the test set and the data provenance:

      • Sample Size: 141 imaging studies.
      • Data Provenance: Actual CT or MR imaging studies of patients.
        • No dataset contained more than one imaging study from any particular patient.
        • Independence of training and testing data was enforced at the level of the scanning institution (studies from a specific institution were used for either training or testing but not both).
        • Diversity in patient population was ensured across patient age, patient sex, and scanner manufacturers.
        • Subgroup analysis was performed for patient age, patient sex, and scanner manufacturers.
          • Non-prostate related datasets: 40% female, 60% male.
          • Across all datasets by age: 32% under 60, 32% 60-70, 30% over 70, 6% unknown age.
          • Scanner manufacturers included GE Medical Systems, Siemens, Toshiba, and Philips Medical Systems.
          • Ethnicity of patients was generally correlated to the overall US population.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • The document states "segmentations generated by medical professionals." It does not specify the number of medical professionals or their specific qualifications (e.g., radiologist with X years of experience).
    3. Adjudication method for the test set:

      • The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1) for resolving disagreements among medical professionals if multiple experts were used to create the ground truth. It simply states "segmentations generated by medical professionals."
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:

      • No, an MRMC comparative effectiveness study comparing human readers with and without AI assistance was not mentioned or described. The study focused on the performance of the machine learning models in comparison to ground truth established by medical professionals.
    5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

      • Yes, performance was verified by comparing the segmentations generated by the machine learning models against the ground truth. This indicates a standalone performance evaluation of the algorithm.
    6. The type of ground truth used:

      • Expert consensus/manual segmentation by medical professionals. The document states: "Performance was verified by comparing segmentations generated by the machine learning models against segmentations generated by medical professionals from the same imaging study."
    7. The sample size for the training set:

      • The exact sample size for the training set is not explicitly stated. It only mentions that "No imaging study used to verify performance was used for training; independence of training and testing data were enforced at the level of the scanning institution, namely, studies sourced from a specific institution were used for either training or testing but could not be used for both."
    8. How the ground truth for the training set was established:

      • The document does not explicitly detail how the ground truth for the training set was established. However, given that the ground truth for the test set was established by "medical professionals," it is highly probable that the training set also used ground truth established by medical professionals or similar expert annotations.