510(k) Data Aggregation
(279 days)
CoverScan is a medical image management and processing software package that allows the display, analysis and postprocessing of DICOM compliant medical images and MR data.
CoverScan provides both viewing and analysis capabilities to ascertain quantified metrics of multiple organs such as the heart, lungs, liver, spleen, pancreas and kidney.
CoverScan provides measurements in different organs to be used for the assessment of longitudinal and transverse relaxation time and rate (T1, SR-T1, cT1, T2), fat content (proton density fat fraction or PDFF) and metrics of organ function (e.g., left ventricular ejection fraction and lung fractional area change on deep inspiration).
These metrics, derived from the MR data and interpreted by a licensed physician, yield information that may assist in diagnosis, clinical management and monitoring of patients.
CoverScan is not intended for asymptomatic screening. This device is intended for use with Siemens 1.5T MRI scanners.
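For context, the longitudinal relaxation time T1 named above is conventionally estimated by fitting a recovery model to signal sampled at several delay times. A minimal illustrative sketch (not CoverScan's actual method, which the submission does not disclose), using a saturation-recovery model and synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

def t1_recovery(t, s0, t1):
    """Saturation-recovery signal model: S(t) = S0 * (1 - exp(-t / T1))."""
    return s0 * (1.0 - np.exp(-t / t1))

def fit_t1(times_ms, signal):
    """Least-squares fit of the recovery model; returns (S0, T1 in ms)."""
    popt, _ = curve_fit(t1_recovery, times_ms, signal,
                        p0=(signal.max(), 500.0))
    return popt

# Synthetic example: tissue with T1 = 800 ms sampled at six delays.
t = np.array([100.0, 300.0, 600.0, 1200.0, 2400.0, 4800.0])
s = t1_recovery(t, 1000.0, 800.0)
s0_hat, t1_hat = fit_t1(t, s)
```

On this noiseless synthetic data the fit recovers T1 ≈ 800 ms; with real acquisitions, noise handling and the specific sequence model matter.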
CoverScan is a post-processing software system comprised of several software modules. It uses acquired MR data to produce metrics of quantified tissue characteristics of the heart, lungs, liver, kidneys, pancreas and spleen.
Metrics produced by CoverScan can be used by licensed physicians in a clinical setting for the purposes of assessing multiple organs.
The provided documentation describes the CoverScan v1 device, which is a medical image management and processing software. While it mentions internal verification and validation testing, and that product specifications were met, it does not explicitly state specific quantitative acceptance criteria or detailed results of a study proving the device meets those criteria, especially in a comparative effectiveness context (MRMC).
The document primarily focuses on establishing substantial equivalence to predicate devices through a qualitative comparison of intended use, technological characteristics, and performance features. It indicates that "bench testing included functional verification to ensure software installation, licensing, labeling, and feature functionality all met design requirements" and that "The accuracy and precision of device measurements was assessed using purpose-built phantoms and in-vivo acquired data from volunteers." However, it does not provide the specific quantitative results of these assessments against defined acceptance criteria.
Therefore, much of the requested information cannot be directly extracted from the provided text. I will explain what information is available and highlight what is missing.
Here's an attempt to structure the information based on the provided text, with clear indications where the information is not present:
Acceptance Criteria and Device Performance Study (CoverScan v1)
The provided document indicates that CoverScan v1 underwent internal verification and validation testing to confirm it met product specifications and its overall ability to meet user needs was validated. However, specific, quantitative acceptance criteria for metrics like accuracy, sensitivity, specificity, etc., are not explicitly defined in the provided text. Similarly, the reported numerical device performance against such criteria is not detailed. The document broadly states that "The accuracy and precision of device measurements was assessed using purpose-built phantoms and in-vivo acquired data from volunteers."
Missing Information: A detailed table of acceptance criteria with numerical targets and the corresponding reported device performance values.
1. A table of acceptance criteria and the reported device performance
| Metric / Category | Acceptance Criteria (Quantitative) | Reported Device Performance (Quantitative) | Source/Test Type |
|---|---|---|---|
| Accuracy of measurements (cT1, T1, PDFF, T2) | Not explicitly defined in the document | "Assessed using purpose-built phantoms and in-vivo acquired data from volunteers covering a range of physiological values for cT1, T1 and PDFF." "Inter and intra operator variability was also assessed." | Bench testing, Phantom studies, In-vivo volunteer data |
| Precision of measurements | Not explicitly defined in the document | "Assessed using purpose-built phantoms... To assess the precision of CoverScan v1 measurements across supported scanners, in-vivo volunteer data was used." | Bench testing, Phantom studies, In-vivo volunteer data |
| Functional Verification | "Software installation, licensing, labeling, and feature functionality all met design requirements." | "Bench testing included functional verification to ensure software installation, licensing, labeling, and feature functionality all met design requirements." | Bench testing |
| Stress Testing | "System as a whole provides all the capabilities necessary to operate according to its intended use." | "All of the different components of the CoverScan software have been stress tested to ensure that the system as a whole provides all the capabilities necessary to operate according to its intended use." | Stress testing |
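The inter- and intra-operator variability referenced in the table is commonly summarised with a Bland-Altman repeatability coefficient. The submission does not disclose the actual statistic used; a minimal sketch of the conventional calculation, with hypothetical repeated cT1 readings:

```python
import numpy as np

def repeatability_coefficient(m1, m2):
    """Bland-Altman repeatability coefficient from paired repeat
    measurements: 1.96 * sqrt(2) * within-subject SD, with the
    within-subject SD estimated from the paired differences."""
    d = np.asarray(m1, float) - np.asarray(m2, float)
    sw = np.sqrt(np.mean(d ** 2) / 2.0)  # within-subject SD from pairs
    return 1.96 * np.sqrt(2.0) * sw

# Hypothetical scan/rescan cT1 values (ms) from one operator, 5 subjects.
scan = [830.0, 905.0, 780.0, 950.0, 870.0]
rescan = [838.0, 896.0, 788.0, 942.0, 861.0]
rc = repeatability_coefficient(scan, rescan)
```

The coefficient is the difference below which 95% of repeat measurements on the same subject are expected to fall.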
2. Sample size used for the test set and the data provenance
- Test Set Sample Size: The document mentions "in-vivo acquired data from volunteers" and that "Volunteers participating in the performance testing were representative of the intended patient population." However, the specific number of cases or volunteers used in the test set is not provided.
- Data Provenance: The document does not explicitly state the country of origin of the data or whether it was retrospective or prospective. It only mentions "in-vivo acquired data from volunteers," implying prospectively collected data for assessment, but this is not explicitly stated.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not provided in the document. The text indicates that the device produces quantitative metrics that are "interpreted by a licensed physician" to "assist in diagnosis, clinical management and monitoring of patients." However, it does not describe how ground truth was established for the performance testing, nor the number or qualifications of experts involved in that process.
4. Adjudication method for the test set
This information is not provided in the document. There is no mention of an adjudication process (e.g., 2+1, 3+1, none) for the test set's ground truth.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
The document does not indicate that an MRMC comparative effectiveness study was performed. The device is described as a "post-processing software system" that provides "quantified metrics" and does not describe AI assistance for human readers in a diagnostic workflow. The primary method of performance assessment mentioned is the accuracy and precision of the measurements themselves using phantoms and volunteer data, not reader performance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop performance) study was done
The performance testing described involves the device generating quantitative metrics. The phrase "The accuracy and precision of device measurements was assessed" suggests a standalone performance assessment of the algorithm's output (measurements) against some reference (phantoms, in-vivo data). While the final interpretation is by a physician, the core performance reported relates to the device's ability to produce these measurements consistently and accurately, which aligns with a standalone assessment of the algorithms.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth for the "accuracy and precision of device measurements" was established using:
- Purpose-built phantoms: These phantoms contain "vials with different relaxation times corresponding to the physiological ranges of tissue values." This provides a known, controlled physical ground truth.
- In-vivo acquired data from volunteers: For this type of data, the text does not specify how ground truth was established (e.g., through other validated methods, clinical outcomes, or expert consensus on a final diagnosis based on all available information). It only mentions that the data covered "a range of physiological values for cT1, T1 and PDFF."
8. The sample size for the training set
This information is not provided in the document. The document describes CoverScan v1 as software that takes acquired MR data and processes it; it does not detail any machine learning training processes or associated datasets.
9. How the ground truth for the training set was established
As there is no mention of a specific training set in the provided text, the method for establishing its ground truth is also not described. The document implies that the device is a measurement and processing tool rather than a machine learning model that requires a dedicated training set.
(67 days)
Hepatica (Hepatica v1) is a post-processing medical device software that presents quantified metrics which may contribute to the assessment of a patient's liver health.
Hepatica (Hepatica v1) uses image visualisation and analysis tools to process DICOM 3.0 compliant magnetic resonance image datasets to produce semi-automatically segmented 3D models of the liver, based on the work of Couinaud and the Brisbane 2000 terminology. For each identified Couinaud segment, volumetric data is determined and reported.
Hepatica (Hepatica v1) may also report iron-corrected T1 (cT1) and PDFF, calculated using the IDEAL method from multi-slice acquisitions, on a per-segment basis and over the whole liver. Both are quantified values of different fundamental liver tissue characteristics that can be used as measures of liver tissue health.
Hepatica (Hepatica v1) provides trained clinicians with additional information to evaluate the volume and health of a patient's liver on a segmental basis. It is not intended to replace the established procedures for the assessment of a patient's liver health. However, information gathered through existing diagnostic tests and clinical evaluation of the patient, as well as Hepatica (Hepatica v1), may support surgical decision making.
Hepatica v1 is a standalone software device that imports MR datasets encompassing the abdomen, including the liver. It provides visualisation and display of T1-weighted MR data, which can be analysed, and then reports quantitative metrics of tissue characteristics and liver volume. Datasets imported into Hepatica are DICOM 3.0 compliant, and reported metrics are independent of the MRI equipment vendor. The software allows 3D visualisation of the liver, quantification of metrics (cT1, PDFF and volumetry) from liver tissue, and export of results and images to a deliverable report. Hepatica v1 supports semi-automatic liver segmentation of T1-weighted volumetric data; segmentation requires the placement of anatomical landmarks to define the outer contours of the liver and can be adjusted by the operator where necessary. Where available, whole-liver and segmental cT1 and PDFF quantitative metrics derived from the predicate device may be presented in the final report. Hepatica uses volumetric datasets to create 2D anatomical views from all supported scanners; where available, cT1 and PDFF parametric maps are derived from the predicate device. Quantified metrics and images derived from the analysis of liver volume and tissue characteristics are collated into a report for evaluation and interpretation by a clinician.
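PDFF itself is the fat signal expressed as a fraction of total (fat + water) signal per voxel. A toy sketch of that final map computation, assuming fat and water magnitude maps have already been separated by an IDEAL-type reconstruction (the separation step, which is the hard part, is not shown and is not the device's disclosed implementation):

```python
import numpy as np

def pdff_map(fat, water, eps=1e-9):
    """Proton density fat fraction per voxel, as a percentage:
    100 * fat / (fat + water). Inputs are magnitude signal maps
    from a fat-water separated reconstruction."""
    fat = np.asarray(fat, float)
    water = np.asarray(water, float)
    return 100.0 * fat / (fat + water + eps)  # eps avoids divide-by-zero

# Toy 2x2 maps: voxels with 5%, 30%, 10% and 0% fat fraction.
fat = np.array([[5.0, 30.0], [10.0, 0.0]])
water = np.array([[95.0, 70.0], [90.0, 100.0]])
pdff = pdff_map(fat, water)
```

A segmental or whole-liver PDFF value would then be a summary statistic (e.g. a mean) of this map over the relevant segmentation mask.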
Here's a breakdown of the acceptance criteria and study information for the Hepatica (Hepatica v1) device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state acceptance criteria in a quantitative format (e.g., "accuracy must be >X%"). Instead, it states that the performance testing "demonstrates that Hepatica v1 is at least as safe and effective as the predicate device and does not introduce any new risks."
However, it provides "Upper and Lower Limits of Agreement" from a Bland-Altman analysis, which can be interpreted as the range within which the device's measurements agree with the gold standard. A tighter range indicates higher accuracy. The precision results are also presented as "Upper and Lower limits of Agreement" for repeatability and reproducibility.
The table below summarizes the reported device performance, which implies that these values met the internal acceptance parameters for demonstrating substantial equivalence:
| Metric | Type of Measurement | Reported Device Performance (Upper and Lower Limits of Agreement) |
|---|---|---|
| Volume (% of total liver volume) | | |
| Segment 1 | Accuracy | -0.49% to 0.95% |
| Segment 2 | Accuracy | -3.09% to 5.06% |
| Segment 3 | Accuracy | -5.01% to 3.9% |
| Segment 4a | Accuracy | -4.60% to 4.26% |
| Segment 4b | Accuracy | -5.50% to 2.56% |
| Segment 5 | Accuracy | -1.54% to 3.38% |
| Segment 6 | Accuracy | -4.34% to 4.29% |
| Segment 7 | Accuracy | -3.30% to 1.79% |
| Segment 8 | Accuracy | -3.86% to 5.54% |
| Whole liver | Accuracy | -4.16% to 0.54% |
| Segment 1 | Repeatability | -0.72% to 0.65% |
| Segment 2 | Repeatability | -3.06% to 3.24% |
| Segment 3 | Repeatability | -2.67% to 3.13% |
| Segment 4a | Repeatability | -2.48% to 2.43% |
| Segment 4b | Repeatability | -1.82% to 1.96% |
| Segment 5 | Repeatability | -4.45% to 4.45% |
| Segment 6 | Repeatability | -3.60% to 4.10% |
| Segment 7 | Repeatability | -3.32% to 3.33% |
| Segment 8 | Repeatability | -4.99% to 3.81% |
| Whole liver | Repeatability | -6.15% to 3.78% |
| Segment 1 | Reproducibility | -1.39% to 0.90% |
| Segment 2 | Reproducibility | -3.10% to 3.15% |
| Segment 3 | Reproducibility | -2.41% to 2.06% |
| Segment 4a | Reproducibility | -2.54% to 2.58% |
| Segment 4b | Reproducibility | -1.70% to 1.74% |
| Segment 5 | Reproducibility | -4.97% to 5.94% |
| Segment 6 | Reproducibility | -3.69% to 5.40% |
| Segment 7 | Reproducibility | -4.39% to 3.59% |
| Segment 8 | Reproducibility | -6.23% to 5.04% |
| Whole liver | Reproducibility | -16.6% to 6.95% |
| cT1 | | |
| Segment 1 | Accuracy | -1.13% to 0.61% |
| Segment 2 | Accuracy | -2.38% to 1.56% |
| Segment 3 | Accuracy | -1.51% to 1.31% |
| Segment 4a | Accuracy | -0.77% to 1.10% |
| Segment 4b | Accuracy | -1.32% to 1.13% |
| Segment 5 | Accuracy | -1.11% to 0.87% |
| Segment 6 | Accuracy | -1.00% to 0.83% |
| Segment 7 | Accuracy | -0.88% to 0.64% |
| Segment 8 | Accuracy | -0.91% to 1.09% |
| Whole liver | Accuracy | 0.00% to 0.00% (This suggests perfect agreement or rounding issues) |
| PDFF | | |
| Segment 1 | Accuracy | -0.26% to 0.21% |
| Segment 2 | Accuracy | -0.33% to 0.38% |
| Segment 3 | Accuracy | -0.16% to 0.17% |
| Segment 4a | Accuracy | -0.30% to 0.23% |
| Segment 4b | Accuracy | -0.16% to 0.14% |
| Segment 5 | Accuracy | -0.16% to 0.18% |
| Segment 6 | Accuracy | -0.16% to 0.26% |
| Segment 7 | Accuracy | -0.12% to 0.18% |
| Segment 8 | Accuracy | -0.24% to 0.32% |
| Whole liver | Accuracy | -0.02% to 0.02% |
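Limits of agreement like those tabulated above come from a Bland-Altman analysis: the mean of the paired differences (bias) plus or minus 1.96 standard deviations of those differences. A minimal sketch with hypothetical values (not the submission's data):

```python
import numpy as np

def limits_of_agreement(device, reference):
    """Bland-Altman 95% limits of agreement between two methods:
    bias +/- 1.96 * SD of the paired differences."""
    d = np.asarray(device, float) - np.asarray(reference, float)
    bias = d.mean()
    sd = d.std(ddof=1)  # sample SD of the differences
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical segment volumes (% of total liver) vs. a radiologist reference.
dev = [2.1, 8.4, 12.9, 15.8, 21.0]
ref = [2.0, 8.0, 13.5, 16.0, 20.5]
lo, hi = limits_of_agreement(dev, ref)
```

A narrower interval indicates closer agreement with the gold standard; a bias far from zero indicates a systematic offset.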
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: The document states that the performance testing used "previously acquired in-vivo data from healthy and non-healthy volunteers." It also mentions "Volunteers participating in the performance testing were representative of the intended patient population." However, a specific number for the sample size (N) of these volunteers or images in the test set is not provided.
- Data Provenance: "Previously acquired in-vivo data." The country of origin is not explicitly stated. It can be inferred that the data is likely from the UK, given the submitter's address (Oxford, UK). The data is retrospective, as it was "previously acquired."
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: The document refers to the gold standard as "radiologists." It does not specify the number of radiologists involved in establishing the ground truth.
- Qualifications of Experts: The qualification is "radiologists." No further details on their years of experience or sub-specialty are provided.
4. Adjudication Method for the Test Set
The document states that the "gold standard" is "radiologists." It does not describe any specific adjudication method (e.g., 2+1, 3+1 consensus) used to establish this ground truth. It implies that the radiologists' readings were considered the definitive truth.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The study compared the device's measurements to a "gold standard" (radiologists' assessments), not the performance of human readers with vs. without AI assistance. Therefore, there is no effect size reported for human readers improving with AI assistance.
6. If a Standalone Study Was Done
Yes, a standalone (algorithm only without human-in-the-loop performance) study was done. The performance testing section directly reports on the "Accuracy" and "Precision" of Hepatica v1's measurements (cT1, PDFF, and volumetry) when compared to the gold standard. The device operators are "trained Perspectum internal operators," but the reported metrics are explicitly from the device's output. The statement "The variation introduced by operator measurements are well within the acceptance criteria" also suggests an understanding of the device's standalone performance separate from human interpretation of the reports.
7. The Type of Ground Truth Used
The primary ground truth used for accuracy comparison is expert consensus/interpretation, specifically "radiologists." For cT1 and PDFF, these are quantitative measurements derived from imaging, which radiologists would interpret or measure. For volumetry, it's also based on radiological assessment.
8. The Sample Size for the Training Set
The document does not provide the sample size for the training set. It only mentions the "previously acquired in-vivo data from healthy and non-healthy volunteers" used for performance testing.
9. How the Ground Truth for the Training Set Was Established
Since the training set sample size and its specifics are not mentioned, how its ground truth was established is not described in the provided document.