510(k) Data Aggregation
cvi42 Coronary Plaque Software Application (208 days)
Intended Use
Viewing, post-processing, qualitative and quantitative evaluation of blood vessels and cardiovascular CT images in DICOM format.
Indications for Use
cvi42 Coronary Plaque Software Application is intended to be used for viewing, post-processing, qualitative and quantitative evaluation of cardiovascular computed tomography (CT) images in a Digital Imaging and Communications in Medicine (DICOM) Standard format.
It enables a set of tools to assist physicians in qualitative and quantitative assessment of cardiac CT images to determine the presence and extent of coronary plaques and stenoses, in patients who underwent Coronary Computed Tomography Angiography (CCTA) for evaluation of CAD or suspected CAD.
cvi42 Coronary Plaque's semi-automated machine learning algorithms are intended for an adult population.
cvi42 Coronary Plaque shall be used only for cardiac images acquired from a CT scanner. It shall be used by qualified medical professionals, experienced in examining cardiovascular CT images, for the purpose of obtaining diagnostic information as part of a comprehensive diagnostic decision-making process.
Circle's cvi42 Coronary Plaque Software Application ('cvi42 Coronary Plaque' or 'Coronary Plaque Module', for short) is a Software as a Medical Device (SaMD) that enables the analysis of CT Angiography scans of the coronary arteries of the heart. It is designed to support physicians in the visualization, evaluation, and analysis of coronary vessel plaques through manual or semi-automatic segmentation of vessel lumen and wall to determine the presence and extent of coronary plaques and luminal stenoses, in patients who underwent Coronary Computed Tomography Angiography (CCTA) for the evaluation of coronary artery disease (CAD) or suspected CAD. The device is intended to be used as an aid to the existing standard of care and does not replace existing software applications that physicians use. The Coronary Plaque Module can be integrated into an image viewing software intended for visualization of cardiac images, such as Circle's FDA-cleared cvi42 Software Application. The Coronary Plaque Module does not interface directly with any data collection equipment, and its functionality is independent of the type of vendor acquisition equipment. The analysis results are available on-screen and can be sent to a report or saved for future review.
The Coronary Plaque Module consists of multiplanar reconstruction (MPR) views, curved planar reformation (CPR) and straightened views, and 3D rendering of the original CT data. The Module displays three orthogonal MPR views that the user can freely adjust to any position and orientation. Lines and regions of interest (ROIs) can be manually drawn on these MPR images for quantitative measurements.
The Coronary Plaque Module implements an Artificial Intelligence/Machine Learning (AI/ML) algorithm to detect lumen and vessel wall structures. Further, the module implements post-processing methods to convert coronary artery lumen and vessel wall structures to editable surfaces and detect the presence and type of coronary plaque in the region between the lumen and vessel wall. All surfaces generated by the system are editable and users are advised to verify and correct any errors.
The device allows users to perform the measurements listed in Table 1.
Here's a summary of the acceptance criteria and study details based on the provided FDA 510(k) Clearance Letter for the cvi42 Coronary Plaque Software Application:
1. Table of Acceptance Criteria and Reported Device Performance
| Endpoint | Acceptance Criteria (Implied) | Reported Device Performance | Pass / Fail |
|---|---|---|---|
| Lumen Mean Dice Similarity Coefficient (DSC) | Not explicitly stated; inferred as >= 0.76 | 0.76 | Pass |
| Wall Mean Dice Similarity Coefficient (DSC) | Not explicitly stated; inferred as >= 0.80 | 0.80 | Pass |
| Lumen Mean Hausdorff Distance (HD) | Not explicitly stated; inferred as <= 0.77 mm | 0.77 mm | Pass |
| Wall Mean Hausdorff Distance (HD) | Not explicitly stated; inferred as <= 0.87 mm | 0.87 mm | Pass |
| Total Plaque (TP) Pearson Correlation Coefficient (PCC) | Not explicitly stated; inferred as >= 0.97 | 0.97 | Pass |
| Calcified Plaque (CP) Pearson Correlation Coefficient (PCC) | Not explicitly stated; inferred as >= 0.99 | 0.99 | Pass |
| Non-Calcified Plaque (NCP) Pearson Correlation Coefficient (PCC) | Not explicitly stated; inferred as >= 0.93 | 0.93 | Pass |
| Low-Attenuation Plaque (LAP) Pearson Correlation Coefficient (PCC) | Not explicitly stated; inferred as >= 0.74 | 0.74 | Pass |
Note: The document does not state numerical acceptance criteria for these endpoints; it reports only that results "met Circle's pre-defined acceptance criteria." The reported values are therefore presented as the implied thresholds, on the assumption that each value met or exceeded (or, for the distance metrics, stayed within) the internal criterion for a 'Pass'.
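For readers unfamiliar with these endpoints, the sketch below shows how Dice, Hausdorff distance, and Pearson correlation are conventionally computed from segmentation masks and per-case plaque volumes. It is illustrative only; the letter does not describe Circle's actual implementation, and the function names and example values here are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean segmentation masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff_mm(a: np.ndarray, b: np.ndarray, spacing) -> float:
    """Symmetric Hausdorff distance in mm; uses all foreground voxels,
    scaled to physical coordinates by the voxel spacing."""
    pa = np.argwhere(a) * np.asarray(spacing)
    pb = np.argwhere(b) * np.asarray(spacing)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Agreement on per-case plaque volumes (hypothetical values, in mm^3):
algo = np.array([120.0, 45.0, 300.0])
ref = np.array([115.0, 50.0, 310.0])
pcc = np.corrcoef(algo, ref)[0, 1]  # Pearson correlation coefficient
```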
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not explicitly stated. The document mentions "All data used for validation were not used during the development of the ML algorithms" and "Image information for all samples was anonymized and limited to ePHI-free DICOM headers." However, the specific number of cases or images in the test set is not provided.
- Data Provenance: Sourced from multiple sites, with 100% of the data sampled from US sources. The data consisted of images acquired from major vendors of CT imaging devices.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Three expert annotators were used.
- Qualifications of Experts: Not explicitly stated beyond "expert annotators." The document implies they are experts in coronary vessel and lumen wall segmentation within cardiac CT images.
4. Adjudication Method for the Test Set
The ground truth was established "from three expert annotators." While it doesn't explicitly state "2+1" or "3+1", the use of three annotators suggests a consensus-based adjudication, likely majority vote (e.g., if two out of three agreed, that constituted the ground truth, or a more complex consensus process). The specific method is not detailed.
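As a concrete illustration of the simplest such scheme, the sketch below implements voxel-wise majority voting across three annotator masks. This is an assumption about one plausible method, not a description of the sponsor's actual adjudication process.

```python
import numpy as np

def majority_vote(masks: np.ndarray) -> np.ndarray:
    """Consensus of K binary annotator masks stacked on axis 0: a voxel is
    foreground when more than half the annotators marked it (2 of 3 here)."""
    k = masks.shape[0]
    return masks.sum(axis=0) * 2 > k
```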
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
No. The document states, "No clinical studies were necessary to support substantial equivalence." The evaluation was primarily based on the performance of the ML algorithms against a reference standard established by experts, not on how human readers improved their performance with AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
Yes. The performance evaluation focused on the "performance of the ML-based coronary vessel and lumen wall segmentation algorithm... evaluated against pre-defined acceptance criteria and compared to a reference standard established from three expert annotators." This indicates a standalone performance assessment of the algorithm's output. The device is also described as having "semi-automated machine learning algorithms", implying the user can verify and correct.
7. The Type of Ground Truth Used
Expert Consensus. The ground truth was established "from three expert annotators," indicating that human experts' annotations formed the reference standard against which the algorithm's performance was measured.
8. The Sample Size for the Training Set
Not explicitly stated. The document mentions the ML algorithms "have been trained and tested on images acquired from major vendors of CT imaging devices," but it does not provide the specific sample size for the training set. It only clarifies that the validation data was not used for training.
9. How the Ground Truth for the Training Set Was Established
Not explicitly stated. The document describes how the ground truth for the validation/test set was established (three expert annotators). It does not provide details on how the ground truth for the training set was generated.
StrokeSENS ASPECTS (158 days)
StrokeSENS ASPECTS is a computer-aided diagnosis (CADx) software device used to assist the clinician in the assessment and characterization of brain tissue abnormalities using CT image data.
The Software automatically registers images and uses an Atlas to segment and analyze ASPECTS Regions. StrokeSENS ASPECTS extracts image data from individual voxels in the image to provide analysis and computer analytics and relates the analysis to the atlas defined ASPECTS regions. The imaging features are then synthesized by an artificial intelligence algorithm into a single ASPECT (Alberta Stroke Program Early CT) Score.
StrokeSENS ASPECTS is indicated for evaluation of patients presenting for diagnostic imaging workup with known MCA or ICA occlusion, for evaluation of extent of disease. Extent of disease refers to the number of ASPECTS regions affected which is reflected in the total score. StrokeSENS ASPECTS provides information that may be useful in the characterization of ischemic brain tissue injury during image interpretation (within 12 hours from time last known well).
StrokeSENS ASPECTS provides a comparative analysis to the ASPECTS standard of care radiologist assessment by providing highlighted ASPECTS regions and an automated editable ASPECTS score for clinician review. StrokeSENS ASPECTS presents the original and annotated images for concurrent reads. StrokeSENS ASPECTS additionally provides a visualization of the voxels contributing to the automated ASPECTS score.
Limitations:
- StrokeSENS ASPECTS is not intended for primary interpretation of CT images. It is used to assist physician evaluation.
- StrokeSENS ASPECTS has been validated in patients with known MCA or ICA occlusion prior to ASPECTS scoring.
- Use of StrokeSENS ASPECTS in clinical settings other than brain ischemia within 12 hours from time last known well, caused by known ICA or MCA occlusions, has not been tested.
- StrokeSENS ASPECTS has only been validated and is intended to be used in patient populations aged over 21.
Contraindications:
- StrokeSENS ASPECTS is contraindicated for use on brain scans displaying neurological pathologies other than acute ischemic stroke, such as tumors or abscesses, hemorrhagic transformation, and hematoma.
Cautions:
- Patient Motion: Excessive patient motion leading to artifacts that make the scan technically inadequate.
StrokeSENS ASPECTS is a stand-alone software device that uses machine learning algorithms to automatically process NCCT (non-contrast computed tomography) brain image data to provide an output ASPECTS score based on the Alberta Stroke Program Early CT Score (ASPECTS) guidelines.
The post-processing image results and ASPECTS score are identified based on regional imaging features and overlaid onto brain scan images. StrokeSENS ASPECTS provides an automated ASPECTS score based on the input CT data for the physician. The score includes which ASPECTS regions are identified based on regional imaging features derived from non-contrast computed tomography (NCCT) brain image data. The results are generated based on the Alberta Stroke Program Early CT Score (ASPECTS) guidelines and provided to the clinician for review and verification. At the discretion of the clinician, the scores may be adjusted based on the clinician's judgement.
StrokeSENS ASPECTS can connect with other DICOM-compliant devices, to transfer NCCT scans for software processing.
Results and images can be sent to a PACS via DICOM transfer and can be viewed on a PACS workstation or via the StrokeSENS UI or other DICOM-compatible radiological viewer.
StrokeSENS ASPECTS provides an automated workflow which will automatically process image data received by the system in accordance with pre-configured user DICOM routing preferences.
StrokeSENS ASPECTS principal workflow for NCCT includes the following key steps:
- Receive NCCT DICOM Image
- Automated image analysis and processing to identify and visualize the voxels which have been included in the ASPECTS score (also referred to as a 'heat map' or 'VCTA: Voxels Contributing to ASPECTS Score').
- Automated image analysis and processing to register the subject image to an atlas to segment and highlight ASPECTS regions and to display whether or not each region is qualified as contributing to the ASPECTS score.
- Generation of auto-generated results for review and analysis by users.
- Generation of verified/modified result summary for archiving, once the user verifies or modifies the results.
Once the auto-generated ASPECTS score results are available, the physician is asked to confirm that the case in question is for an ICA or MCA occlusion and is able to modify/verify the ASPECTS regional score. The ASPECTS auto-generated results, including the ASPECTS score, indication of affected side, affected ASPECTS regions and voxel-wise analysis (shown as a heatmap of voxels 'contributing to ASPECTS score'), along with the user-verified/modified result summary can be sent to the Picture Archiving and Communications System (PACS).
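The score arithmetic itself is fixed by the ASPECTS guidelines: ten regions per hemisphere, one point deducted for each region showing early ischemic change. A minimal sketch follows (region labels per the standard Alberta protocol; the function name is hypothetical):

```python
# The ten ASPECTS regions: caudate, lentiform, internal capsule,
# insular ribbon, and cortical territories M1-M6.
ASPECTS_REGIONS = {"C", "L", "IC", "I", "M1", "M2", "M3", "M4", "M5", "M6"}

def aspects_score(affected_regions: set) -> int:
    """Deduct one point from 10 for each affected region."""
    return 10 - len(affected_regions & ASPECTS_REGIONS)

# e.g., early ischemic change in M1 and the insular ribbon -> score 8
assert aspects_score({"M1", "I"}) == 8
```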
Here's an analysis of the acceptance criteria and study that proves the device meets those criteria, based on the provided FDA 510(k) Clearance Letter.
Acceptance Criteria and Device Performance
The provided text details two primary performance studies: Standalone Performance and Clinical Validation (MRMC study), along with a Clinical Validation of Voxels Contributing to ASPECTS (VCTA). The acceptance criteria are implicitly derived from the reported performance benchmarks for these studies.
1. Table of Acceptance Criteria and Reported Device Performance
Standalone Performance
| Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|
| AUC-ROC for region-level Clustered ROC Analysis | 90.9% (95% CI = [88.7%, 93.1%]) |
| Accuracy | 90.6% [89.7%, 91.5%] |
| Sensitivity | 70.6% [69.2%, 72.1%] |
| Specificity | 93.9% [93.2%, 94.7%] |

Clinical Validation (Reader Improvement - MRMC)
| Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|
| Statistically significant improvement in reader AUC with AI assistance vs. without AI assistance | Improvement of 5.7%, from 68.6% (unaided) to 74.3% (aided) (p-value < 0.001) |
| Statistically significant improvement in sensitivity with AI assistance vs. without AI assistance | Improvement of 9.7%, from 41.3% (unaided) to 51.0% (aided) (p-value < 0.001) |
| Statistically significant improvement in specificity with AI assistance vs. without AI assistance | Improvement of 1.6%, from 95.9% (unaided) to 97.5% (aided) (p-value < 0.001) |
| Statistically significant improvement in overall percentage agreement (accuracy) with AI assistance vs. without AI assistance | Improvement of 2.6%, from 89.5% (unaided) to 92.0% (aided) (p-value < 0.001) |
| Increase in inter-reader consistency (Fleiss's Kappa) with AI assistance | Increased by 28.5%, from 32.3% (unaided) to 60.8% (aided) |
| Reduction in variation of performance between readers with AI assistance | The range in AUC between users was narrower with StrokeSENS ASPECTS than unassisted |

Clinical Validation (VCTA)
| Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|
| High Concordance Rate (agreement between device VCTA overlay and expert neuroradiologist assessment of ischemic tissue) | 97.0% (proportion of cases with a consensus score of Fair Concordance or above) |
2. Sample Size and Data Provenance
Standalone Performance Test Set:
- Sample Size: 200 non-contrast CT scans.
- Data Provenance: Patients were from multiple clinical sites, including 77 from Canada, 59 from the United States, 51 from Europe, and 13 from Asia-Australia. The cases are implied to be retrospective, as they are being used for validation after data collection.
MRMC Clinical Validation Test Set:
- Sample Size: 100 non-contrast CT scans.
- Data Provenance: 50% of the patients were from 11 sites in Canada, and the other 50% were from 12 sites in the United States. Implied retrospective, used for validation.
VCTA Clinical Validation Test Set:
- Sample Size: Not explicitly stated, but derived from "the proportion of cases." It would be part of the same test sets or a subset thereof.
3. Number of Experts and Qualifications for Ground Truth
- Standalone Performance: The number of experts who established the ground truth is not explicitly stated; the document refers to an "expert consensus reference standard" for the primary standalone performance assessment. The experts are implied to be medical professionals capable of scoring ASPECTS.
- MRMC Clinical Validation: For the reader study, the ground truth was established through a "reference standard." The text states that "the results showed statistically significant improvements in the agreement between the readers and a reference standard." The number and qualifications of experts establishing this specific reference standard are not detailed, though it's likely a panel of expert radiologists or neurologists.
- VCTA Clinical Validation: "Expert neuroradiologist assessment of ischemic tissue." The number of neuroradiologists involved is not specified, but the term "consensus score" implies more than one. No specific years of experience or board certifications are mentioned.
4. Adjudication Method for the Test Set
- The term "expert consensus reference standard" or "consensus score" (for VCTA) is used, implying an adjudication process to arrive at the ground truth. However, the exact method (e.g., 2+1, 3+1, majority vote, etc.) is not explicitly described for any of the studies.
5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study
- Yes, an MRMC study was done.
- Effect Size of Human Reader Improvement with AI vs. without AI assistance:
- AUC Improvement: A statistically significant improvement of 5.7% from 68.6% (unaided) to 74.3% (aided) (p-value<0.001).
- Sensitivity Improvement: A statistically significant improvement of 9.7% from 41.3% (unaided) to 51.0% (aided) (p-value<0.001).
- Specificity Improvement: A statistically significant improvement of 1.6% from 95.9% (unaided) to 97.5% (aided) (p-value<0.001).
- Overall Accuracy Improvement: Improved by 2.6% from 89.5% (unaided) to 92.0% (aided) (p-value<0.001).
- Inter-reader Consistency (Fleiss's Kappa): Increased by 28.5% from 32.3% (unaided) to 60.8% (aided); see the sketch after this list for how the statistic is computed.
- Reduction in performance variation: The range in AUC between users was narrower with StrokeSENS ASPECTS than unassisted.
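Fleiss's kappa, the consistency statistic quoted above, measures agreement among a fixed number of raters beyond what chance would produce. A self-contained sketch of the standard formula (not the study's actual analysis code):

```python
import numpy as np

def fleiss_kappa(ratings: np.ndarray) -> float:
    """ratings[i, j] = number of raters assigning subject i to category j;
    every row must sum to the same number of raters n."""
    n_subjects, _ = ratings.shape
    n = ratings[0].sum()
    p_j = ratings.sum(axis=0) / (n_subjects * n)        # category prevalence
    p_i = (np.square(ratings).sum(axis=1) - n) / (n * (n - 1))
    p_bar, p_e = p_i.mean(), np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)
```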
6. Standalone (Algorithm Only) Performance Study
- Yes, a standalone performance study was done.
- The results are detailed in the "Standalone Performance" section, including AUC-ROC (90.9%), accuracy (90.6%), sensitivity (70.6%), and specificity (93.9%).
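For reference, the reported accuracy, sensitivity, and specificity follow directly from region-level confusion counts; a minimal sketch with a hypothetical function name:

```python
import numpy as np

def confusion_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Binary region-level metrics from ground-truth and predicted labels."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),  # affected regions correctly flagged
        "specificity": tn / (tn + fp),  # unaffected regions correctly cleared
    }
```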
7. Type of Ground Truth Used
- Expert Consensus: The primary ground truth for the standalone performance and MRMC studies was based on an "expert consensus reference standard."
- Neuroradiologist Assessment: For the VCTA validation, ground truth was derived from "expert neuroradiologist assessment of ischemic tissue."
- The ground truth in all cases relates to the ASPECTS score and identification of affected ASPECTS regions, which is a radiological assessment. It does not mention pathological or long-term outcomes data as the primary ground truth.
8. Sample Size for the Training Set
- Not explicitly stated in the provided text. The document focuses on the performance and validation datasets. The number of cases used to train the "machine learning algorithms" or "artificial intelligence algorithm" is not disclosed in this summary.
9. How Ground Truth for Training Set Was Established
- Not explicitly stated in the provided text. Given it's an AI/ML algorithm, the training set would also require labeled data (ground truth). However, the method for establishing this ground truth (e.g., expert consensus, single expert, automated labeling) is not described in this document.
cvi42 Software Application (29 days)
cvi42 is intended to be used for viewing, post-processing, qualitative and quantitative evaluation of cardiovascular magnetic resonance (MR) images and computed tomography (CT) images in a Digital Imaging and Communications in Medicine (DICOM) Standard format.
It enables:
- Importing cardiac MR & CT Images in DICOM format.
- Supporting clinical diagnostics by qualitative analysis of cardiac MR & CT images using display functionality such as panning, windowing, zooming, navigation through series/slices and phases, 3D reconstruction of images including multiplanar reconstructions of the images.
- Supporting clinical diagnostics by quantitative measurement of the heart and adjacent vessels in cardiac MR & CT images, specifically signal intensity, distance, area, volume, and mass.
- Supporting clinical diagnostics by using area and volume for measuring cardiac function and derived parameters cardiac output and cardiac index in long axis and short axis cardiac MR & CT images.
- Flow quantifications based on velocity encoded cardiac MR images (including two and four dimensional flow analysis).
- Strain analysis of cardiac MR images by providing measurements of 2D LV myocardial function (displacement, velocity, strain, strain rate, time to peak, and torsion).
- Supporting clinical diagnostics of cardiac CT images including quantitative measurements of calcified plaques in the coronary arteries (calcium scoring), specifically Agatston and volume and mass calcium scores, visualization and quantitative measurement of heart structures including coronaries, femoral, aortic, and mitral valves.
- Evaluating CT and MR images of blood vessels. Combining digital image processing and visualization tools such as multiplanar reconstruction (MPR), thin/thick maximum intensity projection (MIP), inverted MIP thin/thick, volume rendering technique (VRT), curved planar reformation (CPR), processing tools such as bone removal (based on both single energy and dual energy), table removal, and evaluation tools (vessel centerline calculation, lumen calculation, stenosis calculation) and reporting tools (lesion location, lesion characteristics) and key images. The software package is designed to support the physician in confirming the presence of physician-identified lesions in blood vessels and evaluation, documentation and follow up of any such lesions.
cvi42 shall be used by qualified medical professionals, experienced in examining and evaluating cardiovascular MR or CT images, for the purpose of obtaining diagnostic information as part of a comprehensive diagnostic decision-making process. cvi42 is a software application that can be used as a stand-alone product or in a networked environment.
The target population for cvi42 and its manual workflows is not restricted; however, cvi42's semiautomated machine learning algorithms, included in the MR Function and CORE CT modules, are intended for an adult population. Further, image acquisition by a cardiac MR or CT scanner may limit the use of the software for certain sectors of the general public.
cvi42 shall not be used to view or analyze images of any part of the body except the cardiac images acquired from a cardiovascular magnetic resonance or computed tomography scanner.
cvi42 Software Application ("cvi42") is a software as a medical device (SaMD) that is intended for evaluating CT and MR images of the cardiovascular system. Combining digital image processing, visualization, quantification, and reporting tools, cvi42 is designed to support physicians in the evaluation and analysis of cardiovascular imaqing studies.
cvi42 uses machine learning techniques to aid in semi-automatic contouring of regions of interest in cardiac MR or CT images.
The data used to train these machine learning algorithms were sourced from multiple clinical sites from urban centers and from different countries. When selecting data for training, the importance of model generalization was considered and data was selected such that a good distribution of patient demographics, scanner, and image parameters were represented. The separation into training versus validation datasets is made on the study level to ensure no overlap between the two sets. As such, different scans from the same study were not split between the training and validation datasets. None of the cases used for model validation were used for training the machine learning models.
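A study-level split of this kind can be expressed with a group-aware splitter, where the group key is the parent study so that no study contributes scans to both sides. The sketch below uses scikit-learn's GroupShuffleSplit with placeholder data; it illustrates the stated constraint, not Circle's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

scans = np.arange(1000)                       # placeholder scan indices
study_ids = np.random.randint(0, 400, 1000)   # parent study of each scan

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, val_idx = next(splitter.split(scans, groups=study_ids))

# No study straddles the boundary: scans from one study stay together.
assert set(study_ids[train_idx]).isdisjoint(study_ids[val_idx])
```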
cvi42 has a graphical user interface which allows users to analyze cardiac MR & CT images qualitatively and quantitatively.
cvi42 accepts uploaded data files previously acquired by MR or CT scanners or other data collection equipment but does not interface directly with such equipment. Its functionality is independent of the type of vendor acquisition equipment. The analysis results are available on-screen and can be saved with the software for future review.
Here's a breakdown of the acceptance criteria and study details for the cvi42 Software Application, based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
For cvi42 Auto (MR-CMR Function, CORE CT Coronary, and CORE CT-Calcium):
| Module | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| CMR Function Analysis | Classification accuracy based on True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN); mean volume prediction error (MAE) for Short Axis (SAX) and Long Axis (LAX) volumetric measurements | Series classification performance: 97% - 100%; volumetric MAE (SAX): 7% - 10%; volumetric MAE (LAX): 5% - 9% |
| Calcium Analysis | Classification accuracy based on TP, TN, FP, and FN | Classification performance: 86% - 99% |
| Coronary Analysis | Centerline quality and performance based on TP and FN; success rate for relevant masks | Centerline performance: 82% - 94%; mask performance: 98% - 100% |
For CORE CT (CT Function Module):
| Metric | Acceptance Criteria | Reported Device Performance (compared to a reference standard established from three expert readers) |
|---|---|---|
| LV Cavity Segmentation | No explicit numerical criteria stated; implied to be within acceptable clinical limits for MAE, Dice, HD, and EF bias compared to expert readers | MAE: less than 10% difference; Dice coefficient: above 86%; 3D Hausdorff distance (HD): below 9.5 mm; EF bias: 1.3%, 95% confidence interval [-12, 14] |
| RV Cavity Segmentation | No explicit numerical criteria stated; implied to be within acceptable clinical limits for MAE, Dice, HD, and EF bias compared to expert readers | MAE: less than 18%; Dice coefficient: above 85%; HD: below 18 mm; EF bias: -5.5%, 95% confidence interval [-15, 4.4] |
| LV Myocardium Segmentation | No explicit numerical criteria stated; implied to be within acceptable clinical limits for MAE, Dice, and HD compared to expert readers | MAE: less than 17%; Dice coefficient: above 82%; HD: below 15 mm |
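As context for the EF bias figures in this table: ejection fraction is derived from the segmented cavity volumes, and a mean bias with a roughly symmetric interval of that width is consistent with a Bland-Altman style analysis (mean difference with 95% limits of agreement). The letter does not specify the method; the sketch below is one plausible reading, with hypothetical function names.

```python
import numpy as np

def ejection_fraction(edv: np.ndarray, esv: np.ndarray) -> np.ndarray:
    """EF (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv - esv) / edv

def bias_and_limits(algo: np.ndarray, ref: np.ndarray):
    """Mean difference and 95% limits of agreement (Bland-Altman style)."""
    d = algo - ref
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```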
2. Sample Size for the Test Set and Data Provenance
For cvi42 Auto (MR-CMR Function, CORE CT Coronary, and CORE CT-Calcium):
- Sample Size: n = 235 anonymized patient images acquired from major vendors of MR and CT imaging devices.
- 70 samples for Coronary Analysis
- 102 samples for Calcium analysis
- 63 samples for SAX Function contouring
- 63 samples for each of 2-CV, 3-CV, and 4-CV LAX function contouring
- 252 samples for Function Classification
- Data Provenance: Images were acquired from major vendors of MR and CT imaging devices. At least 50% of the data came from a U.S. population. The document does not specify if the data was retrospective or prospective, but the phrasing "were used for the validation" implies retrospective use of existing data.
For CORE CT (CT Function Module):
- Sample Size: Not explicitly stated, but the validation data was sourced from 9 different sites.
- Data Provenance: Sourced from 9 different sites, with 90% of the data sampled from US sources. The document does not specify if the data was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
For CORE CT (CT Function Module):
- Number of Experts: Three expert readers.
- Qualifications: "Expert readers" – specific qualifications (e.g., years of experience, board certification) are not detailed in the provided text.
For cvi42 Auto (MR-CMR Function, CORE CT Coronary, and CORE CT-Calcium), the document does not explicitly state the number of experts used to establish ground truth for the test set. It does mention expert readers for the comparison in the CORE CT section.
4. Adjudication Method for the Test Set
For CORE CT (CT Function Module):
- The "reference standard" was "established from three expert readers." The specific adjudication method (e.g., majority vote, specific consensus process) is not detailed, but it implies a consensus or agreement among these three experts.
For cvi42 Auto (MR-CMR Function, CORE CT Coronary, and CORE CT-Calcium):
- The document does not explicitly state the adjudication method for establishing ground truth for these modules.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- No, an MRMC comparative effectiveness study was not explicitly stated to have been done to measure human reader improvement with AI vs. without AI assistance. The performance tests described are primarily focused on the standalone performance of the AI algorithms (Machine Learning Derived Outputs) compared to a ground truth or a reference standard established by experts.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Yes, standalone performance was assessed. The sections titled "Validation of Machine Learning Derived Outputs" and "CORE CT: CT Function" describe the evaluation of the algorithms' performance (e.g., classification accuracy, MAE, Dice coefficient, HD, EF bias) against pre-defined acceptance criteria and a reference standard made by experts, without human-in-the-loop assistance for the AI's output generation. This is a standalone assessment of the algorithms.
7. The Type of Ground Truth Used
- Expert Consensus: For the CORE CT module, the ground truth (reference standard) used for evaluation was established by "three expert readers." This implies an expert consensus or expert-derived ground truth.
- For other modules (cvi42 Auto), the document states that performance was evaluated against "pre-defined acceptance criteria" but does not explicitly describe how the ground truth for those criteria was established, though it likely involved expert annotations or established clinical metrics.
8. The Sample Size for the Training Set
- The document states: "The data used to train these machine learning algorithms were sourced from multiple clinical sites from urban centers and from different countries." However, the specific sample size for the training set is not provided for any of the modules. It only mentions that the data was selected for good distribution of patient demographics, scanner, and image parameters.
9. How the Ground Truth for the Training Set Was Established
- The document does not explicitly describe how the ground truth for the training set was established. It only states that the training data "were sourced from multiple clinical sites from urban centers and from different countries." It also notes that "the separation into training versus validation datasets is made on the study level to ensure no overlap between the two sets." This suggests that the training data would have had associated ground truth data (e.g., expert annotations, clinical measurements) to enable supervised learning, but the method of establishing that ground truth is not detailed.
TruPlan Computed Tomography (CT) Imaging Software (197 days)
TruPlan enables visualization and measurement of structures of the heart and vessels for pre-procedural planning and sizing for the left atrial appendage closure (LAAC) procedure.
To facilitate the above, TruPlan provides general functionality such as:
- Segmentation of cardiovascular structures
- Visualization and image reconstruction techniques: 2D review, Volume Rendering, MPR
- Simulation of TEE views, ICE views, and fluoroscopic rendering
- Measurement and annotation tools
- Reporting tools
The TruPlan Computed Tomography (CT) Imaging Software application (referred to herein as "TruPlan") is a software as a medical device (SaMD) that helps qualified users with image-based pre-operative planning of the Left Atrial Appendage Closure (LAAC) procedure using CT data. The TruPlan device is designed to support the anatomical assessment of the Left Atrial Appendage (LAA) prior to the LAAC procedure. This includes the assessment of the LAA size, shape, and relationships with adjacent cardiac and extracardiac structures. This assessment helps the physician determine the size of a closure device needed for the LAAC procedure. The TruPlan application is a visualization software and has basic measurement tools. The device is intended to be used as an aid to the existing standard of care. It does not replace the existing software applications physicians use for planning the Left Atrial Appendage Closure procedure.
Pre-existing CT images are uploaded into the TruPlan application manually by the end-user. The images can be viewed in their original CT form as well as in simulated views. The software displays the views in a modular format as follows:
- LAA
- Fluoro (fluoroscopy, simulation)
- Trans Esophageal Echo (TEE, simulation)
- Intra Cardiac Echography (ICE, simulation)
- Thrombus
- Multiplanar Reconstruction (MPR)
Each of these views offers the user visualization and quantification capabilities for pre-procedural planning of the Left Atrial Appendage Closure procedure; none are intended for diagnosis. The quantification tools are based on user-identified regions of interest and are user-modifiable. The device allows users to perform the measurements (all done on MPR viewers) listed in Table 1.
Additionally, the device generates a 3D rendering of the heart (including left ventricle, left atrium, and LAA) using machine learning methodology. The 3D rendering is for visualization purposes only. No measurements or annotation can be done using this view.
TruPlan also provides reporting functionality to capture screenshots and measurements and to store them as a PDF document.
TruPlan is installed as a standalone software onto the user's Windows PC (desktop) or laptop (Windows is the only supported operating system). TruPlan does not operate on a server or cloud.
The provided text does not contain the detailed information required to describe the acceptance criteria and the comprehensive study that proves the device meets those criteria.
While the document (a 510(k) summary) mentions "Verification and validation activities were conducted to verify compliance with specified design requirements" and "Performance testing was conducted to verify compliance with specified design requirements," it does not provide any specific quantitative acceptance criteria or the actual performance data. It also states "No clinical studies were necessary to support substantial equivalence," which means there was no multi-reader multi-case (MRMC) study or standalone performance study in a clinical setting with human readers.
Therefore, I cannot fulfill most of the requested points from the input. However, based on the information provided, I can infer and state what is missing or not applicable.
Here's a breakdown of the requested information and what can/cannot be extracted from the provided text:
1. A table of acceptance criteria and the reported device performance
Cannot be provided. The document states that "performance testing was conducted to verify compliance with specified design requirements," and "Validated phantoms were used for assessing the quantitative measurement output of the device." However, it does not specify what those "specified design requirements" (i.e., acceptance criteria) were, nor does it report the actual quantitative performance results (e.g., accuracy, precision) of the device against those criteria.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
Cannot be provided. The document refers to "Validated phantoms" for quantitative measurement assessment. This implies synthetic or controlled data rather than patient data. No details are given regarding the number of phantoms used or their characteristics. There is no mention of "test set" in the context of patient data, data provenance, or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
Cannot be provided. Since no clinical test set with patient data is described, there's no mention of experts establishing ground truth for such a set. The testing was done on "validated phantoms" for "quantitative measurement output," suggesting a comparison against known ground truth values inherent to the phantom design rather than expert consensus on medical images.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
Cannot be provided. Given the lack of a clinical test set and expert review, no adjudication method is mentioned or applicable.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
No, an MRMC study was NOT done. The document explicitly states: "No clinical studies were necessary to support substantial equivalence." This means there was no MRMC study to show human reader improvement with AI assistance. The submission relies on "performance testing and predicate device comparisons" for substantial equivalence.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Likely yes, for certain aspects, but specific performance data is not provided. The document mentions "Validated phantoms were used for assessing the quantitative measurement output of the device." This implies an algorithmic, standalone assessment of the device's measurement capabilities against the known values of the phantoms. However, the exact methodology, metrics, and results of this standalone performance are not detailed.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth for the quantitative measurement assessment was based on "Validated phantoms." This means the ground truth for measurements (e.g., distances, areas) would be the known, precisely manufactured dimensions of the phantoms, not expert consensus, pathology, or outcomes data.
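Phantom-based validation of this sort typically compares the device's measurements against the phantom's known manufactured dimensions. The sketch below illustrates the idea with invented values and an invented tolerance; the actual requirements and results are not disclosed in the letter.

```python
import numpy as np

known_mm = np.array([10.0, 20.0, 30.0])      # manufactured phantom dimensions
measured_mm = np.array([10.2, 19.8, 30.1])   # hypothetical TruPlan outputs

pct_err = 100.0 * np.abs(measured_mm - known_mm) / known_mm
assert (pct_err <= 5.0).all()  # example tolerance only; not the real criterion
```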
8. The sample size for the training set
Cannot be provided. The document mentions that the device "generates a 3D rendering of the heart (including left ventricle, left atrium, and LAA) using machine learning methodology." This indicates that a training set was used for this specific function. However, the size of this training set is not mentioned anywhere in the provided text.
9. How the ground truth for the training set was established
Cannot be provided. While it's implied that there was a training set for the "machine learning methodology" used for 3D rendering, the document does not explain how the ground truth for this training set was established.
ct42 Cardiac Computed Tomography (CT) Software (99 days)
ct42 is intended to be used for viewing, post-processing and quantitative evaluation of cardiovascular computed tomography (CT) images in a Digital Imaging and Communications in Medicine (DICOM) Standard format. It enables:
- Importing cardiac CT images in DICOM format.
- Supporting clinical diagnostics by qualitative analysis of the cardiac CT images using display functionality such as panning, windowing, zooming, navigation through series/slices and phases, and 3D reconstruction of images including multiplanar reconstructions of the images.
- Supporting clinical diagnostics by quantitative measurement of the heart and adjacent vessels in cardiac CT images, specifically distance, area, volume and mass.
- Supporting clinical diagnostics by using area and volume measurements for measuring LV function and derived parameters cardiac output and cardiac index in long axis and short axis cardiac CT images.
- Supporting clinical diagnostics by quantitative measurements of calcified plaques in the coronary arteries (calcium scoring), specifically Agatston and volume and mass calcium scores.

It shall be used by qualified medical professionals, experienced in examining and evaluating cardiovascular CT images, for the purpose of obtaining diagnostic information as part of a comprehensive diagnostic decision-making process. ct42 is a software application that can be used as a stand-alone product or in a networked environment. The target population for ct42 is not restricted; however, image acquisition by a cardiac CT scanner may limit the use of the device for certain sectors of the general public. ct42 shall not be used to view or analyze images of any part of the body except the cardiac CT images acquired from a cardiovascular CT scanner.
ct42 is a dedicated software application for evaluating cardiovascular images in a DICOM Standard format. The software can be used as a stand-alone product that can be integrated into a hospital or private practice environment. ct42 has a graphical user interface which allows users to qualitatively and quantitatively analyze cardiac CT images for volume/mass, and calcium scoring. It provides a comprehensive set of tools for the analysis of Cardiovascular Computed Tomography (CT) images.
Here's an analysis of the provided text regarding the acceptance criteria and study for the ct42 Cardiac Computed Tomography (CT) Software:
Note: The provided document is a 510(k) Summary, which typically focuses on demonstrating substantial equivalence to a predicate device rather than presenting a detailed performance study with acceptance criteria in the manner one might find in a clinical trial report. As such, some requested information (like specific numerical acceptance criteria and a detailed study proving the device meets those criteria) is not explicitly present in the provided text. The document states that "The successful non-clinical testing demonstrates the safety and effectiveness of the ct42 when used for the defined indications for use and demonstrates that the device for which the 510(k) is submitted performs as well as or better than the legally marketed predicate device."
Acceptance Criteria and Reported Device Performance
The document does not explicitly state numerical acceptance criteria in a table format with corresponding reported performance for the ct42 software. Instead, it relies on demonstrating equivalence to a predicate device (Ziosoft - Cardiac Function Analysis & Calcium Scoring, K083446) by possessing similar features and functionalities. The "Conclusion" section indirectly serves as the statement of meeting acceptance, asserting that "ct42... demonstrates that the device... performs as well as or better than the legally marketed predicate device."
Here's a table based on the "Device Comparison Table" provided, highlighting the features where equivalence is drawn, which implicitly serve as the "acceptance criteria" for functionality:
| Acceptance Criteria (Feature/Functionality) | Reported Device Performance (ct42) |
|---|---|
| Post processes ECG gated - Cardiac CT images | YES |
| Image viewer functionality | YES |
| Left ventricular ejection fraction | YES |
| End diastolic volume | YES |
| End systolic volume | YES |
| Stroke volume | YES |
| Cardiac output | YES |
| Cardiac Index | YES |
| Wall thickness | YES |
| Wall thickness ratio | YES |
| Wall movement | YES |
| Volume Curve | YES |
| Calcium Scoring | YES |
| Evaluates calcified plaque in the coronary arteries | YES |
| Agatston calcium score | YES |
| Volume calcium score | YES |
| Calcium mass/density calculations | YES (calculates mass) |
| DICOM compliant | YES |
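The Agatston score named in the table follows a published method: within each axial slice, every calcified lesion (voxels at or above 130 HU) contributes its area in mm² multiplied by a density weight set by the lesion's peak attenuation (1 for 130-199 HU, 2 for 200-299, 3 for 300-399, 4 for >=400). The sketch below implements that standard weighting; it is not ct42's actual code, and the minimum-area filter is a common convention rather than a documented ct42 parameter.

```python
import numpy as np
from scipy import ndimage

def agatston_slice(hu: np.ndarray, pixel_area_mm2: float,
                   min_area_mm2: float = 1.0) -> float:
    """Agatston contribution of one axial slice (130-HU threshold)."""
    labels, n = ndimage.label(hu >= 130)
    score = 0.0
    for i in range(1, n + 1):
        lesion = labels == i
        area = lesion.sum() * pixel_area_mm2
        if area < min_area_mm2:
            continue  # skip sub-millimetre specks, a common noise filter
        peak = hu[lesion].max()
        weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
        score += area * weight
    return score

# The total Agatston score is the sum of agatston_slice(...) over all slices.
```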
Additional Information on the Study:
1. Sample size used for the test set and the data provenance:
- Sample Size: Not explicitly stated in the provided 510(k) summary. The document mentions "non-clinical testing" and testing "according to the specifications that are documented in a Master Software Test Plan," but specific details about the number of cases or images in the test set are absent.
- Data Provenance: Not specified. It's unclear if the data was retrospective or prospective, or its country of origin.
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not specified. The document does not detail how ground truth was established for any testing.
3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not specified.
4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No. The document presents a substantial equivalence claim based on feature comparison and non-clinical software testing. It does not describe a MRMC comparative effectiveness study involving human readers and AI assistance. The device itself is a software application for viewing, post-processing, and quantitative evaluation, implying it's a tool for human professionals, but no study on human performance improvement is mentioned.
5. If a standalone (i.e., algorithm only without human-in-the-loop performance) study was done:
- The submission focuses on "non-clinical testing" to demonstrate safety and effectiveness and equivalence to a predicate device. This implies testing the algorithm's functionality and accuracy in various measurements (distance, area, volume, mass, calcium scoring) in a standalone manner, but the specifics of how this was done (e.g., comparing algorithm outputs to known truths or another software's output) are not detailed. The term "standalone" performance in the context of an FDA submission for this type of device usually refers to the accuracy of its quantitative measurements rather than a human-like diagnostic output.
6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not explicitly stated. Given the nature of the device (quantitative measurements for cardiac CT), ground truth for accuracy testing would typically involve comparisons to:
- Manual measurements by experts (expert consensus)
- Measurements from another validated software/method
- Perhaps in some cases, correlation with pathology or invasive measurements, though this is less common for software functionality claims.
- The document does not detail which of these, if any, were used.
7. The sample size for the training set:
- Not applicable/Not mentioned. The provided document is for a traditional 510(k) submission for ct42. It describes a software application for quantitative analysis of CT images. It does not indicate that this device utilizes machine learning or AI models that would require a distinct "training set" in the modern sense. The "testing" mentioned refers to traditional software validation and verification.
8. How the ground truth for the training set was established:
- Not applicable, as no training set (for machine learning) is implied or mentioned for this device.
cmr42 Cardiac MR Software Application (72 days)
cmr42 is intended to be used for viewing, post-processing and quantitative evaluation of cardiovascular magnetic resonance (MR) images in a Digital Imaging and Communications in Medicine (DICOM) Standard format. It enables:
Importing Cardiac MR Images in DICOM format
Supporting clinical diagnostics by qualitative analysis of the cardiac MR images using display functionality such as panning, windowing, zooming, navigation through series/slices and phases.
Supporting clinical diagnostics by quantitative measurement of the heart and adjacent vessels in cardiac MR images, specifically distance, area, volume and mass
Supporting clinical diagnostics by using area and volume measurements for measuring LV function and derived parameters cardiac output and cardiac index in long axis and short axis cardiac MR images.
Flow quantifications based on velocity encodes images
It shall be used by qualified medical professionals, experienced in examining and evaluating cardiovascular MR images, for the purpose of obtaining diagnostic information as part of a comprehensive diagnostic decision-making process. cmr42 is a software application that can be used as a stand-alone product or in a networked environment.
The target population for the cmr42 is not restricted, however the image acquisition by a cardiac magnetic resonance scanner may limit the use of the device for certain sectors of the general public.
cmr42 shall not be used to view or analyze images of any part of the body except the cardiac magnetic resonance images acquired from a cardiovascular magnetic resonance scanner.
cmr42 is a dedicated software application for evaluating cardiovascular images in a DICOM Standard format. The software can be used as a stand-alone product that can be integrated into a hospital or private practice environment. cmr42 has a graphical user interface which allows users to qualitatively and quantitatively analyze cardiac images for volume/mass, and flow quantification. It provides a comprehensive set of tools for the analysis of Cardiovascular Magnetic Resonance (CMR) images.
The provided 510(k) summary for the cmr42 Cardiac MR Software Application (K082628) describes its intended use and a general statement about testing but does not provide a detailed table of acceptance criteria or the specific results of a study proving the device meets these criteria.
Instead, the summary states:
"Description and Testing: cmr42 have been tested according to the specifications that are Conclusion of Testing documented in a Master Software Test Plan. Testing is an integral part of Circle Cardiovascular Imaging Inc software development process as described in the company's product development process."
"Conclusion: The successful non-clinical testing demonstrates the safety and effectiveness of the cm42 when used for the defined indications for use and demonstrates that the device for which the 510(k) is submitted performs as well as or better than the legally marketed predicate device."
This indicates that internal testing was conducted to specifications but the specific acceptance criteria and detailed performance metrics are not publicly available in this document. The submission focuses on demonstrating substantial equivalence to predicate devices (MRI-MAGNETIC RESONANCE ANALYTICAL SOFTWARE SYSTEM (MASS) K994283 and MRI-Flow Analytical Software K994282) rather than publishing detailed performance studies against explicitly stated acceptance criteria.
Therefore, for most of the requested information, the answer is "Not provided in the given document."
Here's a breakdown of what can be extracted or inferred based on the supplied text:
1. A table of acceptance criteria and the reported device performance:
| Acceptance Criteria (Inferred from device description and comparison) | Reported Device Performance (General Statement) |
|---|---|
| Qualitative Analysis: - Viewing, panning, windowing, zooming - Navigation through series/slices and phases | Demonstrated safety and effectiveness for qualitative analysis. |
| Quantitative Measurements: - Distance, area, volume, mass of heart and adjacent vessels - LV function, cardiac output, cardiac index from long and short axis images | Demonstrated safety and effectiveness for quantitative measurements; performs "as well as or better than" predicate devices. |
| Flow Quantifications: - Based on velocity encoded images | Demonstrated safety and effectiveness for flow quantifications; performs "as well as or better than" predicate devices. |
| Image Compatibility: - Import DICOM-format Cardiac MR Images - Images from all MRI scanner vendors supported | Functionality confirmed. |
| User Interface and Functionality: - Graphical user interface - Comprehensive tool sets - Dynamic display of ventricular contractions - DICOM compliant networking - Reports with visualization and quantitative parameters | Functionality confirmed and described as having "task specific modules with corresponding tool sets." |
| General Performance: - Safety and Effectiveness | Non-clinical testing demonstrates safety and effectiveness. Performs "as well as or better than" the legally marketed predicate devices. |
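The derived parameters in the table above reduce to simple arithmetic on the measured volumes: stroke volume is EDV minus ESV, cardiac output is stroke volume times heart rate, and cardiac index normalizes output by body surface area. A minimal sketch with hypothetical function names and example numbers:

```python
def stroke_volume(edv_ml: float, esv_ml: float) -> float:
    return edv_ml - esv_ml                  # mL per beat

def cardiac_output(sv_ml: float, hr_bpm: float) -> float:
    return sv_ml * hr_bpm / 1000.0          # L/min

def cardiac_index(co_l_min: float, bsa_m2: float) -> float:
    return co_l_min / bsa_m2                # L/min per m^2

# e.g., EDV 150 mL, ESV 60 mL at 70 bpm, BSA 1.9 m^2:
co = cardiac_output(stroke_volume(150.0, 60.0), 70.0)  # 6.3 L/min
ci = cardiac_index(co, 1.9)                            # ~3.3 L/min/m^2
```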
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
- Not provided in the given document. The document only mentions "non-clinical testing" and testing "according to the specifications that are documented in a Master Software Test Plan." Specifics about the test set, its size, or its provenance are not included.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
- Not provided in the given document. The document does not describe the methodologies for establishing ground truth or the involvement of experts in the testing phase. The device itself is intended for use by "qualified medical professionals, experienced in examining and evaluating cardiovascular MR images."
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not provided in the given document. No information on adjudication methods for establishing ground truth or evaluating test results is given.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No, an MRMC comparative effectiveness study is not mentioned as part of this 510(k) submission. The submission focuses on the standalone performance of the software in comparison to predicate devices, not on human reader performance with or without AI assistance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Yes, a standalone evaluation was implicitly done. The "cmr42" is described as a "software application" for viewing, post-processing, and quantitative evaluation. The testing described is "non-clinical testing" to demonstrate its safety and effectiveness and that it performs as well as or better than predicate devices. This implies evaluating the software's performance on its own capabilities.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
- Not explicitly stated in the given document. Since the device performs quantitative measurements (distance, area, volume, mass), the ground truth for testing would likely involve highly accurate reference measurements, possibly derived from manual expert measurements, phantoms, or other validated methods. However, the specific type of ground truth is not detailed.
8. The sample size for the training set:
- Not applicable / Not provided. The cmr42 is described as an "Image Processing System" and "software application" for analysis. At the time of this 2008 submission, the focus for such devices was primarily on deterministic algorithms and user-driven analysis tools rather than AI/machine learning models that require distinct training sets. Therefore, a "training set" in the modern AI sense is unlikely to have been a component of its development or evaluation, and no such information is provided.
9. How the ground truth for the training set was established:
- Not applicable. As a training set is not mentioned, the method for establishing its ground truth is also not provided.