510(k) Data Aggregation
(158 days)
Circle Cardiovascular Imaging Inc.
StrokeSENS ASPECTS is a computer-aided diagnosis (CADx) software device used to assist the clinician in the assessment and characterization of brain tissue abnormalities using CT image data.
The software automatically registers images and uses an atlas to segment and analyze the ASPECTS regions. StrokeSENS ASPECTS extracts image data from individual voxels to compute imaging analytics and relates that analysis to the atlas-defined ASPECTS regions. The imaging features are then synthesized by an artificial intelligence algorithm into a single ASPECTS (Alberta Stroke Program Early CT Score).
StrokeSENS ASPECTS is indicated for evaluation of patients presenting for diagnostic imaging workup with known MCA or ICA occlusion, for evaluation of extent of disease. Extent of disease refers to the number of ASPECTS regions affected which is reflected in the total score. StrokeSENS ASPECTS provides information that may be useful in the characterization of ischemic brain tissue injury during image interpretation (within 12 hours from time last known well).
StrokeSENS ASPECTS provides a comparative analysis to the ASPECTS standard of care radiologist assessment by providing highlighted ASPECTS regions and an automated editable ASPECTS score for clinician review. StrokeSENS ASPECTS presents the original and annotated images for concurrent reads. StrokeSENS ASPECTS additionally provides a visualization of the voxels contributing to the automated ASPECTS score.
Limitations:
- StrokeSENS ASPECTS is not intended for primary interpretation of CT images. It is used to assist physician evaluation.
- StrokeSENS ASPECTS has been validated in patients with known MCA or ICA occlusion prior to ASPECTS scoring.
- Use of StrokeSENS ASPECTS in clinical settings other than brain ischemia within 12 hours from time last known well, caused by known ICA or MCA occlusions, has not been tested.
- StrokeSENS ASPECTS has only been validated and is intended to be used in patient populations aged over 21.
Contraindications:
- StrokeSENS ASPECTS is contraindicated for use on brain scans displaying neurological pathologies other than acute ischemic stroke, such as tumors or abscesses, hemorrhagic transformation, and hematoma.
Cautions:
- Patient Motion: Excessive patient motion can produce artifacts that make the scan technically inadequate.
StrokeSENS ASPECTS is a stand-alone software device that uses machine learning algorithms to automatically process NCCT (non-contrast computed tomography) brain image data to provide an output ASPECTS score based on the Alberta Stroke Program Early CT Score (ASPECTS) guidelines.
The post-processing image results and ASPECTS score are derived from regional imaging features and overlaid onto the brain scan images. StrokeSENS ASPECTS provides the physician with an automated ASPECTS score based on the input CT data, identifying the affected ASPECTS regions from regional imaging features derived from the non-contrast computed tomography (NCCT) brain image data. The results are generated according to the Alberta Stroke Program Early CT Score (ASPECTS) guidelines and provided to the clinician for review and verification; at the clinician's discretion, the scores may be adjusted based on clinical judgement.
StrokeSENS ASPECTS can connect with other DICOM-compliant devices, to transfer NCCT scans for software processing.
Results and images can be sent to a PACS via DICOM transfer and can be viewed on a PACS workstation or via the StrokeSENS UI or other DICOM-compatible radiological viewer.
StrokeSENS ASPECTS provides an automated workflow which will automatically process image data received by the system in accordance with pre-configured user DICOM routing preferences.
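The routing behavior described above — automatically processing studies that match pre-configured DICOM preferences — can be sketched as a simple header-matching rule. This is a hypothetical illustration: the field names mirror standard DICOM attributes, but the rule structure is not the product's actual configuration format.

```python
# Hypothetical sketch of DICOM routing preferences: an incoming study is
# dispatched to the automated pipeline only when its headers match a rule
# (e.g., a non-contrast head CT). Illustrative only, not the vendor's design.

def matches_routing_rule(headers: dict, rule: dict) -> bool:
    """True if every attribute required by the rule matches the study headers."""
    return all(headers.get(key) == value for key, value in rule.items())

# Example rule: route non-contrast head CTs to automated processing.
ncct_rule = {"Modality": "CT", "BodyPartExamined": "HEAD", "ContrastBolusAgent": None}

study = {"Modality": "CT", "BodyPartExamined": "HEAD", "ContrastBolusAgent": None}
print(matches_routing_rule(study, ncct_rule))  # True -> route to automated processing
```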
StrokeSENS ASPECTS principal workflow for NCCT includes the following key steps:
- Receive NCCT DICOM Image
- Automated image analysis and processing to identify and visualize the voxels included in the ASPECTS score (also referred to as a 'heat map' or VCTA, Voxels Contributing to ASPECTS Score).
- Automated image analysis and processing to register the subject image to an atlas to segment and highlight ASPECTS regions and to display whether or not each region is qualified as contributing to the ASPECTS score.
- Generation of automated results for review and analysis by users.
- Generation of verified/modified result summary for archiving, once the user verifies or modifies the results.
Once the auto-generated ASPECTS score results are available, the physician is asked to confirm that the case in question is for an ICA or MCA occlusion and is able to modify/verify the ASPECTS regional score. The ASPECTS auto-generated results, including the ASPECTS score, indication of affected side, affected ASPECTS regions and voxel-wise analysis (shown as a heatmap of voxels 'contributing to ASPECTS score'), along with the user-verified/modified result summary can be sent to the Picture Archiving and Communications System (PACS).
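The relationship between regional findings and the total score follows the published ASPECTS guidelines: ten defined regions, one point deducted per region showing early ischemic change. The sketch below illustrates only that scoring arithmetic; it is not the vendor's algorithm.

```python
# Scoring arithmetic per the published Alberta Stroke Program Early CT Score
# guidelines: the total is 10 minus the number of affected regions.
# Illustrative only; NOT the StrokeSENS ASPECTS implementation.

ASPECTS_REGIONS = [
    "caudate", "lentiform", "internal_capsule", "insula",
    "M1", "M2", "M3", "M4", "M5", "M6",
]

def aspects_score(affected: set) -> int:
    """Return 10 minus the number of affected ASPECTS regions."""
    unknown = affected - set(ASPECTS_REGIONS)
    if unknown:
        raise ValueError(f"unknown regions: {unknown}")
    return 10 - len(affected)

print(aspects_score({"insula", "M2", "M5"}))  # 7: three affected regions
```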
Here's an analysis of the acceptance criteria and study that proves the device meets those criteria, based on the provided FDA 510(k) Clearance Letter.
Acceptance Criteria and Device Performance
The provided text details two primary performance studies: Standalone Performance and Clinical Validation (MRMC study), along with a Clinical Validation of Voxels Contributing to ASPECTS (VCTA). The acceptance criteria are implicitly derived from the reported performance benchmarks for these studies.
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria (Implicit) | Standalone Study | MRMC Clinical Validation | VCTA Clinical Validation
---|---|---|---
Standalone Performance: | | |
AUC-ROC for region-level clustered ROC analysis | 90.9% (95% CI [88.7%, 93.1%]) | N/A | N/A
Accuracy | 90.6% [89.7%, 91.5%] | N/A | N/A
Sensitivity | 70.6% [69.2%, 72.1%] | N/A | N/A
Specificity | 93.9% [93.2%, 94.7%] | N/A | N/A
Clinical Validation (Reader Improvement, MRMC): | | |
Statistically significant improvement in reader AUC with AI assistance vs. without AI assistance | N/A | Statistically significant improvement of 5.7%, from 68.6% (unaided) to 74.3% (aided) (p-value | N/A
(29 days)
Circle Cardiovascular Imaging Inc.
cvi42 is intended to be used for viewing, post-processing, qualitative and quantitative evaluation of cardiovascular magnetic resonance (MR) images and computed tomography (CT) images in a Digital Imaging and Communications in Medicine (DICOM) Standard format.
It enables:
- Importing cardiac MR & CT images in DICOM format.
- Supporting clinical diagnostics by qualitative analysis of cardiac MR & CT images using display functionality such as panning, windowing, zooming, navigation through series/slices and phases, and 3D reconstruction of images, including multiplanar reconstructions.
- Supporting clinical diagnostics by quantitative measurement of the heart and adjacent vessels in cardiac MR & CT images, specifically signal intensity, distance, area, volume, and mass.
- Supporting clinical diagnostics by using area and volume to measure cardiac function and the derived parameters cardiac output and cardiac index in long-axis and short-axis cardiac MR & CT images.
- Flow quantification based on velocity-encoded cardiac MR images (including two- and four-dimensional flow analysis).
- Strain analysis of cardiac MR images by providing measurements of 2D LV myocardial function (displacement, velocity, strain, strain rate, time to peak, and torsion).
- Supporting clinical diagnostics of cardiac CT images, including quantitative measurement of calcified plaques in the coronary arteries (calcium scoring), specifically Agatston, volume, and mass calcium scores, and visualization and quantitative measurement of heart structures including the coronaries and the femoral, aortic, and mitral valves.
- Evaluating CT and MR images of blood vessels by combining digital image processing and visualization tools such as multiplanar reconstruction (MPR), thin/thick maximum intensity projection (MIP), thin/thick inverted MIP, volume rendering technique (VRT), and curved planar reformation (CPR); processing tools such as bone removal (based on both single and dual energy) and table removal; evaluation tools (vessel centerline, lumen, and stenosis calculation); and reporting tools (lesion location, lesion characteristics, and key images). The software package is designed to support the physician in confirming the presence of a physician-identified lesion in blood vessels and in the evaluation, documentation, and follow-up of any such lesions.
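The derived functional parameters named in the list above (cardiac output and cardiac index) follow from the standard textbook definitions built on ventricular volumes. A minimal sketch of that arithmetic, not the product's implementation:

```python
# Standard cardiac-function arithmetic from end-diastolic/end-systolic volumes.
# Illustrative only; not cvi42's implementation.

def cardiac_function(edv_ml: float, esv_ml: float, hr_bpm: float, bsa_m2: float) -> dict:
    sv = edv_ml - esv_ml              # stroke volume (mL)
    ef = 100.0 * sv / edv_ml          # ejection fraction (%)
    co = sv * hr_bpm / 1000.0         # cardiac output (L/min)
    ci = co / bsa_m2                  # cardiac index (L/min/m^2)
    return {"SV": sv, "EF": ef, "CO": co, "CI": ci}

print(cardiac_function(edv_ml=120, esv_ml=50, hr_bpm=70, bsa_m2=1.8))
```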
cvi42 shall be used by qualified medical professionals, experienced in examining and evaluating cardiovascular MR or CT images, for the purpose of obtaining diagnostic information as part of a comprehensive diagnostic decision-making process. cvi42 is a software application that can be used as a stand-alone product or in a networked environment.
The target population for cvi42 and its manual workflows is not restricted; however, cvi42's semiautomated machine learning algorithms, included in the MR Function and CORE CT modules, are intended for an adult population. Further, image acquisition by a cardiac MR or CT scanner may limit the use of the software for certain sectors of the general public.
cvi42 shall not be used to view or analyze images of any part of the body except the cardiac images acquired from a cardiovascular magnetic resonance or computed tomography scanner.
cvi42 Software Application ("cvi42") is a software as a medical device (SaMD) that is intended for evaluating CT and MR images of the cardiovascular system. Combining digital image processing, visualization, quantification, and reporting tools, cvi42 is designed to support physicians in the evaluation and analysis of cardiovascular imaging studies.
cvi42 uses machine learning techniques to aid in semi-automatic contouring of regions of interest in cardiac MR or CT images.
The data used to train these machine learning algorithms were sourced from multiple clinical sites from urban centers and from different countries. When selecting data for training, the importance of model generalization was considered and data was selected such that a good distribution of patient demographics, scanner, and image parameters were represented. The separation into training versus validation datasets is made on the study level to ensure no overlap between the two sets. As such, different scans from the same study were not split between the training and validation datasets. None of the cases used for model validation were used for training the machine learning models.
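The study-level split described above — grouping scans by their study before splitting, so no study contributes scans to both sets — can be sketched as follows. Field names are illustrative, not the vendor's pipeline:

```python
# Sketch of a study-level train/validation split: scans are grouped by their
# study identifier before splitting, preventing leakage of scans from one
# study into both sets. Illustrative only.

import random

def split_by_study(scans: list, val_fraction: float = 0.2, seed: int = 0):
    studies = sorted({s["study_id"] for s in scans})
    rng = random.Random(seed)
    rng.shuffle(studies)
    n_val = max(1, int(len(studies) * val_fraction))
    val_ids = set(studies[:n_val])
    train = [s for s in scans if s["study_id"] not in val_ids]
    val = [s for s in scans if s["study_id"] in val_ids]
    return train, val

scans = [{"study_id": f"S{i // 2}", "scan": i} for i in range(20)]  # 10 studies, 2 scans each
train, val = split_by_study(scans)
# No study appears in both sets:
print({s["study_id"] for s in train}.isdisjoint({s["study_id"] for s in val}))  # True
```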
cvi42 has a graphical user interface which allows users to analyze cardiac MR & CT images qualitatively and quantitatively.
cvi42 accepts uploaded data files previously acquired by MR or CT scanners or other data collection equipment but does not interface directly with such equipment. Its functionality is independent of the type of vendor acquisition equipment. The analysis results are available on-screen and can be saved within the software for future review.
Here's a breakdown of the acceptance criteria and study details for the cvi42 Software Application, based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
For cvi42 Auto (MR-CMR Function, CORE CT Coronary, and CORE CT-Calcium):
Module | Acceptance Criteria | Reported Device Performance
---|---|---
CMR Function Analysis | Classification accuracy, based on true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN); mean volume prediction error (MAE) for short-axis (SAX) and long-axis (LAX) volumetric measurements | Series classification: 97%-100%; volumetric MAE (SAX): 7%-10%; volumetric MAE (LAX): 5%-9%
Calcium Analysis | Classification accuracy, based on TP, TN, FP, and FN | Classification: 86%-99%
Coronary Analysis | Centerline quality and performance, based on TP and FN; success rate for relevant masks | Centerline: 82%-94%; masks: 98%-100%
For CORE CT (CT Function Module):
Metric | Acceptance Criteria | Reported Device Performance (vs. reference standard from three expert readers)
---|---|---
LV Cavity Segmentation | No explicit numerical criteria stated; implied to be within acceptable clinical limits for MAE, Dice, HD, and EF bias compared to expert readers | MAE: < 10%; Dice: > 86%; 3D Hausdorff distance (HD): < 9.5 mm; EF bias: 1.3%, 95% CI [-12, 14]
RV Cavity Segmentation | No explicit numerical criteria stated; implied to be within acceptable clinical limits for MAE, Dice, HD, and EF bias compared to expert readers | MAE: < 18%; Dice: > 85%; HD: < 18 mm; EF bias: -5.5%, 95% CI [-15, 4.4]
LV Myocardium Segmentation | No explicit numerical criteria stated; implied to be within acceptable clinical limits for MAE, Dice, and HD compared to expert readers | MAE: < 17%; Dice: > 82%; HD: < 15 mm
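The segmentation metrics in the table above have standard definitions. A minimal sketch on binary masks and point sets, assuming NumPy is available; illustrative only:

```python
# Standard definitions of the Dice coefficient and the symmetric Hausdorff
# distance, sketched on small synthetic inputs. Illustrative only.

import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two point sets of shape (N, 3)."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True  # 16 voxels
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True  # 16 voxels, overlap 9
print(dice(a, b))  # 2*9 / 32 = 0.5625
```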
2. Sample Size for the Test Set and Data Provenance
For cvi42 Auto (MR-CMR Function, CORE CT Coronary, and CORE CT-Calcium):
- Sample Size: n = 235 anonymized patient images acquired from major vendors of MR and CT imaging devices.
- 70 samples for Coronary Analysis
- 102 samples for Calcium analysis
- 63 samples for SAX Function contouring
- 63 samples for each of 2-CV, 3-CV, and 4-CV LAX function contouring
- 252 samples for Function Classification
- Data Provenance: Images were acquired from major vendors of MR and CT imaging devices. At least 50% of the data came from a U.S. population. The document does not specify if the data was retrospective or prospective, but the phrasing "were used for the validation" implies retrospective use of existing data.
For CORE CT (CT Function Module):
- Sample Size: Not explicitly stated, but the validation data was sourced from 9 different sites.
- Data Provenance: Sourced from 9 different sites, with 90% of the data sampled from US sources. The document does not specify if the data was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
For CORE CT (CT Function Module):
- Number of Experts: Three expert readers.
- Qualifications: "Expert readers" – specific qualifications (e.g., years of experience, board certification) are not detailed in the provided text.
For cvi42 Auto (MR-CMR Function, CORE CT Coronary, and CORE CT-Calcium), the document does not explicitly state the number of experts used to establish ground truth for the test set. It does mention expert readers for the comparison in the CORE CT section.
4. Adjudication Method for the Test Set
For CORE CT (CT Function Module):
- The "reference standard" was "established from three expert readers." The specific adjudication method (e.g., majority vote, specific consensus process) is not detailed, but it implies a consensus or agreement among these three experts.
For cvi42 Auto (MR-CMR Function, CORE CT Coronary, and CORE CT-Calcium):
- The document does not explicitly state the adjudication method for establishing ground truth for these modules.
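The document implies, but does not specify, consensus among the expert readers. One common adjudication scheme (purely an assumption here, not confirmed by the document) is a per-case majority vote, with disagreements escalated:

```python
# Hypothetical majority-vote adjudication across reader labels; ties are
# escalated rather than silently resolved. Not confirmed by the document.

from collections import Counter

def majority_vote(labels: list) -> int:
    """Return the label chosen by more than half of the readers."""
    label, count = Counter(labels).most_common(1)[0]
    if count * 2 <= len(labels):
        raise ValueError("no majority; escalate to an adjudicating reader")
    return label

print(majority_vote([1, 1, 0]))  # 1: two of three readers agree
```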
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- No, an MRMC comparative effectiveness study was not explicitly stated to have been done to measure human reader improvement with AI vs. without AI assistance. The performance tests described are primarily focused on the standalone performance of the AI algorithms (Machine Learning Derived Outputs) compared to a ground truth or a reference standard established by experts.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Yes, standalone performance was assessed. The sections titled "Validation of Machine Learning Derived Outputs" and "CORE CT: CT Function" describe the evaluation of the algorithms' performance (e.g., classification accuracy, MAE, Dice coefficient, HD, EF bias) against pre-defined acceptance criteria and a reference standard made by experts, without human-in-the-loop assistance for the AI's output generation. This is a standalone assessment of the algorithms.
7. The Type of Ground Truth Used
- Expert Consensus: For the CORE CT module, the ground truth (reference standard) used for evaluation was established by "three expert readers." This implies an expert consensus or expert-derived ground truth.
- For other modules (cvi42 Auto), the document states that performance was evaluated against "pre-defined acceptance criteria" but does not explicitly describe how the ground truth for those criteria was established, though it likely involved expert annotations or established clinical metrics.
8. The Sample Size for the Training Set
- The document states: "The data used to train these machine learning algorithms were sourced from multiple clinical sites from urban centers and from different countries." However, the specific sample size for the training set is not provided for any of the modules. It only mentions that the data was selected for good distribution of patient demographics, scanner, and image parameters.
9. How the Ground Truth for the Training Set Was Established
- The document does not explicitly describe how the ground truth for the training set was established. It only states that the training data "were sourced from multiple clinical sites from urban centers and from different countries." It also notes that "the separation into training versus validation datasets is made on the study level to ensure no overlap between the two sets." This suggests that the training data would have had associated ground truth data (e.g., expert annotations, clinical measurements) to enable supervised learning, but the method of establishing that ground truth is not detailed.
(145 days)
Circle Cardiovascular Imaging, Inc.
TruPlan enables visualization and measurement of structures of the heart and vessels for:
- · Pre-procedural planning and sizing for the left atrial appendage closure (LAAC) procedure
- · Post-procedural evaluation for the LAAC procedure
To facilitate the above, TruPlan provides general functionality such as:
- · Segmentation of cardiovascular structures
- · Visualization and image reconstruction techniques: 2D review, Volume Rendering, MPR
- · Simulation of TEE views, ICE views, and fluoroscopic rendering
- · Measurement and annotation tools
- · Reporting tools
TruPlan's intended patient population is comprised of adult patients.
The TruPlan Computed Tomography (CT) Imaging Software application ("TruPlan") is a software as a medical device that helps qualified users with image-based pre-procedural planning and post-procedural follow-up of the Left Atrial Appendage Closure (LAAC) procedure using CT data. TruPlan is designed to support the anatomical assessment of the Left Atrial Appendage (LAA) prior to and following the LAAC procedure. This includes the assessment of the LAA size, shape, and relationships with adjacent cardiac and extracardiac structures. This assessment helps the physician determine the size of a closure device needed for the LAAC procedure and evaluate LAAC device placement in a follow-up CT study. The TruPlan application is a visualization software and has basic measurement tools. The device is intended to be used as an aid to the existing standard of care and does not replace existing software applications physicians use for planning or follow-up for a LAAC procedure.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) submission for TruPlan Computed Tomography (CT) Imaging Software:
Acceptance Criteria and Device Performance Study for TruPlan CT Imaging Software
The TruPlan Computed Tomography (CT) Imaging Software by Circle Cardiovascular Imaging, Inc. underwent validation of its machine learning (ML) derived outputs to demonstrate its performance relative to pre-defined acceptance criteria. The device contains two primary ML algorithms: Left Heart Segmentation and Landing Zone Detection.
1. Table of Acceptance Criteria and Reported Device Performance
Feature / Metric | Acceptance Criteria (Pre-defined) | Reported Device Performance
---|---|---
Left Heart Segmentation Algorithm | |
Probability of bone removal (segmentation accuracy) | Not stated as a numerical threshold; implied to be high for correct segmentation | 532/533 cases (99.81%)
Probability of LAA visualization (segmentation accuracy) | Not stated as a numerical threshold; implied to be high for correct visualization | 519/533 cases (97.37%)
Landing Zone Detection Algorithm | |
Landing zone plane distance | Within 10 mm | 97/100 cases (97%) within 10 mm (mean distance: 3.87 mm)
Landing zone contour center distance | Within 12 mm | 99/100 cases (99%) within 12 mm (mean distance: 2.92 mm)
Note: The document states that "All performance testing results met Circle's pre-defined acceptance criteria," indicating that the reported performance metrics met or exceeded the internal thresholds established by the manufacturer, even if the exact numerical acceptance percentages for segmentation accuracy were not explicitly listed as criteria in the provided text.
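The landing-zone criteria above are threshold checks on distances between predicted and reference geometry. A minimal sketch of such a check, with hypothetical coordinates (the document does not define the exact distance computation):

```python
# Threshold check on the Euclidean distance between a predicted and a
# reference 3D point (coordinates in mm). Names and values are illustrative.

import math

def within_threshold(pred_center, ref_center, threshold_mm: float) -> bool:
    """True if the Euclidean distance between two 3D points is within threshold."""
    return math.dist(pred_center, ref_center) <= threshold_mm

# e.g. the contour-center criterion: within 12 mm
print(within_threshold((10.0, 4.0, 2.0), (8.0, 5.0, 2.0), 12.0))  # True (d ≈ 2.24 mm)
```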
2. Sample Sizes Used for the Test Set and Data Provenance
- Test Set Sample Size:
- Left Heart Segmentation: 533 anonymized patient images
- Landing Zone Detection: 100 anonymized patient images
- Data Provenance:
  - Country of origin: the validation data was sourced from multiple sites across the U.S. and other urban regions. Specifically:
    - Left Heart Segmentation: U.S., Canada, South America, Europe, and Asia.
    - Landing Zone Detection: various sites across the U.S.
  - Retrospective/prospective: the data used for validation were pre-existing CT images, as is common for retrospective studies. The document states "All data used for validation were not used during the development of the training algorithms," ensuring independence.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document mentions that for the Landing Zone Detection algorithm, "The landing zone was manually contoured by multiple expert readers for evaluation."
For the training data of the Left Heart Segmentation algorithm, it states "the left heart structures were manually annotated by multiple expert readers." While this refers to training, it implies a similar process and expert qualification for testing.
The specific number of experts and their explicit qualifications (e.g., "radiologist with 10 years of experience") are not specified in the provided text for either training or validation ground truth establishment. It only states "expert readers."
4. Adjudication Method for the Test Set
The document indicates that for the Landing Zone Detection ground truth, the "landing zone was manually contoured by multiple expert readers." For the Left Heart Segmentation training data, "manually annotated by multiple expert readers." This implies a consensus or majority vote approach might have been used, but the specific adjudication method (e.g., 2+1, 3+1, none) is not explicitly detailed in the provided text.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
No, an MRMC comparative effectiveness study was not done. The document explicitly states: "No clinical studies were necessary to support substantial equivalence." The performance data presented is that of the algorithm's standalone performance against expert-defined ground truth, rather than a comparison of human readers with and without AI assistance. Therefore, an effect size of human reader improvement with AI vs. without AI assistance is not provided and was not part of this submission.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
Yes, standalone performance was evaluated. The metrics reported (Probability of Bone Removal, Probability of LAA Visualization, Landing Zone Plane Distance, Landing Zone Contour Center Distance) are direct measurements of how accurately the ML algorithms perform their specific tasks when processing the CT images. The "Validation of Machine Learning Derived Outputs" section focuses purely on the algorithm's performance against ground truth.
7. The Type of Ground Truth Used
The ground truth used was expert consensus/manual contouring/annotation.
- For Left Heart Segmentation: "left heart structures were manually annotated by multiple expert readers."
- For Landing Zone Detection: "the landing zone was manually contoured by multiple expert readers."
This is observational data interpreted by human experts, not pathology or outcomes data.
8. The Sample Size for the Training Set
- Left Heart Segmentation: 113 cases
- Landing Zone Detection: 273 cases
9. How the Ground Truth for the Training Set Was Established
- Left Heart Segmentation: "the left heart structures were manually annotated by multiple expert readers."
- Landing Zone Detection: "the landing zone was manually contoured by expert readers."
Similar to the test set, the ground truth for training was established through manual annotation and contouring by expert readers. The document emphasizes that "the separation into training versus validation datasets is made on the study level to ensure no overlap between the two sets."
(219 days)
Circle Cardiovascular Imaging Inc
cvi42 Auto is intended to be used for viewing, post-processing, and qualitative evaluation of cardiovascular magnetic resonance (MR) and computed tomography (CT) images in a Digital Imaging and Communications in Medicine (DICOM) Standard format.
It enables a set of tools to assist physicians in qualitative assessment of cardiac images and quantitative measurements of the heart and adjacent vessels; to perform calcium scoring; and to confirm the presence of physician-identified lesions in blood vessels.
The target population for cvi42 Auto's manual workflows is not restricted; however, cvi42 Auto's semi-automated machine learning algorithms are intended for an adult population.
cvi42 Auto shall be used only for cardiac images acquired from an MR or CT scanner. It shall be used by qualified medical professionals, experienced in examining and evaluating cardiovascular MR or CT images, for the purpose of obtaining diagnostic information as part of a comprehensive decision-making process.
cvi42 Auto is a software as a medical device (SaMD) that is intended for evaluating CT and MR images of the cardiovascular system. Combining digital image processing, visualization, quantification, and reporting tools, the cvi42 Auto device is designed to support the physician in confirming the presence or absence of a physician-identified lesion in blood vessels and in the evaluation, documentation, and follow-up of any such lesions.
cvi42 Auto uses machine learning techniques to aid in semi-automatic contouring of regions of interest of cardiac magnetic resonance (MR) or computed tomography (CT) images as follows:
- Cardiac Function: semi-automatic contouring of the four heart chambers (left ventricle, left atrium, right ventricle, right atrium) in MR images.
- Calcium Assessment: using a pixel intensity technique, identification of calcified plaque in the major coronary arteries in non-contrast enhanced CT images.
- Coronary Analysis: semi-automatic placement of a centerline in coronary vessels to visualize the coronary arteries and assess stenosis in non-contrast enhanced CT images.
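Pixel-intensity calcium scoring on non-contrast CT typically follows the Agatston method, where each lesion's area is weighted by its peak CT attenuation in Hounsfield units (HU). The sketch below shows only the published weighting rule, not the product's implementation:

```python
# Published Agatston weighting rule: a lesion's area (mm^2) is multiplied by
# a weight determined by its peak attenuation (HU). Illustrative only; not
# cvi42 Auto's implementation.

def agatston_weight(peak_hu: float) -> int:
    if peak_hu < 130:
        return 0  # below the calcium threshold
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def lesion_score(area_mm2: float, peak_hu: float) -> float:
    return area_mm2 * agatston_weight(peak_hu)

print(lesion_score(area_mm2=6.0, peak_hu=250))  # 6 mm^2 * weight 2 = 12.0
```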
The data used to train these machine learning algorithms were sourced from multiple clinical sites from urban centers and from different countries. When selecting data for training, the importance of model generalization was considered and data was selected such that a good distribution of patient demographics, scanner, and image parameters were represented. The separation into training versus validation datasets is made on the study level to ensure no overlap between the two sets. As such, different scans from the same study were not split between the training and validation datasets. None of the cases used for model validation were used for training the machine learning models.
cvi42 Auto software has a graphical user interface which allows users to analyze cardiac images qualitatively and quantitatively for volume/mass, function and signal intensity changes including a reporting function.
The device can be integrated into a hospital, private practice environment, or medical research institution and provides clinical diagnosis decision support tools for the cardiovascular MR and CT technique.
Additionally, the software is designed to generate 3D view of the heart in CT images for qualitative assessment of the coronary artery. No quantitative assessment can be made from the 3D image.
The software does not interface directly with any data collection equipment; instead, the software uploads data files previously generated by such equipment. Its functionality is independent of the type of vendor acquisition equipment. The analysis results are available on-screen and can be saved within the software for future review.
The provided text describes the acceptance criteria and the study that proves the cvi42 Auto Imaging Software Application meets these criteria.
Here's an organized breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are described as pre-defined performance thresholds for the machine learning models. The reported performance is the achieved accuracy or error rate.
Feature / Metric | Acceptance Criteria (Pre-defined) | Reported Device Performance
---|---|---
CMR Function Analysis | |
Series classification accuracy | Defined by true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) | 97%-100%
Volumetric mean absolute error (MAE), SAX | Not explicitly stated, but calculated | 7%-10%
Volumetric MAE, LAX | Not explicitly stated, but calculated | 5%-9%
Calcium Analysis | |
Classification accuracy | Defined by TP, TN, FP, and FN | 86%-99%
Coronary Analysis | |
Centerline quality and performance | Defined by TP and FN | 82%-94%
Mask performance | Success rate for relevant masks | 98%-100%
Note: The document states that "All performance testing results met Circle's pre-defined acceptance criteria." While specific numerical "acceptance criteria" are not given for all metrics, the reported performance ranges are implicitly within the accepted thresholds.
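The volumetric MAE figures reported as percentages are consistent with a mean absolute error taken relative to the reference measurement. One common formulation, sketched here with illustrative numbers (the document does not define the exact formula):

```python
# One common percent-MAE formulation: mean of per-case absolute errors,
# each normalized by the reference value. Illustrative only.

def percent_mae(predicted: list, reference: list) -> float:
    errs = [abs(p - r) / r for p, r in zip(predicted, reference)]
    return 100.0 * sum(errs) / len(errs)

print(percent_mae([95.0, 110.0], [100.0, 100.0]))  # (5% + 10%) / 2 = 7.5
```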
2. Sample Size Used for the Test Set and Data Provenance
- Total anonymized patient images for validation: n = 235
- Breakdown by analysis type (note: total is >235 as some analyses might use overlapping sets or different views from the same patient):
- Coronary Analysis: 70 samples
- Calcium Analysis: 102 samples
- SAX Function Contouring: 63 samples
- 2-CV LAX Function Contouring: 63 samples
- 3-CV LAX Function Contouring: 63 samples
- 4-CV LAX Function Contouring: 63 samples
- Function Classification: 252 samples
- Data Provenance: "Across all MR and CT machine manufacturers." "At least 50% of the data came from a U.S. population." The data for validation was explicitly stated to not have been used during the development of the training algorithms, indicating a distinct test set. The document implies a retrospective collection of anonymized patient images for validation.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not specify the number or qualifications of experts used to establish the ground truth for the test set. It only mentions that the device is "intended to be used by qualified medical professionals, experienced in examining and evaluating cardiovascular MR or CT images, for the purpose of obtaining diagnostic information as part of a comprehensive decision-making process." This likely refers to the users of the device, not necessarily the ground truth adjudicators for the validation study.
4. Adjudication Method for the Test Set
The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1) for establishing the ground truth on the test set. The results are presented as direct performance metrics against an assumed ground truth, but how that ground truth was derived (e.g., single expert, consensus) is not detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The provided text does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done to evaluate how human readers improve with AI vs. without AI assistance. The performance data presented focuses on the algorithm's standalone performance or its semi-automated function.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
Yes, a standalone study was done. The performance data provided (e.g., classification accuracies, MAE, centerline performance, mask performance) describes the performance of the machine learning algorithms themselves (the "semi-automated machine learning algorithms"), rather than human-AI team performance. The mention of "semi-automatic contouring" and "semi-automatic placement of centerline" implies that the AI assists, but the reported metrics appear to be related to the accuracy of the algorithm's output.
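The filing reports contouring and mask metrics without naming the underlying agreement statistic. A common choice for comparing an algorithm's segmentation mask against a reference mask is the Dice similarity coefficient; the sketch below is offered as an assumed example, not as Circle's actual method:

```python
# Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|), computed on sets of
# voxel indices. This is a conventional segmentation-agreement metric, not
# necessarily the one used in the submission.

def dice(pred: set, ref: set) -> float:
    """Overlap between a predicted mask and a reference mask (1.0 = identical)."""
    if not pred and not ref:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(pred & ref) / (len(pred) + len(ref))

# Toy masks given as voxel index sets:
pred = {1, 2, 3, 4}
ref = {2, 3, 4, 5}
print(dice(pred, ref))  # 0.75
```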
7. Type of Ground Truth Used
The type of ground truth used is not explicitly stated in detail for the validation set. Given the context of "semi-automatic contouring" and "classification accuracy," it is highly probable that the ground truth for contouring (e.g., for heart chambers) would have been established by expert manual segmentation, and for classifications (e.g., calcium presence), it would be based on expert review or established clinical criteria. However, explicit details like "expert consensus" or "pathology" are not mentioned.
8. The Sample Size for the Training Set
The document states: "The data used to train these machine learning algorithms were sourced from multiple clinical sites from urban centers and from different countries." However, the specific sample size (number of images or patients) used for the training set is not provided in the given text.
9. How the Ground Truth for the Training Set Was Established
The document mentions that training data was "sourced from multiple clinical sites" and that "the importance of model generalization was considered and data was selected such that a good distribution of patient demographics, scanner, and image parameters were represented." It also differentiates between training and validation datasets by ensuring "no overlap between the two sets."
While it broadly states that data was selected considering generalization, it does not explicitly detail how the ground truth for the training set was established (e.g., expert annotation, clinical reports, etc.).
(197 days)
Circle Cardiovascular Imaging Inc.
TruPlan enables visualization and measurement of structures of the heart and vessels for pre-procedural planning and sizing for the left atrial appendage closure (LAAC) procedure.
To facilitate the above, TruPlan provides general functionality such as:
- Segmentation of cardiovascular structures
- Visualization and image reconstruction techniques: 2D review, Volume Rendering, MPR
- Simulation of TEE views, ICE views, and fluoroscopic rendering
- Measurement and annotation tools
- Reporting tools
The TruPlan Computed Tomography (CT) Imaging Software application (referred to herein as "TruPlan") is a software as a medical device (SaMD) that helps qualified users with image-based pre-operative planning of the Left Atrial Appendage Closure (LAAC) procedure using CT data. The TruPlan device is designed to support the anatomical assessment of the Left Atrial Appendage (LAA) prior to the LAAC procedure. This includes the assessment of the LAA size, shape, and relationships with adjacent cardiac and extracardiac structures. This assessment helps the physician determine the size of a closure device needed for the LAAC procedure. The TruPlan application is visualization software with basic measurement tools. The device is intended to be used as an aid to the existing standard of care. It does not replace the existing software applications physicians use for planning the Left Atrial Appendage Closure procedure.
Pre-existing CT images are uploaded into the TruPlan application manually by the end-user. The images can be viewed in the original CT form as well as in simulated views. The software displays the views in a modular format as follows:
- LAA
- Fluoro (fluoroscopy, simulation)
- Trans Esophageal Echo (TEE, simulation)
- Intra Cardiac Echography (ICE, simulation)
- Thrombus
- Multiplanar Reconstruction (MPR)
Each of these views offers the user visualization and quantification capabilities for pre-procedural planning of the Left Atrial Appendage Closure procedure; none is intended for diagnosis. The quantification tools are based on user-identified regions of interest and are user-modifiable. The device allows users to perform the measurements (all done on MPR viewers) listed in Table 1.
Additionally, the device generates a 3D rendering of the heart (including left ventricle, left atrium, and LAA) using machine learning methodology. The 3D rendering is for visualization purposes only. No measurements or annotation can be done using this view.
TruPlan also provides reporting functionality to capture screenshots and measurements and to store them as a PDF document.
TruPlan is installed as a standalone software onto the user's Windows PC (desktop) or laptop (Windows is the only supported operating system). TruPlan does not operate on a server or cloud.
The provided text does not contain the detailed information required to describe the acceptance criteria and the comprehensive study that proves the device meets those criteria.
While the document (a 510(k) summary) mentions "Verification and validation activities were conducted to verify compliance with specified design requirements" and "Performance testing was conducted to verify compliance with specified design requirements," it does not provide any specific quantitative acceptance criteria or the actual performance data. It also states "No clinical studies were necessary to support substantial equivalence," which means there was no multi-reader multi-case (MRMC) study or standalone performance study in a clinical setting with human readers.
Therefore, I cannot fulfill most of the requested points from the input. However, based on the information provided, I can infer and state what is missing or not applicable.
Here's a breakdown of the requested information and what can/cannot be extracted from the provided text:
1. A table of acceptance criteria and the reported device performance
Cannot be provided. The document states that "performance testing was conducted to verify compliance with specified design requirements," and "Validated phantoms were used for assessing the quantitative measurement output of the device." However, it does not specify what those "specified design requirements" (i.e., acceptance criteria) were, nor does it report the actual quantitative performance results (e.g., accuracy, precision) of the device against those criteria.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
Cannot be provided. The document refers to "Validated phantoms" for quantitative measurement assessment. This implies synthetic or controlled data rather than patient data. No details are given regarding the number of phantoms used or their characteristics. There is no mention of "test set" in the context of patient data, data provenance, or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
Cannot be provided. Since no clinical test set with patient data is described, there's no mention of experts establishing ground truth for such a set. The testing was done on "validated phantoms" for "quantitative measurement output," suggesting a comparison against known ground truth values inherent to the phantom design rather than expert consensus on medical images.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
Cannot be provided. Given the lack of a clinical test set and expert review, no adjudication method is mentioned or applicable.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
No, an MRMC study was NOT done. The document explicitly states: "No clinical studies were necessary to support substantial equivalence." This means there was no MRMC study to show human reader improvement with AI assistance. The submission relies on "performance testing and predicate device comparisons" for substantial equivalence.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Likely yes, for certain aspects, but specific performance data is not provided. The document mentions "Validated phantoms were used for assessing the quantitative measurement output of the device." This implies an algorithmic, standalone assessment of the device's measurement capabilities against the known values of the phantoms. However, the exact methodology, metrics, and results of this standalone performance are not detailed.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth for the quantitative measurement assessment was based on "Validated phantoms." This means the ground truth for measurements (e.g., distances, areas) would be the known, precisely manufactured dimensions of the phantoms, not expert consensus, pathology, or outcomes data.
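Phantom-based verification of quantitative output typically reduces to comparing the software's measured values against the phantom's known manufactured dimensions. A minimal sketch under that assumption follows; the measurement names, values, and tolerance are hypothetical and are not from the TruPlan submission:

```python
# Sketch of phantom-based measurement verification: compare software
# measurements against the phantom's known (manufactured) dimensions and
# check each error against a tolerance. All names, values, and the
# tolerance are invented for illustration.

known_mm = {"ostium_diameter": 20.0, "landing_zone_depth": 25.0}
measured_mm = {"ostium_diameter": 20.3, "landing_zone_depth": 24.6}
TOLERANCE_MM = 0.5  # hypothetical acceptance threshold

for name, truth in known_mm.items():
    error = abs(measured_mm[name] - truth)
    status = "PASS" if error <= TOLERANCE_MM else "FAIL"
    print(f"{name}: error {error:.2f} mm -> {status}")
```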
8. The sample size for the training set
Cannot be provided. The document mentions that the device "generates a 3D rendering of the heart (including left ventricle, left atrium, and LAA) using machine learning methodology." This indicates that a training set was used for this specific function. However, the size of this training set is not mentioned anywhere in the provided text.
9. How the ground truth for the training set was established
Cannot be provided. While it's implied that there was a training set for the "machine learning methodology" used for 3D rendering, the document does not explain how the ground truth for this training set was established.
(79 days)
CIRCLE CARDIOVASCULAR IMAGING INC
cvi42 vascular analysis add-on is an image analysis software package add-on for evaluating CT and MR images of blood vessels. Combining digital image processing and visualization tools such as multiplanar reconstruction (MPR), thin/thick maximum intensity projection (MIP), inverted thin/thick MIP, volume rendering technique (VRT), and curved planar reformation; processing tools such as bone removal (based on both single energy and dual energy) and table removal; evaluation tools (vessel centerline calculation, lumen calculation, stenosis calculation); and reporting tools (lesion location, lesion characteristics, and key images), the software package is designed to support the physician in confirming the presence or absence of physician-identified lesions in blood vessels and in the evaluation, documentation and follow-up of any such lesions.
It shall be used by qualified medical professionals, experienced in examining and evaluating cardiovascular CT or MR images, for the purpose of obtaining diagnostic information as part of a comprehensive diagnostic decision-making process. cvi42 is a software application that can be used as a stand-alone product or in a networked environment.
The target population for cvi42 is not restricted; however, image acquisition by a cardiac CT or MR scanner may limit the use of the device for certain sectors of the general public.
cvi42 vascular add-on is a software application for evaluating cardiovascular images in a DICOM Standard format. The software can be used as a stand-alone product that can be integrated into a hospital or private practice environment. cvi42 has a graphical user interface which allows users to qualitatively and quantitatively analyze cardiac CT and MR images.
The provided text describes the cvi42 device, an image analysis software for CT and MR images of blood vessels, but it does not contain the detailed acceptance criteria or the study results that specifically prove the device meets those criteria.
The document states that "cvi42 have been tested according to the specifications that are documented in a Master Software Test Plan," and that "The successful non-clinical testing demonstrates the safety and effectiveness of the cvi42 when used for the defined indications for use and demonstrates that the device for which the 510(k) is submitted performs as well as or better than the legally marketed predicate device." However, it does not provide the specifics of these tests, acceptance criteria, or performance metrics.
Therefore, I cannot fill out the requested table or answer the specific questions about the study design, sample sizes, ground truth establishment, or expert involvement based on the provided text.
The information related to the predicate device "iNtuition (K121916)" is largely for functional comparison, showing both devices have similar capabilities, but it does not present performance data for either the predicate or the cvi42 device.
(99 days)
CIRCLE CARDIOVASCULAR IMAGING INC.
ct42 is intended to be used for viewing, post-processing and quantitative evaluation of cardiovascular computed tomography (CT) images in a Digital Imaging and Communications in Medicine (DICOM) Standard format. It enables:
- Importing Cardiac CT Images in DICOM format
- Supporting clinical diagnostics by qualitative analysis of the cardiac CT images using display functionality such as panning, windowing, zooming, navigation through series/slices and phases, and 3D reconstruction of images including multi-planar reconstructions
- Supporting clinical diagnostics by quantitative measurement of the heart and adjacent vessels in cardiac CT images, specifically distance, area, volume and mass
- Supporting clinical diagnostics by using area and volume measurements for measuring LV function and derived parameters (cardiac output and cardiac index) in long-axis and short-axis cardiac CT images
- Supporting clinical diagnostics by quantitative measurement of calcified plaques in the coronary arteries (calcium scoring), specifically Agatston, volume and mass calcium scores

It shall be used by qualified medical professionals, experienced in examining and evaluating cardiovascular CT images, for the purpose of obtaining diagnostic information as part of a comprehensive diagnostic decision-making process. ct42 is a software application that can be used as a stand-alone product or in a networked environment. The target population for ct42 is not restricted; however, image acquisition by a cardiac CT scanner may limit the use of the device for certain sectors of the general public. ct42 shall not be used to view or analyze images of any part of the body other than cardiac CT images acquired from a cardiovascular CT scanner.
ct42 is a dedicated software application for evaluating cardiovascular images in a DICOM Standard format. The software can be used as a stand-alone product that can be integrated into a hospital or private practice environment. ct42 has a graphical user interface which allows users to qualitatively and quantitatively analyze cardiac CT images for volume/mass, and calcium scoring. It provides a comprehensive set of tools for the analysis of Cardiovascular Computed Tomography (CT) images.
Here's an analysis of the provided text regarding the acceptance criteria and study for the ct42 Cardiac Computed Tomography (CT) Software:
Note: The provided document is a 510(k) Summary, which typically focuses on demonstrating substantial equivalence to a predicate device rather than presenting a detailed performance study with acceptance criteria in the manner one might find in a clinical trial report. As such, some requested information (like specific numerical acceptance criteria and a detailed study proving the device meets those criteria) is not explicitly present in the provided text. The document states that "The successful non-clinical testing demonstrates the safety and effectiveness of the ct42 when used for the defined indications for use and demonstrates that the device for which the 510(k) is submitted performs as well as or better than the legally marketed predicate device."
Acceptance Criteria and Reported Device Performance
The document does not explicitly state numerical acceptance criteria in a table format with corresponding reported performance for the ct42 software. Instead, it relies on demonstrating equivalence to a predicate device (Ziosoft - Cardiac Function Analysis & Calcium Scoring, K083446) by possessing similar features and functionalities. The "Conclusion" section indirectly serves as the statement of meeting acceptance, asserting that "ct42... demonstrates that the device... performs as well as or better than the legally marketed predicate device."
Here's a table based on the "Device Comparison Table" provided, highlighting the features where equivalence is drawn, which implicitly serve as the "acceptance criteria" for functionality:
| Acceptance Criteria (Feature/Functionality) | Reported Device Performance (ct42) |
|---|---|
| Post-processes ECG-gated cardiac CT images | YES |
| Image viewer functionality | YES |
| Left ventricular ejection fraction | YES |
| End diastolic volume | YES |
| End systolic volume | YES |
| Stroke volume | YES |
| Cardiac output | YES |
| Cardiac index | YES |
| Wall thickness | YES |
| Wall thickness ratio | YES |
| Wall movement | YES |
| Volume curve | YES |
| Calcium scoring | YES |
| Evaluates calcified plaque in the coronary arteries | YES |
| Agatston calcium score | YES |
| Volume calcium score | YES |
| Calcium mass/density calculations | YES (calculates mass) |
| DICOM compliant | YES |
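The Agatston calcium score mentioned above refers to a published scoring convention: each calcified lesion contributes its cross-sectional area multiplied by a weight set by its peak attenuation. A minimal sketch of that convention follows; the lesion values are invented, and this is not ct42's implementation:

```python
# Standard Agatston calcium-scoring convention: each calcified lesion
# contributes (lesion area in mm^2) x (weight from its peak attenuation):
# 130-199 HU -> 1, 200-299 HU -> 2, 300-399 HU -> 3, >=400 HU -> 4.
# The lesion values below are made up for illustration.

def agatston_weight(peak_hu: float) -> int:
    """Density weight for a lesion, from its peak attenuation in HU."""
    if peak_hu < 130:
        return 0  # below the calcium detection threshold
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions) -> float:
    """lesions: iterable of (area_mm2, peak_hu) pairs, one per lesion per slice."""
    return sum(area * agatston_weight(hu) for area, hu in lesions)

print(agatston_score([(4.0, 180), (2.5, 310), (6.0, 450)]))  # 4 + 7.5 + 24 = 35.5
```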
Additional Information on the Study:
- Sample size used for the test set and the data provenance:
  - Sample Size: Not explicitly stated in the provided 510(k) summary. The document mentions "non-clinical testing" and testing "according to the specifications that are documented in a Master Software Test Plan," but specific details about the number of cases or images in the test set are absent.
  - Data Provenance: Not specified. It is unclear whether the data was retrospective or prospective, or what its country of origin was.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
  - Not specified. The document does not detail how ground truth was established for any testing.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
  - Not specified.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
  - No. The document presents a substantial equivalence claim based on feature comparison and non-clinical software testing. It does not describe an MRMC comparative effectiveness study involving human readers and AI assistance. The device itself is a software application for viewing, post-processing, and quantitative evaluation, implying it is a tool for human professionals, but no study on human performance improvement is mentioned.
- If a standalone (i.e., algorithm only without human-in-the-loop) performance study was done:
  - The submission focuses on "non-clinical testing" to demonstrate safety, effectiveness, and equivalence to a predicate device. This implies testing the algorithm's functionality and accuracy in various measurements (distance, area, volume, mass, calcium scoring) in a standalone manner, but the specifics of how this was done (e.g., comparing algorithm outputs to known truths or another software's output) are not detailed. "Standalone" performance in the context of an FDA submission for this type of device usually refers to the accuracy of its quantitative measurements rather than a human-like diagnostic output.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
  - Not explicitly stated. Given the nature of the device (quantitative measurements for cardiac CT), ground truth for accuracy testing would typically involve comparison to:
    - Manual measurements by experts (expert consensus)
    - Measurements from another validated software/method
    - Perhaps, in some cases, correlation with pathology or invasive measurements, though this is less common for software functionality claims.
  - The document does not detail which of these, if any, were used.
- The sample size for the training set:
  - Not applicable / not mentioned. The provided document is for a traditional 510(k) submission for ct42, a software application for quantitative analysis of CT images. It does not indicate that this device uses machine learning or AI models that would require a distinct "training set" in the modern sense. The "testing" mentioned refers to traditional software verification and validation.
- How the ground truth for the training set was established:
  - Not applicable, as no training set (for machine learning) is implied or mentioned for this device.
(72 days)
CIRCLE CARDIOVASCULAR IMAGING INC.
cmr42 is intended to be used for viewing, post-processing and quantitative evaluation of cardiovascular magnetic resonance (MR) images in a Digital Imaging and Communications in Medicine (DICOM) Standard format. It enables:
Importing Cardiac MR Images in DICOM format
Supporting clinical diagnostics by qualitative analysis of the cardiac MR images using display functionality such as panning, windowing, zooming, navigation through series/slices and phases.
Supporting clinical diagnostics by quantitative measurement of the heart and adjacent vessels in cardiac MR images, specifically distance, area, volume and mass
Supporting clinical diagnostics by using area and volume measurements for measuring LV function and derived parameters cardiac output and cardiac index in long axis and short axis cardiac MR images.
Flow quantifications based on velocity encodes images
It shall be used by qualified medical professionals, experienced in examining and evaluating cardiovascular MR images, for the purpose of obtaining diagnostic information as part of a comprehensive diagnostic decision-making process. cmr42 is a software application that can be used as a stand-alone product or in a networked environment.
The target population for cmr42 is not restricted; however, image acquisition by a cardiac magnetic resonance scanner may limit the use of the device for certain sectors of the general public.
cmr42 shall not be used to view or analyze images of any part of the body except the cardiac magnetic resonance images acquired from a cardiovascular magnetic resonance scanner.
cmr42 is a dedicated software application for evaluating cardiovascular images in a DICOM Standard format. The software can be used as a stand-alone product that can be integrated into a hospital or private practice environment. cmr42 has a graphical user interface which allows users to qualitatively and quantitatively analyze cardiac images for volume/mass, and flow quantification. It provides a comprehensive set of tools for the analysis of Cardiovascular Magnetic Resonance (CMR) images.
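The derived LV-function parameters cmr42 reports (stroke volume, cardiac output, cardiac index) follow from standard hemodynamic definitions. A minimal sketch with illustrative inputs follows; this is not cmr42's code, and the values are invented:

```python
# Standard LV-function definitions:
#   stroke volume     SV = EDV - ESV              (mL)
#   ejection fraction EF = SV / EDV               (fraction)
#   cardiac output    CO = SV * heart rate / 1000 (L/min)
#   cardiac index     CI = CO / BSA               (L/min/m^2)
# Input values below are illustrative, not patient data.

def lv_function(edv_ml: float, esv_ml: float, hr_bpm: float, bsa_m2: float) -> dict:
    sv = edv_ml - esv_ml          # stroke volume (mL)
    ef = sv / edv_ml              # ejection fraction (fraction of EDV)
    co = sv * hr_bpm / 1000.0     # cardiac output (L/min)
    ci = co / bsa_m2              # cardiac index (L/min/m^2)
    return {"SV_mL": sv, "EF": ef, "CO_L_min": co, "CI_L_min_m2": ci}

print(lv_function(edv_ml=150.0, esv_ml=60.0, hr_bpm=70.0, bsa_m2=1.9))
```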
The provided 510(k) summary for the cmr42 Cardiac MR Software Application (K082628) describes its intended use and a general statement about testing but does not provide a detailed table of acceptance criteria or the specific results of a study proving the device meets these criteria.
Instead, the summary states:
"Description and Testing: cmr42 have been tested according to the specifications that are Conclusion of Testing documented in a Master Software Test Plan. Testing is an integral part of Circle Cardiovascular Imaging Inc software development process as described in the company's product development process."
"Conclusion: The successful non-clinical testing demonstrates the safety and effectiveness of the cm42 when used for the defined indications for use and demonstrates that the device for which the 510(k) is submitted performs as well as or better than the legally marketed predicate device."
This indicates that internal testing was conducted to specifications but the specific acceptance criteria and detailed performance metrics are not publicly available in this document. The submission focuses on demonstrating substantial equivalence to predicate devices (MRI-MAGNETIC RESONANCE ANALYTICAL SOFTWARE SYSTEM (MASS) K994283 and MRI-Flow Analytical Software K994282) rather than publishing detailed performance studies against explicitly stated acceptance criteria.
Therefore, for most of the requested information, the answer is "Not provided in the given document."
Here's a breakdown of what can be extracted or inferred based on the supplied text:
1. A table of acceptance criteria and the reported device performance:
| Acceptance Criteria (Inferred from device description and comparison) | Reported Device Performance (General Statement) |
|---|---|
| Qualitative Analysis: viewing, panning, windowing, zooming; navigation through series/slices and phases | Demonstrated safety and effectiveness for qualitative analysis. |
| Quantitative Measurements: distance, area, volume, mass of heart and adjacent vessels; LV function, cardiac output, cardiac index from long- and short-axis images | Demonstrated safety and effectiveness for quantitative measurements; performs "as well as or better than" predicate devices. |
| Flow Quantifications: based on velocity-encoded images | Demonstrated safety and effectiveness for flow quantifications; performs "as well as or better than" predicate devices. |
| Image Compatibility: import DICOM-format cardiac MR images; images from all MRI scanner vendors supported | Functionality confirmed. |
| User Interface and Functionality: graphical user interface; comprehensive tool sets; dynamic display of ventricular contractions; DICOM-compliant networking; reports with visualization and quantitative parameters | Functionality confirmed; described as having "task specific modules with corresponding tool sets." |
| General Performance: safety and effectiveness | Non-clinical testing demonstrates safety and effectiveness. Performs "as well as or better than" the legally marketed predicate devices. |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
- Not provided in the given document. The document only mentions "non-clinical testing" and testing "according to the specifications that are documented in a Master Software Test Plan." Specifics about the test set, its size, or its provenance are not included.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
- Not provided in the given document. The document does not describe the methodologies for establishing ground truth or the involvement of experts in the testing phase. The device itself is intended for use by "qualified medical professionals, experienced in examining and evaluating cardiovascular MR images."
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not provided in the given document. No information on adjudication methods for establishing ground truth or evaluating test results is given.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No, an MRMC comparative effectiveness study is not mentioned as part of this 510(k) submission. The submission focuses on the standalone performance of the software in comparison to predicate devices, not on human reader performance with or without AI assistance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Yes, a standalone evaluation was implicitly done. The "cmr42" is described as a "software application" for viewing, post-processing, and quantitative evaluation. The testing described is "non-clinical testing" to demonstrate its safety and effectiveness and that it performs as well as or better than predicate devices. This implies evaluating the software's performance on its own capabilities.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
- Not explicitly stated in the given document. Since the device performs quantitative measurements (distance, area, volume, mass), the ground truth for testing would likely involve highly accurate reference measurements, possibly derived from manual expert measurements, phantoms, or other validated methods. However, the specific type of ground truth is not detailed.
8. The sample size for the training set:
- Not applicable / Not provided. The cmr42 is described as an "Image Processing System" and "software application" for analysis. At the time of this 2008 submission, the focus for such devices was primarily on deterministic algorithms and user-driven analysis tools rather than AI/machine learning models that require distinct training sets. Therefore, a "training set" in the modern AI sense is unlikely to have been a component of its development or evaluation, and no such information is provided.
9. How the ground truth for the training set was established:
- Not applicable. As a training set is not mentioned, the method for establishing its ground truth is also not provided.