Search Results
Found 2 results
510(k) Data Aggregation
(196 days)
AutoSeg-H
AutoSeg-H is medical imaging software intended to provide professionals with tools to aid them in reading, interpreting, reporting, and treatment planning. AutoSeg-H accepts DICOM compliant medical images acquired from an imaging device (Computed Tomography).
AutoSeg-H provides tools for specific analysis applications, each with a custom UI, targeted measurements, and reporting functions, including:
- Coronary Artery Analysis for CT coronary arteriography images: intended for the qualitative and quantitative analysis of coronary arteries.
- Valve Analysis: intended for automatic extraction of the heart and aorta regions, automatic detection of the contours of the aorta and valves, and measurement of the vicinity of the valves.
- 4-Chamber Analysis: intended for automatic extraction of the left atrium and right ventricle from CT images.
AutoSeg-H (V1.0.0.01) is software intended to provide trained medical professionals with tools to aid them in reading, interpreting, and treatment planning. AutoSeg-H (V1.0.0.01) runs as a standalone application installed on a commercial, general-purpose Windows-compatible computer and accepts DICOM compliant medical images acquired from a CT scanner. AutoSeg-H is not connected to a PACS system directly; instead, it retrieves saved image data from the computer. Image data obtained from the computer are used for display, image processing, analysis, etc. AutoSeg-H cannot be used to interpret mammography images.
The main functions of AutoSeg-H are shown below.
- Coronary Artery Analysis (CT)
- Aortic Valve Analysis (CT)
- 4-Chamber Analysis (CT)
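As noted above, AutoSeg-H retrieves saved DICOM CT data from a computer rather than directly from a PACS. For orientation only, a minimal sketch of reading a DICOM CT series with the open-source SimpleITK library (the directory path is hypothetical; the summary does not describe AutoSeg-H's internal implementation):

```python
import SimpleITK as sitk

# Read every slice of a DICOM CT series from a local folder into one 3-D volume.
reader = sitk.ImageSeriesReader()
dicom_files = reader.GetGDCMSeriesFileNames("/data/ct_series")  # hypothetical path
reader.SetFileNames(dicom_files)
volume = reader.Execute()

print("size:", volume.GetSize(), "spacing (mm):", volume.GetSpacing())
```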
The provided text details the 510(k) summary for the AutoSeg-H device. As per the document, no clinical studies were performed. The non-clinical performance testing focused on verifying that the software met performance test criteria, functioned without errors, and met standards for estimating quantitative measurement and segmentation algorithm errors. The details regarding acceptance criteria, study design, and ground truth establishment for a standalone algorithm performance or comparative effectiveness study are not explicitly provided in the furnished document.
However, based on the Non-Clinical Test Summary, we can infer some aspects of the performance evaluation.
Here's an attempt to answer your questions based only on the provided text, noting where information is explicitly stated or absent:
1. A table of acceptance criteria and the reported device performance
The document states: "Through the performance test, it was confirmed that AutoSeg-H meets all performance test criteria and that all functions work without errors. Test results support the conclusion that actual device performance satisfies the design intent and is equivalent to its predicate device."
- Acceptance Criteria: While specific numerical acceptance criteria (e.g., minimum accuracy, sensitivity, specificity, or error tolerances) are not explicitly detailed in the provided text, the criteria are broadly described as:
- Meeting "all performance test criteria."
- Ensuring "all functions are operating without errors."
- Meeting "test standards which estimation of the quantitative measurement error."
- Meeting "test standards which estimation of segmentation algorithm error."
- Reported Device Performance: The reported performance is a qualitative statement of meeting these unquantified criteria. No specific metrics or numerical results are provided in this summary.
| Acceptance Criterion (Inferred from text) | Reported Device Performance |
|---|---|
| Meets all performance test criteria | Confirmed to meet. |
| All functions operate without errors | Confirmed to work without errors. |
| Meets standards for quantitative measurement error estimation | Confirmed to meet. |
| Meets standards for segmentation algorithm error estimation | Confirmed to meet. |
| Satisfies design intent and is equivalent to the predicate device | Confirmed to satisfy design intent and to be equivalent to the predicate device. |
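Although the summary reports no numbers, segmentation algorithm error of the kind referenced above is conventionally estimated by comparing algorithm output with reference contours using an overlap metric such as the Dice coefficient, and quantitative measurement error by the deviation from reference measurements. A minimal sketch of both metrics (illustrative assumptions; the submission does not state which metrics were actually used):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between a predicted and a reference binary mask."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0

def measurement_error(measured: np.ndarray, reference: np.ndarray) -> float:
    """Mean absolute error of quantitative measurements (e.g., vessel diameters)."""
    return float(np.mean(np.abs(measured - reference)))

# Toy example: two small 3-D masks and two diameter measurements in mm.
pred = np.zeros((4, 4, 4), bool); pred[1:3, 1:3, 1:3] = True
ref = np.zeros((4, 4, 4), bool); ref[1:3, 1:3, :3] = True
print(dice_coefficient(pred, ref))                                    # 0.8
print(measurement_error(np.array([3.1, 2.9]), np.array([3.0, 3.0])))  # 0.1 mm
```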
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size: The document does not specify the sample size (number of cases or images) used for the non-clinical performance testing.
- Data Provenance: The document does not specify the country of origin of the data or whether it was retrospective or prospective. It only states that the device accepts "DICOM compliant medical images acquired from imaging device (Computed Tomography)."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not provide information on the number or qualifications of experts used to establish ground truth for the non-clinical test set. The testing appears to be primarily software verification against pre-defined performance and error estimation standards.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not describe any adjudication method for establishing ground truth, as it does not mention human readers or expert consensus for ground truth generation in the non-clinical testing. It notes that "Test results were reviewed by designated technical professionals," but this refers to reviewing the software test outcomes, not establishing medical ground truth for image interpretation.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
The document explicitly states: "No clinical studies were considered necessary and performed." Therefore, no MRMC comparative effectiveness study was done.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) study was done
Yes, the non-clinical performance testing described seems to be a standalone (algorithm only) evaluation. The tests were conducted to "confirm that AutoSeg-H meets all performance test criteria and all functions are operating without errors," and to "confirm that AutoSeg-H meets test standards which estimation of the quantitative measurement error" and "segmentation algorithm error." This suggests an evaluation of the algorithm's output against defined standards or computational accuracy, rather than human-in-the-loop performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The document largely discusses testing against "performance test criteria" and "test standards which estimation of the quantitative measurement error" and "segmentation algorithm error." This implies that the 'ground truth' for the non-clinical tests would involve computational benchmarks, reference measurements, or predefined correct segmentations, rather than medical ground truth established by expert consensus, pathology, or outcomes data. The core technology is described as an "advanced active contour algorithm with entropy regularization," suggesting validation against known segmentation accuracy targets.
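The only algorithmic detail given is that the core technology uses an "advanced active contour algorithm with entropy regularization." That specific variant is not described, but a generic region-based active contour update in the Chan-Vese style conveys the underlying idea. The sketch below is a simplified illustration only; the helper name and the majority-vote smoothing are mine, not the submitter's method:

```python
import numpy as np

def chan_vese_step(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """One simplified region-based active contour update (Chan-Vese style):
    reassign each pixel to the region whose mean intensity is closer,
    then regularize the boundary with a 3x3 majority vote."""
    c_in = image[mask].mean() if mask.any() else 0.0
    c_out = image[~mask].mean() if (~mask).any() else 0.0
    new_mask = (image - c_in) ** 2 < (image - c_out) ** 2
    padded = np.pad(new_mask.astype(int), 1, mode="edge")
    votes = sum(padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
                for dy in range(3) for dx in range(3))
    return votes >= 5  # a pixel stays inside if most of its neighborhood is

# Usage: iterate from an initial mask until the contour stops moving.
# mask = initial_mask
# for _ in range(200):
#     updated = chan_vese_step(ct_slice, mask)
#     if np.array_equal(updated, mask):
#         break
#     mask = updated
```

Validation of such an algorithm against "predefined correct segmentations" would then reduce to comparing its converged mask with a reference mask, e.g., via the Dice coefficient shown earlier.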
8. The sample size for the training set
The document does not provide any information regarding the training set's sample size.
9. How the ground truth for the training set was established
The document does not provide any information regarding how the ground truth for the training set was established.
(139 days)
ATLAS-BASED AUTOSEGMENTATION
Atlas-Based Autosegmentation is a standalone software application that produces estimates of anatomy boundary contours needed for the creation of a radiotherapy treatment plan.
Contouring of radiation therapy targets and surrounding anatomical structures (also known as image segmentation) is a critical part of radiation treatment planning that can be extremely time consuming. Atlas-Based Autosegmentation (ABAS) is a software application that automates the contouring process using atlas-based autosegmentation. This method uses an already-segmented image set (atlas) to segment a set of new, user-input images using deformable registration algorithms. The contours ABAS generates are not usable for treatment as-is; they must be exported to a treatment planning system for editing. However, Atlas-based Autosegmentation provides a good starting point from which minimal editing is required, enabling the clinician to create a high quality treatment plan more efficiently.
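To make the described workflow concrete: the atlas image is deformably registered to the new patient image, and the resulting transform is applied to the atlas contours. A minimal sketch using the open-source SimpleITK library (file paths and parameter choices are hypothetical; ABAS's actual registration algorithms are not disclosed in the summary):

```python
import SimpleITK as sitk

# Atlas CT, its already-segmented contours (label map), and the new patient CT.
atlas_img = sitk.ReadImage("atlas_ct.nii.gz", sitk.sitkFloat32)  # hypothetical paths
atlas_lbl = sitk.ReadImage("atlas_labels.nii.gz")
patient_img = sitk.ReadImage("patient_ct.nii.gz", sitk.sitkFloat32)

# Deformable (B-spline) registration mapping the atlas onto the patient anatomy.
tx = sitk.BSplineTransformInitializer(patient_img, [8, 8, 8])
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(tx, inPlace=True)
final_tx = reg.Execute(patient_img, atlas_img)

# Warp the atlas labels onto the patient grid.
contours = sitk.Resample(atlas_lbl, patient_img, final_tx,
                         sitk.sitkNearestNeighbor, 0, atlas_lbl.GetPixelID())
sitk.WriteImage(contours, "patient_autocontours.nii.gz")  # to be edited in a TPS
```

Nearest-neighbor interpolation is used when warping the label map so that discrete structure labels are not blended, which mirrors the summary's point that the generated contours are a starting point to be exported and edited in a treatment planning system.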
Here's a summary of the acceptance criteria and study information for the Atlas-Based Autosegmentation (ABAS) device, based on the provided text:
Important Note: The provided document is a 510(k) summary, which often focuses on demonstrating substantial equivalence rather than detailed clinical performance studies. As such, information regarding specific quantitative acceptance criteria or detailed clinical trial results is limited. The document explicitly states: "Clinical trials were not performed as part of the development of this product. Clinical testing is not advantageous in demonstrating substantial equivalence or safety and effectiveness of the device since testing can be performed such that no human subjects are exposed to risk. Clinically oriented validation test cases were written and executed in-house by CMS customer support personnel. ABAS was deemed fit for clinical use."
Therefore, many of the requested sections below will reflect the absence of such clinical studies.
Acceptance Criteria and Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Functional Verification | ABAS successfully passed verification testing as documented in the "ABAS Verification Test Report." This testing ensured the system operates as designed. |
| Clinical Suitability | "Clinically oriented validation test cases were written and executed in-house by CMS customer support personnel. ABAS was deemed fit for clinical use." |
| Substantial Equivalence | Found substantially equivalent by the FDA to predicate devices (BrainLAB iPlan RT Dose (K053584), Pinnacle3 (K041577), IKOEngelo (K061006)). |
Study Details
1. Sample size used for the test set and the data provenance:
- Sample Size: Not specified in the provided document. The text mentions "clinically oriented validation test cases" but does not quantify the number of cases.
- Data Provenance: The document does not specify the country of origin of the data. The "clinically oriented validation test cases" were executed "in-house by CMS customer support personnel," implying they were likely internal or simulated datasets rather than data from external clinical sites. The data were presumably retrospective, since clinical trials were not performed.
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided. The phrase "clinically oriented validation test cases" suggests some form of clinical relevance, but the method and personnel for establishing ground truth are not described.
3. Adjudication method for the test set:
- Not specified. Given the testing was "in-house by CMS customer support personnel" and clinical trials were not performed, a formal adjudication process akin to clinical studies is unlikely to have occurred.
4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was not done. The document explicitly states: "Clinical trials were not performed as part of the development of this product." The device's primary function is described as providing an "initial contouring function" that requires further editing by clinicians in a treatment planning system. Therefore, a study on human reader improvement with AI assistance (i.e., human-in-the-loop performance) was not presented.
5. If a standalone (i.e., algorithm only without human-in-the-loop performance) study was done:
- Yes, in a sense, standalone performance was assessed through non-clinical verification and internal "clinically oriented validation test cases." The device is described as a "standalone software application that produces estimates of anatomy boundary contours." The "Summary of Non-Clinical Testing" indicates "Verification tests were written and executed to ensure that the system is working as designed," and the "clinically oriented validation test cases" were used to deem ABAS "fit for clinical use." However, quantitative metrics of accuracy, precision, etc., for this standalone performance against a defined ground truth, are not provided in this summary. It's also important to note the disclaimer that "The contours ABAS generates are not usable for treatment as-is; they must be exported to a treatment planning system for editing."
6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not explicitly stated. For the "clinically oriented validation test cases," the nature of the ground truth is not detailed beyond "already-segmented image set (atlas)" being used in the process. It's reasonable to infer that the "atlas" itself serves as a form of expert-derived ground truth for the autosegmentation process.
7. The sample size for the training set:
- Not specified. The device uses an "atlas-based autosegmentation" method, which implies a training set or an "atlas" library. However, the size or composition of this atlas is not mentioned. It states that users can "expand its library of atlases," suggesting a flexible and potentially user-managed training-like data source.
8. How the ground truth for the training set was established:
- The document implies that the ground truth for the model's operation is derived from "already-segmented image set (atlas)." How these initial atlas segmentations were created (e.g., by experts, manually) is not detailed in this summary.