Search Results
Found 4 results
510(k) Data Aggregation
(196 days)
AI Medic Inc.
AutoSeg-H is medical imaging software intended to provide professionals with tools to aid them in reading, interpreting, reporting, and treatment planning. AutoSeg-H accepts DICOM-compliant medical images acquired from an imaging device (Computed Tomography).
AutoSeg-H provides tools for specific analysis applications, each with a custom UI, targeted measurements, and reporting functions, including:
- Coronary Artery Analysis for CT coronary arteriography images: intended for the qualitative and quantitative analysis of coronary arteries.
- Valve Analysis: intended for automatic extraction of the heart and aorta regions, automatic detection of the contours of the aorta and valves, and measurements in the vicinity of the valves.
- 4-Chamber Analysis: intended for automatic extraction of the left atrium and right ventricle from CT images.
AutoSeg-H (V1.0.0.01) is software intended to provide trained medical professionals with tools to aid them in reading, interpreting, and treatment planning. AutoSeg-H (V1.0.0.01) runs as standalone software installed on a commercial, general-purpose Windows-compatible computer and accepts DICOM-compliant medical images acquired from a CT scanner. AutoSeg-H is not connected to a PACS system directly but retrieves saved image data from a computer; image data obtained from the computer are used for display, image processing, analysis, etc. AutoSeg-H cannot be used to interpret mammography images.
The main functions of AutoSeg-H are shown below.
- Coronary Artery Analysis (CT)
- Aortic Valve Analysis (CT)
- 4-Chamber Analysis (CT)
The provided text details the 510(k) summary for the AutoSeg-H device. As per the document, no clinical studies were performed. The non-clinical performance testing focused on verifying that the software met performance test criteria, functioned without errors, and met standards for estimating quantitative measurement and segmentation algorithm errors. The details regarding acceptance criteria, study design, and ground truth establishment for a standalone algorithm performance or comparative effectiveness study are not explicitly provided in the furnished document.
However, based on the Non-Clinical Test Summary, we can infer some aspects of the performance evaluation.
Here's an attempt to answer your questions based only on the provided text, noting where information is explicitly stated or absent:
1. A table of acceptance criteria and the reported device performance
The document states: "Through the performance test, it was confirmed that AutoSeg-H meets all performance test criteria and that all functions work without errors. Test results support the conclusion that actual device performance satisfies the design intent and is equivalent to its predicate device."
- Acceptance Criteria: While specific numerical acceptance criteria (e.g., minimum accuracy, sensitivity, specificity, or error tolerances) are not explicitly detailed in the provided text, the criteria are broadly described as:
- Meeting "all performance test criteria."
- Ensuring "all functions are operating without errors."
- Meeting test standards for estimating quantitative measurement error.
- Meeting test standards for estimating segmentation algorithm error.
- Reported Device Performance: The reported performance is a qualitative statement of meeting these unquantified criteria. No specific metrics or numerical results are provided in this summary.
| Acceptance Criterion (inferred from text) | Reported Device Performance |
|---|---|
| Meets all performance test criteria | Confirmed to meet. |
| All functions operate without errors | Confirmed to work without errors. |
| Meets standards for quantitative measurement error estimation | Confirmed to meet. |
| Meets standards for segmentation algorithm error estimation | Confirmed to meet. |
| Satisfies design intent and is equivalent to predicate device | Confirmed to satisfy design intent and be equivalent to the predicate. |
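The summary does not say how "quantitative measurement error" was computed. As a hedged illustration only (not the sponsor's actual method), error estimation of this kind is often reported as mean absolute error and mean percent error of algorithm measurements against reference values; all names and values below are hypothetical:

```python
# Hypothetical sketch of quantitative measurement error estimation:
# compare algorithm-reported measurements (e.g., vessel diameters in mm)
# against reference measurements; report mean absolute and percent error.

def measurement_errors(measured, reference):
    """Return (mean absolute error, mean percent error) for paired values."""
    if len(measured) != len(reference) or not measured:
        raise ValueError("need equal-length, non-empty sequences")
    abs_errors = [abs(m - r) for m, r in zip(measured, reference)]
    pct_errors = [100.0 * e / r for e, r in zip(abs_errors, reference)]
    return sum(abs_errors) / len(abs_errors), sum(pct_errors) / len(pct_errors)

# Toy example with made-up diameters (mm):
mae, mpe = measurement_errors([3.1, 2.8, 4.0], [3.0, 3.0, 4.2])
```

A real test standard would additionally fix an acceptance threshold (e.g., a maximum tolerated percent error), which this document does not disclose.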
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size: The document does not specify the sample size (number of cases or images) used for the non-clinical performance testing.
- Data Provenance: The document does not specify the country of origin of the data or whether it was retrospective or prospective. It only states that the device accepts "DICOM compliant medical images acquired from imaging device (Computed Tomography)."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not provide information on the number or qualifications of experts used to establish ground truth for the non-clinical test set. The testing appears to be primarily software verification against pre-defined performance and error estimation standards.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not describe any adjudication method for establishing ground truth, as it does not mention human readers or expert consensus for ground truth generation in the non-clinical testing. It notes that "Test results were reviewed by designated technical professionals," but this refers to reviewing the software test outcomes, not establishing medical ground truth for image interpretation.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
The document explicitly states: "No clinical studies were considered necessary and performed." Therefore, no MRMC comparative effectiveness study was done.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, the non-clinical performance testing described appears to be a standalone (algorithm-only) evaluation. The tests were conducted to "confirm that AutoSeg-H meets all performance test criteria and all functions are operating without errors," and to confirm that AutoSeg-H meets test standards for estimating "the quantitative measurement error" and "segmentation algorithm error." This suggests an evaluation of the algorithm's output against defined standards or computational accuracy, rather than human-in-the-loop performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The document largely discusses testing against "performance test criteria" and test standards for estimating "the quantitative measurement error" and "segmentation algorithm error." This implies that the "ground truth" for the non-clinical tests would involve computational benchmarks, reference measurements, or predefined correct segmentations, rather than medical ground truth established by expert consensus, pathology, or outcomes data. The core technology is described as an "advanced active contour algorithm with entropy regularization," suggesting validation against known segmentation accuracy targets.
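Comparison against "predefined correct segmentations" is commonly scored with an overlap metric such as the Dice similarity coefficient. The document does not state which metric was used; the sketch below is a generic illustration, with a toy voxel set as the hypothetical input:

```python
# Hypothetical sketch of segmentation error estimation: score an
# algorithm-produced binary mask against a reference (predefined
# correct) mask using the Dice similarity coefficient.

def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two sets of voxel coordinates."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both empty: perfect agreement by convention
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))

# Toy example: algorithm voxels vs. reference voxels.
score = dice_coefficient({(0, 0), (0, 1), (1, 1)}, {(0, 1), (1, 1), (1, 0)})
# 2 * 2 / (3 + 3) = 2/3
```

A test standard would then compare such scores against a minimum acceptable value, which this summary does not report.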
8. The sample size for the training set
The document does not provide any information regarding the training set's sample size.
9. How the ground truth for the training set was established
The document does not provide any information regarding how the ground truth for the training set was established.
(21 days)
MEDIC, INC.
Soft tissue and bone imaging of both lower extremities simultaneously as allowed by the MRI system. Magnetic resonance peripheral angiography.
The 1.5T PV Array, Catalog #155GE1501, interfaces with the G.E. 1.5 Tesla Signa® system. It has been designed and optimized to collect peripheral vascular image data in three overlapping coil groups. The multi-channel design utilizes the G.E. Phased Array Coil inputs and standard coil configuration files available on the G.E. Signa® Plasma or Mouse-Driven Screen. The coil form geometry has been shaped to facilitate close coupling of the imaging coil's region of sensitivity to the anatomy of interest. The coil assembly comes with a comfort pad set to comfortably place the patient on the coil assembly.
This document is a 510(k) premarket notification for a medical device, specifically an MRI accessory coil. It focuses on demonstrating substantial equivalence to a predicate device rather than presenting a study for meeting specific acceptance criteria. Therefore, most of the requested information regarding acceptance criteria and a study to prove the device meets them, especially in the context of AI/algorithm performance, is not available in this type of submission.
Here's a breakdown of the available information based on your request, along with explanations for the missing elements:
1. A table of acceptance criteria and the reported device performance
This information is not present in the provided 510(k) summary. For MRI coils, acceptance criteria typically relate to performance metrics like signal-to-noise ratio (SNR), image uniformity, and artifact levels, often compared directly to the predicate device or established standards. However, these specific metrics and their performance results are not detailed in this summary, which focuses on the regulatory submission.
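As a hedged illustration of the kind of metric such coil testing typically reports (not taken from this submission), SNR is often estimated from a phantom image as the mean signal in a region of interest divided by the standard deviation of a background (air) region; formal methods such as NEMA MS 1 differ in detail. All pixel values below are made up:

```python
import statistics

# Hypothetical sketch of a phantom-based SNR estimate for coil testing:
# SNR = mean signal in a signal ROI / standard deviation of a noise ROI.
# Actual acceptance methods (e.g., NEMA MS 1) are more involved.

def snr(signal_roi, noise_roi):
    """Estimate SNR from pixel intensities in signal and noise ROIs."""
    noise_sd = statistics.pstdev(noise_roi)
    if noise_sd == 0:
        raise ValueError("noise ROI has zero variance")
    return statistics.fmean(signal_roi) / noise_sd

# Toy example: mean signal 100, noise SD 2 -> SNR = 50.0
value = snr([100, 102, 98, 100], [2, -2, 2, -2])
```

In a coil 510(k), such measurements would typically be compared side by side against the predicate coil rather than against an absolute threshold.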
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
This information is not provided in the document. 510(k) summaries for devices like MRI coils generally do not detail "test sets" or "data provenance" in the way an AI/algorithm study would. Instead, performance is often demonstrated through engineering testing, phantom studies, and possibly limited human subject imaging (but not typically a large-scale clinical trial with detailed sample information for proving performance against set criteria).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
This is not applicable and not provided. Establishing ground truth by experts is a concept relevant to diagnostic algorithms or AI systems where human interpretation is the benchmark. For an MRI coil, performance is assessed through technical measurements and image quality evaluations, not by expert consensus on diagnostic outcomes based on a "test set" in the traditional sense.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
This is not applicable and not provided. Adjudication methods are used in studies involving human readers or interpretations, primarily in clinical trials or studies evaluating diagnostic accuracy, which is not the primary focus of this 510(k) submission for an MRI coil.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
This is not applicable and not provided. This document describes an MRI accessory coil, not an AI or algorithm-driven diagnostic system. Therefore, an MRMC study related to AI assistance is irrelevant to this submission.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
This is not applicable and not provided. This document describes an MRI accessory coil, not an algorithm or AI system.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
This is not applicable and not provided. As explained above, the concept of "ground truth" as used in diagnostic studies (e.g., from pathology or outcomes) is not relevant to the technical performance evaluation of an MRI coil in a 510(k) submission. Performance is typically a comparison of physical specifications and imaging characteristics to a predicate device.
8. The sample size for the training set
This is not applicable and not provided. Training sets are used for machine learning algorithms, which are not described in this 510(k) submission for an MRI coil.
9. How the ground truth for the training set was established
This is not applicable and not provided. As explained above, training sets and their ground truth are concepts relevant to AI/ML development, which is outside the scope of this particular medical device submission.
Summary of what the document does provide:
- Device Description: The 1.5T PV Array, Catalog #155GE1501, interfaces with the G.E. 1.5 Tesla Signa® system. It's designed to collect peripheral vascular image data in three overlapping coil groups, utilizing G.E. Phased Array Coil inputs. It has a coil form geometry for close coupling to the anatomy and includes a comfort pad set.
- Intended Use: Soft tissue and bone imaging of both lower extremities simultaneously, and magnetic resonance peripheral angiography.
- Substantial Equivalence: The document asserts that the modified device has the same technological characteristics as the unmodified device, with only minor changes to the size and physical orientation of individual elements. Materials, use, and safety features are equivalent. Both are receive-only MRI antennas.
- Predicate Device: Unmodified Device Tradename: Array Coil Model # 100GE1500 (Lower Extremity Quadrature Detection), 510(k) No. K933659.
- Regulatory Classification: Class II/Radiology/LNH, Product Code 90 MOS.
Conclusion:
The provided text is a 510(k) summary for an MRI accessory coil. Its purpose is to demonstrate substantial equivalence to a predicate device for regulatory clearance, not to present a detailed study on meeting specific diagnostic acceptance criteria with a rigorous clinical trial or AI performance evaluation. Therefore, most of the questions regarding acceptance criteria, study design, sample sizes, expert involvement, and ground truth establishment (especially in the context of AI) are not addressed in this type of document.
(167 days)
TECHNIMECA MEDIC, INC.
(98 days)
TECHNIMECA MEDIC, INC.