Search Results
Found 5 results
510(k) Data Aggregation
(73 days)
ProFound AI® V3.0 is a computer-assisted detection and diagnosis (CAD) software device intended to be used concurrently by interpreting physicians while reading digital breast tomosynthesis (DBT) exams from compatible DBT systems. The system detects soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in the 3D DBT slices. The detections and Certainty of Finding and Case Scores assist interpreting physicians in identifying soft tissue densities and calcifications that may be confirmed or dismissed by the interpreting physician.
The ProFound AI® V3.0 device detects malignant soft-tissue densities and calcifications in digital breast tomosynthesis (DBT) images. The ProFound AI V3.0 software allows an interpreting physician to quickly identify suspicious soft tissue densities and calcifications by marking the detected areas in the tomosynthesis images. When the ProFound AI V3.0 marks are displayed by a user, the marks will appear as overlays on the tomosynthesis images. Each detected finding will also be assigned a "score" that corresponds to the ProFound AI V3.0 algorithm's confidence that the detected finding is a cancer (Certainty of Finding). Certainty of Finding scores are a percentage in the range of 0% to 100% to indicate the CAD's confidence that the finding is malignant. ProFound AI V3.0 also assigns a score to each case (Case Score) as a percentage in the range of 0% to 100% to indicate the CAD's confidence that the case has malignant findings. The higher the Certainty of Finding or Case Score, the higher the confidence that the detected finding is a cancer or that the case has malignant findings.
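The summary does not say how the per-finding Certainty of Finding scores relate to the Case Score. A common convention for case-level CAD scores is to let the most suspicious finding drive the case; the sketch below illustrates that convention only, as an assumption, not iCAD's documented aggregation rule.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One detected region and the algorithm's malignancy confidence (0-100%)."""
    slice_index: int
    certainty_of_finding: float  # percent, 0 to 100

def case_score(findings: list[Finding]) -> float:
    """Illustrative case score: the most suspicious finding drives the case.

    The actual aggregation used by ProFound AI is not described in the
    510(k) summary; max-over-findings is one plausible convention.
    """
    if not findings:
        return 0.0  # no detections: lowest confidence that the case is malignant
    return max(f.certainty_of_finding for f in findings)

findings = [Finding(slice_index=12, certainty_of_finding=35.0),
            Finding(slice_index=40, certainty_of_finding=82.0)]
print(case_score(findings))  # 82.0
```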
The provided text describes specific acceptance criteria and the study conducted to demonstrate that ProFound AI® Software V3.0 meets these criteria.
1. Table of Acceptance Criteria and Reported Device Performance
The document states that the "Indications for Use" remain unchanged from the predicate (unmodified) device, ProFound AI V2.1, and that the "technological characteristics of Modified Device, ProFound AI V3.0 remain unchanged from Unmodified Device ProFound AI V2.1 as the predicate." The key improvement in V3.0 is "software improvements leading to improved specificity for GE and Hologic modalities."
While specific numerical acceptance criteria (e.g., minimum sensitivity, minimum specificity) are not explicitly stated in a table format with target thresholds, the performance is assessed relative to the predicate device (ProFound AI V2.0/V2.1). The primary performance improvements demonstrated are in specificity.
| Acceptance Criterion (Implicitly based on Predicate Equivalence) | Reported Device Performance (ProFound AI V3.0) |
|---|---|
| Non-inferiority in case sensitivity vs. ProFound AI V2.0/V2.1 | Hologic DBT: Non-inferiority of the standalone performance of ProFound AI V3.0 on a Hologic DBT screening population, relative to the baseline performance of ProFound AI V2 on a Hologic DBT screening population, was concluded in terms of case sensitivity, FP rate per 3D volume, and AUC. Claims established in the original Reader Study (K182373) apply to ProFound AI V3.0 with Hologic DBT. GE DBT: Non-inferiority of the standalone performance of ProFound AI V3.0 on a GE DBT screening population, relative to the baseline performance of ProFound AI V2 on a Hologic DBT screening population, was concluded in terms of case sensitivity, FP rate per 3D volume, and AUC. Claims established in the original Reader Study (K182373) apply to ProFound AI V3.0 with GE DBT. |
| Improved specificity for GE and Hologic modalities | Hologic DBT: A paired comparison demonstrated a significant increase in specificity from V2.0 to V3.0. GE DBT: A paired comparison demonstrated a significant increase in specificity from V2.0 to V3.0. |
| Retention of original Indications for Use | Unchanged from ProFound AI V2.1. |
| Non-raising of different questions of safety and effectiveness | "These changes do not raise different questions of safety and effectiveness." |
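The "paired comparison" rows in the table above do not name the statistical test used. When two algorithm versions are scored on the same set of non-cancer cases, McNemar's test on the discordant recall decisions is the standard paired analysis; the following is a minimal sketch under that assumption, with synthetic flags standing in for the undisclosed data.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)

# Synthetic paired outputs on the same non-cancer cases (1 = case flagged).
v2_flags = rng.integers(0, 2, size=500)
v3_flags = np.where(rng.random(500) < 0.7, 0, v2_flags)  # V3 flags fewer cases

# 2x2 table of paired (V2, V3) decisions; off-diagonals are discordant pairs.
table = np.array([
    [np.sum((v2_flags == 0) & (v3_flags == 0)), np.sum((v2_flags == 0) & (v3_flags == 1))],
    [np.sum((v2_flags == 1) & (v3_flags == 0)), np.sum((v2_flags == 1) & (v3_flags == 1))],
])
result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs
print(f"McNemar p-value: {result.pvalue:.3g}")
```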
2. Sample Size Used for the Test Set and Data Provenance
The document refers to a "ProFound AI V2 Pivotal Reader Study Clinical Study Report (CSR) (K182373)". This reader study, performed for the predicate device, is the basis for the claims applicable to V3.0 regarding non-inferiority in sensitivity.
For the Hologic DBT Non-clinical Validation Testing and GE DBT Non-clinical Validation Testing for V3.0 itself, the document states, "A paired comparison assessed the performance of ProFound AI V3.0 on [Hologic/GE] DBT images to the performance of ProFound AI V2.0 on the same set of [Hologic/GE] DBT images," implying that each specificity comparison used a fixed set of images from the respective modality, with outputs from both V2.0 and V3.0 generated on the same images.
- Sample Size: The exact number of cases or images in the test set specifically for the V3.0 validation studies (Hologic and GE paired comparisons) is not explicitly stated in the provided text. The non-inferiority claims rely on the original K182373 study, but its sample size is also not detailed here.
- Data Provenance: The document does not specify the country of origin for the data. Since the device is U.S. FDA cleared, it is plausible the data are from the US, but this is not confirmed. The studies are described as "Non-clinical Validation Testing" and "Supplemental Standalone Study," indicating retrospective analyses of previously collected images.
3. Number of Experts Used to Establish Ground Truth and Qualifications
The provided text does not explicitly state the number of experts used to establish the ground truth or their specific qualifications for the test sets. It references "the original Reader Study described in 0074-6003. PowerLook® Tomo Detection V2 Pivotal Reader Study Clinical Study Report (CSR) (K182373)", which would have involved radiologists, but details are not provided here.
4. Adjudication Method for the Test Set
The adjudication method for establishing ground truth for the test sets is not explicitly stated in the provided text.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done?
- The document states that the claims established in the original Reader Study (K182373) for the predicate device (V2) apply to ProFound AI V3.0. This original study was likely an MRMC study to support the human-in-the-loop performance of V2.
- For V3.0 itself, the validation focuses on standalone performance comparisons between V2.0 and V3.0 to demonstrate non-inferiority in sensitivity and improvement in specificity. A new human reader study was not conducted specifically for V3.0 to re-evaluate human reader improvement.
- Effect size of human readers improving with AI vs. without AI assistance: This information is not provided for V3.0, as the primary validation focused on the standalone performance of the algorithm and its non-inferiority/specificity improvement over its predecessor. The predicate device's MRMC study (K182373) would contain this information for V2.
6. Standalone (i.e., algorithm only without human-in-the-loop performance) Study
- Yes, standalone studies were done. The document explicitly refers to "ProFound AI V3.0 Hologic supplemental Standalone Study" and "ProFound AI V3.0 GE Supplemental Standalone Study." These studies compared the performance of V3.0 with V2.0/V2.1 on the same image sets.
- The performance metrics assessed in these standalone studies included (a computation sketch follows the list):
- Case sensitivity
- False positive (FP) rate per 3D volume
- Area Under the localized Receiver Operating Characteristic (ROC) Curve (AUC)
- Specificity (which was shown to have a significant increase)
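As referenced above, these standalone metrics reduce to simple ratios over per-case labels and outputs. A minimal sketch with hypothetical arrays (not iCAD's data):

```python
import numpy as np

# Hypothetical per-case data: ground-truth label, algorithm recall decision,
# and number of false-positive marks in each 3D DBT volume.
is_cancer = np.array([1, 1, 1, 0, 0, 0, 0, 0])
flagged = np.array([1, 1, 0, 1, 0, 0, 0, 0])
fp_marks = np.array([0, 1, 0, 2, 0, 1, 0, 0])

case_sensitivity = flagged[is_cancer == 1].mean()  # flagged cancers / all cancers
specificity = 1 - flagged[is_cancer == 0].mean()   # unflagged non-cancers / all non-cancers
fp_rate_per_volume = fp_marks[is_cancer == 0].mean()

print(f"case sensitivity: {case_sensitivity:.3f}")    # 0.667
print(f"specificity:      {specificity:.3f}")         # 0.800
print(f"FP marks/volume:  {fp_rate_per_volume:.3f}")  # 0.600
```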
7. Type of Ground Truth Used
The type of ground truth used is not explicitly detailed in the provided text. However, for breast cancer detection, ground truth for such studies typically involves:
- Expert Consensus: Multiple radiologists reviewing cases and reaching agreement.
- Pathology: Biopsy-proven presence or absence of malignancy.
- Follow-up Outcomes Data: Clinical follow-up over time to confirm benign or malignant status.
Given that the device detects "malignant soft-tissue densities and calcifications," it is highly likely that pathology (biopsy results) and/or expert radiologist consensus with follow-up were used to establish definitive ground truth regarding the presence and nature of cancers.
8. Sample Size for the Training Set
The document does not specify the sample size used for the training set of ProFound AI V3.0. It mentions that V3.0 uses "deep learning technology to process feature computations and uses pattern recognition to identify suspicious breast lesions," which implies a training phase, but details about the training data are absent.
9. How the Ground Truth for the Training Set Was Established
The document does not specify how the ground truth for the training set was established. Similar to the test set, it would likely involve expert annotations, pathology, and/or follow-up data.
(70 days)
ProFound™ AI V2.1 Software is a computer-assisted detection and diagnosis (CAD) software device intended to be used concurrently by interpreting physicians while reading digital breast tomosynthesis (DBT) exams from compatible DBT systems. The system detects soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in the 3D DBT slices. The detections and Certainty of Finding and Case Scores assist interpreting physicians in identifying soft tissue densities and calcifications that may be confirmed or dismissed by the interpreting physician.
ProFound AI V2.1 detects malignant soft-tissue densities and calcifications in digital breast tomosynthesis (DBT) images. ProFound AI V2.1 has the same performance with the DBT systems cleared for use with ProFound AI V2; furthermore, it provides support for additional DBT systems. The ProFound AI V2.1 Software allows a radiologist to quickly identify suspicious soft tissue densities (masses, architectural distortions and asymmetries) and calcifications by marking the detected areas in the tomosynthesis images. When the ProFound AI V2.1 marks are displayed, the marks will appear as overlays on the 3D tomosynthesis images. For 3D tomosynthesis cases and depending on the functionality offered by the viewing/reading application, the ProFound AI V2.1 marks may also serve as a navigation tool for users because each mark can be linked to the tomosynthesis slice where the detection was identified. Each detected region is also assigned a "score" that corresponds to the ProFound AI V2.1 algorithm's confidence that the detected region is malignant (certainty of finding). Each case is also assigned a case score that corresponds to the ProFound AI V2.1 algorithm's confidence that a case is malignant. The certainty of finding scores are represented as an integer in the range of 0 to 100 to indicate the CAD confidence that the detected region or case is malignant. The higher the certainty of finding or case score, the more likely the detected region or case is to be malignant.
Here’s a summary of the acceptance criteria and the study details for the ProFound™ AI Software V2.1, based on the provided FDA 510(k) summary.
1. Table of Acceptance Criteria and Reported Device Performance
The document states that "Case-Level Sensitivity, Lesion-Level Sensitivity, FP Rate in Non-Cancer Cases, and Specificity met design specifications" for both Siemens Standard and Empire Reconstruction datasets. However, the specific numerical acceptance criteria are not explicitly provided in the text. The document refers to "design specifications" and "the detailed results are in the User Manual," implying these numerical targets exist but are not included in the 510(k) summary provided.
For the comparison studies, the acceptance criterion was "the difference between the control group [Hologic] and the test group [Siemens Standard/Empire] is within the margin of non-inferiority for Sensitivity and AUC, and FPPI." The reported performance was that "Each of the three measures produced differences that were within the margin of non-inferiority." Again, specific numerical margins for non-inferiority are not detailed.
| Acceptance Criteria (Not explicitly stated numerically, but implied) | Reported Device Performance (Met criteria) |
|---|---|
| Standalone Performance: | |
| Case-Level Sensitivity meets design specifications | Met design specifications (for both Siemens Standard and Empire Reconstruction) |
| Lesion-Level Sensitivity meets design specifications | Met design specifications (for both Siemens Standard and Empire Reconstruction) |
| FP Rate in Non-Cancer Cases meets design specifications | Met design specifications (for both Siemens Standard and Empire Reconstruction) |
| Specificity meets design specifications | Met design specifications (for both Siemens Standard and Empire Reconstruction) |
| Non-Inferiority Comparison (vs. Hologic): | |
| Difference in Sensitivity (Siemens vs. Hologic) within non-inferiority margin | Within the margin of non-inferiority (for both Siemens Standard and Empire Reconstruction) |
| Difference in FPPI (Siemens vs. Hologic) within non-inferiority margin | Within the margin of non-inferiority (for both Siemens Standard and Empire Reconstruction) |
| Difference in AUC (Siemens vs. Hologic) within non-inferiority margin | Within the margin of non-inferiority (for both Siemens Standard and Empire Reconstruction) |
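To make the non-inferiority logic in the table concrete: the difference in a metric (test minus control) is estimated with a confidence interval, and non-inferiority holds when the interval's lower bound stays above −δ, the pre-specified margin. A minimal sketch using a Wald interval and hypothetical counts, since neither the margins nor the raw counts are disclosed:

```python
import numpy as np

def noninferior(hits_test, n_test, hits_ctrl, n_ctrl, delta, z=1.96):
    """Wald-interval non-inferiority check for a difference in proportions.

    Non-inferior if the lower CI bound for (p_test - p_ctrl) exceeds -delta.
    """
    p_t, p_c = hits_test / n_test, hits_ctrl / n_ctrl
    diff = p_t - p_c
    se = np.sqrt(p_t * (1 - p_t) / n_test + p_c * (1 - p_c) / n_ctrl)
    lower = diff - z * se
    return diff, lower, lower > -delta

# Hypothetical: Siemens sensitivity 210/238 vs. Hologic control 200/235, delta = 0.05.
diff, lower, ok = noninferior(210, 238, 200, 235, delta=0.05)
print(f"diff = {diff:.3f}, CI lower bound = {lower:.3f}, non-inferior: {ok}")
```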
2. Sample Size Used for the Test Set and Data Provenance
- Siemens Standard Reconstruction Dataset:
- Sample Size: 694 cases (238 cancer, 456 non-cancer)
- Provenance: Not explicitly stated (e.g., country of origin). The study is described as a "screening population dataset," and a "stratified bootstrap procedure was used to estimate performance over a screening patient population," suggesting the dataset is representative of a screening population (see the bootstrap sketch after this list). Whether it is retrospective or prospective is not explicitly stated, but "dataset consisted of" typically implies retrospective collection for testing.
- Siemens Empire Reconstruction Dataset:
- Sample Size: 322 cases (140 cancer, 182 non-cancer)
- Provenance: Not explicitly stated (e.g., country of origin). Similar to the Standard Reconstruction dataset, it is described as a "screening population dataset," implying it is collected for screening purposes. Whether it's retrospective or prospective is not explicitly stated, but "dataset consisted of" typically implies retrospective collection for testing.
- Hologic (Control Group for Comparison): The document references "baseline performance of ProFound AI for DBT V2.0 with Hologic DBT images." While a control group is mentioned, the specific sample size for the Hologic dataset used in the comparison is not provided in this excerpt, only that the performance was used as a reference for non-inferiority.
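As noted in the provenance bullets above, performance was estimated with a stratified bootstrap: cancer and non-cancer cases are resampled separately, so every replicate keeps the stratum sizes while yielding a sampling distribution for sensitivity and specificity. A minimal sketch with synthetic per-case outcomes (the stratum sizes match the Siemens Standard dataset, but the rates are invented):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic per-case correctness flags (1 = correct algorithm decision).
cancer_detected = rng.random(238) < 0.88   # 238 cancer cases, ~0.88 sensitivity
noncancer_clear = rng.random(456) < 0.70   # 456 non-cancer cases, ~0.70 specificity

def stratified_bootstrap(cancer, noncancer, n_boot=2000):
    """Resample each stratum with replacement; return per-replicate metrics."""
    sens = np.empty(n_boot)
    spec = np.empty(n_boot)
    for i in range(n_boot):
        sens[i] = rng.choice(cancer, size=cancer.size, replace=True).mean()
        spec[i] = rng.choice(noncancer, size=noncancer.size, replace=True).mean()
    return sens, spec

sens, spec = stratified_bootstrap(cancer_detected, noncancer_clear)
print("sensitivity 95% CI:", np.percentile(sens, [2.5, 97.5]).round(3))
print("specificity 95% CI:", np.percentile(spec, [2.5, 97.5]).round(3))
```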
3. Number of Experts Used to Establish Ground Truth and Qualifications
The document does not explicitly state the number of experts used or their qualifications for establishing ground truth for the test sets.
4. Adjudication Method for the Test Set
The document does not explicitly state the adjudication method used for the test sets (e.g., 2+1, 3+1).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study (readers with vs. without AI assistance) is not described in this document. The studies presented are standalone performance evaluations of the AI system and non-inferiority comparisons of the AI system's performance across different DBT acquisition systems. The "concurrently by interpreting physicians" wording in the indications for use implies a human-in-the-loop interaction, but a specific MRMC study quantifying human improvement with AI is not detailed here.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
Yes, standalone (algorithm only without human-in-the-loop performance) studies were done.
- The "ProFound AI for DBT V2.1 Siemens Standard Screening Population Dataset" study explicitly states: "Standalone testing was performed on tomosynthesis slices only."
- Similarly, the "ProFound AI for DBT V2.1 Siemens Empire Screening Population Dataset" study states: "Standalone testing was performed on tomosynthesis slices only."
- The comparison studies ("Standalone Hologic Comparison Test Results") also involve comparing "the standalone performance of ProFound AI for DBT V2.0 with Hologic DBT images to the performance of ProFound AI for DBT V2.1 with Siemens Standard/Empire Reconstruction DBT images."
7. Type of Ground Truth Used
The type of ground truth used is not explicitly stated in this excerpt. However, in the context of screening population datasets for cancer detection, ground truth is typically established by:
- Pathology (biopsy results) for positive cases.
- Long-term follow-up (e.g., 1-2 years of negative imaging) for negative cases.
8. Sample Size for the Training Set
The document does not specify the sample size used for the training set.
9. How the Ground Truth for the Training Set Was Established
The document does not specify how the ground truth for the training set was established. It only mentions that the "ProFound AI 2.1 algorithm uses deep learning technology to process feature computations and uses pattern recognition to identify suspicious breast lesions." This implies a training process based on labeled data, but details about the origin and establishment of those labels are not provided in this excerpt.
(97 days)
PowerLook® Tomo Detection V2 Software is a computer-assisted detection and diagnosis (CAD) software device intended to be used concurrently by interpreting physicians while reading digital breast tomosynthesis (DBT) exams from compatible DBT systems. The system detects soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in the 3D DBT slices. The detections and Certainty of Finding and Case Scores assist interpreting physicians in identifying soft tissue densities and calcifications that may be confirmed or dismissed by the interpreting physician.
PLTD V2 detects malignant soft-tissue densities and calcifications in digital breast tomosynthesis (DBT) images. The PLTD V2 software allows an interpreting physician to quickly identify suspicious soft tissue densities and calcifications by marking the detected areas in the tomosynthesis images. When the PLTD V2 marks are displayed by a user, the marks will appear as overlays on the tomosynthesis images. The PLTD V2 marks also serve as a navigation tool for users, because each mark is linked to the tomosynthesis plane where the detection was identified. Users can navigate to the plane associated with each mark by clicking on the detection mark. Each detected region will also be assigned a "score" that corresponds to the PLTD V2 algorithm's confidence that the detected region is a cancer (Certainty of Finding Score). Certainty of Finding scores are relative scores assigned to each detected region, and a Case Score is assigned to each case regardless of the number of detected regions. Certainty of Finding and Case Scores are computed by the PLTD V2 algorithm and represent the algorithm's confidence that a specific finding or case is malignant. The scores are represented on a 0% to 100% scale. Higher scores represent a higher algorithm confidence that a finding or case is malignant; lower scores represent a lower algorithm confidence.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text.
1. Acceptance Criteria and Reported Device Performance
The device is a Computer-Assisted Detection and Diagnosis (CAD) software for digital breast tomosynthesis (DBT) exams. The acceptance criteria are largely demonstrated through the multi-reader multi-case (MRMC) pivotal reader study and standalone performance evaluations.
Table of Acceptance Criteria and Reported Device Performance:
| Criteria Category | Metric | Acceptance Criteria (Implied / Stated) | Reported Device Performance (with CAD vs. without CAD) |
|---|---|---|---|
| Pivotal Reader Study (Human-in-the-Loop) | |||
| Radiologist Performance | Case-level Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) | Non-inferiority to radiologist performance without CAD. Implicit superiority is also a desirable outcome. | AUC with CAD: 0.852; AUC without CAD: 0.795; average difference: 0.057 (95% CI: 0.028, 0.087); p < 0.01 (non-inferiority and difference) |
| Radiologist Reading Time | Reading Time | Superiority (shorter time) compared to reading without CAD. | Reading time improved 52.7% with CAD (95% CI: 41.8%, 61.5%; p < 0.01). |
| Sensitivity (Case-Level) | Case-level Sensitivity | Non-inferiority (margin delta = 0.05). Implicit superiority is also a desirable outcome. | Average case-level sensitivity with CAD: 0.850; without CAD: 0.770; average increase: 0.080 (95% CI: 0.026, 0.134); p < 0.01 (non-inferiority and difference) |
| Sensitivity (Lesion-Level) | Lesion-level Sensitivity | Non-inferiority (margin delta = 0.05). Implicit superiority is also a desirable outcome. | Average per-lesion sensitivity with CAD: 0.853; without CAD: 0.769; average increase: 0.084 (95% CI: 0.029, 0.139); p < 0.01 (non-inferiority and difference) |
| Specificity (Case-Level) | Case-level Specificity | Non-inferiority (margin delta = 0.05). | Specificity with CAD: 0.696; without CAD: 0.627; average increase: 0.069 (95% CI: 0.030, 0.108); p < 0.01 (non-inferiority) |
| Recall Rate (Case-Level) | Recall Rate in Non-Cancer Cases | Non-inferiority (margin delta = 0.05). Lower recall rates are better. | Average recall rate with CAD: 0.309; without CAD: 0.380; average reduction: 0.072 (95% CI: 0.031, 0.112); p < 0.01 (non-inferiority) |
| Standalone Performance (Algorithm Only) | |||
| Hologic DBT (Screening) | Case-Level Sensitivity, Lesion-Level Sensitivity, FP Rate in Non-Cancer Cases, Specificity | Met design specifications (details in User Manual, not specified here). | Met design specifications (specific values not provided in document, but deemed acceptable). |
| GE DBT (Screening) | Case-Level Sensitivity, Lesion-Level Sensitivity, FP Rate in Non-Cancer Cases, Specificity | Met design specifications (details in User Manual, not specified here). | Met design specifications (specific values not provided in document, but deemed acceptable). |
| GE vs. Hologic DBT | Sensitivity, FPPI, AUC (non-inferiority) | Non-inferiority between GE DBT and Hologic DBT performance within specified margins. | Differences in Sensitivity, FPPI, and AUC were within the margin of non-inferiority (specific values not provided, but deemed acceptable). |
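The reader-averaged AUC figures in the table (0.852 with CAD vs. 0.795 without) are the kind of quantity an MRMC analysis estimates; full MRMC methods such as Obuchowski-Rockette additionally model reader and case variance to produce the confidence intervals. The sketch below computes only the reader-averaged point estimate, on synthetic scores rather than the study data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n_readers, n_cases = 24, 260                # matches the pivotal study design
labels = rng.integers(0, 2, size=n_cases)   # synthetic cancer status per case

def reader_scores(separation):
    """Synthetic per-reader suspicion scores; larger separation = better reads."""
    return labels * separation + rng.normal(size=(n_readers, n_cases))

scores_without_cad = reader_scores(separation=1.2)
scores_with_cad = reader_scores(separation=1.6)

auc_without = np.mean([roc_auc_score(labels, s) for s in scores_without_cad])
auc_with = np.mean([roc_auc_score(labels, s) for s in scores_with_cad])
print(f"mean AUC without CAD: {auc_without:.3f}")
print(f"mean AUC with CAD:    {auc_with:.3f}")
print(f"reader-averaged difference: {auc_with - auc_without:.3f}")
```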
2. Sample Sizes and Data Provenance
-
Pivotal Reader Study (Test Set):
- Sample Size: 260 Hologic digital breast tomosynthesis (DBT) cases. This included 65 cancer cases with 66 malignant lesions.
- Data Provenance: Retrospective. The country of origin is not explicitly stated, but given the FDA submission, it's likely primarily from the US or regions with similar regulatory standards.
-
Standalone Performance Studies (Test Set):
- Hologic DBT: 655 Hologic DBT cases, including 235 cancer cases with 242 malignant lesions.
- GE DBT: 610 GE DBT cases, including 204 cancer cases with 221 malignant lesions.
- Data Provenance: Retrospective. No country of origin specified.
-
Training Set Sample Size:
- The document does not explicitly state the sample size for the training set. It mentions the algorithm uses "deep learning technology," implying a substantial training set would have been used.
3. Number of Experts and Qualifications for Test Set Ground Truth
-
Pivotal Reader Study & Standalone Studies: The document does not explicitly state the number of experts used to establish the ground truth for the test sets. However, for the pivotal reader study, it states: "The purpose of the pivotal study was to compare clinical performance of radiologists using CAD... to that of radiologists using DBT without CAD." This implies the radiologists in the study were the interpreting physicians who would typically establish ground truth in a clinical setting based on their diagnostic judgment and follow-up (biopsy, etc.). It mentions that the ground truth for cancer cases required "malignant lesion localization."
-
Qualifications of Experts: The readers in the pivotal study were "24 tomosynthesis radiologist readers." While specific experience levels (e.g., "10 years of experience") are not provided, "radiologist" implies medical doctors specialized in radiology, typically with several years of post-medical school training and board certification.
4. Adjudication Method for Test Set
The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1 consensus) for establishing the ground truth for the test sets. It implies that the ground truth for malignancy was established via biopsy or other confirmed pathology (referred to as "malignant lesions" and "cancer cases"). For the reader study, the "ground truth" for evaluating reader performance was the confirmed cancer status of the cases and lesions.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Yes, an MRMC comparative effectiveness study was done. This was the "Pivotal Reader Study."
- Effect Size of Human Reader Improvement with AI vs. Without AI Assistance:
- AUC: Radiologists' per-subject average AUC improved by 0.057 when using CAD (0.852 with CAD vs. 0.795 without CAD).
- Reading Time: Reading time improved 52.7% with CAD.
- Case-Level Sensitivity: Average sensitivity increased by 0.080 (0.850 with CAD vs. 0.770 without CAD).
- Lesion-Level Sensitivity: Average sensitivity increased by 0.084 (0.853 with CAD vs. 0.769 without CAD).
- Case-Level Specificity: Average specificity increased by 0.069 (0.696 with CAD vs. 0.627 without CAD).
- Recall Rate in Non-Cancer Cases: Average recall rate was reduced by 0.072 (0.309 with CAD vs. 0.380 without CAD).
6. Standalone Performance (Algorithm Only without Human-in-the-Loop Performance)
- Yes, standalone studies were done.
- A standalone study was conducted for Hologic DBT cases (N=655).
- A standalone study was conducted for GE DBT cases (N=610).
- A comparison of performance between GE DBT and Hologic DBT was also performed to show non-inferiority.
7. Type of Ground Truth Used
The ground truth used for the cancer cases (in both standalone and reader studies) appears to be pathology-confirmed malignancy (referred to as "cancer cases" and "malignant lesions"). The document states that "malignant lesion localization was required for a reader to correctly detect cancer in a case," implying confirmed cancerous findings.
8. Sample Size for the Training Set
The document does not specify the sample size for the training set. It mentions the use of "deep learning technology," which typically requires large datasets for training.
9. How Ground Truth for the Training Set Was Established
The document does not explicitly state how the ground truth for the training set was established. Given it's a deep learning-based CAD system for cancer detection, it's highly probable that ground truth for the training data would also have been established through pathology-confirmed diagnoses (biopsy results) and/or by expert radiologists' review and consensus with prolonged follow-up to confirm benign status.
(73 days)
The Axxent® Electronic Brachytherapy System Model 110 XP 1200 is a high dose rate Brachytherapy device for use with Axxent Applicators to treat lesions, tumors and conditions in or on the body where radiation is indicated. Only Xoft Axxent Surface Applicators can be used with the Axxent Electronic Brachytherapy System Model 110 XP 1200.
The Axxent Electronic Brachytherapy System consists of two primary components: the Axxent System Controller (Controller); the Axxent HDR X-ray Source-2.2 (Catheter/Source). The System is designed to deliver doses of X-ray radiation to tissue in proximity to the applicator using a miniature X-ray tube powered by the Controller.
The Axxent Electronic Brachytherapy System is a mobile, computer-controlled platform that is responsible for the overall operation of the System. The Controller is designed to work with the Source, which is a miniature X-ray tube located at the end of a flexible catheter. The Catheter/Source is inserted into a lumen of an appropriate Applicator; Applicators are cleared separately under their own 510(k)s. The Axxent Electronic Brachytherapy System Model 110 XP 1200 described in this 510(k) will only be used for surface applications using Xoft Axxent Surface Applicators.
The provided text describes a 510(k) premarket notification for the Axxent Electronic Brachytherapy System Model 110 XP 1200. This is a medical device for radiation therapy, not an AI/ML imaging device. Therefore, much of the requested information regarding AI-specific criteria (such as sample sizes for test and training sets, expert ground truth adjudication, MRMC studies, or standalone algorithm performance) is not applicable or available in this document.
However, I can extract the acceptance criteria related to the device's performance and the nature of the study conducted to prove it meets those criteria.
Acceptance Criteria and Reported Device Performance
The acceptance criteria for this device are focused on demonstrating that technological changes do not negatively impact the device's fundamental functional, scientific, and performance characteristics, particularly concerning radiation dosage. The device seeks substantial equivalence to its predicate device (Axxent Electronic Brachytherapy System, K122951).
| Acceptance Criteria / Performance Aspect | Reported Device Performance (Model 110 XP 1200 vs. Predicate) |
|---|---|
| Spatial Parameters (Azimuthal and Polar Variation) | Equivalence with the current device. |
| Depth Dose | Equivalence with the current device. |
| First and Second Half Value Layers | Agreement between the current x-ray source/catheter and the proposed source/catheter measurement. |
| Consistency of Spatial Measurements, Depth Dose, and Source/Catheter Spectrum after Extended Use | Consistency demonstrated. |
| Source/Catheter Output Linearity and Reproducibility | Output is linear as a function of time and reproducible. |
| Proposed Source/Catheter Longevity | Functions for at least as long as the current source. |
| Usability in Simulated Clinical Setting | Able to be used in the same manner as the current x-ray source/catheter in a simulated clinical setting. |
| Clinical Dose Equivalence in Surface Applicator Indication | Clinical dose is identical when using either source/catheter design in the surface applicator indication. |
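For context on the half value layer (HVL) row: the first HVL is the absorber thickness that reduces beam intensity to 50%, and the second HVL is the additional thickness that takes it from 50% to 25% (typically larger, because the beam hardens). Given a measured transmission curve, both can be read off by interpolation; the sketch below uses invented aluminum transmission values, not Xoft's measurement data.

```python
import numpy as np

# Hypothetical transmission curve: aluminum thickness (mm) vs. relative intensity.
thickness_mm = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.4, 1.8])
transmission = np.array([1.00, 0.82, 0.68, 0.57, 0.48, 0.41, 0.31, 0.24])

def thickness_at(frac):
    """Interpolate the thickness where transmission falls to `frac` (log-linear)."""
    # np.interp needs increasing x, so reverse the (decreasing) transmission curve.
    return np.interp(np.log(frac), np.log(transmission[::-1]), thickness_mm[::-1])

hvl1 = thickness_at(0.5)
hvl2 = thickness_at(0.25) - hvl1  # extra thickness to halve the beam again
print(f"first HVL:  {hvl1:.2f} mm Al")
print(f"second HVL: {hvl2:.2f} mm Al")
```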
Study Details (as per the document):
-
Sample size used for the test set and the data provenance: Not applicable in the context of an AI/ML study. The testing was non-clinical performance data (laboratory testing of the device's physical properties), not based on a "test set" of patient data.
-
Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable. The ground truth for this device's performance is based on physical measurements of radiation characteristics.
-
Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable.
-
If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance: Not applicable. This is not an AI/ML imaging device.
-
If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done: Not applicable. This is not an AI/ML imaging device.
-
The type of ground truth used (expert consensus, pathology, outcomes data, etc): The ground truth for performance was established through physical measurements and validation testing of the device's characteristics (e.g., spatial parameters, depth dose, half-value layers, output linearity, longevity).
-
The sample size for the training set: Not applicable. This is not an AI/ML device.
-
How the ground truth for the training set was established: Not applicable.
Overall Study Description:
The study referenced is a non-clinical performance assessment conducted to support the substantial equivalence of the Axxent Electronic Brachytherapy System Model 110 XP 1200 to its predicate device. This involved a series of laboratory tests and validation activities focused on the physical and operational characteristics of the device, particularly the changes in the cooling system and anode target. The goal was to confirm that these changes did not alter the fundamental safety and effectiveness of the device, especially concerning radiation delivery. The conclusion from these non-clinical tests was that the clinical dose is identical regardless of whether the proposed or current source/catheter design is used in the surface applicator indication.
(50 days)
The Second Look® Viewer is intended to be used to display low resolution, nondiagnostic medical images with annotations such as pre-computed regions-of-interest or pre-computed CAD marks.
The Second Look® Viewer is composed of three primary components. The major component is the computer (1), which is supported by a touchscreen monitor (2) and a barcode reader (3). The computer is a conventional Intel based computer that is connected to the Second Look® CAD processing unit via a network. The Second Look Viewer serves no other purpose than viewer support. The physician interfaces with the software using the touch screen or barcode reader. The physician first barcodes a given patient whose mammography case has already been processed by the Second Look CAD processing unit. The viewer displays the Mammagraph™s with the pre-computed CAD marks overlaid. Second Look Viewer allows a radiologist to review Second Look® Analog CAD output in softcopy format. The physician may touch any of the small images and see a higher resolution, magnified image along with characterization information for that image. After viewing the mammography case, the radiologist may use the Second Look® Viewer to review pre-computed ultrasound results, such as CADStream® output, if such results already exist for the case under review.
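To illustrate the overlay behavior described above: CAD marks are drawn as graphics on top of the displayed image rather than burned into its pixels, so they can be toggled or used for navigation. A minimal sketch with matplotlib and a synthetic image; the mark geometry and data are hypothetical, not CADx's rendering code.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

# Synthetic low-resolution "image" plus hypothetical pre-computed CAD marks.
image = np.random.default_rng(1).random((128, 96))
cad_marks = [(30, 40, 8), (90, 60, 12)]  # (row, col, radius) per detection

fig, ax = plt.subplots()
ax.imshow(image, cmap="gray")
for row, col, radius in cad_marks:
    # Each mark is a patch layered above the image; pixel data stays untouched.
    ax.add_patch(Circle((col, row), radius, fill=False, edgecolor="red", linewidth=2))
ax.set_axis_off()
plt.show()
```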
The provided text is a 510(k) summary for the Second Look® Viewer and related FDA correspondence. It describes the device, its intended use, and states that performance and validation testing indicate substantial equivalence to a predicate device. However, it does not contain the detailed information necessary to fully address all aspects of your request regarding acceptance criteria and a specific study proving the device meets those criteria.
Specifically, the document lacks:
- A table of acceptance criteria and reported device performance.
- Sample sizes for test sets, data provenance, number of experts, their qualifications, or adjudication methods for ground truth.
- Information on Multi-Reader Multi-Case (MRMC) comparative effectiveness studies or standalone algorithm performance.
- Details about the type of ground truth used (e.g., pathology, outcomes).
- Sample size for the training set or how ground truth was established for the training set.
The document states, "Results of performance and validation testing indicate that the Second Look® Viewer is substantially equivalent to the predicate device." This is a high-level conclusion required for 510(k) clearance, but the specific data and methodology of those tests are not included in this summary.
Therefore, given the provided text, I can only provide the information that is present:
1. A table of acceptance criteria and the reported device performance
- Acceptance Criteria: Not explicitly stated in this document. The overarching acceptance criterion for 510(k) clearance is "substantial equivalence" to a predicate device for its intended use.
- Reported Device Performance: "Results of performance and validation testing indicate that the Second Look® Viewer is substantially equivalent to the predicate device." No specific performance metrics (e.g., sensitivity, specificity, accuracy, display quality, latency) are provided.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not specified.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
- Not specified. The device is a "Viewer" for pre-computed CAD marks; it is not described as an AI algorithm itself for diagnosis or detection. Therefore, an MRMC study comparing human readers with and without AI assistance is unlikely to be directly relevant to the Viewer's specific function as described, though it would be relevant to the CAD system that generates the marks.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- This device is a "Viewer," which by definition is a human-in-the-loop device (displaying information to a physician). It is not an algorithm that performs a diagnostic task independently.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not specified.
8. The sample size for the training set
- Not specified.
9. How the ground truth for the training set was established
- Not specified.
Conclusion from the provided text:
The provided 510(k) summary focuses on the administrative aspects of regulatory clearance based on substantial equivalence. It describes the device, its intended use (displaying low-resolution, non-diagnostic images with annotations/CAD marks), and states that performance and validation testing were conducted, the results of which supported the substantial equivalence claim. However, it does not disclose the detailed results, methodologies, sample sizes, or ground truth establishment processes from those underlying tests.