Search Results
Found 34 results
510(k) Data Aggregation
(214 days)
Device Name: Multi-purpose Voxel Dosimetry (Personalized Dosimetry in Molecular Radiotherapy)
Regulation Number: 21 CFR 892.1100
Regulation Name: Scintillation (Gamma) Camera
Regulatory Class: Class I
Predicate Device [per 21 CFR 807.92(a)(3)]: K163687, OLINDA/EXM v. 2.0, Hermes Medical Solutions AB, Regulation Number 21 CFR 892.1100
QDOSE® Multi-purpose Voxel Dosimetry is indicated for use to provide estimates of radiation absorbed dose to organs and tissues of the body from medically administered radiopharmaceuticals, and to calculate total-body effective dose. Radiation absorbed dose calculations are based on clinical measurements of radioactivity biodistributions and biokinetics. QDOSE® is intended for applications in clinical nuclear medicine, molecular radiotherapy, radiation safety evaluations, risk assessment, record-keeping, and regulatory compliance. QDOSE® is indicated for use by professionals (medical physicists, radiologists and oncologists including nuclear medicine physicians), radiologic imaging technologists, health physicists and radiation safety officers and administrators, students in training, and others having interest in ability to calculate internal radiation doses from medically administered radiopharmaceuticals.
QDOSE® is a software package for calculating internal radiation doses from clinically administered radiopharmaceuticals. Patient time-activity data may be imported to QDOSE® in DICOM files from nuclear medicine clinical imaging. Dosimetry performed within QDOSE® is based on the use of calculated S values, determined for patient-like phantoms using a Monte Carlo method. The S values provide the average absorbed dose to a target organ generated by a unit of activity in a source organ. Time-activity curves from quantitative nuclear medicine imaging data are integrated to yield an estimate of the number of radionuclide decays representing the area under a time-activity function, similarly to the mathematical process used by OLINDA/EXM. QDOSE® dose calculations are performed by multiplying a source organ time-activity curve integral by the S value generated from Monte Carlo calculations. The product of the dose calculations is an output of radiation absorbed doses to specified target organs per unit administered activity.
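To make the multiplication described above concrete, here is a minimal sketch in Python of the "time-integrated activity × S value" organ dose calculation; the organ names, cumulated activities, and S values are hypothetical placeholders, not values from the submission or from the QDOSE® software.

```python
# Minimal sketch of a MIRD-style organ dose calculation: absorbed dose to a target
# organ = sum over source organs of (time-integrated activity) x (S value).
# All organs and numbers below are illustrative placeholders.

# Time-integrated (cumulated) activity per source organ, in MBq*h,
# i.e. the area under each source organ's time-activity curve.
cumulated_activity_mbq_h = {"liver": 1200.0, "kidneys": 450.0}

# Phantom-derived S values in mGy per MBq*h: mean dose to the target organ per unit
# cumulated activity in the source organ (hypothetical values).
s_value_mgy_per_mbq_h = {
    ("liver", "liver"): 3.2e-2,
    ("liver", "kidneys"): 9.0e-4,
    ("kidneys", "kidneys"): 5.0e-2,
    ("kidneys", "liver"): 1.1e-3,
}

def organ_dose_mgy(target: str) -> float:
    """Absorbed dose to `target` = sum over sources of A_tilde(source) * S(target <- source)."""
    return sum(
        a_tilde * s_value_mgy_per_mbq_h[(target, source)]
        for source, a_tilde in cumulated_activity_mbq_h.items()
    )

for organ in ("liver", "kidneys"):
    print(f"{organ}: {organ_dose_mgy(organ):.2f} mGy")
```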
The provided text describes the QDOSE® Multi-purpose Voxel Dosimetry device and its substantial equivalence to a predicate device (OLINDA/EXM v.2.0) for regulatory approval (K230221). It includes information on performance testing and comparison, but it does not explicitly state specific acceptance criteria in a quantitative manner (e.g., "Accuracy must be within X%"). Instead, it describes performance in terms of favorable comparison and small or insignificant differences relative to theoretical values and predicate/reference devices.
Therefore, the "acceptance criteria" are inferred from the demonstrated performance and the conclusion of substantial equivalence.
Here's an attempt to structure the information based on the provided text, acknowledging the limitations regarding explicit acceptance criteria:
Acceptance Criteria and Study Proving Device Performance: QDOSE® Multi-purpose Voxel Dosimetry
The acceptance criteria for the QDOSE® device are implicitly defined by its demonstrated ability to perform internal radiation dosimetry in a manner "similar" or "comparable" to established predicate and reference devices, and to produce results that are quantitatively close to theoretical values where applicable. The study aims to demonstrate substantial equivalence to the predicate device and other reference devices, indicating that the new device is as safe and effective as the predicate.
1. Table of Acceptance Criteria (Inferred) and Reported Device Performance:
| Acceptance Criteria (Inferred from Performance Goals) | Reported Device Performance and Comparison |
|---|---|
| Accuracy of Time-Integrated Cumulated Activities: | |
| - Planar Workflow: Effective half-lives and calculated activities should compare favorably to theoretical values and between QDOSE® and Hermes Voxel Dosimetry. | Planar Workflow: Average deviation of measured effective half-lives was ~0.2% (between QDOSE® and Hermes Voxel Dosimetry). Average difference of calculated activities was ~1.3%. |
| - Hybrid Workflow: Cumulated activities should compare favorably to theoretical values. | Hybrid Workflow: Deviation of cumulated activities was ~0.3%. |
| - Volumetric Workflow: Cumulated activities should compare favorably to theoretical values. | Volumetric Workflow: Deviation of cumulated activities was ~0.04%. |
| Accuracy of Organ Absorbed Dose Calculations: | |
| - Mean relative difference for beta/gamma-emitting radionuclides (adult male/female) compared to OLINDA/EXM 2.0 (Note: Anatomical phantom differences acknowledged). | Pooled Beta/Gamma Emitters: Mean relative difference was 7% for the adult male phantom and 8.8% for the adult female phantom (QDOSE® IDAC-Dose 2.1 vs. OLINDA/EXM 2.0). These differences reflect known anatomical model variations. |
| - Mean relative difference for alpha-emitting radionuclides (adult male/female) compared to OLINDA/EXM 2.0. | Alpha Emitters: Mean relative difference was 10.7% for the adult male phantom and 11.6% for the adult female phantom (QDOSE® IDAC-Dose 2.1 vs. OLINDA/EXM 2.0). These differences reflect known anatomical model variations. |
| - Organ-specific relative differences compared to OLINDA/EXM 2.0 should be within acceptable ranges for clinical use. | Organ-Specific Differences: Varied from ~1% for kidneys, liver, spleen, and thyroid, to ~25% for red marrow. These differences are attributed to variations in assumed anatomical geometry, mass, shape, position, and tissue composition between the software packages. |
| - Agreement with spherical model calculations, compared to OLINDA/EXM. | Spherical Model: Agreement with less than 5% difference for absorbed dose values. |
| - Agreement with Voxel S method calculations, compared to OLINDA/EXM. | Voxel S Method: Mean difference relative to OLINDA/EXM of about 6%. |
| Intercomparison with other established dosimetry software: Observable variability in reported doses should be generally small and within acceptable clinical ranges (for example, for organ walls with contents, within ±20%). | Nonclinical Intercomparison by Others: QDOSE® (IDAC-Dose 2.1) compared favorably with OLINDA 1 and 2, ICRP Publication 128, and MIRDcalc 1. Observed variability was generally small, and for organ walls with contents, all results were still within ±20% and within the standard error usually assumed for medical internal radiation dose estimates. |
| Safety and Effectiveness: Differences from predicate should not raise new questions regarding safety and effectiveness. | The submission concludes that differences between QDOSE® and predicate/reference devices "do not raise new questions regarding safety and effectiveness of the device," and it is "as safe and effective as are its predicate devices." Minor differences in embedded methodology are considered normal and not critical because basic physics equations and nuclear data remain constant. User-related factors (training, calibration, ROI delineation) are acknowledged as main sources of small numerical differences. The device allows for patient-specific internal dosimetry based on fundamental science principles and internationally accepted methods. |
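For reference, the "mean relative difference" figures in the table can be read as the average of per-organ percent differences against the comparator package. The sketch below assumes that common definition (the submission does not state an exact formula) and uses made-up dose values.

```python
# Illustrative computation of a mean relative difference between two sets of organ
# doses (e.g., one package vs. a comparator). All dose values are hypothetical.
device_doses = {"liver": 0.182, "kidneys": 0.410, "spleen": 0.095}      # mGy/MBq
comparator_doses = {"liver": 0.180, "kidneys": 0.395, "spleen": 0.101}  # mGy/MBq

relative_diff_pct = {
    organ: 100.0 * abs(device_doses[organ] - comparator_doses[organ]) / comparator_doses[organ]
    for organ in device_doses
}
mean_relative_diff = sum(relative_diff_pct.values()) / len(relative_diff_pct)
print(relative_diff_pct)
print(f"mean relative difference ~ {mean_relative_diff:.1f}%")
```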
2. Sample Size Used for the Test Set and Data Provenance:
- Test Set Sample Size: The document refers to "phantom datasets" for the applicant's nonclinical testing. No specific number for the test set "sample size" in terms of unique phantom cases is provided. The tests involved calculating time-integrated cumulated activities across planar, hybrid, and volumetric workflows, and comparing organ absorbed doses for various radionuclides and phantoms (adult male/female).
- Data Provenance: The data for the nonclinical tests was generated from "phantom datasets." The document does not specify the country of origin of this data, but the company (Versant Medical Physics and Radiation Safety) is based in Kalamazoo, Michigan, USA, and the software developer (ABX-CRO) is from Dresden, Germany. The tests conducted by "others" refer to intercomparison studies published by groups like the Medical Internal Radiation Dose (MIRD) Committee of the Society of Nuclear Medicine and Medical Imaging. This suggests a mix of internal company data and external, possibly multi-center or widely accepted, phantom-based benchmark data. The studies are nonclinical, using phantom data, not patient data (retrospective or prospective).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications:
- Number of Experts: Not explicitly stated. The "ground truth" for the nonclinical phantom tests appears to be established by comparison with theoretical values (for time-integrated activities) and established, validated software packages (OLINDA/EXM, ICRP Publication 128, MIRDcalc 1).
- Qualifications of Experts: The document emphasizes that dose calculations are based on "fundamental science principles and internationally accepted methods and phantom models" (Page 10). The internal dose calculational engine (IDAC-Dose 2.1) is used by the International Commission on Radiological Protection (ICRP) to generate dose estimates. This implies that the 'ground truth' or comparators are derived from well-established scientific communities and their endorsed methodologies rather than individual expert adjudication on a case-by-case basis.
4. Adjudication Method for the Test Set:
- Adjudication Method: Not applicable in the sense of human expert consensus for a clinical test set. The validation is primarily through comparison against theoretical values and results from established, previously validated software (OLINDA/EXM, ICRP Publication 128, MIRDcalc 1). Any "adjudication" is implicitly integrated within the established scientific and regulatory standards for dosimetry software validation.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:
- No, an MRMC comparative effectiveness study was not done. The studies described are nonclinical, using phantom data and software comparisons, not human readers interpreting medical images. The device is a dosimetry calculation software, not an AI for image interpretation or diagnosis.
6. If a Standalone (algorithm only without human-in-the-loop performance) was done:
- Yes, the primary evaluation is a standalone (algorithm only) performance assessment. The nonclinical tests evaluate the QDOSE® software's computational results (cumulated activities, absorbed doses) against theoretical values and outputs from other dosimetry software. While the software takes "clinical nuclear medicine diagnostic imaging" as input, the performance evaluation itself focuses on the accuracy of the algorithm's calculations, assuming correct data input. The statement "The software device includes all processing and calculation steps required for an internal dosimetry evaluation" (Page 4) and its comparison to other software further supports this.
7. The Type of Ground Truth Used:
- Theoretical Values / Computational Benchmarks: For time-integrated cumulated activities, QDOSE® results were compared against "theoretical values from calculations" (Page 13).
- Established Software Outputs: For absorbed dose calculations, the ground truth was predominantly the outputs from predicate and reference software devices (e.g., OLINDA/EXM 1.1 and 2.0, Hermes Voxel Dosimetry, ICRP Publication 128, MIRDcalc 1). The document acknowledges that "absolute ground truth in medical internal radiation dosimetry is not known" (Page 12, footnote 1), implying that the established software outputs serve as the best available benchmark.
8. The Sample Size for the Training Set:
- Not specified. The document focuses on performance testing (validation) and comparison to predicate devices. It does not mention a "training set" as would be typical for a machine learning model, suggesting that the software relies on established physics models and algorithms rather than statistical learning from a large dataset for its core functionality.
9. How the Ground Truth for the Training Set was Established:
- Not Applicable. Since a distinct "training set" in the context of a machine learning model is not described, the concept of establishing ground truth for it also does not apply. The "internal calculational algorithm" (IDAC-Dose 2.1) is based on a "phantom-based approach" and incorporates the "ICRP computational framework" and "MIRD schema" (Pages 9, 6). This points to an engineering and physics-based development rather than a data-driven training approach.
(265 days)
Reference device: K163687 (Regulation Number / Product Code: 21 CFR 892.1100)
The system is intended for use by Nuclear Medicine (NM) or Radiology practitioners and referring physicians for display, processing, archiving, printing, reporting and networking of NMI data, including planar scans (Static, Whole Body, Dynamic, Multi-Gated) and tomographic scans (SPECT, dedicated PET or Camera-Based-PET) acquired by gamma cameras or PET scanners. The system can run on a dedicated workstation or in a server-client configuration.
The NM or PET data can be coupled with registered and/or fused CT or MR scans, and with physiological signals, in order to depict, localize, and/or quantify the distribution of radionuclide tracers and anatomical structures in scanned body tissue for clinical diagnostic purposes.
The DaTQUANT optional application enables visual evaluation and quantification of 123I-ioflupane (DaTscan™) images. The DaTQUANT Normal Database option enables quantification of 123I-ioflupane (DaTscan™) images relative to normal population databases. These applications may assist in detection of loss of functional dopaminergic neuron terminals in the striatum, which is correlated with Parkinson disease.
The Q.Lung AI application may aid physicians in:
-Diagnosis of Pulmonary Embolism (PE), Chronic Obstructive Pulmonary Disease (COPD), Emphysema and other lung deficiencies.
-Assessment of the fraction of total lung function provided by a lobe or whole lung, for lung cancer resection requiring removal of an entire lobe, bilobectomy, or pneumonectomy.
The Q.Brain application allows the user to visualize and quantify relative changes in the brain's metabolic function or blood flow activity between a subject's images and controls, which may result from changes in brain function in:
-Epileptic seizures
-Dementia, such as Alzheimer's disease, Lewy body dementia, Parkinson's disease with dementia, vascular dementia, and frontotemporal dementia.
-Inflammation
-Brain death
-Cerebrovascular disease such as Acute stroke, Chronic and acute ischemia
-Traumatic Brain Injury (TBI)
When integrated with the patient's clinical and diagnostic information may aid the physician in the interpretation of cognitive complaints, neuro-degenerative disease processes and brain injuries.
The Alcyone CFR application allows for the quantification of coronary vascular function by deriving Myocardial Blood Flow (MBF) and then calculating Coronary Flow Reserve (CFR) indices on data acquired on PET scanners and on stationary SPECT scanners with the capacity for dynamic SPECT imaging. These indices may add information to physicians using Myocardial Perfusion Imaging for the diagnosis of Coronary Artery Disease (CAD).
The Exini Bone application is intended to be used with NM bone scans for the evaluation of adult male patients with bone metastases from prostate cancer. Exini Bone quantifies the selected lesions and provides a Bone Scan Index value as adjunct information related to the progression of disease.
The Q.Liver application provides processing, quantification, and multidimensional review of Liver SPECT/PET and CT images for display, segmentation, and a calculation of the SPECT 'liver to lung' shunt value and the patient's Body Surface Area (BSA) for use in calculating a therapeutic dose for Selective Internal Radiation Therapy (SIRT) treatment using a user-defined formula.
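As a rough illustration of the two quantities mentioned in the Q.Liver description, the sketch below computes a lung shunt fraction from ROI counts and a body surface area. The count values are invented, the Du Bois BSA formula is an assumption (the summary does not say which BSA formula is used), and the actual SIRT dose formula is user-defined per the description above.

```python
# Hypothetical sketch of a 'liver to lung' shunt fraction and a BSA calculation.
lung_counts = 1.8e5    # counts in the lung ROI (hypothetical)
liver_counts = 2.4e6   # counts in the liver ROI (hypothetical)
lung_shunt_fraction = lung_counts / (lung_counts + liver_counts)

height_cm, weight_kg = 175.0, 80.0
# Du Bois formula (assumed here; the application may use a different formula).
bsa_m2 = 0.007184 * (height_cm ** 0.725) * (weight_kg ** 0.425)

print(f"lung shunt fraction ~ {100 * lung_shunt_fraction:.1f}%")
print(f"body surface area ~ {bsa_m2:.2f} m^2")
```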
The Q.Thera AI application allows physicians to review and monitor patient radiation doses derived from nuclear medicine imaging data, including SPECT/CT, PET/CT, and Whole-body Planar images, and from biological samples from the patient. The application provides estimates of isotope residence time, absorbed dose, and equivalent dose at the whole organ level, as well as estimates of whole-body effective dose. The output from Q.Thera AI may aid physicians in monitoring patient radiation doses.
For use with internally administered radioactive products. Q.Thera AI should not be used to deviate from approved product dosing and administration instructions. Refer to the product's prescribing information.
Xeleris V Processing and Review System is a Nuclear Medicine Software system that is designed for general nuclear medicine processing and review procedures for detection of radioisotope tracer uptake in the patient's body, using a variety of individual processing applications oriented to specific clinical applications. It includes all of the clinical applications and features in the current production version of the predicate Xeleris V and introduces two clinical applications:
Q.Thera AI: The Q.Thera AI application allows physicians to review and monitor patient radiation doses derived from nuclear medicine imaging data, including SPECT/CT and Whole-body Planar images, and from biological samples from the patient. The application provides estimates of isotope residence time, absorbed dose, and equivalent dose at the whole organ level, as well as estimates of whole-body effective dose. The output from Q.Thera AI may aid physicians in monitoring patient radiation doses.
Q.Thera AI is a modification of the predicate's Dosimetry Toolkit application for enhancing a site's dosimetry workflow through the following updates:
- Image Pre-Processing: Q.Thera AI uses the predicate's Q.Volumetrix MI application for image preprocessing, bringing additional automated organ segmentations as well as enabling dosimetry on PET/CT imaging data.
- Dosimetry Calculations: Q.Thera AI adds calculation of radiation doses to Dosimetry Toolkit's previous determination of isotope residence time. Similar to the reference Olinda/EXM (K163687), the added calculations follow the guidelines published by the Medical Internal Radiation Dose (MIRD) committee of the Society of Nuclear Medicine (SNM) and models from Publication No. 89 of the International Commission on Radiological Protection (ICRP).
Generate Planar: The Generate Planar application produces 2D derived planar images from 3D SPECT images that are acquired using GE Healthcare's StarGuide SPECT-CT system (K210173). Generate Planar was first cleared on Xeleris 4.0 (K153355). It was also included in StarGuide's 510(k) clearance for producing derived planar images from hybrid SPECT-CT studies. Xeleris V brings the Generate Planar application from Xeleris 4.0 and expands it to also produce derived planar images from SPECT-only studies.
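The summary does not describe how Generate Planar derives its 2D images; purely to illustrate the general idea of projecting a reconstructed 3D SPECT volume into a planar-like view, here is a hypothetical sketch (the random volume, projection axis, and absence of any attenuation handling are all assumptions, not the Generate Planar algorithm).

```python
import numpy as np

# Hypothetical sketch: collapse a 3D SPECT volume into a 2D planar-like image by
# summing voxel values along one axis. Illustrative only.
rng = np.random.default_rng(0)
spect_volume = rng.random((128, 128, 128))   # (slices, rows, columns), made-up data

derived_planar = spect_volume.sum(axis=1)    # project along the anterior-posterior axis
print(derived_planar.shape)                  # -> (128, 128)
```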
This document does not contain the specific acceptance criteria or a detailed study proving the device meets those criteria, as typically found in a clinical study report. The document is a 510(k) summary for the Xeleris V Processing and Review System, which focuses on demonstrating substantial equivalence to a predicate device rather than presenting a de novo clinical trial with detailed performance metrics and acceptance thresholds.
However, based on the information provided, we can infer some aspects related to the evaluation of the new applications, Q.Thera AI and Generate Planar, that are part of the Xeleris V system.
Here's a breakdown of the available information:
1. Table of acceptance criteria and reported device performance:
The document does not provide a table with explicit acceptance criteria (e.g., minimum sensitivity, specificity, accuracy) or quantitative reported device performance for the Q.Thera AI and Generate Planar applications against predefined thresholds.
Instead, the non-clinical testing sections describe the scope of testing for these new applications:
- Q.Thera AI: "Bench testing for Q.Thera AI confirmed the correctness of the resulting radiation doses across different possible combinations (e.g. models, organs, isotopes) of calculations."
- Generate Planar: "For Generate Planar, bench testing demonstrated similarity between derived planar images produced from SPECT only studies to derived planar images produced from SPECT-CT studies. Similarity was demonstrated using representative clinical datasets for a variety of factors that impact attenuation levels (e.g. body region, BMI)."
These statements highlight that the "acceptance criteria" were qualitative demonstrations of "correctness" for Q.Thera AI calculations and "similarity" for Generate Planar images. There are no numerical performance metrics or thresholds mentioned.
2. Sample size used for the test set and the data provenance:
- Q.Thera AI: The document mentions "different possible combinations (e.g. models, organs, isotopes) of calculations" for bench testing, but does not specify a sample size for the test set or the number of cases. The data provenance is also not explicitly stated (e.g., country of origin, retrospective/prospective).
- Generate Planar: "representative clinical datasets for a variety of factors that impact attenuation levels (e.g. body region, BMI)" were used. Again, the specific sample size, number of cases, and data provenance are not provided.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
This information is not provided. The testing described is bench testing focusing on internal correctness and similarity, not necessarily involving expert-derived ground truth on a test set of patient cases for diagnostic accuracy.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
Not applicable, as no external expert review or adjudication of performance on a clinical test set is described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
No MRMC comparative effectiveness study is mentioned. The document explicitly states: "The proposed Xeleris V did not require clinical studies to support substantial equivalence." This implies that no studies comparing human reader performance with and without AI assistance were conducted as part of this submission.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
The descriptions of "bench testing" for both Q.Thera AI and Generate Planar imply standalone evaluations of the algorithms' outputs against expected "correctness" or "similarity" without human intervention for interpretation or diagnosis. However, specific standalone performance metrics (e.g., accuracy against a gold standard) are not provided.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Q.Thera AI: The "correctness of the resulting radiation doses" implies a ground truth based on established dosimetric models and calculations (e.g., "MIRD committee of SNM and ICRP Publication 89"). This would be a ground truth derived from established scientific/medical formulas and guidelines rather than expert consensus on patient data or pathology.
- Generate Planar: "similarity between derived planar images" suggests a ground truth or reference for comparison were other derived planar images (from SPECT-CT studies as cleared on Xeleris 4.0), rather than a clinical ground truth like pathology.
8. The sample size for the training set:
The document does not provide information about the training set size for the AI components of Q.Thera AI or Generate Planar. Given the nature of the description (dosimetry calculations based on models and similarity of image generation), it's possible that these are more rule-based or model-based applications rather than deep learning models requiring large training datasets, but this is not explicitly stated.
9. How the ground truth for the training set was established:
This information is not provided, as details about a training set are absent.
(555 days)
BALTIMORE MD 21231
Re: K212587
Trade/Device Name: 3D-RD-S
Regulation Number: 21 CFR 892.1100
Classification Name: Scintillation (gamma) camera
Regulatory Class: Class I (21 CFR 892.1100)
Indications for Use: Equivalent
Product Code / Regulation: IYX / 21 CFR 892.1100
3D-RD-S is intended to estimate radiation absorbed dose (and related quantities) to tissues after administration of a radioactive product. For use with internally administered radioactive products, 3D-RD-S should not be used to deviate from product dosing and administration. Refer to the product's prescribing information for instructions.
3D-RD-S is a cloud-based software as a medical device (SaMD) that interacts with the user via web browsers (for example Google Chrome). Users are trained healthcare professionals with significant dosimetry knowledge and experience, and are responsible for inputting the appropriate values and for correctly interpreting the output data. 3D-RD-S takes numerical input data in the form of activity in source tissues as a function of time (TAC data) or the integral of the activity (TIA data) in source tissues over time. It then calculates the absorbed dose to a set of target tissues based on the organ sizes and anatomies of a set of standard phantoms. The software provides the user the ability to account for the differences in tissue masses between the phantoms and the subject and to model uncertainties in the input data.
Calculation results can be viewed and updated by other users. The software provides the ability to calculate absorbed doses and related radiobiological quantities from input data. The calculations can be made for supported radionuclides based on data in Publication 89 from the International Commission on Radiological Protection (ICRP). Doses to target tissues are a function of the activity integrated over time (time-integrated activity, TIA) in a set of specified source organs. The software provides two modules for the integration of input time vs. activity curve (TAC) data. First, the user can use curve fitting methods to estimate a curve that passes through the TAC data from a set of supported fitting functions. Visual and numerical indicators of how well the fitting function works with the data are provided. Notifications are given if fitting parameters are non-physical. The TAC data can then be integrated using the fitting function, or by approximating the activity between measured time points with lines and assuming the activity after the last time point decays with the radionuclide's physical half-life. If desired, the user can use a combination of the curve fit, linear interpolation between the time points, and exponentially decaying extrapolation based on the physical half-life to integrate the time-activity curves.
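A minimal sketch of the simpler of the two integration options described above (trapezoids between measured points plus an exponential tail governed by the physical half-life) is given below; the sample time points, activities, and half-life are hypothetical, and the curve-fitting option is not shown.

```python
import math

# Time-activity data for one source region (hypothetical values).
times_h = [1.0, 4.0, 24.0, 48.0]          # measurement times (h)
activity_mbq = [100.0, 80.0, 35.0, 15.0]  # measured activity (MBq)
physical_half_life_h = 67.0               # radionuclide physical half-life (h), hypothetical

def time_integrated_activity(times, activities, t_half):
    """TIA (MBq*h): trapezoids between points, exponential tail after the last point."""
    tia = sum(
        0.5 * (activities[i] + activities[i + 1]) * (times[i + 1] - times[i])
        for i in range(len(times) - 1)
    )
    decay_constant = math.log(2) / t_half
    # Integral of A_last * exp(-lambda * t) from the last time point to infinity.
    tia += activities[-1] / decay_constant
    return tia

print(f"TIA ~ {time_integrated_activity(times_h, activity_mbq, physical_half_life_h):.0f} MBq*h")
```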
The calculated radiobiological quantities purport to relate physical dose to biological response and are dependent on the specification of radiobiological constants. The quantities supported include the whole-body effective dose and the relative biological effectiveness (RBE) weighted dose. The effective dose is calculated based on ICRP tissue weighting factors. The RBE-weighted dose is calculated using user-specified RBEs for the different radiation types (standard values are provided as defaults).
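The two radiobiological quantities named above can be sketched as simple weighted sums; the tissue weighting factors below are a small subset of the ICRP set, and the organ equivalent doses, RBE values, and per-radiation-type doses are hypothetical (the software's actual defaults are not reproduced here).

```python
# Effective dose: sum over tissues of w_T * H_T (only a subset of tissues shown).
tissue_weighting = {"lung": 0.12, "liver": 0.04, "thyroid": 0.04}        # ICRP w_T (subset)
equivalent_dose_sv = {"lung": 0.010, "liver": 0.025, "thyroid": 0.002}   # hypothetical H_T (Sv)
effective_dose_sv = sum(tissue_weighting[t] * equivalent_dose_sv[t] for t in tissue_weighting)

# RBE-weighted dose for one target: sum over radiation types of RBE_r * D_r.
rbe = {"alpha": 5.0, "beta": 1.0, "photon": 1.0}               # user-specified RBEs (hypothetical)
absorbed_dose_gy = {"alpha": 0.5, "beta": 2.0, "photon": 0.3}  # hypothetical per-type doses (Gy)
rbe_weighted_dose = sum(rbe[r] * absorbed_dose_gy[r] for r in rbe)

print(f"effective dose (partial sum) ~ {effective_dose_sv:.4f} Sv")
print(f"RBE-weighted dose ~ {rbe_weighted_dose:.2f} Gy-eq")
```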
3D-RD-S provides total and individual dose estimates for the various particle types, i.e., alpha particles, beta (+ and -) particles, discrete electrons (e.g., Auger electrons), and photons (gamma and x-rays). The resulting doses are plotted in a bar graph and can, along with input data, be exported in a spreadsheet.
The provided document, a 510(k) Summary for the 3D-RD-S device, details the acceptance criteria and the studies conducted to demonstrate its performance.
Here's an analysis of the provided information:
1. Table of Acceptance Criteria and Reported Device Performance:
The document describes several benchmark tests, each with an implicit or explicit acceptance criterion and the corresponding performance.
| Test Type | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Benchmark Test (1) | Absolute percent difference between absorbed dose calculated by 3D-RD-S and OLINDA/EXM v2.0 (predicate) for source tissues | |
(203 days)
44 11130 Stockholm SWEDEN
Re: K163687
Trade/Device Name: OLINDA/EXM v2.0
Regulation Number: 21 CFR 892.1100
The intended use of OLINDA/EXM is to provide estimates (deterministic) of absorbed radiation dose at the whole organ level as a result of administering any radionuclide and to calculate effective whole-body dose. This is dependent on input data regarding bio distribution being supplied to the application.
The OLINDA/EXM® v2.0 is a modification of OLINDA/EXM® v1.1 (K033960) and includes new human models and nuclides. OLINDA/EXM® 2.0 employs a new set of decay data recommended by the International Commission on Radiological Protection (ICRP). OLINDA/EXM® 2.0 introduces a new series of anthropomorphic human body models (phantoms), so new values of Specific Absorbed Fractions (SAF), Φ(T←S), were generated. These phantoms were based on updated values of the mass of the target region recommended by the ICRP. The base product design of OLINDA/EXM® v2.0 is the same as for OLINDA/EXM® v1.1 (K033960).
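For orientation, the sketch below shows how a single S value is typically assembled from decay data and a phantom's specific absorbed fractions in the general MIRD/ICRP schema that this class of software implements; the emission list, absorbed fractions, and target mass are invented and do not correspond to any nuclide or phantom in the submission.

```python
MEV_TO_J = 1.602e-13  # conversion from MeV to joules

# Per-decay emissions for a hypothetical nuclide: (mean energy in MeV, yield per decay).
emissions = [(0.134, 0.90), (0.208, 0.11)]

# Absorbed fractions AF(T <- S) for each emission (dimensionless, hypothetical),
# and the mass of the target region in the phantom (kg, hypothetical).
absorbed_fractions = [0.81, 0.04]
target_mass_kg = 1.8

# SAF = AF / m_T; S(T <- S) in Gy per decay = sum_i E_i * y_i * SAF_i.
s_value_gy_per_decay = sum(
    energy * MEV_TO_J * yield_per_decay * (af / target_mass_kg)
    for (energy, yield_per_decay), af in zip(emissions, absorbed_fractions)
)
print(f"S(T <- S) ~ {s_value_gy_per_decay:.3e} Gy per decay")
```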
The provided document is a 510(k) summary for a medical device called OLINDA/EXM v2.0. This document primarily focuses on demonstrating substantial equivalence to a predicate device (OLINDA/EXM v1.1) rather than presenting a detailed clinical study with acceptance criteria and device performance in the way one might expect for a diagnostic or therapeutic AI device.
However, based on the information provided, here's a breakdown of what can be extracted and what is not explicitly stated in the document regarding acceptance criteria and a study:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not provide a formal table of acceptance criteria with corresponding performance metrics like sensitivity, specificity, accuracy, or effect sizes, as would be common for diagnostic algorithms. Instead, the "acceptance criteria" appear to be related to the verification and validation of the software itself and its consistency with the previous version. The performance is described in terms of "good compliance" with the predicate device.
| Acceptance Criteria (Inferred from "Testing" description) | Reported Device Performance |
|---|---|
| All software specifications met | The testing results support that all the software specifications have met the acceptance criteria. |
| Risk analysis completed and risk control implemented to mitigate identified hazards | (Implicitly met as per submission) |
| "Good compliance" in comparison to OLINDA/EXM v1.1 (K033960) | Comparisons were made between OLINDA/EXM® v2.0 and OLINDA/EXM® v1.1 (K033960). The results showed good compliance. |
| Same technological characteristics as OLINDA/EXM® v1.1 | The proposed device OLINDA/EXM® v2.0 has the same technological characteristics as the original device OLINDA/EXM® v1.1. |
| Same indications for use as OLINDA/EXM® v1.1 | The proposed device OLINDA/EXM® v2.0 and the predicate device OLINDA/EXM® v1.1 (K033960) have the same indications for use. |
2. Sample Size Used for the Test Set and Data Provenance
This information is not explicitly provided in the document. The "tests for verification and validation" are mentioned, but the specific details of a "test set" (e.g., number of cases, type of data) are not described. Given that the device calculates radiation dose based on input data regarding biodistribution and relies on established models (ICRP decay data, anthropomorphic phantoms), the testing likely involved comparing output values for a range of inputs rather than a clinical dataset of patient images.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not explicitly provided. Since the device calculates deterministic radiation doses based on models, the "ground truth" would likely be derived from established physical and biological models, rather than expert interpretation of medical images or clinical outcomes.
4. Adjudication Method for the Test Set
This information is not explicitly provided. Adjudication methods like 2+1 or 3+1 are typically used when human experts are disagreeing on interpretations for a ground truth. This is not applicable to a dose calculation software validating against established models and data.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
This information is not explicitly provided, and it is unlikely such a study was performed or needed given the nature of the device. MRMC studies are typically for diagnostic AI systems where human readers interpret medical images. This device is a software tool for calculating radiation dose.
6. If a Standalone Study (Algorithm Only Without Human-in-the-Loop Performance) Was Done
The document implies that the "testing" described for verification and validation was a standalone evaluation of the algorithm's performance against its specifications and the predicate device. The comparison showing "good compliance" between OLINDA/EXM v2.0 and OLINDA/EXM v1.1 suggests an algorithm-only evaluation. However, the exact methodology is not detailed.
7. The Type of Ground Truth Used
The "ground truth" for this device likely refers to:
- Established physical and biological models: The document mentions "new human models and nuclides," "new set of decay data recommended by the International Commission on Radiological Protection (ICRP)," and "updated values of the mass of the target region (mr) recommended by the ICRP." These are the underlying scientific references against which the calculations would be validated.
- Outputs of the predicate device (OLINDA/EXM v1.1): The comparison showing "good compliance" with the predicate device implies that the predicate's outputs served as a reference for validating the new version.
8. The Sample Size for the Training Set
This information is not applicable/provided. OLINDA/EXM v2.0 is a deterministic calculation software based on established physical and biological models, not a machine learning or AI model that requires a "training set" in the conventional sense. It's a software tool that implements mathematical models and data.
9. How the Ground Truth for the Training Set Was Established
This information is not applicable/provided for the same reasons as #8. The "ground truth" here is derived from scientific consensus and established data (e.g., ICRP recommendations) that are used as inputs or validation references for the software's calculations, not a "training set" for a learning algorithm.
(228 days)
Hacarmel, 30200 ISRAEL
Re: K160933
Trade/Device Name: Discovery NM 750b Biopsy
Regulation Number: 21 CFR 892.1100
Classification Name / Regulation: 21 CFR 892.1100
The Discovery NM 750b Gamma Camera is intended to measure and image the distribution of selected single photon emission radioisotopes in the human body to aid in the evaluation of lesions. The resultant images are intended to be reviewed by qualified medical professionals. The Discovery NM 750b Gamma Camera is intended for diagnostic imaging of the breast and other small body parts. The Discovery NM 750b Gamma Camera when used for breast imaging is intended as an adjunct to mammography or other breast imaging modalities (it is not intended for primary screening of the population). The Discovery NM 750b Gamma Camera is indicated for planar and dynamic planar scintigraphy in the energy range 80-200 keV for the detection and display of radioisotope tracer uptake in patients of all ages.
When used with the optional Discovery NM 750b Biopsy system, the Discovery NM 750b is designed to accurately locate, in three dimensions, lesions in the breast using information derived from stereotactic pairs of two-dimensional images. It is intended to provide guidance for interventional purposes such as biopsy and pre-surgical localization.
The Discovery NM 750b Biopsy system is an optional accessory for the Discovery NM 750b gamma camera (K102231) that utilizes stereotactic imaging to help guide invasive procedures. It is intended for 3D lesion localization to provide the physician image guidance for vacuum assisted needle biopsy of breast lesions determined to be suspicious through molecular breast imaging or other imaging.
The Biopsy system uses a pair of CZT "biopsy" detectors with fixed stereotactic positions. These two detectors acquire a pair of angulated two-dimensional images that are used in determining the 3D localization of the pre-identified suspicious lesion.
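The summary does not give the detector geometry or the localization math; purely to illustrate how a depth can be triangulated from a pair of angulated views, here is a hypothetical sketch using a simple ±15° parallax relation (the angle, measured positions, and formula are assumptions, not the system's actual algorithm).

```python
import math

# Hypothetical stereotactic depth estimate from the apparent lesion shift between
# two views angulated at +/- theta about the detector normal.
angulation_deg = 15.0                 # assumed stereo angle
x_plus_mm, x_minus_mm = 42.0, 35.5    # apparent lesion position in each view (hypothetical)

shift_mm = x_plus_mm - x_minus_mm
depth_mm = shift_mm / (2.0 * math.tan(math.radians(angulation_deg)))
print(f"estimated lesion depth ~ {depth_mm:.1f} mm from the reference plane")
```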
The Discovery NM750b Biopsy system includes hardware and software components, which guide the user throughout the biopsy workflow. The hardware components enable the use of a variety of off-the-shelf biopsy vacuum needles.
In addition to the hardware components, the biopsy system accessory includes software components which, in part, through the user interface help guide the user stepwise through the biopsy workflow. The Discovery NM 750b Biopsy system is designed to support a variety of commercially available vacuum assisted biopsy devices and needles.
The provided text lacks specific acceptance criteria and detailed study results for the Discovery NM 750b Biopsy system. While it mentions various tests performed, it does not present quantifiable metrics or a clear study that "proves" the device meets specific acceptance criteria in a structured manner.
However, based on the information provided, here's an attempt to answer your request by extracting what is available and noting what is absent:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state formal acceptance criteria with numerical targets. Instead, it describes general successful completion of various tests.
| Acceptance Criteria (Inferred from text) | Reported Device Performance (Where available) |
|---|---|
| Conformance to applicable IEC 60601-1 standards | "completed testing and is certified to conform to the applicable IEC 60601-1 standards." |
| No new hazards identified | "No new hazards were identified" |
| No unexpected test results obtained | "no unexpected test results were obtained." |
| Performance according to specifications | "The testing demonstrated that the Discovery NM750b Biopsy system performs according to specifications" |
| Functions as intended | "and functions as intended." |
| Successful verification/validation testing | "successful verification/validation testing" |
| Accurate 3D localization of lesions (implied by Intended Use) | "[The system is] designed to accurately locate, in three dimensions, lesions in the breast using information derived from stereotactic pairs of two-dimensional images. It is intended to provide guidance for interventional purposes such as biopsy and pre-surgical localization." Bench performance testing using phantoms and simulated clinical use testing by physicians were performed to demonstrate this. |
| Guidance for interventional purposes (biopsy and pre-surgical localization) | "It is intended to provide guidance for interventional purposes such as biopsy and pre-surgical localization." Simulated clinical use testing by physicians demonstrated utility. |
2. Sample Size Used for the Test Set and Data Provenance
The document mentions "additional engineering bench performance testing using phantoms" and "simulated clinical use testing performed by physicians using a commercially available breast biopsy phantom and a supporting phantom." It specifies "This phantom setup had radiotracer-injected simulated lesions against a uniform radioactive background. The activities of the lesions and background were set to be representative of actual clinical use."
- Sample Size for Test Set: Not explicitly stated as a number of cases or lesions. It refers to "cases that represent a broad range of clinically relevant scenarios."
- Data Provenance: The data appears to be prospective from simulated clinical scenarios using phantoms. There is no mention of human subject data or country of origin for such data.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Experts used: "physicians" performed the simulated clinical use testing.
- Qualifications of experts: Not specified beyond "physicians."
4. Adjudication Method for the Test Set
Not mentioned. The testing was described as "physician-performed clinical simulation testing." It's unclear if multiple physicians reviewed the same cases or if there was any adjudication process if different outcomes were observed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
No, an MRMC comparative effectiveness study is not mentioned. The study described focused on the device's performance in simulated clinical scenarios, not on comparing human reader performance with and without AI assistance from this specific device.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
The document describes the "Discovery NM750b Biopsy system" as including both hardware and software components that "guide the user throughout the biopsy workflow." The testing involved "simulated clinical use testing performed by physicians." This suggests the evaluation was for the system as a whole, with human involvement, rather than a standalone algorithm performance without human interaction.
7. The Type of Ground Truth Used
The ground truth for the "simulated clinical use testing" was based on a phantom setup with "radiotracer-injected simulated lesions" where their "activities of the lesions and background were set to be representative of actual clinical use." This is an artificial, controlled ground truth rather than expert consensus, pathology, or outcomes data from human patients.
8. The Sample Size for the Training Set
The document describes the device as an optional accessory using "stereotactic imaging and optics principles" that are "well established." It states the "technological characteristics and corresponding fundamental principles of operation of the Biopsy System are identical or equivalent to that of the GammaLoc system and Senoegraphe Stereo." This suggests the device leverages existing, established technology rather than a machine learning model that would require a distinct training set. Therefore, a training set size is not applicable as described in the context of typical AI/ML-based devices.
9. How the Ground Truth for the Training Set was Established
As no training set (in the AI/ML sense) is mentioned or implied for this device's core functionality, this question is not applicable. The device relies on physical principles and established imaging techniques.
(24 days)
Device Name: Sentinella 102 (models Sentinella 102 and Sentinella 102 Horus) Regulation Number: 21 CFR 892.1100
Common name: Portable Gamma Camera Classification name: Scintillation Gamma Camera, Class I. 21 CFR § 892.1100
Sentinella 102 (models Sentinella 102 and Sentinella 102 Horus) is a mobile gamma camera system which is intended for imaging the distribution of radionuclides in the human body by means of photon detection. The images are intended to be interpreted by qualified personnel.
Sentinella 102 (models Sentinella 102 and Sentinella 102 Horus) may be used intraoperatively if a protective sheath is used.
Sentinella 102 (models Sentinella 102 and Sentinella 102 Horus) may be used at the patient's bedside, or in Emergency Room or Intensive Care Unit.
Sentinella 102 is a currently marketed portable gamma camera system which includes a small gamma camera designed to obtain images from small organs and structures labeled using radionuclides emitting gamma-rays.
The Sentinella system also includes analysis and display equipment, a cart and ergonomic arm, which facilitates the equipment portability and positioning, and accessories.
Due to the difficulty that may be involved in identifying, in the patient's body, the physical location of the structures observed in the gammagraphy, the Sentinella 102 Horus model incorporates an optical camera that registers the same area that is being observed by the gamma camera. Both images are coregistered and shown in real time. During this process, the gammagraphy is not reprocessed or modified in any way, so it remains unaltered at the end of the process.
This FDA 510(k) summary (K162052) primarily addresses a modification to an existing device, the Sentinella 102 (models Sentinella 102 and Sentinella 102 Horus), specifically the introduction of a new collimator model. The document explicitly states that no new clinical testing has been carried out because there are no new indications for use. Therefore, a comprehensive study proving acceptance criteria for a new device or algorithm is not present in this document.
Instead, the document asserts substantial equivalence based on the fact that the new collimator does not change the indications for use, biocompatibility, electrical safety, electromagnetic compatibility, software, or overall performance specifications compared to the previously cleared predicate device (K143156).
Here's an attempt to answer the questions based on the provided text, acknowledging that a full "study" as requested isn't detailed for this specific submission:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table with specific acceptance criteria and reported device performance for this submission. It states that the device has the same performance specifications as the previous models already certified by the FDA (predicates). It mentions that "no new NEMA performance tests were necessary for the present submission" and that "The previous NEMA test report was carried out using the NEMA NU-1:2007."
To create such a table, one would need to refer to the K143156 submission for the specific performance criteria and results based on NEMA NU-1:2007. Without that previous document, the exact metrics are unavailable here.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
Not applicable. No new clinical or performance test set was used for this 510(k) submission, as it relies on the performance of the predicate device (K143156).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not applicable. No new test set requiring expert ground truth establishment was used for this submission. The interpretation of images is generally intended to be by "qualified personnel" as stated in the Indications for Use, but this refers to clinical use, not a specific study methodology outlined here.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable. No new test set requiring adjudication was used for this submission.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
Not applicable. This device is a gamma camera system, not an AI-powered diagnostic or assistive tool. No MRMC study or AI-related effectiveness study was performed or mentioned in this document.
6. If a standalone (i.e. algorithm only without human-in-the loop performance) was done
Not applicable. This device is a gamma camera system and not an algorithm. Therefore, "standalone" algorithm performance is not relevant to this submission.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
Not applicable for this specific 510(k) due to the nature of the submission (modification of an existing device without new clinical testing). For the original predicate device (K143156), performance testing would likely have involved phantom studies and possibly clinical validation leading to NEMA NU-1:2007 compliance, which uses established physical and technical metrics rather than clinical "ground truth" like pathology for image interpretation.
8. The sample size for the training set
Not applicable. As a physical medical imaging device, it does not involve a "training set" in the context of machine learning or AI.
9. How the ground truth for the training set was established
Not applicable. As a physical medical imaging device, it does not involve a "training set" in the context of machine learning or AI.
(15 days)
Trade/Device Name: Sentinella 102 and 102 Horus (Models: FP-0040 and FP-0055) Regulation Number: 21 CFR 892.1100
Sentinella 102 (Models Sentinella 102 and Sentinella 102 Horus) is a mobile gamma camera system which is intended for imaging the distribution of radionuclides in the human body by means of photon detection. The images are intended to be interpreted by qualified personnel.
Sentinella 102 (Models Sentinella 102 and Sentinella 102 Horus) may be used intraoperatively, if a protective sterile sheath is used.
Sentinella 102 (Models Sentinella 102 and Sentinella 102 Horus) may be used at the patient's bedside, or in the Emergency Room or Intensive Care Unit.
Sentinella 102 (Models Sentinella 102 and Sentinella 102 Horus) is a mobile gamma camera system which is intended for imaging the distribution of radionuclides in the human body by means of photon detection.
The provided text is a 510(k) premarket notification letter from the FDA regarding a medical device, the "Sentinella 102 and 102 Horus" gamma camera system. This document grants market clearance based on substantial equivalence to a predicate device.
Crucially, this document does not contain information about specific acceptance criteria or a study proving the device meets those criteria in the context of clinical performance metrics like sensitivity, specificity, accuracy, or reader improvement with AI.
The letter confirms the device's regulatory classification, its intended use (imaging the distribution of radionuclides in the human body), and outlines the general controls and regulations it must comply with. It does not include the results of performance studies that would typically define acceptance criteria for diagnostic efficacy.
Therefore, I cannot provide the requested information about acceptance criteria or a study proving the device meets them from this document. The document primarily addresses regulatory clearance, not clinical performance metrics or studies using AI.
(71 days)
Classification Name: Scintillation (gamma) camera
Device Class: 21 CFR 892.1100
Predicate devices:
- ergo Imaging System, K100838, cleared on April 23, 2010; Product Code: IYX; CFR Section: 892.1100
- Molecular Breast Imaging System, K111791, cleared on September 23, 2011; Product Code: IYX; CFR Section: 892.1100
January 15, 2013
Re: K123408
Trade/Device Name: ergo Imaging System
Regulation Number: 21 CFR 892.1100
The ergo Imaging System is intended to image the distribution of radionuclides in the body by means of a photon radiation detector. In so doing, the system produces images depicting the anatomical distribution of radioisotopes within the human body for interpretation by authorized medical personnel. The ergo Imaging System is used by trained medical personnel to perform nuclear medicine studies.
It is indicated for lymphatic scintigraphy and parathyroid scintigraphy. It can be used intraoperatively when protected by sterile drapes. It is also indicated to aid in the evaluation of lesions in the breast and other small body parts. When used for breast imaging, it is indicated to serve as an adjunct to mammography or other primary breast imaging modalities.
The ergo Imaging System incorporates Digirad's Solid State RIM detector design with 3mm pixels for general purpose planar imaging, cleared under K100838. Sterile drapes are specified for intraoperative use. The ergo Imaging System, in conjunction with the optional Breast Imaging Accessory (BIA), enables the user to perform scintimammography and extremity imaging with stabilization.
The provided text is a 510(k) summary for the Digirad ergo Imaging System, which is a gamma camera. The document primarily focuses on demonstrating substantial equivalence to a predicate device and expanding indications for use.
Based on the provided text, the device itself is a gamma camera, not an AI/ML-based device. The "Testing" section (H) explicitly states: "Verification and Validation tests were conducted to demonstrate the ergo Imaging System functions per specification. These tests include Electromagnetic Compatibility, Electrical Safety, and gamma camera performance testing including NEMA standard NU 1-2007 with phantoms."
This indicates that the acceptance criteria and performance evaluation are related to the physical performance of the gamma camera, not to algorithmic performance on image interpretation. Therefore, the requested information elements related to AI/ML device testing (such as ground truth establishment with experts, MRMC studies, standalone algorithm performance, training/test set sample sizes for algorithms, etc.) are not applicable to this submission as described.
The acceptance criteria are likely standard NEMA performance metrics for gamma cameras. While the document broadly states "Testing results demonstrate that the ergo Imaging System continues to meet the specifications," it does not list specific numerical acceptance criteria or performance metrics in a table format within this 510(k) summary.
Therefore, many of the requested items cannot be extracted from this specific document.
Here's an attempt to answer the quantifiable parts based on the provided text, while noting the limitations:
1. A table of acceptance criteria and the reported device performance
- Acceptance Criteria: Not explicitly listed as numerical targets in the summary. Implied to be compliance with NEMA NU 1-2007 standards for gamma camera performance.
- Reported Device Performance: Not explicitly listed as numerical results in the summary. The summary states: "Testing results demonstrate that the ergo Imaging System continues to meet the specifications and is substantially equivalent to the predicate devices, based on comparisons of intended use and technology, and overall system performance."
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size: Not applicable in the context of an AI/ML test set. The testing described involves physical phantoms and engineering tests, not patient data sets.
- Data Provenance: Not applicable.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- Not applicable. Ground truth was established by physical phantoms and engineering measurements according to NEMA standards for gamma camera performance.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not applicable.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No. This is not an AI/ML device, and no MRMC study is mentioned.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not applicable. This is not an AI/ML diagnostic algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Physical phantoms and engineering specifications/measurements (NEMA standard NU 1-2007).
8. The sample size for the training set
- Not applicable. This is not an AI/ML device; there is no training set mentioned.
9. How the ground truth for the training set was established
- Not applicable.
In summary, the provided document describes a traditional medical device (gamma camera) and its regulatory submission, which relies on engineering performance standards (like NEMA) rather than clinical studies involving AI/ML performance on patient data sets with human experts.
(88 days)
KT1771
Trade/Device Name: LumaGEM™ Molecular Breast Imaging System Regulation Number: 21 CFR 892.1100
The LumaGEM™ Molecular Breast Imaging System is intended to measure and image the distribution of radionuclides by means of photon detection in order to aid in the evaluation of lesions in the breast tissue and other small body parts. The LumaGEM™ Molecular Breast Imaging System, when used for breast imaging, is intended to serve as an adjunct to mammography or other primary breast imaging modalities. The LumaGEM™ Molecular Breast Imaging System is indicated for planar scintigraphy in the energy range of 30-300 keV for the detection and display of radioisotope tracer uptake in patients of all ages. The resultant images are intended to be viewed by qualified medical professionals.
The LumaGEM™ Molecular Breast Imaging System is a scintillation camera system, which uses Cadmium Zinc Telluride (CZT) detectors to create an image of radionuclide distribution. The LumaGEM™ Molecular Breast Imaging System is available in a dual-head or single-head configuration and can be used to help identify suspected lesions in breast tissue as an adjunct to standard mammography. The LumaGEM™ Molecular Breast Imaging System is provided with a customized gantry, which allows flexible positioning to facilitate accurate breast imaging, and a workstation to enable image acquisition and analysis functions.
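As a side note on the indicated 30-300 keV range above: planar scintigraphy acquisitions are typically set up with an energy window centered on the tracer's photopeak, and that window must fall within the camera's supported range. The sketch below is a generic, hypothetical illustration of that bookkeeping (it is not part of the submission); a 15-20% window on the Tc-99m photopeak at 140.5 keV is a common clinical choice.

```python
def photopeak_window(peak_kev: float, width_fraction: float = 0.20) -> tuple[float, float]:
    """Symmetric energy window of the given fractional width, centered on the photopeak."""
    half_width = peak_kev * width_fraction / 2.0
    return peak_kev - half_width, peak_kev + half_width

def within_indicated_range(lo_kev: float, hi_kev: float,
                           range_lo: float = 30.0, range_hi: float = 300.0) -> bool:
    """Check that the acquisition window lies inside the device's indicated energy range."""
    return lo_kev >= range_lo and hi_kev <= range_hi

# Tc-99m photopeak (140.5 keV) with a 20% window.
lo, hi = photopeak_window(140.5)
print(f"Window: {lo:.1f}-{hi:.1f} keV; within 30-300 keV: {within_indicated_range(lo, hi)}")
```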
The provided text describes the LumaGEM™ Molecular Breast Imaging System, its indications for use, and a summary of testing conducted to support substantial equivalence. However, it does not contain specific acceptance criteria, a detailed study proving the device meets these criteria, or most of the requested information regarding study design elements such as sample sizes, expert qualifications, or ground truth establishment for either training or test sets.
The document primarily focuses on:
- Administrative details: Applicant, contact person, dates, classification, product codes.
- Device description: How it works (CZT detectors, dual/single head, gantry, workstation).
- Indications for Use: What the device is intended for (aid in evaluation of breast lesions, adjunct to mammography, planar scintigraphy).
- Predicate devices: List of substantially equivalent devices.
- Testing for substantial equivalence: A general list of tests performed (Gamma camera verification, System verification, Electrical and mechanical safety, Electromagnetic compatibility) without details of methodologies, results, or acceptance criteria.
Therefore, most of the information requested in the prompt cannot be extracted from the provided text.
Here is a summary of what can be inferred or directly stated from the text:
1. Table of acceptance criteria and the reported device performance:
- Not available in the provided text. The document only lists general categories of testing performed (e.g., "Gamma camera verification testing") but does not provide specific acceptance criteria (e.g., minimum spatial resolution, sensitivity) or the corresponding performance results. (A sensitivity calculation of the kind such criteria would reference is sketched after the conclusion below.)
2. Sample size used for the test set and the data provenance:
- Not available in the provided text. The document does not describe any clinical test sets, patient data, or their origin. The testing mentioned is bench testing.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not available in the provided text. Since no clinical test set is described, there's no mention of experts or ground truth establishment for such a set.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not available in the provided text. No clinical test set details are provided.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it:
- Not available in the provided text. The device is a "Scintillation (gamma) camera" (imaging device), not an AI-assisted diagnostic tool described with human-in-the-loop performance. The document only mentions that "resultant images are intended to be viewed by qualified medical professionals."
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- Not applicable/Not available. This is a medical imaging device, not an algorithm. The performance of the imaging system itself is implied to be evaluated by the "Gamma camera verification testing" and "System verification testing," but no specific performance metrics or "standalone" study details are provided.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not available. The document does not describe any ground truth for clinical performance, as only bench testing is explicitly mentioned. For bench testing, the "ground truth" would be the known properties of the phantoms or test objects used, but these details are not provided.
8. The sample size for the training set:
- Not applicable/Not available. This is an imaging device, not a machine learning algorithm that requires a training set in the typical sense.
9. How the ground truth for the training set was established:
- Not applicable/Not available. As above, no training set for an algorithm is mentioned or implied.
In conclusion, the provided text describes the regulatory submission for an imaging device but lacks the detailed performance study information typically associated with establishing acceptance criteria against a defined clinical or algorithm performance benchmark. The "TESTING IN SUPPORT OF SUBSTANTIAL EQUIVALENCE DETERMINATION" section only lists categories of bench testing rather than clinical efficacy studies.
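To make concrete what a bench-test figure of merit of this kind looks like, the sketch below computes planar system sensitivity (counts per second per MBq), one of the quantities that gamma camera verification testing typically reports. The numbers are hypothetical, and the calculation omits the decay and background corrections a formal NEMA measurement would apply.

```python
def system_sensitivity_cps_per_mbq(total_counts: float,
                                   acquisition_time_s: float,
                                   source_activity_mbq: float) -> float:
    """Planar sensitivity: observed count rate divided by source activity."""
    count_rate_cps = total_counts / acquisition_time_s
    return count_rate_cps / source_activity_mbq

# Hypothetical measurement: 9.0e5 counts acquired over 300 s from a 37 MBq planar source.
sensitivity = system_sensitivity_cps_per_mbq(9.0e5, 300.0, 37.0)
print(f"System sensitivity: {sensitivity:.1f} cps/MBq")
```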
(82 days)
The Dilon 6800 Acella Digital Gamma Camera is intended to be used to measure and image the distribution of selected single photon emitting radioisotopes in the human body. The resulting images are intended to be reviewed by qualified medical personnel.
The Dilon 6800 Acella (Acella) is a modification to the Dilon 2000 (now known as the Dilon 6800), a high-resolution, small-field-of-view, portable gamma camera designed for general use in imaging radiopharmaceuticals. The Acella has a larger field of view than the Dilon 6800 and replaces photomultiplier tubes with photodiodes. Both technologies convert visible light photons generated by scintillation crystals into electronic signals.
The provided text describes the Dilon 6800 Acella Scintillation Camera, a modification of the Dilon 6800. It focuses on the device's substantial equivalence to its predicate device and the new detector technology. However, the document does not contain the detailed information necessary to fully answer all aspects of your request regarding acceptance criteria and a study proving those criteria.
Specifically, the document lacks:
- A table of acceptance criteria and reported device performance.
- Sample sizes for test sets and data provenance.
- Number and qualifications of experts for ground truth.
- Adjudication methods.
- Any mention of a multi-reader multi-case (MRMC) comparative effectiveness study or human-in-the-loop performance.
- Details on standalone algorithm performance.
- Specific types of ground truth used (beyond general "performance requirements").
- Sample size for the training set.
- How ground truth for the training set was established.
The text primarily summarizes the technical changes and confirms that the device meets "predetermined success criteria according to established protocols" without providing the specific criteria or study details.
Here's a breakdown of what can be extracted and what is missing:
1. A table of acceptance criteria and the reported device performance
- Acceptance Criteria: Not explicitly detailed in a table. The document states that "Testing has demonstrated that the Acella, with the larger Field of View, has met predetermined success criteria according to established protocols." This implies criteria exist but are not presented.
- Reported Device Performance: Not explicitly detailed in a table. The document states the performance is "equivalent to the Dilon 6800 camera and the predicate detector technology." No specific metrics (e.g., sensitivity, specificity, resolution, image quality scores) are provided.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Not provided. The text only mentions "Performance testing" and "Verification testing" without specifying sample sizes or data provenance.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not provided. The text states that "The resulting images are intended to be reviewed by qualified medical personnel," but this refers to clinical use, not the ground truth establishment for testing.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not provided.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it
- Not applicable / Not provided. This device is a gamma camera (hardware), not an AI-assisted diagnostic software. There is no mention of AI or human-in-the-loop performance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
- Applicability is limited by device type. As a hardware device (gamma camera), "standalone algorithm performance" in the context of AI is not relevant. The performance refers to the imaging capabilities of the camera itself. While tests were done as a "standalone" device (without human interpretation being part of the device's direct output), no specific metrics are given.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Not explicitly stated. The document refers to "predetermined success criteria" and "established protocols" for performance, which likely involve phantom studies or comparison with existing validated imaging systems for physical performance metrics such as resolution, uniformity, and sensitivity (a resolution measurement of this kind is sketched after this list). It does not suggest ground truth based on pathology or clinical outcomes for diagnostic accuracy validation in the way an AI diagnostic tool might.
8. The sample size for the training set
- Not applicable / Not provided. This device is a gamma camera. The concept of a "training set" is generally associated with machine learning or AI models, which is not what this device is. Its development involves engineering and hardware performance testing, not algorithmic training on data.
9. How the ground truth for the training set was established
- Not applicable / Not provided. (See point 8)
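As referenced in item 7 above, spatial resolution for gamma cameras is conventionally reported as the full width at half maximum (FWHM) of a line-spread function measured with a slit or line-source phantom. The sketch below shows only the FWHM computation on a synthetic profile; it is an illustration, not the protocol used for the Acella.

```python
import numpy as np

def fwhm_mm(positions_mm: np.ndarray, counts: np.ndarray) -> float:
    """FWHM of a line-spread profile via linear interpolation of the
    half-maximum crossings on either side of the peak."""
    half = counts.max() / 2.0
    above = np.where(counts >= half)[0]
    left, right = int(above[0]), int(above[-1])

    def crossing(i0: int, i1: int) -> float:
        # Interpolate the position where the profile crosses the half-maximum.
        x0, x1 = positions_mm[i0], positions_mm[i1]
        y0, y1 = counts[i0], counts[i1]
        return x0 + (half - y0) * (x1 - x0) / (y1 - y0)

    x_left = positions_mm[left] if left == 0 else crossing(left - 1, left)
    x_right = positions_mm[right] if right == len(counts) - 1 else crossing(right, right + 1)
    return float(x_right - x_left)

# Synthetic Gaussian line-spread profile with a true FWHM of 3.5 mm (FWHM = 2.355 * sigma).
x = np.linspace(-15.0, 15.0, 301)
profile = np.exp(-0.5 * (x / (3.5 / 2.355)) ** 2)
print(f"Measured FWHM: {fwhm_mm(x, profile):.2f} mm")
```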