Search Results
Found 6 results
510(k) Data Aggregation
(168 days)
(281 days)
The ClearVision ExamVue Flat Panel detector is indicated for use in general radiology, specialist radiology including podiatry, orthopedic, and other specialties, and in mobile x-ray systems.
The ClearVision ExamVue Flat Panel detector is not indicated for use in mammography.
The ClearVision ExamVue Flat Panel Detector consists of a line of 3 different models of solid state x-ray detectors, of differing size and characteristics, designed for use by radiologists and radiology technicians for the acquisition of digital x-ray images. The ClearVision ExamVue Flat Panel Detector captures digital images of anatomy through the conversion of x-rays to electronic signals, eliminating the need for film or chemical processing to create a hard copy image. The ClearVision ExamVue Flat Panel Detector incorporates the ExamVueDR software, which performs the processing, presentation and storage of the image in DICOM format.
All models of the ClearVision ExamVue Flat Panel Detector use an amorphous silicon thin-film transistor detector (aSi TFTD) array to collect the light generated by a CsI (cesium iodide) scintillator for the purpose of creating a digital x-ray image. The three available models are:
- a. A 14x17in (35x43cm) tethered cassette sized panel
- b. A 14x17in (35x43cm) wireless cassette sized panel with automatic exposure detection
- c. A 17x17in (43x43cm) tethered panel for fixed installations.
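The submission does not describe the detector's calibration pipeline, but flat panel detectors of this type conventionally apply offset (dark-frame) and gain (flood-field) correction to the raw readout before diagnostic processing. The following is a minimal sketch of standard flat-field correction, purely illustrative and not the vendor's implementation; all function and variable names are invented:

```python
import numpy as np

def flat_field_correct(raw, dark, flood):
    """Standard flat-field (offset/gain) correction for a flat panel detector.

    raw   -- raw frame captured during the x-ray exposure
    dark  -- offset frame captured with no exposure
    flood -- uniform-exposure (flood-field) calibration frame
    """
    gain = flood.astype(np.float64) - dark   # per-pixel gain map
    gain[gain <= 0] = np.nan                 # mask dead/stuck pixels
    corrected = (raw.astype(np.float64) - dark) / gain
    corrected *= np.nanmean(gain)            # restore the mean signal level
    return np.nan_to_num(corrected, nan=0.0)
```

With ideal calibration frames, a uniform exposure comes out uniform regardless of per-pixel gain variation, which is the point of the correction.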
The provided document is a 510(k) premarket notification for a medical device called the "ClearVision ExamVue Flat Panel Detector". This document primarily focuses on establishing substantial equivalence to previously cleared predicate devices rather than proving a device meets specific clinical acceptance criteria through a dedicated study.
Therefore, many of the requested elements for describing specific acceptance criteria and study details cannot be fully extracted from this document. The document presents laboratory performance data and mentions "clinical images" but does not detail a formal clinical study with specific acceptance criteria as one would find in a clinical trial.
However, based on the information provided, here's what can be inferred and stated:
Acceptance Criteria and Reported Device Performance
The document does not specify formal clinical acceptance criteria (e.g., sensitivity, specificity, accuracy targets that the device must meet for a specific diagnostic task). Instead, the "acceptance criteria" are implied by demonstrating substantial equivalence to predicate devices through technical specifications and performance characteristics, as well as indications for use and safety. The primary "study" proving the device meets these (implied) acceptance criteria is the comparison of its technical specifications and general performance to those of the predicate devices.
| Acceptance Criteria (Implied by Substantial Equivalence) | Reported Device Performance (ClearVision ExamVue Flat Panel Detector) |
|---|---|
| Technical Equivalence to Predicate Devices: | |
| Pixel Pitch (similar to 139um-143um of predicates) | 143um (FDX3543RP, FDX4343R), 140um (FDX3543RPW) |
| Limiting Resolution (compared to predicates) | 3.7lp/mm (all models), which is superior to "Over 3lp/mm" and "3lp/mm" of predicates. |
| DQE @ 1 lp/mm (compared to predicates) | 57% (FDX3543RP), 60% (FDX3543RPW), 58% (FDX4343R), which is superior to 33% Gadox / 46% CsI and 45% Gadox / 65% CsI of predicates. (Note: The new device exclusively uses CsI, which is stated to be higher performance than Gadox). |
| MTF @ 1 lp/mm (compared to predicates) | 63% (FDX3543RP), 68% (FDX3543RPW), 65% (FDX4343R), which is comparable to or superior to 63% Gadox / 72% CsI and 57% Gadox / 59% CsI of predicates. (Again, comparing CsI to CsI performance where applicable given the new device's exclusive use of CsI). |
| Scintillator technology (same type as predicate CsI option) | Exclusively CsI |
| Digital Image Conversion (14-16 bit) | 16 bit (FDX3543RP), 14 bit (FDX3543RPW, FDX4343R) - comparable to 14 bit of predicates. |
| DICOM compatibility | Yes |
| Use of aSi TFTD technology | Yes |
| Functional Equivalence: | |
| General radiography and exclusion of mammography in Indications for Use | Indicated for general radiology, specialist radiology (podiatry, orthopedic), and mobile x-ray systems. Not indicated for mammography. (Same as predicates). |
| Integration with ExamVueDR software for image processing, presentation, and storage | Integrated with ExamVueDR software for final processing and presentation, which was previously 510(k) cleared (K142930) and used with the predicate devices. |
| Electrical Safety and EMC Standards (IEC 60601-1, IEC 60601-1-2) | Met. |
| Biocompatibility | Data provided for patient-contacting surfaces showing no known adverse reactions. |
| Image acquisition control interface (hard-wired to x-ray generator or AED, and software) | Tested as part of laboratory and clinical testing; software control interface for exposure settings previously tested with K142930. |
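For context, the DQE and MTF figures in the table are standard objective detector metrics. DQE at spatial frequency $u$ is defined as the ratio of squared output to squared input signal-to-noise ratio; for a Poisson-distributed input photon fluence $\bar{q}$ (for which $\mathrm{SNR}_{\mathrm{in}}^2 = \bar{q}$), it can be written in terms of the MTF and the output noise power spectrum (NPS). This is the textbook definition used in IEC 62220-1-style measurements, not a formula taken from the submission:

```latex
\mathrm{DQE}(u)
  \;=\; \frac{\mathrm{SNR}^2_{\mathrm{out}}(u)}{\mathrm{SNR}^2_{\mathrm{in}}(u)}
  \;=\; \frac{\bar{d}^{\,2}\,\mathrm{MTF}^2(u)}{\bar{q}\,\mathrm{NPS}(u)}
```

where $\bar{d}$ is the mean (large-area) output signal. This is why the table can compare detectors on DQE and MTF from phantom measurements alone, without any clinical ground truth.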
Study Details
1. Sample size used for the test set and the data provenance:
- The document mentions "Clinical images were provided" and "clinical testing of the hardware" but does not specify a separate "test set" in terms of number of cases/patients used to evaluate performance against specific diagnostic endpoints or ground truth.
- Data provenance (country of origin, retrospective/prospective) is not specified for these "clinical images". The application is from JPI Healthcare Co., LTD, based in Seoul, South Korea, so the data may originate from there, but this is not explicitly stated.
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided. The "clinical images" and "clinical testing" are used to show the device "works as intended" in addition to laboratory data, rather than for a formal evaluation against expert-derived ground truth for diagnostic accuracy.
3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable as a formal adjudication process for a diagnostic performance test set is not described.
4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
- No MRMC comparative effectiveness study is mentioned. This device is an X-ray detector, not an AI diagnostic algorithm, so "human readers improve with AI vs without AI assistance" is not applicable in this context.
5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- The device itself is a hardware component (Flat Panel Detector) and not an "algorithm" in the sense of an AI/CAD system. Its performance is assessed standalone through technical specifications and image quality metrics (DQE, MTF, Limiting Resolution) in a laboratory setting, and "clinical images" are used for qualitative assessment.
6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the "clinical images" mentioned, the type of ground truth is not specified. Given the context, it's likely qualitative assessment by radiologists that the images are of diagnostic quality for their intended use, rather than a comparison to a definitive clinical ground truth for specific pathologies. For the technical specifications (DQE, MTF, Limiting Resolution), these are objective measurements derived from physical phantom or test object images, not clinical ground truth.
7. The sample size for the training set:
- This device is hardware; it does not have a "training set" in the context of machine learning algorithms. The associated software (ExamVueDR) processes and presents images, but details about its own training data (if any for image processing algorithms) are not provided here.
8. How the ground truth for the training set was established:
- Not applicable, as this is hardware and not a machine learning algorithm.
(605 days)
The Clear Vision DR7000F product is intended for use by a qualified/trained doctor or technician on both adult and pediatric subjects for taking diagnostic radiographic exposures of the skull, spinal column, chest, abdomen, extremities, and other body parts. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position.
The Clear Vision DR7000F system is intended to be used in medical clinics and hospitals for emergency, orthopedic, chiropractic, and other medical purposes. This device is not indicated for use in mammography.
The Clear Vision DR7000F system is a high-resolution digital imaging system designed for digital radiography. It is designed to replace conventional film radiography techniques. This system consists of a tube head/collimator assembly mounted on a U-Arm, along with a generator, generator control, and a detector, operating software.
The detectors used in the proposed device are the LTX240AA01-A (K090742) and LLX240AB01 (K102587) from Samsung Mobile Display Co., Ltd. These detectors were previously cleared by FDA under 510(k).
The provided 510(k) summary for the Clear Vision DR7000F does not contain information about acceptance criteria or a study proving the device meets specific performance criteria related to AI or algorithm-only performance.
The document describes a digital radiography X-ray system, which is a hardware device, not an AI or algorithm-based diagnostic tool. The submission focuses on demonstrating substantial equivalence to predicate hardware devices and compliance with electrical, mechanical, environmental safety, and performance standards for X-ray systems.
Therefore, most of the requested information regarding AI/algorithm performance, ground truth establishment, expert review, and training/test set sizes is not applicable to this document.
Here's a breakdown of what can be extracted or inferred from the provided text, and where information is missing / not applicable:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Electrical, Mechanical, Environmental Safety & Performance: Compliant with EN/IEC 60601-1, 60601-1-1, 60601-1-3, 60601-2-7, 60601-2-28, 60601-2-32. | All test results were satisfactory. |
| EMC: Compliant with EN/IEC 60601-1-2(2007). | Testing was conducted in accordance with standard EN/IEC 60601-1-2(2007). All test results were satisfactory. |
| X-ray Detector Performance: Not explicitly stated as a separate criterion, but performance and clinical testing were provided as recommended by FDA guidance for Solid State X-ray Imaging Devices. | Performance and clinical testing for the X-ray detectors were provided. (The document indicates the detectors LTX240AA01-A and LLX240AB01 were previously cleared by FDA 510(k), implying their performance was acceptable.) |
| Substantial Equivalence: To predicate devices CDX-DR80D (Choongwae Medical Corp.) and LTX240AA01-A, LLX240AB01 (Samsung Mobile Display Co. Ltd.). | The conclusion states the device is substantially equivalent to the predicate devices, implying it meets the necessary performance and safety profiles. |
Regarding specific questions related to AI/Algorithm performance:
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Not applicable / Not provided. This device is an X-ray imaging system, not an AI algorithm. The "clinical testing" mentioned for the X-ray detectors likely refers to performance evaluation under clinical conditions, not an algorithm's diagnostic accuracy on a test set of images.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable / Not provided. No specific "ground truth" establishment for an algorithm's performance is mentioned. Evaluation of an X-ray system focuses on image quality, radiation dose, safety, and functionality, which are assessed against technical specifications and clinical utility, rather than diagnostic "ground truth" for an AI model.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable / Not provided.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance
- Not applicable / Not provided. This submission is for a medical imaging device, not an AI-assisted diagnostic tool. No MRMC study is mentioned.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not applicable / Not provided. No standalone algorithm performance is discussed.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not applicable / Not provided. For an X-ray device, "ground truth" generally relates to physical measurements (e.g., spatial resolution, contrast-to-noise ratio, MTF, DQE) and clinical image quality (diagnostic acceptability) rather than a pathology reference for an AI diagnosis.
8. The sample size for the training set
- Not applicable / Not provided. No AI training set is mentioned.
9. How the ground truth for the training set was established
- Not applicable / Not provided. No AI training is mentioned.
Summary regarding the device:
The Clear Vision DR7000F is a digital radiography X-ray system. The study proving it meets acceptance criteria primarily involves engineering and performance testing against established international standards (EN/IEC 60601 series) for medical electrical equipment, as well as specific guidance for solid-state X-ray imaging devices. The acceptance criteria relate to electrical, mechanical, environmental safety, electromagnetic compatibility (EMC), and the technical performance and clinical utility of the X-ray detectors. The "study" mentioned is the compilation of these satisfactory test results conducted by the manufacturer, demonstrating compliance and substantial equivalence to existing cleared predicate hardware devices.
(97 days)
ClearVision is intended to be used by dentists and other qualified professionals for producing diagnostic x-ray radiographs of dentition, jaws and other oral structures.
ClearVision is a digital imaging system for dental radiographic application. The product is to be used for routine dental radiographic examinations such as bitewings, periapicals, etc. Two different sized sensors (size 1 and size 2) are utilized to image different anatomy and for different patient sizes. The CMOS sensor connects directly to a USB connection in a PC without the need for an intermediate electrical interface. ClearVision works with a standard dental intraoral x-ray source without any connection to the x-ray source. ClearVision captures an image automatically upon sensing the production of x-ray and after the x-ray is complete, transfers the image to an imaging software program on the PC. Disposable sheaths are used with each use to prevent cross-contamination between patients.
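The automatic capture behavior ("captures an image automatically upon sensing the production of x-ray") is a form of automatic exposure detection (AED): the sensor monitors its own signal and triggers acquisition when it rises above the dark-level baseline. The submission gives no implementation details; the sketch below only illustrates the general threshold-based idea, with the frame representation and threshold value being invented placeholders:

```python
def detect_exposure(frames, threshold):
    """Return the index of the first frame whose mean signal exceeds the
    dark-level threshold (i.e., where x-ray exposure is first sensed),
    or None if no exposure is detected in the sequence.
    """
    for i, frame in enumerate(frames):
        mean_signal = sum(frame) / len(frame)
        if mean_signal > threshold:
            return i
    return None
```

In a real sensor the threshold would be calibrated against dark-current noise so that the trigger neither fires spuriously nor clips the start of the exposure.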
The following is an analysis of the provided text regarding the ClearVision Digital Sensor System.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for the ClearVision Digital Sensor System are derived from its comparison to predicate devices (Schick CDR and Gendex GXS-700) and general engineering requirements. The reported device performance indicates equivalency or superiority to these predicates.
| Acceptance Criteria Category | Specific Criteria/Test | Predicate Device A (Schick CDR) Performance | Predicate Device B (Gendex GXS-700) Performance | ClearVision Sensor Performance | Met? |
|---|---|---|---|---|---|
| Imaging Performance | Image Line Pair Phantom | - | - | Equivalent to GXS-700, Superior to Schick CDR | Yes |
| | Image Aluminum Step Wedge | - | - | Equivalent to GXS-700, Superior to Schick CDR | Yes |
| | Image Tooth Phantom | - | - | Equivalent to GXS-700, Superior to Schick CDR | Yes |
| Electrical Safety | IEC 60601-1 compliance | - | - | Meets requirements | Yes |
| EMI/EMC | IEC 60601-1-2 compliance | - | - | Meets requirements | Yes |
| Durability | Sensor housing and cable mechanical testing | - | - | Met all specified requirements | Yes |
| Reliability | Consistent image capture and transfer over extended life | - | - | Completely reliable | Yes |
| Image Quality Consistency | Consistent over expected lifetime exposures to radiation | - | - | Meets requirements | Yes |
| Hermetic Classification | IP67 per IEC 60529 | - | - | Meets requirements | Yes |
Note: The document states "found to be equivalent to the Gendex GXS-700 in all three tests and superior to the Schick sensor in all three imaging tests," implying that the performance level of the GXS-700 served as the primary benchmark for "equivalency" for the ClearVision's imaging performance.
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state a specific numerical sample size for the test set used in the imaging performance comparison. It mentions "each sensor to image a line pair phantom, an aluminum step wedge, and a tooth phantom." This implies at least one instance of imaging each of these phantoms per sensor.
The data provenance is not explicitly stated. Given the context of a 510(k) submission and the nature of the tests (imaging phantoms), it is highly likely that this was prospective data generated in a controlled laboratory or engineering setting, likely within the United States where the company is based. There is no mention of patient data or clinical trials.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document does not describe the use of human experts to establish ground truth for the test set. The tests performed ("imaging a line pair phantom, an aluminum step wedge, and a tooth phantom") are objective, physical measurements against established standards for image quality and resolution (e.g., line pairs, step wedge density differences). Therefore, the "ground truth" would be inherent in the physical phantoms themselves and the objective metrics used to evaluate the images.
4. Adjudication Method for the Test Set
No adjudication method is described, as the evaluation methods appear to be objective and quantitative (e.g., measuring line pairs, density differences). Human interpretation or consensus for ground truth was not mentioned or implied.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The study described is a technical performance comparison of the device against predicate devices using physical phantoms, not a clinical study involving human readers or patient cases. Therefore, there is no effect size related to human reader improvement with or without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, the described imaging performance comparison of the ClearVision sensor is a standalone assessment. The evaluation focuses solely on the device's ability to capture and produce images from phantoms, without any involvement of a human interpreter in the loop for diagnostic decision-making during the testing process. The device itself is a digital sensor, not an AI algorithm.
7. The Type of Ground Truth Used
The ground truth used for the imaging performance tests was objective, physical standards provided by the phantoms:
- Line pair phantom: Provides known spatial frequencies (lines per millimeter) to assess resolution.
- Aluminum step wedge: Provides known material thicknesses/densities to assess contrast and dynamic range.
- Tooth phantom: Likely provides a realistic but standardized representation of dental anatomy to assess overall image quality and detail capture.
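The step wedge works as an objective ground truth because the transmitted x-ray intensity through each step of known thickness follows the Beer–Lambert law, so the expected signal ratios between steps are known in advance. A small illustrative calculation (the attenuation coefficient and step thicknesses here are round, invented numbers, not values from the document):

```python
import math

def transmitted_fraction(mu, thickness_mm):
    """Beer-Lambert law: fraction of incident x-ray intensity transmitted
    through a uniform absorber of the given thickness (mm), where mu is
    the linear attenuation coefficient (1/mm)."""
    return math.exp(-mu * thickness_mm)

# Illustrative only: mu ~ 0.1 /mm, aluminum steps of 2, 4, and 6 mm.
steps = [2.0, 4.0, 6.0]
fractions = [transmitted_fraction(0.1, t) for t in steps]
```

Because the steps are equally spaced, successive transmitted fractions differ by a constant ratio, which is exactly the kind of predictable relationship a step-wedge image is checked against.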
8. The Sample Size for the Training Set
The document describes a physical medical device (a digital X-ray sensor), not an AI/machine learning algorithm. Therefore, there is no training set in the context of an algorithm or AI model. The device's "training" or development would have involved engineering design, prototyping, and iterative testing to meet specifications, but not a dataset for training an algorithm.
9. How the Ground Truth for the Training Set Was Established
Since this is a physical device and not an AI algorithm, the concept of a "training set" and establishing ground truth for it is not applicable. The device's "ground truth" during its development would have been established through engineering specifications, material properties, and performance targets derived from scientific principles and a comparison to existing technologies.
(325 days)
The Clear Vision DR 2000 product is intended for use by a qualified/trained doctor or technician on both adult and pediatric subjects for taking diagnostic radiographic exposures of the skull, spinal column, chest, abdomen, extremities, and other body parts. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position.
The Clear Vision DR 2000 system is intended to be used in medical clinics and hospitals for emergency, orthopedic, chiropractic, and other medical purposes. This device is not indicated for use in mammography.
The Clear Vision DR2000 system is a high-resolution digital imaging system designed for digital radiography. It is designed to replace conventional film radiography techniques. This system consists of a tube head/collimator assembly mounted on a U-Arm, along with a generator, generator control, and a detector, operating software.
The detectors used in the proposed device are the QXR9 (K073056) and QXR16 (K080553) from Vieworks Co., Ltd. These detectors were previously cleared by FDA under 510(k).
The provided 510(k) summary for the Clear Vision DR 2000 does not contain information about explicit acceptance criteria for diagnostic performance, nor does it detail a specific study proving the device meets such criteria in terms of clinical accuracy or reader performance.
Instead, the submission focuses on demonstrating substantial equivalence to predicate devices through technical specifications, safety, and electromagnetic compatibility (EMC) testing. The "performance data" mentioned refers to these engineering and safety tests rather than clinical performance for diagnostic accuracy.
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Not available in the provided text. The submission focuses on demonstrating technical compliance and substantial equivalence to predicate devices, not on quantitative diagnostic performance metrics.
2. Sample Size Used for the Test Set and Data Provenance
Not applicable. There is no mention of a clinical "test set" for diagnostic performance evaluation. The "testing" referred to in the document pertains to electrical safety, mechanical, and EMC tests.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
Not applicable. No ground truth establishment for diagnostic performance is described.
4. Adjudication Method for the Test Set
Not applicable. No diagnostic performance test set or adjudication method is described.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
Not applicable. The Clear Vision DR 2000 is a digital radiography X-ray system, not an AI-powered diagnostic tool. Therefore, an MRMC study assessing AI assistance is not relevant to this submission.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Not applicable. The Clear Vision DR 2000 is a hardware system for image acquisition, not a standalone diagnostic algorithm.
7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, Etc.)
Not applicable. No diagnostic performance evaluation requiring ground truth is described.
8. The Sample Size for the Training Set
Not applicable. The Clear Vision DR 2000 is a medical imaging acquisition device; it does not inherently involve a "training set" in the context of machine learning or AI algorithms for diagnostic purposes.
9. How the Ground Truth for the Training Set Was Established
Not applicable. As no training set is mentioned in the context of diagnostic algorithms, the establishment of ground truth for such a set is not relevant.
Summary of the Study Discussed in the 510(k) Submission:
The study detailed in this 510(k) submission is a series of engineering and safety tests to ensure the Clear Vision DR 2000 system meets relevant industry standards and is substantially equivalent to predicate devices. These tests include:
- Electrical, mechanical, environmental safety and performance testing according to standards EN/IEC 60601-1, EN/IEC 60601-1-1, EN/IEC 60601-1-3, EN/IEC 60601-2-7, EN/IEC 60601-2-28, and EN/IEC 60601-2-32.
- EMC testing in accordance with standard EN/IEC 60601-1-2(2007).
The acceptance criteria for these tests would be compliance with the specific requirements outlined in each of those EN/IEC standards. The reported device performance is that "All test results were satisfactory," indicating that the device met the specified engineering and safety criteria for each standard.
The focus of this 510(k) is to demonstrate that the device is safe and effective for its intended use as a digital radiography X-ray system, primarily by showing that its technical characteristics and safety features align with established standards and legally marketed predicate devices, rather than through a clinical study of diagnostic accuracy. The use of pre-cleared detectors (QXR9 and QXR16) also supports the claim of substantial equivalence.
(80 days)
The ClearVision nuclear medicine imaging system is intended for use as a diagnostic imaging device to acquire and process gated and non-gated Single Photon Emission Computed Tomography (SPECT) images.
Used with appropriate radiopharmaceuticals, the ClearVision system produces images that depict the anatomical distribution of radioisotopes within the myocardium.
The ClearVision Nuclear Medicine Imaging System acquires and processes cardiac data including gated and non-gated Single Photon Emission Computed Tomography (SPECT) studies. After completion of an acquisition, the operator can select the resulting acquisition data file to generate both qualitative and quantitative results for review by a physician. This includes processing using Release 5.6 of Segami Corporation's Mirage processing software that was previously cleared under 510(k) number K043441 dated 13-January-2005.
The acquisition system consists of either a single or dual small field-of-view detectors with each mounted on top of a tower that contains system electronics. To support the acquisition of SPECT data, the patient chair rotates up to 360 degrees in either clockwise or counterclockwise direction.
Prior to a patient scan, the following system features are used to ensure the myocardium is centered within each detector's field of view (FOV):
- Each tower can be moved horizontally along rails mounted to the floor plate.
- The patient chair seat pan can be moved side-to-side.
- Vertical and horizontal beam lasers are mounted to the side of each detector.
The ClearVision system's compact footprint and small FOV detector are specifically designed for placement in a facility lacking adequate floor space for a typical nuclear medicine imaging system.
The provided document is a 510(k) summary for the GVI Medical Devices ClearVision Nuclear Imaging System. It focuses on demonstrating substantial equivalence to a predicate device rather than presenting a detailed study with acceptance criteria and performance metrics for the ClearVision system itself.
Therefore, much of the requested information regarding acceptance criteria, specific study designs, sample sizes, expert qualifications, and ground truth establishment is not available in this document. The document describes a comparison of features and performance characteristics to a predicate device (Digirad Cardius 1 XPO and Cardius 2 XPO SPECT Imaging Systems, K070542) to establish substantial equivalence.
Here's a breakdown of what can be extracted and what is not available based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state "acceptance criteria" but rather presents a "Feature Comparison Summary" to demonstrate substantial equivalence to the predicate device. The performance characteristics listed are NEMA (National Electrical Manufacturers Association) standards, which are common for SPECT systems and implicitly serve as performance benchmarks.
| Feature | Acceptance Criteria (Predicate) | Reported ClearVision Performance | Does it meet acceptance criteria? |
|---|---|---|---|
| NEMA Reconstructed Spatial Resolution | 11.00 mm (for predicate) | 9.8 mm (central), 7.6 mm (tangential), 8.4 mm (radial) | The ClearVision's spatial resolution values (smaller numbers indicate better resolution) are superior to the predicate's 11.00 mm, indicating it meets or exceeds this aspect. |
| NEMA System Sensitivity | 160 cpm / uci (for predicate) | 147 cpm / uci | ClearVision's sensitivity is slightly lower than the predicate, but this is presented in the context of substantial equivalence, implying it is within an acceptable range for the intended use given other features. The document explicitly states "performs as well as the predicate". |
| NEMA Energy Resolution | < 10.5 % (for predicate) | ≤ 9.0 % | The ClearVision's energy resolution is better (lower percentage) than the predicate, indicating it meets or exceeds this aspect. |
| Energy Range | 50 - 170 keV (for predicate) | 90 – 160 keV | The ClearVision's energy range is narrower than the predicate's; this is a difference in specification but is not framed as a failure to meet a criterion for substantial equivalence. |
| Small Detector UFOV | Yes (6.2" x 8.3") | Yes (8.5" x 8.5") | The ClearVision has a larger UFOV, which is generally a benefit, so it meets or exceeds this. |
Note: The "Acceptance Criteria" column above is inferred from the predicate device's performance characteristics as presented for comparison in the document for demonstrating substantial equivalence.
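The NEMA figures in the table are FWHM-based metrics: reconstructed spatial resolution is the full width at half maximum (FWHM) of a point-source profile in mm, and energy resolution is the FWHM of the photopeak divided by the peak energy (e.g., 140 keV for Tc-99m), expressed as a percentage. A generic sketch of extracting FWHM from a sampled single-peaked profile by linear interpolation at the half-maximum crossings (not vendor code, and the Gaussian test profile is purely illustrative):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked sampled profile,
    found by linear interpolation at the two half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # left crossing: y rises through `half` between samples i0-1 and i0
    xl = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    # right crossing: y falls through `half` between samples i1 and i1+1
    xr = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return xr - xl

# Example profile: a Gaussian with sigma = 4 mm, whose analytic FWHM
# is 2 * sqrt(2 * ln 2) * sigma, about 9.42 mm.
x = np.linspace(-30.0, 30.0, 601)
y = np.exp(-x**2 / (2 * 4.0**2))
# Energy resolution (%) would analogously be 100 * fwhm(E, counts) / 140.0
# for a Tc-99m photopeak.
```

The same FWHM machinery serves both the spatial-resolution and energy-resolution figures; only the axis (mm vs. keV) and the normalization differ.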
2. Sample Size Used for the Test Set and Data Provenance
Not Available. The document does not describe a clinical study with a specific test set, patient data, or data provenance (country of origin, retrospective/prospective). The comparison is based on technical specifications and performance characteristics from NEMA standards and device design.
3. Number of Experts Used to Establish Ground Truth and Qualifications
Not Available. There is no mention of a human expert review or ground truth establishment process in the context of a clinical study in this document.
4. Adjudication Method
Not Available. As no expert review or human-in-the-loop study is described, no adjudication method is mentioned.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No. The document does not mention any MRMC comparative effectiveness study, nor any evaluation of human readers' improvement with or without AI assistance. The device is a SPECT imaging system, not an AI-assisted diagnostic tool in the sense of a software algorithm interpreting images.
6. Standalone (Algorithm Only) Performance Study
Yes, implicitly, for certain physical parameters. The performance characteristics listed (NEMA Reconstructed Spatial Resolution, NEMA System Sensitivity, NEMA Energy Resolution) are measures of the system's inherent physical performance, which can be considered "standalone" as they don't involve a human in the loop for the measurement itself. However, this is for the imaging system's hardware performance, not an "algorithm" in the sense of an AI model making diagnostic interpretations. The Segami Mirage processing software (Release 5.6) used for processing is mentioned as previously cleared (K043441), indicating its standalone processing capabilities were evaluated separately.
7. Type of Ground Truth Used
For the physical performance characteristics of the device, the "ground truth" is established through standardized phantom measurements according to NEMA (National Electrical Manufacturers Association) protocols. These are empirical measurements of physical performance metrics (e.g., resolution in mm, sensitivity in cpm/uci, energy resolution in %). For images generated, the ground truth for diagnostic interpretation would typically involve clinical data like pathology or patient outcomes, but this level of detail is not discussed for the ClearVision system in this document.
8. Sample Size for the Training Set
Not Applicable/Not Available. The ClearVision is a hardware device (SPECT scanner) combined with processing software. There is no mention of a "training set" in the context of machine learning. The system's design and engineering are based on established physics and medical imaging principles.
9. How the Ground Truth for the Training Set Was Established
Not Applicable/Not Available. As there is no "training set" in the machine learning sense, there is no discussion of how ground truth for such a set was established.
Summary Explanation:
This 510(k) submission primarily focuses on demonstrating substantial equivalence to an existing predicate device (Digirad Cardius 1 XPO and Cardius 2 XPO SPECT Imaging Systems), rather than presenting a new clinical study with novel acceptance criteria and extensive human-in-the-loop performance evaluations. The document highlights the ClearVision's technical specifications and physical performance characteristics (e.g., NEMA standards) in comparison to the predicate, arguing that despite some differences (like detector technology and collimator type), it performs "as well as" the predicate and does not introduce new safety risks. This approach is common for demonstrating equivalence of medical devices, especially imaging hardware, where performance is often benchmarked against established industry standards and predicate device capabilities.