Search Results
Found 2 results
510(k) Data Aggregation
(55 days)
D2RS & D2RS9090 Digital Dynamic Remote System is indicated for use in generating fluoroscopic images of human anatomy for vascular angiography, diagnostic, and interventional procedures. It is also indicated for generating fluoroscopic images of human anatomy for cardiology, diagnostic, and interventional procedures. It is intended to replace fluoroscopic images obtained through image intensifier technology. Not intended for mammography applications.
The D2RS & D2RS9090 are direct digital dynamic remote-controlled fluoroscopy and radiography systems equipped with the latest generation of Canon Flat Panel Detector (FPD). The single FPD can perform both fluoroscopy and radiography and is detachable and portable for direct projections, creating a unique and highly versatile 3-in-1 imaging solution. The receptor panel directly converts the X-ray images captured by the sensor into high-resolution digital images. The instrument is suited for use inside a patient environment. The unit converts X-rays into digital signals and can acquire still and moving images. The system includes a remotely controlled tilting/elevating table. A new tilting table has been introduced (D2RS9090). This variant of the D2RS modifies the foot of the remote-controlled table: instead of being located beneath the table, the foot is now behind it. This allows tilting from -90 to +90° (versus -25 to +90° with the previous version) and elevation from 368 to 1455 mm (versus 640 to 930 mm with the previous version). The control console has been updated to a more modern-looking design but is functionally identical to the predicate.
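Since the system acquires both still and moving images and the comparison below lists DICOM 3 support, dynamic runs from a detector like this are typically stored as DICOM multi-frame objects. Below is a minimal, hypothetical sketch using pydicom to inspect such an object; the file name and the exact export behaviour are assumptions, not details from the submission.

```python
# Hypothetical illustration only: inspect a multi-frame (dynamic) DICOM object,
# e.g. a fluoroscopy run. The file path is an assumption; the 510(k) summary
# does not describe the export workflow.
import pydicom

ds = pydicom.dcmread("example_fluoro_run.dcm")  # hypothetical exported file

print("Modality:   ", ds.Modality)                        # "RF" is typical for radiofluoroscopy
print("Matrix:     ", ds.Rows, "x", ds.Columns)
print("Bits stored:", ds.BitsStored)
print("Frames:     ", getattr(ds, "NumberOfFrames", 1))   # 1 for a still radiograph

pixels = ds.pixel_array  # (frames, rows, cols) for a dynamic run, (rows, cols) for a still
print("Pixel array shape:", pixels.shape)
```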
The provided text describes a 510(k) premarket notification for a medical device called D2RS & D2RS9090 Digital Dynamic Remote System. The submission aims to establish substantial equivalence to a predicate device (D2RF Digital Dynamic Remote System). Here's a breakdown of the requested information based on the provided text, with notes where information is not explicitly stated.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are generally implied to be "as safe and effective as the predicate device" and compliance with relevant IEC standards and FDA guidances. The performance is assessed against these implicit criteria and through a comparison of technological characteristics.
Acceptance Criteria (Implied / Explicit) | Reported Device Performance and Evidence |
---|---|
Technological Equivalence / Superiority (Implicit) | Comparison of new device vs. predicate (rows below). |
Indications for Use remain unchanged | Same Indications for Use: D2RS & D2RS9090 is indicated for generating fluoroscopic images of human anatomy for vascular angiography, diagnostic, and interventional procedures, and for cardiology, diagnostic, and interventional procedures. Intended to replace image intensifier technology; not for mammography. (Matches predicate.) |
Updated Digital Panel Performance (Superior or Equivalent) | New panel: Canon CXDI-RF Wireless B1 (predicate used Canon CXDI-50RF). Dimensions: 480 x 460 mm (vs. 492.8 x 503.1 mm); useful area: 42 x 43 cm (vs. 35 x 43 cm) - superior; scintillator: CsI (same); pixel pitch: 160 µm (same); spatial resolution: 3.1 lp/mm (same); matrix: 2592 x 2656 pixels (vs. 2208 x 2688 pixels) - superior; AD conversion: 16 bits (vs. 14 bits) - superior; DQE: 60% at 0.5 lp/mm (vs. 59%) - superior; MTF: 38% at 2 lp/mm (vs. 30%) - superior; max frame rate: 30 fps (same); interface: Ethernet/wireless Wi-Fi (vs. Ethernet) - superior; DICOM 3: yes (same); operating temperature: 15-40°C (same). (See the worked note after this table.) |
New Tilting Table Functionality (Superior or Equivalent) | New tilting table (D2RS9090): tilting from -90 to +90° (vs. -25 to +90° for predicate) - superior; elevation from 368 to 1455 mm (vs. 640 to 930 mm for predicate) - superior. |
Compliance with International Standards and FDA Guidance (Explicit) | Bench/performance testing data: systems covering all generator/panel combinations were assembled, tested, and found to operate properly. Software validated per the FDA Guidance for Software. Cybersecurity recommendations followed per FDA guidance. New digital X-ray panel tested per the FDA Guidance for Solid State X-ray Imaging Devices. Compliance with IEC 60601-1, IEC 60601-1-3, IEC 60601-1-2, IEC 60601-1-6, IEC 60601-2-28, IEC 60601-2-54, and IEC 62304. Unit complies with the US Performance Standard for radiographic equipment. |
Diagnostic Quality of Images (Implicit, based on indications for use) | Clinical evaluation: a board-certified radiologist reviewed both static and moving images and found them to be of excellent diagnostic quality. |
Overall Safety and Effectiveness (Conclusion for Substantial Equivalence Determination) | Conclusion: based on the comparison of technological characteristics and the bench/clinical results, the modified and updated system is as safe and effective as the predicate device and is therefore substantially equivalent. |
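As a side note, the panel figures quoted above are internally consistent. The sketch below is a back-of-envelope check using only the numbers from the table, not any test data from the submission.

```python
# Back-of-envelope check of the quoted CXDI-RF Wireless B1 figures.
# All inputs come from the comparison table above, not from test data.

pixel_pitch_mm = 0.160   # 160 µm pixel pitch
rows, cols = 2592, 2656  # stated matrix
bit_depth = 16           # stated AD conversion
fps = 30                 # stated maximum frame rate

# The limiting spatial resolution of a sampled detector is the Nyquist
# frequency, 1 / (2 * pixel pitch).
nyquist_lp_mm = 1 / (2 * pixel_pitch_mm)
print(f"Nyquist limit: {nyquist_lp_mm:.2f} lp/mm")  # ~3.13, matching the stated 3.1 lp/mm

# Uncompressed data volume implied by the matrix, bit depth, and frame rate.
bytes_per_frame = rows * cols * bit_depth // 8
print(f"Frame size: {bytes_per_frame / 1e6:.1f} MB")                 # ~13.8 MB
print(f"Raw stream: {bytes_per_frame * fps / 1e6:.0f} MB/s at {fps} fps")
```

In practice a wireless panel would typically bin, window, or compress data before transfer, so the raw-stream figure is only illustrative of why the Ethernet/Wi-Fi interface comparison matters; the submission does not describe the actual transfer scheme.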
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: Not explicitly stated. The text mentions "both static and moving images" were reviewed, but doesn't quantify the number of images or cases.
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective or prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: "A board certified radiologist" (singular) was used.
- Qualifications: "board certified radiologist." Specific experience level (e.g., years of experience) is not mentioned.
4. Adjudication method for the test set
- Adjudication Method: "A board certified radiologist reviewed... and found them to be of excellent diagnostic quality." This implies a single-reader review rather than an adjudication process involving multiple experts. No formal adjudication method (like 2+1 or 3+1) is mentioned.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study: No; no MRMC comparative effectiveness study is described. The evaluation reported is a single-reader review for diagnostic image quality, comparing the device's output to an implicit standard of "excellent diagnostic quality," presumably relative to the predicate device's expected performance. This submission is for an imaging system, not an AI-assisted diagnostic tool.
- Effect Size: Not applicable, as no MRMC study with AI assistance was performed.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Standalone Performance: Not applicable in the context of this submission. The device is an X-ray imaging system, where human interpretation of the images is inherent to its use. The "algorithm" here refers to the system's image acquisition and processing capabilities, not an AI for diagnosis. The clinical evaluation focuses on the quality of the images produced for human interpretation.
7. The type of ground truth used
- Type of Ground Truth: The ground truth for the clinical evaluation was based on the subjective assessment of a board-certified radiologist who determined the images to be of "excellent diagnostic quality." This is expert opinion (a single reader) implicitly benchmarked against diagnostic standards; it is not pathology or outcomes data.
8. The sample size for the training set
- Sample Size for Training Set: Not applicable. The device described is an X-ray imaging system, not an AI/machine learning algorithm that requires a training set in the conventional sense. The "training" of such a system involves engineering and calibration to produce high-quality images according to technical specifications and clinical needs, rather than learning from a dataset.
9. How the ground truth for the training set was established
- Ground Truth for Training Set: Not applicable. As mentioned above, this is an X-ray imaging system, not an AI/ML algorithm that generates a "ground truth" during a training phase.
In summary, the submission focuses on demonstrating substantial equivalence primarily through technological comparison to a predicate device, adherence to relevant technical standards, and a qualitative assessment of image quality by a radiologist. It is not an AI-based diagnostic device; therefore, many of the questions related to AI studies (MRMC, training sets, etc.) are not directly applicable.
(66 days)
The URS-50RF is indicated for use in generating fluoroscopic images of human anatomy for vascular angiography, diagnostic and interventional procedures. It is also indicated for generating fluoroscopic images of human anatomy for cardiology, diagnostic, and interventional procedures. It is intended to replace fluoroscopic images obtained through image intensifier technology. Not intended for mammography applications.
The Canon Dynamic/Static DR URS-50RF is a portable digital radiography system that can take images of any part of the body. It directly converts the X-ray images captured by the LANMIT (Large Area New MIS Sensor and TFT) sensor into high-resolution digital images. The instrument is suited for use inside a patient environment. The unit converts X-rays into digital signals and can acquire still and moving images.
Here's an analysis of the provided text regarding the acceptance criteria and study for the Canon Dynamic/Static DR Model URS-50RF Fluoroscopic Digital X-Ray System:
Summary of Device and Study Information (K093688)
This 510(k) summary describes a fluoroscopic digital X-ray system, the Canon Dynamic/Static DR URS-50RF, intended to generate fluoroscopic images for vascular angiography, diagnostic and interventional procedures, and cardiology. It aims to replace image intensifier technology. The submission focuses on demonstrating substantial equivalence to predicate devices, primarily through performance testing and software validation.
1. Table of Acceptance Criteria and Reported Device Performance
Note: The provided document is a 510(k) summary. For medical devices, particularly those establishing substantial equivalence, explicit "acceptance criteria" are often phrased in terms of meeting or exceeding the performance of legally marketed predicate devices, or complying with relevant standards. The document does not list specific numerical acceptance criteria for image quality, diagnostic accuracy, or clinical endpoints. Instead, it makes a general statement about performance.
Acceptance Criteria Category | Specific Acceptance Criteria (as implied/stated) | Reported Device Performance |
---|---|---|
Safety & Effectiveness | Device is safe and effective | Device demonstrated safe and effective operation. |
Performance Comparability | Device performs comparably to predicate devices | Device performs comparably to predicate devices. |
Substantial Equivalence | Device is substantially equivalent to predicate devices | Device is substantially equivalent to predicate devices. |
Technological Characteristics | Technological characteristics are equal to or better than predicate devices | Technological characteristics are equal to or better than predicate devices, and units are functionally identical. |
Electrical Safety | Compliance with relevant electrical safety standards | Electrical safety testing performed; unit complies with the US Performance Standard for radiographic equipment. |
Electromagnetic Compatibility (EMC) | Compliance with relevant EMC standards | Electromagnetic Compatibility testing performed. |
Software Validation | Software is validated | Software Validation performed. |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not explicitly stated. The document mentions "Tests were performed on the device," but does not specify the number of cases, images, or subjects used for performance testing.
- Data Provenance: Not specified. It is unclear whether the testing involved human subjects, phantoms, or simulated data, and the country of origin of any data used is not given. Given the nature of a 510(k) for an imaging device, it is highly probable that bench testing with phantoms, and potentially some limited clinical evaluation (if needed to show equivalence for image quality), was involved, but details are absent.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified.
4. Adjudication Method for the Test Set
- Adjudication Method: Not specified. With no mention of expert review or ground truth establishment, no adjudication method is detailed.
5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study
- MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not specifically mentioned or implied in the provided 510(k) summary. The summary focuses on demonstrating substantial equivalence to already marketed devices based on technological characteristics and general performance testing, rather than a direct comparison of physician performance with and without AI assistance.
- Effect Size of Human Reader Improvement: Not applicable, as no MRMC study (or AI assistance) was described.
6. Standalone (Algorithm Only) Performance Study
- Standalone Study: This device is a hardware fluoroscopic digital X-ray system, not an AI algorithm. Therefore, the concept of a "standalone (algorithm only)" performance study does not apply in this context. The performance described relates to the entire system's ability to acquire and process images.
7. Type of Ground Truth Used
- Type of Ground Truth: Not explicitly stated. The performance testing is generally described as validating that the device is "safe and effective" and "performs comparably" to predicate devices. For an imaging system, ground truth might involve:
- Physical Measurements: Using phantoms to verify spatial resolution, contrast resolution, noise, dose efficiency, etc. (a simple illustration follows this list).
- Clinical Image Quality Assessment: Expert review of images to ensure diagnostic interpretability, though this is not described as formal "ground truth" establishment.
- Comparison to Predicate: Performance is often benchmarked against images from the predicate device.
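For context on what a phantom-based bench measurement of spatial resolution can look like, here is a generic, hypothetical sketch of edge-based MTF estimation. It is not the procedure described in this submission; the 160 µm pixel pitch is taken from the CXDI-50RF figures quoted in the first result above, and the edge profile is simulated rather than measured.

```python
# Generic illustration of an edge-based MTF estimate (not the submission's method).
import numpy as np

pixel_pitch_mm = 0.160  # CXDI-50RF pixel pitch quoted in the first result above

# Simulated 1-D edge profile (ESF) standing in for a measured edge-phantom scan.
x = np.arange(256)
esf = 1.0 / (1.0 + np.exp(-(x - 128) / 1.5))  # smooth step ~ blurred edge

# Differentiate the edge spread function to obtain the line spread function (LSF).
lsf = np.gradient(esf)

# The MTF is the normalised magnitude of the Fourier transform of the LSF.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)  # line pairs per mm

# A common figure of merit: the frequency at which the MTF falls to 50%.
f50 = freqs[np.argmax(mtf < 0.5)]
print(f"MTF50 ≈ {f50:.2f} lp/mm")
```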
8. Sample Size for the Training Set
- Sample Size for Training Set: Not applicable. This device is a hardware imaging system, not an AI or machine learning algorithm that requires a "training set."
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set Establishment: Not applicable, as there is no training set for a hardware device.