Search Results
Found 3 results
510(k) Data Aggregation
(142 days)
YSIO X.pree
The intended use of the device YSIO X.pree is to visualize anatomical structures of human beings by converting an X-ray pattern into a visible image.
The device is a digital X-ray system to generate X-ray images from the whole body including the skull, chest, abdomen, and extremities. The acquired images support medical professionals to make diagnostic and/or therapeutic decisions.
YSIO X.pree is not for mammography examinations.
The YSIO X.pree is a radiography X-ray system. It is designed as a modular system with components such as a ceiling suspension with an X-ray tube, Bucky wall stand, Bucky table, X-ray generator, portable wireless and fixed integrated detectors that may be combined into different configurations to meet specific customer needs.
The following modifications have been made to the cleared predicate device:
- Updated generator
- Updated collimator
- Updated patient table
- Updated Bucky Wall Stand
- New X.wi-D 24 portable wireless detector
- New virtual AEC selection
- New status indicator lights
The provided 510(k) clearance letter and summary for the YSIO X.pree device (K250738) indicate that the device is substantially equivalent to a predicate device (K233543). The submission primarily focuses on hardware and minor software updates, asserting that these changes do not impact the device's fundamental safety and effectiveness.
However, the provided text does not contain the detailed information typically found in a clinical study report regarding acceptance criteria, sample sizes, ground truth establishment, or expert adjudication for an AI-enabled medical device. This submission appears to be for a conventional X-ray system with some "AI-based" features like auto-cropping and auto-collimation, which are presented as functionalities that assist the user rather than standalone diagnostic algorithms requiring extensive efficacy studies for regulatory clearance.
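To make that distinction concrete, a workflow feature such as auto-cropping can be thought of as an image-preparation step rather than a diagnostic classifier. The sketch below is purely illustrative and assumes nothing about Siemens' actual implementation, which the submission describes only as "AI-based"; the function name, threshold rule, and use of NumPy are all hypothetical.

```python
import numpy as np

def auto_crop(image: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """Crop a radiograph to its exposed field (hypothetical illustration).

    A shipped "AI-based" auto-crop would typically be a learned model,
    not this fixed threshold rule.
    """
    # Normalize pixel values to [0, 1] and flag pixels with meaningful signal.
    norm = (image - image.min()) / (np.ptp(image) + 1e-9)
    mask = norm > threshold

    # Row/column indices that contain at least one flagged pixel.
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    if rows.size == 0 or cols.size == 0:
        return image  # Nothing detected; return the image unchanged.

    # Crop to the bounding box of the exposed region.
    return image[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```

The point of the sketch is that such a step assists acquisition and presentation; it does not produce a finding a reader would act on, which is consistent with the submission treating these features under usability and image quality rather than diagnostic performance.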
Based on the provided document, here's an attempt to answer your questions, highlighting where information is absent or inferred:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria in terms of performance metrics (e.g., sensitivity, specificity, or image quality scores) with corresponding reported device performance values for the AI features. The "acceptance" appears to be qualitative and based on demonstrating equivalence to the predicate device and satisfactory usability/image quality.
If we infer acceptance criteria from the "Summary of Clinical Tests" and "Conclusion as to Substantial Equivalence," the criteria seem to be:
| Acceptance Criteria (Inferred) | Reported Device Performance (as stated in document) |
|---|---|
| Overall System: Intended use met, clinical needs covered, stability, usability, performance, and image quality are satisfactory. | "The clinical test results stated that the system's intended use was met, and the clinical needs were covered." |
| New Wireless Detector (X.wi-D24): Images acquired are of adequate radiographic quality and sufficiently acceptable for radiographic usage. | "All images acquired with the new detector were adequate and considered to be of adequate radiographic quality." and "All images acquired with the new detector were sufficiently acceptable for radiographic usage." |
| Substantial Equivalence: Safety and effectiveness are not affected by changes. | "The subject device's technological characteristics are same as the predicate device, with modifications to hardware and software features that do not impact the safety and effectiveness of the device." and "The YSIO X.pree, the subject of this 510(k), is similar to the predicate device. The operating environment is the same, and the changes do not affect safety and effectiveness." |
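For contrast with the qualitative statements above, a quantitative acceptance criterion for a diagnostic algorithm is usually written as a threshold on a held-out performance estimate. The check below is a purely hypothetical sketch; the metric choices, counts, and thresholds are invented and do not appear anywhere in the K250738 submission.

```python
# Hypothetical acceptance check; none of these numbers come from the submission.
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)

# Placeholder confusion-matrix counts from an imagined algorithm-vs-truth comparison.
tp, fn, tn, fp = 92, 8, 180, 20

meets_criteria = (
    sensitivity(tp, fn) >= 0.90      # e.g., pre-specified lower bound on sensitivity
    and specificity(tn, fp) >= 0.85  # e.g., pre-specified lower bound on specificity
)
print(f"sensitivity={sensitivity(tp, fn):.2f}, "
      f"specificity={specificity(tn, fp):.2f}, pass={meets_criteria}")
```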
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: Not explicitly stated as a number of cases or images. The "Customer Use Test (CUT)" was performed at two university hospitals.
- Data Provenance: The Customer Use Test (CUT) was performed at "Universitätsklinikum Augsburg" in Augsburg, Germany, and "Klinikum rechts der Isar, Technische Universität München" in Munich, Germany. The document states that a "clinical image quality evaluation by a US board-certified radiologist" was performed for the new detector, implying that the images may have originated at the German sites but were reviewed by a US expert. The study design appears to be prospective in the sense that the new device was evaluated during clinical use rather than through retrospective analysis of historical data.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts
- Number of Experts: For the overall system testing (CUT), it's not specified how many clinicians/radiologists were involved in assessing "usability," "performance," and "image quality." For the new wireless detector (X.wi-D24), it states "a US board-certified radiologist."
- Qualifications of Experts: For the new wireless detector's image quality evaluation, the expert was a "US board-certified radiologist." No specific experience level (e.g., years of experience) is provided.
4. Adjudication Method for the Test Set
No explicit adjudication method (e.g., 2+1, 3+1 consensus) is described for the clinical evaluation or image quality assessment. The review of the new detector was done by a single US board-certified radiologist, not multiple independent readers with adjudication.
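For reference, a 2+1 scheme means two primary readers label each case independently and a third reader is consulted only when they disagree. A minimal sketch of that rule, with hypothetical labels and no connection to this submission, looks like this:

```python
def adjudicate_2_plus_1(reader_a: str, reader_b: str, adjudicator: str) -> str:
    """Resolve one case under a 2+1 scheme (hypothetical illustration).

    Two primary readers label the case; the adjudicator's label is used
    only when they disagree. No such process is described in K250738.
    """
    if reader_a == reader_b:
        return reader_a   # Primary readers agree; the adjudicator is not consulted.
    return adjudicator    # Disagreement; the third reader's call stands.

# Example: the primary readers disagree, so the adjudicator decides.
print(adjudicate_2_plus_1("positive", "negative", "positive"))  # -> positive
```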
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance
- MRMC Study: No MRMC comparative effectiveness study is described where human readers' performance with and without AI assistance was evaluated. The AI features mentioned (Auto Cropping, Auto Thorax Collimation, Auto Long-Leg/Full-Spine collimation) appear to be automatic workflow enhancements rather than diagnostic AI intended to directly influence reader diagnostic accuracy.
- Effect Size: Not applicable, as no such study was conducted or reported (for context, a sketch of how such an effect size is typically computed follows this list).
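For context, the effect size from an MRMC study is typically reported as the change in a per-reader figure of merit, often the area under the ROC curve (AUC), between unaided and AI-aided reads, averaged over readers. Full MRMC analyses use variance models such as Obuchowski–Rockette to account for reader and case variability, but the headline number is essentially the difference sketched below; the AUC values here are invented for illustration only.

```python
# Hypothetical per-reader AUCs without and with AI assistance; these values are
# invented and do not come from any YSIO X.pree study.
auc_unaided = [0.82, 0.79, 0.85, 0.81]
auc_aided = [0.86, 0.84, 0.87, 0.85]

deltas = [aided - unaided for unaided, aided in zip(auc_unaided, auc_aided)]
effect_size = sum(deltas) / len(deltas)  # mean improvement in reader AUC
print(f"mean delta-AUC = {effect_size:+.3f}")
```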
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done
The document does not describe any standalone performance metrics for the AI-based features (Auto Cropping, Auto Collimation). These features seem to be integrated into the device's operation to assist the user, rather than providing a diagnostic output that would typically be evaluated in a standalone study. The performance of these AI functions would likely be assessed as part of the overall "usability" and "performance" checks.
7. The Type of Ground Truth Used
- For the overall system and the new detector, the "ground truth" seems to be expert opinion/consensus (qualitative clinical assessment) on the system's performance, usability, and the adequacy of image quality for radiographic use. There is no mention of pathology, outcomes data, or other definitive "true" states related to findings on the images.
8. The Sample Size for the Training Set
The document does not provide any information about a training set size for the AI-based auto-cropping and auto-collimation features. This is typical for 510(k) submissions of X-ray systems where such AI features are considered ancillary workflow tools rather than primary diagnostic aids.
9. How the Ground Truth for the Training Set was Established
Since no training set information is provided, there is no information on how ground truth was established for any training data.
In summary: The 510(k) submission for the YSIO X.pree focuses on demonstrating substantial equivalence for an updated X-ray system. The "AI-based" features appear to be workflow automation tools that were assessed as part of general system usability and image quality in a "Customer Use Test" and a limited clinical image quality evaluation for the new detector. It does not contain the rigorous quantitative performance evaluation data for AI software as might be seen for a diagnostic AI algorithm that requires a detailed clinical study for clearance.
(200 days)
YSIO X.pree
The intended use of the device YSIO X.pree is to visualize anatomical structures of human beings by converting an X-ray pattern into a visible image.
The device is a digital X-ray system to generate X-ray images from the whole body including the skull, chest, abdomen, and extremities. The acquired images support medical professionals to make diagnostic and/or therapeutic decisions.
YSIO X.pree is not for mammography examinations.
The YSIO X.pree is a radiography X-ray system. It is designed as a modular system with components such as a ceiling suspension with an X-ray tube, Bucky wall stand, Bucky table, X-ray generator, portable wireless, and fixed integrated detectors that may be combined into different configurations to meet specific customer needs.
The following modifications have been made to the cleared predicate device:
- New Camera Model in Collimator
- New Auto Collimation Function: Auto Long-Leg/Full-Spine
- Two new wireless detectors
The provided text is a 510(k) summary for the YSIO X.pree X-ray system. It describes the device, its intended use, and comparisons to predicate and reference devices. However, it does not contain the detailed clinical study information typically required to directly answer all aspects of your request regarding acceptance criteria and performance metrics for an AI/CADe device.
Specifically, the document mentions:
- A "Customer Use Test (CUT)" was performed at the "Universitätsklinikum Augsburg, Germany," focusing on "System function and performance-related clinical workflow, Image quality, Ease of use, Overall performance and stability."
- "The results of the clinical test stated that the intended use of the system was met, and the clinical need covered."
- "All images acquired with the new detectors were sufficiently acceptable for radiographic usage."
This summary indicates that new features, particularly the "Auto Collimation Function: Auto Long-Leg/Full-Spine" which is AI-based (taken from the MULTIX Impact algorithm, K213700), underwent testing. However, the FDA 510(k) summary does not include the specific acceptance criteria with reported performance against those criteria, nor detailed information about the study design (sample size, ground truth establishment, expert qualifications, etc.) for the AI-based auto collimation feature. The "Customer Use Test" appears to be a general usability and performance test for the overall system and new detectors, rather than a rigorous performance study for an AI algorithm with specific quantitative metrics.
Therefore, I cannot fully complete the table and answer all questions with the provided text. I can only extract what is present.
Here's a breakdown of what can be extracted and what cannot:
1. Table of Acceptance Criteria and Reported Device Performance:
The document does not provide a table of explicit acceptance criteria for the AI-based auto collimation function with corresponding quantitative performance metrics (e.g., accuracy, precision for delimiting regions of interest). It only states that the overall system and new detectors' images were "sufficiently acceptable for radiographic usage" and that the "intended use of the system was met, and the clinical need covered."
| Acceptance Criteria | Reported Device Performance |
|---|---|
| For overall system and new detectors (from Customer Use Test): | |
| System function and performance-related clinical workflow met criteria | Intended use of the system was met, and the clinical need covered. |
| Image quality acceptable | All images acquired with the new detectors were sufficiently acceptable for radiographic usage. |
| Ease of use acceptable | Not explicitly quantified, but implied by overall "intended use met." |
| Overall performance and stability acceptable | Not explicitly quantified, but implied by overall "intended use met." |
| For AI-based Auto Collimation (Auto Long-Leg/Full-Spine): | Information Not Provided in Text |
2. Sample size used for the test set and the data provenance:
- Test set sample size for AI-based auto collimation: Not specified in the provided text.
- Data Provenance: The Customer Use Test (CUT) was performed at "Universitätsklinikum Augsburg, Germany." This suggests prospective data collection in a clinical setting in Germany for the general system and new detectors. It is not explicitly stated whether the AI-based auto collimation performance was evaluated on this specific dataset, or whether a separate dataset (and its provenance) was used for validating the AI. Since the AI algorithm was "taken over" from the MULTIX Impact (K213700), the earlier 510(k) for that device may contain more details.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not specified for any specific ground truth establishment (especially for the AI-based auto collimation). The "Customer Use Test" involved clinical evaluation, implying healthcare professionals (presumably radiologists or radiographers) were involved, but their number and specific qualifications for establishing ground truth for AI performance are not detailed.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance:
- No MRMC comparative effectiveness study is described for the AI-based auto collimation. The document focuses on device safety and substantial equivalence to a predicate, not enhancement of human reader performance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Not explicitly detailed. The AI auto collimation feature is integrated into the workflow, implying it assists the operator, but a standalone technical performance study for the AI component itself is not described with quantitative results. The statement that the "Multix Impact algorithm has been taken over" suggests that its performance characteristics might have been established during the clearance of the MULTIX Impact (K213700), but those details are not in this document. (A sketch of one common way such standalone performance could be quantified follows this item.)
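If a standalone evaluation of an auto-collimation algorithm were reported, one common way to quantify it would be the overlap between the predicted collimation field and a reference field drawn by an expert, for example intersection-over-union (IoU). The sketch below is a generic illustration only; the box format, coordinates, and any threshold are assumptions and are not taken from the MULTIX Impact or YSIO X.pree submissions.

```python
def iou(box_a: tuple[float, float, float, float],
        box_b: tuple[float, float, float, float]) -> float:
    """Intersection-over-union of two (x0, y0, x1, y1) boxes.

    Generic illustration of a possible standalone metric for auto-collimation;
    not a description of how the cleared devices were actually evaluated.
    """
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    # Overlap rectangle (zero-sized if the boxes do not intersect).
    ix0, iy0 = max(ax0, bx0), max(ay0, by0)
    ix1, iy1 = min(ax1, bx1), min(ay1, by1)
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0

# Example: predicted vs. expert-drawn collimation field, in pixel coordinates.
predicted = (100.0, 50.0, 900.0, 1400.0)
reference = (120.0, 60.0, 880.0, 1380.0)
print(f"IoU = {iou(predicted, reference):.2f}")  # a protocol might require, say, IoU >= 0.90
```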
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not specified for the AI-based auto collimation. For the general system usability and image quality, the "Customer Use Test" implies a clinical assessment, likely representing expert (clinician) judgment.
8. The sample size for the training set:
- Not specified. The document states that the AI algorithm was "taken over" from the MULTIX Impact. This implies the training was done previously for the MULTIX Impact, but the size of that training set is not provided here.
9. How the ground truth for the training set was established:
- Not specified, for the same reasons as in point 8.
(124 days)
YSIO X.pree
The device is a digital X-ray system to generate X-ray images from the whole body including the skull, chest, abdomen, and extremities. The acquired images support medical professionals to make diagnostic and/or therapeutic decisions. Generic clinical benefits of radiographic examinations within the intended use are applicable for this device.
YSIO X.pree is not for mammography examinations.
The YSIO X.pree is a radiography X-ray system. It is designed as a modular system with components such as a ceiling suspension with X-ray tube, Bucky wall stand, Bucky table, X-ray generator, portable wireless and fixed integrated detectors that may be combined into different configurations to meet specific customer needs.
The provided document is a 510(k) summary for the Siemens YSIO X.pree X-ray system. It does not contain information about the acceptance criteria or a study proving the device meets specific performance criteria for an AI/CAD-related product.
The document primarily focuses on establishing substantial equivalence to a predicate device (Ysio Max) based on technological characteristics, intended use, and compliance with general safety and performance standards for X-ray systems.
Specifically, the document states:
- "Al-based Auto Cropping" is a feature described as a "New Algorithm," but the comparison table explicitly states it "does not affect safety or effectiveness." This implies that its performance was not a critical factor in the substantial equivalence determination for this 510(k). The document does not provide any performance metrics or studies related to this AI feature.
- The comparison tables highlight changes in DQE and MTF for the "MAX mini" detector, noting "small changes...does not affect safety and effectiveness." These are technical specifications of the detector, not overall system performance against clinical or perceptual criteria (the standard relationship between the two quantities is sketched after this list for context).
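For context, DQE and MTF are linked by a standard definition from detector physics rather than anything specific to this submission. A common frequency-dependent form (as used in IEC 62220-1-style detector characterizations) is

$$\mathrm{DQE}(f) \;=\; \frac{\mathrm{MTF}^{2}(f)}{q \cdot \mathrm{NNPS}(f)}$$

where $q$ is the incident photon fluence and $\mathrm{NNPS}(f)$ is the normalized noise power spectrum. The submission's position is simply that the reported shifts in these detector-level quantities are small enough not to affect safety or effectiveness.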
Therefore, since the document does not seem to describe an AI/CAD device that requires specific clinical performance testing against established acceptance criteria, I cannot fulfill the request for a table of acceptance criteria and associated study details from the provided text.
The information requested, such as sample size, ground truth establishment, expert adjudication, MRMC studies, and standalone performance, is typically found in submissions for AI/CAD-assisted diagnostic devices where the AI's performance is central to the safety and effectiveness claim. This 510(k) notice is for a general radiographic X-ray system, where the primary focus is on the hardware and its general imaging capabilities.