Search Results
Found 4 results
510(k) Data Aggregation
(23 days)
Philips Magnetic Resonance (MR) systems are Medical Electrical Systems indicated for use as a diagnostic device. This MR system enables trained physicians to obtain cross-sectional images, spectroscopic images and/or spectra of the internal structure of the head, body or extremities, in any orientation, representing the spatial distribution of protons or other nuclei with spin. Image appearance is determined by many different physical properties of the tissue, the MR scan technique applied, and the presence of contrast agents. The use of contrast imaging applications should be performed consistent with the approved labeling for the contrast agent. The trained clinical user can adjust the MR scan parameters to customize image appearance, accelerate image acquisition, and synchronize with the patient's breathing or cardiac cycle. The systems can use combinations of images to produce physical parameters; the related derived images, spectra, and measurements of physical parameters, when interpreted by a trained physician, provide information that may assist diagnosis and therapy planning. The accuracy of determined physical parameters depends on system and scan parameters, and must be controlled and validated by the clinical user. In addition, the Philips MR systems provide imaging capabilities, such as MR fluoroscopy, to guide and evaluate interventional and minimally invasive procedures in the head, body and extremities. MR interventional procedures, performed inside or adjacent to the Philips MR system, must be performed with MR Conditional or MR Safe instrumentation as selected and evaluated by the clinical user for use with the specific MR system configuration in the hospital. The appropriateness and use of information from a Philips MR system for a specific interventional procedure and specific MR system configuration must be validated by the clinical user.
This Special 510(k) submission covers modifications of the proposed Ingenia Elition and MR 7700 MR Systems as compared to Philips legally marketed devices: the primary predicate device, the MR 7700 R11 MR System of the 510(k) submission MR 5300 and MR 7700 R11 MR Systems (K223442, 12/23/2022), and the secondary predicate device, the legally marketed Ingenia Elition R11 MR System of the 510(k) submission Achieva, Ingenia, Ingenia CX, Ingenia Elition and Ingenia Ambition MR Systems R11 (K213583, 04/15/2022). In this 510(k) submission, Philips Medical Systems Nederland B.V. addresses the following minor hardware enhancements for the proposed Ingenia Elition and MR 7700 MR Systems since the last 510(k) submission (K223442, 12/23/2022) for each of the systems:
1. The SmokeDetector Interlock, a component used in the legally marketed Ingenia Elition and MR 7700 systems, becomes a mandatory risk control measure.
2. A minor design change to the current gradient coil type WB30S.
Identical to the predicate devices, the proposed Ingenia Elition and MR 7700 MR Systems are intended to be marketed with the following pulse sequences and coils that were previously cleared by FDA:
1. mDIXON (K102344)
2. SWIp (K131241)
3. mDIXON-Quant (K133526)
4. MRE (K140666)
5. mDIXON XD (K143128)
6. O-MAR (K143253)
7. 3D APT (K172920)
8. Compatible System Coils (identical to the predicate devices)
The provided FDA 510(k) summary for the Philips Ingenia Elition and MR 7700 MR Systems does not contain information about acceptance criteria or a study that proves the device meets specific performance criteria in the context of an AI/human-in-the-loop study. Instead, this submission is for minor hardware enhancements and focuses on demonstrating substantial equivalence to previously cleared predicate devices through compliance with recognized standards and non-clinical verification tests.
Therefore, I cannot provide details for most of the requested information points, as they are not present in the given text. The submission explicitly states: "The proposed Ingenia Elition and MR 7700 MR Systems did not introduce any modification to the indication for use or technological characteristics relative to the predicate devices that would require clinical testing."
However, I can extract information related to the non-clinical performance data and the general approach.
Here's the breakdown of what can be inferred and what cannot:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria (Inferred) | Reported Device Performance (Inferred) |
|---|---|
| Compliance with IEC 60601-2-33 (Medical electrical equipment - Part 2-33: Particular requirements for the basic safety and essential performance of magnetic resonance equipment for medical diagnosis) | Verification test results demonstrate that the proposed Ingenia Elition and MR 7700 MR Systems meet the acceptance criteria and are adequate for the intended use. |
| Compliance with ANSI/AAMI ES60601-1 (Medical Electrical Equipment - Part 1: General Requirements For Basic Safety And Essential Performance) | Verification test results demonstrate that the proposed Ingenia Elition and MR 7700 MR Systems meet the acceptance criteria and are adequate for the intended use. |
| Compliance with IEC 60601-1-2 (Electromagnetic disturbances Requirements and tests) | Verification test results demonstrate that the proposed Ingenia Elition and MR 7700 MR Systems meet the acceptance criteria and are adequate for the intended use. |
| Compliance with IEC 60601-1-6 (Usability) | Verification test results demonstrate that the proposed Ingenia Elition and MR 7700 MR Systems meet the acceptance criteria and are adequate for the intended use. |
| Compliance with IEC 60601-1-8 (Alarm systems) | Verification test results demonstrate that the proposed Ingenia Elition and MR 7700 MR Systems meet the acceptance criteria and are adequate for the intended use. |
| Compliance with ISO 14971 (Application of risk management to medical devices) | Risk management activities show that all risks are sufficiently mitigated; new risks identified are mitigated to an acceptable level; and overall residual risk is acceptable. |
| Compliance with IEC 62366-1 (Application of usability engineering to medical devices) | Verification test results demonstrate that the proposed Ingenia Elition and MR 7700 MR Systems meet the acceptance criteria and are adequate for the intended use. |
| Compliance with IEC 62304 (Medical device software - Software life cycle processes) | Verification test results demonstrate that the proposed Ingenia Elition and MR 7700 MR Systems meet the acceptance criteria and are adequate for the intended use. |
| No significant changes to the essential performance and safety of the device compared to predicate. | Non-clinical verification tests performed with regards to requirement specifications and risk management demonstrate the device meets acceptance criteria and is adequate for intended use. Validation testing performed with primary and secondary predicate devices remains valid. |
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Not provided. The submission focuses on non-clinical verification tests against standards for hardware modifications, not performance on a specific dataset.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable/provided. No clinical study involving expert ground truth is described. The assessment is based on engineering verification and compliance with standards.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable/provided. No clinical study is described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No, not done. This submission is for minor hardware enhancements to an MR system, not for an AI-powered diagnostic aid, and therefore, no MRMC study or AI assistance evaluation is mentioned.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- No, not done. As mentioned, this is not an AI-powered device.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not applicable/provided. For the purpose of this submission, the "ground truth" for the device's acceptable performance is defined by compliance with established international and FDA-recognized consensus standards for medical electrical equipment and risk management.
8. The sample size for the training set
- Not applicable/provided. There is no mention of a training set as this is not an AI/machine learning device submission.
9. How the ground truth for the training set was established
- Not applicable/provided. There is no mention of a training set.
(243 days)
SmartCT assists physicians during vascular and non-vascular procedures, with diagnosis, treatment planning, interventional procedures and treatment follow-up by creating 3D views from sets of 2D images created during rotational acquisitions.
SmartCT provides high-speed and high-resolution 3D visualizations of vasculature, hemorrhages, soft tissue and bone structures.
SmartCT provides live image guidance for navigating endovascular structures anywhere in the body.
SmartCT helps to assess anatomy intra-procedurally, such as estimating a vessel or lesion size, diameter or volume, and anatomical distances between relevant structures.
SmartCT is intended to be used for human patients that have been elected for the procedures as described in the Indications for Use.
SmartCT is a 3D image visualization and analysis software product (Interventional Tool) intended to provide fast and high-resolution 3D visualization of vasculature, hemorrhages, soft tissue and bone structures, thereby helping the physician to identify pathologies and supporting the physician to define and plan the intervention strategy.
SmartCT runs on a software platform called the Interventional Workspot, and is intended to be used with a Philips Interventional X-Ray System.
SmartCT supports 3D Rotational Angiography (3DRA), Cone Beam CT (CBCT) and VasoCT acquisition protocols, and it includes 3D roadmap functionality. The CBCT and VasoCT protocols are only available for the 20" detector of the Interventional X-Ray system. The 3DRA protocols are available for all detectors.
The 3DRA protocols are available with SmartCT Angio, the CBCT protocols with SmartCT Soft Tissue and the VasoCT protocols with SmartCT Vaso. The 3D roadmap functionality (SmartCT Roadmap) comes in combination with SmartCT Angio, SmartCT Soft Tissue or SmartCT Vaso.
SmartCT includes filters to improve the image quality of the reconstruction by reducing the noise caused by metal objects or other objects that absorb high levels of X-ray radiation.
SmartCT provides overlays of live 2D fluoroscopic images with a 3D reconstruction of the vessel tree.
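The submission does not describe how this overlay is computed. As a rough illustration only, a 2D/3D roadmap overlay can be sketched as projecting the reconstructed vessel tree into the live fluoroscopy frame with a calibrated projection matrix; the function name and the pinhole-camera assumption below are hypothetical and not the Philips implementation.

```python
import numpy as np

def project_vessel_tree(points_3d: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Project 3D vessel-tree points into the 2D fluoroscopy image plane.

    points_3d: (N, 3) points of the reconstructed vessel tree in scanner coordinates.
    P: 3x4 projection matrix for the current C-arm pose (assumed pinhole model,
       calibrated elsewhere; the actual SmartCT registration is not described).
    Returns (N, 2) pixel coordinates that can be drawn over the live 2D frame.
    """
    homogeneous = np.hstack([points_3d, np.ones((points_3d.shape[0], 1))])
    projected = homogeneous @ P.T          # homogeneous image coordinates, shape (N, 3)
    return projected[:, :2] / projected[:, 2:3]
```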
SmartCT offers tools to manually measure sizes and volumes of anatomical structures such as lesions, aneurysms or vessels. It also offers a vessel analysis tool that provides semi-automatic measurements of the diameter of a segmented vessel.
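The summary does not detail the semi-automatic diameter measurement. A minimal sketch of one common approach, assuming a binary vessel segmentation and isotropic voxels, uses a Euclidean distance transform to approximate the local radius along the vessel; the function below is illustrative, not the SmartCT algorithm.

```python
import numpy as np
from scipy import ndimage

def estimate_vessel_diameter_mm(vessel_mask: np.ndarray, voxel_size_mm: float) -> float:
    """Rough diameter estimate of a segmented vessel from a binary 3D mask."""
    # Distance from every vessel voxel to the nearest background voxel (~ local radius).
    dist = ndimage.distance_transform_edt(vessel_mask) * voxel_size_mm
    # Take voxels where the distance is locally maximal as a crude centerline.
    centerline = (dist == ndimage.maximum_filter(dist, size=3)) & (vessel_mask > 0)
    radii = dist[centerline]
    return float(2.0 * radii.mean()) if radii.size else 0.0
```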
SmartCT can be controlled from both the control room and the examination room.
SmartCT provides workflow guidance to support the physician in the workflow of acquiring and processing 3D images.
The Philips Medical Systems SmartCT device did not require a clinical study to demonstrate substantial equivalence. Instead, equivalence was established through non-clinical performance testing and comparison to predicate devices. Therefore, the information provided below is derived from the non-clinical performance testing and usability validation described in the 510(k) summary.
Acceptance Criteria and Device Performance for SmartCT (K201583)
The provided documentation does not include a specific table of acceptance criteria with corresponding device performance metrics in quantitative terms for the SmartCT device. Instead, it describes general compliance with recognized standards and successful completion of various tests.
However, based on the non-clinical performance testing and validation activities, the acceptance criteria implicitly relate to:
- Compliance with recognized standards: The device must comply with a list of FDA-recognized consensus standards (e.g., IEC 62304 for software life cycle, IEC 62366-1 for usability, ISO 14971 for risk management, NEMA PS 3.1 - 3.20 DICOM).
- Functional and non-functional requirements: All requirements specified in the System Requirements Specification must be met.
- Safety risk control measures: All safety risk control measures from the Detailed Risk Management Matrix must be implemented.
- Privacy and Security requirements: All privacy and security requirements and mitigations must be implemented.
- Usability: The device must be safe and effective for the intended use, users (physicians and nurse/technicians), and use environment.
- Intended use and claims: The device must conform to its intended use, claims, and user needs, as demonstrated through simulated use and expert opinion.
Reported Device Performance (Implicit):
The documentation states that:
- "Results demonstrated that all executed verification tests were passed."
- "SmartCT was found to be safe and effective for the intended use, users and use environment." (From Usability validation)
- "Results demonstrated that all executed validation protocols were passed." (From Simulated use design validation)
- "Results demonstrated that these commercial claims and user needs were successfully validated." (From Clinical experience/expert opinion evaluation)
- "All these tests were used to support substantial equivalence of the subject device and demonstrate that SmartCT: complies with the afore mentioned international and FDA-recognized consensus standards, and meets the acceptance criteria and is adequate for its intended use."
Detailed Study Information (Based on Non-Clinical & Usability Validation):
1. Table of Acceptance Criteria and Reported Device Performance:

| Acceptance Criteria Category | Reported Device Performance |
|---|---|
| Regulatory Compliance | Complies with IEC 62304, IEC 62366-1, IEC 82304-1, ISO 14971, ISO 15223-1, UL 2900-1, NEMA PS 3.1-3.20, IEC 80001-1. |
| Functional & Non-Functional Requirements | All executed verification tests were passed to verify functional and non-functional requirements, performance, reliability, and safety. |
| Safety Risk Control | All safety risk control measures from the Detailed Risk Management Matrix were implemented; verification tests passed. |
| Privacy & Security | All privacy and security requirements and mitigations were implemented; verification tests passed. |
| Usability | Found to be safe and effective for the intended use, users (physicians, nurse/technicians), and use environment. Validation protocols in the form of a clinical workflow were passed by participants who fulfill the intended user profile. |
| Intended Use & Claims | Conforms to intended use, claims, and user needs. Commercial claims and user needs regarding clinical functionality were successfully validated by assessing clinical images. |
2. Sample sizes used for the test set and data provenance:
- Test Set Sample Size: The document does not specify a numerical sample size for "test sets" in the traditional sense of a clinical trial. Instead, it refers to:
- "Representative clinical users (both physicians and nurse/technicians)" for usability validation. The exact number is not provided.
- "Participants who fulfill the intended user profile" for simulated use validation. The exact number is not provided.
- "Clinical images" for expert opinion evaluation. The number of images is not provided.
- Data Provenance: The document does not specify country of origin for the data used in these non-clinical tests. It refers to a "simulated use environment" and "clinical images," implying retrospective or simulated data rather than prospective patient data.
3. Number of experts used to establish the ground truth for the test set and qualifications of those experts:
- Number of Experts: The document refers to "representative clinical users (both physicians and nurse/technicians)" for usability validation and suggests "expert opinion" for clinical experience evaluation. The exact number of experts is not quantified in the provided text.
- Qualifications of Experts: The experts were identified as "physicians and nurse/technicians." Specific experience levels (e.g., "radiologist with 10 years of experience") are not detailed.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- The document does not describe a formal adjudication method (like 2+1 or 3+1 consensus) for establishing ground truth or evaluating performance in the context of the described non-clinical tests. The tests focused on compliance with requirements, usability, and claims validation, often through successful completion of tasks or expert feedback, rather than a diagnostic accuracy assessment requiring multi-reader adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC Study: A multi-reader multi-case (MRMC) comparative effectiveness study was not performed. The 510(k) summary explicitly states that the proposed SmartCT "did not require clinical study since substantial equivalence... was demonstrated with the following attributes: Indication for use; Technological characteristics; Non-clinical performance testing; and Safety and effectiveness." The SmartCT device is described as "3D image visualization and analysis software" and an "Interventional Tool," suggesting it enhances existing capabilities rather than being a standalone diagnostic AI. Therefore, no effect size of human reader improvement with/without AI assistance is provided.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- The SmartCT device is designed as an "Interventional Tool" that "assists physicians" and "provides live image guidance for navigating endovascular structures." Its description emphasizes visualization, analysis, and guidance for physicians during procedures. While it includes "implementing algorithm on Vessel Analysis tool," the testing described (usability, simulated workflow) inherently involves human interaction. Therefore, a standalone (algorithm only) performance study in the absence of a human operator was not conducted or reported as such. The focus appears to be on human-in-the-loop performance and interaction.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the non-clinical validation activities, the "ground truth" was established through:
- Expert Opinion/Feedback: For Usability validation and Clinical experience evaluation, "representative clinical users (physicians and nurse/technicians)" and "experts" assessed the device's safety, effectiveness, and ability to meet claims/user needs with "clinical images."
- System Requirements Specification: For software verification testing, performance was measured against predefined functional and non-functional requirements.
- Industry Standards: Compliance with recognized international and FDA consensus standards served as a form of ground truth for regulatory and technical performance.
8. The sample size for the training set:
- The document does not provide information regarding a specific "training set" sample size. Given that the SmartCT is described as "3D image visualization and analysis software" that utilizes existing acquisition protocols (3DRA, CBCT, VasoCT) and includes "filters to improve the image quality of the reconstruction" and an "implementing algorithm on Vessel Analysis tool," it's possible that internal development and optimization (which might involve training data) occurred. However, the FDA submission focuses on validation and verification against established standards and predicate devices rather than the specifics of machine learning model training.
9. How the ground truth for the training set was established:
- As no information on a specific "training set" is provided, the method for establishing its ground truth is also not available in the supplied text.
(42 days)
The IQon Spectral CT is a Computed Tomography X-Ray System intended to produce cross-sectional images of the body by computer reconstruction of x-ray transmission data taken at different angles and planes. This device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.
The IQon Spectral CT system acquires one CT dataset composed of data from a higher-energy detected x-ray spectrum and a lower-energy detected x-ray spectrum. These detected x-ray spectra may be used to analyse the differences in the energy dependence of the attenuation coefficient of different materials. This allows for the generation of images at energies selected from the available spectrum and provides information on the composition of the body materials and/or contrast agents. Additionally, materials analysis provides for the quantification and graphical display of attenuation, material density, and effective atomic number.
This information may be used by a trained healthcare professional as a diagnostic tool for the visualization and analysis of anatomical and pathological structures and to be used for diagnostic imaging in radiology, interventional radiology, and cardiology and in oncology as part of treatment preparation and radiation therapy planning.
The system is also intended to be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer*.
The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.
*Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl. J Med 2011; 365:395-409) and subsequent literature, for further information.
The proposed IQon Spectral CT System is a whole-body computed tomography (CT) X-Ray System featuring a continuously rotating x-ray tube and detectors gantry and multi-slice capability. The acquired x-ray transmission data is reconstructed by computer into cross-sectional images of the body taken at different angles and planes. This device also includes signal analysis and display equipment; patient and equipment supports; components; and accessories. The proposed IQon Spectral CT System includes the detector array, which is identical to the currently marketed and predicate device – Philips IQon Spectral CT System (K163711).
The proposed IQon Spectral CT System consists of three main components, that are identical to the currently marketed and predicate device. Philips IQon Spectral CT System (K163711) - a scanner system that includes a rotating gantry, a movable patient couch, and an operator console for control and image reconstruction; a Spectral Reconstruction System; and a Spectral CT Viewer. On the gantry, the main active components are the x-ray high voltage (HV) power supply, the x-ray tube, and the detection system.
In addition to the above components and the software operating them, the proposed IQon Spectral CT System includes workstation hardware and software for data acquisition; image display, manipulation, storage, and filming, as well as post-processing for views other than the original axial images. Patient supports (positioning aids) are used to position the patient.
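The dual-energy principle described in the indications above (analysing the energy dependence of attenuation) can be illustrated with a basic two-material decomposition that solves a 2x2 linear system per voxel. The basis materials, coefficient values, and function below are illustrative assumptions, not Philips' spectral reconstruction.

```python
import numpy as np

# Illustrative mass attenuation coefficients (cm^2/g) of the basis materials at the
# effective energies of the low- and high-energy spectra (made-up values, not calibration data).
MU = np.array([[0.250, 0.180],   # water:  [low-energy, high-energy]
               [0.600, 0.300]])  # iodine: [low-energy, high-energy]

def decompose(mu_low: np.ndarray, mu_high: np.ndarray):
    """Per-voxel two-material decomposition from low/high-energy attenuation images.

    Solves, for every voxel:
        mu_low  = a_water * MU[0, 0] + a_iodine * MU[1, 0]
        mu_high = a_water * MU[0, 1] + a_iodine * MU[1, 1]
    """
    A = MU.T                                   # 2x2 system matrix
    b = np.stack([mu_low.ravel(), mu_high.ravel()])
    coeffs = np.linalg.solve(A, b)             # shape (2, n_voxels)
    a_water = coeffs[0].reshape(mu_low.shape)
    a_iodine = coeffs[1].reshape(mu_low.shape)
    return a_water, a_iodine
```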
The provided text is a 510(k) summary for the Philips IQon Spectral CT system. It states that the device is substantially equivalent to a previously cleared predicate device (K163711) and describes non-clinical performance and a "change of indication for use statement and minor modifications." Crucially, this document explicitly states, "The proposed IQon Spectral CT system did not require any external clinical study."
Therefore, many of the requested details regarding acceptance criteria for an AI/CADe device performance study, sample sizes for test sets, expert ground truth establishment, MRMC studies, and standalone performance tests are not applicable in this context. This 510(k) is for a CT scanner itself and highlights updates to its indications for use and minor modifications, not for a new AI/CADe algorithm requiring specific clinical performance evaluation as described in the prompt.
However, based on the information provided, I can infer the "acceptance criteria" for the device itself and how the non-clinical performance demonstrates it meets those criteria, as detailed in the document.
Here's an interpretation based on the provided text, addressing the prompt as best as possible given the nature of the submission (a CT scanner, not an AI model):
1. A table of acceptance criteria and the reported device performance
Since this is a submission for a CT scanner and not an AI/CADe device with specific performance metrics like sensitivity/specificity for disease detection, the "acceptance criteria" relate to safety, effectiveness, and compliance with standards.
| Acceptance Criteria | Reported Device Performance (Summary from text) |
|---|---|
| Compliance with International and FDA Recognized Consensus Standards | Non-clinical performance testing demonstrates compliance with: - IEC 60601-1:2005 (Third Edition) + CORR. 1:2006 + CORR. 2:2007 + A1:2012 - IEC 60601-1-2:2014 - IEC 60601-1-3:2008+A1:2013 - IEC 60601-1-6:2010 +A1: 2013 - IEC 60601-2-44:2009/AMD2:2016 - IEC 62304:2006 + A1: 2015 - ISO 10993-1:2009/Cor.1:2010 - ISO 14971 2nd Edition. |
| Compliance with Device Specific Guidance Documents | Non-clinical performance testing demonstrates compliance with: - Guidance for Industry and FDA Staff - Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" (May 11, 2005) - Content of Premarket Submissions for Management of Cybersecurity in Medical Devices (October 2, 2014). |
| Meeting established system and sub-system level design input requirements | Design Verification planning and testing was conducted at sub-system and system levels; activities demonstrate system/sub-systems meet requirements. |
| Image Quality Verification | Included in Design Verification; "Sample clinical images were provided... reviewed and evaluated by certified radiologists. All images were evaluated to have good image quality." |
| Risk Analysis and Mitigation | Risk analysis risk mitigation testing included in Design Verification. Traceability Matrix links requirements, hazard mitigations, and test protocols. |
| Usability and Clinical Workflow Validation for intended use and commercial claims | Non-Clinical design validation testing covered intended use and commercial claims as well as usability testing with representative intended users, including clinical workflow validation and service validation. |
| Demonstration of Substantial Equivalence to Predicate Device (K163711) in Safety and Effectiveness | Demonstrated through: Indication for use (updated statement not introducing new risk), Technological characteristics (identical fundamental scientific technology), Non-clinical performance testing (compliance with standards), and Safety and effectiveness (as safe/effective as predicate). |
2. Sample size used for the test set and the data provenance
- Test Set Sample Size: Not explicitly stated as a number of patient cases or images, as this was a non-clinical evaluation focusing on system performance and compliance, not a clinical trial for an AI/CADe's diagnostic accuracy. The document mentions "Sample clinical images were provided," but not the quantity, provenance, or whether they constituted a standardized "test set" in the sense of an algorithm performance evaluation.
- Data Provenance: Not specified. Given it's a CT scanner, images would likely be from existing clinical data or phantom studies. The document mentions "Sample clinical images were provided," but doesn't detail their origin (e.g., country, retrospective/prospective).
- Retrospective or Prospective: Unspecified, but likely retrospective for image evaluation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not specified.
- Qualifications of Experts: "Certified radiologists" were used to evaluate image quality. No further details on experience or specialization are provided within this document.
- Establishment of Ground Truth: For image quality, the radiologists' evaluation of "good image quality" served as the assessment. For the system's overall safety and effectiveness, compliance with standards and internal testing served as the primary proof, rather than establishing a diagnostic "ground truth" for disease as would be needed for an AI algorithm.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable as there was no formal "test set" in the context of an AI/CADe performance study requiring ground truth adjudication. The radiologists' image quality evaluation method is not detailed.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
No. The document explicitly states: "The proposed IQon Spectral CT system did not require any external clinical study." Therefore, no MRMC study comparing human readers with and without AI assistance was performed or reported here.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Not applicable. This is for the CT scanner hardware/software system, not a standalone AI algorithm. The image reconstruction and analysis features like "Electron Density" and "Calcium Suppression Index" are integrated capabilities of the CT system, not separate AI algorithms undergoing standalone performance evaluation for diagnostic accuracy.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For image quality, the "ground truth" was based on the qualitative assessment of "good image quality" by certified radiologists. For system compliance, the "ground truth" was the defined requirements of international standards and internal specifications, tested through verification and validation activities. No pathology or outcomes data ground truth for disease diagnosis was required or used in this submission as it's not for an AI diagnostic aid.
8. The sample size for the training set
Not applicable. This document describes a CT scanner system, not a machine learning model that requires a training set. The descriptions of "Electron Density" and "Calcium Suppression Index" involve dedicated algorithms, but no details of training data for these are provided, nor would they typically be detailed in this type of submission for established image processing techniques in a CT scanner.
9. How the ground truth for the training set was established
Not applicable, as there is no mention of a training set for an AI model.
(147 days)
O-MAR is a combination of an acquisition technique and post-processing software intended for use on Achieva and Ingenia, 1.5T & 3T MR Systems. O-MAR is suitable for use on all patients with passive MR Conditional orthopedic implants that are scanned according to the conditions of safe use for the specific MR Conditional implant being scanned. In addition, O-MAR is suitable for use on patients without implants that are cleared for MR exams. O-MAR helps reduce artifacts caused by the presence of metal in both in-plane and through-plane dimensions compared to conventional MR imaging techniques. Thus O-MAR improves visualization of more tissue in the vicinity of MR Conditional orthopedic implants. When interpreted by a trained physician, images generated by O-MAR provide information that can be useful in determining a diagnosis.
The O-MAR feature has two components: the SEMAC+VAT feature and the MARS+VAT feature. SEMAC+VAT is a Turbo Spin Echo method in combination with VAT (View Angle Tilting) and with multiple z-encodings per excited slice (aka SEMAC) to reduce in-plane and through-plane distortions caused by magnetic field inhomogeneities. MARS is a high-bandwidth TSE sequence; MARS+VAT can also be referred to as high-bandwidth TSE+VAT.
A difference between SEMAC+VAT and MARS+VAT is that SEMAC+VAT provides both in-plane and through-plane artifact reduction, while MARS+VAT only provides in-plane artifact reduction. SEMAC uses a slice-selective TSE acquisition. Multiple z-encodings per excited slice are used to recover off-resonant signal caused by magnetic field inhomogeneities. The output image for each slice represents a combination of the signal acquired at different off-resonant frequencies. SEMAC takes care of corrections in the through-slice direction. The VAT (View Angle Tilting) technique is used to reduce in-plane distortions. For this, the gradient applied during slice selection is reapplied during the signal readout.
The feature consists of:
- A specific imaging sequence based on multiple overlapping 3D volumes, where each 3D volume aims at capturing the different frequencies caused by the distortion.
- A new calculation function to combine different-frequency MR signals into a single undistorted slice (see the sketch after this list).
- A TSE-based SENSE reference scan, which is more robust towards the metal distortions than standard FFE reference scans.
- VAT gradient control in sequences.
- VAT is combined with SEMAC or MARS. MARS (Metal artifact reduction sequence) is a slice-selective, high-bandwidth TSE sequence which can be achieved with standard settings of the TSE sequences.
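The calculation function referenced in the list above is not specified in the summary. As a minimal sketch, assuming one complex partial image per resolved SEMAC bin, a sum-of-squares combination across the z-encoding bins yields a single corrected slice; this is illustrative only and not the Philips combination function.

```python
import numpy as np

def combine_semac_bins(bin_images: np.ndarray) -> np.ndarray:
    """Merge SEMAC z-encoding bins into one corrected slice.

    bin_images: complex array of shape (n_bins, ny, nx) holding the partial image
    acquired at each resolved off-resonance/through-plane bin for one excited slice.
    """
    # Sum-of-squares combination of the bin magnitudes.
    return np.sqrt(np.sum(np.abs(bin_images) ** 2, axis=0))
```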
The provided text contains information about the Philips Medical Systems O-MAR device (K143253), which is intended to reduce artifacts caused by metal implants in MRI scans. However, it does not explicitly state acceptance criteria in a quantitative table format nor does it provide a formal study comparing the device against these specific acceptance criteria in the way a structured clinical trial report would.
Instead, the document describes the validation process and findings which implicitly demonstrate that the device meets its stated purpose of improving image quality around metal implants.
Here's an attempt to extract and infer the requested information based on the available text:
Acceptance Criteria and Device Performance
Note: The document does not provide a formal table of quantitative acceptance criteria. The "reported device performance" is inferred from the conclusions of the validation studies described.
| Acceptance Criterion (Inferred from Validation Goals) | Reported Device Performance (from studies) |
|---|---|
| Reduced Artifact Size | All testing (phantom and clinical) showed "better artifact reduction" using either MARS+VAT or SEMAC+VAT scans versus high bandwidth TSE scans. |
| Improved Tissue Visualization | Clinical validation showed "improved tissue visualization" in all test areas (knee, hip, lumbar spine implant) with either MARS+VAT or SEMAC+VAT scans compared to high bandwidth TSE scans. A board-certified radiologist confirmed "more tissue visualization" with O-MAR. |
| Safe and Effective Operation | Nonclinical and clinical tests demonstrated the device is safe and works according to its intended use. No product defects or new hazards were identified during validation. The device functioned correctly, examcards loaded, Sense reference scans could be added, and all scans ran properly. Appropriately warned for individuals with implants. |
Detailed Study Information
2. Sample size used for the test set and the data provenance:
- Test Set Sample Size (Clinical): The text mentions "volunteers that had a knee, hip, or lumbar spine implant." The exact number of volunteers is not specified.
- Test Set Sample Size (Non-Clinical/Phantom): Phantoms containing "three total hip implants with varying materials, one total knee implant and two spine implants (screws and fixation rod, and screws and fixation plate)" were used.
- Data Provenance: The studies were conducted internally by Philips Medical Systems Nederland B.V. The country of origin of the data is not explicitly stated beyond the company's location in "The Netherlands." The studies appear to be prospective, specifically designed to validate the O-MAR feature.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: "A board certified radiologist" (singular) confirmed the findings in the "External Image Evaluation report DHF229248."
- Qualifications: "Board certified radiologist." Specific years of experience are not mentioned.
4. Adjudication method for the test set:
- The text only mentions a single "board certified radiologist" confirming the findings in the external image evaluation. This implies a single-reader assessment rather than a multi-reader adjudication method (like 2+1 or 3+1). For the internal validation, the method for assessing "improved tissue visualization" and "reduced artifacts" is not detailed, but it suggests an internal comparison without external adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study is not explicitly mentioned. The document describes a comparison between images generated by O-MAR (MARS+VAT or SEMAC+VAT) and conventional high bandwidth TSE scans, assessed by a single board-certified radiologist. The focus is on the improvement in image quality directly attributable to the O-MAR technique, not on how human readers' diagnostic accuracy changes with vs without AI assistance in the diagnostic workflow. The O-MAR device is characterized as an image acquisition and post-processing software, not an AI-assisted diagnostic tool.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, implicitly. The core of the validation involves comparing the output images of the O-MAR algorithm (MARS+VAT or SEMAC+VAT) to images from conventional high bandwidth TSE scans. The "artifact reduction" and "tissue visualization" are inherent properties of the generated images, representing the algorithm's standalone performance in processing MR signals to produce improved images. The radiologist's review then confirmed this standalone improvement in image quality.
7. The type of ground truth used:
- Expert Consensus/Subjective Assessment: The ground truth for image improvements (reduced artifacts, improved tissue visualization) was established through subjective comparison and assessment by technical evaluators and a "board certified radiologist." For the phantom studies, the reduction of artifact size was also a quantifiable comparison against a baseline. There's no mention of pathology or clinical outcomes data being used as ground truth for this device's validation.
8. The sample size for the training set:
- The document does not specify a training set size. As O-MAR appears to be a rule-based image acquisition/reconstruction technique rather than a machine learning/AI model that requires training, the concept of a "training set" in the context of machine learning might not directly apply here. It's possible the algorithms were developed and refined using internal data, but this is not detailed as a formal "training set."
9. How the ground truth for the training set was established:
- As no training set is described (see point 8), there is no information on how its ground truth was established.