Search Results
Found 4 results
510(k) Data Aggregation
(189 days)
The MC2 Portable X-ray System is indicated for use by qualified/trained medical professionals on adult patients for orthopedic radiographic, orthopedic serial radiographic, orthopedic fluoroscopic, and orthopedic interventional procedures of extremities distal to the shoulders and distal to the knees. The device is not intended for use during surgery. The device is not intended to replace a stationary radiographic system.
The device is to be used in healthcare facilities where qualified operators are present (e.g., outpatient clinics, urgent cares, imaging centers, sports medicine facilities, occupational medicine clinics).
The device is not intended to be used in environments with the following characteristics:
- Aseptic or sterile fields, such as in surgery
- Home or residential settings or other settings where qualified operators are not present
- Vehicular and moving environments
- Environments under direct sunlight
- Oxygen-rich environments, such as near an operating oxygen concentrator
The MC2 Portable X-ray System ("MC2 System" or "MC2") is a portable and handheld X-ray system designed to aid clinicians with point-of-care visualization through diagnostic X-rays of extremities distal to the shoulders and distal to the knees. The device allows clinicians to select desired technique factors best suited for their patient's anatomy. The MC2 consists of two major system components: the emitter and the cassette. The MC2 emitter and cassette are battery-powered and are charged via a wired charger. The system is intended to interface wirelessly to an external tablet when used with the OXOS Device App or to a monitor with an off-the-shelf ELO Backpack and the OXOS Device App. The MC2 utilizes an Infrared Tracking System to allow the emitter to be positioned above the patient's anatomy and aligned to the cassette by the operator. The MC2 also utilizes a LIDAR system to ensure patient safety by maintaining a safe source-to-skin distance.
The MC2 is capable of three X-ray imaging modes: single radiography, serial radiography, and fluoroscopy. In single and serial radiography modes, the user can utilize the entire range of kV values (40-80kV), while fluoroscopy mode is limited to 40-64kV. In single radiography mode, the user can utilize the entire range of mAs values, while serial radiography and fluoroscopy are limited to 0.04-0.08 mAs. Single radiography acquisitions may be performed handheld, while serial radiography and fluoroscopy require the emitter to be in a stand-mounted configuration.
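The mode-dependent limits above amount to simple range checks on the selected technique factors. Below is a minimal sketch of that kind of check, using only the kV and mAs bounds quoted in this summary; the full mAs range for single radiography is not stated, so it is left as a placeholder, and the code is illustrative rather than a representation of the MC2's actual software.

```python
# Hypothetical range check for MC2 technique factors, using only the limits
# quoted in the 510(k) summary. Illustrative only; not OXOS's implementation.

KV_LIMITS = {
    "single_radiography": (40.0, 80.0),   # full kV range
    "serial_radiography": (40.0, 80.0),
    "fluoroscopy": (40.0, 64.0),          # fluoroscopy is capped at 64 kV
}

MAS_LIMITS = {
    # The summary does not quote the full mAs range for single radiography,
    # so it is left unbounded here (a placeholder, not a specification).
    "single_radiography": (None, None),
    "serial_radiography": (0.04, 0.08),
    "fluoroscopy": (0.04, 0.08),
}

def technique_allowed(mode: str, kv: float, mas: float) -> bool:
    """Return True if (kv, mas) falls inside the quoted limits for the mode."""
    kv_lo, kv_hi = KV_LIMITS[mode]
    mas_lo, mas_hi = MAS_LIMITS[mode]
    kv_ok = kv_lo <= kv <= kv_hi
    mas_ok = True if mas_lo is None else (mas_lo <= mas <= mas_hi)
    return kv_ok and mas_ok

print(technique_allowed("fluoroscopy", kv=70, mas=0.05))          # False: above the 64 kV cap
print(technique_allowed("serial_radiography", kv=70, mas=0.05))   # True
```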
The MC2 contains various safety features to ensure patient and operator safety. The primary interlocks that ensure system geometry is maintained include a source-to-image distance interlock, an active area interlock, a source-to-skin distance interlock, and a stand-mounted interlock.
The source-to-image distance interlock uses the Tracking System to disallow X-ray acquisition when the device is outside the bounds of source-to-image distance (SID). This acts concurrently with the source-to-skin distance (SSD) interlock, which uses the LIDAR system to disallow X-ray acquisition below a 30 cm source-to-skin distance. Both conditions must be met for X-ray acquisition to be allowed. The active area interlock uses the Tracking System to prevent the X-ray field from extending beyond the bounds of the active area. The stand-mounted interlock prevents handheld X-ray acquisition in serial radiography and fluoroscopy modes.
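To make the interlock logic concrete, here is a minimal sketch of how the four conditions described above could combine into a single go/no-go decision. The 30 cm SSD floor is taken from the summary; the data structure and function names are assumptions and do not represent the device's actual software.

```python
# Hypothetical combination of the MC2 interlocks described in the summary:
# SID within bounds (Tracking System), SSD >= 30 cm (LIDAR), X-ray field within
# the cassette's active area, and a stand requirement for serial radiography and
# fluoroscopy. Names and structure are illustrative, not OXOS's implementation.

from dataclasses import dataclass

MIN_SSD_CM = 30.0  # source-to-skin distance floor enforced via LIDAR

@dataclass
class SystemState:
    sid_within_bounds: bool          # from the infrared Tracking System
    ssd_cm: float                    # from the LIDAR system
    field_within_active_area: bool   # from the Tracking System
    stand_mounted: bool
    mode: str                        # "single_radiography", "serial_radiography", "fluoroscopy"

def acquisition_allowed(state: SystemState) -> bool:
    """All interlocks must be satisfied concurrently to permit X-ray acquisition."""
    geometry_ok = (
        state.sid_within_bounds
        and state.ssd_cm >= MIN_SSD_CM
        and state.field_within_active_area
    )
    # Serial radiography and fluoroscopy are blocked unless stand-mounted.
    stand_ok = state.stand_mounted or state.mode == "single_radiography"
    return geometry_ok and stand_ok

print(acquisition_allowed(SystemState(True, 35.0, True, False, "fluoroscopy")))  # False: handheld fluoroscopy
```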
In addition to these components, the MC2 includes accessories such as a clinical cart and a wireless foot pedal; the foot pedal should be used for stand-mounted imaging when initiating single, serial, or fluoroscopic acquisitions remotely. The clinical cart supports the MC2 for stand-mounted operation and allows the user to position anatomy easily. An accessory stand such as the clinical cart is required to facilitate stand-mounted imaging modes. Radiography and Photo modes may be used without a stand.
The provided text details the regulatory clearance of the OXOS Medical, Inc. MC2 Portable X-ray System (K241567) but does not contain specific acceptance criteria or the detailed results of a study proving the device meets those criteria. The document states that a comprehensive, task-based image quality study was conducted and that five radiologists clinically evaluated the image quality, but it does not provide the quantitative results, acceptance thresholds, or statistical analyses from this study.
Therefore, much of the requested information cannot be extracted from the given text.
Based on the provided document, here's what can be inferred or explicitly stated:
1. A table of acceptance criteria and the reported device performance
This information is not provided in the document. The document states:
- "All components of the MC2, including software, were verified and validated to demonstrate compliance with the appropriate regulations and in accordance with the risk profile analysis." (Page 10)
- "All forms of testing showed the MC2 to be compliant with the relevant standards and safe and effective in the procedures and scenarios outlined in the Indications for Use." (Page 10)
- "The MC2 Portable X-ray System met bench testing acceptance criteria as defined in the test protocols." (Page 10)
However, the specific "acceptance criteria as defined in the test protocols" and the actual "reported device performance" against these criteria are not detailed.
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: Not specified. The document only mentions that "a comprehensive, task-based image quality study was conducted to assess the clinical adequacy of the device's imaging performance" and "Radiologic technologists acquired images in all acquisition modes." (Page 11)
- Data Provenance (country of origin, retrospective/prospective): Not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: "five radiologists clinically evaluated the image quality." (Page 11)
- Qualifications of Experts: Only "radiologists" are mentioned; specific qualifications such as years of experience are not provided.
4. Adjudication method for the test set
- Adjudication Method: Not specified. It is mentioned that "five radiologists clinically evaluated the image quality," but there is no description of how their evaluations were combined or adjudicated (e.g., 2+1, 3+1, none).
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance
- MRMC Study Done: Not in the comparative effectiveness sense. A "comprehensive, task-based image quality study" was performed in which "five radiologists clinically evaluated the image quality," but this appears to be an assessment of the device's imaging performance for clinical adequacy, not a direct comparative effectiveness study of human readers with vs. without AI assistance.
- Effect Size of Human Readers Improvement: Not discussed or measured, as the study described is not focused on AI assistance to human readers. The MC2 appears to be an X-ray system, not an AI diagnostic tool.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
- The MC2 is an X-ray system, not an algorithm. Therefore, "standalone (algorithm only)" performance metrics are not applicable in the typical sense for an AI model. The performance testing section refers to the system's overall compliance and image quality.
7. The type of ground truth used
- Type of Ground Truth: The "clinical evaluation of image quality" by radiologists suggests that the ground truth for image quality assessment was based on expert consensus/evaluation of the images themselves for their "clinical adequacy." It doesn't explicitly state if pathology or outcomes data were used in establishing this ground truth.
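The document does not say how the five radiologists' assessments were combined (see item 4 above). One common way to turn per-reader evaluations into a per-image adequacy call is a simple threshold on the pooled scores; the sketch below is a hypothetical illustration of that idea, not the method used in this study.

```python
# Hypothetical aggregation of per-reader image-quality ratings into a
# per-image "clinically adequate" call. The rating scale, threshold, and
# data are assumptions; the submission does not describe the actual method.

from statistics import median

# ratings[image_id] = scores from the five radiologists (assumed 1-5 scale)
ratings = {
    "wrist_ap_001": [4, 5, 4, 4, 5],
    "ankle_lat_002": [3, 2, 3, 3, 2],
}

ADEQUACY_THRESHOLD = 4  # assumed: median rating >= 4 counts as clinically adequate

def clinically_adequate(scores: list[int]) -> bool:
    return median(scores) >= ADEQUACY_THRESHOLD

for image_id, scores in ratings.items():
    print(image_id, "adequate" if clinically_adequate(scores) else "not adequate")
```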
8. The sample size for the training set
- The document describes the MC2 Portable X-ray System as a medical device, specifically an X-ray system, not an AI/ML algorithm that requires a "training set" in the common sense. Therefore, information about a "training set sample size" is not applicable to this device's description.
9. How the ground truth for the training set was established
- As mentioned above, the device is an X-ray system, not a machine learning model. Thus, the concept of a "training set" and its "ground truth establishment" is not relevant to the information provided about this particular device.
(165 days)
The Micro C Medical Imaging System, M01 is a handheld and portable general purpose X-ray system that is indicated for use by qualified/trained clinicians on adult and pediatric patients for taking diagnostic static and serial radiographic exposures of extremities. The device is not intended to replace a radiographic system that has both variable tube current and voltages (kVp) in the range that may be required for full optimization of image quality and radiation exposure for different exam types.
The Micro C Medical Imaging System, M01 (subject device) is a handheld X-ray system designed to aid clinicians with point-of-care visualization through diagnostic X-rays of distal extremities. The device allows a clinician to select desired technique factors best suited for their patient anatomy. The Micro C Medical Imaging System, M01 consists of three major subsystems: the Emitter, Cassette, and Control Unit. The System is intended to interface with an external Monitor (touchscreen or non-touchscreen display), a keyboard, and a mouse, and can provide a remote operator interface over the network to a laptop. The Micro C Medical Imaging System, M01 utilizes a computer vision positioning system to allow the emitter to be positioned above the patient anatomy and aligned to the cassette by the operator. The device is used in a clinical environment.
The provided document describes the Micro C Medical Imaging System, M01, and updates to its software to include AiLARA (Artificial Intelligence-based Algorithm for Radiography) modes. The primary focus of the document regarding acceptance criteria and performance relates to the validation of this new software feature.
Here's a breakdown of the requested information:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state quantitative acceptance criteria for the AiLARA algorithm's performance in terms of image quality or diagnostic accuracy using specific metrics like sensitivity, specificity, or AUC. Instead, the acceptance criteria seem to be qualitative and focused on the algorithm's learning trend, software requirement fulfillment, diagnostic relevance of images, and radiation dose limits.
| Acceptance Criteria Category | Description of Acceptance Criteria (Inferred) | Reported Device Performance |
| --- | --- | --- |
| AiLARA Algorithm Verification | The model should learn the trend of the training dataset (truth). Mean Squared Error (MSE) and Mean Absolute Error (MAE) for both training and testing datasets should indicate no further benefit from additional training (epochs). | The model was able to learn the trend of the training dataset (truth). The mean squared error of the training and verification testing datasets was plotted, and the trend lines showed that the model had learned the general trend present in the data. Both training and testing Mean Squared Error and Mean Absolute Error showed that additional training (epochs) would have no added benefit. |
| Software Verification | The updated Micro C software should meet system-level software requirements, and software outputs should meet expected results. | Software outputs met the expected result in all cases, with no anomalies found. |
| Image Quality Validation | Images generated with AiLARA modes should be diagnostically and clinically relevant when reviewed by board-certified radiologists and an orthopedic surgeon. | All images were determined to be diagnostically and clinically relevant. |
| Radiation Dose Testing | AiLARA mode's radiation outputs should be below established Diagnostic Reference Levels (DRLs) and not statistically different from the predicate device's manual mode dose outputs for comparable techniques (see the sketch after this table). | All AiLARA dose values were below the established Diagnostic Reference Levels (DRLs), and there was no statistical difference between AiLARA and Manual mode calculated entrance skin exposure doses. |
| Radiation Dose Testing on Small/Pediatric Anatomies | AiLARA's radiation outputs for small-size extremity anatomies should be below DRLs and consistent across various emitter orientations and small-anatomy thicknesses. | All AiLARA dose values were below the established Diagnostic Reference Levels (DRLs) for small-size anatomies, and doses were similar among captures for each orientation within the same target thickness and SID category. |
| Usability Evaluation | All critical use tasks for the AiLARA modes should be completed with a passing result by 100% of participants. | All the identified critical use tasks were completed with a passing result by 100% of participants. The usability evaluation was performed in accordance with IEC 62366-1:2020 and FDA guidance. |
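The dose comparison row above reports "no statistical difference" between AiLARA and Manual mode entrance skin exposure doses, but the submission does not name the statistical method used. Purely as an illustration of how such a comparison might be run, the sketch below applies a two-sample Welch t-test to made-up dose values; the data, significance level, and choice of test are assumptions, not details from the document.

```python
# Illustrative comparison of entrance skin exposure (ESE) doses between an
# automatic (AiLARA-style) mode and a manual mode. The dose values are
# fabricated; the submission reports neither raw data nor the test it used.

import numpy as np
from scipy import stats

ailara_ese_mGy = np.array([0.12, 0.11, 0.13, 0.12, 0.10, 0.12])  # hypothetical
manual_ese_mGy = np.array([0.11, 0.13, 0.12, 0.12, 0.11, 0.13])  # hypothetical

t_stat, p_value = stats.ttest_ind(ailara_ese_mGy, manual_ese_mGy, equal_var=False)
print(f"Welch t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value >= 0.05:
    print("No statistically significant difference at alpha = 0.05")
```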
2. Sample size used for the test set and the data provenance
- Test Set Sample Size: The AiLARA algorithm's full development dataset was split, with 20% of the data used as the testing set for algorithm verification.
- Data Provenance: The document does not specify the country of origin. The data used for algorithm verification was part of the "full development dataset" of AiLARA. For the "Image Quality Validation Study," the validation set was collected after the algorithm was frozen and transferred to the device, using phantoms at different emitter orientations and angles. The phantom types included ankle, elbow, hand, foot, knee, toe, and wrist. The study appears to be prospective in the sense that images were collected specifically for validation after the algorithm's finalization.
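The submission describes holding out 20% of the development dataset for verification and checking that training and testing MSE/MAE trends show no added benefit from further epochs. The sketch below is a rough illustration of that kind of workflow under stated assumptions (fabricated data, a trivial stand-in predictor, and an arbitrary convergence tolerance); it is not the actual AiLARA pipeline.

```python
# Illustrative 80/20 split plus a convergence check, mirroring at a high level
# the verification approach described for AiLARA. Data and model are stand-ins.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # stand-in acquisition features
y = rng.normal(size=500)        # stand-in technique-factor targets

# 80/20 split of the development dataset, as described in the submission.
idx = rng.permutation(len(X))
split = int(0.8 * len(X))
train_idx, test_idx = idx[:split], idx[split:]

# A trivial "model" (predict the training mean) just to show where test-set
# MSE and MAE would be computed; the real AiLARA model is not described.
y_pred = np.full(len(test_idx), y[train_idx].mean())
mse = float(np.mean((y[test_idx] - y_pred) ** 2))
mae = float(np.mean(np.abs(y[test_idx] - y_pred)))
print(f"test MSE = {mse:.3f}, test MAE = {mae:.3f}")

def no_added_benefit(val_mse_per_epoch, tol=1e-4, patience=3):
    """True if validation MSE improved by less than `tol` over the last `patience` epochs."""
    recent = np.diff(val_mse_per_epoch[-(patience + 1):])
    return len(recent) == patience and bool(np.all(-recent < tol))

# Fabricated, flattening validation-MSE curve: further epochs add essentially nothing.
val_mse = [0.90, 0.55, 0.41, 0.380, 0.37995, 0.37992, 0.37991]
print(no_added_benefit(val_mse))  # True
```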
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- For the Image Quality Validation Study, images were reviewed and rated by board certified radiologists and an orthopedic surgeon. The exact number of experts is not specified. Their qualifications are stated as "board certified" in their respective fields.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- The document does not describe a specific adjudication method (like 2+1 or 3+1) for establishing the ground truth or evaluating the test set images. It states that experts "reviewed and rated" the images.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- No MRMC comparative effectiveness study was done to evaluate human readers' improvement with AI assistance. The studies performed were primarily focused on the standalone performance and safety of the AiLARA algorithm and the updated device.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
- Yes, a standalone evaluation of the AiLARA algorithm was performed. The "AiLARA Algorithm Verification" specifically describes training the model and then sending the full testing set into the model to predict its performance on unseen data, which is a standalone assessment. The "Image Quality Validation Study" and "Radiation Dose Testing" also evaluate the device's output (images and dose) when using the AiLARA modes, which operate without human input on technique factors during image acquisition (the algorithm "determines and recommends a power setting and an exposure time for the X-ray without user input").
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- For the "AiLARA Algorithm Verification," the ground truth was referred to as the "truth" which the model was trained to learn the trend of. Given the context of determining optimal technique factors, this "truth" likely relates to ideal exposure parameters for specific anatomies and views.
- For the "Image Quality Validation Study," the ground truth for evaluating image quality was the assessment by board-certified radiologists and an orthopedic surgeon that images were "diagnostically and clinically relevant." This can be considered a form of expert consensus or subjective expert assessment of image utility.
8. The sample size for the training set
- 80% of AiLARA's full development dataset was used for the training set. The total size of the "full development dataset" is not specified.
9. How the ground truth for the training set was established
- The document states that the AiLARA model was trained to "learn the trend of the training dataset (truth)." It implies that the ground truth for the training set consisted of the correct or desired power settings and exposure times for various phantom anatomies and orientations. The specific methodology for establishing this "truth" (e.g., manual expert selection, physical measurements, reference images) for the training data is not detailed but is implicitly linked to generating "a clinically relevant image" while "reduc[ing] overexposures."
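To make the preceding point concrete, a training example for this kind of technique-selection model would plausibly pair an acquisition context with the technique factors (power setting and exposure time) judged to have produced a clinically relevant image. The record layout below is purely hypothetical; the submission does not describe AiLARA's actual data schema, and all field names and values are assumptions.

```python
# Hypothetical shape of an AiLARA-style training record: inputs describe the
# acquisition context; the "truth" is the technique that yielded a clinically
# relevant image without overexposure. Fields and values are illustrative only.

from dataclasses import dataclass

@dataclass
class TechniqueExample:
    anatomy: str             # e.g., "wrist", "ankle" (phantom types listed in the summary)
    thickness_cm: float      # target anatomy/phantom thickness
    orientation_deg: float   # emitter orientation relative to the cassette
    sid_cm: float            # source-to-image distance
    # Ground truth the model is trained to learn:
    power_setting: float     # e.g., tube voltage in kVp (assumed unit)
    exposure_time_s: float

example = TechniqueExample(anatomy="wrist", thickness_cm=4.5, orientation_deg=0.0,
                           sid_cm=60.0, power_setting=60.0, exposure_time_s=0.02)
print(example)
```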
(23 days)
The Micro C Medical Imaging System, M01 is a handheld and portable general purpose X-ray system that is indicated for use by qualified/trained clinicians on adult and pediatric patients for taking diagnostic static and serial radiographic exposures of extremities. The device is not intended to replace a radiographic system that has both variable tube current and voltages (kVp) in the range that may be required for full optimization of image quality and radiation exposure for different exam types.
The Micro C Medical Imaging System, M01 (subject device) is a handheld X-ray system designed to aid clinicians with point-of-care visualization through diagnostic X-rays of distal extremities. The device allows a clinician to select desired technique factors best suited for their patient anatomy. The Micro C Medical Imaging System, M01 consists of three major subsystems: the Emitter, Cassette, and Control Unit. The System is intended to interface with an external Monitor (touchscreen or non-touchscreen display), a keyboard, and a mouse, and can provide a remote operator interface over the network to a laptop. The Micro C Medical Imaging System, M01 utilizes a computer vision positioning system to allow the emitter to be positioned above the patient anatomy and aligned to the cassette by the operator. The device is used in a clinical environment.
The provided text describes a 510(k) premarket notification for the OXOS Medical Micro C Medical Imaging System, M01. This submission aims to expand the indications for use to include pediatric patients and surgical settings. The document heavily relies on the substantial equivalence to a previously cleared predicate device (also Micro C Medical Imaging System, M01, K203658).
However, the document does not include detailed acceptance criteria or a study that rigorously proves the device meets specific performance criteria in a way that would be typical for an AI/CADe device. The focus is on demonstrating that the expanded indications (pediatric use and surgical use) do not raise new questions of safety or effectiveness compared to the predicate device. The information provided is primarily a comparison between the subject device and its predicate, highlighting that the core technological characteristics are identical.
Therefore, many of the requested details about acceptance criteria, specific performance metrics, sample sizes, expert ground truth establishment, adjudication methods, MRMC studies, and standalone AI performance are not present in the provided text, as this is a submission for an imaging device, not an AI/CADe device.
Based on the provided text, here's the information that can be extracted or inferred:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state quantitative acceptance criteria or detailed performance metrics in the way one would expect for an AI/CADe device (e.g., sensitivity, specificity, AUC). Instead, the "performance" discussed relates to the device's fundamental imaging capabilities and its intended use, established by its substantial equivalence to the predicate. The crucial claim in the document is that the updated indications for pediatric and surgical use are acceptable, as demonstrated by "performance data" that do not raise different questions of safety and effectiveness.
| Acceptance Criteria (Inferred from equivalence to predicate and non-clinical testing) | Reported Device Performance (from text) |
| --- | --- |
| Image quality for diagnostic static and serial radiographic exposures of extremities (comparable to predicate K203658) | Device is "identical" to predicate device K203658 in technological characteristics like detector, resolution, DQE, MTF. |
| Safe and effective for adult patients (established by predicate K203658) | Established as substantially equivalent. |
| Safe and effective for pediatric patients (new indication) | Supported by "performance data" related to the addition of the pediatric population, implying that the device performs adequately for this cohort without new safety/effectiveness concerns. |
| Safe and effective for use in surgery (new indication) | Supported by "usability testing for use in surgery" and "cleaning and disinfection studies." |
| Compliance with relevant guidance documents and standards | Mention of conformance with guidance for Medical X-ray Imaging Devices, Solid State X-ray Devices, Hand-held X-ray Equipment, and Pediatric Information for X-ray Imaging Device Premarket Notifications. |
2. Sample size used for the test set and the data provenance
Not provided in the document. The filing is for an X-ray imaging device, not an AI/CADe system. The "performance data" for pediatric and surgical use is not detailed with sample sizes or data provenance.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable/Not provided. This relates to AI/CADe studies, not the 510(k) for an X-ray imaging system.
4. Adjudication method for the test set
Not applicable/Not provided.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
Not applicable. This is not an AI-assisted device.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
Not applicable. This is not an AI-enabled device.
7. The type of ground truth used
Not applicable. The "performance data" for pediatric and surgical use is not specified in terms of clinical "ground truth" used for evaluation. It likely refers to technical performance evaluations and usability rather than diagnostic accuracy against a clinical gold standard for disease detection.
8. The sample size for the training set
Not applicable. This is not an AI-enabled device.
9. How the ground truth for the training set was established
Not applicable. This is not an AI-enabled device.
(28 days)
The Micro C Medical Imaging System, M01 is a handheld and portable general purpose X-ray system that is indicated for use by qualified/trained clinicians on adult patients for taking diagnostic static and serial radiographic exposures of extremities. The device is not intended to replace a radiographic system that has both variable tube current and voltages (kVp) in the range that may be required for full optimization of image quality and radiation exposure for different exam types.
The Micro C Medical Imaging System, M01 (subject device) is a handheld X-ray system designed to aid clinicians with point-of-care visualization through diagnostic X-rays of distal extremities. The device allows a clinician to select desired technique factors best suited for their patient anatomy. The Micro C Medical Imaging System, M01 consists of three major subsystems: the Emitter, Cassette, and Control Unit. The System is intended to interface with an external Monitor (touchscreen or non-touchscreen display), a keyboard, and a mouse, and can provide a remote operator interface over the network to a laptop. The Micro C Medical Imaging System, M01 utilizes a computer vision positioning system to allow the emitter to be positioned above the patient anatomy and aligned to the cassette by the operator. The device is used in a clinical environment.
This looks like a 510(k) summary for a medical device called "Micro C Medical Imaging System, M01". The summary primarily focuses on establishing substantial equivalence to a predicate device and discussing technical characteristics and compliance with various standards.
There is no information provided regarding acceptance criteria or a typical clinical study with patient data that would involve the following:
- A table of acceptance criteria and the reported device performance: This document does not specify quantitative acceptance criteria for image quality or diagnostic performance, nor does it report device performance against such metrics.
- Sample size used for the test set and the data provenance: There is no mention of a test set with patient data.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: As there's no clinical test set, there's no mention of experts or ground truth establishment.
- Adjudication method for the test set: Not applicable as there's no patient test set.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done: Not applicable, as there is no patient test set or reader study mentioned.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done: The document describes an X-ray imaging system, not an AI algorithm; therefore, this is not applicable.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc): Not applicable as there's no clinical test set.
- The sample size for the training set: Not applicable as there's no AI algorithm with a training set discussed.
- How the ground truth for the training set was established: Not applicable as there's no AI algorithm with a training set discussed.
What the document does provide:
The document focuses on demonstrating substantial equivalence to predicate devices based on:
- Technological Characteristics Comparison (Tables 5-2 and 5-3): These tables compare the subject device (Micro C Medical Imaging System, M01) with a predicate mobile X-ray system (Nomad MD 75kV Handheld X-Ray System, K140723) and reference devices (KDR™ AU-DDR System, K193225 and Faxitron VisionCT, K173309). They detail various technical specifications such as product code, regulation, classification, indication for use, age of device use, principle of operation, image type, detector characteristics (for the incorporated detector), collimator, weight, dimensions, triggering mechanism, SSD, SID, light field, energy source, exposure time, mA, kVp, ingress protection, image processing, connectivity options, DICOM compliance, and device package contents.
- Non-Clinical Performance Data: The document states that testing was performed successfully according to a list of international standards (e.g., ISO, IEC) and FDA regulations. These standards cover aspects like risk management, diagnostic X-ray system requirements, electrical safety, electromagnetic compatibility, usability, X-ray tube assemblies, software product lifecycle, biocompatibility, labeling, and laser safety.
- Additional Non-Clinical Performance Testing: This includes Functional Testing, an Image Quality Study, Usability Testing, and a Cleaning Study. However, no specific details, criteria, or results from these studies are provided in this summary.
Conclusion from the document:
The summary concludes that the Micro C Medical Imaging System, M01 is similar to the legally marketed predicate device (Nomad MD 75kV Handheld X-Ray System) in intended use, technology, and performance data, and therefore does not raise different questions of safety and effectiveness, supporting its substantial equivalence claim.