510(k) Data Aggregation
(117 days)
Phoenix Sinus Tarsi Stent is an implant stabilization device used in the treatment of talotarsal joint instability in adult and pediatric patients four years of age and older. The stent is designed to stabilize the talus to prevent excessive anterior and/or medial displacement and/or plantarflexion of the talus, while allowing normal talotarsal joint motion.
The Phoenix Sinus Tarsi Stent is a non-pyrogenic type IIB extra-osseous talotarsal stabilization (EOTTS) device used in the treatment of talotarsal joint instability per the Graham et al. EOTTS classification system. The stent system consists of an implant designed to be inserted into the sinus tarsi and has corresponding instrumentation to facilitate the insertion. All stents are manufactured from Ti-6Al-4V ELI per ASTM F136 and are available in five sizes of varying diameter.
The Phoenix Sinus Tarsi Stent System is a medical device, and the provided FDA 510(k) clearance letter focuses on its substantial equivalence to predicate devices based on non-clinical performance testing. This type of submission does not typically involve the kind of AI/ML performance evaluation criteria and studies that would address human-in-the-loop performance, multi-reader multi-case studies, or detailed ground truth establishment as requested.
The provided document describes non-clinical performance testing to demonstrate the device's physical and biological properties. It does not involve AI or algorithms, nor does it refer to human interpretation of medical images or data. Therefore, many of the requested categories (such as human readers, AI assistance, ground truth for AI, etc.) are not applicable to this specific device and its regulatory submission.
However, I can extract the acceptance criteria and study information that is present in the document regarding the device's physical and material performance.
Acceptance Criteria and Study for the Phoenix Sinus Tarsi Stent System (Non-Clinical Performance)
The Phoenix Sinus Tarsi Stent System's acceptance criteria and performance are established through a series of non-clinical tests designed to demonstrate its safety and functionality as a physical implant. These tests do not involve AI/ML components or human interpretation studies.
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Acceptance Criteria Met (Yes/No) | Reported Device Performance |
|---|---|---|
| Mechanical Performance | ||
| Screw Pullout | Yes | Performed as intended per ASTM F543 |
| Cantilever Bending | Yes | Performed as intended per ASTM 2193 |
| Biocompatibility | ||
| Endotoxin Testing | Yes | Met predetermined criteria per AAMI ST72 and USP <85> |
| Cytotoxicity Testing | Yes | Met predetermined criteria per ISO 10993-5 |
| Biocompatibility Risk Assessment | Yes | Concluded favorably (details not specified, but criteria implied to have been met) |
| Sterilization & Packaging | ||
| Sterilization Testing | Yes | Met predetermined criteria per ISO 11137-1, ISO 11137-2 |
| Packaging Shelf-Life Performance Testing | Yes | Met predetermined criteria per ISO 11607-1, ASTM F88/F88M, ASTM F2096, ASTM F1886/1886M |
Note: The document explicitly states: "All testing showed the subject device performed as intended. All testing met applicable predetermined acceptance criteria."
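For context on how a "predetermined acceptance criterion" such as the endotoxin limit is typically applied, the sketch below converts a per-device limit into a per-millilitre release limit and checks a measurement against it. The 20 EU/device limit is the value commonly applied under AAMI ST72 / USP <85> for non-intrathecal devices; the extraction volume and measured value are hypothetical, since the document does not report the actual numbers used for this device.

```python
# Illustrative only: the 510(k) summary does not state the numeric limits used.
# Assumes the commonly applied 20 EU/device limit for non-intrathecal implants
# (per AAMI ST72 / USP <85>) and a hypothetical extraction volume.

def endotoxin_release_limit(eu_per_device: float, extraction_volume_ml: float) -> float:
    """EU/mL limit that the LAL test result must not exceed."""
    return eu_per_device / extraction_volume_ml

def meets_criterion(measured_eu_per_ml: float, limit_eu_per_ml: float) -> bool:
    """Simple pass/fail check against the predetermined acceptance criterion."""
    return measured_eu_per_ml <= limit_eu_per_ml

if __name__ == "__main__":
    limit = endotoxin_release_limit(eu_per_device=20.0, extraction_volume_ml=40.0)  # 0.5 EU/mL
    print(f"Release limit: {limit} EU/mL")
    print("Pass" if meets_criterion(measured_eu_per_ml=0.05, limit_eu_per_ml=limit) else "Fail")
```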
2. Sample Size for the Test Set and Data Provenance:
- Sample Size: Not explicitly stated for each test (e.g., number of stents tested for pullout, bending, etc.). For physical and material tests, sample sizes are typically determined by relevant ISO/ASTM standards.
- Data Provenance: The tests are non-clinical, implying laboratory-based testing of the device itself (e.g., physical specimens of the stent and its materials), not patient data. Therefore, concepts like "country of origin of the data" or "retrospective/prospective" studies are not applicable in this context.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
- Not Applicable. For non-clinical performance testing of a physical implant, "ground truth" is established by adherence to recognized international standards (e.g., ASTM, ISO, AAMI, USP) and predefined pass/fail criteria for material and mechanical properties. There are no human "experts" establishing ground truth in the sense of medical diagnosis or interpretation for this type of testing.
4. Adjudication Method for the Test Set:
- Not Applicable. Adjudication methods like 2+1 or 3+1 are used in studies involving human interpretation of data where consensus on ground truth is required. For the non-clinical tests described, outcomes are typically objectively measured against predefined criteria specified in the test standards.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:
- No. An MRMC study is designed to evaluate the performance of diagnostic devices, especially those involving human interpretation (e.g., radiologists reading images), often with and without AI assistance. The Phoenix Sinus Tarsi Stent System is a physical implant, not a diagnostic device involving human interpretation; thus, MRMC studies are not applicable.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- No. This device does not incorporate any AI or algorithm component. Its performance is purely based on its physical and material properties.
7. The Type of Ground Truth Used:
- Scientific Standards and Predetermined Criteria: For mechanical tests (e.g., screw pullout, cantilever bending), ground truth is defined by the requirements and methodologies outlined in the applicable ASTM standards (e.g., ASTM F543, ASTM 2193), with specific performance thresholds (e.g., minimum pullout strength, maximum deflection).
- Biological/Material Standards: For biocompatibility (Endotoxin, Cytotoxicity) and sterilization, ground truth is based on meeting the specifications and acceptable limits defined by international standards (e.g., AAMI ST72, USP <85>, ISO 10993-5, ISO 11137-1/2).
- Packaging Integrity: For shelf-life, ground truth is meeting parameters defined by ISO 11607-1 and relevant ASTM standards regarding package integrity.
8. The Sample Size for the Training Set:
- Not Applicable. This device does not use AI/ML; therefore, there is no "training set."
9. How the Ground Truth for the Training Set was Established:
- Not Applicable. There is no training set for this device.
(280 days)
General ophthalmic imaging including retinal, corneal, and external structures of the eye.
The Phoenix ICON system is an updated cart-based retinal imaging system covering the design changes made to date to the predicate device, Phoenix ICON. The Phoenix ICON GO retinal imaging system is a portable version of the predicate device, Phoenix ICON (K170527), including the design changes in the Phoenix ICON system.
Both the Phoenix ICON and Phoenix ICON GO are wide-field, handheld, high resolution, real-time retinal imaging devices. They are intended to be used for general ophthalmic imaging including retinal, corneal, and external structures of the eye. The intended users of the Phoenix ICON and Phoenix ICON GO are clinical imaging technicians, ophthalmic technicians, nurses, and physicians. The devices may be used in hospitals, medical clinics, and physician's offices.
The Phoenix ICON platform consists of either a cart-based (Phoenix ICON) or portable (Phoenix ICON GO) control box used in conjunction with a hand-held camera (Handpiece) using interchangeable LED-based light sources (White and Blue light). The Phoenix ICON cart contains an AC mains power attachment, a battery module, a keyboard interface, a monitor, and a computer with Phoenix ICON software. The Phoenix ICON GO contains a portable control box with battery function and has an interface for attachment to a specified laptop computer which runs the Phoenix ICON software. Both systems may be used with a Foot Pedal, White Light Module (standard), Blue Light Module (FA) and/or Diffuser accessory.
The Phoenix ICON Handpiece contains a wide-field, high-resolution camera that is used in three (3) modes: External Imaging (White Light), Retinal Imaging (White Light), and Fluorescein Angiography (Blue Light). For external imaging, the Diffuser accessory is placed over the lens tip to diffuse the light and provide for images of the outer surfaces of the eye. Both Retinal Imaging and Fluorescein Angiography are performed with the glass lens of the Handpiece coupled to the cornea via an imaging gel. In these imaging methods, LED light is emitted into the eye to illuminate the retina for image capture.
Both the Phoenix ICON and Phoenix ICON GO are software-controlled systems which can capture either video or still images and store them on the control box (Cart computer or GO laptop) for later review. The Phoenix ICON system may be connected to IT networks under IT supervision.
The provided document does not contain details about specific acceptance criteria for a device's performance in a clinical study or a study proving that the device meets those criteria. Instead, it is a 510(k) summary for the NeoLight Phoenix ICON and Phoenix ICON GO ophthalmic cameras, aimed at demonstrating substantial equivalence to a predicate device (K170527, Phoenix ICON by Phoenix Technology Group, LLC).
The document focuses on comparing technological characteristics and safety testing, not on clinical performance acceptance criteria or a study to demonstrate such.
However, based on the Performance Testing section (Table 5.3) related to Simulated Use, it states:
Characteristic: Image Clarity - Comparison between subject and predicate images to ensure equivalent visual quality of the captured images.
Results: Pass.
While this indicates some form of performance assessment related to image quality, it does not provide the specific acceptance criteria (e.g., quantitative metrics, thresholds) or the detailed methodology of the study. It also doesn't present the "reported device performance" in a manner typical for clinical trials (e.g., sensitivity, specificity, or reader agreement scores).
Therefore, a table of acceptance criteria and reported device performance, as well as several other requested details, cannot be fully extracted from the provided text.
Here's an attempt to answer the questions based only on the available information, noting where information is missing:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Visual quality of captured images is equivalent to predicate device | Passed (Equivalent visual quality of captured images) |
Note: The document only provides a high-level "Pass" result for "Image Clarity - Comparison between subject and predicate images to ensure equivalent visual quality of the captured images." It does not specify quantitative acceptance criteria or detailed performance metrics.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The document mentions "Simulated Use" testing for "Image Clarity - Comparison between subject and predicate images." However, it does not specify the sample size used for this comparison, nor does it provide any information on data provenance (e.g., country of origin, retrospective/prospective nature).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
The document does not provide any information about the number or qualifications of experts used for establishing ground truth or evaluating image clarity.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
The document does not specify any adjudication method used for the image clarity comparison.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
The document describes the device as an "Ophthalmic Camera" for "General ophthalmic imaging." It is an image acquisition device and does not include AI functionality. Therefore, an MRMC comparative effectiveness study involving AI assistance for human readers is not relevant to this submission, and no such study is mentioned.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
This device is an ophthalmic camera. Its function is to capture images. It does not appear to incorporate an algorithm that independently processes or interprets images to provide a diagnosis or finding, nor does it claim AI capabilities. Therefore, a standalone algorithm performance study is not applicable and not mentioned. The "Simulated Use" test assesses the visual quality of the captured images, not the performance of an interpretive algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the "Image Clarity" assessment, the implicit "ground truth" was a visual comparison to the predicate device's images to ensure equivalent visual quality. The document does not specify a separate, independent ground truth method like expert consensus on pathology, or outcomes data.
8. The sample size for the training set
The document concerns an ophthalmic camera, not an AI/ML algorithm requiring a training set. Therefore, this question is not applicable.
9. How the ground truth for the training set was established
As the device is an ophthalmic camera and not an AI/ML algorithm, there is no training set and therefore no ground truth for a training set.
(132 days)
Phoenix Contact Lens Case is indicated for storage of soft (hydrophilic), rigid gas permeable and hard contact lenses during chemical disinfection.
There are three models of the Phoenix Contact Lens Case:
CL-01 "Dome Top Flat Pack" - made with LDPE and has 1.5ml wells on each side
CL-02 "Classic Flat Pack" - made with LDPE and has 1.5 ml wells on each side
CL-03 "Sunglass Shape Flat Pack" made with Polypropylene and has 2.0 ml wells on each side
All three models have hinged self sealing caps and are available in white, black, blue, orange, green, and natural.
The Phoenix contact lens cases are intended for storage during chemical disinfection of soft, rigid gas permeable or hard contact lenses. It is not to be used with heat disinfection.
The provided document is a 510(k) premarket notification for Phoenix Contact Lens Cases (CL-01, CL-02, CL-03). It outlines the device's substantial equivalence to legally marketed predicate devices.
Based on the provided text, the device in question is a contact lens case, not an AI/ML medical device.
Therefore, the requested information regarding acceptance criteria, study details, sample sizes, expert involvement, and ground truth establishment, which are typical for studies proving the performance of AI/ML medical devices, is not applicable to this submission.
The document details non-clinical tests performed on the contact lens cases to demonstrate their substantial equivalence. These tests primarily focus on the biocompatibility and safety of the materials used in the contact lens cases, not on the performance of a diagnostic or therapeutic algorithm.
Here's a summary of the non-clinical tests and their conclusions, which serve as the "acceptance criteria" and "device performance" for this type of device:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Test Objective) | Reported Device Performance (Conclusion) |
|---|---|
| Cytotoxicity (per ISO 10993-5): Assess if the device material extracts cause cell death or harm. | "Based on the results obtained under laboratory testing conditions, the extract of test item, Contact Lens Case was found to be 'non-cytotoxic' to the subconfluent monolayer of L-929 mouse fibroblast cells." |
| Intracutaneous skin irritation (per ISO 10993-23): Assess if extracts cause skin irritation when injected intradermally. | "Based on the results of the experiment, it is concluded that the polar extracts of test item, Contact Lens Case was 'Nonirritant' to the skin of New Zealand White Rabbits under the experimental conditions and the dose employed as per the ISO 10993 Part 23:2021 (E) Specification." |
| Guinea pig maximization (GPMT) skin sensitization (per ISO 10993-10): Assess the potential for the device material to cause allergic sensitization. | "Based on the above results of the experiment, it is concluded that the polar extracts of Contact Lens Case was found to be 'Non-sensitizer' to the skin of the Guinea pigs under the experimental conditions employed." |
| Acute systemic injection (per ISO 10993-11): Assess the potential for general toxic effects after systemic exposure to extracts. | "Based on the results of the experiment, it is concluded that the polar extracts of test item, Contact Lens Case when administered to Swiss Albino Mice through intravenous and intraperitoneal routes respectively at a dose volume of 50 mL/kg body weight did not reveal any systemic toxicity under the experimental conditions employed." |
| Material mediated pyrogenicity (per USP <151>): Assess the potential for the device material to induce fever. | "Based on the results of the experiment, it is concluded that the extract of Contact Lens Case evaluated for the pyrogen test in New Zealand White Rabbits is Non-pyrogenic as it meets the requirements of the pyrogen test as per U.S. Pharmacopoeia General Chapter <151> Pyrogen Test." |
| Acute ocular irritation testing (per ISO 10993-23): Assess the potential for the device material to cause irritation to the eye. | "Under the experimental conditions employed and based on the observed results of the experiment, it is concluded that polar and non-polar extract of test item, Contact Lens Case did not produce any irritant effects to the eyes of New Zealand White Rabbits as per ISO 10993 'Biological Evaluation of Medical Devices' Part 23:2021(E) 'Test for Irritation'." |
The following numbered points are not applicable to this device, as it is a physical contact lens case and not an AI/ML software device.
- Sample sizes used for the test set and the data provenance: Not applicable. The tests involved in vitro cell cultures and in vivo animal models, with sample sizes determined by the respective ISO standards and USP guidelines for biocompatibility testing (e.g., specific numbers of cells, guinea pigs, rabbits, or mice as per the standard). The provenance is "laboratory testing conditions" and "experimental conditions." These are typically prospective in nature.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable. Ground truth for these tests is based on objective biological responses measured in a laboratory setting per standardized protocols, not human expert consensus on images or clinical data.
- Adjudication method for the test set: Not applicable. The tests evaluate direct biological and material responses, not subjective interpretations requiring adjudication.
- If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance: Not applicable. This study type is for AI/ML diagnostic performance, not for a physical medical device like a contact lens case.
- If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: Not applicable. This refers to AI/ML algorithm performance.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not applicable to an AI context. The "ground truth" for these biocompatibility tests is the presence or absence of a specific biological or toxicological reaction as defined by the international standards (e.g., cell viability, skin erythema/edema, systemic toxicity, fever induction, ocular irritation).
- The sample size for the training set: Not applicable. This document describes pre-market testing for a physical device, not an AI/ML algorithm that requires training data.
- How the ground truth for the training set was established: Not applicable.
(26 days)
This is a digital mobile diagnostic x-ray system intended for use by a qualified/trained doctor or technician on both adult and pediatric subjects for taking diagnostic radiographic exposures of the skull, spinal column, chest, abdomen, extremities, and other body parts. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position. Not for mammography.
This is a modified version of our previous predicate mobile PHOENIX. The predicate PHOENIX mobile is interfaced with Konica-Minolta Digital X-ray panels and CS-7 or Ultra software image acquisition. PHOENIX mobile systems will be marketed in the USA by KONICA MINOLTA. Models with the CS-7 Software will be marketed as AeroDR TX m01. Models with the Ultra software will be marketed as mKDR Xpress. The modification adds two new models of compatible Konica-Minolta digital panels, the AeroDR P-65 and AeroDR P-75, cleared in K210619. These newly compatible models are capable of a mode called DDR, Dynamic Digital Radiography, wherein a series of radiographic exposures can be rapidly acquired, at up to a maximum of 15 frames per second (300 frames).
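As a quick arithmetic sketch of what the DDR specification above implies, the snippet below computes the longest possible DDR run from the two figures quoted in the description (a 300-frame series at up to 15 frames per second); the lower frame rates shown are purely illustrative.

```python
# Quick check of the DDR acquisition envelope using only the figures quoted above:
# a 300-frame series captured at up to 15 frames/second.
MAX_FPS = 15
MAX_FRAMES = 300

max_duration_s = MAX_FRAMES / MAX_FPS  # 20 seconds at the maximum frame rate
print(f"Longest DDR run at {MAX_FPS} fps: {max_duration_s:.0f} s")

# At lower frame rates the same 300-frame buffer spans a longer observation window.
for fps in (15, 10, 5):
    print(f"{fps:>2} fps -> {MAX_FRAMES / fps:.0f} s")
```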
The provided text describes a 510(k) premarket notification for a mobile x-ray system. The document focuses on demonstrating substantial equivalence to a legally marketed predicate device rather than presenting a study to prove the device meets specific performance-based acceptance criteria for an AI/algorithm.
Therefore, many of the requested details, such as specific acceptance criteria for algorithm performance, sample sizes for test sets, expert ground truth establishment, MRMC studies, or standalone algorithm performance, are not applicable or not present in this type of submission.
The essence of this submission is that the entire mobile x-ray system, including its components (generator, panels, software), is deemed safe and effective because it is substantially equivalent to a previously cleared device, with only minor modifications (adding two new compatible digital panels and enabling a DDR function in the software, which is stated to be "unchanged firmware" and "moderate level of concern").
Here's an attempt to address your questions based on the provided text, while acknowledging that many of them pertain to AI/algorithm performance studies, which are not the focus of this 510(k):
1. A table of acceptance criteria and the reported device performance
The document does not specify performance-based acceptance criteria for an AI/algorithm. Instead, it demonstrates substantial equivalence to a predicate device by comparing technical specifications. The "acceptance criteria" in this context are implicitly met if the new device's specifications (kW rating, kV range, mA range, collimator, power source, panel interface, image area sizes, pixel sizes, resolutions, MTF, DQE) are equivalent to or improve upon the predicate, and it remains compliant with relevant international standards.
| Characteristic | Predicate: K212291 PHOENIX | PHOENIX/AeroDR TX m01 and PHOENIX/mKDR Xpress. | Acceptance Criterion (Implicit) | Reported Performance |
|---|---|---|---|---|
| Indications for Use | Digital mobile diagnostic x-ray for adults/pediatrics, skull, spine, chest, abdomen, extremities. Not for mammography. | SAME | Must be identical to predicate. | SAME (Identical) |
| Configuration | Mobile System with digital x-ray panel and image acquisition computer | SAME | Must be identical to predicate. | SAME (Identical) |
| X-ray Generator(s) | kW: 20, 32, 40, 50 kW; kV: 40-150 kV (1 kV steps); mA: 10-650 mA | SAME | Must be identical to predicate. | SAME (Identical) |
| Collimator | Ralco R108F | SAME | Must be identical to predicate. | SAME (Identical) |
| Meets US Performance Standard | YES 21 CFR 1020.30 | SAME | Must meet this standard. | YES (Identical) |
| Power Source | Universal, 100-240 V~, 1 phase, 1.2 kVA | SAME | Must be identical to predicate. | SAME (Identical) |
| Software | Konica-Minolta CS-7 or Ultra | CS-7 and Ultra modified for DDR mode | Functions must be equivalent/improved; DDR enabled. | CS-7 and Ultra modified for DDR mode |
| Panel Interface | Ethernet or Wi-Fi wireless | SAME | Must be identical to predicate. | SAME (Identical) |
| Image Area Sizes (Panels) | Listed AeroDR P-series | Listed AeroDR P-series + P-65, P-75 | Expanded range must be compatible and cleared. | Expanded range compatible, previously cleared. |
| Pixel Sizes (Panels) | Listed AeroDR P-series | Listed AeroDR P-series + P-65, P-75 | Expanded range must be compatible and cleared. | Expanded range compatible, previously cleared. |
| Resolutions (Panels) | Listed AeroDR P-series | Listed AeroDR P-series + P-65, P-75 | Expanded range must be compatible and cleared. | Expanded range compatible, previously cleared. |
| MTF (Panels) | Listed AeroDR P-series | Listed AeroDR P-series + P-65, P-75 | Performance must be equivalent or better. | P-65 (Non-binning) 0.62, (2x2 binning) 0.58; P-75 (Non-binning) 0.62, (2x2 binning) 0.58 |
| DQE (Panels) | Listed AeroDR P-series | Listed AeroDR P-series + P-65, P-75 | Performance must be equivalent or better. | P-65 0.56 @ 1 lp/mm; P-75 0.56 @ 1 lp/mm |
| Compliance Standards | N/A | IEC 60601-1, -1-2, -1-3, -2-54, -2-28, -1-6, IEC 62304 | Must meet relevant international safety standards. | Meets all listed IEC standards. |
| Diagnostic Quality Images | N/A | Produced diagnostic quality images as good as predicate | Must produce images of equivalent diagnostic quality. | Verified |
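For readers interpreting the generator rows in the table above, the kW rating, kV range, and mA range are linked by P ≈ kV × mA: at any given kV, the available tube current is capped by the generator's power rating. The sketch below illustrates this using the 50 kW rating and the 40-150 kV / 650 mA limits from the table; real exposure charts also depend on X-ray tube heat ratings, which are not given in the document.

```python
# Relationship between generator kW rating, kV, and available mA (P ~ kV * mA).
# Assumes the 50 kW rating and the 40-150 kV / 650 mA limits from the comparison table;
# actual exposure programming also accounts for X-ray tube ratings not given here.

def max_ma(kv: float, generator_kw: float = 50.0, ma_limit: float = 650.0) -> float:
    """Largest tube current (mA) the generator can sustain at a given kV."""
    return min(ma_limit, generator_kw * 1000.0 / kv)

for kv in (40, 77, 100, 150):
    print(f"{kv:>3} kV -> {max_ma(kv):6.0f} mA")
# At 40 kV the 650 mA ceiling governs; above roughly 77 kV the 50 kW rating dominates.
```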
2. Sample size used for the test set and the data provenance
No specific test set or data provenance (country, retrospective/prospective) is mentioned for AI/algorithm performance. The "testing" involved "bench and non-clinical tests" to verify proper system operation and safety, and that the modified combination of components produced diagnostic quality images "as good as our predicate generator/panel combination." This implies physical testing of the device rather than a dataset for algorithm evaluation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable. There was no specific test set requiring expert-established ground truth for an AI/algorithm evaluation. The determination of "diagnostic quality images" likely involved internal assessment by qualified personnel within the manufacturer's testing process.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable. No adjudication method is described as there was no formal expert-read test set for algorithm performance.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
No. An MRMC study was not conducted as this submission is not about an AI-assisted diagnostic tool designed to improve human reader performance. It is for a mobile x-ray system.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
No. This submission is for a medical device (mobile x-ray system), not a standalone AI algorithm. The software components (CS-7 and Ultra) are part of the image acquisition process, and the only software "modification" mentioned is enabling the DDR function, which is a feature of the new panels, not an AI for diagnosis.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
Not applicable. The substantial equivalence argument relies on comparing technical specifications and demonstrating that the physical device produces images of "diagnostic quality" equivalent to the predicate, rather than an AI producing diagnostic outputs against a specific ground truth.
8. The sample size for the training set
Not applicable. This is not an AI/ML algorithm submission requiring a training set. The software components are for image acquisition and processing, not for AI model training.
9. How the ground truth for the training set was established
Not applicable, as no training set for an AI/ML algorithm was used or mentioned.
(557 days)
The Phoenix Digital Thermometer is intended to measure the human body temperature orally, rectally or under the arm. The Phoenix Digital Thermometer is reusable for clinical or home use on people of all ages.
Not Found
This document appears to be an FDA 510(k) clearance letter for a "Phoenix Digital Thermometer." As such, it grants market clearance based on substantial equivalence to a predicate device, but it does not contain the detailed acceptance criteria for device performance or the study details proving it meets those criteria, as typically found in a clinical study report or a more extensive submission summary.
The letter confirms the device is substantially equivalent to legally marketed predicate devices and is subject to general controls, but it does not describe a specific study proving the device meets particular acceptance criteria in the way you've outlined.
Therefore, I cannot provide the requested information in the structured format you've asked for based solely on the provided text. The document acts as an approval notice, not the validation study itself.
To answer your questions, I would need access to the actual 510(k) submission summary or supporting documentation that was reviewed by the FDA, which would detail the performance data, acceptance criteria, and study design used to demonstrate substantial equivalence.
(54 days)
This is a digital mobile diagnostic x-ray system intended for use by a qualified/trained doctor or technician on both adult and pediatric subjects for taking diagnostic radiographic exposures of the skull, spinal column, chest, abdomen, extremities, and other body parts. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position. Not for mammography.
This is a new type of our previous predicate mobile PhoeniX. The predicate PhoeniX mobile is interfaced with Canon Digital X-ray panels and Canon control software CXDI-NE. The new PhoeniX mobile is interfaced with Konica-Minolta Digital X-ray panels and CS-7 or Ultra software image acquisition. Phoenix mobile systems will be marketed in the USA by KONICA MINOLTA. Models with the CS-7 Software will be marketed as AeroDR Tran-X. Models with the Ultra Software will be marketed as mKDR II. The compatible digital receptor panels are the same for either model. The CS-7 software was cleared under K151465/K172793, while the Ultra software is new. The CS-7 is a DIRECT DIGITIZER used with an image diagnosis device, medical imaging device and image storage device connected via the network. This device digitally processes patient images collected by the medical imaging device to provide image and patient information. By contrast, the Ultra-DR software is designed as an exam-based modality image acquisition tool. Ultra-DR software and its accompanying Universal Acquisition Interface (UAI) were developed to be acquisition-device independent. Basic features of the software include Modality Worklist Management (MWM) / Modality Worklist (MWL) support, DICOM Send, CD Burn, DICOM Print, and Exam Procedure Mapping. Ultra Software is designed to increase patient throughput while minimizing data input errors. Ultra is made up of multiple components that increase efficiency while minimizing errors. The main components of Ultra are the Worklist, Acquisition Interface, and Configuration Utility. These components combine to create a stable, powerful, and customizable image capture system. The intuitive graphical user interface is designed to improve radiology technologist accuracy and image quality. Worklist and Exam screens were developed to allow site-specific customizations to seamlessly integrate into existing practice workflows.
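For readers unfamiliar with what Modality Worklist (MWL) support means in practice, the sketch below shows a generic DICOM C-FIND worklist query using the open-source pynetdicom library. It is purely illustrative of the DICOM service named in the feature list above and is not the Ultra software's API; the RIS hostname, port, and AE titles are hypothetical placeholders.

```python
# Illustrative Modality Worklist (MWL) query with pynetdicom -- NOT the Ultra software's API.
# Hostname, port, and AE titles below are hypothetical placeholders.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import ModalityWorklistInformationFind

ae = AE(ae_title="DX_ACQ_SCU")
ae.add_requested_context(ModalityWorklistInformationFind)

# Query keys: scheduled DX procedures for a given acquisition station.
query = Dataset()
query.PatientName = ""
query.ScheduledProcedureStepSequence = [Dataset()]
step = query.ScheduledProcedureStepSequence[0]
step.Modality = "DX"
step.ScheduledStationAETitle = "PHOENIX_DX"
step.ScheduledProcedureStepStartDate = ""

assoc = ae.associate("ris.example.org", 104, ae_title="RIS_SCP")
if assoc.is_established:
    for status, identifier in assoc.send_c_find(query, ModalityWorklistInformationFind):
        # 0xFF00 / 0xFF01 are "pending" statuses carrying a matching worklist item.
        if status and status.Status in (0xFF00, 0xFF01) and identifier is not None:
            print(identifier.PatientName)
    assoc.release()
else:
    print("Could not associate with the worklist SCP")
```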
Here's an analysis of the acceptance criteria and study information for the PHOENIX Digital Mobile Diagnostic X-Ray System, based on the provided text.
Based on the provided document, the PHOENIX device is a digital mobile diagnostic x-ray system, and the submission is for a modification to an existing cleared device (K192011 PHOENIX). The "study" described is primarily non-clinical bench testing to demonstrate that the modified system, with new digital flat-panel detectors (AeroDR series) and new acquisition software (Ultra), is as safe and effective as the predicate device. No clinical study information is provided in this document.
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state "acceptance criteria" in a quantitative, measurable sense for the overall device performance. Instead, it focuses on demonstrating substantial equivalence to a predicate device. The comparison is primarily in the form of feature similarity and compliance with international standards for safety and electrical performance.
| Characteristic | Predicate (K192011 PHOENIX) | PHOENIX (Proposed) | Comparison of Performance |
|---|---|---|---|
| Indications for Use | Intended for use by a qualified/trained doctor or technician on both adult and pediatric subjects for taking diagnostic radiographic exposures of the skull, spinal column, chest, abdomen, extremities, and other body parts. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position. Not for mammography. | SAME (includes device description as requested by FDA) | Met: Indications for use are identical, signifying no change in intended clinical application. |
| Configuration | Mobile System with digital x-ray panel and image acquisition computer | SAME | Met: Basic physical configuration remains unchanged. |
| X-ray Generator(s) | kW rating: 20 kW, 32 kW, 40 kW and 50 kW. kV range: from 40 kV to 150 kV in 1 kV steps. mA range: from 10 mA to 630 mA / 640 mA / 650 mA. | SAME | Met: The X-ray generator specifications are identical, ensuring consistent radiation output characteristics. |
| Collimator | Ralco R108F | Ralco R108F | Met: The collimator model is identical, ensuring consistent radiation field shaping. |
| Meets US Performance Standard | YES 21 CFR 1020.30 | SAME | Met: Compliance with the US Performance Standard for diagnostic X-ray systems is maintained. |
| Power Source | Universal power supply, from 100 V~ to 240 V~. 1 phase, 1.2 kVA | SAME | Met: Power supply specifications are identical. |
| Software | Canon control software CXDI-NE | Konica-Minolta control software CS-7 (K151465 or K172793) OR Konica-Minolta control software Ultra. | Met (by validation): New software (Ultra) validated according to FDA Guidance. CS-7 was previously reviewed. This is a key change, and compliance is asserted through specific software validation. |
| Panel Interface | Ethernet or Wi-Fi wireless | SAME | Met: Interface method is unchanged. |
| Image Area Sizes (Detectors) | CANON CXDI-401C 16"x 17", CXDI-701C 14" x 17", CXDI-801C 11" x 14", CXDI-710C 14" x 17", CXDI-810C 14" x 11", CXDI-410C 17" x 17" | AeroDR P-51 14" x 17", AeroDR P-52 14" x 17", AeroDR P-61 14" x 17", AeroDR P-71 17" x 17", AeroDR P-81 10" x 12". (Similar range of sizes, all previously cleared) | Met (by equivalence): The new detectors offer a "similar range of sizes" and are all "previously cleared" by FDA. This implies their performance characteristics within those sizes are acceptable. |
| Pixel Sizes (Detectors) | CANON CXDI (all 125 µm) | AeroDR P-51 175 µm, AeroDR P-52 175 µm, AeroDR P-61 100/200 µm, AeroDR P-71 100/200 µm. | Met (by equivalence): The new pixel sizes are different but are associated with previously cleared detectors, implying their diagnostic utility is acceptable. Specific performance comparison (e.g., to predicate's pixel size) isn't given for diagnostic equivalence, but rather for detector equivalence. |
| Resolutions (Detectors) | CANON CXDI (various, e.g., CXDI-401C 3320 × 3408 pixels) | AeroDR P-51 1994 × 2430 pixels, AeroDR P-52 1994 × 2430 pixels, AeroDR P-61 3488 × 4256 pixels, AeroDR P-71 4248 × 4248 pixels, AeroDR P-81 2456 × 2968 pixels. | Met (by equivalence): Similar to pixel size, specific resolutions differ but are for previously cleared detectors. Diagnostic equivalence is asserted by the prior clearance of the detectors themselves. |
| MTF (Detectors) | CANON CXDI (all 0.35 @ 2cy/mm) | AeroDR P-51 0.30 @ 2cy/mm, AeroDR P-52 0.30 @ 2cy/mm, AeroDR P-61 0.30 @ 2cy/mm, AeroDR P-71 0.30 @ 2cy/mm, AeroDR P-81 0.30 @ 2cy/mm. | Met (by equivalence): The new detectors have slightly lower MTF values at 2cy/mm, but these are for previously cleared detectors, implying acceptable image quality for diagnostic use. |
| DQE (Detectors) | CANON CXDI (all 0.6 @ 0 lp/mm) | AeroDR P-51 0.62 @ 0 lp/mm, AeroDR P-52 0.62 @ 0 lp/mm, AeroDR P-61 0.56 @ 1 lp/mm, AeroDR P-71 0.56 @ 1 lp/mm, AeroDR P-81 0.56 @ 1 lp/mm. | Met (by equivalence): DQE values differ but are for previously cleared detectors, suggesting acceptable performance. Some are higher, some are slightly lower (e.g., P61/P71/P81 at 1 lp/mm vs. predicate at 0 lp/mm). The key is the "previously cleared" status. |
| Compliance with Standards | N/A (implied by predicate clearance) | IEC 60601-1:2005+A1:2012, IEC 60601-1-2:2014, IEC 60601-1-3:2008+A1:2013, IEC 60601-2-54:2009+A1:2015, IEC 60601-2-28:2010, IEC 60601-1-6:2010 + A1:2013, IEC 62304:2006 + A1:2016. | Met: Device tested and found compliant with these international standards for safety and essential performance. |
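As a consistency check on the detector rows above, dividing the nominal imaging area by the pixel pitch gives an approximate matrix size; the estimate lands slightly above the listed resolution because the active area is somewhat smaller than the nominal panel dimensions. A minimal sketch using the AeroDR P-51 figures from the table:

```python
# Rough consistency check between nominal panel size, pixel pitch, and matrix size.
# Uses the AeroDR P-51 figures from the table above (14" x 17", 175 um pitch,
# listed matrix 1994 x 2430); the active area is smaller than the nominal size,
# so the estimate slightly overshoots the listed values.
MM_PER_INCH = 25.4

def approx_matrix(size_in: tuple[float, float], pitch_um: float) -> tuple[int, ...]:
    pitch_mm = pitch_um / 1000.0
    return tuple(round(dim * MM_PER_INCH / pitch_mm) for dim in size_in)

estimate = approx_matrix((14.0, 17.0), 175.0)
print(f"Estimated matrix: {estimate[0]} x {estimate[1]}  (listed: 1994 x 2430)")
# -> roughly 2032 x 2467, consistent with a listed active matrix of 1994 x 2430.
```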
Summary of the "Study" Proving Acceptance Criteria
The study described is a non-clinical, bench testing-based assessment for demonstrating substantial equivalence rather than a clinical study measuring diagnostic performance outcomes.
The core argument for substantial equivalence is based on:
- Identical Indications for Use.
- Identical platform (mobile system, generator, collimator, power source).
- Replacement of components (detectors and acquisition software) with components that are either:
- Previously FDA cleared (AeroDR detectors, CS-7 software).
- Validated according to FDA guidance (Ultra software).
- Compliance with recognized international standards for medical electrical equipment.
Here are the specific details requested:
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not applicable. No patient-level test set data is mentioned for testing diagnostic performance. The "test set" consisted of physical devices (systems covering all generator/panel combinations) for bench testing and software for validation.
- Data Provenance: Not applicable for a clinical test set. The testing was non-clinical bench testing. The detectors themselves (AeroDR) are stated to have been "previously cleared" by the FDA, implying their performance was established via other submissions, likely including data from various countries consistent with regulatory submissions.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts
- Not applicable. There was no clinical test set requiring expert ground truth establishment for diagnostic accuracy.
4. Adjudication Method for the Test Set
- Not applicable. There was no clinical test set requiring adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- No, an MRMC comparative effectiveness study was not done. The document explicitly states: "Clinical testing was not required to establish substantial equivalence because all digital x-ray receptor panels have had previous FDA clearance."
- Effect size of human readers improvement: Not applicable, as no such study was performed.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Yes, in spirit, for the software component. The new image acquisition software (Ultra) was validated according to the "FDA Guidance: Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices." This validation assesses the software's functionality and performance as a standalone component within the system, ensuring it correctly manages workflow, acquires images, and processes them. However, this is software validation, not a standalone diagnostic performance study in the context of an AI algorithm producing diagnostic outputs.
7. The Type of Ground Truth Used
- For the overall device: Substantial equivalence to a legally marketed predicate device (K192011 PHOENIX), which itself would have demonstrated safety and effectiveness.
- For the components (detectors): Prior FDA clearance of the Konica-Minolta AeroDR panels served as the "ground truth" for their imaging characteristics (MTF, DQE, pixel size, etc.) being diagnostically acceptable.
- For the software (Ultra): Validation against specified functional and performance requirements outlined in the FDA software guidance, which serves as the ground truth for software quality and safety.
- For the PHOENIX system itself: Compliance with international safety and performance standards (IEC series) served as the ground truth for its electrical, mechanical, and radiation safety.
8. The Sample Size for the Training Set
- Not applicable. This device is not an AI/ML algorithm that requires a training set in the conventional sense of image analysis. It is an imaging acquisition device. The software validation is based on engineering principles and testing, not statistical training on a dataset.
9. How the Ground Truth for the Training Set Was Established
- Not applicable. As above, no training set for an AI/ML algorithm was used.
(58 days)
Intended for use by a qualified/trained doctor or technician on both adult and pediatric subjects for taking diagnostic radiographic exposures of the skull, spinal column, extremities, and other body parts. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position. Not for mammography.
These are redesigned versions of our predicate mobile digital diagnostic x-ray systems. They feature motorized movement and full battery operation. Various Canon digital X-ray panels are supplied with the system. (See list above.) The Mobile X-Ray Unit PhoeniX has a Basic configuration or Advanced configuration. Advanced configurations include:
- Second screen on head assembly.
- Smooth movements of head assembly.
- Telescopic arm with four steps instead of the three steps of the basic configuration.
The Mobile X-Ray unit is provided with a touch screen to operate as a control console. The Digital Imaging System is composed of image receptors and an application for image acquisition (control console & image processing controller). The image acquisition software "CANON CXDI Control Software NE" runs on the Mobile X-ray unit and is displayed on the touch screen. It is the user interface; the compatible digital detectors are listed above, and all have FDA clearance. The Advanced configuration has a second Touch Screen Monitor located on the head assembly.
The PhoeniX Mobile X-Ray Unit is provided with separate battery packs for X-ray generation and motorized movement. The unit can operate connected to mains or in stand-alone mode, that is, without mains being present or while unplugged from mains. The unit is connected to mains to charge the batteries. New rating: the input voltage range goes from 100 V~ to 240 V~, 1 phase, 1.1 kVA. A new X-ray generator (model SHFM) is mounted on the Battery Mobile X-Ray Unit PhoeniX. This new X-ray generator for the PhoeniX Mobile X-Ray Unit has a radiogenic unit mounted on the head assembly, comprising an X-ray tube with rotating anode and its high-voltage circuit. The electronics and associated software that control X-ray generation are placed on the mobile cart. The Battery Mobile X-Ray Unit PhoeniX is provided with different output powers: 20 kW, 32 kW, 40 kW and 50 kW. Two X-ray tube inserts with rotating anode, manufactured by CANON ELECTRON TUBES & DEVICES, are available:
- New: XRR-3331 insert.
- New: E7886 insert.
New: Manual Beam Limiting Device from Ralco, model R108F. The external interface (controls) and covers are provided by Sedecal. There are two versions of the collimator assembly, Basic and Advanced.
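One point worth noting from the description above: output powers up to 50 kW are paired with a mains rating of only 1.1 kVA, which works because exposure energy is drawn from the dedicated X-ray battery pack while mains power only recharges it. A back-of-the-envelope sketch, assuming a hypothetical 100 ms exposure (the document does not state exposure times):

```python
# Why a 1.1 kVA mains connection can support 50 kW exposures: the exposure energy
# comes from the X-ray battery pack, and mains only recharges it between exposures.
# The 100 ms exposure below is a hypothetical illustration, not a device specification.
GENERATOR_KW = 50.0
EXPOSURE_S = 0.100          # hypothetical single-exposure duration
MAINS_KVA = 1.1             # mains is used for battery charging only

energy_per_exposure_wh = GENERATOR_KW * 1000 * EXPOSURE_S / 3600.0   # about 1.4 Wh
recharge_time_s = energy_per_exposure_wh * 3600.0 / (MAINS_KVA * 1000)

print(f"Energy per 50 kW, 100 ms exposure: {energy_per_exposure_wh:.2f} Wh")
print(f"Time for 1.1 kVA mains to replace that energy (ideal): {recharge_time_s:.1f} s")
```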
The provided text describes a 510(k) premarket notification for a mobile X-ray system named PhoeniX. This document focuses on demonstrating substantial equivalence to a predicate device rather than presenting a standalone study for novel acceptance criteria.
Therefore, many of the requested items (acceptance criteria, specific study details with sample sizes, expert involvement, and ground truth information) are not directly addressed in the provided FDA submission as they would be for a de novo marketing authorization or a more extensive clinical study. The submission relies heavily on non-clinical testing and the prior clearance of its components and predicate devices.
However, I can extract information related to the device's technical specifications and the basis for its substantial equivalence, which implicitly acts as its "acceptance criteria" for regulatory clearance.
Here's an analysis based on the provided text, addressing the points where information is available:
1. Table of Acceptance Criteria (Implicit) and Reported Device Performance
The device is considered acceptable if its performance and specifications are substantially equivalent to the predicate device and meet relevant international standards. The "reported device performance" in this context refers to its technical specifications and compliance with safety standards, rather than clinical performance metrics like sensitivity or specificity.
| Acceptance Criteria (Implicitly based on Predicate & Standards) | Reported Device Performance (PhoeniX) |
|---|---|
| Indications for Use: | Intended for use by a qualified/trained doctor or technician on both adult and pediatric subjects for taking diagnostic radiographic exposures of the skull, spinal column, chest, abdomen, extremities, and other body parts. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position. Not for mammography. (SAME as Predicate) |
| Configuration: | Mobile System with digital x-ray panel and image acquisition computer. (SAME as Predicate) |
| kW Rating: | 20 kW, 32 kW, 40 kW and 50 kW. (Predicate: 40 kW only) |
| kV Range: | From 40 kV to 150 kV in 1 kV steps. (SAME as Predicate) |
| mA Range: | From 10 mA to 630 mA / 640 mA / 650 mA. (Predicate: From 10 mA to 500 mA) |
| Collimator: | Ralco R108F (Two versions: Basic and Advanced). (Predicate: Ralco R221 DHHS) |
| Digital X-ray Panels Supplied: | CANON CXDI-401C Wireless, CANON CXDI-701C Wireless, CANON CXDI-801C Wireless. (SAME as Predicate). Plus: CANON CXDI-710C Wireless, CANON CXDI-810C Wireless, CANON CXDI-410C Wireless. (New panels, but all previously FDA cleared). |
| Software: | Canon control software CXDI-NE. (SAME as Predicate, updated in K190368) |
| Panel Interface: | Ethernet or Wi-Fi wireless. (SAME as Predicate) |
| Meets US Performance Standard: | YES 21 CFR 1020.30. (SAME as Predicate) |
| Power Source: | Universal power supply, from 100 V~ to 240 V~. 1 phase, 1.1 kVA. (Predicate: Input transformer with 7 input voltage taps; AC 20 amp and Batteries) |
| Safety and Effectiveness: | "The results of bench testing indicate that the new devices are as safe and effective as the predicate devices. Proper system operation is fully verified upon installation. We verified that the modified combination of components worked properly and produced diagnostic quality images as good as our predicate generator/panel combination." |
| Compliance with International Standards: | IEC 60601-1:2005+A1:2012 (Edition 3.1), IEC 60601-1-2:2014 (Edition 4.0), IEC 60601-1-3:2008+A1:2013 (Edition 2.1), IEC 60601-2-54:2009+A1:2015 (Edition 1.1), IEC 60601-2-28:2010 (Edition 2.0), IEC 60601-1-6:2010 + A1:2013 (Edition 3.1), IEC 62304:2006 + A1:2016 (Edition 1.1). (Compliance affirmed through non-clinical testing) |
Regarding the Study Proving Acceptance Criteria:
The "study" in this submission is primarily a non-clinical bench testing and comparative analysis to establish substantial equivalence to a legally marketed predicate device (K161345 RadPRO® Mobile 40kW; RadPRO® Mobile 40kW FLEXPLUS).
2. Sample Size Used for the Test Set and Data Provenance:
- Sample Size: Not specified in terms of number of images or patients. The non-clinical testing involved "Systems covering all generator/panel combinations were assembled and tested." This suggests testing of device configurations rather than a "test set" of medical images or patient data in the typical sense of an AI/CAD device.
- Data Provenance: Not applicable in the context of clinical data for a "test set." The testing was performed in a laboratory environment, using the physical devices for functional and performance verification.
3. Number of Experts and Qualifications to Establish Ground Truth:
- Not applicable as this was a non-clinical submission for substantial equivalence based on technical specifications and functionality. Clinical trials requiring expert reads for ground truth were not performed.
4. Adjudication Method:
- Not applicable as there was no clinical reading study or consensus required for ground truth determination.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- No, an MRMC comparative effectiveness study was not done. The document states: "Summary of clinical testing: Clinical testing was not required to establish substantial equivalence because all digital x-ray receptor panels have had previous FDA clearance." This device is a mobile X-ray system, not an AI or CAD software that assists human readers.
6. Standalone (Algorithm Only Without Human-in-the-loop) Performance:
- Not applicable. This device is an X-ray imaging system, not an algorithm that performs a diagnostic task independently. Its performance is related to image quality, radiation output, mechanical safety, and electrical safety.
7. Type of Ground Truth Used:
- Technical Specifications and Compliance with Standards: The "ground truth" for this submission are the established performance characteristics of the predicate device and the specified requirements of relevant international safety and performance standards (e.g., IEC 60601 series, 21 CFR 1020.30). The "truth" is that the device meets these engineering and regulatory benchmarks.
- The document mentions that the modified combination of components "produced diagnostic quality images as good as our predicate generator/panel combination," implying an internal assessment of image quality, but no explicit detailed ground truth generation method is described for clinical image interpretation.
8. Sample Size for the Training Set:
- Not applicable. This is not an AI/ML device that requires a training set of data. The device's components (X-ray generator, panels, software) are either updates to existing technology or previously cleared devices.
9. How the Ground Truth for the Training Set Was Established:
- Not applicable, as there is no training set for an AI/ML algorithm.
(154 days)
The Phoenix orthotically fits to the lower limbs and trunk. The device is intended to enable individuals with spinal cord injury at levels T4 to L5 to perform ambulatory functions in rehabilitation institutions in accordance with the user assessment and training certification program. This device is not intended for sports or stair climbing.
The Phoenix™ is a wearable, powered exoskeleton that assists a trained user to sit, stand, walk, and turn. The Phoenix consists of a pair of motorized leg braces coupled to a torso module, a lithium-ion battery pack, a main controller unit, and a wireless user interface attached to the handle of an assistive device (such as a crutch, walker, or parallel bars), control software, and mobile Android tablet hosting a mobile app. The Phoenix dimensions, such as spine length, torso hip width, femur length, femur bracket, tibia length, tibia bracket, and foot plate length can be adjusted individually. The Phoenix is coupled to the user via soft-good components (i.e. shoulder straps, waist pads, thigh straps, and shin pads), which can be adjusted to accommodate various users' dimensions.
The provided document describes the Phoenix™ powered exoskeleton and its regulatory submission (K183152). It outlines performance data to demonstrate its safety and effectiveness and its substantial equivalence to a predicate device.
Here's a breakdown of the requested information based on the document:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are not explicitly stated in a quantitative manner as pass/fail thresholds for specific metrics before the study. Instead, the study objectives and the results achieved are presented as evidence that the device meets its intended purpose. The table below synthesizes the implicit acceptance criteria from the study objectives and the reported performance.
| Acceptance Criterion (Implied from Study Objective) | Reported Device Performance (Result) |
|---|---|
| Safe and effective for the intended use (enabling non-ambulatory to poorly ambulatory individuals with SCI T4-L5 to stand up and walk under various conditions) | Safety: "Adverse Events (AE) reported during the study included minor instances of bruising. The causes attributed to these reported incidents were related to improper fitting or improper padding... There were no Unanticipated Adverse Events (UAE)." "The clinical study concluded that the Phoenix device is safe and effective for its intended use..." Effectiveness: All detailed performance metrics below support effectiveness. |
| Participants with SCI T4-L5 can safely complete transitional movements (stand up, turn, sit down) and walk using Phoenix with minimal contact assistance or Functional Independence Measure (FIM) | Transitional Movements (TUG Test): 39 out of 40 subjects completed with minimal contact assistance (FIM score of 4 or higher); 1 subject completed with moderate contact assistance (FIM score of 3). Level of Assistance (WISC-II): Averaged mean scores of 8.60 (±2.19) for the final assessment. Level of Assistance (FIM): "FIM scores as noted previously support that subjects were capable of managing all scenarios presented..." |
| Participants with SCI T4-L5 are able to achieve walking during the 10 Meter Walk Test (10MWT) and 6 Minute Walk Test (6MWT). | 10MWT: All participants (40/40) were able to complete the 10MWT. Mean FIM was 4.6 (±0.50). Average completion time was 61.9 seconds (±34.64), with a mean speed of 0.12 m/s (±0.06). 6MWT: Mean FIM was 4.37 (±0.49), "indicating an acceptable level of functional independence." |
| User exertion for basic level-ground walking (Modified Borg Rating of Perceived Exertion) | Averaged results of 3.3 for indoor level-ground walking at the end of sessions, corresponding to an exertion level just above "moderate." |
2. Sample size used for the test set and the data provenance
- Sample Size: 40 subjects.
- Data Provenance: The document does not specify the country of origin of the data. It states, "The study was performed in compliance with Good Clinical Practices (GCP) with subjects enrolled in an IRB approved study that were consented for participation according to the intended use of the device, defined inclusion criteria, and defined exclusion criteria..." This implies a prospective clinical study.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not mention the use of experts to establish a "ground truth" for the test set in the traditional sense of diagnostic accuracy studies (e.g., radiologists interpreting images). Instead, the study evaluates the functional performance and safety of the exoskeleton in human subjects. The outcome measures (FIM, WISC-II, 10MWT, 6MWT, TUG Test, Modified Borg) are objective measures or standardized assessments commonly used in rehabilitation, performed by trained clinicians/investigators within the study protocol. The "ground truth" here is the direct, observed performance of the subjects using the device.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
The concept of an adjudication method (like 2+1 for conflicting interpretations) is not applicable here as this is a functional performance and safety study, not a diagnostic study requiring interpretation of outcomes by multiple experts. The clinical outcomes were directly measured or observed by the study staff.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No, a multi-reader, multi-case (MRMC) comparative effectiveness study was not done. This study is a clinical trial assessing the performance of a medical device (exoskeleton), not an AI-powered diagnostic tool. Therefore, the concept of human readers improving with or without AI assistance is irrelevant to this document.
6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done
No, a standalone algorithm-only performance study was not done. The Phoenix™ is a physical medical device (exoskeleton) intended for use by individuals with spinal cord injury in rehabilitation settings. Its entire purpose involves human-in-the-loop performance (the user wearing and operating the device under clinician supervision).
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For this functional performance study, the "ground truth" consists of observed clinical outcomes data and standardized functional assessment scores collected directly from the subjects using the Phoenix™ device in a controlled clinical setting (a brief illustrative tabulation sketch follows the list below). These include:
- Functional Independence Measure (FIM) scores
- WISC-II scores (likely a typo for WISCI II - Walking Index for Spinal Cord Injury)
- 10 Meter Walk Test (10MWT) results (time, speed)
- 6 Minute Walk Test (6MWT) results (implied distance/FIM score)
- Timed Up-and-Go (TUG) Test results
- Modified Borg Rating of Perceived Exertion scores
- Adverse Event (AE) reporting.
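The submission contains no software component, but a small tabulation sketch can make the structure of these outcome data concrete. Everything below is an assumption for illustration (the record layout, field names, scale ranges, and values are hypothetical, not data from the study); it simply shows how per-subject scores on the measures listed above could be recorded and reduced to the mean (±SD) summaries quoted earlier.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class SubjectOutcomes:
    """Hypothetical per-subject record for the outcome measures listed above."""
    fim_10mwt: int       # FIM level-of-assistance score for the 10MWT (1-7 scale)
    wisci_ii: int        # Walking Index for Spinal Cord Injury II (0-20 scale)
    tug_time_s: float    # Timed Up-and-Go completion time, in seconds
    borg_rpe: float      # Modified Borg Rating of Perceived Exertion (0-10 scale)

def mean_sd(values) -> str:
    """Format a list of scores as 'mean (±SD)', the style used in the study summary."""
    return f"{mean(values):.2f} (±{stdev(values):.2f})"

# Illustrative records only; not data from the clinical study.
subjects = [
    SubjectOutcomes(fim_10mwt=5, wisci_ii=9,  tug_time_s=95.0,  borg_rpe=3.0),
    SubjectOutcomes(fim_10mwt=4, wisci_ii=8,  tug_time_s=112.0, borg_rpe=3.5),
    SubjectOutcomes(fim_10mwt=5, wisci_ii=10, tug_time_s=88.0,  borg_rpe=3.0),
]

print("FIM (10MWT):", mean_sd([s.fim_10mwt for s in subjects]))
print("WISCI II:   ", mean_sd([s.wisci_ii for s in subjects]))
print("Borg RPE:   ", mean_sd([s.borg_rpe for s in subjects]))
```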
8. The sample size for the training set
The document does not refer to a "training set" in the context of machine learning or AI algorithms. The Phoenix™ is a powered exoskeleton, not an AI diagnostic tool that requires a training data set for model development.
The "training with the device" described for the 40 enrolled subjects over their 20 study sessions refers to patients and clinicians learning to operate the device, not to training data for an algorithm.
9. How the ground truth for the training set was established
As there is no mention of a "training set" for an algorithm in this context, this question is not applicable to the provided document. The device's control parameters and mechanisms are tuned during its engineering and development phase, but this is distinct from establishing "ground truth for a training set" in an AI/ML performance study.
(55 days)
The Phoenix Atherectomy System is intended for use in atherectomy of the peripheral vasculature. It is not intended for use in coronary, carotid, iliac or renal vasculature.
The Phoenix Atherectomy System is a sterile, single-use device designed for atherectomy of the peripheral vasculature. The Phoenix Atherectomy System has two main components: the Phoenix Catheter and the Phoenix Handle. The Phoenix Catheter is a flexible, over-the-wire (OTW), front-cutting Catheter that continuously captures and clears debulked plaque proximally through the Catheter and Handle into a collection reservoir that resides outside the patient. For use, the Phoenix Catheter is inserted into the Phoenix Handle. The Handle incorporates a self-contained battery-powered motor designed to drive and rotate the cutter of the Phoenix Atherectomy Catheter at its specified rotational speed. The device is activated by an ON/OFF slider switch on the top of the Handle. A Wire Support Clip is used to hold a guidewire in a fixed position relative to the Handle and prevent guidewire rotation during the procedure. The Catheter, Handle, and Wire Support Clip are packaged as sterile, single-use components of the Phoenix Atherectomy System.
The provided text is a 510(k) summary for a medical device called the Phoenix® Atherectomy System. It describes the device, its intended use, and the testing conducted to demonstrate substantial equivalence to a predicate device. However, it does not contain information related to software performance, acceptance criteria for an AI/ML device, or any study involving human readers or expert consensus on image interpretation.
The document is about a mechanical atherectomy system used for clearing peripheral vasculature. The "testing summary" section lists various engineering and functional tests to demonstrate the device's performance and safety (e.g., dimensional inspection, simulated use, torque tests, temperature rise, trackability, etc.). These are typical for a physical medical device, not for a software-based AI/ML product.
Therefore, I cannot provide a response that directly answers your request using the provided text. The questions you posed are specifically designed for evaluating an AI/ML-based medical device, particularly one that involves image interpretation and human expert involvement in ground truth establishment.
If you have a different document describing an AI/ML medical device, please provide it, and I would be happy to analyze it according to your criteria.
(157 days)
The Phoenix Atherectomy System is intended for use in atherectomy of the peripheral vasculature. The system is not intended for use in the coronary, carotid, iliac, pulmonary, or renal vasculature.
Phoenix Atherectomy Plus System:
When used with the Phoenix Aspiration Pump as the vacuum source, the Phoenix 2.4mm Deflecting Atherectomy System is indicated for the removal of thrombus from vessels of the peripheral arterial vasculature.
The Phoenix 2.4mm Atherectomy Plus System comprises the currently cleared Phoenix 2.4mm Deflecting Atherectomy System (K172386) and a new Phoenix Aspiration Pump, which, when used in conjunction with each other, are designed for removal of thrombus from vessels of the peripheral arterial vasculature.
The Phoenix 2.4mm Deflecting Atherectomy System and the Phoenix Aspiration Pump are sterile, single-use devices that are sterilized using Ethylene Oxide (EO). The Phoenix 2.4mm Deflecting Atherectomy System comprises the Phoenix 2.4mm Deflecting Atherectomy Catheter and the Phoenix Atherectomy Handle with Wire Support Clip.
The Phoenix Catheter is an over-the-wire (OTW), multi-lumen Catheter with a cutter at the distal tip that continuously captures and clears debulked (excised) material proximally through the Catheter and Handle into a collection reservoir that resides outside the patient. For use, the Phoenix Catheter is inserted into the Phoenix Handle. The Handle incorporates a self-contained battery-powered motor designed to drive and rotate the cutter of the Phoenix Atherectomy Catheter at its specified rotational speed, and is activated by an ON/OFF slider switch on the top of the Handle. A Wire Support Clip is used to hold a guidewire in a fixed position relative to the Phoenix Handle and prevent guidewire rotation during the procedure. The Catheter, Handle, and Wire Support Clip are packaged as sterile, single-use components of the Phoenix Atherectomy System. The Phoenix Catheter is compatible with commercially available 0.014" exchange length (260 cm or greater) guidewires.
For the purpose of this 510(k), the design of the Phoenix 2.4mm Deflecting Atherectomy System remains unchanged.
The Phoenix Aspiration Pump connects to the disposal outlet of the Phoenix Catheter via the pump's aspiration tubing. The exit end of the aspiration tubing is connected to a waste disposal bag to collect aspirated material. A 60 ml syringe is provided as part of the Aspiration Pump Assembly to assist in priming the pump aspiration tubing and purging the aspiration system of air. The Aspiration Pump is battery-operated and is operated by an ON/OFF switch. It serves as a vacuum source for the Aspiration System for aspirating the thrombus from the target vessel out of the Phoenix Catheter via the pump aspiration tubing and into the disposal bag. The Aspiration Pump is non-patient contacting.
The provided text is a 510(k) summary for the Phoenix 2.4mm Atherectomy Plus System. It describes the device, its intended use, and a comparison to predicate and reference devices, as well as a summary of testing performed to demonstrate substantial equivalence. However, it does not contain the specific information requested about acceptance criteria and a study that proves the device meets those criteria in the context of an AI/algorithm-driven device performance study.
The document describes the device as a mechanical atherectomy system with an aspiration pump for removing thrombus, which is a physical device, not an AI or algorithm. The "Testing Summary" section lists various engineering and preclinical tests (e.g., visual inspection, vacuum and leak test, flow rate test, system simulated use test, preclinical animal testing), which are typical for medical devices of this nature. There is no mention of an algorithm, AI, or any data-driven performance study comparing human readers with and without AI assistance, or standalone algorithm performance, or ground truth establishment based on expert consensus for image analysis.
Therefore, I cannot provide the requested information from the given text. The text does not detail an AI/algorithm-driven study with acceptance criteria.