
Found 133 results

510(k) Data Aggregation

    K Number
    K251987
    Manufacturer
    Date Cleared
    2025-09-23

    (88 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Rapid Aortic Measurements (AM) is an image analysis and measurement device to evaluate aortic and iliac arteries in contrast enhanced and non-contrast CT imaging datasets acquired of the chest, abdomen, and/or pelvis. The module segments the aorta, iliacs, and major branching vessels and provides 2D and 3D visualizations of the segmented vessels.

    Outputs of the device include: Centerline measurements of the aorta and iliacs, Aortic Zone Measurements (Maximum Oblique Diameter), Fixed Measurements of the aorta and left and right iliacs, 3D Volume Renderings, Rotations, Curved Planar Reformations (CPRs) of the isolated left and right iliacs, aortic oblique Multiplanar Reconstructions (MPRs), and Longitudinal Tracking visualizations.

    Rapid Aortic Measurements is an aid to physician decision making. Its results are not intended to be used on a stand-alone basis for clinical decision-making or otherwise preclude clinical assessment.

    Rapid Aortic Measurements is indicated for adults.

    Precautions/Exclusions:

    • Series containing excessive patient motion or metal implants may impact module output quality.
    • The AM module will not process series that meet the following module exclusion criteria:
      • Series acquired with cone-beam CT scanners (C-arm CT)
      • Series that are non-axial, or axial oblique greater than 5 degrees
      • Series containing improperly ordered or missing slices where the gap is larger than 3 times the median inter-slice distance (e.g., as a result of manual correction by an imaging technician)
      • Series with less than 3 cm of target anatomical zones (e.g., aorta or right/left iliac artery)
      • NCCT, CECT, CTA, or CTPA datasets with:
        1. in-plane X and Y FOV < 160 mm
        2. Z FOV (cranio-caudal transverse anatomical coverage) < 144 mm
        3. in-plane pixel spacing (X & Y resolution) < 0.3 mm or > 1.0 mm
        4. inter-slice distance < 0.3 mm or > 3 mm
        5. slice thickness > 3 mm
        6. data acquired at x-ray tube voltage < 70 kVp or > 150 kVp, including single-energy, dual-energy, or virtual monochromatic datasets
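The numeric exclusion criteria above amount to a simple metadata screen on each incoming series. As a rough illustration only (the function and its parameter names are hypothetical, not the vendor's API), the checks could be sketched as:

```python
# Hypothetical sketch of the module exclusion screen described above;
# the function and parameter names are illustrative, not Rapid's API.

def series_excluded(fov_xy_mm, fov_z_mm, pixel_spacing_mm,
                    inter_slice_mm, slice_thickness_mm, kvp):
    """Return True if a CT series trips any module exclusion criterion."""
    return (
        fov_xy_mm < 160                          # in-plane X/Y FOV too small
        or fov_z_mm < 144                        # cranio-caudal coverage too short
        or not (0.3 <= pixel_spacing_mm <= 1.0)  # pixel spacing out of range
        or not (0.3 <= inter_slice_mm <= 3.0)    # inter-slice distance out of range
        or slice_thickness_mm > 3.0              # slices too thick
        or not (70 <= kvp <= 150)                # tube voltage outside window
    )

# A typical CTA series (200 mm FOV, 0.6 mm pixels, 1 mm spacing, 120 kVp)
# passes the screen:
print(series_excluded(200, 300, 0.6, 1.0, 1.5, 120))  # → False
```

In practice these values would be read from DICOM header attributes such as PixelSpacing, SliceThickness, and KVP.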
    Device Description

    Rapid Aortic Measurements (AM) is a Software as a Medical Device (SaMD) image processing module and is part of the Rapid Platform. It provides analysis of chest, abdomen, and pelvis non-contrast CT (NCCT) and contrast-enhanced imaging (CECT, CTPA (CT Pulmonary Angiogram), and CTA (CT Angiography)) for the reconstructed 3D visualization and measurement of arteries from the aortic root to the iliac arteries.

    Rapid AM is integrated into the Rapid Platform, which provides common functions and services to support image processing modules, such as DICOM filtering, job and interface management, and external-facing cybersecurity controls. The integrated module and Platform can be installed on-premises within the customer's infrastructure behind their firewall, or in a hybrid on-premises/cloud configuration. The Rapid Platform accepts DICOM images and, upon processing, returns the processed DICOM images to the source imaging modality or PACS.

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for "Rapid Aortic Measurements":

    Acceptance Criteria and Device Performance

    Acceptance Criteria Category | Specific Metric | Acceptance Criteria | Reported Device Performance | Study Type
    Segmentation Quality (VR Outputs) | Clinical Accuracy (agreement with source DICOM) | 100% agreement | 100% agreement | Segmentation Quality Study
    Segmentation Quality (CPR/MPR Outputs) | CPR/MPR Quality | 100% agreement | 100% agreement | Segmentation Quality Study
    Segmentation Quality (CPR/MPR Outputs) | Anatomical Labeling | 100% agreement between readers for all labels | 100% agreement for all labels | Segmentation Quality Study
    Segmentation Quality (Zone Measurement Outputs) | Maximum Oblique Diameter Location Accuracy | 100% agreement between readers for all segments | 100% agreement for all segments | Segmentation Quality Study
    Segmentation Quality (Longitudinal Results) | Clinical Accuracy of Measurements | Clinically accurate measurements placed within respective zones | Deemed clinically accurate | Segmentation Quality Study
    Segmentation Accuracy (VR Outputs) | Average Dice Coefficient | Not explicitly stated as acceptance criteria, but reported | 0.93 | Segmentation Accuracy Study
    Segmentation Accuracy (VR Outputs) | Average Hausdorff Distance | Not explicitly stated as acceptance criteria, but reported | 0.54 mm | Segmentation Accuracy Study
    Segmentation Accuracy (CPR/MPR Visualizations) | Average Hausdorff Distance (centerline accuracy) | Not explicitly stated as acceptance criteria, but reported | 0.59 mm | Segmentation Accuracy Study
    Segmentation Accuracy (Ground Truth Reproducibility) | Average Dice Coefficient | Not explicitly stated as acceptance criteria, but reported | 0.95 | Segmentation Accuracy Study
    Measurement Reports | Mean Absolute Error (MAE) compared to ground truth | Not explicitly stated as an acceptance criterion, but reported and stated to "compare favorably with the reference device" | 0.22 cm | Segmentation Accuracy Study
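For context on the Dice coefficient reported in the segmentation accuracy rows: it measures volumetric overlap between a predicted segmentation and ground truth, with 1.0 meaning perfect agreement. A minimal sketch with toy numpy masks (not the device's actual outputs):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two boolean masks (1.0 = perfect overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy 10x10 "ground truth" and "prediction" squares, offset by one row.
gt = np.zeros((10, 10), dtype=bool);   gt[2:8, 2:8] = True
pred = np.zeros((10, 10), dtype=bool); pred[3:8, 2:8] = True

print(round(dice(gt, pred), 3))  # → 0.909
print(dice(gt, gt))              # → 1.0
```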

    Study Details:

    1. Sample Sizes and Data Provenance:

    • Test Set Sample Size: 108 cases from 115 unique patients.
    • Data Provenance:
      • Country of Origin: 54 US, 24 OUS (Outside US), 30 unknown.
      • Retrospective/Prospective: Not explicitly stated, but the description "data used during model training" and "test dataset was independent" suggests a retrospective approach.

    2. Number of Experts and Qualifications for Ground Truth (Test Set):

    • Number of Experts: Up to three clinical experts (for segmentation quality/clinical accuracy). The number of experts involved in establishing ground truth for the quantitative segmentation and measurement accuracy metrics is not explicitly stated, though expert involvement is implied.
    • Qualifications of Experts: Not explicitly stated beyond "clinical experts."

    3. Adjudication Method (Test Set):

    • Adjudication Method: "Consensus of up to three clinical experts" for the segmentation quality/clinical accuracy endpoint. For other endpoints where "agreement between readers" is mentioned, it implies a consensus or agreement-based adjudication. No specific scheme like "2+1" or "3+1" is detailed.

    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • Was it done? No, an MRMC comparative effectiveness study was not explicitly mentioned. The FDA letter describes standalone device performance against ground truth and expert consensus.
    • Effect Size of Human Readers with/without AI: Not applicable, as an MRMC study was not conducted or reported.

    5. Standalone Performance Study:

    • Was it done? Yes, both a "Segmentation Quality Study" and a "Segmentation Accuracy Study" were conducted to assess the algorithm's standalone performance. The results reported in the table above are from these standalone evaluations.

    6. Type of Ground Truth Used:

    • Ground Truth Type:
      • Expert Consensus: Used for segmentation quality and clinical accuracy, determined by the "consensus of up to three clinical experts against the source DICOM images."
      • Approved Ground Truth Segmentations: For measurement reports, AM measurements were compared to "measurements taken from approved ground truth segmentations using a validated technique." This implies expert-derived and validated segmentations serve as the reference for measurements.

    7. Sample Size for Training Set:

    • Training Set Sample Size: Not explicitly stated. The document mentions "The test dataset was independent from the data used during model training," but does not provide details on the size of the training dataset itself.

    8. How the Ground Truth for the Training Set was Established:

    • How Ground Truth Established: Not explicitly stated in the provided text. The document only mentions that the test dataset was independent from the training data.

    K Number
    K251533
    Manufacturer
    Date Cleared
    2025-09-04

    (108 days)

    Product Code
    Regulation Number
    892.2080
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Rapid OH is a radiological computer aided triage and notification software indicated for suspicion of Obstructive Hydrocephalus (OH) in non-enhanced CT head images of adult patients. The device is intended to assist trained clinicians in workflow prioritization triage by providing notification of suspected findings in head CT images.

    Rapid OH uses an artificial intelligence algorithm to analyze images and highlight cases with suspected OH on a server or standalone desktop application, in parallel to the ongoing standard-of-care image interpretation. The user is presented with notifications for cases with suspected OH findings. Notifications include compressed preview images that are meant for informational purposes only and are not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.

    The results of Rapid OH are intended to be used in conjunction with other patient information and based on professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.

    Contraindications/Limitations/Exclusions:

    • Rapid OH is intended for use for adult patients.
    • Input data image series containing excessive patient motion or metal implants may impact module analysis accuracy, robustness and quality.
    • Ventriculoperitoneal shunts are contraindicated

    Exclusions:

    • Series with missing slices or improperly ordered slices
    • Data acquired at x-ray tube voltage < 100 kVp or > 140 kVp
    • Data not representing human head or head/neck anatomical regions
    Device Description

    The Rapid OH software device is a radiological computer-aided triage and notification software device using AI/ML. It is a non-contrast CT (NCCT) processing module which operates within the integrated Rapid Platform to provide notification of suspected findings of obstructive hydrocephalus (OH). Rapid OH is a SaMD that analyzes input NCCT images, provided in DICOM format, for notification of suspected findings for workflow prioritization.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Rapid OH device, based on the provided FDA 510(k) clearance letter:

    1. Table of Acceptance Criteria and Reported Device Performance

    Metric | Acceptance Criteria | Reported Device Performance
    Primary Endpoint: Sensitivity (Se) | Not explicitly stated as a separate acceptance criterion, but the reported performance met the statistical confidence interval | 89.5% (95% CI: 0.837–0.935)
    Primary Endpoint: Specificity (Sp) | Not explicitly stated as a separate acceptance criterion, but the reported performance met the statistical confidence interval | 97.6% (95% CI: 0.940–0.991)
    Secondary Endpoint: Time to Notification | Not explicitly stated as a numerical acceptance criterion, but the reported performance indicates efficiency | 30.3 seconds (range 10.5–55.5 seconds)

    Note: The document states "Standalone performance primary endpoint passed with sensitivity (Se) of 89.5% (95% CI:0.837-0.935) and specificity (Sp) of 97.6% (95% CI:0.940-0.991)". While explicit numerical acceptance criteria for sensitivity and specificity are not provided, the "passed" statement implies that the reported performance fell within pre-defined acceptable ranges or met a statistical hypothesis.
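As a back-of-envelope check of what those headline figures mean: sensitivity is TP/(TP+FN) and specificity is TN/(TN+FP). The counts below are invented for illustration only; the letter does not report the actual confusion matrix.

```python
def se_sp(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical split of the 320-case test set that reproduces the
# reported figures (not the study's actual counts):
se, sp = se_sp(tp=170, fn=20, tn=127, fp=3)
print(f"Se={se:.3f}, Sp={sp:.3f}")  # → Se=0.895, Sp=0.977
```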

    2. Sample Size for the Test Set and Data Provenance

    • Sample Size for Test Set: 320 cases
    • Data Provenance: The document notes diversity amongst demographics (M: 45%, F: 54%), sites, scanner manufacturers (GE, Philips, Siemens, Toshiba), and confounders (ICH, ischemic stroke, tumor, cyst, aqueductal stenosis, mass effect, brain atrophy, and communicating hydrocephalus). Specific countries of origin are not stated, but the multiple manufacturers and sites (74 sites for algorithm development) suggest a diverse, likely multi-site and potentially multi-country dataset, though this is not definitively confirmed for the test set itself. The dataset appears to be retrospective, as it was assembled from existing cases for algorithm development and validation.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: 3 experts (implied from "Truthing was established using 2:3 experts.")
    • Qualifications of Experts: Not explicitly stated in the provided text. They are referred to as "experts." In regulatory contexts, these would typically be radiologists or neuro-radiologists with significant experience in interpreting head CTs.

    4. Adjudication Method for the Test Set

    • Adjudication Method: "2:3 experts." This means that ground truth was established by agreement from at least 2 out of 3 experts.
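The 2-of-3 rule is just a majority vote over three expert reads. A toy sketch (the function name and boolean labels are illustrative, not from the document):

```python
# Illustrative 2:3 adjudication rule: the ground-truth label is whatever
# at least 2 of the 3 expert reads agree on. Reads are booleans here
# ("finding present"), purely for demonstration.

def adjudicate(reads):
    """Ground-truth label under a 2:3 rule: at least 2 of 3 experts agree."""
    assert len(reads) == 3
    return sum(reads) >= 2

print(adjudicate([True, True, False]))   # two of three positive → True
print(adjudicate([False, True, False]))  # only one positive → False
```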

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study Done: No, an MRMC comparative effectiveness study was not explicitly mentioned for this device. The study described is a standalone performance validation of the algorithm.

    6. Standalone Performance (Algorithm Only without Human-in-the-Loop) Done

    • Standalone Performance Done: Yes, "Final device validation included standalone performance validation. This performance validation testing demonstrated the Rapid OH device provides accurate representation of key processing parameters under a range of clinically relevant conditions associated with the intended use of the software." The reported sensitivity and specificity values are for this standalone performance.

    7. Type of Ground Truth Used

    • Type of Ground Truth: Expert consensus ("Truthing was established using 2:3 experts.")

    8. Sample Size for the Training Set

    • Sample Size for Training Set: 3340 cases (This refers to "Algorithm development" which encompasses training and likely internal validation/development sets).

    9. How the Ground Truth for the Training Set Was Established

    • How Ground Truth Was Established (Training Set): The document states "Algorithm development was performed using 3340 cases... Truthing was established using 2:3 experts." This implies that the same expert consensus method (2 out of 3 experts) used for the test set was also used to establish ground truth for the cases used in algorithm development (which includes the training set).

    K Number
    K252526
    Device Name
    Rapid DeltaFuse
    Manufacturer
    Date Cleared
    2025-08-26

    (15 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Rapid DeltaFuse is an image processing software package to be used by trained professionals, including but not limited to physicians and medical technicians.

    The software runs on a standard off-the-shelf computer or a virtual platform, such as VMware, and can be used to perform image viewing, processing, and analysis of images.

    Data and images are acquired through DICOM compliant imaging devices.

    Rapid DeltaFuse provides both viewing and analysis capabilities for imaging datasets acquired with Non-Contrast CT (NCCT) images.

    The CT analysis includes NCCT maps showing areas of hypodense and hyperdense tissue, including overlays of time-differentiated scans of the same patient.

    Rapid DeltaFuse is intended for use for adults.

    Device Description

    Rapid DeltaFuse (DF) is a Software as a Medical Device (SaMD) image processing module and is part of the Rapid Platform. It provides visualization of time-differentiated neuro hyperdense and hypodense tissue from Non-Contrast CT (NCCT) images.

    Rapid DF is integrated into the Rapid Platform, which provides common functions and services to support image processing modules, such as DICOM filtering, job and interface management, and external-facing cybersecurity controls. The integrated module and Platform can be installed on-premises within the customer's infrastructure behind their firewall, or in a hybrid on-premises/cloud configuration. The Rapid Platform accepts DICOM images and, upon processing, returns the processed DICOM images to the source imaging modality or PACS.

    AI/ML Overview

    The provided FDA 510(k) clearance letter for Rapid DeltaFuse describes the acceptance criteria and the study that proves the device meets those criteria, though some details are absent.

    Here's a breakdown of the information found in the document, structured according to your request:


    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are not explicitly stated in a quantified manner as a target. Instead, the document describes the type of performance evaluated and the result obtained.

    Acceptance Criteria (Implied/Description of Test) | Reported Device Performance
    Co-registration accuracy for slice overlays | DICE coefficient of 0.94 (lower bound 0.93)
    Software performance meeting design requirements and specifications | "Software performance testing demonstrated that the device performance met all design requirements and specifications."
    Reliability of processing and analysis of NCCT medical images for visualization of change | "Verification and validation testing confirms the software reliably processes and supports analysis of NCCT medical images for visualization of change."
    Performance of hyperdensity and hypodensity display with image overlay | "The Rapid DF performance has been validated with a 0.95 DICE coefficient for the overlay addition to validate the overlay performance..."

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: 14 cases were used for the co-registration analysis. The sample size for other verification and validation testing is not specified.
    • Data Provenance: Not explicitly stated (e.g., country of origin, retrospective or prospective).

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    • This information is not provided in the document. The document refers to "performance validation testing" and "software verification and validation testing" but does not detail the involvement of human experts or their qualifications for establishing ground truth.

    4. Adjudication Method for the Test Set

    • This information is not provided in the document.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No MRMC comparative effectiveness study was reported. The document focuses on the software's performance (e.g., DICE coefficient for co-registration) rather than its impact on human reader performance.

    6. Standalone (Algorithm Only) Performance Study

    • Yes, a standalone performance study was done. The reported DICE coefficients (0.94 and 0.95) are measures of the algorithm's performance in co-registration and overlay addition, independent of human interaction.

    7. Type of Ground Truth Used

    • The document implies that the ground truth for co-registration and overlay performance was likely established through a reference standard based on accurate image alignment and feature identification, against which the algorithm's output (DICOM images with overlays) was compared. The exact method of establishing this reference standard (e.g., manual expert annotation, a different validated algorithm output) is not explicitly stated.

    8. Sample Size for the Training Set

    • The document does not specify the sample size used for training the Rapid DeltaFuse algorithm.

    9. How Ground Truth for the Training Set Was Established

    • The document does not specify how the ground truth for the training set was established.

    K Number
    K251151
    Device Name
    Rapid CTA 360
    Manufacturer
    Date Cleared
    2025-07-16

    (93 days)

    Product Code
    Regulation Number
    892.2080
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Rapid CTA 360 is a radiological computer aided triage and notification software indicated for use in the analysis of CTA adult head images. The device is intended to assist hospital networks and trained clinicians in workflow triage by flagging and communication of suspected positive Large and Medium Vessel Occlusion findings in head CTA images including the ICA (C1-C5), MCA (M1-M3), ACA, PCA, Basilar and Vertebral vascular segments.

    Rapid CTA 360 uses an AI software algorithm to analyze images and highlight cases with suspected occlusion on a server or standalone desktop application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected LVO and MVO findings. Notifications include compressed preview images. These are meant for informational purposes only and are not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.

    The results of Rapid CTA 360 are intended to be used in conjunction with other patient information and based on professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.

    Device Description

    Rapid CTA 360 device is a radiological computer-assisted Triage and Notification Software device using AI/ML. The Rapid CTA 360 processing module operates within the integrated Rapid Platform to provide triage and notification of suspected large and medium vessel neuro-occlusions. The Rapid CTA 360 software analyzes input Head and Neck CTA images that are provided in DICOM format and provides notification of suspected positive results. The device does not alter the original medical image and is not intended to be used as a diagnostic device.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) Clearance Letter:

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criterion | Description | Reported Device Performance
    Primary Endpoint: Sensitivity | Ability of the device to correctly identify true positive cases of Large and Medium Vessel Occlusion (LVO and MVO) | 0.921 (95% CI: 0.880–0.949)
    Primary Endpoint: Specificity | Ability of the device to correctly identify true negative cases (no LVO or MVO) | 0.890 (95% CI: 0.832–0.929)
    Secondary Endpoint: Time to Notification | The time taken by the device to provide a notification of suspected occlusion | 3.2 minutes (range: 1.92–5.35 minutes)
    Sensitivity Analysis (High Grade Stenosis) | Sensitivity specifically for cases involving high grade stenosis (a potential confounder) | 87.4% (95% CI: 0.829–0.908)
    Specificity Analysis (High Grade Stenosis) | Specificity specifically for cases involving high grade stenosis (a potential confounder) | 89.0% (95% CI: 0.832–0.929)
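The letter reports 95% confidence intervals alongside each proportion but does not state how they were computed. One common method for binomial proportions is the Wilson score interval; the sketch below uses invented counts (233 detections among 253 positive cases) purely to illustrate the calculation, not the study's actual data.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (default ~95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical counts: 233 true positives among 253 positive cases.
lo, hi = wilson_ci(233, 253)
print(f"({lo:.3f}, {hi:.3f})")  # → (0.881, 0.948)
```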

    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: 403 CTA cases
    • Data Provenance: The data was collected from multiple sites; the countries are not explicitly stated, though the training data was primarily US, which suggests a similar, or at least representative, distribution for the test set. Cases were selected to cover patient demographics (age, gender), scanner manufacturers (GE, Toshiba, Siemens, Philips), and confounders. The data was "collected and blinded prior to use, per internal data management procedures which includes isolation of development and product validation cohorts," implying a retrospective collection carefully separated from the training data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: 3 experts
    • Qualifications of Experts: Not explicitly stated beyond "experts."

    4. Adjudication method for the test set

    • Adjudication Method: 2 out of 3 (2:3 concurrence). This means that for a case to be considered positive or negative for ground truth, at least two of the three experts had to agree on the finding.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • No MRMC comparative effectiveness study involving human readers with and without AI assistance was mentioned in the provided text. The study focused on the standalone performance of the AI device.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Yes, a standalone performance validation was explicitly stated as being conducted: "Final device validation included standalone performance validation, per the special controls."

    7. The type of ground truth used

    • Ground Truth Type: Expert consensus. The document states, "ground truth established by 3 experts (2:3 concurrence)."

    8. The sample size for the training set

    • Training Set Sample Size: 6264 cases

    9. How the ground truth for the training set was established

    • The document implies that the ground truth for the training set was established through expert review and annotation, as the cases were used for "Algorithm development, including training and testing." It mentions the selection criteria for cases (demographics, scanner manufacturers, confounders) which would likely lead to expert-verified labels as ground truth, but the exact method (e.g., specific number of experts, adjudication) for the training set is not detailed in the same way as for the test set.

    K Number
    K250286
    Date Cleared
    2025-07-03

    (153 days)

    Product Code
    Regulation Number
    882.5890
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Rapid2 Magnetic Stimulators (Magstim Rapid2, Magstim Super Rapid2, Magstim Super Rapid2 Plus1) are intended to stimulate peripheral nerves for relief of chronic intractable, post-traumatic and post-surgical pain for patients 18 years or older.

    Device Description

    The Magstim Rapid2, Magstim Super Rapid2, Magstim Super Rapid2 Plus1 (herein collectively referred to as "Rapid2 Magnetic Stimulators") are computerized, electromechanical medical devices that provide brief and focused magnetic pulses in order to non-invasively stimulate peripheral nerves for relief of chronic intractable, post-traumatic and post-surgical pain for patients 18 years or older. The subject device is intended to be used in hospitals and clinics such as pain management clinics.

    Rapid2 Magnetic Stimulators are integrated systems consisting of a combination of hardware, software, and accessories. Rapid2 Magnetic Stimulators are offered in multiple configurations:

    • Rapid2
    • Super Rapid2
    • Super Rapid2 Plus1

    All three configurations have identical intended use/indications for use, common specifications, equivalent performance characteristics and equivalent composition to each other. Specifically, Rapid2 and Super Rapid2 have received prior clearance under K051864 for Peripheral Nerve Stimulation (Product Code: GWF, Regulation 21 CFR 882.1870). All Rapid2 Magnetic Stimulators are made up of components that have received prior clearance under K051864 (e.g., the 3190-00, 3192-00 and 3193-00 coils) and components which have received prior clearance under K051864 but have received modifications due to aspects like obsolescence (Mainframe, Power Supply etc.).

    All Rapid2 Magnetic Stimulators are composed from the following main components:

    • Stimulating Unit & Power Supply
    • User Interface
    • Stimulating Coil
    • System and Stimulating Coil Cart and Holding Arm

    Rapid2 Magnetic Stimulators include temperature monitoring via two independent temperature sensors to ensure the surfaces of the coils do not reach unacceptable levels. The cut-off is set to act at 40°C, at which point the system is automatically disabled. Over-temperature conditions are also communicated on the User Interface (UI) via a temperature gauge and alarm system. Rapid2 Magnetic Stimulators also include the 3910-00 air-cooled coil to further mitigate temperature conditions. The 3910-00 air-cooled coil comes with all 3 configurations (Rapid2, Super Rapid2 and Super Rapid2 Plus1) as standard.

    AI/ML Overview

    The provided document is an FDA 510(k) Clearance Letter for the Rapid2 Magnetic Stimulators. It does NOT contain information about a study proving the device meets acceptance criteria related to its performance in pain relief. Instead, it focuses on demonstrating substantial equivalence to a predicate device through non-clinical testing of technical characteristics, safety standards compliance, and physical properties.

    The document does not describe an AI/ML-driven device, nor does it present results from a clinical study with patients or human readers using AI. The acceptance criteria and performance metrics described are related to physical and electrical characteristics of the magnetic stimulator, not diagnostic accuracy or clinical effectiveness in a traditional sense of "performance" as one might expect for an AI diagnostic device.

    Therefore, many of the requested items (e.g., sample size for test/training sets, data provenance, number of experts for ground truth, MRMC study, standalone performance, ground truth type) cannot be answered based on the provided text, as they pertain to clinical or AI/ML performance evaluation, which is not the subject of this 510(k) summary.

    However, I can extract information related to the technical acceptance criteria and how they align with the device's measured performance as described in the summary:


    Acceptance Criteria and Device Performance (Based on Technical and Safety Equivalence)

    The acceptance criteria for the Rapid2 Magnetic Stimulators are primarily based on demonstrating substantial equivalence to the predicate device (MagVenture Pain Therapy) in terms of technical characteristics, safety, and effectiveness for the stated indications for use. The "performance" reported is adherence to these characteristics and safety standards.

    | Acceptance Criteria Category / Characteristic | Subject Device Performance (Rapid2 Magnetic Stimulators) | Predicate Device Performance (MagVenture Pain Therapy) | Evaluation / Proof of Meeting Criteria |
    |---|---|---|---|
    | Indications for Use | Stimulate peripheral nerves for relief of chronic intractable, post-traumatic and post-surgical pain for patients 18 years or older. | Stimulate peripheral nerves for relief of chronic intractable, post-traumatic and post-surgical pain for patients 18 years or older. | Identical. Meets criteria by having the same intended use. |
    | Anatomical Sites | Any area, such as hand, arm, waist, buttock, thigh, calf, back and lower back, etc. | Any area, such as hand, arm, waist, buttock, thigh, calf, back and lower back, etc. | Identical. |
    | Treatment Facilities | Hospitals & Clinics | Hospitals & Clinics | Identical. |
    | Treatment Time | 13 minutes per session (800 seconds) | 13 minutes per session (800 seconds) | Identical. |
    | Pulse Frequency | Rapid2: 0.1 – 50 Hz (pps); Super Rapid2 and Super Rapid2 Plus1: 0.1 – 100 Hz (pps) | MagPro R30 & MagPro R30 with MagOption: 0.1 – 30 Hz (pps); MagPro X100 & MagPro X100 with MagOption: 0.1 – 100 Hz (pps) | Similar range. Subject device's range covers or extends slightly beyond predicate, but the recommended protocol (0.5 Hz) is well within both. |
    | Pulse Amplitude | 0 – 100% | 0 – 100% | Identical. |
    | On-cycle duty period | 2 – 800 seconds (0.5 Hz and up to 400 pulses) | 2 – 800 seconds (0.5 Hz and up to 400 pulses) | Identical. |
    | Off-cycle rest period | N/A | N/A | Identical. |
    | Maximum Repetition Rate | Rapid2: 50 Hz; Super Rapid2: 100 Hz; Super Rapid2 Plus1: 100 Hz | MagPro R30 & MagPro R30 with MagOption: 30 pulses per second; MagPro X100 & MagPro X100 with MagOption: 100 pulses per second | Upper limit identical compared to predicate. Substantial equivalence demonstrated despite differences in how maximum output is achieved (explained in "SE Note 1"). |
    | Pulse Width | Biphasic (300 – 425 µs) | Biphasic (280 – 320 µs) | Similar range. Differences deemed not to raise new safety/effectiveness questions due to compensating factors (see "SE Note 2"). |
    | Pulse Mode | Standard | Standard | Same. |
    | Temperature Control | Automatic disable at 40°C; includes air-cooled coil; UI communication of over-temperature. | Automatic disable at 43°C. | Comparable/Better. Subject device has a lower cutoff and additional cooling/reporting features. |
    | Peak Magnetic Field at Coil Surface | 1.0 – 1.5 T | 1.15 – 2.6 T | Substantially equivalent. Subject device is a subset of the predicate's range. Differences explained in "SE Note 2" as not raising new safety/effectiveness concerns. |
    | Peak Magnetic Field Gradient (dB/dt) at 20 mm from Coil Center | 9 – 12 kT/s | 9 – 24 kT/s | Substantially equivalent. Subject device is a subset of the predicate's range. Differences explained in "SE Note 2" as not raising new safety/effectiveness concerns. |
    | Waveform | Biphasic, Biphasic Burst | Biphasic, Monophasic, Biphasic Burst, Halfsine (combinations vary by predicate configuration) | Substantially equivalent. Subject device's waveform is within the range available in the predicate. |
    | Software/Firmware Control | Yes | Yes | Identical. Verified per IEC 62304. |
    | Power Supply Type | Power supply via dedicated power supply modules, each using a separate input mains line cord. | Power supply via isolation transformer. | Similar. |
    | Power Consumption | 230/240 V systems: 3000 VA peak per input; 115 V systems: 2300 VA peak per input | Maximum 2700 VA | Similar. |
    | User Interface | LCD capacitive touchscreen | LED display | Similar. A difference in display technology, but performs the same function. |
    | Housing Material Construction | Stimulator: PUR, stainless/galvanized steel; Coils: PC, PUR | Stimulator: aluminum, Aluzinc; Coils: PVC, ABS, PA, POM | Similar. Different specific materials but serve the same function. |
    | Applied Parts (Coils) | Various, including previously cleared (K051864, K080499, K130403) and new coils (4150-00, 4170-00, 4189-00, 4190-00, 4510-00). | Various, all previously cleared. | Substantially equivalent coil range. New coils are evaluated for safety and function to be equivalent (see "SE Note 2"). |
    | Applied Part Area | Butterfly coils: 152 mm – 191 mm; Circular coils: 124.5 mm | Butterfly coils: 150 mm; Circular coils: 110 – 126 mm; Special coils: 160 x 80 mm | Substantially equivalent (see "SE Note 2"). |
    | Sterilization | Non-sterile when used. | Non-sterile when used. | Identical. |
    | Electrical Safety | Complies with IEC 60601-1 Ed. 3.2 | Complies with IEC 60601-1 Ed. 3.1 | Meets/Exceeds. Complies with a newer edition of the standard. |
    | Mechanical Safety | Complies with IEC 60601-1 Ed. 3.2 | Complies with IEC 60601-1 Ed. 3.1 | Meets/Exceeds. Complies with a newer edition of the standard. |
    | Thermal Safety | Complies with IEC 60601-1 Ed. 3.2 | Complies with IEC 60601-1 Ed. 3.1 | Meets/Exceeds. Complies with a newer edition of the standard. |
    | Radiation Safety | No radiation generated. | No radiation generated. | Same. |
    | Biocompatibility | Complies with ISO 10993-1, -5, -10; materials tested for skin irritation, cytotoxicity, skin sensitization. | Complies with ISO 10993. | Same/Exceeds. Detailed compliance with relevant parts of the standard. |
    | Standards Compliance | ISO 13485 (company); IEC 60601-1-2, IEC 60601-1-6, IEC 60601-1-8, IEC 62366-1 (device). | EN ISO 13485 (company). | Same/Equivalent. Device-specific standards compliance indicated. |
    | Acoustics | Tested per IEC 60601-1 type testing and in-house, demonstrating substantially equivalent acoustic output. Labeling requires earplugs (30 dB noise reduction). | (Not specified beyond "similar") | Comparable. Demonstrated and mitigated with user instructions. |
    | E-Field Decay & Linearity of Output | Performance data showed "very similar" E-field decay, linearity, and electric/magnetic field spatial distributions. | (Not explicitly detailed, but implied to be baseline for comparison) | Comparable. Results demonstrate equivalent effects at 0 – 2 cm from coil surface. |

    Regarding the specific questions that cannot be answered from the provided text:

    1. Sample sizes used for the test set and the data provenance: Not applicable or provided. The "test set" here refers to non-clinical testing of device characteristics, not a clinical study on patients or data.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable or provided. Ground truth in this context would implicitly be engineering specifications, laboratory measurements, and standard compliance testing, not expert clinical assessment of patient data.
    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable or provided. No adjudication process detailed for establishing technical specifications.
    4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance: Not applicable. This is not an AI/ML device, nor is it a diagnostic device being evaluated for reader performance.
    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done: Not applicable. This is not an AI/ML algorithm.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): The "ground truth" for this device's performance evaluation is based on engineering specifications, direct physical measurements (e.g., of magnetic fields, temperatures, electrical properties), and compliance with international safety and performance standards (e.g., IEC 60601 series, ISO 10993).
    7. The sample size for the training set: Not applicable or provided. This device does not use a "training set" in the machine learning sense.
    8. How the ground truth for the training set was established: Not applicable or provided.

    In summary, the provided document details the 510(k) clearance process for a non-AI/ML magnetic stimulator for pain relief. The "acceptance criteria" and "performance" are framed around demonstrating substantial technical and safety equivalence to a legally marketed predicate device, rather than clinical efficacy data from patient studies or AI algorithm performance metrics.


    K Number
    K243378
    Device Name
    Rapid MLS
    Manufacturer
    Date Cleared
    2025-05-28

    (210 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The Rapid MLS software device is designed to measure the midline shift of the brain from a NCCT acquisition and report the measurements. Rapid MLS analyzes adult cases using machine learning algorithms to identify locations and measurements of the expected brain midline and any shift which may have occurred. The Rapid MLS device provides the user with annotated images showing measurements. Its results are not intended to be used on a stand-alone basis for clinical decision-making or otherwise preclude clinical assessment of NCCT cases.

    Device Description

    Rapid MLS software device is a radiological computer-assisted image processing software device using AI/ML. The Rapid MLS device is a non-contrast CT (NCCT) processing module which operates within the integrated Rapid Platform to provide a measurement of the brain midline. The Rapid MLS software analyzes input NCCT images that are provided in DICOM format and provides both a visual output containing a color overlay image displaying the difference between the expected and indicated brain midline at the Foramen of Monro; and a text file output (json format) containing the quantitative measurement.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) Clearance Letter for Rapid MLS (K243378):


    Acceptance Criteria and Device Performance

    The core of the acceptance criteria for Rapid MLS appears to be its ability to measure midline shift with an accuracy comparable to or better than human experts.

    | Acceptance Criteria | Reported Device Performance |
    |---|---|
    | Mean Absolute Error (MAE) of Rapid MLS non-inferior to MAE of experts | Rapid MLS MAE: 0.7 mm; experts' average pairwise MAE: 1.0 mm |
    | Intercept of Passing-Bablok fit (Rapid MLS vs. Reference MLS) close to 0 | Intercept: 0.12 (0, 0.2) |
    | Slope of Passing-Bablok fit (Rapid MLS vs. Reference MLS) close to 1 | Slope: 0.95 (0.9, 1.0) |
    | No bias demonstrated in differences between Rapid MLS and reference MLS | Paired t-test p-value: 0.1800 (indicates no significant bias) |
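
The two headline quantities in this table, mean absolute error and measurement bias, are simple to compute from paired device/reference measurements. A minimal sketch (the measurement values below are hypothetical; only the metric definitions follow the summary):

```python
# Illustrative computation of the MLS validation metrics: mean absolute error
# (MAE) of device vs. reference midline-shift measurements, and the mean signed
# difference (bias). Values are hypothetical, not the study's data.

def mean_absolute_error(device, reference):
    """MAE between device and reference midline-shift measurements (mm)."""
    return sum(abs(d - r) for d, r in zip(device, reference)) / len(device)

def mean_bias(device, reference):
    """Mean signed difference; a value near 0 suggests no systematic bias."""
    return sum(d - r for d, r in zip(device, reference)) / len(device)

device_mm    = [2.1, 0.0, 5.4, 11.8, 3.2]   # hypothetical Rapid MLS outputs
reference_mm = [2.5, 0.3, 5.0, 12.3, 3.0]   # hypothetical expert ground truth

print(round(mean_absolute_error(device_mm, reference_mm), 2))  # 0.36
print(round(mean_bias(device_mm, reference_mm), 2))            # -0.12
```

The Passing-Bablok fit and paired t-test reported above would typically come from a statistics package rather than hand-rolled code; this sketch only shows the error metrics.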

    Study Details

    Here's a detailed summary of the study proving the device meets the acceptance criteria:

    1. Sample Size Used for the Test Set and Data Provenance:

      • Sample Size: 153 NCCT cases
      • Data Provenance:
        • Country of Origin: Not explicitly stated for all cases, but sourced from 13 sites (2 OUS [Outside US], 11 US). This indicates a mix of international and domestic data.
        • Retrospective or Prospective: Not explicitly stated, but the description of "validation data was sourced and blinded independent of the development cases" and "demographic split for age and gender... used to test for broad demographic representation and avoidance of overlap bias with development" suggests these were pre-existing, retrospectively collected cases (i.e., not prospectively collected for this trial).
        • Scanner Manufacturers: Mixed from GE, Philips, Toshiba, and Siemens scanners.
        • Demographics: Male: 44%, Female: 56%, Age Range: 26-93 years.
    2. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications:

      • Number of Experts: 3 experts
      • Qualifications of Experts: Not explicitly stated, but the context implies they are medical professionals who use midline shift as a clinical metric, likely radiologists or neurologists.
    3. Adjudication Method for the Test Set:

      • Method: Expert consensus was used to establish ground truth. The document states "ground truth established by 3 experts." This implies a consensus approach, but the specific method (e.g., majority vote, discussion to consensus) is not detailed. The "experts average pairwise MAE" suggests individual expert measurements were consolidated. It is not explicitly stated whether a 2+1 or 3+1 method was used, but given there were 3 experts, it's likely they reached a consensus view.
    4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:

      • The study does compare the device's performance to human experts, but it's not explicitly described as a traditional MRMC comparative effectiveness study where human readers use the AI and then are compared to human readers without AI.
      • Effect Size of Human Reader Improvement with AI vs. Without AI Assistance: This comparison was not the focus of the reported performance study, which evaluated the standalone accuracy of the AI against expert measurements (the AI as a "reader" vs. expert "readers"). The Indications for Use state that the results "are not intended to be used on a stand-alone basis for clinical decision-making or otherwise preclude clinical assessment of NCCT cases," implying an assistive tool, but the study measures the AI's accuracy against experts, not the improvement of experts when using the AI.
    5. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done:

      • Yes. The document states, "Final device validation included standalone performance validation." The reported MAE of the Rapid MLS and its comparison to the experts' pairwise MAE directly reflect its standalone performance.
    6. The Type of Ground Truth Used:

      • Ground Truth Type: Expert Consensus from the 3 experts.
    7. The Sample Size for the Training Set:

      • Training Set Sample Size: 138 cases
    8. How the Ground Truth for the Training Set Was Established:

      • The document implies that the "Algorithm development was performed using 162 cases from multiple sites; training included 24 cases for validation and 138 for training." While it doesn't explicitly state how ground truth was established for the training set, it is highly probable that a similar (if not identical) process involving human expert annotation was used, given the reliance on expert consensus for the validation/test set. The development cases were chosen to cover 0-18.6 mm offsets from expected midline, indicating a process of identifying and labeling the midline shift in these cases.

    K Number
    K243350
    Device Name
    Rapid Neuro3D
    Manufacturer
    Date Cleared
    2025-01-22

    (86 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    Intended Use

    Rapid Neuro3D (RN3D) is an image analysis software for imaging datasets acquired with conventional CT Angiography (CTA) from the aortic arch to the vertex of the head. The module removes bone, tissue, and venous vessels, providing a 3D and 2D visualization of the neurovasculature supplying arterial blood to the brain.

    Outputs of the device include 3D rotational maximum intensity projections (MIPS), volume renders (VR), along with the curved planar reformation (CPR) of the isolated left and right internal carotid and vertebral arteries.

    Rapid Neuro3D is designed to support the physician in confirming the presence or absence of physician-identified lesions and evaluation, documentation, and follow-up of any such lesion and treatment planning.

    Its results are not intended to be used on a stand-alone basis for clinical decision-making or otherwise preclude clinical assessment.

    RN3D is indicated for adults.

    Precautions/Exclusions:

    • Series containing excessive patient motion or metal implants may impact module output quality.

    • The RN3D module will not process series that meet the following module exclusion criteria:

      • Series containing inadequate contrast agent (<0.3 mL of right-hemisphere intracranial arterial contrast media or <0.3 mL of left-hemisphere intracranial arterial contrast media, above 120 HU)
      • Series acquired w/cone-beam CT scanners (c-arm CT)
      • Series that are non-axial
      • Series with a non-supine patient position
      • Series containing missing or improperly ordered slices (e.g., as a result of manual correction by an imaging technician)
      • CTA datasets with:
          1. in-plane X and Y FOV < 160 mm or > 400 mm.
          2. Z FOV (cranio-caudal transverse anatomical coverage) < 90 mm.
          3. in-plane pixel spacing (X & Y resolution) < 0.2 mm or > 1.0 mm.
          4. Z slice spacing of < 0.2 mm or > 1.25 mm.
          5. slice thickness > 1.5 mm.
          6. data acquired at x-ray tube voltage < 70 kVp or > 150 kVp.
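
The geometric exclusion criteria above amount to range checks on series metadata. A minimal sketch (the field names are hypothetical; the numeric limits come from the exclusion list):

```python
# A minimal sketch of the RN3D geometric exclusion checks, written as range
# checks over a series-metadata dict. Field names are hypothetical; the
# thresholds come from the module exclusion criteria.

def passes_geometry_checks(s):
    """True if a CTA series falls inside every geometric inclusion limit."""
    return (160 <= s["fov_xy_mm"] <= 400          # in-plane X/Y FOV
            and s["fov_z_mm"] >= 90               # cranio-caudal coverage
            and 0.2 <= s["pixel_spacing_mm"] <= 1.0
            and 0.2 <= s["slice_spacing_mm"] <= 1.25
            and s["slice_thickness_mm"] <= 1.5
            and 70 <= s["kvp"] <= 150)            # x-ray tube voltage

series = {"fov_xy_mm": 220, "fov_z_mm": 120, "pixel_spacing_mm": 0.45,
          "slice_spacing_mm": 0.625, "slice_thickness_mm": 1.0, "kvp": 120}
print(passes_geometry_checks(series))  # True
```

A real implementation would read these values from DICOM attributes; this sketch only illustrates the bounds.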
    Device Description

    Rapid Neuro 3D (RN3D) is a Software as a Medical Device (SaMD) image processing module and is part of the Rapid Platform. It allows for visualization of arterial vessels of the head and neck and identifies and segments arteries of interest in patient CTA exams.

    The Rapid Platform provides common functions and services to support image processing modules such as DICOM filtering and job and interface management. The Rapid Platform can be installed on-premises within customer's infrastructure behind their firewall or in a hybrid on-premises/cloud configuration. The software can be installed on dedicated hardware or a virtual machine. The Rapid Platform accepts DICOM images and, upon processing, returns the processed DICOM images to the source imaging modality or PACS.

    The RN3D image processing module is based on pre-trained artificial intelligence / machinelearning models and facilitates a 3D visualization of the neurovasculature supplying arterial blood to the brain. The module analyzes input CTA images in DICOM format and provides a corresponding DICOM series output that can be used by a DICOM viewer, clinical workstations. and PACS systems.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Rapid Neuro3D device, extracted from the provided text:


    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly defined by the primary endpoints of the studies.

    | Metric / Endpoint | Acceptance Criteria | Reported Device Performance |
    |---|---|---|
    | Segmentation Quality Study | | |
    | Clinical Accuracy (MIP images) | Passed | 99.8% agreement with expert consensus for MIP images |
    | Clinical Accuracy (VR images) | Passed | 98.6% agreement with expert consensus for VR images |
    | Clinical Accuracy (SSE images) | Passed | 100.0% agreement with expert consensus for SSE images |
    | Clinical Accuracy (CPR images) | Passed | 100.0% agreement with expert consensus for CPR images |
    | Labeling Accuracy | 100% of anatomical labels applied found to be accurate | 100% of the anatomical labels applied found to be accurate for the vessels visualized |
    | Segmentation Accuracy Study | | |
    | Average Dice Coefficient (Extracranial) | Met | 0.89 |
    | Average Hausdorff Distance (Extracranial) | Met | 0.44 mm |
    | Average Dice Coefficient (Intracranial) | Substantial equivalence (presumably to predicate) | 0.97 (between the module and the predicate device) |
    | Average Hausdorff Distance (Intracranial) | Substantial equivalence (presumably to predicate) | 0.44 mm (between the module and the predicate device) |
    | Average Hausdorff Distance (CPR centerline) | Met | 0.31 mm (between the module and ground truth) |
    | Ground Truth Reproducibility | Within-case variance of expert segmentations (for the segmentation accuracy study) demonstrating strong reproducibility of ground truth segmentations | 1% within-case variance, demonstrating strong reproducibility of ground truth segmentations (not a direct device performance metric, but confirms the reliability of the ground truth used for evaluation) |
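
The Dice coefficient and Hausdorff distance used in the segmentation accuracy study are standard overlap and boundary-distance metrics. A minimal illustration over hypothetical voxel sets (not the device's actual implementation):

```python
# Standard segmentation metrics sketched over sets of voxel coordinates.
# The masks below are tiny hypothetical examples.

def dice(pred, truth):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    if not pred and not truth:
        return 1.0
    return 2 * len(pred & truth) / (len(pred) + len(truth))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (Euclidean)."""
    def d(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    def directed(s, t):
        return max(min(d(p, q) for q in t) for p in s)
    return max(directed(a, b), directed(b, a))

pred  = {(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)}  # hypothetical prediction
truth = {(0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 0)}  # hypothetical ground truth

print(round(dice(pred, truth), 2))  # 3 shared voxels of 4 + 4 -> 0.75
```

In practice these metrics are computed on full 3D masks with libraries such as SciPy; the set-based version above just makes the definitions concrete.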

    2. Sample Sizes and Data Provenance for the Test Set

    • Segmentation Quality Study:

      • Sample Size: 120 CTA cases from 115 patients (65 female; 50 male; aged from 27 to 90+).
      • Data Provenance: 104 US, 16 OUS (Outside US).
      • Retrospective/Prospective: Not explicitly stated, but the mention of a "test dataset was independent from the data used during model training" suggests a retrospective nature.
    • Segmentation Accuracy Study:

      • Sample Size: 50 CTA cases from 48 patients (24 female; 24 male; aged from 27 to 90+).
      • Data Provenance: 43 US, 7 OUS.
      • Retrospective/Prospective: Not explicitly stated, but the mention of a "test dataset was independent from the data used during model training" suggests a retrospective nature.

    3. Number and Qualifications of Experts for Ground Truth

    • Number of Experts: Up to three clinical experts (for the segmentation quality study). The document does not specify if the same number of experts were used for the segmentation accuracy study's ground truth.
    • Qualifications: "Clinical experts." No further specific qualifications (e.g., years of experience, subspecialty) are provided in the text.

    4. Adjudication Method for the Test Set

    • Method: "Consensus of up to three clinical experts" was used to determine clinical accuracy in the segmentation quality study. For the segmentation accuracy study, "ground truth" was established, and for reproducibility it mentions "reproducibility (of ground truths)" implying a process, but a specific adjudication method like 2+1 or 3+1 isn't explicitly detailed for the accuracy study.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was it done? No, a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not explicitly described or reported. The studies described focus on the standalone performance of the AI device against expert consensus or defined ground truth.

    6. Standalone (Algorithm Only) Performance Study

    • Was it done? Yes. Both the "Segmentation Quality Study" and the "Segmentation Accuracy Study" evaluated the standalone performance of the Rapid Neuro3D algorithm. The outputs were compared against source DICOM images and established ground truth, respectively, without mentioning human-in-the-loop performance improvement.

    7. Type of Ground Truth Used

    • Segmentation Quality Study: Expert consensus against source DICOM images.
    • Segmentation Accuracy Study: For the extracranial region and CPR, it was compared against "ground truth" (presumably expert annotated regions). For the intracranial region, it was compared to the "predicate device" performance, implying the predicate served as a reference for substantial equivalence in that specific context. The document also mentions "reproducibility (of ground truths)," indicating expert delineations.

    8. Sample Size for the Training Set

    • The document states, "The test dataset was independent from the data used during model training," but it does not provide the specific sample size for the training set.

    9. How Ground Truth for the Training Set Was Established

    • The document does not provide details on how ground truth was established for the training set. It only mentions that the test set data was independent from the training data.

    K Number
    K243447
    Manufacturer
    Date Cleared
    2024-12-05

    (28 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Navbit Rapid Surgical Plan is intended for pre-operative planning for primary total hip arthroplasty. Rapid Surgical Plan is intended to be used as a tool to assist the surgeon in the selection and positioning of components in primary total hip arthroplasty.

    Rapid Surgical Plan is indicated for individuals undergoing primary hip surgery.

    Device Description

    The Navbit Rapid Surgical Plan is a non-invasive total hip arthroplasty planning software. It is software as a medical device (SaMD) which provides pre-operative planning for acetabular component orientation for orthopaedic surgeons. The software provides a recommended cup target intended to reduce impingement based on each patient's spinopelvic mobility. The ultimate decision to use the cup target recommended by Navbit is the surgeon's based on their clinical judgement.

    AI/ML Overview

    The provided text details the 510(k) premarket notification for the Navbit Rapid Surgical Plan (RSP-SW-001) device. It outlines the device's intended use, comparison to a predicate device (RI.HIP MODELER), and the non-clinical testing performed to demonstrate substantial equivalence.

    Here's the breakdown of the acceptance criteria and the study proving the device meets them:

    1. A table of acceptance criteria and the reported device performance

    The document details the following acceptance criteria (referred to as "Device Measurement Accuracy Evaluation" in the text) and the reported device performance:

    | Measurement Type | Acceptance Criteria (95% Confidence) | Reported Device Performance |
    |---|---|---|
    | Linear Measurements | ±0.1 mm | Met |
    | Angular Measurements | ±0.3° | Met |
    | Ratio Measurements | ±0.1 | Met |
    2. Sample size used for the test set and the data provenance

    The document states that in the "Clinical Measurement Accuracy Evaluation," "representative patient images" were used. However, it does not specify the exact sample size for this test set.

    Regarding data provenance:

    • The data used for the test set in the "Clinical Measurement Accuracy Evaluation" consisted of "representative patient images." The country of origin and whether the data was retrospective or prospective are not explicitly stated in the provided text.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • The ground truth for the test set in the "Clinical Measurement Accuracy Evaluation" was "established by surgeons."
    • The number of surgeons used to establish this ground truth is not specified.
    • Their specific qualifications (e.g., years of experience) are not explicitly stated, beyond them being "surgeons."

    4. Adjudication method for the test set

    The document states that the ground truth for the "Clinical Measurement Accuracy Evaluation" was "established by surgeons." It does not explicitly detail an adjudication method (e.g., 2+1, 3+1, consensus process) for these surgeons in establishing the ground truth.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • A formal MRMC comparative effectiveness study, evaluating human readers with and without AI assistance, was not described for this specific device.
    • The focus was on demonstrating that the performance of the planning technicians using RSP is "equivalent to that of the surgeons for the scope of the landmarking tasks involved" during user testing. This is a comparison of two user groups (technicians vs. surgeons) on a reference dataset, rather than an MRMC study on human improvement with AI assistance. Therefore, an effect size of human reader improvement with AI assistance is not provided.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes, a standalone evaluation was implicitly done as part of the "Device Measurement Accuracy Evaluation." This section describes the performance of the device's underlying measurement capabilities (linear, angular, ratio measurements) against defined limits, independent of human interaction beyond inputting test patterns.

    7. The type of ground truth used

    • For the "Clinical Measurement Accuracy Evaluation," the ground truth was "established by surgeons" based on landmarking clinical features on radiographs, which can be categorized as expert consensus/opinion (though the consensus process is not detailed).
    • For the "Device Measurement Accuracy Evaluation," the ground truth was derived from "test patterns," implying a known, engineered truth against which the device's fundamental measurements were compared.

    8. The sample size for the training set

    The document does not specify the sample size for the training set used for the Navbit Rapid Surgical Plan algorithm.

    9. How the ground truth for the training set was established

    The document does not explicitly describe how the ground truth for the training set was established. It mentions that "Navbit RSP uses an algorithm based on clinical recommendations for spinopelvic mobility to provide cup target recommendations" and that "RI.HIP Modeler uses an algorithm based on spinopelvic mobility classifications and hip kinematics in literature to recommend cup targets." This suggests reliance on existing clinical knowledge and literature for algorithm development, but the specifics of how this translated into labeled data for a training set are not provided.


    K Number
    K240543
    Manufacturer
    Date Cleared
    2024-10-24

    (240 days)

    Product Code
    Regulation Number
    878.4810
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    Intended Use

    Rapido Medical Laser is generally indicated for use in incision, vaporization, ablation, cutting and hemostasis, or coagulation of soft tissue in General surgery, Gynecology, Neurosurgery, Endovascular coagulation (EVLT/EVLA), Urology, Podiatry and Orthopedics, Dermatology, Plastic surgery and Aesthetics, Otolaryngology (ENT) and Oral surgery.

    Device Description

    Not Found

    AI/ML Overview

    I apologize, but the provided text is a letter from the FDA regarding a 510(k) premarket notification and an "Indications for Use" statement for a medical laser device. It does not contain any information about acceptance criteria, device performance studies, sample sizes, expert qualifications, adjudication methods, MRMC studies, or ground truth establishment.

    Therefore, I cannot fulfill your request to describe the acceptance criteria and the study that proves the device meets them based on the given input.


    K Number
    K241100
    Date Cleared
    2024-05-22

    (30 days)

    Product Code
    Regulation Number
    862.3650
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Rapid Urine Fentanyl (FYL) Test Strip is a rapid, screening test for the qualitative detection of Fentanyl (FYL) in human urine at the cut-off concentration of 1 ng/mL. For in vitro diagnostic use only. This assay provides only a preliminary analytical test result. To obtain a confirmed analytical result, a more specific alternative chemical method must be used. GC/MS or LC/MS is the preferred confirmatory method. Rapid Urine Fentanyl (FYL) Test Dipcard is a rapid, screening test for the qualitative detection of Fentanyl (FYL) in human urine at the cut-off concentration of 1 ng/mL. For in vitro diagnostic use only. This assay provides only a preliminary analytical test result. To obtain a confirmed analytical result, a more specific alternative chemical method must be used. GC/MS is the preferred confirmatory method.

    Device Description

    Rapid Urine Fentanyl (FYL) Test Strip and Rapid Urine Fentanyl (FYL) Test Dipcard are competitive-binding, lateral flow immunochromatographic assays for the qualitative detection of fentanyl at or above the cut-off concentration of 1 ng/mL. The tests can be performed without the use of an instrument. The Test Strip and Test Dipcard use identical test strips made with the same chemical formulation and manufacturing procedures.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for the Rapid Urine Fentanyl (FYL) Test Strip and Test Dipcard. The device is a rapid screening test for the qualitative detection of Fentanyl in human urine at a cut-off concentration of 1 ng/mL. This clearance adds an over-the-counter (OTC) claim to a previously cleared prescription device (K231904).

    Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    The document does not present "acceptance criteria" as a separate table. Instead, performance is demonstrated through a lay user study, with the implicit acceptance criterion being high agreement between lay-user readings and the LC/MS-confirmed sample concentrations.

    Sample Concentration (ng/mL) | % of Cutoff   | Lay User Agreement (Test Strip) | Lay User Agreement (Test Dipcard)
    0                            | Negative      | 100%                            | 100%
    0.5                          | -50% cutoff   | 100%                            | 100%
    0.75                         | -25% cutoff   | 93% (2 Positive, 28 Negative)   | 97% (1 Positive, 29 Negative)
    1.25                         | +25% cutoff   | 97% (29 Positive, 1 Negative)   | 90% (27 Positive, 3 Negative)
    1.5                          | +50% cutoff   | 100% (30 Positive, 0 Negative)  | 100% (30 Positive, 0 Negative)
    2                            | +100% cutoff  | 100% (30 Positive, 0 Negative)  | 100% (30 Positive, 0 Negative)
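The agreement percentages above follow directly from the positive/negative counts at each level: each value is the fraction of the 30 lay-user readings that matched the LC/MS-confirmed result, rounded to a whole percent. A minimal sketch (the `agreement` helper is illustrative, not part of the submission):

```python
# Recompute the lay-user agreement percentages from the raw counts in the
# table above. Samples below the 1 ng/mL cutoff should read Negative;
# samples at or above it should read Positive.

def agreement(correct: int, total: int = 30) -> int:
    """Percent of readings that matched the LC/MS-confirmed result."""
    return round(100 * correct / total)

# (concentration ng/mL, expected result, correct readings: Strip, Dipcard)
rows = [
    (0.0,  "Negative", 30, 30),
    (0.5,  "Negative", 30, 30),
    (0.75, "Negative", 28, 29),
    (1.25, "Positive", 29, 27),
    (1.5,  "Positive", 30, 30),
    (2.0,  "Positive", 30, 30),
]

for conc, expected, strip_ok, dip_ok in rows:
    print(f"{conc} ng/mL ({expected}): "
          f"strip {agreement(strip_ok)}%, dipcard {agreement(dip_ok)}%")
```

Running this reproduces the table's 93%, 97%, and 90% figures from the stated counts of 28/30, 29/30, and 27/30 correct readings.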

    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: The lay user study used 360 lay persons. For each device (Test Strip and Test Dipcard) and each concentration level, 30 determinations were made.
    • Data Provenance: The text does not explicitly state the country of origin. The study was conducted at "three intended user sites." The study design indicates it was a prospective study where participants tested blinded, randomized samples.
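As a quick consistency check on the 360-participant figure (assuming, as the text implies, that each participant made a single determination), the study design works out as:

```python
# Total determinations in the lay user study, per the design described above.
devices = 2      # Test Strip and Test Dipcard
levels = 6       # 0, 0.5, 0.75, 1.25, 1.5, 2 ng/mL
per_level = 30   # determinations per device per concentration level

total = devices * levels * per_level
print(total)  # 360
```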

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The text states: "The concentrations of samples were confirmed by LC/MS." It does not specify the number of experts or their qualifications for this confirmation. LC/MS (Liquid Chromatography-Mass Spectrometry) is an analytical chemistry technique, and its use implies that the ground truth was established through a laboratory method, likely performed by trained laboratory personnel.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The text does not mention an adjudication method for the lay user results. Each participant's interpretation of their test result was recorded, and the "Agreement (%)" was calculated based on these individual interpretations compared to the known sample concentration.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, the effect size of how much human readers improve with AI vs. without AI assistance

    This section is not applicable as the device is a rapid, screening test for drug detection and does not involve AI or human reader interpretation in the context of image analysis or diagnostic assistance. The "readers" in this case are the lay users interpreting the test strip/dipcard result.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    This section is not applicable as the device is a manual, rapid test. The "standalone" performance is essentially what the lay users demonstrated in their interpretation of the test results themselves. There is no algorithm or automated reading discussed.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The ground truth for the test set samples was established by LC/MS (Liquid Chromatography-Mass Spectrometry), which is a highly specific and sensitive analytical chemical method for confirming the concentration of substances. The text explicitly states: "The concentrations of samples were confirmed by LC/MS."

    8. The sample size for the training set

    The document does not provide information about a "training set" for the device itself because it is an immunochromatographic assay, not an AI/machine learning device. The performance characteristics are inherent to the chemical formulation and manufacturing procedures. The 510(k) summary references "analytical performance in predicate K231904" and "studies in predicate K231904," which would include the foundational analytical performance data.

    9. How the ground truth for the training set was established

    As there is no "training set" in the context of AI/machine learning, this question is not applicable. The analytical performance and design of this type of device are established through various laboratory studies (e.g., specificity, sensitivity, precision, cross-reactivity) using reference standards and confirmed samples, as would have been documented for the predicate device (K231904).
