Search Results

Found 7 results

510(k) Data Aggregation

    K Number: K242669
    Date Cleared: 2025-03-26 (202 days)
    Regulation Number: 878.4550
    Device Name: SnapshotGLO (KB100)

    Intended Use

    The SnapshotGLO is a handheld imaging tool that allows clinicians diagnosing and treating skin wounds, at the point of care, to (i) view and digitally record images of a wound, (ii) measure and digitally record the size of a wound, and (iii) view and digitally record images of fluorescence emitted from a wound when exposed to an excitation light. The fluorescence image, when used in combination with clinical signs and symptoms, has been shown to increase the likelihood that clinicians can identify wounds containing bacterial loads >10^4 CFU per gram as compared to examination of clinical signs and symptoms alone. The SnapshotGLO device should not be used to rule out the presence of bacteria in a wound. The SnapshotGLO does not diagnose or treat skin wounds.

    Device Description

    SnapshotGLO is a medical device that operates like a camera. It is a point-of-care, non-contact wound imaging device: wound images are captured from a height of ~12 cm using 395 nm LEDs and a white LED to produce a resultant fluorescence image, which aids in visualising the bacteria on the wound, and a colour-based "RGB" or clinical image. Resultant images are viewed on the 7-inch touchscreen display. SnapshotGLO is based on autofluorescence imaging technology; it uses the native fluorescence of bacteria to determine the presence of bacterial bioburden and displays a two-dimensional, colour-coded highlight of bioburden presence on the wound.

    SnapshotGLO is a handheld imaging tool that helps clinicians diagnosing, monitoring and treating skin wounds at the point of care to:

    • View and digitally record images of a wound,
    • Measure and digitally record the size of a wound, and
    • View and digitally record images of fluorescence emitted from a wound when exposed to an excitation light.

    SnapshotGLO consists of:

    • SnapshotGLO device
    • Medical grade adapter
    • User Manual
    • Quick Start Guide

    SnapshotGLO is intended for wound care applications as an adjunct tool that uses autofluorescence to detect tissues or structures. The fluorescence image, when used in combination with clinical signs and symptoms, has been shown to increase the likelihood that clinicians can identify wounds containing high bacterial bioburden compared to clinical signs and symptoms alone. SnapshotGLO should not be used to rule out the presence of bacteria in a wound. This device is not intended to provide a diagnosis.

    AI/ML Overview

    The provided text is a 510(k) summary for the SnapshotGLO (KB100) device, aiming to demonstrate its substantial equivalence to the MolecuLightDX as a predicate device. While it details the device's function and provides some study information, it does not explicitly state "acceptance criteria" in a tabulated format alongside "reported device performance." Instead, it discusses the outcomes of non-clinical and clinical studies that support the device's equivalence and performance.

    Based on the information provided, here's an attempt to structure the response according to your request, extracting the closest equivalents to "acceptance criteria" and "reported performance" from the study descriptions.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly define quantitative acceptance criteria for the clinical study. However, the non-clinical tests imply an acceptance criterion of "comparable performance" or "substantially equivalent performance" to the predicate device. For the clinical study, the acceptance was based on showing "improved accuracy" compared to the predicate device when used with clinical signs and symptoms.

    | Criterion Type | Acceptance Criterion (Implicit/Derived) | Reported Device Performance (SnapshotGLO) |
    | --- | --- | --- |
    | Non-clinical: bacterial fluorescence detection | Substantially equivalent detection of red fluorescence from porphyrin-producing bacteria (mono- and bi-microbial, biofilms) compared to the predicate device | Provided robust evidence that SnapshotGLO is substantially equivalent to MolecuLightDX in detecting bacterial fluorescence; confirmed effectiveness for identifying porphyrin-producing bacteria and biofilms |
    | Non-clinical: wound dimension measurement | Comparable performance in manual wound detection modes to the predicate device, demonstrating agreement in measurement accuracy and repeatability | Performs comparably to MolecuLightDX in manual wound detection modes, with strong agreement in measurement accuracy and repeatability; supports the claim of substantial equivalence |
    | Clinical: bacterial load identification | When used with clinical signs and symptoms (CSS), demonstrate improved accuracy in identifying wounds with bacterial loads >10^4 CFU per gram compared to the predicate device | When used in conjunction with CSS, showed over 88% positive percent agreement and improved accuracy (75-82.5%) compared to MolecuLightDX (52.5-65%) when validated against culture results for identifying wounds with bacterial loads >10^4 CFU per gram |
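
The positive percent agreement and accuracy figures above follow from a standard 2x2 contingency table against the culture reference. A minimal sketch, using hypothetical counts (the submission does not report the underlying 2x2 table):

```python
def agreement_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 agreement metrics against a reference method (here: culture).

    tp: device+CSS positive, culture positive (>10^4 CFU/g)
    fp: device+CSS positive, culture negative
    fn: device+CSS negative, culture positive
    tn: device+CSS negative, culture negative
    """
    total = tp + fp + fn + tn
    return {
        "ppa": tp / (tp + fn),          # positive percent agreement (sensitivity)
        "npa": tn / (tn + fp),          # negative percent agreement (specificity)
        "accuracy": (tp + tn) / total,  # overall percent agreement
    }

# Hypothetical counts for a 40-wound study (NOT the actual study data):
m = agreement_metrics(tp=22, fp=4, fn=3, tn=11)
print(m)  # ppa = 22/25 = 0.88, accuracy = 33/40 = 0.825
```

The function names and counts are illustrative only; the summary reports only the aggregate PPA and accuracy ranges.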

    2. Sample Size and Data Provenance

    • Test Set Sample Size:
      • Clinical Study: 40 patients.
      • Non-clinical Studies: Not explicitly stated, but conducted on "culture plates" and "wound dimensions."
    • Data Provenance: The document does not specify the country of origin for the clinical study. The clinical study was described as retrospective.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Two clinical evaluators were involved in the retrospective clinical study.
    • Qualifications: Not explicitly stated in the document.

    4. Adjudication Method for the Test Set

    The document states "This blinded assessment" for the clinical study, indicating that the evaluators were blinded to some information, but it does not describe a specific adjudication method (e.g., 2+1, 3+1 consensus).

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? A clinical study comparing the SnapshotGLO and the predicate device was done, involving two clinical evaluators. While it is a comparative study with multiple readers, the design described does not align with a typical MRMC study, which would assess reader improvement with AI assistance. Rather, it compares the device's performance (with CSS) against the predicate device's performance (with CSS), both validated against culture.
    • Effect Size of Human Readers Improvement with AI vs. Without AI Assistance: The study compared SnapshotGLO + CSS versus MolecuLightDX + CSS versus CSS alone (implicit, as the basis for comparison), all validated against culture results. It demonstrated:
      • SnapshotGLO + CSS accuracy: 75-82.5%
      • MolecuLightDX + CSS accuracy: 52.5-65%
      • The phrase "increase the likelihood that clinicians can identify wounds containing bacterial loads >10^4 CFU per gram as compared to examination of clinical signs and symptoms alone" (from the Indications for Use) suggests that the device, when combined with CSS, improves performance over CSS alone. However, the effect size of human reader improvement with versus without the device is not directly quantified; the comparison reported is between two imaging devices (each providing additional information to clinicians and each used with CSS), validated against culture results.

    6. Standalone (Algorithm Only) Performance

    The document does not report standalone (algorithm-only, without human-in-the-loop) performance. The indications for use consistently state that the fluorescence image is to be used "in combination with clinical signs and symptoms."

    7. Type of Ground Truth Used

    • Clinical Study: The ground truth for identifying wounds with bacterial loads >10^4 CFU per gram was established using culture results.
    • Non-clinical Studies: The ground truth for bacterial fluorescence detection was based on bacterial presence in in vitro culture plates. For wound dimensions, it was likely based on known or carefully measured dimensions.

    8. Sample Size for the Training Set

    The document is a 510(k) summary for a medical device and does not provide information regarding the training set sample size as it primarily focuses on the device's performance for regulatory submission. This device description points to an "autofluorescence imaging technology" for directly visualizing bacterial compounds, rather than a machine learning algorithm that requires a training set. If there is an AI component for image processing or interpretation not explicitly detailed, the training set information is not included in this document.

    9. How Ground Truth for Training Set Was Established

    As no training set is discussed concerning an AI/ML algorithm, no information is provided on how its ground truth was established. The device utilizes physical principles of autofluorescence.


    K Number: K240601
    Date Cleared: 2024-04-02 (29 days)
    Regulation Number: 870.2700
    Reference & Predicate Devices: N/A
    Device Name: SnapshotNIR model KD205

    Intended Use

    SnapshotNIR (KD205) is intended for use by healthcare professionals as a non-invasive tissue oxygenation measurement system that reports an approximate value of:

    • oxygen saturation (StO2),
    • relative oxyhemoglobin (HbO2) level, and
    • relative deoxyhemoglobin (Hb) level

    in superficial tissue. SnapshotNIR (KD205) displays two-dimensional color-coded images of tissue oxygenation of the scanned surface and reports multispectral tissue oxygenation measurements for selected tissue regions.

    SnapshotNIR (KD205) is indicated for use to determine oxygenation levels in superficial tissues.

    Device Description

    SnapshotNIR (KD205) is a medical device that operates like a camera. The SnapshotNIR device is completely non-contact, capturing images from 12 inches away from the patient or other imaging fields of view. The device uses six near-infrared wavelengths of light and white light-emitting diodes (LEDs) to produce a resultant tissue oxygenation image and a colour-based "RGB" or clinical image, respectively, that can be viewed on the touchscreen display.

    SnapshotNIR is based on multispectral imaging technology and performs spectral analysis at each pixel to determine the approximate values of tissue oxygen saturation (StO2), oxyhemoglobin levels (HbO2), and deoxyhemoglobin levels (Hb) in superficial tissues and displays a two-dimensional, color-coded image of the tissue oxygen saturation (StO2).
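
The summary does not give the formula relating these quantities, but under the usual definition tissue oxygen saturation is the oxyhemoglobin fraction of total hemoglobin. A minimal per-pixel sketch, assuming relative HbO2 and Hb maps have already been estimated by the spectral fit (function name and values are hypothetical):

```python
import numpy as np

def sto2_map(hbo2: np.ndarray, hb: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Per-pixel tissue oxygen saturation (%) from relative oxy-/deoxyhemoglobin maps.

    Assumes hbo2 and hb are non-negative arrays of the same shape, as would be
    produced by a per-pixel spectral fit; eps guards against division by zero.
    """
    return 100.0 * hbo2 / (hbo2 + hb + eps)

# Toy 2x2 "image" (hypothetical values, not device data):
hbo2 = np.array([[0.8, 0.5], [0.2, 0.0]])
hb   = np.array([[0.2, 0.5], [0.8, 1.0]])
print(sto2_map(hbo2, hb))  # ~[[80, 50], [20, 0]]
```

The resulting array would then be rendered as the two-dimensional, color-coded StO2 image the summary describes.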

    SnapshotNIR consists of:

    • SnapshotNIR
    • Recharger
    • User Guide
    • Quick Start Guide
    • Sterile Drape (optional)

    The intended use and user interaction remained the same in the new model (KD205). The user will notice that the modified device no longer requires the user to take a calibration image prior to image capture. SnapshotNIR now uses an embedded computer module and a touchscreen LCD. Additional minor form factor changes were made on the new model (KD205) to accommodate the hardware change. With respect to the software used by SnapshotNIR, the predicate and new model (KD205) operate similarly.

    AI/ML Overview

    This document, K240601, is a 510(k) Premarket Notification from the FDA regarding the Kent Imaging SnapshotNIR model KD205. It's a clearance letter, not a study report or a detailed technical submission. Therefore, it does not contain the information requested regarding acceptance criteria and performance studies in the level of detail typically found in a clinical study report or a more comprehensive technical submission to the FDA.

    Specifically, the document states:

    • "Nonclinical engineering-based tests (e.g., IEC 60601-1-2, etc) among other performance bench tests support conclusions that the KD205 device is safe for use with a low risk of adverse events occurring during the intended use scenarios."
    • "These tests and findings are analogous to those conducted on the predicate KD204 SnapshotNIR device and demonstrate that the KD205 device is similarly safe, and effective compared to a current legally marketed device."
    • "In a sub-analysis of the pre-clinical dataset, the KD205 device demonstrated no significant differences (derived via 2-way ANOVA models) in STO2 across the ischemia-reperfusion test when comparing cohorts of low versus high tissue scattering effects."

    While it mentions "pre-clinical dataset" and "tests and findings," it does not provide any specific acceptance criteria, reported performance values, sample sizes, provenance of data, expert qualifications, or details about MRMC studies, standalone performance studies, or ground truth establishment. The purpose of this 510(k) summary is to establish substantial equivalence to a predicate device, not to detail the full performance study results.

    Therefore, I cannot populate the requested table or answer most of the questions based solely on the provided text. The document focuses on the conclusion of substantial equivalence rather than the raw data and detailed methodology of the performance studies conducted.


    K Number: K201976
    Device Name: SnapshotNIR
    Date Cleared: 2020-11-10 (117 days)
    Regulation Number: 870.2700

    Intended Use

    SnapshotNIR is intended for use by healthcare professionals as a non-invasive tissue oxygenation measurement system that reports an approximate value of:

    • oxygen saturation (StO2),
    • relative oxyhemoglobin level (HbO2), and
    • relative deoxyhemoglobin (Hb) level
      in superficial tissue. SnapshotNIR displays two-dimensional color-coded images of tissue oxygenation of the scanned surface and reports multispectral tissue oxygenation measurements for selected tissue regions.

    SnapshotNIR is indicated for use to determine oxygenation levels in superficial tissues.

    Device Description

    SnapshotNIR, Model KD204 (K201976), is a modification to the Kent Camera, Model KD203 (K163070). The changes made to create the modified snapshot include modifications to the software. Both devices have similar hardware.

    SnapshotNIR is based on multispectral imaging technology and performs spectral analysis at each point in a two-dimensional scanned area, producing an image displaying information derived from the analysis. SnapshotNIR determines the approximate values of oxygen saturation (StO2), oxyhemoglobin levels (HbO2), and deoxyhemoglobin levels (Hb) in superficial tissues and displays a two-dimensional, color-coded image of the tissue oxygenation (StO2).

    The camera consists of:

    • Camera: Contains light source, camera and touchscreen PC
    • Recharger: Used to recharge the camera
    • Reference Card: To calibrate the camera

    AI/ML Overview

    The provided text is a 510(k) summary for the SnapshotNIR device, which is a modification of an existing predicate device. The primary focus of the document is to demonstrate "substantial equivalence" to the predicate device, rather than to establish new performance criteria for the device itself. Therefore, the "acceptance criteria" in the traditional sense of a new medical device approval (e.g., minimum sensitivity/specificity thresholds) and a separate "study that proves the device meets the acceptance criteria" are not explicitly defined in the provided document in the way one might expect for a novel device or AI/ML product.

    Instead, the acceptance criteria are implicitly met through the demonstration of linear relationship and agreement between the modified device's algorithm and the predicate device's algorithm for StO2 measurements over a clinically meaningful range. The study is a pre-clinical agreement study conducted to support this substantial equivalence.

    Here's the breakdown of the information based on your request, extracted from the provided text:


    1. A table of acceptance criteria and the reported device performance

    As discussed, specific numerical "acceptance criteria" and "reported device performance" in terms of clinical accuracy (e.g., sensitivity, specificity, or specific error bounds against a gold standard) are not explicitly stated in the provided 510(k) summary for the modified device. The document primarily focuses on demonstrating that the modified device's performance (specifically the new StO2 algorithm) is linearly related and in agreement with the predicate device's performance.

    The implicit "acceptance criteria" for demonstrating substantial equivalence for the modified algorithm is that it should:
    • Show a linear relationship with the predicate algorithm for relative oxyhemoglobin (HbO2) and deoxyhemoglobin (Hb) estimates (R^2 > 0.98).
    • Show a linear relationship over a wide and clinically meaningful dynamic range of StO2.
    • Allow for quantification of scale shift and bias using Bland-Altman plots, with an estimation of 95% levels of agreement (though the specific numerical agreement is not detailed in the summary).

    Reported Device Performance (against the Predicate Device's Algorithm):

    | Measurement | Acceptance Criterion (implicit from equivalence claim) | Reported Performance |
    | --- | --- | --- |
    | r[Hb] | Linear relationship with predicate algorithm (R^2 near 1) | R^2 > 0.98 for r[Hb] |
    | r[HbO2] | Linear relationship with predicate algorithm (R^2 near 1) | R^2 > 0.98 for r[HbO2] |
    | RMSE r[Hb] | Low residual error compared to predicate algorithm | RMSE r[Hb] = 0.000239 |
    | RMSE r[HbO2] | Low residual error compared to predicate algorithm | RMSE r[HbO2] = 0.00208 |
    | StO2 | Linear relationship with predicate algorithm over a clinically meaningful dynamic range; quantified bias | Concluded to show a linear relationship over a wide and clinically meaningful dynamic range of StO2, supporting the use of the modified device |
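
The agreement statistics cited above (R^2, RMSE, and Bland-Altman bias with 95% limits of agreement) can be computed from paired predicate/modified measurements as follows; the sample values here are synthetic, for illustration only:

```python
import numpy as np

def agreement_stats(pred_vals, mod_vals):
    """Agreement between predicate (KD203) and modified (KD204) measurements.

    Returns the squared Pearson correlation (R^2), the RMSE of the paired
    differences, and the Bland-Altman bias with 95% limits of agreement
    (bias +/- 1.96 * sample SD of the differences).
    """
    x = np.asarray(pred_vals, dtype=float)
    y = np.asarray(mod_vals, dtype=float)
    r = np.corrcoef(x, y)[0, 1]            # Pearson correlation
    diffs = y - x
    rmse = np.sqrt(np.mean(diffs ** 2))
    bias = diffs.mean()
    half_width = 1.96 * diffs.std(ddof=1)  # half-width of 95% limits of agreement
    return {"r2": r ** 2, "rmse": rmse, "bias": bias,
            "loa": (bias - half_width, bias + half_width)}

# Synthetic paired StO2 readings (illustrative only, not study data):
pred = [55, 60, 65, 70, 75, 80]
mod  = [56, 60, 66, 69, 76, 81]
print(agreement_stats(pred, mod))
```

For a simple linear fit, the squared Pearson correlation equals the least-squares R^2, which matches how the summary reports agreement.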

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size: 38 volunteer subjects.
    • Data Provenance: Field data acquired (implies prospective data collection). No specific country of origin is mentioned, but the company address is Canada. The study involved a "forearm ischemia protocol," suggesting a controlled experimental setting rather than real-world patient data for diagnosis.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This category is not applicable as the study did not establish a ground truth by human expert review in the traditional sense for an AI/ML diagnostic device. The study's purpose was to demonstrate agreement between the modified device's algorithm and the predicate device's algorithm. The ground truth, in this context, is the measurement provided by the predicate device (KD203).


    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This is not applicable. There was no human expert review or adjudication process described as the ground truth was derived from the predicate device's measurements.


    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done: what was the effect size of human reader improvement with AI vs. without AI assistance?

    No, a multi-reader multi-case (MRMC) comparative effectiveness study was not conducted or described. This device is an oximeter, not an AI-assisted diagnostic imaging tool that would typically involve human reader improvement. The study compared the device's algorithm performance to its predicate.


    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Yes, the performance reported is standalone (algorithm only). The study compared the StO2 measurements from the modified device (KD204) directly against the predicate device (KD203). The output of the device is a measurement (StO2, HbO2, Hb) and a color-coded image, not a diagnostic interpretation that typically involves human-in-the-loop assistance for clinical decision-making.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The "ground truth" for this agreement study was the measurements obtained from the predicate device (Kent Camera, Model KD203). The study's objective was to show that the modified device's StO2 algorithm produces results that are linearly related and agree with the predicate device over a clinically relevant range, essentially validating the new algorithm against the established (predicate) one.


    8. The sample size for the training set

    The document does not specify a sample size for a training set. The change is described as a "modified algorithm for calculating StO2" which "was implemented to increase the signal to noise ratio and provide better image quality." This implies an algorithmic refinement rather than a machine learning model that would typically require a separate, quantifiable training set. While algorithmic development often uses data, the document does not present it as a trained AI/ML model with a distinct training dataset.


    9. How the ground truth for the training set was established

    Since a "training set" with established ground truth for an AI/ML model is not explicitly mentioned or the focus of the document, this question is not applicable in the context of the provided text. The modified algorithm presumably underwent internal development and validation, but the mechanism for establishing "ground truth" for its development is not detailed. The primary validation shown to the FDA is the agreement study against the predicate device.


    K Number: K183161
    Date Cleared: 2019-02-13 (90 days)
    Regulation Number: 892.1750
    Device Name: SnapShot Freeze 2

    Intended Use

    SnapShot Freeze 2 is designed for use with gated cardiac acquisitions to reduce cardiac induced motion artifacts.

    Device Description

    SnapShot Freeze 2 (SSF 2) is post-processing software that can be delivered on general-purpose computing platforms. SnapShot Freeze 2 is an automated motion correction algorithm designed for use with gated cardiac acquisitions from GE CT scanners to reduce cardiac-induced motion artifacts in the whole heart. It is based on the same fundamental algorithm as the predicate product commercially marketed under the name CardIQ Xpress 2.0 with SnapShot Freeze Option (K120910, a.k.a. SSF 1). Like its predicate device, the SSF 2 algorithm works on multi-phase, gated cardiac CT DICOM-compatible image data and produces a new image series in which motion artifact is reduced.

    AI/ML Overview

    The provided text does not contain a detailed study proving the device meets specific acceptance criteria with reported device performance metrics in a table. Instead, it describes internal validation and testing to support the product's substantial equivalence to a predicate device.

    However, based on the information provided, here's an attempt to extract relevant details and structure them according to your request, with disclaimers that much of the quantitative information you're asking for (like specific acceptance criteria values and reported performance against them) is not present in the given document.


    Device Name: SnapShot Freeze 2
    Intended Use: Motion correction of gated cardiac image series.
    Indications for Use: Designed for use with gated cardiac acquisitions to reduce cardiac induced motion artifacts.

    Overview of Device Performance and Acceptance (as inferred from the document):

    The document describes engineering bench testing and a clinical assessment to demonstrate that SnapShot Freeze 2 is "as safe and effective, and performs in a substantially equivalent manner to the predicate device CardIQ Xpress 2.0 with SnapShot Freeze Option." The main improvement of SnapShot Freeze 2 is its ability to extend motion correction to the "whole heart" beyond just coronary vessels.

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document does not provide a clear table of predefined acceptance criteria with corresponding numerical performance results. However, it mentions qualitative performance improvements and claims of "effective temporal resolution" based on phantom testing.

    | Feature / Criterion (inferred from text) | Acceptance Criteria (not explicitly stated with thresholds) | Reported Device Performance (as stated or inferred) |
    | --- | --- | --- |
    | Motion correction (coronary vessels) | Expected to be equivalent to predicate | "6x improvement in reducing blurring artifacts of the coronary arteries due to cardiac motion" (inherited from predicate, reiterated for SSF 2) |
    | Motion correction (whole heart) | Demonstrate ability to perform whole-heart motion correction | "Yes, enhancement to the algorithm demonstrates whole heart motion correction." |
    | Effective temporal resolution | Maintain or improve upon predicate's resolution | "29 ms for 0.35 sec/rot gantry speed" (inherited from predicate); "24 ms for 0.28 sec/rot gantry" (new for SSF 2 at faster gantry speed) |
    | Diagnostic capability of motion-corrected images | Images should demonstrate diagnostic utility | "The assessment demonstrated the diagnostic capability of the motion corrected images by SSF 2." (Qualitative statement) |
    | No new risks/hazards | Device should not introduce new risks | "SnapShot Freeze 2 does not introduce any new risks/hazards, warnings, or limitations." "No new hazards were identified, and no unexpected test results were obtained." |
    | Substantial equivalence | Device must be substantially equivalent to predicate | "GE Medical Systems believes that the SnapShot Freeze 2 is as safe and effective, and performs in a substantially equivalent manner to the predicate device CardIQ Xpress 2.0 with SnapShot Freeze Option." |

    2. Sample Size and Data Provenance for Test Set:

    • Sample Size for Clinical Assessment: A "representative clinical sample image set of 60 CT cardiac cases."
    • Data Provenance: The document states this data is "representative of routine clinical imaging from a cardiac acquisition perspective," but "intentionally includes data from subjects with elevated heart rates or those with heart rate variability which represent more challenging cases." It does not specify the country of origin, nor explicitly state if it was retrospective or prospective, but the phrasing "representative clinical sample image set" and "routine clinical imaging" suggests it was likely a retrospective collection of existing patient data.

    3. Number of Experts and Qualifications for Ground Truth for Test Set:

    • Number of Experts: Three board certified radiologists.
    • Qualifications: "Board certified radiologists." No further details on years of experience are provided.

    4. Adjudication Method for the Test Set:

    • The document states, "A representative clinical sample image set of 60 CT cardiac cases... were assessed by three board certified radiologists using 5-point Likert scale."
    • It does not specify an adjudication method (e.g., 2+1, 3+1 consensus, or independent reading). It only states they "assessed" the cases.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • No MRMC comparative effectiveness study demonstrating how human readers improve with AI vs. without AI assistance is explicitly described. The clinical assessment was to demonstrate the "diagnostic capability of the motion corrected images by SSF 2," not a direct comparison of human reader performance with and without the tool.

    6. Standalone (Algorithm Only) Performance:

    • The document implies that the effective temporal resolution metrics (29 ms, 24 ms) derived from "mechanical and mathematical cardiac phantom testing" represent the standalone performance of the algorithm in terms of motion correction capability, independent of human interpretation.
    • The clinical assessment of "diagnostic capability" also evaluates the output of the algorithm, suggesting an evaluation of its quality for diagnostic purposes.

    7. Type of Ground Truth Used for the Test Set:

    • The "ground truth" for the clinical assessment appears to be the expert consensus/assessment of the three board-certified radiologists using a 5-point Likert scale to determine the "diagnostic capability" of the motion-corrected images. No mention of pathology or outcomes data as direct ground truth for image quality/diagnostic utility is made for the 60 clinical cases.

    8. Sample Size for the Training Set:

    • The document does not specify a sample size for the training set. It describes the algorithm's fundamental similarity to its predicate (SSF1) and states that SSF2 "extends the motion correction capability to the whole heart." This suggests the core algorithm was already established, and the "enhancement" for whole-heart motion correction might have involved further development or refinement without explicitly detailing a new, distinct "training set" in this submission summary.

    9. How Ground Truth for Training Set was Established:

    • The document does not describe how the ground truth for any potential training set was established. Given that the algorithm is based on and an extension of a predicate device, it's plausible that the underlying algorithm was developed and validated earlier, and the current submission focuses on the safety and effectiveness of the extended capability.

    K Number: K143037
    Date Cleared: 2015-01-20 (90 days)
    Regulation Number: 888.3030
    Device Name: SnapShot Fixation System

    Intended Use

    The SnapShot Fixation System is indicated for use in soft tissue reattachment procedures in the following shoulder procedures:

    Bankart repair, SLAP repair, acromio-clavicular separation, rotator cuff repair, capsule repair or capsulolabral reconstruction, biceps tenodesis, and deltoid repair.

    The SnapShot Fixation System is also indicated for supplementary fixation when used in conjunction with a primary fixation device in surgical procedures requiring graft fixation.

    Device Description

    The SnapShot Fixation System consists of a SnapShot implant and deployment gun. The SnapShot implant is a PEEK rivet comprised of two components, a cannulated body and a bullet. The SnapShot deployment gun comes packaged pre-loaded with the SnapShot implant and is designed to ease insertion of the SnapShot implant into the bone hole. A reusable deployment gun and reusable drill are also available and can be used with the implant reloads.

    AI/ML Overview

    The provided text describes a medical device, the SnapShot Fixation System, and its 510(k) premarket notification for FDA clearance. However, it does not contain the detailed information necessary to fully answer all aspects of your request regarding acceptance criteria and a study proving those criteria are met. This document is a summary of the submission for clearance, not the full study report.

    Specifically, the document states:

    • "Non-clinical laboratory testing was performed to verify the fixation strength of the SnapShot Fixation System in mechanical pullout testing as compared to the predicate LactoSorb Pop Rivet (K981798) for specific indications for use."
    • "The average pullout strength of the SnapShot Fixation System implants was statistically equivalent to or greater than that of the Lactosorb Pop Rivet."
    • "When testing supplemental fixation using a resorbable interference screw for primary fixation, the average pullout strength of the construct utilizing the SnapShot Fixation System for supplementary fixation was statistically greater than the construct with no supplemental fixation."
    • "Clinical Tests: None provided as a basis for substantial equivalence."

    Based on the information provided, here's what can be extracted and what is missing:


    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria for Non-Clinical Testing (Implied from the text):

    Performance Metric | Acceptance Criteria (Implied) | Reported Device Performance
    Fixation Strength (Pullout) vs. Predicate | "Statistically equivalent to or greater than" the LactoSorb Pop Rivet (K981798) | "The average pullout strength of the SnapShot Fixation System implants was statistically equivalent to or greater than that of the Lactosorb Pop Rivet."
    Supplemental Fixation Strength | "Statistically greater than" the construct with no supplemental fixation, when used with a primary fixation device such as a resorbable interference screw | "The average pullout strength of the construct utilizing the SnapShot Fixation System for supplementary fixation was statistically greater than the construct with no supplemental fixation."

    Note: The specific numerical values (e.g., in Newtons) for acceptance criteria and performance are not provided in this document. The document only states the comparative outcome ("statistically equivalent to or greater than," "statistically greater than").
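    The "statistically equivalent to or greater than" claim gives no numbers, but the kind of comparison involved can be sketched. Below is an illustrative Welch two-sample t-test on entirely hypothetical pullout strengths; the submission reports no numerical data, so the values, sample sizes, and choice of test are all assumptions, not the sponsor's actual protocol.

```python
# Illustrative sketch only: the 510(k) summary reports no numerical pullout
# data, so the values below are hypothetical. A Welch two-sample t-test is
# one common way comparative bench-testing claims of this kind are supported.
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2  # sample variances
    se2 = va / na + vb / nb                # squared standard error of the difference
    t = (mean(a) - mean(b)) / se2 ** 0.5
    # Welch-Satterthwaite approximation for degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical pullout strengths in Newtons (not from the submission)
subject = [182.0, 175.5, 190.2, 178.8, 185.1]    # SnapShot implants
predicate = [170.3, 168.9, 176.4, 165.0, 172.2]  # LactoSorb Pop Rivet

t, df = welch_t(subject, predicate)
print(f"t = {t:.2f}, df = {df:.1f}")
```

    A large positive t at these degrees of freedom would support a "greater than" finding; an equivalence claim would instead use a pre-specified margin (e.g., two one-sided tests), which the summary does not describe.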


    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: Not specified. The document states "Non-clinical laboratory testing was performed," but does not give the number of samples or specimens tested.
    • Data Provenance: The study was "Non-clinical laboratory testing" (mechanical testing). The country of origin for the data is not explicitly stated, but given the sponsor (Biomet Inc., Warsaw, IN) and the FDA submission, the testing was most likely conducted in the USA or by labs commissioned by a US company. It is by definition retrospective relative to the submission date, as the tests were completed before the 510(k) was filed.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not applicable. This was a non-clinical mechanical performance study. "Ground truth" in the clinical sense (e.g., expert consensus on medical images or patient outcomes) is not relevant to this type of testing. The "ground truth" for mechanical testing is established by internationally recognized standards for mechanical testing of medical devices, which are not mentioned in this summary but would have been followed.


    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable. As this was a non-clinical mechanical performance study, there was no need for expert adjudication of results in the way it's done for clinical or imaging studies. Mechanical test results are typically objective measurements.


    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    Not applicable. This device is a physical fixation system (screws/rivets) for soft tissue reattachment, not an AI or imaging diagnostic device. Therefore, MRMC studies and AI assistance are not relevant to its evaluation.


    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Not applicable. This is a physical medical device, not a software algorithm.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    For non-clinical mechanical testing, the "ground truth" is defined by the physical measurements obtained from calibrated testing equipment under controlled conditions, following established mechanical testing standards (e.g., ASTM, ISO standards for orthopedic implants). These standards themselves can be considered the "ground truth" methodology.


    8. The sample size for the training set

    Not applicable. This describes a non-clinical mechanical performance study for a physical device, not an AI or machine learning model that would require a 'training set'.


    9. How the ground truth for the training set was established

    Not applicable. See point 8.


    K Number
    K120910
    Date Cleared
    2012-06-18

    (84 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Device Name
    CARDIQ XPRESS 2.0 WITH SNAPSHOT FREEZE OPTION CARDIQ XPRESS 2.0 REVEAL

    Intended Use

    CardIQ Xpress 2.0 is intended to provide an optimized non-invasive application to analyze cardiovascular anatomy and pathology and aid in determining treatment paths from a set of Computed Tomography (CT) Angiographic images. CardIQ Xpress 2.0 is a CT, noninvasive, image analysis software package, which aids in the diagnosis of cardiovascular disease, including coronary artery disease, functional parameters of the heart, heart structures, and follow-up for stent placement, bypasses, and plaque imaging.

    CardIQ Xpress 2.0 offers unique tools such as automatic tracking, which will pre-process the CT data into multiple viewing ports to allow for an expedited read time, improving workflow. With CardIQ Xpress 2.0, the user can color code the myocardial tissue to show hyperdense areas in the myocardial tissue of the heart. With the IVUS-like view, the user can color code the HU units of the plaque to better visualize the difference between calcified and noncalcified plaque in the wall of the vessel and the lumen to determine the amount of atherosclerosis. The user can see the different valve planes along with a variety of new layouts to align the heart. The IVUS-like view is created by applying GE's Volume Rendering on a cross-section perpendicular to the detected centerline. This view merely displays a cross section, as in IVUS imaging, and color codes it like IVUS images. No new or additional diagnostic information is added.

    CardIQ Xpress 2.0 is for use on the Advantage Workstation (AW) platform, CT scanner, PACS or Centricity stations, which can be used in the analysis of 2D or 3D CT angiography images/data derived from DICOM 3.0 CT scans.
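    The HU-based color coding of the IVUS-like view can be sketched as a simple thresholding scheme. The 130 HU cutoff below is the conventional coronary-calcium threshold from the imaging literature, not necessarily what CardIQ Xpress 2.0 actually uses; the function name and sample values are hypothetical.

```python
# Illustrative sketch only: the 130 HU cutoff is the conventional coronary
# calcium threshold from the literature, not necessarily GE's implementation.
# It shows the kind of HU-based color coding the IVUS-like view applies to
# plaque in the vessel wall.
CALCIUM_HU_THRESHOLD = 130  # conventional Agatston calcium cutoff (assumed)

def plaque_color(hu):
    """Color-code a vessel-wall voxel by its Hounsfield value (assumed scheme)."""
    if hu >= CALCIUM_HU_THRESHOLD:
        return "white (calcified)"
    return "yellow (noncalcified)"

# Hypothetical HU samples from a cross-section of the vessel wall
wall_voxels = [42, 95, 310, 520, 88]
print([plaque_color(v) for v in wall_voxels])
```

    A real implementation would apply such a mapping per pixel on the cross-section perpendicular to the detected centerline, rendering the result as a color overlay rather than text labels.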

    Device Description

    The GE Medical Systems CardIQ Xpress 2.0 software is a post-processing software option for the Advantage Workstation (AW) platform, CT scanner, PACS or Centricity systems. This product can be used in the analysis of CT angiographic images to view the coronary vessels and determine whether the patient has normal coronary arteries, arteriosclerosis, or severe stenosis requiring further treatment. The software can also assess heart structures, including valve imaging, heart motion, and ejection fraction. CardIQ Xpress 2.0 contains both graphic and text report capabilities with predefined templates for ease of use.

    CardIQ Xpress 2.0 with SnapShot* Freeze is additionally designed to reduce coronary artery motion blurring in a CT image.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification submission for the GE Healthcare CardIQ Xpress 2.0 with SnapShot* Freeze Option. However, the document does not explicitly state acceptance criteria or the study details that prove the device meets specific performance criteria. It focuses on demonstrating substantial equivalence to a predicate device (CardIQ Xpress 2.0).

    Based on the available information, here's what can be extracted and what is missing:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document does not explicitly state specific acceptance criteria in measurable terms (e.g., accuracy, sensitivity, specificity, or quantitative reduction in motion blurring) or reported device performance against such criteria. The submission aims to establish substantial equivalence to a predicate device, CardIQ Xpress 2.0. The "Summary of Non-Clinical Tests" and "Summary of Clinical Tests" sections mention verification and validation activities, but they don't provide the results in a quantifiable way related to acceptance criteria.

    2. Sample Size Used for the Test Set and Data Provenance:

    • Test Set Sample Size: Not explicitly stated.
    • Data Provenance: The images used were "CT acquired clinical images... obtained from a non-significant risk reader study of care patient images." The country of origin is not specified, and it is a "reader study," implying retrospective use of existing patient data.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    • Number of Experts: Not explicitly stated.
    • Qualifications of Experts: Not explicitly stated.

    4. Adjudication Method for the Test Set:

    • Adjudication Method: Not explicitly stated.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size:

    • The text mentions a "non-significant risk reader study," which implies human readers were involved. However, it does not explicitly state that it was an MRMC comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance.
    • Effect Size: No effect size or quantitative improvement of human readers with AI assistance is provided.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done:

    • The document implies that the device "aids in diagnosing" and "aids in determining treatment paths," suggesting it's an assistance tool for a human. It's not clear if a standalone performance study of the algorithm without human interaction was conducted and reported with specific metrics.

    7. The Type of Ground Truth Used:

    • The document does not explicitly state the type of ground truth used (e.g., expert consensus, pathology, outcome data). Given it's a reader study, expert consensus is a likely, but unconfirmed, possibility.

    8. The Sample Size for the Training Set:

    • Training Set Sample Size: Not mentioned. The document describes verification and validation using "CT acquired clinical images" but does not distinguish between training and test sets or their respective sizes.

    9. How the Ground Truth for the Training Set Was Established:

    • Ground Truth Establishment for Training Set: Not mentioned.

    In summary, the provided 510(k) summary focuses on demonstrating substantial equivalence based on fundamental scientific technology and adherence to quality assurance measures. It lacks detailed quantitative performance metrics, specific acceptance criteria, and comprehensive information about the clinical study design, including sample sizes, expert qualifications, and ground truth establishment methods, which are typically found in more detailed study reports.


    K Number
    K081925
    Device Name
    SNAPSHOT
    Date Cleared
    2008-09-02

    (57 days)

    Product Code
    Regulation Number
    872.1800
    Reference & Predicate Devices

    Intended Use

    Snapshot is intended to be used by dentists and other qualified professionals for producing diagnostic x-ray radiographs of dentition, jaws and other oral structures.

    Device Description

    Snapshot is a digital intraoral sensor system that connects to a workstation PC via USB. The essential function and purpose of the system in a dental clinic is to capture intraoral digital dental x-ray images. The system can be used with general intraoral X-ray units. Snapshot utilizes existing designs and shares parts with the predicate Sigma M sensor; the sensor is the same CMOS sensor as that of the predicate device. A USB 2.0 High Speed connection is used for image transfer to a PC. The basic system consists of a sensor (available in two sizes), workstation software, sensor holders, and hygienic covers.

    AI/ML Overview

    The provided document is a 510(k) summary for the Snapshot digital intraoral sensor system. It focuses on demonstrating substantial equivalence to a predicate device (Sigma M) rather than presenting a standalone study with acceptance criteria and performance data. Therefore, many of the requested details are not present in this document.

    Here's an analysis based on the information available:

    1. A table of acceptance criteria and the reported device performance

    This information is not available in the provided document. The 510(k) summary primarily focuses on demonstrating substantial equivalence based on design, composition, and function compared to a predicate device, rather than quantitative performance against specific acceptance criteria. It states: "As conclusion Snapshot is as safe, as effective, and performs as well as or better than the predicate device." This is a qualitative statement of equivalence, not a presentation of performance data against defined metrics.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    This information is not available in the provided document. There is no mention of a specific test set or data provenance for a performance study.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This information is not available in the provided document. As there's no mention of a test set or a study conducted to establish performance metrics, there's no information on experts or ground truth establishment.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This information is not available in the provided document.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    There is no indication that an MRMC comparative effectiveness study was done, or that the device involves AI assistance. The document describes a "digital intraoral sensor system" which is a hardware component for capturing X-ray images, not an AI-powered diagnostic tool. The focus is on the sensor's ability to capture images, not on assisting human readers with interpretation.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    This information is not applicable/available. The "Snapshot" is a digital intraoral sensor system, essentially a hardware component for image acquisition, not an algorithm that functions in a standalone capacity for diagnosis. Its purpose is to "capture intraoral digital dental x-ray images."

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    This information is not available in the provided document, as no specific performance study with ground truth is detailed.

    8. The sample size for the training set

    This information is not available in the provided document. The device is a digital sensor, not a machine learning algorithm that requires a training set in the typical sense.

    9. How the ground truth for the training set was established

    This information is not available in the provided document, as it's not relevant to the described device type.

    In summary:

    The provided document is a 510(k) premarket notification that asserts substantial equivalence of the Snapshot device to a predicate device (Sigma M) based on design and functional similarities. It does not contain details about specific performance studies with acceptance criteria, test sets, or ground truth establishment, which are typical for studies evaluating diagnostic accuracy or AI software performance. The device is a hardware system for image capture, not an AI-driven diagnostic tool.

