
510(k) Data Aggregation

    K Number: K242607
    Date Cleared: 2025-02-21 (171 days)
    Regulation Number: 892.2050
    Predicate For: N/A
    Intended Use

    ScanDiags Ortho L-Spine MR-Q software is an image-processing and measurement software tool that provides quantitative spine measurements from previously acquired DICOM lumbar spine Magnetic Resonance (MR) images for users' review, analysis, and interpretation. It provides the following functionality to assist users in visualizing and documenting area and distance measurements of relevant anatomical structures (vertebral body, intervertebral disc, neuroforamina, thecal sac) of the lumbar spine:

    Feature Segmentation;

    Feature measurement; and

    Export of measurement results to a PDF report containing measurement results and overlay images for the user's review, revision, and approval.

    ScanDiags Ortho L-Spine MR-Q software does not produce or recommend any type of medical diagnosis or treatment. Instead, it simply helps users to more easily identify and classify features in lumbar MR images and compile a report. The user is responsible for reviewing and verifying the software-generated measurements and approving draft report content using their medical judgment and discretion.

    The device is intended to be used only by hospitals and other medical institutions.

    Only DICOM images of MRI acquired from lumbar spine exams of patients aged 22 and above are considered valid input. ScanDiags Ortho L-Spine MR-Q software does not support DICOM images of patients who are pregnant, who underwent an MRI scan with contrast media, or who have post-operative complications or infections.

    Device Description

    ScanDiags Ortho L-Spine MR-Q software is a software as a medical device (SaMD) intended for visualization and quantification of lumbar spine anatomical structures, including vertebral bodies, intervertebral discs, neuroforamina, and thecal sacs, from a set of standard lumbar spine MRI images in DICOM (Digital Imaging and Communications in Medicine) format. The semi-automatic segmentation of these structures forms the basis for the distance and area measurement outputs. The software has features for log-in, viewing, revising, and saving measurement results in addition to generating PDF reports. The PDF report includes images and measurements.

    ScanDiags Ortho L-Spine MR-Q software includes a viewing application (ScanDiags DICOM Viewer) to visualize, review, and apply corrections to the measurement results shown as an overlay on the original lumbar spine MRI images. Pre-existing MR images of the lumbar spine are uploaded into the software for analysis. The semi-automatic segmentations are based on deep convolutional neural networks (DCNN), which are developed by applying well-established supervised deep learning methods to unstructured MRI scans (DICOM image format). ScanDiags Ortho L-Spine MR-Q combines deep learning, image analysis, and regression-based machine learning methods. The segmentations and distance measurements are user-modifiable. Results are reviewed and approved by the radiologist user before a PDF report is generated. Once approved, the resulting PDF report is sent to the clinician's PACS system, which stores the report within the corresponding patient MRI study.

    ScanDiags Ortho L-Spine MR-Q does not interface directly with any MR or data collection equipment; instead, the software uploads data files previously generated by such equipment. Its functionality is independent of the vendor type of the acquisition equipment. The analysis results are available on the ScanDiags DICOM Viewer screen and can be edited, saved, and approved. The approved measurement results are sent back to the PACS system as a Measurement Result PDF Report. The software does not perform any functions that could not be accomplished by a trained user with manual tracing methods; the purpose of the software is to save time and automate the tedious manual task of segmentation and distance measurement.

    ScanDiags Ortho L-Spine MR-Q software is an adjunct tool and is not intended to replace a radiologist's review of the MRI study, nor is it intended to replace his or her clinical judgment, and it does not detect, diagnose or identify any abnormalities. Radiologists must not use the generated output as a primary interpretation.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) submission information:

    Acceptance Criteria and Reported Device Performance

    The device, ScanDiags Ortho L-Spine MR-Q, uses machine learning (Deep Convolutional Neural Networks) for semi-automatic segmentation and quantitative measurements of lumbar spine anatomical structures from MRI images. Its performance was evaluated against ground truth established by expert radiologists.

    Table of Acceptance Criteria and Reported Device Performance:

    Intraclass Correlation Coefficient (ICC). Acceptance criterion (implicit from "successfully passed"): consistently high ICC (e.g., > 0.75 or 0.8, a common benchmark for good agreement).

    | Measurement | Result | Reported ICC, Mean [95% CI] |
    |---|---|---|
    | Vertebra Area | Passed | 0.95 [0.94 - 0.96] |
    | Vertebra Anterior Height | Passed | 0.85 [0.30 - 0.94] |
    | Vertebra Middle Height | Passed | 0.91 [0.63 - 0.96] |
    | Vertebra Posterior Height | Passed | 0.89 [0.87 - 0.91] |
    | Neuroforamen Area | Passed | 0.90 [0.86 - 0.93] |
    | Intervertebral Disc Area | Passed | 0.92 [0.87 - 0.94] |
    | Intervertebral Disc Anterior Height | Passed | 0.78 [0.73 - 0.82] |
    | Intervertebral Disc Middle Height | Passed | 0.85 [0.18 - 0.95] |
    | Intervertebral Disc Posterior Height | Passed | 0.74 [0.68 - 0.78] |
    | Thecal Sac Area | Passed | 0.94 [0.91 - 0.96] |
    | Thecal Sac Anteroposterior Diameter | Passed | 0.92 [0.90 - 0.94] |
    | Thecal Sac Mediolateral Diameter | Passed | 0.86 [0.83 - 0.88] |

    Dice Similarity Coefficient (DSC). Acceptance criterion: consistently high DSC (e.g., > 0.7 for good overlap).

    | Structure | Result | Reported DSC, Mean [95% CI] |
    |---|---|---|
    | Vertebra | Passed | 0.95 [0.95 - 0.96] |
    | Neuroforamen | Passed | 0.86 [0.85 - 0.86] |
    | Intervertebral Disc | Passed | 0.89 [0.89 - 0.90] |
    | Thecal Sac | Passed | 0.89 [0.89 - 0.90] |

    Mean Absolute Error (MAE). Acceptance criterion: implicitly low (consistent with passing criteria).

    | Measurement | Result | Reported MAE (mm) |
    |---|---|---|
    | Vertebra Anterior Height | Passed | 1.17 |
    | Vertebra Middle Height | Passed | 0.86 |
    | Vertebra Posterior Height | Passed | 0.79 |
    | Intervertebral Disc Anterior Height | Passed | 1.1 |
    | Intervertebral Disc Middle Height | Passed | 1.19 |
    | Intervertebral Disc Posterior Height | Passed | 0.96 |
    | Thecal Sac Anteroposterior Diameter | Passed | 0.81 |
    | Thecal Sac Mediolateral Diameter | Passed | 1.26 |

    Note: The document states "The device successfully passed the primary ICC acceptance criteria," "The device successfully passes the secondary DICE acceptance criteria," and "The device successfully passes the co-secondary MAE acceptance criteria," implying that specific thresholds were met, though the exact numerical criteria are not explicitly stated in this summary.
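The DSC and MAE metrics in the table above are standard segmentation and measurement metrics. As an illustrative sketch only (this is not code from the submission, and the toy masks and values are invented), they can be computed from a predicted binary mask and a ground-truth mask like so:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def mean_absolute_error(pred_mm, truth_mm) -> float:
    """MAE between algorithm measurements and ground-truth measurements (mm)."""
    return float(np.mean(np.abs(np.asarray(pred_mm) - np.asarray(truth_mm))))

# Toy 4x4 masks: algorithm segmentation vs. expert ground truth (invented data)
pred  = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])

print(round(dice_coefficient(pred, truth), 3))           # 2*5/(5+6) -> 0.909
print(round(mean_absolute_error([10.2, 8.1], [9.5, 8.0]), 3))  # 0.4
```

In the study, per-structure DSC and MAE values like these would be aggregated across the 100-study test set to produce the means and confidence intervals reported above.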

    Study Details Proving Acceptance Criteria

    1. Sample Size and Data Provenance:

      • Test Set Sample Size: 100 individual patient MRI studies.
      • Data Provenance: Retrospective, multicenter study. Data collected from two hospital groups in the United States: one in Missouri (18 patients from a rural hospital group) and one in North Carolina (82 patients from urban and rural hospital groups). Images were acquired from MRI systems from GE (40), Siemens Healthineers (42), and Philips (18).
    2. Number of Experts and Qualifications:

      • Number of Experts: 3.
      • Qualifications: US board-certified MSK (Musculoskeletal) radiologists. (Specific years of experience are not mentioned, but board certification implies a certain level of expertise).
    3. Adjudication Method for Ground Truth:

      • For Anatomic Structure Segmentation: Pixel-based majority opinion between the three radiologists.
      • For Area and Distance Measurements: Averaging the measurements of all three readers.
    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

      • No, a MRMC comparative effectiveness study was not done. The study focuses on the standalone performance of the algorithm against expert-derived ground truth. The device is described as an "adjunct tool" that requires human review and verification, but the provided performance data is for the algorithm itself, not human-AI collaboration.
    5. Standalone Performance Study:

      • Yes, a standalone (algorithm only without human-in-the-loop performance) study was performed. The "Machine Learning Performance Evaluation Summary" clearly outlines the results of the algorithm's performance (ICC, DSC, MAE) against the established ground truth.
    6. Type of Ground Truth Used:

      • Expert Consensus: The ground truth was established by the consensus (majority opinion for segmentation, average for measurements) of three US board-certified MSK radiologists.
    7. Training Set Sample Size:

      • The document states that images and cases used for verification and validation testing were "separate and carefully segregated from training datasets." However, the sample size for the training set is not provided in the excerpt. It mentions that the Deep Convolutional Neural Networks (DCNN) were developed by "applying well-established supervised deep learning methods on unstructured MRI scans (DICOM image format)."
    8. How Ground Truth for Training Set was Established:

      • The document implies that the DCNN utilized "supervised deep learning methods." While it doesn't explicitly detail the ground truth establishment for the training set, it can be inferred that it involved labeled data, likely expert annotations, similar to how the test set ground truth was established, given the nature of supervised learning for medical imaging segmentation and measurement. However, the specific process (e.g., number of annotators, their qualifications, adjudication method) for the training set is not described in this provided text.
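The ground-truth construction described in item 3 above (pixel-based majority opinion for segmentation, reader averaging for measurements) can be sketched in a few lines. This is an illustrative reconstruction with invented toy data, not the sponsor's actual pipeline:

```python
import numpy as np

# Three readers' binary segmentation masks for the same slice (toy 3x3 data)
reader_masks = np.array([
    [[1, 1, 0], [0, 1, 0], [0, 0, 0]],   # reader 1
    [[1, 1, 0], [0, 1, 1], [0, 0, 0]],   # reader 2
    [[1, 0, 0], [0, 1, 1], [0, 0, 0]],   # reader 3
])

# Pixel-based majority opinion: foreground where at least 2 of 3 readers agree
majority_mask = (reader_masks.sum(axis=0) >= 2).astype(np.uint8)

# Measurement ground truth: average of the three readers' values (mm)
reader_measurements = np.array([11.8, 12.1, 12.4])   # e.g. a disc height
gt_measurement = reader_measurements.mean()          # ≈ 12.1 mm

print(majority_mask)
print(round(gt_measurement, 1))   # 12.1
```

Majority voting yields a single consensus mask per structure, while averaging yields a single scalar reference value per measurement, which is what the ICC and MAE analyses are computed against.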

    K Number: K232787
    Date Cleared: 2023-10-06 (25 days)
    Regulation Number: 868.1980
    Intended Use

    ScanNav Anatomy Peripheral Nerve Block is indicated to assist qualified healthcare professionals to identify and label the below mentioned anatomy in live ultrasound images in preparation for ultrasound guided regional anesthesia prior to needle insertion for patients 18 years of age or older.

    The highlighting of structures in the following anatomical regions is supported:

    • Axillary level brachial plexus
    • Erector spinae plane
    • Interscalene level brachial plexus
    • Popliteal level sciatic nerve
    • Rectus sheath plane
    • Sub-sartorial femoral triangle / Adductor canal
    • Superior trunk of brachial plexus
    • Supraclavicular level brachial plexus
    • Longitudinal suprainguinal fascia iliaca plane
    • Femoral block

    ScanNav Anatomy Peripheral Nerve Block is an accessory to compatible general-purpose diagnostic ultrasound systems.

    Device Description

    ScanNav Anatomy Peripheral Nerve Block is a software (SaMD) which assists qualified healthcare professionals to identify and label relevant anatomical structures in preparation for ultrasound guided regional anesthesia prior to needle insertion for patients 18 years of age or older.

    The device receives ultrasound images in real-time from a compatible general-purpose ultrasound machine. It processes these images using deep learning artificial intelligence algorithms and highlights relevant anatomical structures. The ultrasound machine display remains unaffected, and the highlighting is only displayed on a general-purpose panel PC provided with the device.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the ScanNav Anatomy Peripheral Nerve Block device, based on the provided FDA 510(k) summary:

    Acceptance Criteria and Device Performance

    | Acceptance Criteria | Reported Device Performance (Femoral Block) |
    |---|---|
    | FP rate: mis-identification rate of safety-critical anatomical structures in the indicated procedures is less than 5%. | 1.1% (2 out of 183 scans) |
    | Accuracy (TP+TN) rate: correct highlighting of safety-critical anatomical structures in the indicated procedures at least 80% of the time. | 96.7% (177 out of 183 scans) |
    | FN rate: non-identification rate of safety-critical anatomical structures in the indicated procedures is less than 15%. | 2.2% (4 out of 183 scans) |

    Note: The software tests also had acceptance criteria (e.g., successful completion of unit test, integration test, etc.) but specific quantitative performance metrics were not provided in the summary, only that "all software tests have been successfully completed without any anomalies."
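The three reported rates are simple proportions of the 183 validation scans. As a quick sanity-check recomputation (illustrative only; the count breakdown is taken from the table above):

```python
# Recompute the reported Femoral Block rates from the scan counts
total_scans = 183
false_positive = 2      # safety-critical structure mis-identified
false_negative = 4      # safety-critical structure not identified
correct = total_scans - false_positive - false_negative   # 177 correct scans

fp_rate = 100 * false_positive / total_scans   # ≈ 1.1%  (criterion: < 5%)
fn_rate = 100 * false_negative / total_scans   # ≈ 2.2%  (criterion: < 15%)
accuracy = 100 * correct / total_scans         # ≈ 96.7% (criterion: ≥ 80%)

# All three acceptance criteria are met with margin
assert fp_rate < 5 and fn_rate < 15 and accuracy >= 80
print(f"FP {fp_rate:.1f}%  FN {fn_rate:.1f}%  Acc {accuracy:.1f}%")
```

Note that the counts are internally consistent: 2 + 4 + 177 = 183, so every scan falls into exactly one of the three outcome categories.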


    Study Details

    1. Sample Size for the Test Set and Data Provenance:

    • Sample Size: 183 scans were used for the safety and accuracy validation of the Femoral Nerve Block.
    • Data Provenance: The document does not specify the country of origin of the data or whether the study was retrospective or prospective. It only states that the tests used "established protocol and acceptance criteria same as that used for the predicate device."

    2. Number of Experts Used to Establish Ground Truth and Qualifications:

    • This information is not provided in the given 510(k) summary.

    3. Adjudication Method for the Test Set:

    • This information is not provided in the given 510(k) summary.

    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • A MRMC comparative effectiveness study was not explicitly mentioned in this summary. The study described focuses on the device's standalone performance against defined criteria, rather than comparing human readers with and without AI assistance.

    5. Standalone (Algorithm Only) Performance:

    • Yes, the provided performance metrics (FP rate, Accuracy, FN rate) directly reflect the standalone performance of the algorithm for the Femoral Nerve Block. The device processes ultrasound images using AI algorithms and highlights structures, and the reported rates indicate how well the algorithm performs this task independently.

    6. Type of Ground Truth Used:

    • The summary states that the validation involved "safety and Accuracy validation" through "established test protocols." While not explicitly named (e.g., "expert consensus" or "pathology"), given the context of identifying anatomical structures in ultrasound images, it is highly probable that the ground truth was established by expert consensus (e.g., skilled sonographers or regional anesthetists manually identifying and labeling the structures) or well-defined anatomical landmarks.

    7. Sample Size for the Training Set:

    • This information is not provided in the given 510(k) summary.

    8. How the Ground Truth for the Training Set Was Established:

    • This information is not provided in the given 510(k) summary.

    Intended Use

    Scanning Laser Ophthalmoscope Mirante [SLO/OCT Model]: The Mirante SLO/OCT with scanning laser ophthalmoscope and optical coherence tomography function and with Image Filing Software NAVIS-EX is a non-contact system for imaging the fundus and for axial cross sectional imaging of ocular structures. It is indicated for in vivo imaging and measurement of:
    · the retina, retinal nerve fiber layer, optic disc, and
    · the anterior chamber and cornea (when used with the optional anterior segment OCT adapter)
    and for color, angiography, autofluorescence, and retro mode imaging of the retina as an aid in the diagnosis and management. The Image Filing Software NAVIS-EX is a software system intended for use to store, manage, process, measure, and display patient data and clinical information from computerized diagnostic instruments through networks. It is intended to work with compatible NIDEK ophthalmic devices.

    Scanning Laser Ophthalmoscope Mirante [SLO Model]: The Mirante SLO with scanning laser ophthalmoscope function and with Image Filing Software NAVIS-EX is a noncontact system for imaging the fundus. It is indicated for color, angiography, auto-fluorescence, and retro mode imaging of the retina as an aid in the diagnosis and management. The Image Filing Software NAVIS-EX is a software system intended for use to store, manage, process, measure, analyze and display patient data and clinical information from computerized diagnostic instruments. It is intended to work with compatible NIDEK ophthalmic devices.

    Device Description

    The Nidek Mirante is an Optical Coherence Tomography (OCT) system intended for use as a non-invasive imaging device for viewing and measuring ocular tissue structures with micrometer-range resolution. The Nidek Mirante is a computer-controlled ophthalmic imaging system. The device scans the patient's eye using a low-coherence interferometer to measure the reflectivity of retinal tissue; the cross-sectional retinal tissue structure is composed of a sequence of A-scans. It has a traditional patient and instrument interface like most ophthalmic devices.

    The Nidek Mirante uses Fourier Domain OCT, a method that involves spectral analysis of the returned light rather than mechanically moving parts in the depth scan. Fourier Domain OCT allows scan speeds about 65 times faster than the mechanically limited Time Domain scan speeds. The Mirante utilizes Fourier spectroscopic imaging with a Michelson interferometer: the interfering light of the reference beam and the light reflected from the test eye is spectrally divided by a diffraction grating, and the signal is acquired by a line-scan camera. The signal is inverse Fourier transformed to obtain the reflection intensity distribution in the depth direction of the patient's eye. A galvano mirror scans the imaging light in the XY direction to obtain a tomographic image.

    The Mirante includes scanning laser ophthalmoscope (SLO) functions as well as the OCT functions. The SLO component uses a confocal scanning system for image capture. The imaging light emitted from the laser oscillator passes through the hole mirror and enters the patient's eye. The light returning from the eye is reflected by the hole mirror, and the signal is obtained by the detector. A resonant mirror and a galvanometer mirror placed in the imaging optical path scan the imaging light in the XY direction to obtain a flat surface image.
The device includes Image Filing Software NAVIS-EX which is a software system intended for use to store, manage, process, measure, and display patient data and clinical information from computerized diagnostic instruments through networks. It is intended to work with compatible NIDEK ophthalmic devices.
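The Fourier Domain OCT principle described above (spectral interferogram, inverse Fourier transform, depth reflectivity profile) can be illustrated with a toy one-reflector simulation. The signal model, pixel count, and depth value below are assumptions for illustration, not NIDEK's implementation:

```python
import numpy as np

# Toy FD-OCT A-scan: a reflector at one depth modulates the spectrum with a
# single fringe frequency; an inverse FFT of the spectrum recovers a peak at
# that depth bin. Idealized linear-in-wavenumber sampling is assumed.
n_pixels = 1024                      # line-scan camera pixels (assumed value)
k = np.arange(n_pixels)              # wavenumber sample index
depth_bin = 200                      # true reflector depth, in FFT bins

# Spectral interferogram: DC term plus a cosine fringe from one reflector
spectrum = 1.0 + 0.5 * np.cos(2 * np.pi * depth_bin * k / n_pixels)

# Inverse Fourier transform -> reflectivity vs. depth (one A-scan)
a_scan = np.abs(np.fft.ifft(spectrum))

# Locate the reflector, skipping the large DC bin and the mirror-image half
peak = int(np.argmax(a_scan[1:n_pixels // 2])) + 1
print(peak)   # 200
```

Scanning the beam in X and Y and repeating this reconstruction per position is what assembles the A-scans into the tomographic (B-scan) images the document describes.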

    AI/ML Overview

    The provided documentation describes the acceptance criteria and the study results for the Nidek Mirante Scanning Laser Ophthalmoscope and the Image Filing Software NAVIS-EX.

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly defined by demonstrating "substantial equivalence" to previously cleared predicate devices through agreement and precision analyses, and superior or equivalent image quality. The performance is reported in terms of comparisons against these predicate devices.

    Nidek Mirante (OCT Component) vs. Optovue Avanti (Predicate)

    Agreement Analysis (Mean Difference). Acceptance criterion: demonstrate agreement with the predicate device deemed clinically acceptable. Reported performance:
    • [ILM-RPE/BM] Thickness: higher than Avanti (10-20 µm thicker); all parameters and populations met agreement performance goals.
    • Disc Map RNFL Thickness: higher than Avanti (around 10 µm thicker), with the exception of TSNIT Temporal; all parameters and populations met agreement performance goals.
    • Disc Map Optic Disc: lower Horizontal C/D Ratio and Vertical C/D Ratio, higher Disc Area and Cup Area (All Subjects); similar differences for Normal, lower values for Glaucoma; all parameters and populations met agreement performance goals.
    • Cornea Radial CCT: higher than Avanti (around 15 µm thicker); agreement performance goals met for All Subjects, but not for the Normal and Corneal Disease populations.

    Precision Analysis (Repeatability). Acceptance criterion: demonstrate acceptable variation (coefficient of variation, %CV) for measurements. Reported performance:
    • [ILM-RPE/BM] Thickness: met precision goals for all parameters and groups.
    • Disc Map RNFL Thickness: met most precision goals for the Normal population; most met for Glaucoma, except one TSNIT Nasal and one TSNIT Temporal parameter (slightly missed).
    • Disc Map Optic Disc: met most precision goals for the Normal and Glaucoma populations, except Cup Area in both populations (slightly missed).
    • Cornea Radial CCT: met precision goals for all parameters and populations.

    Image Quality (ACA). Acceptance criterion: clinically useful and overall quality comparable to predicate. Reported performance: no statistically significant difference in clinical utility and overall quality compared to Avanti.

    Nidek Mirante (SLO Component) vs. OPTOS P200DTx (Predicate)

    | Metric | Acceptance Criteria (Implied by Substantial Equivalence) | Reported Device Performance (Nidek Mirante) |
    |---|---|---|
    | Image Quality (Color Fundus) | Clinically useful and overall quality comparable to predicate. | Better clinical utility and overall quality than the P200DTx (p < 0.0001) for all subjects and for each population (Normal, Glaucoma, Retinal Disease). |
    | Image Quality (B-FAF) | Clinically useful and overall quality comparable to predicate. | Better clinical utility and overall quality than the P200DTx (p < 0.0001) for all subjects and for each population (Normal, Glaucoma, Retinal Disease). |
    | Image Quality (G-FAF) | Clinically useful and overall quality comparable to predicate. | Better clinical utility and overall quality than the P200DTx (p < 0.0001) for all subjects and for each population (Normal, Glaucoma, Retinal Disease). |

    General Acceptance (Safety)

    | Metric | Acceptance Criteria | Reported Device Performance |
    |---|---|---|
    | Safety-related issues | No safety issues related to the study devices. | One adverse event (pinguecula), determined not related to the study device. |

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: A total of 170 subjects were enrolled in the study.
      • 45 subjects in the Normal group
      • 46 subjects in the Glaucoma group
      • 47 subjects in the Retinal Disease group
      • 32 subjects in the Corneal Disease group
      • 167 subjects completed the study.
    • Data Provenance: Prospective, comparative clinical study conducted at one clinical site in the United States.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document mentions "Masked graders reviewing Anterior Chamber Angle (ACA) and SLO images were masked to the subject, device type, subject population, configuration order, device order and results from other graders."
    However, the number of experts/graders and their qualifications are not explicitly stated in the provided text.

    4. Adjudication Method for the Test Set

    The document states that scan acceptability was determined by a 2-step process: the device operator identified acceptable scans, and then an Investigator reviewed the images, making the final determination of which scans were acceptable or unacceptable. This implies a form of sequential review, with the Investigator making the final determination. It does not specify an adjudication method like 2+1 or 3+1 for resolving discrepancies between multiple graders for image quality assessment. However, for the "image quality" assessments (ACA and SLO), it states "The results from the 3 masked graders were documented," and uses a "grader average." This suggests that the graders' scores were averaged rather than reconciled through a formal adjudication process.

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was Done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    This was not a multi-reader, multi-case (MRMC) comparative effectiveness study comparing human readers with AI vs. without AI assistance. The study compared the Nidek Mirante device's performance (including image quality) to predicate devices, and involved masked graders assessing image quality, but not in the context of improving human reader performance with AI assistance.

    6. If a Standalone (algorithm only without human-in-the-loop performance) was Done

    The primary purpose of the study was to assess the agreement and precision of the Nidek Mirante OCT measurements in comparison with a predicate device and to assess its image quality in comparison to predicate devices for both OCT and SLO. While precision analyses evaluate the device's inherent measurement consistency, and image quality assessment involves human graders, these do not represent a standalone "algorithm only without human-in-the-loop performance" in the general sense of an AI diagnostic algorithm operating independently. The device itself is an imaging system used by humans, not an AI for diagnosis.

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    The study does not establish an independent ground truth (e.g., pathology, clinical outcomes) for the disease states. Instead, it uses comparative effectiveness against predicate devices and agreement/precision analysis for quantitative measurements and human-graded qualitative image quality. The study categorized subjects into Normal, Glaucoma, Retinal Disease, and Corneal Disease groups. This implies that the diagnosis of these disease states served as a basis for evaluating the device's performance within those groups, but the precise method of establishing these diagnoses (e.g., expert consensus, other gold-standard tests) is not detailed for the "ground truth" of the disease classification itself. The "ground truth" for the device's performance metrics appears to be the measurements and image quality of the predicate devices, or the consistency of the Mirante itself.

    8. The Sample Size for the Training Set

    The document describes a clinical study to demonstrate substantial equivalence, but it does not mention a training set sample size or the development of an AI/ML algorithm that would typically involve a separate training set. The device itself is a scanning laser ophthalmoscope and optical coherence tomography system, not inherently an AI diagnostic tool. However, the NAVIS-EX software later references a "B-scan Denoising software" which is a new function. The document mentions this function "denoises a single B-scan image to an averaged image of 120 images added," suggesting it uses a computational approach, but does not provide details of a training set for this denoising algorithm if it were an AI-based method.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is explicitly mentioned for an AI/ML algorithm in the context of the Nidek Mirante device itself, the method for establishing ground truth for a training set is not applicable or described in the provided text. For the B-scan Denoising software, the mechanism is described as "averaging 120 images," which is an algorithmic process rather than a machine learning model requiring a ground-truth labeled training set in the typical sense.
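The "averaged image of 120 images" baseline mentioned above relies on a standard result: averaging N frames whose noise is uncorrelated reduces the noise standard deviation by roughly a factor of sqrt(N). A toy demonstration (all signal and noise values are assumed, not taken from the device):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a clean B-scan intensity profile (invented data)
true_bscan = np.linspace(0.0, 1.0, 256)

# Simulate 120 repeated noisy acquisitions of the same structure
n_frames = 120
frames = true_bscan + rng.normal(0.0, 0.1, size=(n_frames, true_bscan.size))

# Frame averaging: the document's reference standard for the denoiser
averaged = frames.mean(axis=0)

noise_single = np.std(frames[0] - true_bscan)   # noise in one frame
noise_avg = np.std(averaged - true_bscan)       # noise after averaging

# Expected improvement ≈ sqrt(120) ≈ 11x
print(round(noise_single / noise_avg, 1))
```

This is why a 120-frame average makes a reasonable reference image for evaluating a single-frame denoiser: its residual noise is an order of magnitude lower than any individual input frame.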


    K Number: K230095
    Date Cleared: 2023-02-06 (25 days)
    Regulation Number: 872.1800
    Predicate For: N/A
    Intended Use

    The ScanX Swift 2.0 is intended to be used for scanning and processing digital images exposed on Phosphor Storage Plates (PSPs) in dental applications.

    The ScanX Swift View 2.0 is intended to be used for scanning and processing digital images exposed on Phosphor Storage Plates (PSPs) in dental applications.

    Device Description

    The ScanX Swift 2.0 and ScanX Swift View 2.0 are dental devices that scan photostimulable phosphor storage plates that have been exposed in place of dental X-ray film, allowing the resulting images to be displayed on a personal computer monitor and stored for later retrieval. They will be used by licensed clinicians and authorized technicians for this purpose. The device is an intraoral plate scanner designed to read out all cleared plates of sizes 0, 1, 2, 3, and 4. The phosphor plates are made of rigid photostimulable material. Intraoral phosphor plate X-ray (also known as phosphor storage plate or PSP X-ray) eliminates the need for traditional film processing in dental radiography; phosphor storage plates can convert existing film-based imaging systems to a digital format that can be integrated into a computer or network system.

    The intraoral plates are placed in the mouth of the patient, exposed to X-rays, and then read out with the device. The read-out process is carried out with a 639 nm laser whose beam is moved across the plate by an oscillating MEMS mirror. The laser beam stimulates the top coating of the plates, which consists of X-ray-sensitive material. Depending on the exposed dose, the coating emits different levels of light. This light is captured by an optical sensor (Photomultiplier Tube, PMT) and converted into an electrical output signal, which is digitized and forms the data for the digital X-ray image. The data is transmitted via an Ethernet link to a computer. Before the plate is discharged, the remaining data is erased by an LED PCB.

    The user chooses which size of plate to use and prepares the device by inserting the appropriate plate insert. The user then exposes the plate, pushes it out of the light-protection envelope directly into the insert, closes the light-protection cover, and starts the read-out process. After the read-out process the picture is transmitted to the connected PC, where it can be viewed; the imaging plate is erased and ready for the next acquisition. The main difference between the two models is that the ScanX Swift View 2.0 has a larger display with touch capability that can show a preview of the scanned image. The device firmware is based on the predicate firmware and is of a moderate level of concern.

    AI/ML Overview

    The provided text is a 510(k) summary for a medical device (ScanX Swift 2.0, ScanX Swift View 2.0), which focuses on demonstrating substantial equivalence to a predicate device. It does not contain information about acceptance criteria for a specific clinical endpoint or a study proving a device meets such criteria.

    Instead, it discusses the technological characteristics, safety, and performance of the device in comparison to a predicate device based on non-clinical testing and engineering principles. The document explicitly states:

    • "Summary of clinical performance testing: Not required to establish substantial equivalence."

    Therefore, I cannot provide a table of acceptance criteria and reported device performance from a clinical study, nor details about sample sizes, ground truth establishment, or multi-reader multi-case studies, as this information is not present in the provided text.

    However, I can extract information related to non-clinical performance testing and technical characteristics, which are used to establish substantial equivalence.

    Here's an analysis based on the available information:

    Key Takeaways from the Document:

    • Device Type: Phosphor Storage Plate (PSP) scanner for dental X-ray images.
    • Purpose: Scan exposed PSPs, process digital images, and display/store them.
    • Approval Basis: Substantial equivalence to a predicate device (ScanX Edge K202633).
    • No Clinical Study: Clinical performance testing was explicitly stated as "Not required to establish substantial equivalence." This means the FDA cleared the device based on non-clinical data and comparison to a legally marketed predicate.

    Information Related to Device Performance and Equivalence (Non-Clinical):

    The document compares the subject devices (ScanX Swift 2.0, ScanX Swift View 2.0) to the predicate device (ScanX Edge) based on various technical specifications and non-clinical performance metrics.

    1. Table of "Acceptance Criteria" (Technical Specification Comparison) and Reported Device Performance (as listed for the subject devices):

    Since no acceptance criteria are explicitly stated as pass/fail for a clinical endpoint, I will present the comparative technical specifications as the basis for demonstrating equivalence and "performance" in this context. The "acceptance criteria" here are effectively the predicate device's performance, and the subject device's performance is compared against it for substantial equivalence.

    Characteristic | Predicate Device (ScanX Edge) "Acceptance Criteria" (for equivalence) | Subject Devices (ScanX Swift 2.0, ScanX Swift View 2.0) Reported Performance | Comparison / Impact Analysis
    Max. theoretical resolution | Approx. 40 Lp/mm | Approx. 40 Lp/mm | SAME
    MTF (at 3 LP/mm) | More than 40% | Horizontal 59%, Vertical 49% (in 12.5 µm pixel size mode) | Similar/better. (Subject device performance is higher than the predicate's stated "more than 40%")
    DQE (at 3 LP/mm) | More than 3.4% | Horizontal 8.5%, Vertical 10.5% (in 12.5 µm pixel size mode with 99 µGy) | Similar/better. (Subject device performance is significantly higher than the predicate's stated "more than 3.4%")
    Image bit depth | 16 bits | 16 bits | Identical
    Operating Principle | Laser / Photomultiplier Tube (PMT); components: Photomultiplier 2" Diode, Laser 639nm/10mW fiber-coupled laser diode | Laser / Photomultiplier Tube (PMT); components: Photomultiplier 2" Diode, Laser 639nm/10mW fiber-coupled laser diode | Identical. Note: While the components are identical, a new "Flying-Spot configuration (PCS technology)" is used, which was cleared in a predecessor device (K170733), suggesting equivalence in efficacy despite a change in the exact scanning mechanism.
    Supported Plate Sizes | Size 0 (22x35mm), 1 (24x40mm), 2 (31x41mm) | Size 0, 1, 2, 3 (27x54mm), 4 (57x76mm) | Similar. The predicate device uses smaller phosphor plates: Size 0, 1 and 2. The subject devices support these and add Size 3 and 4, which were available on a previous model (K170733), implying this expansion is also a previously cleared technology. This is presented as an enhancement rather than a deviation that would impact safety/effectiveness negatively.
    Data Transfer | Ethernet link | Ethernet link (all models); WLAN interface or removable storage (XPS07.2A1 only) | Similar. The View model offers additional flexibility. Risks associated with WLAN would be addressed by standards compliance (e.g., IEC 60601-1-2 and "Radio Frequency Wireless Technology in Medical Devices" guidance).
    Image Generation | Image assembled by imaging software (e.g., VisionX) | Image assembled within the image plate scanner (using same algorithm as K192743) | New. The raw image data is the same, and the algorithm used is the same as already cleared for the VisionX imaging software (K192743), suggesting this change does not impact safety or effectiveness.
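    The MTF and DQE figures quoted for these devices are not independent quantities: under IEC 62220-1, DQE is computed from the measured MTF, the noise power spectrum (NPS), and the incident photon fluence, in simplified form DQE(f) = K² · MTF(f)² / (q · NPS(f)) for mean signal K. The sketch below shows only this simplified relation with made-up numbers; it is not the standard's full measurement procedure.

```python
def dqe(mtf_f, mean_signal, photon_fluence, nps_f):
    """Simplified DQE relation: DQE(f) = K^2 * MTF(f)^2 / (q * NPS(f)).
    All quantities must use consistent units; inputs here are illustrative."""
    return (mean_signal ** 2) * (mtf_f ** 2) / (photon_fluence * nps_f)

# Halving the MTF at a given frequency quarters the DQE, all else equal,
# which is why MTF and DQE are usually reported at the same frequency.
assert dqe(0.5, 1.0, 1.0, 1.0) == 0.25 * dqe(1.0, 1.0, 1.0, 1.0)
```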

    2. Sample size used for the test set and the data provenance:

    • Not explicitly stated for performance testing. The document refers to non-clinical performance testing (MTF, DQE, noise power spectrum) in accordance with IEC 62220-1:2003, which would involve imaging phantoms or test objects.
    • Data Provenance: Not specified, but given the manufacturer is German (DURR DENTAL SE), the non-clinical testing likely occurred in a controlled lab environment, presumably in Germany or where their R&D facilities are located. These tests are inherently "prospective" in the sense that they are conducted to characterize the specific device.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts for clinical studies:

    • Not applicable. The submission states no clinical performance testing was required. For non-clinical tests like MTF/DQE, ground truth is established by the design of the test phantom and the known physical properties being measured.

    4. Adjudication method for the test set (for clinical studies):

    • Not applicable. No clinical studies were performed.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:

    • No. The document explicitly states: "Summary of clinical performance testing: Not required to establish substantial equivalence." Therefore, no MRMC study was performed.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • The device itself is a standalone imaging acquisition and processing system. The performance metrics (MTF, DQE) are inherently standalone measurements of the device's image quality output.
    • The ScanX Swift View 2.0 model has a "Stand-Alone-Mode" where it can operate without a connection to a computer, generate image data, and store it on a USB stick. This is a functional feature, not a separate performance study.

    7. The type of ground truth used:

    • For the non-clinical image quality performance metrics (MTF, DQE, noise power spectrum), the ground truth is established by physical standards and phantoms as defined by the IEC 62220-1:2003 standard. These involve known patterns and controlled radiation exposures to objectively measure the imaging system's capabilities.

    8. The sample size for the training set:

    • Not applicable. As this is a 510(k) for a hardware device (PSP scanner) and not an AI/ML algorithm requiring a training set in the typical sense, there is no mention of a "training set" for image processing algorithms. The image processing algorithms used within the device are stated to be "the same as cleared in K192743 for the imaging software VisionX," implying they are established and validated algorithms, not newly trained ones for this specific device.

    9. How the ground truth for the training set was established:

    • Not applicable, as there is no training set discussed for this device submission.

    K Number
    DEN220024
    Date Cleared
    2022-10-18

    (193 days)

    Product Code
    Regulation Number
    868.1980
    Type
    Direct
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    Intended Use

    ScanNav Anatomy Peripheral Nerve Block is indicated to assist qualified healthcare professionals to identify and label the below mentioned anatomy in live ultrasound images in preparation for ultrasound guided regional anesthesia prior to needle insertion for patients 18 years of age or older.

    The highlighting of structures in the following anatomical regions is supported:

    • Axillary level brachial plexus
    • Erector spinae plane
    • Interscalene level brachial plexus
    • Popliteal level sciatic nerve
    • Rectus sheath plane
    • Sub-sartorial femoral triangle / Adductor canal
    • Superior trunk of brachial plexus
    • Supraclavicular level brachial plexus
    • Longitudinal suprainguinal fascia iliaca plane

    ScanNav Anatomy Peripheral Nerve Block is an accessory to compatible general-purpose diagnostic ultrasound systems.

    Device Description

    ScanNav Anatomy Peripheral Nerve Block is a software medical device which assists anesthetists and other qualified healthcare professionals in the identification of anatomical structures within ultrasound images during ultrasound-guided regional anesthesia (UGRA) procedures by highlighting the relevant anatomical structures in real time.

    The device performs the highlighting using deep learning artificial intelligence technology based on convolutional neural networks (CNNs). These deep-learning models generate a colored overlay that allows the user to identify the specific anatomical structures of interest for the procedure. A separate monitor displays the highlighted images as an overlay on top of the ultrasound image, so the original view from the ultrasound machine is not affected. The deep learning models are locked, and they do not continue to learn in the field.

    The device interfaces with an ultrasound machine that has an external monitor output meeting the compatibility requirements. The ScanNav Anatomy Peripheral Nerve Block runs on a mobile computing platform (a commercial off-the-shelf panel PC) that performs the processing, with an integrated touchscreen monitor to display the user interface and anatomy highlighting.

    The Software as a Medical Device is packaged with a tablet PC, power cable, compatible plug, mounting bracket, and instructions for mounting the tablet to the ultrasound host. The tablet acts as a separate monitor that displays the highlighted images as an overlay on top of the ultrasound image, so the original view from the ultrasound machine is not affected. The ScanNav Anatomy Peripheral Nerve Block system is composed of a software medical device and other non-medical components: a panel PC, power supply, an HDMI interface cable, and a VESA mount.

    AI/ML Overview

    Acceptance Criteria and Device Performance Study for ScanNav Anatomy Peripheral Nerve Block

    This document outlines the acceptance criteria for the ScanNav Anatomy Peripheral Nerve Block device and details the studies conducted to demonstrate its compliance.

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria Category | Specific Metric | Acceptance Criterion | Reported Device Performance
    Clinical Performance (Primary Endpoint) | Assistance in obtaining correct ultrasound view prior to needle insertion | Majority view (at least 8/15 participants) agree the device assists. | 63% (19/30) of participants were assisted, meeting the criterion.
    Clinical Performance (Secondary Endpoint 1) | Assistance in identification of anatomical structures (up to BMI 35 kg/m2) | Majority view (at least 8/15 participants) agree the device assists. | 70% (21/30) of participants were assisted, meeting the criterion.
    Clinical Performance (Secondary Endpoint 2) | Assistance in supervision and training for anatomical structure identification | Majority view (at least 8/15 supervising experts) agree the device assists. | 87% (13/15) of experts were assisted, meeting the criterion.
    Clinical Performance (Secondary Endpoint 3) | Improvement in operator confidence | Majority view (at least 8/15 participants) agree the device improves confidence. | 63% (19/30) of participants had improved confidence, meeting the criterion.
    Safety (Misidentification) | Frequency of misidentification (FP rate) of anatomical structures | Not explicitly stated as a numerical criterion for all blocks, but assessed as a primary endpoint in one study. | Varies by block, ranging from 0% (ESP, Adductor) to 21.9% (SFIC). Details in section 2 below.
    Safety (Adverse Events Risk) | Frequency of highlighting risking an adverse event | <= 5% of total for each specified adverse event risk. | Specific percentages are redacted (b)(4), but implied to be within acceptable limits as the device was granted De Novo.
    Accuracy (Correct Identification) | Frequency of correct identification (TP+TN) | >= 80% of total for each block. | Varies by block, ranging from 76.2% (SFIC) to 98.3% (SC). Details in section 2 below.
    Human Factors | Successful completion of essential and critical tasks by users | All participants complete essential and critical tasks without patterns of use failures, confusion, or difficulties. | All 30 participants completed tasks. No UI design issues, use errors, or task failures were found.
    Software | Compliance with design and safety standards | Software documentation acceptable, with Major Level of Concern addressed. | Documentation reviewed and accepted. Supports cybersecurity and hazard analysis.
    Electromagnetic Compatibility & Electrical Safety | Compliance with IEC 60601-1-12 and IEC 60601-1-2 | Test results support electrical safety and electromagnetic compatibility. | Test results support compliance.

    2. Study Details for Device Performance

    The provided documentation describes two main studies relevant to device performance: a Clinical Validation Study to assess Performance and predict Adverse Events (IU2021 AG 07) and a Human Factors (HF) Study Design.

    Clinical Validation Study (IU2021 AG 07)

    • Sample Size: 40 volunteers.
    • Data Provenance: Single-center, prospective validation study conducted in the USA (Oregon Health & Science University, Portland).
    • Number of Experts for Ground Truth: Three (3) expert anesthesiologists in UGRA.
    • Qualifications of Experts: Anesthesiologists competent to perform independent UGRA.
    • Adjudication Method: Majority opinion (2/3) determined TP, TN, FP, FN, and AE rates.
    • Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study: No, this study was primarily a standalone validation of the device's highlighting performance against expert consensus. It did not directly compare human readers with and without AI assistance to quantify an effect size in human improvement. However, the Human Factors study (described below) did involve performance "with and without the aid of the device," providing some insight into assisted performance.
    • Standalone Performance (Algorithm-only): Yes, the study assessed the "device output" and its highlighting performance, with experts evaluating the device's interpretations in isolation. Experts answered questions on the device's highlighting performance (TP, TN, FP, FN).
    • Type of Ground Truth: Expert consensus. Three expert anesthesiologists viewed recorded ultrasound clips side-by-side with the ScanNav Anatomy PNB output and determined the correctness of the highlighting.
    • Sample Size for Training Set: Not explicitly stated in the provided text. The device uses "deep learning artificial intelligence technology based on convolutional networks (CNNs)."
    • How Ground Truth for Training Set was Established: Not explicitly stated, but typically involves expert annotation of ultrasound images for relevant anatomical structures.
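    The 2-of-3 majority adjudication described above can be sketched as follows; the function and label names are illustrative, not the study's actual tooling.

```python
from collections import Counter

def adjudicate(expert_labels):
    """Return the classification at least two of the three experts agree
    on (e.g. 'TP', 'TN', 'FP', 'FN'), or None on a three-way split."""
    label, count = Counter(expert_labels).most_common(1)[0]
    return label if count >= 2 else None

# Two of three experts call the device's highlighting a true positive.
assert adjudicate(["TP", "TP", "FN"]) == "TP"
```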

    Key Performance Metrics (from IU2021 AG 07):

    • Primary Endpoint (Misidentification - FP Rate):
      • Axillary: 0.3%
      • ESP: 0.0%
      • IS: 1.3%
      • Pop: 0.6%
      • RS: 3.2%
      • Adductor: 0.0%
      • ST: 5.2%
      • SC: 0.8%
      • SFIC: 21.9%
    • Secondary Endpoint (Accuracy - Correct Identification - TP+TN Rate):
      • Axillary: 97.7%
      • ESP: 88.8%
      • IS: 94.1%
      • Pop: 98.1%
      • RS: 96.8%
      • Adductor: 90.4%
      • ST: 90.9%
      • SC: 98.3%
      • SFIC: 76.2%
    • Adverse Event Rates: Redacted sections (b)(4) indicate specific risks (PONS, LAST, Pneumothorax, Peritoneum risk) were assessed per block, aiming for <= 5% frequency.
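    Checking the reported per-block rates against the stated criteria is mechanical. The sketch below uses the figures quoted above and flags blocks below the >= 80% correct-identification criterion; the <= 5% criterion applied to specific adverse-event risks, whose percentages are redacted, so it is not checked here.

```python
# Per-block rates (percent) as reported in study IU2021 AG 07.
fp_rate = {"Axillary": 0.3, "ESP": 0.0, "IS": 1.3, "Pop": 0.6, "RS": 3.2,
           "Adductor": 0.0, "ST": 5.2, "SC": 0.8, "SFIC": 21.9}
accuracy = {"Axillary": 97.7, "ESP": 88.8, "IS": 94.1, "Pop": 98.1,
            "RS": 96.8, "Adductor": 90.4, "ST": 90.9, "SC": 98.3,
            "SFIC": 76.2}

# Blocks failing the >= 80% correct-identification (TP+TN) criterion.
below_target = sorted(b for b, a in accuracy.items() if a < 80.0)
```

    Only SFIC falls below the 80% accuracy criterion, consistent with it also showing the highest misidentification rate (21.9%).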

    Human Factors (HF) Study Design

    • Sample Size: 30 anesthesiologists (15 Expert, 15 Trainee).
    • Data Provenance: Summative usability validation study conducted in a simulated interventional procedural lab setting in the USA (b)(4).
    • Number of Experts for Ground Truth: The study involved participants' self-assessment via questionnaires and a panel of three (3) independent experts who reviewed recorded scans for later analysis.
    • Qualifications of Experts:
      • Expert Participants (15): Capable of independent clinical practice of UGRA, deliver regular clinical care, 14 are members of relevant professional societies, 11 hold advanced further training in UGRA.
      • Trainee Participants (15): Undergoing training for UGRA procedures; 7 deliver regular clinical care, 13 are members of relevant professional societies.
      • Independent Expert Panel (3): Expertise in UGRA, reviewed recorded scans.
    • Adjudication Method: For the expert panel, majority panel view determined the device's performance and safety profile for each scan. For primary and secondary clinical endpoints, it was based on majority view of participants (at least 8/15) for each group (experts, trainees, or combined as 30 participants).
    • Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study: Yes, in essence. The study involved participants (readers) performing scans with and without the aid of the device on two models (cases). This directly assesses the impact of the AI assistance on user performance and perceptions.
      • Effect Size of Human Readers Improve with AI vs. Without AI Assistance:
        • Assistance in obtaining correct ultrasound view: 63% (19/30) participants were assisted.
        • Assistance in identification of anatomical structures: 70% (21/30) participants were assisted.
        • Assistance in supervision and training (for experts): 87% (13/15) experts were assisted.
        • Improved operator confidence: 63% (19/30) participants reported improved confidence.
        • Reduction of risk of mistaking an incorrect view: 70% reduction in this risk reported (presumably from expert panel analysis based on device use in the study).
        • (b)(4) increase in risk (redacted amount) where incorrect highlighting increased the risk.
    • Standalone Performance (Algorithm-only): The expert panel independently evaluated ScanNav Anatomy PNB highlighting on recorded scans, which can be seen as an assessment of the algorithm's standalone performance, albeit within the context of generated images from user interaction. However, the clinical validation study (IU2021 AG 07) is a more direct evaluation of standalone performance (TP, TN, FP, FN rates).
    • Type of Ground Truth:
      • Participant self-assessment/perception: Through questionnaires regarding assistance, confidence, and identification.
      • Expert consensus: The independent panel of three experts reviewed recorded scans and device highlighting, completing the same questionnaire as participants, with their "majority panel view" forming a ground truth for device performance and safety.
    • Sample Size for Training Set & How Ground Truth for Training Set was Established: Not stated in the provided text.

    K Number
    K201456
    Device Name
    Scan Monitor
    Manufacturer
    Date Cleared
    2021-10-05

    (491 days)

    Product Code
    Regulation Number
    870.2340
    Reference & Predicate Devices
    Intended Use

    The Scan Monitor is intended to record, store and transfer single-channel electrocardiogram (ECG) rhythms. The Scan Monitor also displays ECG rhythms and detects the presence of atrial fibrillation (when the monitor is prescribed or used under the care of a physician). The Scan Monitor is intended for use by healthcare professionals, patients with known or suspected heart conditions and health-conscious individuals.

    The Scan Monitor is also indicated for use in measuring and displaying functional oxygen saturation of arterial hemoglobin (%SpO2). The Scan Monitor is intended for spot-checking of adult patients in hospitals, clinics, long-term care facilities, and homes.

    Device Description

    The Scan Monitor is a wearable, Bluetooth-connected, wrist-worn watch that records two medical measurements, heart activity (electrocardiogram, ECG) and oxygen saturation (SpO2), as well as other measurements including step cycles, running, biking, and walking. The Scan Monitor has a companion mobile application called Health Mate. The Scan Monitor is available in two sizes, 38 mm and 42 mm, which have different watch faces but are otherwise identical.

    The Scan Monitor has two stainless steel electrodes integrated on the back case of the watch and are always in contact with the skin. One of these two electrodes is used to obtain a reference signal and reduce the noise on the ECG signal. A third electrode is accessible by the free hand (the hand that does not wear the device) on the top of the device.

    The Scan Monitor classifies ECG signals as follows:

    • Normal Sinus Rhythm;
    • Atrial Fibrillation;
    • Inconclusive;
    • Noisy.

    The SpO2 measurements are obtained by a photoplethysmography sensor located on the back case of the product. This sensor is composed of three LEDs (green, red, and infrared) and two photodiodes (one broadband and one with a green filter). The Scan Monitor is validated for an SpO2 range of 70% to 100%, and an SpO2 range between 85% and 100% is displayed on the gauge.

    All measurements obtained by the Scan Monitor are available in the companion Health Mate app.

    AI/ML Overview

    The Withings Scan Monitor has two main functionalities: ECG rhythm recording and Atrial Fibrillation (AFib) detection, and SpO2 measurement. Here's a breakdown of the acceptance criteria and study information for each:


    ECG Rhythm Recording and Atrial Fibrillation (AFib) Detection

    1. Table of Acceptance Criteria and Reported Device Performance (AFib Detection)

    Acceptance Criteria | Reported Device Performance
    Sensitivity (of AFib detection) | 96.3% (95% CI lower bound: 89.4%)
    Specificity (of AFib detection) | 100% (95% CI lower bound: 96.7%)
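    The summary reports point estimates with 95% CI lower bounds but gives neither the underlying counts nor the interval method. As an illustration only, the sketch below computes a Wilson score lower bound for a hypothetical split of 26 of 27 AFib recordings detected (~96.3%); it will not necessarily reproduce the reported 89.4%, since the true sample split and method are unknown.

```python
import math

def wilson_lower_bound(successes, n, z=1.96):
    """Lower limit of the two-sided 95% Wilson score interval for a
    binomial proportion (z = 1.96 for 95% coverage)."""
    p = successes / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - margin) / denom

# Hypothetical split: 26 of 27 AFib recordings correctly detected.
lb = wilson_lower_bound(26, 27)
```

    The gap between a point estimate and its lower bound shrinks as the test set grows, which is why the sample size behind such claims matters.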

    2. Sample Size and Data Provenance

    • Test Set Sample Size: 262 participants
    • Data Provenance: Prospective, multicentric, comparative, cross-over study. The document does not specify the country of origin of the data.

    3. Number and Qualifications of Experts for Ground Truth

    The document does not explicitly state the number of experts or their specific qualifications (e.g., years of experience) used to establish the ground truth for the ECG study. It implicitly refers to "clinical performance" evaluation, suggesting medical professionals were involved in defining the ground truth for AFib and normal sinus rhythm.

    4. Adjudication Method

    The document does not specify the adjudication method (e.g., 2+1, 3+1, none) for the test set's ground truth.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    A MRMC comparative effectiveness study involving human readers with and without AI assistance was not explicitly mentioned. The study focused on the performance of the Scan Monitor software itself.

    6. Standalone Performance

    Yes, a standalone performance study was done for the AFib detection algorithm. The reported sensitivity and specificity are for the device's software detecting AFib.

    7. Type of Ground Truth Used

    The ground truth for the AFib detection study was established through clinical evaluation, likely by expert interpretation of reference ECGs, given its "clinical performance" context.

    8. Sample Size for Training Set

    The sample size for the training set is not provided in the document.

    9. How Ground Truth for Training Set was Established

    The document does not provide details on how the ground truth for the training set was established.


    SpO2 Measurement

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria | Reported Device Performance
    Met acceptance criteria set forth in FDA's guidelines and ISO 80601-2-61. | The clinical study results demonstrated that the device met the acceptance criteria and showed comparable performance to the chosen reference device (Oxitone 1000). Specific quantitative metrics (e.g., accuracy, bias) are not provided in this summary but are referenced as meeting the standards.
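    For reference, the accuracy metric behind ISO 80601-2-61 pulse-oximeter validation is the root-mean-square difference (Arms) between device SpO2 readings and reference SaO2 values over the claimed range; an Arms of <= 4 percentage points is a common acceptance limit, though the summary does not quote the device's value. The paired readings below are invented for illustration, not study data.

```python
import math

def a_rms(spo2_readings, sao2_references):
    """Accuracy root-mean-square (Arms) as used in ISO 80601-2-61:
    RMS difference between device SpO2 and reference SaO2, in % points."""
    diffs = [s - r for s, r in zip(spo2_readings, sao2_references)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative paired readings (percent saturation), not study data.
device = [97, 92, 88, 79, 73]
reference = [96, 94, 87, 81, 72]
```

    Note that Arms penalizes both bias and scatter, so a device with a small systematic offset and low noise can still meet the limit.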

    2. Sample Size and Data Provenance

    • Test Set Sample Size: 15 participants
    • Data Provenance: Clinical study. The document does not specify the country of origin or whether it was retrospective or prospective.

    3. Number and Qualifications of Experts for Ground Truth

    The document does not explicitly state the number of experts or their specific qualifications used to establish the ground truth for the SpO2 study. Ground truth for pulse oximetry is typically established using a reference oximeter or arterial blood gas analysis in a controlled setting.

    4. Adjudication Method

    The document does not specify the adjudication method for the SpO2 test set.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    A MRMC comparative effectiveness study was not mentioned for the SpO2 functionality.

    6. Standalone Performance

    Yes, a standalone performance study was done for the SpO2 functionality, as the device's measurement performance was evaluated against established standards and a reference device.

    7. Type of Ground Truth Used

    The ground truth for the SpO2 study was established in accordance with ISO 80601-2-61:2017 and FDA's guidance for pulse oximeters, which typically involves comparing the device's readings to a validated reference method, such as arterial blood gas analysis or a highly accurate laboratory oximeter.

    8. Sample Size for Training Set

    The sample size for the training set is not provided in the document.

    9. How Ground Truth for Training Set was Established

    The document does not provide details on how the ground truth for the training set was established.


    K Number
    K202633
    Device Name
    ScanX Edge
    Manufacturer
    Date Cleared
    2020-10-07

    (26 days)

    Product Code
    Regulation Number
    872.1800
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The ScanX Edge is intended to be used for scanning and processing digital images exposed on Phosphor Storage Plates (PSPs) in dental applications.

    Device Description

    The ScanX Edge is a dental device that scans photostimulable phosphor storage plates that have been exposed in place of dental X-ray film, allowing the resulting images to be displayed on a personal computer monitor and stored for later retrieval. It will be used by licensed clinicians and authorized technicians for this purpose. The device is an intraoral plate scanner designed to read out intraoral plates of the sizes 0, 1, and 2. Intraoral plate scanners have been state of the art since 2002 and are available in various designs from many companies around the world.

    The intraoral plates are placed in the patient's mouth, exposed to X-rays, and then read out with the device. The read-out process is carried out with a 635nm laser whose beam is moved across the surface of the plate by an oscillating MEMS mirror. The laser stimulates the top coating of the plates, which consists of X-ray-sensitive material; depending on the exposed dose, the coating emits different levels of light. This light is captured by an optical sensor (Photo Multiplier Tube/PMT) and converted into an electrical output signal, which is digitized to form the data for the digital X-ray image. The data is transmitted via an Ethernet link to a computer. Before the plate is discharged, the remaining data is erased by an LED PCB.

    The user chooses the required plate size and prepares the device by inserting the appropriate plate insert, exposes the plate, and then pushes it out of the light protection envelope directly into the insert. The user closes the light protection cover and starts the read-out process. After read-out, the picture is transmitted to the connected PC, where it can be viewed; the imaging plate (IP) is erased and ready for the next acquisition.

    AI/ML Overview

    The provided text describes the Dürr Dental SE ScanX Edge device (K202633) and its substantial equivalence to a predicate device, the ScanX Intraoral View (K170733). However, it does not contain a study that establishes acceptance criteria and then proves the device meets those criteria with performance data from a specific study.

    Instead, the document focuses on demonstrating that the ScanX Edge maintains similar performance characteristics and safety profiles compared to a previously cleared device. It references technical testing and standards compliance rather than a clinical performance study with defined acceptance criteria and performance metrics for the device itself.

    Therefore, I cannot provide a table of acceptance criteria and reported device performance from a dedicated study as that information is not present in the provided text. I will extract the available performance specifications and the context of their assessment.

    Here's a breakdown of the information that can be extracted from the document, with explanations for what is missing:


    1. Table of Acceptance Criteria and Reported Device Performance

    As noted above, the document does not present a specific study that defines acceptance criteria and then reports performance against those criteria. Instead, it compares the ScanX Edge's technical specifications to those of its predicate device, often stating that the values are "similar." These comparisons are made against established standards for general X-ray imaging devices.

    Metric (acceptance criteria are inferred as being "Similar" or "More than XX% at YY lp/mm" relative to the predicate) | Predicate Device (ScanX Intraoral View) Performance | Subject Device (ScanX Edge) Performance
    Image quality - theoretical resolutions | 10, 20, 25 or 40 LP/mm | Max. theoretical resolution approx. 16.7 Lp/mm
    MTF (Modulation Transfer Function) at 3 lp/mm | More than 45% | More than 40%
    DQE (Detective Quantum Efficiency) at 3 lp/mm | More than 7.5% | More than 3.4%
    Image data bit depth | 16 bits | 16 bits

    Note: The document states that the "lower DQE did not adversely affect image quality as shown by our SSD testing," implying that despite a numerically lower DQE, the overall image quality was deemed acceptable, likely within the context of substantial equivalence to the predicate in dental applications.
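    The "max. theoretical resolution" figures in such comparisons follow directly from the sampling pitch via the Nyquist limit, f = 1 / (2 · pitch). The pitch values below are inferred from the quoted resolutions (12.5 µm for 40 lp/mm, about 30 µm for 16.7 lp/mm) and are not stated in the submission.

```python
def nyquist_lp_per_mm(pixel_pitch_um):
    """Nyquist-limited resolution in line pairs per mm for a given
    sampling pitch in micrometres: f = 1 / (2 * pitch_in_mm)."""
    pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

# A 12.5 µm pitch yields the 40 lp/mm figure quoted for the finer mode.
assert round(nyquist_lp_per_mm(12.5)) == 40
```

    This is only the sampling limit; the realized resolution is further constrained by the laser spot size and phosphor light spread, which is what the MTF measurements capture.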


    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: Not specified. The document refers to "MTF and DQE image performance testing" and "SSD testing" but does not provide details on the number of images or cases used for these tests.
    • Data Provenance: Not specified. The document does not mention the country of origin of the data or whether it was retrospective or prospective. Implicitly, as Dürr Dental SE is a German company, some testing might have occurred in Germany, but this is not stated for the performance data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified.

    The performance tests referenced (MTF, DQE, noise power spectrum) are objective technical measurements performed on the device itself, typically using phantoms or controlled inputs rather than human interpretation of clinical images. Therefore, the concept of "ground truth established by experts," as typically applied in diagnostic AI studies, is not directly applicable to these technical metrics. The "SSD testing" mentioned in relation to DQE might involve human assessment, but details are not provided.


    4. Adjudication method for the test set

    Not applicable/Not specified for the referenced technical performance tests (MTF, DQE). These are objective measurements rather than subjective assessments requiring adjudication.


    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not conducted or reported in this document.
    • Effect Size: Not applicable, as no MRMC study was performed.

    This device is described as an "Extraoral source x-ray system" for scanning phosphor plates and displaying digital images. It is not an AI algorithm intended to assist human readers in diagnosis. Therefore, an MRMC study assessing AI assistance is outside the scope of this device's function as described.


    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done

    A "standalone" performance assessment was done in the sense that the intrinsic physical image quality metrics (MTF, DQE, theoretical resolution) of the device were measured and reported. These metrics reflect the device's inherent capability to produce high-quality images, independent of human interpretation or assistance. However, it's not an algorithm that performs a diagnostic task.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The ground truth for the MTF, DQE, and theoretical resolution measurements is based on physical properties and standardized test methods as outlined in IEC 62220-1:2003 ("Medical electrical equipment - Characteristics of digital X-ray imaging devices - Part 1: Determination of the detective quantum efficiency"). This typically involves using phantoms and known physical inputs rather than clinical ground truth such as pathology or expert consensus.
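The DQE definition used in IEC 62220-1 is commonly written as DQE(u) = MTF(u)² / (q · NNPS(u)), where q is the photon fluence and NNPS the normalized noise power spectrum. A schematic sketch with purely illustrative numbers (none of these values come from the submission):

```python
def dqe(mtf: float, nnps: float, fluence_q: float) -> float:
    """DQE(u) = MTF(u)^2 / (q * NNPS(u)), the form commonly used with IEC 62220-1.

    mtf       : modulation transfer function at spatial frequency u (unitless)
    nnps      : normalized noise power spectrum at u (mm^2)
    fluence_q : photon fluence (photons per mm^2)
    """
    return mtf ** 2 / (fluence_q * nnps)

# Illustrative only: an MTF of 0.45 at 3 lp/mm with q*NNPS = 2.7 yields DQE = 0.075 (7.5%).
print(round(dqe(0.45, nnps=2.7e-5, fluence_q=1e5), 3))  # → 0.075
```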


    8. The sample size for the training set

    Not applicable. This device is a hardware imaging system, not an artificial intelligence algorithm that requires a training set.


    9. How the ground truth for the training set was established

    Not applicable, as there is no training set for this hardware device.


    K Number
    K191634
    Manufacturer
    Date Cleared
    2019-11-04

    (138 days)

    Product Code
    Regulation Number
    872.3630
    Panel
    Dental
    Reference & Predicate Devices
    Predicate For
    N/A
    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review
    Intended Use

    Dentium Prosthetics are intended for use as an aid in prosthetic rehabilitation.

    Device Description

    The purpose of this submission is to expand the Dentium Prosthetics to include the Scan Abutments and Comfort Caps.

    Scan Abutments are used provisionally as an accessory to an endosseous dental implant during the healing period to prepare gingival tissue for acceptance of a final abutment. They are designed to aid in soft tissue contouring during the healing period after implant placement, creating an emergence profile for the final abutment.

    Comfort Caps are used provisionally as an accessory to protect the dental abutment before final prosthesis.

    They have the design feature that enable to transmit position and angulation data of implant when taking a digital impression using an intra-oral scanner.

    The Scan Abutments and Comfort Caps are prefabricated and made of Ti-6Al-4V ELI (ASTM F136) or PEEK (ASTM F2026). Scan Abutments are compatible with Dentium Implants (K041368 Implantium and K160965 SuperLine or K153268 NR Line), and Comfort Caps are compatible with Dentium Prosthetics (K052957 Implantium and K172640 SuperLine or K153268 NR Line).

    AI/ML Overview

    This document is a 510(k) Pre-Market Notification from the FDA regarding dental devices (Scan Abutments and Comfort Caps). It focuses on demonstrating substantial equivalence to previously cleared devices, rather than establishing performance against specific acceptance criteria for a novel AI/ML device.

    Therefore, many of the requested details about acceptance criteria and study design (especially those related to AI/ML performance, ground truth, human readers, and training/test set specifics) are not applicable to this type of regulatory submission.

    Here's what can be extracted and what cannot:

    1. A table of acceptance criteria and the reported device performance:

    • Acceptance Criteria (Implicit for 510(k) Substantial Equivalence): The primary "acceptance criterion" for a 510(k) is the demonstration of substantial equivalence to a legally marketed predicate device. This is achieved by showing that the new device has the same intended use as a predicate and the same technological characteristics, or different technological characteristics that do not raise different questions of safety and effectiveness and are as safe and effective as the predicate.
    • Reported Device Performance (as demonstrated for Substantial Equivalence):
      • Indications for Use: The subject device has the same indication for use as the primary predicates: "Dentium Prosthetics are intended for use as an aid in prosthetic rehabilitation."
      • Technological Characteristics Comparison (Tables provided in Section 7): The document provides detailed comparison tables for Scan Abutments (vs. K172640, K153268, K173374, K172160) and Comfort Caps (vs. K171142, K172160). These tables demonstrate similarities in:
        • Device name, Manufacturer, 510(k) Number (where applicable)
        • Indication for use
        • Materials (Ti-6Al-4V ELI, PEEK – demonstrated to be identical to or commonly used in predicates)
        • Form (Preformed)
        • Sterilization (Non-sterile, similar to primary predicates; note on sterile reference predicates)
        • Use (Prescription)
        • Single Use Only (Yes)
        • Design characteristics: Diameter, Length, Connection type, Scanning feature (for Scan Abutments), Surface treatment.
      • Performance Data (Non-clinical):
        • Steam sterilization validation (ISO 17665-1 and ISO 17665-2), demonstrating a sterility assurance level (SAL) of 10⁻⁶. (This is a specific performance metric.)
        • Biocompatibility of Ti-6Al-4V ELI (ASTM F136) was demonstrated by referencing a previous submission (K172640) using the same materials and manufacturing processes.
        • Cytotoxicity testing of PEEK (ASTM F2026) was performed according to ISO 10993-5.
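The SAL figure above is simple log-reduction arithmetic: surviving organisms follow N = N₀ · 10^(−t/D), where D is the time at the sterilization temperature for a one-log reduction. A sketch with illustrative bioburden and D-value (the submission does not report the actual cycle parameters):

```python
def surviving_probability(bioburden_n0: float, exposure_min: float, d_value_min: float) -> float:
    """Expected survivors after exposure: N = N0 * 10^(-t/D).

    d_value_min: time (minutes) at the sterilization temperature for a 1-log reduction.
    """
    return bioburden_n0 * 10 ** (-exposure_min / d_value_min)

# Illustrative overkill case: a bioburden of 10^6 with D = 1 min needs a
# 12-log reduction (12 min of exposure) to reach the SAL of 10^-6.
sal = surviving_probability(1e6, exposure_min=12.0, d_value_min=1.0)
print(f"{sal:.0e}")  # → 1e-06
```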

    N/A (Not Applicable) for this type of submission:

    • 2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective): No specific "test set" in the context of an AI/ML model for performance evaluation is described. The performance data relates to material properties and sterilization, not diagnostic/AI performance on a dataset.
    • 3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable. Ground truth for an AI/ML model is not a concept for these mechanical/material devices.
    • 4. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable.
    • 5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance: Not applicable. This device is not an AI/ML diagnostic tool.
    • 6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: Not applicable.
    • 7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not applicable. The "ground truth" here is compliance with material standards, sterilization standards, and functional equivalence to predicates.
    • 8. The sample size for the training set: Not applicable.
    • 9. How the ground truth for the training set was established: Not applicable.

    In summary, this FDA document is for a medical device (dental abutments and caps) seeking 510(k) clearance based on substantial equivalence. It does not involve AI/ML components or comparative effectiveness studies of human readers, thus many of the questions are not relevant.


    K Number
    K191623
    Date Cleared
    2019-08-21

    (64 days)

    Product Code
    Regulation Number
    872.1800
    Reference & Predicate Devices
    Predicate For
    N/A
    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review
    Intended Use

    The ScanX Touch/ScanX Duo Touch is intended to be used for scanning and processing digital images exposed on Phosphor Storage Plates (PSPs) in dental applications.

    Device Description

    The ScanX Touch is a device that scans photostimulable phosphor storage plates that have been exposed in place of X-ray film and allows the resulting images to be displayed on a personal computer monitor and stored for later recovery. It will be used by licensed clinicians and authorized technicians for this purpose. Two models are available: the ScanX® Touch processes one imaging plate, while the ScanX® Duo Touch processes two at a time. The ScanX Touch also includes an RFID Phosphor Plate identification system, which requires that the Phosphor Plate has an RFID tag attached to its non-imaging side. ScanX allows multi-room dental clinics to produce digital radiographs within seconds using flexible PSPs (Phosphor Storage Plates). The workflow is supported by a touch screen for job handling (ScanManager, patient information), preview, and standalone work. ScanX produces the digital diagnostic-quality intraoral image by scanning PSPs that have been exposed to X-rays. ScanX can also work independently: if the IT network goes down, the user can still scan and save X-ray images. The images are temporarily placed in internal memory and later transferred to the office database, making the device well suited as a mobile solution when it is necessary to visit the patient outside the office. Additionally, ScanX allows computer storage, processing, retrieval, and display of the processed images utilizing user-supplied software (e.g. DBSWIN) and a computer. An additional feature of ScanX is an in-line plate eraser that removes the latent image from the plate immediately after scanning. This design provides an efficient one-operation scanning and erasing process, leaving the user with a PSP ready for the next X-ray procedure.

    AI/ML Overview

    The provided text describes the Air Techniques ScanX Touch/ScanX Duo Touch, a device for scanning and processing digital images from Phosphor Storage Plates (PSPs) in dental applications. The information focuses on demonstrating substantial equivalence to a predicate device (ScanX Intraoral View, K170733) rather than a comprehensive study proving the device meets specific performance acceptance criteria for a novel AI algorithm.

    Therefore, many of the requested sections (e.g., sample sizes for test/training sets, number of experts for ground truth, adjudication methods, MRMC studies) are not applicable or cannot be extracted from this document as it pertains to a traditional medical device clearance, not an AI-driven one.

    Here's a breakdown of the information that can be extracted, aligning with the closest relevant details provided:


    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are primarily framed around demonstrating substantial equivalence to the predicate device in terms of image quality and functional specifications. The "acceptance criteria" here are implicitly meeting or exceeding the predicate's performance in key imaging metrics.

    Acceptance Criteria (Implicit for Substantial Equivalence) | Reported Device Performance (ScanX Touch/ScanX Duo Touch) | Predicate Device Performance (ScanX Intraoral View, K170733)
    Indications for Use | Intended for scanning and processing digital images exposed on PSPs in dental applications. | SAME
    Mechanical Design | SAME as predicate device (plates scanned in two orthogonal directions using an approx. 650 nm laser) | The exposed and unwrapped plates are scanned in two orthogonal directions using a laser with a wavelength of approximately 650 nm.
    Electrical Design | SAME as predicate device | Light with a wavelength of approximately 380 nm is emitted from the plate in proportion to the number of captured X-ray photons; this light is collected and formed into an image that may be viewed on a video display and stored for later recovery in computer memory.
    Image Scanning | Laser/photomultiplier tube | SAME
    Erasing Residual Image | Inline erasing function | SAME
    Viewing the Image | 7.0″ diagonal touch screen (preview only); diagnostic viewing on external monitor with computer and software | 4.3″ touch screen (preview only); diagnostic viewing on external monitor with computer and software
    Transport / Feed Mechanism | SAME as predicate device (continuous-feed "beltways") | The plates are transported by "beltways" down the axis of the cylinder past the slot; the motion of the laser and plates provides the two orthogonal scan directions. This continuous-feed design allows successive plates to be loaded as soon as the previous plates have moved past the slot.
    Phosphor Plates | Operates with the same dental intraoral PSP sizes and material; includes RFID plate identification system | Dental intraoral PSPs: Size 0 (22x35 mm), Size 1 (24x40 mm), Size 2 (31x41 mm), Size 3 (27x54 mm), Size 4 (57x76 mm)
    Image Quality (Resolution) | Theoretical resolutions: 10, 20, 25 or 40 lp/mm | SAME
    MTF (Modulation Transfer Function) | More than 46% at 3 lp/mm (essentially the same) | More than 45% at 3 lp/mm
    DQE (Detective Quantum Efficiency) | More than 7.2% at 3 lp/mm (essentially the same) | More than 7.5% at 3 lp/mm
    Image Data Bit Depth | 16 bits | SAME
    Imaging Software | DBSWIN/VistaEasy (updated in K190629) | DBSWIN/VistaEasy (cleared in K161444, updated in K190629)
    User Interface | Used by dentists and authorized dental auxiliary personnel | SAME
    Energy Source | AC 100 to 240 V, 50/60 Hz | SAME
    Electrical Safety Standards | IEC 60601-1 (electrical safety, medical devices); UL Listed | EN 61010-1:2010, Safety Requirements for Electrical Equipment for Measurement, Control, and Laboratory Use - Part 1: General Requirements
    EMC Testing | IEC 60601-1-2 (EMC, medical devices); complies with EN 61326-1:2013 and FCC rules part 15 (RFID) | EN 61326-1:2013, Electrical equipment for measurement, control and laboratory use - EMC requirements - General requirements
    Patient Contamination Prevention | Uses identical single-use barrier envelopes as the predicate device | Single-patient-use barrier envelope encloses the imaging plate while in the patient's mouth
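The MTF values compared above are, in practice, derived from an edge or slit image: the line spread function (LSF) is Fourier transformed and its magnitude normalized at zero frequency. A minimal pure-Python sketch on a synthetic Gaussian LSF (illustrative data only, not measurements from either device):

```python
import math

def mtf_from_lsf(lsf: list[float]) -> list[float]:
    """MTF = |DFT(LSF)| normalized so that MTF at zero frequency equals 1."""
    n = len(lsf)
    mags = []
    for k in range(n // 2 + 1):  # non-negative frequency bins only
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(lsf))
        im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(lsf))
        mags.append(math.hypot(re, im))
    return [m / mags[0] for m in mags]

# Synthetic Gaussian LSF: the resulting MTF starts at 1.0 and falls off
# with increasing spatial frequency, as expected for a blurring system.
lsf = [math.exp(-((i - 32) ** 2) / 18.0) for i in range(64)]
mtf = mtf_from_lsf(lsf)
```

A real measurement per IEC 62220-1 would use a slanted-edge target and oversampled edge spread function before differentiating to the LSF; this sketch only shows the Fourier step.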

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: The document mentions that "Test images were acquired on the new device and on the predicate device." However, a specific number of images or cases used for this comparison is not provided.
    • Data Provenance: Not explicitly stated, but the testing was conducted by Air Techniques, Inc. likely within their facilities or through contracted labs (e.g., UL). There's no mention of country of origin for the data or whether it was retrospective or prospective in detail beyond the acquisition of "test images."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: "a licensed dentist" (singular)
    • Qualifications: "licensed dentist" (no further details such as years of experience or specialization are given).

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not applicable. The evaluation was a single-reader assessment where a licensed dentist answered specific questions in the affirmative regarding the images. There was no mention of a consensus process among multiple experts.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    • MRMC Study: No, an MRMC comparative effectiveness study was not done. The evaluation involved a single licensed dentist.
    • Effect Size of Human Readers with/without AI: Not applicable, as this device itself is not an AI algorithm but an imaging hardware and processing system. The study did not assess human reader performance with or without AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Standalone Performance: Not applicable. The device is hardware for image acquisition and initial processing. The "standalone" testing referred to in the document relates to the device's ability to operate and save images independently if the IT network goes down, not an AI algorithm's standalone diagnostic performance.

    7. The Type of Ground Truth Used

    • Type of Ground Truth: "Expert assessment/satisfaction." A licensed dentist evaluated the images for representativeness, diagnostic usefulness, and absence of misleading artifacts, and comparability to the predicate. This is a subjective expert evaluation rather than an objective "ground truth" derived from pathology or definitive outcomes data.

    8. The Sample Size for the Training Set

    • Sample Size for Training Set: Not applicable. This document does not describe an AI algorithm that was "trained." The device is an image acquisition and processing system.

    9. How the Ground Truth for the Training Set Was Established

    • Ground Truth for Training Set: Not applicable, as there is no mention of a training set or an AI algorithm that requires one.

    K Number
    K190949
    Date Cleared
    2019-07-26

    (106 days)

    Product Code
    Regulation Number
    878.4370
    Reference & Predicate Devices
    Predicate For
    N/A
    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review
    Intended Use

    Disposable Barrier Envelopes are intended to be used as a disposable barrier for Air Techniques Phosphor Storage Plates. This device is non-sterile and intended for single patient use only.

    Device Description

    The ScanX barrier envelopes are made from one layer of clear blown polyethylene film and one layer of black blown polyethylene film, heat sealed along three edges. An adhesive strip along the fourth edge serves as a temporary barrier closure. Barrier Envelopes are used with Air Techniques' intraoral Phosphor Storage Plates. They are non-sterile, disposable, and single use only; they are discarded after each use.

    AI/ML Overview

    The provided document is a 510(k) premarket notification for "ScanX Barrier Envelopes" and describes the acceptance criteria and study results demonstrating that the device meets these criteria.

    Here's an analysis of the requested information:

    1. A table of acceptance criteria and the reported device performance

    Comparison Criteria | Standard | Acceptance Criteria | Reported Device Performance
    Biocompatibility Testing:
    In-vitro cytotoxicity | ANSI/AAMI/ISO 10993-5 | Score of less than 2 | Pass
    Sensitization | ISO 10993-10 | Non-sensitizer | Pass
    Irritation | ISO 10993-10 | Non-irritant | Pass
    Biological risk assessment | ISO 10993-1 | Biological safety | Pass
    Performance and Mechanical Testing:
    Synthetic blood penetration | ASTM F1670/F1670M | Resistance of protective materials against liquid penetration | Pass
    Viral penetration | ASTM F1671/F1671M | Resistance of protective materials against blood-borne pathogens | Pass
    Tensile strength | ASTM D882 | Tensile properties of material | Pass
    Puncture resistance | ASTM F1342 | Resistance of protective materials to puncture/rupture | Pass
    Tear resistance | ASTM D1004 | Tear-resisting ability | Pass
    Image quality | ISO 19232 | Determination of image quality of radiographs | Pass

    (Note: "Determination of Image Quality of Radiographs" is a broad descriptor for ISO 19232, implying the device must maintain image quality when used. The specific quantitative acceptance criteria are not detailed in this section, but the "Pass" result indicates the requirement was met.)

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document lists standards (e.g., ISO 10993, ASTM) that were followed for testing. These standards specify methodologies and may include sample size requirements within their procedures. However, the exact sample sizes used for each specific test in this study (e.g., "how many barrier envelopes were tested for tensile strength?") are not explicitly reported in this summary.

    Regarding data provenance: The tests are material and mechanical property tests of a medical device (barrier envelopes), not clinical data involving patient information. Therefore, 'country of origin of the data' and 'retrospective or prospective' study design are not applicable in the typical sense for these types of non-clinical performance studies. The testing would have been conducted in a laboratory setting.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This product (ScanX Barrier Envelopes) is a physical barrier device for dental phosphor storage plates. The "ground truth" for its performance is established through objective, standardized laboratory testing of its physical properties (e.g., tensile strength, resistance to penetration, biocompatibility) and its impact on image quality, as outlined by the listed ASTM and ISO standards. It does not involve human expert interpretation of medical images or diagnoses in the way an AI-driven diagnostic device would.

    Therefore, this question (number of experts, qualifications, etc.) is not applicable to this type of device and study. The "ground truth" is derived from the results of the physical and chemical tests themselves, performed by qualified technicians in accredited labs according to the specified standards.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This concept is primarily relevant for studies involving human interpretation (e.g., reading medical images) where discrepancies between readers need to be resolved. Since this study involves objective laboratory measurements of physical properties, adjudication methods are not applicable. The results are quantitative measurements or pass/fail determinations based on predefined criteria within the respective standards.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    No, an MRMC comparative effectiveness study was not conducted for this device. This type of study is specifically designed to evaluate the impact of an AI algorithm on human reader performance, typically in diagnostic imaging. The ScanX Barrier Envelope is a physical accessory, not an AI diagnostic tool, so such a study would be irrelevant.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done

    This question is also not applicable. The device is not an algorithm; it's a physical product. The "standalone performance" here refers to its ability to meet the specified physical and biological safety standards, which was indeed evaluated through the non-clinical tests listed.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The "ground truth" for the ScanX Barrier Envelopes is based on objective, quantitative measurements and qualitative assessments derived from adherence to recognized national and international standards (ASTM, ISO) for material properties, barrier effectiveness, biocompatibility, and image quality. This is not clinical ground truth (like pathology or outcomes data) but rather engineering and safety ground truth.

    8. The sample size for the training set

    The concept of "training set" is specific to machine learning and AI algorithms. Since this device is not an AI algorithm, there was no training set in that context. The development and testing of the barrier envelopes would involve material science, engineering, and manufacturing processes, not AI model training.

    9. How the ground truth for the training set was established

    As there was no AI model training set, this question is not applicable.

