Search Results

Found 2777 results

510(k) Data Aggregation

    K Number
    K252328

    Date Cleared
    2025-11-24

    (122 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510(k) Summary Text (Full-text Search):

    Ultrasound Transducer, 21 CFR 892.1570, 90-ITX
    Automated radiological image processing software, 21 CFR 892.2050

    Intended Use

    The device is a general purpose ultrasound system intended for use by qualified and trained healthcare professionals. Specific clinical applications remain the same as previously cleared: Fetal/OB; Abdominal (including GYN, pelvic and infertility monitoring/follicle development); Pediatric; Small Organ (breast, testes, thyroid etc.); Neonatal and Adult Cephalic; Cardiac (adult and pediatric); Musculo-skeletal Conventional and Superficial; Vascular; Transvaginal (including GYN); Transrectal

    Modes of operation include: B, M, PW Doppler, CW Doppler, Color Doppler, Color M Doppler, Power Doppler, Harmonic Imaging, Coded Pulse, 3D/4D Imaging mode, Elastography, Shear Wave Elastography and Combined modes: B/M, B/Color, B/PWD, B/Color/PWD, B/Power/PWD, B/Elastography. The Voluson™ Expert 18, Voluson™ Expert 20, and Voluson™ Expert 22 are intended to be used in a hospital or medical clinic.

    Device Description

    The systems are full-featured Track 3 ultrasound systems, primarily for general radiology use and specialized for OB/GYN, with particular features for real-time 3D/4D acquisition. They consist of a mobile console with a keyboard control panel, a color LCD/TFT touch panel, a color video display, and optional image storage and printing devices. They provide high-performance ultrasound imaging and analysis and have comprehensive networking and DICOM capability. They utilize a variety of linear, curved linear, and matrix phased array transducers, including mechanical and electronic scanning transducers, which provide highly accurate real-time three-dimensional imaging supporting all standard acquisition modes.

    The following probes are the same as the predicate: RIC5-9-D, IC5-9-D, RIC6-12-D, 9L-D, 11L-D, ML6-15-D, RAB6-D, C1-6-D, C2-9-D, M5Sc-D, RM7C, eM6CG3, RSP6-16-D, RIC10-D, 6S-D, L18-18iD, and RIC12-D.

    The existing cleared probe C1-6-D is being added to the previously cleared SW AI feature Sonolyst 1st Trimester.

    AI/ML Overview

    The provided text describes the FDA 510(k) clearance for the Voluson Expert Series ultrasound systems, specifically focusing on the AI feature "Sonolyst 1st Trimester" and the addition of the C1-6-D transducer to this feature.

    Here's an analysis of the acceptance criteria and the study that proves the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Functionality | Acceptance Criteria | Reported Device Performance (CL2 probe group) |
    |---|---|---|
    | SonoLystIR | 0.80 | 0.93 |
    | SonoLystX | 0.80 | 0.84 |
    | SonoLystLive | 0.70 | 0.84 |

    Additional Performance Data (Mean values across transabdominal and transvaginal scans):

    | Functionality | Mean (%) |
    |---|---|
    | SonoLyst IR | 94.1 |
    | SonoLyst X | 92.4 |
    | SonoLyst Live | 82.5 |
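As a minimal illustration, the pass/fail check implied by the acceptance table above can be expressed in a few lines. The threshold and CL2 values are taken from the summary; the function and variable names are my own:

```python
# Acceptance thresholds and reported CL2 accuracies from the 510(k) summary.
ACCEPTANCE = {
    "SonoLystIR": 0.80,
    "SonoLystX": 0.80,
    "SonoLystLive": 0.70,
}

REPORTED_CL2 = {
    "SonoLystIR": 0.93,
    "SonoLystX": 0.84,
    "SonoLystLive": 0.84,
}

def meets_criteria(reported, thresholds):
    """Return a pass/fail flag per functionality (reported >= threshold)."""
    return {name: reported[name] >= thresholds[name] for name in thresholds}
```

Under this check, every functionality in the CL2 probe group clears its threshold, consistent with the summary's conclusion.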

    2. Sample Sizes Used for the Test Set and Data Provenance

    • SonoLyst 1st Trim IR: 7970 images

    • SonoLyst 1st Trim X: 4931 images

    • SonoLyst 1st Trim Live: 9111 images

    • SonoBiometry CRL: 243 images

    • Specific to probe group CL2 (which includes the C1-6-D probe): data was collected from 396 patients.

    • Data Provenance: Data was collected from multiple geographical sites including the UK, Austria, India, and USA. The data was collected using different systems (GE Voluson V730, P8, S6/S8, E6, E8, E10, Expert 22, Philips Epiq 7G).

    • Retrospective/Prospective: The document does not explicitly state whether the test data was retrospective or prospective. However, the mention of "data acquired with transabdominal vs transvaginal probes" and "patients within the dataset includes pregnancies between 11 and 14 weeks of gestation, with no known fetal abnormalities at the time of imaging" suggests that the images were pre-existing or collected specifically for this evaluation, implying a retrospective or a pre-defined prospective collection for the study.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Initial Curation: A single sonographer curated (sorted and graded) the images initially.
    • Review Panel for Graded Images: Where the system's grading differed from the initial ground truth, the images were reviewed by a 5-sonographer review panel to determine the grading accuracy of the system.
    • Qualifications: The document identifies them as "sonographers." Specific years of experience or expertise in fetal ultrasound are not provided, other than their role in image curation and review.

    4. Adjudication Method for the Test Set

    • Initial Sorting and Grading: Images were initially curated (sorted and graded) by a single sonographer.
    • Reclassification during Sorting: The SonoLyst IR/X First Trimester process resulted in some images being reclassified during sorting based on the majority view of the panel (after the step where the system had sorted them).
    • Grading Accuracy Review: For graded images where the initial single sonographer's ground truth differed from the system, a 5-sonographer review panel was used to determine the accuracy. This suggests an adjudication process where the panel formed a consensus or majority opinion to establish the final ground truth when discrepancies arose. The exact method (e.g., simple majority, weighted vote) is not specified beyond "majority view of the panel."
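The adjudication flow described above (a single sonographer's initial grade, with a 5-reader panel majority resolving discrepancies with the system) can be sketched as follows. The grade labels and function signature are illustrative assumptions, not from the filing:

```python
from collections import Counter

def adjudicate(initial_grade, system_grade, panel_votes):
    """Return the final ground-truth grade for one image.

    If the system agrees with the single sonographer's grade, that grade
    stands; otherwise the majority view of the review panel decides.
    """
    if system_grade == initial_grade:
        return initial_grade
    return Counter(panel_votes).most_common(1)[0][0]

# Example: a 5-sonographer panel splits 3-2 after a discrepancy.
final = adjudicate(
    "suboptimal", "acceptable",
    ["acceptable", "acceptable", "acceptable", "suboptimal", "suboptimal"],
)
```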

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    • The document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done to evaluate how much human readers improve with AI vs. without AI assistance. The testing focused on the standalone performance of the AI algorithm against a ground truth established by sonographers.
    • The verification of the SonoLystLive 1st Trimester feature was based on the "average agreement between a sonographer panel and the output of the algorithm regarding Traffic light quality," which involves human readers assessing traffic light quality in relation to the algorithm's output, but it is not a study designed to measure human improvement with AI assistance.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • Yes, a standalone performance evaluation was conducted. The performance metrics (SonoLystIR, SonoLystX, SonoLystLive, SonoBiometry CRL success rate) are reported as the accuracy of the algorithm comparing its output directly against the established ground truth. This is a measure of the algorithm's ability to perform its specified functions independently.

    7. The Type of Ground Truth Used

    • The ground truth was established through expert consensus/review by sonographers.
      • Initial curation by a single sonographer.
      • Review and reclassification during sorting based on the "majority view of the panel."
      • A 5-sonographer review panel was used to determine grading accuracy for discrepancies.
    • The ground truth also adhered to standardized imaging protocols based on internationally recognized guidelines (AIUM Practice Parameter, AIUM Detailed Protocol, ISUOG Practice Guidelines, ISUOG Detailed Protocol, and the study by Yimei Liao et al.) which informed the quality and consistency of the expert review.

    8. The Sample Size for the Training Set

    • 122,711 labelled source images from 35,861 patients were used for training.

    9. How the Ground Truth for the Training Set Was Established

    • The document states that "Data used for both training and validation has been collected across multiple geographical sites using different systems to represent the variations in target population."
    • While the specific method for establishing ground truth for the training set is not explicitly detailed in the same way as the test set, it can be inferred that similar expert labeling and curation processes would have been applied given the emphasis on "labelled source images." The document focuses on the test set truthing process as part of verification, implying that the training data would have undergone a robust labeling process to ensure quality for machine learning.

    K Number
    K251880

    Manufacturer
    Date Cleared
    2025-11-21

    (156 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510(k) Summary Text (Full-text Search):

    Re: K251880
    Trade/Device Name: Archy Dental Imaging
    Regulation Number: 21 CFR 892.2050
    Classification Name: Medical image management and processing system

    | | Subject Device | Predicate Device | Comparison |
    |---|---|---|---|
    | Product Code | LLZ | LLZ | Same |
    | Regulation | 21 C.F.R. 892.2050 | 21 C.F.R. 892.2050 | Same |
    | Class | II | II | Same |

    (The matched excerpt ends at the "Intended Use / Indications for Use" row.)

    Intended Use

    Archy Dental Imaging is an internet-based, image management software (PACS) that enables dental offices to keep records of hard and soft tissue charts in the form of digital images. The system uses a Web-based interface and includes acquisition, editing, and storage of digital images. Images and data can be stored, communicated, processed, and displayed within the system or across computer networks at distributed locations. Images can be acquired from standard dental imaging devices or can be uploaded directly from the user's computer. Images can be edited (e.g., zoomed, contrast adjusted, rotated, etc.), as well as exported. The system is designed to provide images for diagnostic use.

    Device Description

    Archy Dental Imaging is a cloud-based dental imaging software that allows access to diagnostic radiological and photo images on any PC with an active internet connection via modern web browser. Archy Dental Imaging contains all key features present in traditional client-server based dental imaging software.

    Archy Dental Imaging is a Class II dental imaging software that includes the ability to acquire, view, annotate, and organize dental radiographs and color images. Images stored using Archy Dental Imaging are saved using lossless compression and can be exported as DICOM or PNG files. The original images are treated as immutable by the rest of the system.

    Archy Dental Imaging is a software-only dental image device which allows the user to acquire images using standard dental imaging devices, such as intraoral X-ray sensors, intraoral cameras, and scanners. Archy Dental Imaging is imaging software designed for use in dentistry. The main Archy Dental Imaging software functionality includes image acquisition, organization, and annotation. Archy Dental Imaging is used by dental professionals for the visualization of patient images retrieved from a dental imaging device or scanner, for assisting in case diagnosis, review, and treatment planning. Dentists and other qualified individuals can display and review images, apply annotations, and manipulate images. Archy Dental Imaging is the imaging component of Archy, a full-featured dental practice management system that handles scheduling, charting, and other practice business concerns. The software operates within a web browser upon standard consumer PC hardware and displays images on the PC's connected display/monitor. The subject device is the Archy Dental Imaging software; the computer or the monitor are not part of the submission.

    Archy Dental Imaging neither contacts the patient nor controls any life-sustaining devices. Diagnosis is not performed by this software but by physicians and other qualified individuals, who interpret the images and information being displayed and printed, providing ample opportunity for competent human intervention.
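The "immutable originals" behaviour described in the device description can be sketched generically. This is an illustrative content-addressed store under assumed semantics, not Archy's actual implementation; all paths and helper names are hypothetical:

```python
import hashlib
import pathlib
import shutil

def store_original(src, vault):
    """Copy an acquired image into the vault under its content hash.

    The stored copy is made read-only so later edits must be saved as
    derived images, leaving the original untouched. Re-storing the same
    bytes is a no-op and returns the existing path.
    """
    vault = pathlib.Path(vault)
    vault.mkdir(parents=True, exist_ok=True)
    digest = hashlib.sha256(pathlib.Path(src).read_bytes()).hexdigest()
    dest = vault / f"{digest}{pathlib.Path(src).suffix}"
    if not dest.exists():
        shutil.copy2(src, dest)
        dest.chmod(0o444)  # read-only: treated as immutable by the app
    return dest
```

Content addressing makes the store idempotent, which is one common way to guarantee that edits never overwrite the acquired image.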

    AI/ML Overview

    N/A


    K Number
    K253495

    Date Cleared
    2025-11-20

    (23 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510(k) Summary Text (Full-text Search):

    Re: K253495
    Trade/Device Name: syngo.MR Applications (VB80)
    Regulation Number: 21 CFR 892.2050
    Classification Panel: Radiology Devices

    | | Subject Device | Predicate Device | Comparison |
    |---|---|---|---|
    | Product Classification | Product Code: LLZ; Regulation Number: 892.2050 | Product Code: LLZ; Regulation Number: 892.2050 | Same – the AI algorithm "Prostate MR AI (K241770)" is integrated |

    (The matched excerpt also includes a comparison row describing the subject device as the predicate "with added features and enhancements.")

    Intended Use

    syngo.MR Applications is syngo-based post-acquisition image processing software for viewing, manipulating, evaluating, and analyzing MR, MR-PET, CT, PET, CT-PET images and MR spectra.

    Device Description

    syngo.MR Applications is a software-only medical device consisting of post-processing applications/workflows used for viewing and evaluating the designated images provided by an MR diagnostic device. The post-processing applications/workflows are integrated with the hosting application syngo.via, which enables structured evaluation of the corresponding images.

    AI/ML Overview

    The provided FDA 510(k) clearance letter and summary for syngo.MR Applications (VB80) indicate that no clinical studies or bench testing were performed to establish new performance criteria or demonstrate meeting previously established acceptance criteria. The submission focuses on software changes and enhancements from a predicate device (syngo.MR Applications VB40).

    Therefore, based solely on the provided document, I cannot create the requested tables and information because the document explicitly states:

    • "No clinical studies were carried out for the product, all performance testing was conducted in a non-clinical fashion as part of verification and validation activities of the medical device."
    • "No bench testing was required to be carried out for the product."

    The document details the following regarding performance and acceptance:

    • Non-clinical Performance Testing: "Non-clinical tests were conducted for the subject device during product development. The modifications described in this Premarket Notification were supported with verification and validation testing."
    • Software Verification and Validation: "The performance data demonstrates continued conformance with special controls for medical devices containing software. Non-clinical tests were conducted on the device Syngo.MR Applications during product development... The testing results support that all the software specifications have met the acceptance criteria. Testing for verification and validation for the device was found acceptable to support the claims of substantial equivalence."
    • Conclusion: "The predicate device was cleared based on non-clinical supportive information. The comparison of technological characteristics, device hazards, non-clinical performance data, and software validation data demonstrates that the subject device performs comparably to and is as safe and effective as the predicate device that is currently marketed for the same intended use."

    This implies that the acceptance criteria are related to the functional specifications and performance of the software, as demonstrated by internal verification and validation activities, rather than a clinical performance study with specific quantitative metrics. The new component, "MR Prostate AI," is noted to be integrated without modification and had its own prior 510(k) clearance (K241770), suggesting its performance was established in that separate submission.

    Without access to the actual verification and validation reports mentioned in the document, it's impossible to list precise acceptance criteria or detailed study results. The provided text only states that "all the software specifications have met the acceptance criteria."

    Therefore, I can only provide an explanation of why the requested details cannot be extracted from this document:

    Explanation Regarding Acceptance Criteria and Study Data:

    The provided FDA 510(k) clearance letter and summary for syngo.MR Applications (VB80) explicitly state that no clinical studies or bench testing were performed for this submission. The device (syngo.MR Applications VB80) is presented as a new version of a predicate device (syngo.MR Applications VB40) with added features and enhancements, notably the integration of an existing AI algorithm, "Prostate MR AI VA10A (K241770)," which was cleared under a separate 510(k).

    The basis for clearance is "non-clinical performance data" and "software validation data" demonstrating that the subject device performs comparably to and is as safe and effective as the predicate device. The document mentions that "all the software specifications have met the acceptance criteria" as part of the verification and validation (V&V) activities. However, the specific quantitative acceptance criteria, detailed performance metrics, sample sizes, ground truth establishment, or expert involvement for these V&V activities are not included in this public summary.

    Therefore, the requested information cannot be precisely extracted from the provided text.


    Summary of Information Available (and Not Available) from the Document:

    | Information Requested | Status (based on provided document) |
    |---|---|
    | 1. Table of acceptance criteria and reported performance | Not provided. The document states: "The testing results support that all the software specifications have met the acceptance criteria," but does not specify those criteria or report detailed performance metrics against them. These would typically be found in the detailed V&V reports, which are not part of this summary. |
    | 2. Sample size and data provenance for test set | Not provided. The document indicates non-clinical tests were conducted as part of verification and validation activities; sample sizes, the nature of the data, and its provenance (e.g., country, retrospective/prospective) are not detailed. It is implied that the data is not patient-specific clinical test data. |
    | 3. Number of experts and qualifications for ground truth | Not applicable / not provided. No clinical studies or performance evaluations against an external ground truth are described; validation appears to be against software specifications. If the "MR Prostate AI" component had such a study, those details would be in its own 510(k) (K241770). |
    | 4. Adjudication method for test set | Not applicable / not provided. No external test set requiring expert adjudication is described in this 510(k) summary. |
    | 5. MRMC comparative effectiveness study and effect size | Not performed for this submission. The document explicitly states "No clinical studies were carried out for the product," so no MRMC study or AI-assisted improvement effect size is reported. |
    | 6. Standalone (algorithm only) performance study | Partially addressed for a component. The "MR Prostate AI" algorithm is integrated without modification and "is classified under a different regulation in its 510(K) and this is out-of-scope from the current submission"; its standalone performance would have been established under its own 510(k) (K241770). No standalone study is described for the overall syngo.MR Applications (VB80) product. |
    | 7. Type of ground truth used | Not provided for the overall device's V&V. Meeting "software specifications" suggests an internal, design-based ground truth rather than clinical ground truth such as pathology or outcomes data. For the integrated "MR Prostate AI" algorithm, clinical ground truth would have been established in its separate submission. |
    | 8. Sample size for the training set | Not applicable / not provided for this submission. The document does not refer to a machine-learning training set in this context; the "Prostate MR AI" algorithm's training-set details would be in its own 510(k) dossier (K241770). |
    | 9. How the ground truth for the training set was established | Not applicable / not provided for this submission. As above, training-set ground truth pertains to the Prostate MR AI algorithm and would be found in its own 510(k). |

    K Number
    K251822

    Date Cleared
    2025-11-20

    (160 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510(k) Summary Text (Full-text Search):

    Picture Archiving and Communications System
    Classification Panel: Radiology
    CFR Code: 21 CFR §892.2050

    Intended Use

    MAGNETOM Free.Max:
    The MAGNETOM MR system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross-sectional images that display, depending on optional local coils that have been configured with the system, the internal structure and/or function of the head, body or extremities.

    Other physical parameters derived from the images may also be produced. Depending on the region of interest, contrast agents may be used. These images and the physical parameters derived from the images when interpreted by a trained physician or dentist trained in MRI yield information that may assist in diagnosis.

    The MAGNETOM MR system may also be used for imaging during interventional procedures when performed with MR-compatible devices such as MR Safe biopsy needles.

    When operated by dentists and dental assistants trained in MRI, the MAGNETOM MR system must only be used for scanning the dentomaxillofacial region.

    MAGNETOM Free.Star:
    The MAGNETOM MR system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross-sectional images that display, depending on optional local coils that have been configured with the system, the internal structure and/or function of the head, body or extremities.

    Other physical parameters derived from the images may also be produced. Depending on the region of interest, contrast agents may be used. These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist in diagnosis.

    Device Description

    The subject devices MAGNETOM Free.Max and MAGNETOM Free.Star with software version syngo MR XA80A consist of new and modified hardware and software features compared to the predicate devices MAGNETOM Free.Max and MAGNETOM Free.Star with software version syngo MR XA60A (K231617).

    New hardware features (Only for MAGNETOM Free.Max):

    • Dental coil
    • High-end host
    • syngo Workplace

    Modified hardware features:

    • MaRS
    • Select&GO Display (TPAN_3G)

    New Pulse Sequences/ Software Features / Applications:

    Only for MAGNETOM Free.Max:

    • EP_SEG_FID_PHS
    • EP2D_FID_PHS
    • EP_SEG_PHS
    • GRE_Proj
    • GRE_PHS
    • myExam Dental Assist
    • Select&GO Dental
    • Slice Overlapping

    For both MAGNETOM Free.Max and MAGNETOM Free.Star:

    • Eco Power Mode
    • Extended Gradient Eco Mode
    • System Startup Timer

    Modified Features and Applications:

    • myExam RT Assist (only for MAGNETOM Free.Max)
    • Deep Resolve for HASTE
    • Deep Resolve for EPI Diffusion
    • Select&GO for dental (only for MAGNETOM Free.Max)
    • Select&GO extension: Patient Registration and Start Scan
    • SPACE improvement: MTC prep module

    Other Modifications and Minor Changes:

    • MAGNETOM Free.Max Dental Edition marketing bundle (only for MAGNETOM Free.Max)
    • MAGNETOM Free.Max RT Pro Edition marketing bundle (only for MAGNETOM Free.Max)
    • Off-Center Planning Support
    • ID Gain

    AI/ML Overview

    The provided FDA 510(k) clearance letter and summary for MAGNETOM Free.Max and MAGNETOM Free.Star (K251822) offer high-level information regarding the devices and their comparison to predicate devices. However, it does not explicitly detail acceptance criteria (performance metrics with pass/fail thresholds) or a specific study proving the device meets those criteria for the overall device clearance.

    The document primarily focuses on demonstrating substantial equivalence to predicate devices for general MR diagnostic imaging. The most detailed performance evaluation mentioned is for the AI feature "Deep Resolve Boost." Therefore, the response will focus on the information provided regarding Deep Resolve Boost, and address other points based on what is stated and what is not.


    Acceptance Criteria and Device Performance (Focusing on Deep Resolve Boost)

    Table 1. Deep Resolve Boost Performance Summary

    | Metric | Acceptance Criteria (implicit from "significantly better") | Reported Device Performance |
    |---|---|---|
    | Structural Similarity Index (SSIM) | Significantly better structural similarity with the gold standard than conventional reconstruction | Deep Resolve reconstruction has significantly better structural similarity with the gold standard than the conventional reconstruction |
    | Peak Signal-to-Noise Ratio (PSNR) / Signal-to-Noise Ratio (SNR) | Significantly better SNR than conventional reconstruction | Deep Resolve reconstruction has significantly better SNR than the conventional reconstruction; visual evaluation confirmed higher SNR |
    | Aliasing Artifacts | Not found to have caused artifacts | Deep Resolve reconstruction was not found to have caused artifacts |
    | Image Sharpness | Superior sharpness compared to conventional reconstruction | Visual evaluation confirmed superior sharpness |
    | Denoising Levels | Improved denoising levels | Visual evaluation confirmed improved denoising levels (implicit in higher SNR and image quality) |

    Note: The document does not provide numerical thresholds or specific statistical methods used to define "significantly better" for SSIM and PSNR. The acceptance criteria are implicitly derived from the reported positive performance relative to conventional reconstruction.
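For reference, the two quantitative metrics cited (SSIM and PSNR) can be computed as below. This is a generic global SSIM using the standard Wang et al. constants and a textbook PSNR definition, not Siemens' internal evaluation code:

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return float(10 * np.log10(data_range ** 2 / mse))

def ssim_global(ref, img, data_range=1.0):
    """Global (single-window) SSIM; full SSIM averages this over local windows."""
    ref = np.asarray(ref, float)
    img = np.asarray(img, float)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = ((ref - mu_x) * (img - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
                 ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```

In practice, evaluations like the one summarized here compare these scores between AI-reconstructed and conventionally reconstructed images against a fully sampled gold standard.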

    Study Details for Deep Resolve Boost

    1. Sample Size used for the test set and the data provenance:

      • Test Data: A "set of test data" was used for quantitative metrics (SSIM, PSNR) and visual evaluation. This test data was a "retrospectively undersampled copy of the test data" which was also used for conventional reconstruction.
      • Provenance: "In-house measurements and collaboration partners."
      • Retrospective/Prospective: The process of creating the test data by manipulating (undersampling) retrospectively acquired data indicates a retrospective approach.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Number of Experts: Not specified. The document states, "Visual evaluation was performed by qualified readers."
      • Qualifications of Experts: "Qualified readers." No further specific qualifications (e.g., years of experience, specialty) are provided.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • Not specified. The document states, "Visual evaluation was performed by qualified readers." It does not mention whether multiple readers were used per case or how discrepancies were resolved.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • No, an MRMC comparative effectiveness study involving human readers with vs. without AI assistance was not explicitly described for the Deep Resolve Boost feature. The visual evaluation was focused on comparing images reconstructed with conventional methods versus Deep Resolve Boost, primarily to assess image quality attributes without explicit human performance metrics (e.g., diagnostic accuracy, reading time).
    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

      • Yes, a standalone performance evaluation was done. The quantitative metrics (SSIM, PSNR) and the visual assessment of images reconstructed solely by the algorithm (Deep Resolve Boost) were performed to characterize the network's impact independently.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc):

      • The "acquired datasets represent the ground truth for the training and validation." Input data for testing was "retrospectively created from the ground truth by data manipulation and augmentation." This implies that the raw, fully sampled, and high-quality MRI acquisitions are considered the ground truth against which the reconstructed images (conventional and Deep Resolve Boost) are compared. This is a technical ground truth rather than a clinical ground truth like pathology.
    7. The sample size for the training set:

      • TSE: More than 25,000 slices.
      • HASTE: Pretrained on the TSE dataset and refined with more than 10,000 HASTE slices.
      • EPI Diffusion: More than 1,000,000 slices.
    8. How the ground truth for the training set was established:

      • "The acquired datasets represent the ground truth for the training and validation."
      • Input data for training was "retrospectively created from the ground truth by data manipulation and augmentation." This included "further under-sampling of the data by discarding k-space lines, lowering of the SNR level by addition of noise and mirroring of k-space data."
      • This indicates that the ground truth for training was derived from high-quality, fully sampled MRI acquisitions, which were then manipulated to simulate lower quality inputs for the AI to learn from.
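The training-data degradation described above (discarding k-space lines and adding noise to fully sampled ground-truth acquisitions) can be sketched as follows; the sampling pattern and noise level are illustrative assumptions, not Siemens' parameters:

```python
import numpy as np

def degrade(image, keep_every=2, noise_std=0.01, rng=None):
    """Simulate an undersampled, noisier acquisition from a ground-truth image.

    The fully sampled image is transformed to k-space, phase-encode lines
    are discarded (keeping every n-th line), complex noise is added, and
    the result is transformed back to image space.
    """
    rng = rng or np.random.default_rng(0)
    kspace = np.fft.fftshift(np.fft.fft2(image))     # image -> centered k-space
    mask = np.zeros(kspace.shape, dtype=bool)
    mask[::keep_every, :] = True                     # keep every n-th line
    kspace = kspace * mask
    kspace = kspace + noise_std * (rng.standard_normal(kspace.shape)
                                   + 1j * rng.standard_normal(kspace.shape))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
```

A training pair is then (degraded input, original ground-truth image), which is what lets the network learn to restore SNR and remove aliasing.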

    K Number
    K251078

    Device Name
    AutoDensity
    Manufacturer
    Date Cleared
    2025-11-14

    (220 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510(k) Summary Text (Full-text Search):

    Re: K251078
    Trade/Device Name: AutoDensity
    Regulation Number: 21 CFR 892.2050
    Classification Name: Medical Image Management and Processing System (21 C.F.R. § 892.2050)

    (The matched excerpt also includes comparison-table rows (Regulation Number §892.2050 vs. the predicates' §892.1170/§892.2050, "same as predicate") and the opening of the Intended Use cell: "This cloud-based software ...")

    Intended Use

    AutoDensity is a post-processing software intended to estimate spine Bone Mineral Density (BMD) from EOSedge dual energy images for orthopedic pre-surgical assessment applications. It is an opportunistic tool that enables immediate assessment of bone density from EOSedge images acquired for other purposes.

    AutoDensity is not intended to replace DXA screening. Suspected low BMD should be confirmed by a DXA exam.

    Clinical judgment and experience are required to properly use the software.

    Device Description

    Based on EOSedge™ system's images acquired with the dual energy protocols cleared in K233920, AutoDensity software provides an estimate of the Bone Mineral Density (BMD) for L1-L4 in EOSedge AP radiographs of the spine. These values are used to aid in BMD estimation in orthopedic surgical planning workflows to help inform patient assessment and surgical decisions. AutoDensity is opportunistic in nature and provides BMD information with equivalent radiation dose compared to the EOSedge images concurrently acquired and used for general radiographic exams. AutoDensity is not intended to replace DXA screening.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the AutoDensity device, based on the provided FDA 510(k) clearance letter:


    1. Acceptance Criteria and Reported Device Performance

    Device Name: AutoDensity
    Intended Use: Post-processing software to estimate spine Bone Mineral Density (BMD) from EOSedge dual energy images for orthopedic pre-surgical assessment applications.

| Acceptance Criteria | Reported Device Performance |
| --- | --- |
| Vertebral level identification accuracy: percent of levels correctly identified ≥ 90% | Stated to meet the performance threshold (specific percentage not provided). |
| Spine ROI accuracy: lower boundary of 95% CI of mean Dice coefficient ≥ 0.80 | Stated to meet the performance threshold (specific value not provided). |
| BMD precision (phantom): CV% < 1.5% compared to reference device | Met the acceptance criterion (CV% < 1.5%). |
| BMD agreement (phantom, max difference): specific numeric criterion not stated; implies clinical equivalence to reference device | Maximum BMD difference of 0.057 g/cm² for the high-BMD phantom vertebra; difference < 0.018 g/cm² over the clinically relevant BMD range. |
| BMD precision (clinical): specific numeric criterion not stated; implies acceptable clinical limits | CV% of 2.23% [95% CI: 1.78%, 2.98%], within the range of acceptable clinical limits for the specified pre-surgical orthopedic patient assessment. |
| BMD agreement (clinical, Bland-Altman): specific numeric criterion not stated; implies equivalence to other commercial bone densitometers | Bias of 0.045 g/cm²; limits of agreement (LoA) of [-0.088 g/cm², 0.178 g/cm²]. Stated as equivalent to published agreement between other commercial bone densitometers. |
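The precision and agreement statistics quoted for AutoDensity (CV% and Bland-Altman bias with 95% limits of agreement) are standard definitions; a minimal sketch assuming NumPy, not the manufacturer's implementation:

```python
import numpy as np

def cv_percent(repeated_measurements):
    """Coefficient of variation (%): sample SD over mean of repeat measurements."""
    m = np.asarray(repeated_measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

def bland_altman(device, reference):
    """Bias and 95% limits of agreement between two paired measurement methods."""
    diff = np.asarray(device, dtype=float) - np.asarray(reference, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)
```

For example, repeat BMD measurements of 1.00, 1.02, and 0.98 g/cm² give a CV% of 2.0, on the order of the 2.23% clinical precision reported above.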

    2. Sample Sizes and Data Provenance

    Test Set (for ROI Performance Evaluation):

    • Sample Size: 129 patients.
    • Data Provenance: All cases were obtained from EOSedge systems (K233920); the data was retrospective ("obtained from EOSedge systems"). The country of origin is not specified for the ROI test set, although the clinical performance study enrolled 65% US and 35% French subjects, which may suggest a similar distribution.

    3. Number of Experts and Qualifications for Ground Truth

    For ROI Performance Evaluation Test Set:

    • Number of Experts: At least three (implied by the "3 truther majority voting principle"), plus one senior US board-certified expert radiologist who acted as the gold-standard adjudicator.
    • Qualifications:
      • Two trained technologists (for initial ROI and level identification).
      • One senior US board-certified expert radiologist (for supervision, review, selection of most accurate set, and final adjustments).

    4. Adjudication Method for the Test Set

    For ROI Performance Evaluation Test Set:

    • Adjudication Method: A "3 truther majority voting principle" was used, with input from a senior US board-certified expert radiologist (acting as the "gold standard"). The radiologist reviewed results, selected the more accurate set, and made necessary adjustments. This combines elements of majority voting with expert adjudication.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No, the provided document does not mention an MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated. The performance data presented focuses on the standalone performance of the AI algorithm and its agreement/precision with a reference device or clinical measurements.

    6. Standalone Performance Study (Algorithm Only)

    • Was a standalone study done? Yes. The "Region of Interest (ROI) Performance Evaluation" section explicitly states: "To assess the standalone performance of the AI algorithm of AutoDensity, the test was performed with..." This section details the evaluation of the algorithm's predictions against ground truth for vertebral level identification and spine ROI accuracy.

    7. Type of Ground Truth Used

    For ROI Performance Evaluation Test Set:

    • Type of Ground Truth: Expert consensus with adjudication. Ground truths for ROIs and level identification were established by two trained technologists under the supervision of a senior US board-certified radiologist. The radiologist made the final informed decision, often described as a "gold standard."
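The Dice coefficient cited as the ROI acceptance metric measures overlap between a predicted mask and this expert-established ground-truth mask. A generic sketch assuming NumPy boolean masks (not AutoDensity's code):

```python
import numpy as np

def dice_coefficient(pred_mask, truth_mask):
    """Dice = 2|A∩B| / (|A| + |B|) for two boolean masks; 1.0 is perfect overlap."""
    a = np.asarray(pred_mask, dtype=bool)
    b = np.asarray(truth_mask, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: conventionally perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

The acceptance criterion above requires the lower bound of the 95% CI of the mean Dice across test cases to be at least 0.80.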

    8. Sample Size for the Training Set

    • Training Set Sample Size: The AI algorithm was trained using 4,679 3D reconstructions and 9,358 corresponding EOS (K152788) or EOSedge (K233920) biplanar 2D X-ray images.

    9. How Ground Truth for the Training Set was Established

    • The document implies that the training data was "selected to only keep relevant images with the fields of view of interest." However, it does not explicitly detail how the ground truth for the training set was established (e.g., whether it used expert annotations, a similar adjudication process, or other methods). It primarily focuses on the test set ground truth establishment.

    K Number
    K251410


    Device Name
    VXvue
    Manufacturer
    Date Cleared
    2025-11-04

    (181 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A

    Intended Use

    VXvue is intended to acquire Digital images from X-ray Detectors, process the images to facilitate diagnosis and to display, and transfer the resulting images to other devices for diagnostic purpose.

    VXvue is indicated for use with general radiographic images of human anatomy. It is not intended for fluoroscopic, angiographic, or mammographic applications.

    Device Description

    VXvue acquires images from a detector, processes and transfers them, and manages patient information and images for radiologists. It enables X-ray images to be stored electronically and viewed on screen.

    VXvue offers full compliance with DICOM (Digital Imaging and Communications in Medicine) standards to allow the sharing of medical information with other PACS (Picture Archiving and Communication System Server). Besides, VXvue is a device that provides one or more capabilities relating to the acceptance, transfer, display, storage, and digital processing of medical images. The software components provide functions for performing operations related to image manipulation and enhancement.

    AI/ML Overview

    N/A


    K Number
    K251532


    Manufacturer
    Date Cleared
    2025-11-03

    (168 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A

    Intended Use

    Acorn 3D Software is a modular image processing software intended for use as an interface for visualization of medical images, segmentation, treatment planning, and production of an output file.

    The Acorn 3D Segmentation module is intended for use as a software interface and image segmentation system for the transfer of CT or CTA medical images to an output file. Acorn 3D Segmentation is also intended for measuring and treatment planning. The Acorn 3D Segmentation output can also be used for the fabrication of physical replicas of the output file using additive manufacturing methods, Acorn 3DP Models. The physical replica can be used for diagnostic purposes in the field of musculoskeletal and craniomaxillofacial applications.

    The Acorn 3D Trajectory Automation module may be used to plan pedicle screw placement in the thoracic and lumbar regions of the spine in pediatric and adult patients.

    Acorn 3D Software and 3DP Models should be used in conjunction with expert clinical judgment.

    Device Description

    Acorn 3D Software is an image processing software that allows the user to import, visualize and segment medical images, check and correct the segmentations, and create digital 3D models. The models can be used in Acorn 3D Software for measuring, treatment planning and producing an output file to be used for additive manufacturing (3D printing). Acorn 3D Software is structured as a modular package.

    This includes the following functionality:

    • Importing medical images in DICOM format
    • Viewing images and DICOM data
    • Selecting a region of interest using generic segmentation tools
    • Segmenting specific anatomy using dedicated semi-automatic tools or fully automatic algorithms
    • Verifying and editing a region of interest
    • Calculating a digital 3D model and editing the model
    • Measuring on images and 3D models
    • Exporting 3D models to third-party packages
    • Planning pedicle screw placement

    The Acorn 3D Segmentation module contains both machine learning based auto segmentation as well as semi-automatic and manual segmentation tools. The auto-segmentation tool is only intended to be used for thoracic and lumbar regions of the spine (T1-T12 and L1-L5) and the pelvis (sacrum). Semi-automatic and manual segmentation tools are intended to be used for all musculoskeletal anatomy.

| | Automatic | Semi-Automatic | Manual |
| --- | --- | --- | --- |
| Definition | Algorithmic with little or no direct human control | A combination of algorithmic and direct human control | Directly controlled by a human |
| Tool Type | Machine learning algorithm used to automatically segment individual vertebrae and the pelvis | Algorithmic tools that do not incorporate machine learning | Manual tools requiring user input |
| Anatomical Location(s) | Spinal anatomy: thoracic (T1-T12), lumbar (L1-L5), sacrum | Musculoskeletal & craniomaxillofacial bone: short, long, flat, sesamoid, irregular | Musculoskeletal & craniomaxillofacial bone: short, long, flat, sesamoid, irregular |

    Acorn 3DP Model is an additively manufactured physical replica of the virtual 3D model generated in Acorn 3D Segmentation. The output file from Acorn 3D Segmentation is used to additively manufacture the Acorn 3DP Model.

    The Acorn 3D Trajectory Automation module contains dedicated fully automatic algorithms for planning pedicle screw trajectories. The algorithms are only intended to be used for the thoracic and lumbar regions of the spine (T1-T12 and L1-L5). The output file from Acorn 3D Trajectory Automation contains information relevant to pedicle screw placement surgery, including entry points, end points, and screw sizes of planned screws.
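The trajectory output described above (entry points, end points, and screw sizes) amounts to a per-screw plan record. The sketch below is purely illustrative; the field names and coordinate conventions are assumptions, not taken from the submission:

```python
from dataclasses import dataclass
import math

@dataclass
class ScrewPlan:
    """Hypothetical per-screw planning record; coordinates in image space (mm)."""
    level: str                          # e.g. "L4" (left or right pedicle)
    entry: tuple[float, float, float]   # planned entry point
    end: tuple[float, float, float]     # planned end point
    diameter_mm: float                  # planned screw diameter
    length_mm: float                    # planned screw length

    def trajectory_length(self) -> float:
        """Euclidean distance from entry to end point."""
        return math.dist(self.entry, self.end)
```

A consistency check such as comparing `trajectory_length()` against `length_mm` is the kind of geometric validation that bench testing of planned screws would involve.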

    AI/ML Overview

    The provided FDA 510(k) clearance letter describes the Acorn 3D Software, specifically focusing on the new Acorn 3D Trajectory Automation module. However, the document is quite sparse on detailed descriptions of the acceptance criteria and the specifics of the study proving the device meets these criteria. The information below is extracted from the provided text, and where details are missing, it's explicitly stated.


    1. Table of Acceptance Criteria and Reported Device Performance

    The document mentions "deviations were within the acceptance criteria" without specifying the numerical acceptance criteria themselves. It also doesn't provide specific numerical results of the device's performance, only a qualitative statement of accuracy.

| Acceptance Criteria (not stated in numerical terms) | Reported Device Performance |
| --- | --- |
| Accuracy of pedicle screw geometry: deviations within acceptance criteria | Deviations were within the acceptance criteria. |
| Accuracy of pedicle screw trajectories: deviations within acceptance criteria | Deviations were within the acceptance criteria. |
| Substantial equivalence to predicate device for planning pedicle screws and trajectories | Performance testing demonstrated substantial equivalence to the predicate device. |
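Bench comparisons of planned versus reference trajectories are typically expressed as a positional deviation at the entry point and an angular deviation between trajectory directions. A generic sketch of those two measures (the thresholds the sponsor used are not disclosed):

```python
import math

def trajectory_deviation(planned_entry, planned_end, ref_entry, ref_end):
    """Return (positional deviation at entry in mm, angular deviation in
    degrees) between a planned and a reference screw trajectory."""
    pos_dev = math.dist(planned_entry, ref_entry)
    v1 = [p - e for p, e in zip(planned_end, planned_entry)]
    v2 = [p - e for p, e in zip(ref_end, ref_entry)]
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    cos_angle = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp rounding error
    return pos_dev, math.degrees(math.acos(cos_angle))
```

"Deviations within the acceptance criteria" would then mean both returned values fall below their respective (undisclosed) thresholds for every tested screw.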

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not specified. The document only mentions "clinical data" was used.
    • Data Provenance: Not specified. It's unclear if the clinical data was retrospective or prospective, or the country of origin.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    • The document does not specify the number of experts used or their qualifications for establishing ground truth for the test set.

    4. Adjudication Method for the Test Set

    • The document does not specify any adjudication method (e.g., 2+1, 3+1, none) used for the test set.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • The document does not mention that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done. Therefore, no effect size of human readers improving with AI vs. without AI assistance is provided.

    6. Standalone Performance Study

    • Yes, a standalone performance study was done for the Acorn 3D Trajectory Automation module. The document states:
      • "The accuracy of pedicle screw geometry as well as pedicle screw trajectories created in the subject device, Acorn 3D Trajectory Automation, was assessed via bench testing."
      • This implies the algorithm's performance was evaluated independently without human intervention during the trajectory planning process itself, as it's a "fully automatic algorithm."

    7. Type of Ground Truth Used

    • The ground truth for the "accuracy of pedicle screw geometry" and "pedicle screw trajectories" was established by comparing the device's output to "clinical data." While the nature of this "clinical data" is not explicitly defined (e.g., expert consensus on images, surgical outcomes, or pathology reports), it serves as the reference standard.

    8. Sample Size for the Training Set

    • The document does not specify the sample size used for the training set of the machine learning algorithms. It mentions "Using a collection of images and masks as a training dataset for machine-learning segmentation algorithm" but no numbers.

    9. How the Ground Truth for the Training Set Was Established

    • The document states that the machine learning segmentation algorithm uses "a collection of images and masks as a training dataset." It doesn't explicitly describe how these "masks" (which represent the ground truth segmentations) were created. It can be inferred that these masks would have been generated by human experts, likely through manual or semi-automatic segmentation, but this is not confirmed in the text.
