
510(k) Data Aggregation

    K Number
    K253023

    Device Name
    BIOGRAPH One
    Date Cleared
    2026-01-15

    (118 days)

    Product Code
    Regulation Number
    892.1200
    Age Range
    All
    Predicate For
    N/A
    Intended Use

    Magnetic Resonance Imaging (MRI) is a noninvasive technique used for diagnostic imaging. MRI with its soft tissue contrast capability enables the healthcare professional to differentiate between various soft tissues, for example, fat, water, and muscle, but can also visualize bone structures.

    Depending on the region of interest, contrast agents may be used.

    The MR system may also be used for imaging during interventional procedures and radiation therapy planning.

    The PET system images and measures the distribution of PET radiopharmaceuticals in humans to aid the physician in determining various metabolic (molecular) and physiologic functions within the human body for evaluation of diseases and disorders such as, but not limited to, cardiovascular disease, neurological disorders, and cancer.

    The integrated system utilizes the MRI for radiation-free attenuation correction maps for PET studies. The integrated system provides inherent anatomical reference for the fused MR and PET images due to precisely aligned MR and PET image coordinate systems.

    Device Description

    BIOGRAPH One with software Syngo MR XB10 includes new and modified hardware and software compared to the predicate device, Biograph mMR with software syngo MR E11P-AP01. A high-level summary of the new and modified hardware and software is provided below:

    Hardware

    New Hardware

    • Gantry offset phantom
    • SDB (Smart Distribution Box)

    New Coils

    • BM Contour XL Coil
    • BM Head/Neck Pro PET-MR Coil
    • BM Spine Pro PET-MR Coil
    • Transfer of up-to-date RF coils from the reference device MAGNETOM Vida.

    Modified Hardware

    • Main components such as:
      • Detector cassettes / DEA
      • Phantom holder
      • Gantry tube
      • Backplane
      • Magnet and cabling
      • Gradient coil
      • MaRS (measurement and reconstruction system)
      • MI MARS
      • PET electronics
      • RF transmitter TBX3 3T (TX Box 3)
    • Other components such as:
      • Cover
      • Filter plate
      • Patient table
      • RFCEL_TEMP

    Modified Coils

    • Body coil
    • Transfer of up-to-date RF coils from the reference device MAGNETOM Vida with some improvements.

    Software

    New Features and Applications

    • Fast Whole-Body workflows
    • Fast Head workflow
    • myExam PET-MR Assist
    • CS-Vibe
    • myExam Implant Suite
    • DANTE blood suppression
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS without diffusion function
    • BioMatrix Motion Sensor
    • RF pulse optimization with VERSE
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • ASNR recommended protocols for imaging of ARIA
    • Open Workflow
    • Ultra HD-PET
    • "MTC Mode"
    • OpenRecon 2.0
    • Deep Resolve Boost for TSE
    • GRE_PC
    • The following functions have been migrated for the subject device without modifications from MAGNETOM Skyra Fit and MAGNETOM Sola Fit:
      • 3D Whole Heart
    • Ghost reduction (Dual polarity Grappa (DPG))
    • Fleet Reference Scan
    • AutoMate Cardiac (Cardiac AI Scan Companion)
    • Complex Averaging
    • SPACE Improvement: high bandwidth IR pulse
    • SPACE Improvement: increase gradient spoiling
    • The following function has been migrated for the subject device without modifications from MAGNETOM Free.Max:
      • myExam Autopilot Spine
    • The following functions have been migrated for the subject device without modifications from MAGNETOM Sola:
      • myExam Autopilot Brain
      • myExam Autopilot Knee
    • Transfer of further up-to-date SW functions from the reference devices.

    New Software / Platform

    • PET-Compatible Coil Setup
    • Select&GO
    • PET-MR components communication

    Modified Features and Applications

    • HASTE_CT
    • FL3D_VIBE_AC
    • PET Reconstruction
    • Transfer of further up-to-date SW functions from the reference devices with some improvements.

    Modified Software / Platform

    • Several software functions have been improved:
      • PET Group
      • PET Viewing
      • PET RetroRecon
      • PET Status and Tune-up/QA

    Other Modifications and / or Minor Changes

    • Indications for use
    • Contraindications
    • SAR parameter
    • Off-Center Planning Support
    • Flip Angle Optimization (Lock TR and FA)
    • Inline Image Filter
    • Marketing bundle "myExam Companion"
    • ID Gain
    • Automatic System Shutdown (ASS) sensor (Smoke Detector)
    • Patient data display (PDD)
    AI/ML Overview

    The FDA 510(k) Clearance Letter for BIOGRAPH One refers to several AI/Deep Learning features. However, the provided document does not contain explicit acceptance criteria for these AI features in a table format, nor does it detail a comparative effectiveness study (MRMC study) for human readers. It primarily focuses on demonstrating non-inferiority to the predicate device through various non-clinical tests.

    Below is an attempt to extract and synthesize the information based on the provided text, while acknowledging gaps in the information regarding specific acceptance criteria metrics and clinical studies.

    Acceptance Criteria and Study Details for BIOGRAPH One AI Features

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state numerical acceptance criteria in a dedicated table format. Instead, it describes performance in terms of achieving "convergence of the training" and "improvements compared to conventional parallel imaging," or confirming "very similar metrics" to the predicate. The "acceptance criteria" are implied by these statements and the successful completion of the described tests.

    • Deep Resolve Boost for FL3D_VIBE & SPACE
      • Implied acceptance criteria: Convergence of training and improvement over conventional parallel imaging for SSIM, PSNR, and MSE; no negative impact on image quality.
      • Reported performance: Quantitative evaluations of SSIM, PSNR, and MSE showed convergence of the training and improvements compared to conventional parallel imaging. Inspection of test images did not reveal any negative impact on image quality. The function is used for faster acquisition or improved image quality.
    • Deep Resolve Sharp for FL3D_VIBE & SPACE
      • Implied acceptance criteria: Improvements across quality metrics (PSNR, SSIM, perceptual loss); increased edge sharpness; reduced Gibbs artifacts.
      • Reported performance: Characterized by several quality metrics (PSNR, SSIM, perceptual loss). Tests show increased edge sharpness and reduced Gibbs artifacts.
    • Deep Resolve Boost for TSE (First Mention)
      • Implied acceptance criteria: Metrics (PSNR, SSIM, LPIPS) very similar to the predicate network, with both outperforming conventional GRAPPA; no negative visual impact.
      • Reported performance: Evaluation on the test dataset confirmed very similar metrics (PSNR, SSIM, LPIPS) for the predicate and modified networks, with both outperforming conventional GRAPPA. Visual evaluations confirmed no negative impact on image quality. The function is used for faster acquisition or improved image quality.
    • Deep Resolve Boost for TSE (Second Mention)
      • Implied acceptance criteria: Statistically significant reduction of banding artifacts; no significant changes in sharpness or detail visibility; no difference in clinical suitability on radiologist evaluation.
      • Reported performance: Statistically significant reduction of banding artifacts with no significant changes in sharpness and detail visibility. Radiologist evaluation revealed no difference in suitability for clinical diagnostics between the updated network and the cleared predicate network.
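    The quantitative metrics cited above can be illustrated with a minimal Python/NumPy sketch. This is not the vendor's evaluation code: the submission does not disclose data ranges or windowing, the standard SSIM averages over local windows, and LPIPS requires a pretrained network, so only MSE, PSNR, and a simplified single-window SSIM are shown.

```python
import numpy as np

def mse(ref, test):
    """Mean squared error; lower is better, 0 for identical images."""
    return float(np.mean((ref - test) ** 2))

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    err = mse(ref, test)
    return float("inf") if err == 0 else 10.0 * np.log10(data_range**2 / err)

def ssim_global(ref, test, data_range=1.0):
    """Single-window SSIM over the whole image. The standard metric
    (Wang et al.) averages this statistic over local sliding windows."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov = ((ref - mu_x) * (test - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2))
                 / ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)))
```

    For identical images these give an MSE of 0, infinite PSNR, and an SSIM of 1; "convergence of the training" would then correspond to such metrics improving on held-out data relative to a conventional parallel-imaging baseline.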

    2. Sample Sizes Used for Test Set and Data Provenance

    The document primarily describes a validation dataset which serves as the "test set" for the AI models during development, and an additional "test dataset" for specific evaluations.

    • Deep Resolve Boost for FL3D_VIBE and SPACE:

      • Test Set Description: The "collaboration partners (testing)" data is mentioned as the source for testing, implying an external, independent test set. No specific number for this test set is provided beyond the 1265 measurements for training/validation.
      • Sample Size (Validation/Training): 27,679 3D patches from 1265 measurements.
      • Data Provenance: "in-house measurements (training and validation) and collaboration partners (testing)." The country of origin is not specified but is likely Germany (Siemens Healthineers AG) and/or China (Siemens Shenzhen Magnetic Resonance Ltd.), the listed manufacturing sites.
      • Retrospective/Prospective: "Input data was retrospectively created from the ground truth by data manipulation and augmentation." This indicates retrospective data use.
    • Deep Resolve Sharp for FL3D_VIBE and SPACE:

      • Test Set Description: The document states, "The high-resolution datasets were split to 70% training and 30% validation datasets before training to ensure independence of them." This implies the 30% validation dataset is used as the test set.
      • Sample Size (Validation/Training): 27,679 3D patches from 1265 measurements (split into 70% training and 30% validation).
      • Data Provenance: "in-house measurements (training and validation) and collaboration partners (testing)."
      • Retrospective/Prospective: "Input data was retrospectively created from the ground truth by data manipulation." This indicates retrospective data use.
    • Deep Resolve Boost for TSE (First Mention - General Performance):

      • Test Set Description: The "evaluation on the test dataset" is mentioned. The validation set is 30% of the 500 measurements.
      • Sample Size (Validation/Training): Approximately 13,000 high resolution 3D patches from 500 measurements (split into 70% training and 30% validation).
      • Data Provenance: "in-house measurements."
      • Retrospective/Prospective: "Input data was retrospectively created from the ground truth by data manipulation." This indicates retrospective data use.
    • Deep Resolve Boost for TSE (Second Mention - Banding Artifacts):

      • Test Set Description: "Additional test dataset for banding artifact reduction: more than 2000 slices." This dataset was acquired after the release of the predicate network.
      • Sample Size: More than 2000 slices.
      • Data Provenance: "in-house measurements and collaboration partners."
      • Retrospective/Prospective: Not explicitly stated for this specific additional dataset, but the training/validation data for the predicate was retrospective.
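    The measurement-level splitting described above, where whole acquisitions are assigned to training or validation before patches are extracted, can be sketched as follows; the function and parameter names are illustrative, not taken from the submission.

```python
import random

def split_measurements(measurement_ids, train_frac=0.70, seed=0):
    """Shuffle whole measurements and split them into train/validation.
    Patches are extracted afterwards, within each partition, so that
    validation patches stay independent of the training patches."""
    ids = list(measurement_ids)
    random.Random(seed).shuffle(ids)
    n_train = round(train_frac * len(ids))
    return ids[:n_train], ids[n_train:]
```

    For example, `split_measurements(range(1265))` partitions the 1,265 measurements roughly 70/30 with no measurement on both sides of the boundary.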

    3. Number of Experts and Qualifications for Ground Truth

    • Radiologist Evaluation for Deep Resolve Boost for TSE (Second Mention): The document mentions "the radiologist evaluation revealed no difference in suitability for clinical diagnostics."

      • Number of Experts: Not specified (singular "radiologist" used, but typically multiple are implied for such evaluations).
      • Qualifications: "Radiologist." No specific years of experience or subspecialty are mentioned.
    • Other features: For Deep Resolve Boost/Sharp for FL3D_VIBE and SPACE, and Deep Resolve Boost for TSE (first mention), the ground truth is derived directly from acquired image data (see section 7). No independent human expert ground truth establishment for these.

    4. Adjudication Method (for Test Set)

    • Radiologist Evaluation for Deep Resolve Boost for TSE (Second Mention): The adjudication method is not specified in the document (e.g., 2+1, 3+1). It only states "the radiologist evaluation."

    • Other features: Adjudication methods are not applicable as human experts were not establishing ground truth for objective metrics.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No, the document does not describe an MRMC comparative effectiveness study where human readers' performance with and without AI assistance is compared. The evaluation for Deep Resolve Boost for TSE mentions "radiologist evaluation" but not in a comparative MRMC study context.
    • Effect Size: Not applicable, as no MRMC study was performed.

    6. Standalone (Algorithm Only) Performance

    • Was standalone performance done? Yes. Performance testing for all Deep Resolve features (Boost and Sharp for FL3D_VIBE, SPACE, and TSE) was conducted algorithm-only: metrics such as PSNR, SSIM, MSE, and LPIPS were evaluated, followed by visual inspection or radiologist evaluation. These assess the algorithm's direct output performance.

    7. Type of Ground Truth Used

    • Deep Resolve Boost for FL3D_VIBE and SPACE: "The acquired datasets (as described above) represent the ground truth for the training and validation."
    • Deep Resolve Sharp for FL3D_VIBE and SPACE: "The acquired datasets represent the ground truth for the training and validation." Input data was manipulated (cropped k-space) to create low-resolution input and high-resolution output/ground truth from the same dataset.
    • Deep Resolve Boost for TSE (First Mention): "The acquired datasets represent the ground truth for the training and validation." Input data was manipulated (cropped k-space) to create low-resolution input and high-resolution output/ground truth from the same dataset.
    • Deep Resolve Boost for TSE (Second Mention): "The acquired training/validation datasets... represent the ground truth for the training and validation." Input data was manipulated by undersampling k-space, adding noise, and mirroring k-space.
    • Summary: The ground truth for all AI features was derived from acquired, high-resolution original image data (retrospectively manipulated to simulate inputs). For Deep Resolve Boost for TSE (second mention), there was also an implicit "expert consensus" or "expert reading" component for the "radiologist evaluation" regarding clinical suitability.

    8. Sample Size for the Training Set

    • Deep Resolve Boost for FL3D_VIBE and SPACE: 81% of 1265 measurements (for 27,679 3D patches).
    • Deep Resolve Sharp for FL3D_VIBE and SPACE: 70% of 1265 measurements (for 27,679 3D patches).
    • Deep Resolve Boost for TSE (First Mention): 70% of 500 measurements (for approx. 13,000 high resolution 3D patches).
    • Deep Resolve Boost for TSE (Second Mention): More than 23,250 slices (93% of the combined training/validation dataset from K213693).

    9. How the Ground Truth for the Training Set Was Established

    • Deep Resolve Boost for FL3D_VIBE and SPACE: The "acquired datasets" represent the ground truth. "Input data was retrospectively created from the ground truth by data manipulation and augmentation. This process includes further undersampling of the data by discarding k-space lines as well as creating sub-volumes of the acquired data."
    • Deep Resolve Sharp for FL3D_VIBE and SPACE: The "acquired datasets represent the ground truth." "Input data was retrospectively created from the ground truth by data manipulation. k-space data has been cropped such that only the center part of the data was used as input. With this method corresponding low-resolution data as input and high-resolution data as output / ground truth were created for training and validation."
    • Deep Resolve Boost for TSE (First Mention): Similar to Deep Resolve Sharp for FL3D_VIBE and SPACE: "The acquired datasets represent the ground truth for the training and validation. Input data was retrospectively created from the ground truth by data manipulation. k-space data has been cropped such that only the center part of the data was used as input. With this method corresponding low-resolution data as input and high-resolution data as output / ground truth were created for training and validation."
    • Deep Resolve Boost for TSE (Second Mention): "The acquired training/validation datasets... represent the ground truth for the training and validation. Input data was retrospectively created from the ground truth by data manipulation and augmentation. This process includes further undersampling of the data by discarding k-space lines, lowering of the SNR level by addition of noise and mirroring of k-space data."

    In summary, for all AI features, the ground truth for training was established by using high-quality, originally acquired MRI data that was then retrospectively manipulated (e.g., undersampled, cropped, noise added) to create synthetic "lower quality" input data for the AI model to learn from, with the original high-quality data serving as the target output or ground truth.
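    That retrospective degradation process can be sketched as follows, assuming a 2D magnitude image and illustrative parameter values (the actual crop fractions, acceleration factors, and noise levels are not disclosed in the summary).

```python
import numpy as np

def make_training_pair(ground_truth, keep_frac=0.5, accel=2, noise_std=0.01, seed=0):
    """Derive a degraded network input from a fully sampled ground-truth
    image by manipulating its k-space: crop to the central lines
    (resolution loss), discard lines (undersampling), and add complex
    Gaussian noise (SNR reduction). Returns (input, target)."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(ground_truth))
    ny, _ = k.shape

    # Keep only the central fraction of phase-encode lines.
    keep = int(ny * keep_frac)
    lo = (ny - keep) // 2
    mask = np.zeros(k.shape)
    mask[lo:lo + keep, :] = 1.0

    # Discard every `accel`-th remaining line (regular undersampling).
    mask[lo:lo + keep:accel, :] = 0.0

    # Lower the SNR by adding complex Gaussian noise.
    noise = noise_std * (rng.standard_normal(k.shape)
                         + 1j * rng.standard_normal(k.shape))
    degraded = np.abs(np.fft.ifft2(np.fft.ifftshift(k * mask + noise)))
    return degraded, ground_truth
```

    The original high-quality image is returned unchanged as the training target, mirroring the document's statement that the acquired data itself serves as ground truth.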


    K Number
    K253495

    Date Cleared
    2025-11-20

    (23 days)

    Product Code
    Regulation Number
    892.2050
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    syngo.MR Applications is a syngo based post-acquisition image processing software for viewing, manipulating, evaluating, and analyzing MR, MR-PET, CT, PET, CT-PET images and MR spectra.

    Device Description

    syngo.MR Applications is a software-only medical device consisting of post-processing applications/workflows used for viewing and evaluating the designated images provided by an MR diagnostic device. The post-processing applications/workflows are integrated with the hosting application syngo.via, which enables structured evaluation of the corresponding images.

    AI/ML Overview

    The provided FDA 510(k) clearance letter and summary for syngo.MR Applications (VB80) indicate that no clinical studies or bench testing were performed to establish new performance criteria or demonstrate meeting previously established acceptance criteria. The submission focuses on software changes and enhancements from a predicate device (syngo.MR Applications VB40).

    Therefore, based solely on the provided document, I cannot create the requested tables and information because the document explicitly states:

    • "No clinical studies were carried out for the product, all performance testing was conducted in a non-clinical fashion as part of verification and validation activities of the medical device."
    • "No bench testing was required to be carried out for the product."

    The document details the following regarding performance and acceptance:

    • Non-clinical Performance Testing: "Non-clinical tests were conducted for the subject device during product development. The modifications described in this Premarket Notification were supported with verification and validation testing."
    • Software Verification and Validation: "The performance data demonstrates continued conformance with special controls for medical devices containing software. Non-clinical tests were conducted on the device Syngo.MR Applications during product development... The testing results support that all the software specifications have met the acceptance criteria. Testing for verification and validation for the device was found acceptable to support the claims of substantial equivalence."
    • Conclusion: "The predicate device was cleared based on non-clinical supportive information. The comparison of technological characteristics, device hazards, non-clinical performance data, and software validation data demonstrates that the subject device performs comparably to and is as safe and effective as the predicate device that is currently marketed for the same intended use."

    This implies that the acceptance criteria are related to the functional specifications and performance of the software, as demonstrated by internal verification and validation activities, rather than a clinical performance study with specific quantitative metrics. The new component, "MR Prostate AI," is noted to be integrated without modification and had its own prior 510(k) clearance (K241770), suggesting its performance was established in that separate submission.

    Without access to the actual verification and validation reports mentioned in the document, it's impossible to list precise acceptance criteria or detailed study results. The provided text only states that "all the software specifications have met the acceptance criteria."

    Therefore, I can only provide an explanation of why the requested details cannot be extracted from this document:

    Explanation Regarding Acceptance Criteria and Study Data:

    The provided FDA 510(k) clearance letter and summary for syngo.MR Applications (VB80) explicitly state that no clinical studies or bench testing were performed for this submission. The device (syngo.MR Applications VB80) is presented as a new version of a predicate device (syngo.MR Applications VB40) with added features and enhancements, notably the integration of an existing AI algorithm, "Prostate MR AI VA10A (K241770)," which was cleared under a separate 510(k).

    The basis for clearance is "non-clinical performance data" and "software validation data" demonstrating that the subject device performs comparably to and is as safe and effective as the predicate device. The document mentions that "all the software specifications have met the acceptance criteria" as part of the verification and validation (V&V) activities. However, the specific quantitative acceptance criteria, detailed performance metrics, sample sizes, ground truth establishment, or expert involvement for these V&V activities are not included in this public summary.

    Therefore, the requested information cannot be precisely extracted from the provided text.


    Summary of Information Available (and Not Available) from the Document:

    1. Table of acceptance criteria and reported performance: Not provided in the document. The document states: "The testing results support that all the software specifications have met the acceptance criteria." However, it does not specify what those acceptance criteria are or report detailed performance metrics against them. These would typically be found in the detailed V&V reports, which are not part of this summary.
    2. Sample size and data provenance for the test set: Not provided. The document indicates "non-clinical tests were conducted as part of verification and validation activities." The sample sizes for these internal tests, the nature of the data, and its provenance (e.g., country, retrospective/prospective) are not detailed. It is implied that the data is not patient-specific clinical test data.
    3. Number of experts and qualifications for ground truth: Not applicable / not provided. Since no clinical studies or specific performance evaluations against an external ground truth are described in this document, there is no mention of experts establishing ground truth for a test set. The validation appears to be against software specifications. If the "MR Prostate AI" component had such a study, those details would be in its individual 510(k) (K241770), not this submission.
    4. Adjudication method for the test set: Not applicable / not provided. As with ground truth establishment, no adjudication method is mentioned because no external test set requiring expert consensus is described within this 510(k) summary.
    5. MRMC comparative effectiveness study and effect size: Not performed for this submission. The document explicitly states "No clinical studies were carried out for the product." Therefore, no MRMC study or AI-assisted improvement effect size is reported here.
    6. Standalone (algorithm only) performance study: Partially addressed for a component. While this submission does not detail such a study, it notes that the "MR Prostate AI" algorithm is integrated without modification and "is classified under a different regulation in its 510(K) and this is out-of-scope from the current submission." This implies that a standalone performance study was done for the Prostate MR AI algorithm under its own 510(k) (K241770), but those details are not within this document. For the overall syngo.MR Applications (VB80) product, no standalone study is described.
    7. Type of ground truth used: Not provided for the overall device's V&V. The V&V activities are stated to have met "software specifications," which suggests an internal, design-based ground truth rather than clinical ground truth such as pathology or outcomes data. For the integrated "MR Prostate AI" algorithm, clinical ground truth would have been established for its separate 510(k) submission.
    8. Sample size for the training set: Not applicable / not provided for this submission. The document describes internal non-clinical V&V for the syngo.MR Applications software and does not refer to a machine learning model's training set in this context. The "Prostate MR AI" algorithm, being independently cleared, would have its training set details in its specific 510(k) dossier (K241770), not here.
    9. How the ground truth for the training set was established: Not applicable / not provided for this submission. As above, this document does not discuss a training set or its ground truth establishment for syngo.MR Applications. This information would pertain to the Prostate MR AI algorithm and be found in its own 510(k).

    K Number
    K251059

    Date Cleared
    2025-10-24

    (203 days)

    Product Code
    Regulation Number
    892.2050
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Syngo Carbon Clinicals is intended to provide advanced visualization tools to prepare and process the medical image for evaluation, manipulation and communication of clinical data that was acquired by the medical imaging modalities (for example, CT, MR, etc.)

    OrthoMatic Spine provides the means to perform musculoskeletal measurements of the whole spine, in particular spine curve angle measurements.

    The TimeLens provides the means to compare a region of interest between multiple time points.

    The software package is designed to support technicians and physicians in qualitative and quantitative measurements and in the analysis of clinical data that was acquired by medical imaging modalities.

    An interface shall enable the connection between the Syngo Carbon Clinicals software package and the interconnected software solution for viewing, manipulation, communication, and storage of medical images.

    Device Description

    Syngo Carbon Clinicals is a software-only medical device which provides dedicated advanced imaging tools for diagnostic reading. These tools can be called up through standard interfaces by any native/syngo based viewing application (hosting application) that is part of the SYNGO medical device portfolio. They help prepare and process the medical image for evaluation, manipulation, and communication of clinical data that was acquired by medical imaging modalities (e.g., MR, CT, etc.).

    Deployment Scenario: Syngo Carbon Clinicals is a plug-in that can be added to any SYNGO based hosting application (for example, Syngo Carbon Space or syngo.via). The hosting application (native/syngo Platform-based software) is not described within this 510(k) submission. The hosting device decides which tools from Syngo Carbon Clinicals are used; it does not need to host all of them, and the desired subset can be enabled or disabled through licenses.

    When preparing the radiologist's reading workflow on a dedicated workplace or workstation, Syngo Carbon Clinicals can be called to generate additional results or renderings according to the user needs using the tools available.

    AI/ML Overview

    This document describes performance evaluation for two specific tools within Syngo Carbon Clinicals (VA41): OrthoMatic Spine and TimeLens.

    1. Table of Acceptance Criteria and Reported Device Performance

    • OrthoMatic Spine
      • Acceptance criteria: The algorithm's measurement deviations for major spinal measurements (Cobb angles, thoracic kyphosis angle, lumbar lordosis angle, coronal balance, and sagittal vertical alignment) must fall within the range of inter-reader variability.
      • Reported performance: Cumulative Distribution Functions (CDFs) demonstrated that the algorithm's measurement deviations fell within the range of inter-reader variability for the major Cobb angle, thoracic kyphosis angle, lumbar lordosis angle, coronal balance, and sagittal vertical alignment. This indicates the algorithm replicates average rater performance and meets the clinical reliability acceptance criteria.
    • TimeLens
      • Acceptance criteria: Not specified; a reader study or bench test was not required because the tool is a simple workflow enhancement algorithm.
      • Reported performance: No quantitative performance metrics are provided, as clinical performance evaluation methods (reader studies) were deemed unnecessary.
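    The CDF-based criterion for OrthoMatic Spine can be sketched as follows: the algorithm's absolute deviation from the per-case reader mean is compared, threshold by threshold, against the readers' own deviations from that mean. Array shapes and names here are assumptions, not the sponsor's implementation.

```python
import numpy as np

def deviation_cdfs(reader_vals, algo_vals, thresholds):
    """Empirical CDFs of absolute deviations from the per-case reader mean.
    reader_vals: (n_cases, n_readers) measurements; algo_vals: (n_cases,).
    The acceptance idea: the algorithm's CDF should lie at or above the
    inter-reader CDF, i.e. its deviations from the reader mean are no
    larger than typical reader-vs-mean deviations."""
    mean = reader_vals.mean(axis=1, keepdims=True)
    reader_dev = np.abs(reader_vals - mean).ravel()
    algo_dev = np.abs(algo_vals - mean.ravel())
    reader_cdf = np.array([(reader_dev <= t).mean() for t in thresholds])
    algo_cdf = np.array([(algo_dev <= t).mean() for t in thresholds])
    return reader_cdf, algo_cdf
```

    If `algo_cdf >= reader_cdf` at every threshold, the algorithm's deviations are stochastically no larger than the inter-reader variability, which matches the qualitative acceptance statement in the summary.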

    2. Sample Size Used for the Test Set and Data Provenance

    • OrthoMatic Spine:

      • Test Set Sample Size: 150 spine X-ray images (75 frontal views, 75 lateral views) were used in a reader study.
      • Data Provenance: The document states that the main dataset for training includes data from USA, Germany, Ukraine, Austria, and Canada. While this specifies the training data provenance, the provenance of the specific 150 images used for the reader study (test set) is not explicitly segregated or stated here. The study involved US board-certified radiologists, implying the test set images are relevant to US clinical practice.
      • Retrospective/Prospective: Not explicitly stated, but the description of "collected" images and patients with various spinal conditions suggests a retrospective collection of existing exams.
    • TimeLens: No specific test set details are provided as a reader study/bench test was not required.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • OrthoMatic Spine:

      • Number of Experts: Five US board-certified radiologists.
      • Qualifications: US board-certified radiologists. No specific years of experience are mentioned.
      • Ground Truth for Reader Study: The "mean values obtained from the radiologists' assessments" for the 150 spine X-ray images served as the reference for comparison against the algorithm's output.
    • TimeLens: Not applicable, as no reader study was conducted.

    4. Adjudication Method for the Test Set

    • OrthoMatic Spine: The algorithm's output was assessed against the mean values obtained from the five radiologists' assessments. This implies a form of consensus or average from multiple readers rather than a strict 2+1 or 3+1 adjudication.
    • TimeLens: Not applicable.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study and Effect Size of AI Assistance

    • OrthoMatic Spine: A multi-reader study was performed, but it was a standalone evaluation of the algorithm against human reader consensus, not a comparative effectiveness study of human readers with and without AI assistance. Therefore, no effect size for reader improvement with AI assistance is reported. The study aimed to show that the algorithm replicates average human rater performance.
    • TimeLens: Not applicable.

    6. Standalone (Algorithm Only) Performance

    • OrthoMatic Spine: Yes, a standalone performance evaluation of the OrthoMatic Spine algorithm (without human-in-the-loop assistance) was conducted. The algorithm's measurements were compared against the mean values derived from five human radiologists.
    • TimeLens: The description suggests the TimeLens tool itself is a "simple workflow enhancement algorithm" and its performance was evaluated through non-clinical verification and validation activities rather than a specific standalone clinical study with an AI algorithm providing measurements.

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    • OrthoMatic Spine:
      • For the reader study (test set performance evaluation): Expert consensus (mean of five US board-certified radiologists' measurements) was used to assess the algorithm's performance.
      • For the training set: The initial annotations were performed by trained non-radiologists and then reviewed by board-certified radiologists. This can be considered a form of expert-verified annotation.
    • TimeLens: Not specified, as no clinical ground truth assessment was required.

    8. The Sample Size for the Training Set

    • OrthoMatic Spine:
      • Number of Individual Patients (Training Data): 6,135 unique patients.
      • Number of Images (Training Data): A total of 23,464 images were collected within the entire dataset, which was split 60% for training, 20% for validation, and 20% for model selection. Therefore, the training set would comprise approximately 60% of both the patient count and image count. So, roughly 3,681 patients and 14,078 images.
    • TimeLens: Not specified.
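    A patient-level 60/20/20 partition like the one described above can be sketched as follows (the actual tooling is not disclosed; names are illustrative). Splitting by patient rather than by image ensures that no patient's images leak across the training, validation, and model-selection sets.

```python
import random
from collections import defaultdict

def split_by_patient(image_to_patient, fracs=(0.60, 0.20, 0.20), seed=0):
    """Partition image IDs into train / validation / model-selection sets
    at the patient level: every image of a given patient lands in
    exactly one partition."""
    by_patient = defaultdict(list)
    for image_id, patient_id in image_to_patient.items():
        by_patient[patient_id].append(image_id)
    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)
    n = len(patients)
    cut1 = int(fracs[0] * n)
    cut2 = cut1 + int(fracs[1] * n)
    groups = (patients[:cut1], patients[cut1:cut2], patients[cut2:])
    return tuple(sorted(i for p in g for i in by_patient[p]) for g in groups)
```

    With 6,135 patients and 23,464 images, this would yield partitions of roughly the 60/20/20 proportions quoted above, measured in patients.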

    9. How the Ground Truth for the Training Set Was Established

    • OrthoMatic Spine: Most images in the dataset (used for training, validation, and model selection) were annotated using a dedicated annotation tool (Darwin, V7 Labs) by a US-based medical data labeling company (Cogito Tech LLC). Initial annotations were performed by trained non-radiologists and subsequently reviewed by board-certified radiologists. This process was guided by written guidelines and automated workflows to ensure quality and consistency, with annotations including vertebral landmarks and key vertebrae (C7, L1, S1).
    • TimeLens: Not specified.
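    Given vertebral endplate landmarks like those annotated above, a Cobb angle reduces to the angle between two endplate lines. A minimal sketch, with the landmark format an assumption (clinically, the lines come from the superior endplate of the upper end vertebra and the inferior endplate of the lower end vertebra):

```python
import math

def cobb_angle(upper_endplate, lower_endplate):
    """Cobb angle in degrees between two endplate lines, each given as a
    pair of (x, y) landmark points on the vertebral endplate. The result
    is direction-independent and lies in [0, 90]."""
    def direction(p, q):
        return math.atan2(q[1] - p[1], q[0] - p[0])
    diff = abs(direction(*upper_endplate) - direction(*lower_endplate)) % math.pi
    return math.degrees(min(diff, math.pi - diff))
```

    For example, a horizontal endplate against one tilted by 30 degrees yields a Cobb angle of 30 degrees, regardless of the order in which each line's two landmarks are listed.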