Search Results
Found 2109 results
510(k) Data Aggregation
(210 days)
LLZ
SurgiTwin is a web-based platform designed to help healthcare professionals carry out pre-operative planning for knee reconstruction procedures based on their patients' imported imaging studies. Experience with the system and clinical assessment are necessary for its proper use in reviewing and approving the planning output.
The system works with a database of digital representations of surgical materials supplied by their manufacturers. SurgiTwin generates a PDF report as an output. End users of the generated SurgiTwin reports are trained healthcare professionals. SurgiTwin does not provide a diagnosis or surgical recommendation.
SurgiTwin is a semi-automated Software as a Medical Device (SaMD) that assists health care professionals in the pre-operative planning of total knee replacement surgery. Using a series of algorithms, the software creates 2D segmented images, a 3D model, and relevant measurements derived from the patient's pre-dimensioned medical images. The software interface allows the user to adjust the plan manually to verify the accuracy of the model and achieve the desired clinical targets. SurgiTwin generates a PDF report as an output. SurgiTwin does not provide a diagnosis or surgical recommendation.
The intended patient population is patients over 22 years of age undergoing total knee replacement surgery without any existing material in the operated lower limb.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) clearance letter for SurgiTwin:
1. Acceptance Criteria and Reported Device Performance
The provided document specifically details acceptance criteria for the segmentation ML model. Other functions (automatic landmark function, metric generation, implant placement, osteophyte removal) are mentioned as having "predefined clinical acceptance criteria" and "all acceptance criteria were met," but the specific numeric criteria are not listed.
Table of Acceptance Criteria (for the Segmentation ML Model) and Reported Device Performance:
Metric | Acceptance Criteria | Reported Device Performance |
---|---|---|
Mean DSC (Dice Similarity Coefficient) | > 0.95 | Met (> 0.95, implied by "met the acceptance criteria") |
Mean voxel-based AHD (Average Hausdorff Distance) | < 0.9 | Met (< 0.9, implied by "met the acceptance criteria") |
95th percentile of the boundary based HD 95 (Hausdorff Distance 95th percentile) |
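For reference, these segmentation metrics can be computed from binary masks as sketched below. This is a generic illustration, assuming NumPy arrays and known voxel spacing; it is not SurgiTwin's actual implementation:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance between mask surfaces."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ binary_erosion(a)   # boundary voxels of each mask
    surf_b = b ^ binary_erosion(b)
    # Distance from every voxel to the nearest boundary voxel of the other mask.
    dt_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    dt_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    dists = np.concatenate([dt_to_b[surf_a], dt_to_a[surf_b]])
    return float(np.percentile(dists, 95))
```

HD95 is typically reported instead of the maximum Hausdorff distance because it is far less sensitive to single outlier voxels on the segmentation boundary.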
(15 days)
LLZ
Rapid DeltaFuse is an image processing software package to be used by trained professionals, including but not limited to physicians and medical technicians.
The software runs on a standard off-the-shelf computer or a virtual platform, such as VMware, and can be used to perform image viewing, processing, and analysis of images.
Data and images are acquired through DICOM compliant imaging devices.
Rapid DeltaFuse provides both viewing and analysis capabilities for imaging datasets acquired with Non-Contrast CT (NCCT) images.
The CT analysis includes NCCT maps showing areas of hypodense and hyperdense tissue, including overlays of time-differentiated scans of the same patient.
Rapid DeltaFuse is intended for use for adults.
Rapid DeltaFuse (DF) is a Software as a Medical Device (SaMD) image processing module and is part of the Rapid Platform. It provides visualization of time-differentiated neurological hyperdense and hypodense tissue from Non-Contrast CT (NCCT) images.
Rapid DF is integrated into the Rapid Platform, which provides common functions and services to support image processing modules, such as DICOM filtering and job and interface management, along with external-facing cybersecurity controls. The Integrated Module and Platform can be installed on-premises within the customer's infrastructure behind their firewall or in a hybrid on-premises/cloud configuration. The Rapid Platform accepts DICOM images and, upon processing, returns the processed DICOM images to the source imaging modality or PACS.
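To illustrate the kind of NCCT processing described, here is a hedged sketch using pydicom. The file names and HU cut-offs are assumptions for illustration only; the clearance letter does not disclose the thresholds Rapid DeltaFuse actually uses:

```python
import numpy as np
import pydicom

def to_hu(ds):
    """Convert stored pixel values to Hounsfield units."""
    return ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)

# Hypothetical file names; in practice these arrive via DICOM from PACS.
baseline = to_hu(pydicom.dcmread("ncct_baseline.dcm"))
followup = to_hu(pydicom.dcmread("ncct_followup.dcm"))

# Illustrative HU cut-offs only (roughly: acute blood > 60 HU, early
# ischemic hypodensity below normal parenchyma at ~20-40 HU).
hyperdense_map = followup > 60
hypodense_map = (followup > 0) & (followup < 20)

# "Time differentiated" overlay: density change between scans of the same
# patient; assumes the two volumes have already been co-registered.
delta = followup - baseline
```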
The provided FDA 510(k) clearance letter for Rapid DeltaFuse describes the acceptance criteria and the study that proves the device meets those criteria, though some details are absent.
Here's a breakdown of the information found in the document, structured according to your request:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are not explicitly stated as quantified targets. Instead, the document describes the type of performance evaluated and the result obtained.
Acceptance Criteria (Implied/Description of Test) | Reported Device Performance |
---|---|
Co-registration accuracy for slice overlays | DICE coefficient of 0.94 (Lower Bound 0.93) |
Software performance meeting design requirements and specifications | "Software performance testing demonstrated that the device performance met all design requirements and specifications." |
Reliability of processing and analysis of NCCT medical images for visualization of change | "Verification and validation testing confirms the software reliably processes and supports analysis of NCCT medical images for visualization of change." |
Performance of Hyperdensity and Hypodensity display with image overlay | "The Rapid DF performance has been validated with a 0.95 DICE coefficient for the overlay addition to validate the overlay performance..." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: 14 cases were used for the co-registration analysis. The sample size for other verification and validation testing is not specified.
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective or prospective).
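The document does not say how the 0.93 lower bound on the DICE coefficient was derived; with a small test set such as these 14 cases, one common approach is a percentile bootstrap, sketched here with illustrative (not actual) per-case scores:

```python
import numpy as np

def bootstrap_lower_bound(scores, n_boot=10_000, alpha=0.05, seed=0):
    """One-sided percentile-bootstrap lower bound on the mean score."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    means = np.array([
        rng.choice(scores, size=scores.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    return float(np.percentile(means, 100 * alpha))

# Illustrative per-case DICE values for 14 cases (not the actual study data):
cases = [0.95, 0.93, 0.94, 0.96, 0.92, 0.94, 0.95,
         0.93, 0.94, 0.96, 0.93, 0.95, 0.94, 0.92]
print(bootstrap_lower_bound(cases))
```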
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- This information is not provided in the document. The document refers to "performance validation testing" and "software verification and validation testing" but does not detail the involvement of human experts or their qualifications for establishing ground truth.
4. Adjudication Method for the Test Set
- This information is not provided in the document.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study was reported. The document focuses on the software's performance (e.g., DICE coefficient for co-registration) rather than its impact on human reader performance.
6. Standalone (Algorithm Only) Performance Study
- Yes, a standalone performance study was done. The reported DICE coefficients (0.94 and 0.95) are measures of the algorithm's performance in co-registration and overlay addition, independent of human interaction.
7. Type of Ground Truth Used
- The document implies that the ground truth for co-registration and overlay performance was likely established through a reference standard based on accurate image alignment and feature identification, against which the algorithm's output (DICOM images with overlays) was compared. The exact method of establishing this reference standard (e.g., manual expert annotation, a different validated algorithm output) is not explicitly stated.
8. Sample Size for the Training Set
- The document does not specify the sample size used for training the Rapid DeltaFuse algorithm.
9. How Ground Truth for the Training Set Was Established
- The document does not specify how the ground truth for the training set was established.
(269 days)
LLZ
Neurovascular Insight V1.0 is an optional user interface for use on a compatible technical integration environment and designed to be used by trained professionals with medical imaging education including, but not limited to, physicians. Neurovascular Insight V1.0 is intended to:
- Display and, if necessary, export neurological DICOM series and outputs provided by compatible processing docker applications, through the technical integration environment.
- Allow the user to edit and modify parameters that are optional inputs of aforementioned applications. These modified parameters are provided by the technical integration environment as inputs to the docker application to reprocess the outputs. When available, Neurovascular Insight V1.0 display can be updated with the reprocessed outputs.
- If requested by an application, allow the user to confirm information before displaying associated outputs and export them.
The device does not alter the original image information and is not intended to be used as a diagnostic device. The outputs of each compatible application must be interpreted by the predefined intended users, as specified in the application's own labeling. Moreover, the information displayed is intended to be used in conjunction with other patient information and, based on professional judgment, to assist the clinician in the medical imaging assessment. It is not intended to be used in lieu of standard-of-care imaging.
Trained professionals are responsible for viewing the full set of native images per the standard of care.
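Purely to illustrate the reprocessing flow described above (user-edited parameters handed back to a containerized processing application), here is a hedged sketch; the image name, flags, and parameter layout are hypothetical, as the letter does not describe the integration environment's internals:

```python
import json
import pathlib
import subprocess
import tempfile

# Hypothetical user-edited parameters collected by the UI.
params = {"vessel_threshold": 0.5}

with tempfile.TemporaryDirectory() as work:
    (pathlib.Path(work) / "params.json").write_text(json.dumps(params))
    # The integration environment would re-run the processing container
    # with the modified inputs mounted; "neuro-app:latest" is invented.
    subprocess.run(
        ["docker", "run", "--rm", "-v", f"{work}:/data",
         "neuro-app:latest", "--params", "/data/params.json"],
        check=True,
    )
    # Reprocessed outputs would then be read back from the mounted
    # volume and displayed by the user interface.
```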
Neurovascular Insight V1.0 is an optional user interface for use on a compatible technical integration environment and designed to be used by trained professionals with medical imaging education including, but not limited to, physicians and medical technicians.
It is worth noting that Neurovascular Insight V1.0 is an evolution of the FDA cleared medical device Olea S.I.A. Neurovascular V1.0 (K223532).
Neurovascular Insight V1.0 does not contain any calculation feature or any algorithm (deterministic or AI).
The provided FDA 510(k) clearance letter for Neurovascular Insight V1.0 states that the device "does not contain any calculation feature or any algorithm (deterministic or AI)." Furthermore, it explicitly mentions, "Neurovascular Insight V1.0 provides no output. Therefore, the comparison to predicate was based on the comparison of features available within both devices. No performance feature requires a qualitative or quantitative comparison and validation."
Based on this, it's clear that the device is a user interface and does not include AI algorithms or generate outputs that would require a study involving acceptance criteria for AI performance (e.g., sensitivity, specificity, accuracy). Therefore, the questions related to AI-specific performance criteria, ground truth establishment, training sets, and MRMC studies are not applicable to this particular device.
The "study" conducted for this device was a series of software verification and validation tests to ensure its functionality as a user interface and its substantial equivalence to its predicate.
Here's a breakdown of the requested information based on the provided document, highlighting where the requested information is not applicable due to the device's nature:
1. A table of acceptance criteria and the reported device performance
Note: As the device is a user interface without AI or output generation, there are no quantitative performance metrics like sensitivity, specificity, or accuracy that would typically be associated with AI algorithms. The acceptance criteria relate to the successful execution of software functionalities.
Acceptance Criteria (Based on information provided) | Reported Device Performance |
---|---|
Product risk assessment successfully completed | Confirmed |
Software modules verification tests successfully completed | Confirmed |
Software validation test successfully completed | Confirmed |
System provides all capabilities necessary to operate according to its intended use | Confirmed |
System operates in a manner substantially equivalent to the predicate device | Confirmed |
All features tested during verification phases (Software Test Description) | Successfully performed as reported in Software Test Report (STR) |
Specific features highlighted by risk analysis tested during usability process (human factor considered) | User Guide followed, no clinically blocking bugs, no incidents during processing |
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: Not explicitly stated as a number of patient cases or images, as the testing was focused on software functionality rather than AI performance on a dataset. The testing refers to "software modules verification tests" and "software validation test."
- Data Provenance: Not applicable in the context of clinical data for AI development/validation, as the device doesn't use or produce clinical outputs requiring such data. The testing was internal software validation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Not Applicable: Given that the device is a user interface and does not utilize AI or produce diagnostic outputs, there was no need to establish clinical ground truth for a test set by medical experts in the traditional sense. The "ground truth" for its functionality would be the design specifications and successful execution of intended features. The document mentions "operators" who "reported no issue" during usability testing, but these are likely system testers/engineers, not clinical experts establishing diagnostic ground truth.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not Applicable: No clinical ground truth was established, so no adjudication method was required.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of human reader improvement with AI vs. without AI assistance
- No: The document explicitly states, "Neurovascular Insight V1.0 does not contain any calculation feature or any algorithm (deterministic or AI)." Therefore, an MRMC study comparing human readers with and without AI assistance was not performed.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- No: The device does not contain an algorithm, only a user interface. Standalone algorithm performance testing is not applicable.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not Applicable: No clinical ground truth was established, as the device is a user interface without AI or diagnostic output generation. The "ground truth" for its validation was adherence to software specifications and intended functionalities.
8. The sample size for the training set
- Not Applicable: The device does not contain any AI algorithms, therefore, no training set was used.
9. How the ground truth for the training set was established
- Not Applicable: No training set was used.
(728 days)
LLZ
This software is a medical device intended for the evaluation of DICOM images. It receives, stores, processes, and displays sequential DICOM images primarily obtained through low-dose chest fluoroscopy (e.g., RF and AX modalities).
This software is not intended to be used for primary diagnosis. Reference images such as scintigraphy or CT scans may be displayed for supplementary purposes.
The subject device is a software-only medical imaging system intended for installation on commercial off-the-shelf personal computers. It receives, stores, processes, and displays sequential DICOM images, primarily obtained from chest fluoroscopy (e.g., RF, AX modalities). The software is compatible with external systems such as hospital PACS via DICOM-compliant communication protocols.
The device operates as a standalone application, with all processing and visualization functionalities integrated into a single software package.
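For a sense of what handling sequential DICOM images involves, here is a minimal sketch of reading a multi-frame fluoroscopy object with pydicom and stepping through its frames for cine display; the file name is hypothetical:

```python
import pydicom

ds = pydicom.dcmread("chest_fluoro.dcm")   # hypothetical multi-frame RF object
frames = ds.pixel_array                    # shape: (num_frames, rows, cols)
fps = float(getattr(ds, "CineRate", 15))   # (0018,0040); fall back if absent

print(f"{len(frames)} frames, nominal playback at {fps} fps")
for i, frame in enumerate(frames):
    # A viewer would render each frame at 1/fps intervals; per-frame
    # statistics stand in for display here.
    print(f"frame {i}: min={frame.min()} max={frame.max()}")
```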
The provided FDA clearance letter and 510(k) summary for Mediott Inc.'s RW-1 device do not contain explicit acceptance criteria or results from a study that demonstrates the device meets specific performance criteria in the way typically expected for AI/ML-driven diagnostic devices.
Instead, the submission focuses on establishing substantial equivalence to a predicate device (KONICAMINOLTA DI-X1, K212685) based on technological characteristics and non-clinical performance testing.
Here's a breakdown of the information that can be extracted, and what is missing based on your requested format:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria:
The document does not define explicit, quantitative acceptance criteria for performance metrics like sensitivity, specificity, accuracy, or other clinical outcomes. The "acceptance criteria" are implied to be that the device performs its stated functions reliably and consistently, and that its differences from the predicate do not raise new questions of safety or effectiveness.
Reported Device Performance:
The document does not report quantitative performance metrics for the RW-1 device. The performance is described qualitatively as "functional correctness, repeatability, and robustness of the device functions."
Acceptance Criteria | Reported Device Performance |
---|---|
Functional correctness, repeatability, and robustness of device functions consistent with industry standards for software-based medical devices. | Qualitative Statement: "The implemented software algorithms operate reliably and consistently under representative conditions. The primary focus was on ensuring functional correctness, repeatability, and robustness of the device functions, consistent with industry standards for software-based medical devices." |
No new questions of safety or effectiveness compared to the predicate device. | Conclusion: "The observed differences do not raise new questions of safety or effectiveness and reflect reductions in scope or architectural simplification." |
2. Sample Size Used for the Test Set and Data Provenance
The document states: "Non-clinical performance testing was conducted as part of the comprehensive system-level verification and validation (V&V) activities for the subject device."
- Sample Size for Test Set: Not specified. The document implies that the testing was focused on the system's inherent functions rather than evaluation against a dataset of clinical cases with established ground truth.
- Data Provenance: Not specified.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- Number of Experts: None explicitly mentioned.
- Qualifications of Experts: Not applicable, as there's no mention of expert-established ground truth for a test set.
4. Adjudication Method for the Test Set
- Adjudication Method: Not applicable, as no expert adjudication for a test set is mentioned.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and the Effect Size of Human Readers Improvement with AI vs. Without AI Assistance
- MRMC Study: No, an MRMC study was not conducted or reported. This type of study would typically be performed for AI/ML diagnostic aids to assess human reader performance with and without AI assistance.
- Effect Size: Not applicable, as no MRMC study was performed.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) was done
The document states: "No separate standalone bench tests were performed beyond these system-level V&V activities, as the system-level testing was considered sufficient to evaluate all performance-critical features under anticipated use conditions."
This indicates that an "algorithm-only" or "standalone" performance evaluation (in the sense of quantitative clinical performance metrics on a clinical dataset) was not performed. The "standalone application" mentioned in the description refers to the software's architecture, not a standalone performance evaluation.
7. The Type of Ground Truth Used
- Type of Ground Truth: Not applicable. The V&V activities focused on functional correctness of the software's operations (e.g., displaying images, performing measurements) rather than clinical ground truth (e.g., diagnosis confirmed by pathology, expert consensus, or outcomes).
8. The Sample Size for the Training Set
- Sample Size for Training Set: Not applicable. The RW-1 is described as a medical image management and processing system with specific display and measurement functions. There is no indication that it is an AI/ML device that requires a training set in the conventional sense for learning-based tasks (e.g., disease detection, classification). The "implemented software algorithms" are deterministic.
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: Not applicable, as there is no indication of a training set or learning-based algorithms.
Summary of Device Nature:
Based on the provided text, the RW-1 device is primarily a medical image management and processing system. Its functions include receiving, storing, processing, and displaying DICOM images, with features like density/gradation adjustment, rotation, scaling, panning, cine display, comparison, and area measurement.
The key phrase "Statistical exhaustiveness was not required due to the deterministic nature of the implemented algorithms" strongly suggests that the RW-1 is developed using traditional, rule-based or deterministic algorithms for image manipulation and display, rather than machine learning algorithms that would typically require large training and test sets and extensive clinical performance evaluations with ground truth. The V&V focused on ensuring these deterministic functions worked correctly and reliably.
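As an example of such a deterministic function, a density/gradation (window/level) adjustment is a simple linear mapping whose outputs can be asserted exactly during V&V; this is a generic sketch, not RW-1's actual code:

```python
import numpy as np

def window_level(img: np.ndarray, center: float, width: float) -> np.ndarray:
    """Linearly map raw pixel values in [center - width/2, center + width/2]
    to 8-bit display values, clipping values outside the window."""
    lo = center - width / 2.0
    out = np.clip((img - lo) / width, 0.0, 1.0)
    return (out * 255.0).astype(np.uint8)
```

Because the same input always yields the same output, functional correctness can be verified against fixed expected values, consistent with the submission's statement that statistical exhaustiveness was not required.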
(155 days)
LLZ
The UC-Care Navigo Workstation is an adjunctive tool for ultrasound guided procedures and is intended to be used by physicians in the clinic or hospital for 2-D and 3-D visualization of ultrasound images of the prostate gland. The Navigo Workstation offers the ability to fuse DICOM originated information (e.g. MRI, CT) with the ultrasound images and thus superimposes information from one modality onto the other.
It also provides the ability to display a simulated image of a tracked insertion tool such as a needle, guide wire, catheter, grid plate or probe on a computer monitor screen that shows images of the target organ and the current and the projected future path of the interventional instrument taking into account patient movement.
Additional software features include patient data management, multiplanar reconstruction, segmentation, image measurement and 3-D image registration, as well as storage and future retrieval of this information.
Navigo is intended for treatment planning and guidance for clinical, interventional and/or diagnostic procedures. The device is intended to be used in interventional and diagnostic procedures in a clinical setting. Example procedures include, but are not limited to image fusion for diagnostic clinical examinations and procedures, soft tissue biopsies, soft tissue ablations and placement of fiducial markers. The software is not intended to predict ablation volumes or predict ablation success.
The Navigo Workstation version 2.3, model: FPRMC00039 (hereinafter referred to as "Navigo Workstation Version 2.3") is an adjunctive tool in the management of prostate diagnostic and interventional procedures. The Navigo Workstation provides tracking, recording, and management solutions for prostate insertion tools (such as a needle, guide wire, or catheter).
The Navigo Workstation is designed to assist the physician in performing prostate diagnostic and interventional procedures by providing regional orientation information, displaying a 3D model with real-time tracking and recording of the needle location. The Navigo Workstation offers the ability to fuse DICOM-originated information (e.g. MRI, CT) with the ultrasound images and thus superimposes information from one modality onto the other. The device includes means to compensate for the patient's body and prostate motion at any time during the procedure. In addition, the Navigo Workstation Version 2.3 supports treatment procedures by allowing the physician to plan the treatment by selecting a treatment needle with its defined properties (as declared by the manufacturer) and displaying the virtual ablation zone. The system enables the physician to segment anatomic ROIs (anatomic Regions Of Interest, e.g. surrounding organs) and present the distance measurements of the virtual treatment zone from them. The ROIs used for treatment planning can be either ROIs segmented on MRI/CT images or positive pathology results updated on historic biopsy procedures performed on the Navigo.
The Navigo Workstation is used as an add-on to the ultrasound diagnostic and interventional procedures of the prostate gland. When operated in conjunction with the standard equipment in trans-rectal/trans-perineal ultrasound prostate procedure, the Navigo software may be used for the following:
- To assist the physician by transferring and displaying ultrasound images on the workstation screen
- To provide regional orientation information during prostate procedures
- To build, display, and manipulate a 3D model of the prostate on a screen
- To define the physician's ROIs (Regions Of Interest) and display them on the 3D model
- To archive procedure data and generate reports
- To provide data management solutions
- To track, display, and record the needle trajectory location retrieved from the ultrasound
- To display the scanning history, including pathology analyses
- To retrieve and display DICOM-compliant information
- To fuse DICOM-compliant originated regions of interest with the ultrasound 2D and 3D information
- To support the grid trajectory in Grid guided procedures
- To perform automatic or manual compensation for patient movement.
- To support treatment procedure: The module allows pre-procedure planning, real-time display of the treatment needle virtual ablation zone, accurate placement of the needle or insertion tools (such as cryoprobes) on targets, 3D tracking, and distance measurements (proximity) to anatomic ROIs.
The Navigo Workstation Version 2.3 is designed to work with standard trans-rectal/transperineal ultrasound systems and biopsy setup without changing or interfering with the physician's existing workflow. The Navigo Workstation Version 2.3 connects to the video output of the ultrasound system and by tracking the ultrasound probe's position, the recorded 2D ultrasound images are transferred to the Navigo Workstation Version 2.3 for viewing and creation of a 3D model. As with any other procedure, the Ultrasound probe is used together with a standard disposable cover sheath supplied by the user.
Two-dimensional (2D) images and the 3D model of the prostate are displayed on the Navigo Workstation Version 2.3 screen. The Navigo Workstation is equipped with tools to manipulate (rotate, pan, zoom) the model, and to archive and retrieve the information for further use.
The tracking and recording enable the display of an accurate 3D model of the prostate and to record needle locations on the model. Pathology diagnosis results may be updated on the 3D model and a color display representation provides a visual display of the pathology results.
In offline mode, the workstation allows analysis of previous procedures, updates to biopsy locations, report generation, and DICOM-based ROI definition. Offline tools support treatment planning by segmenting anatomical ROIs, displaying virtual treatment regions, and measuring distances from these regions to surrounding structures. Data from prior imaging or biopsy procedures can be utilized for planning.
The device consists of the following components and accessories: The Navigo Workstation cart, electromagnetic transmitter, probe sensor, reference sensor, grid-plate sensor, sensor fixators, reference sensor tape, and cables.
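For illustration only, a virtual ablation zone could be represented geometrically from manufacturer-declared needle properties by sampling an ellipsoid of revolution aligned with the needle axis; the letter does not describe Navigo's actual zone model, and every convention below is an assumption:

```python
import numpy as np

def ablation_zone_points(tip, direction, length, width, n=24):
    """Sample surface points of an ellipsoid of revolution whose long axis
    follows the needle direction, sized by manufacturer-declared zone
    length/width. Centering the zone one half-length past the tip is an
    assumption; vendors specify zone geometry in their own terms."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    # Build an orthonormal frame (d, u, v) around the needle axis.
    ref = (np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9
           else np.array([0.0, 1.0, 0.0]))
    u = np.cross(d, ref)
    u /= np.linalg.norm(u)
    v = np.cross(d, u)
    a, b = length / 2.0, width / 2.0
    center = np.asarray(tip, dtype=float) + a * d
    theta, phi = np.meshgrid(np.linspace(0.0, np.pi, n),
                             np.linspace(0.0, 2.0 * np.pi, n))
    theta, phi = theta.ravel(), phi.ravel()
    pts = (center[None, :]
           + np.outer(a * np.cos(theta), d)
           + np.outer(b * np.sin(theta) * np.cos(phi), u)
           + np.outer(b * np.sin(theta) * np.sin(phi), v))
    return pts  # shape (n*n, 3)
```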
The provided document, an FDA 510(k) Clearance Letter for the Navigo Workstation 2.3, does not contain specific acceptance criteria (e.g., minimum accuracy percentages, sensitivity, specificity thresholds) or a detailed report of device performance against such criteria. The document primarily focuses on demonstrating substantial equivalence to a predicate device through technological similarities and a summary of non-clinical performance testing.
Therefore, I cannot extract a table of acceptance criteria and reported device performance directly from this document. The document lists the types of non-clinical tests performed, but not the quantitative results or the specific acceptance thresholds for those tests.
However, I can provide information based on the listed non-clinical performance testing and general context:
Summary of Device Acceptance and Performance (Based on Provided Document)
The Navigo Workstation 2.3 demonstrated performance through a series of non-clinical (bench) tests and adherence to recognized standards. The document asserts that these tests validate the device's changes and ensure its safety and effectiveness, leading to a conclusion of substantial equivalence.
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category/Test | Acceptance Criteria (Implied) | Reported Device Performance (Implied) |
---|---|---|
Software Verification & Validation | Compliance with IEC 62304 | Successfully validated, changes do not affect safety/effectiveness. |
Electrical Safety | Compliance with IEC 60601-1 | Demonstrated compliance. |
EMC Testing | Compliance with IEC 60601-1-2 | Demonstrated compliance. |
Risk and Usability | Compliance with ISO 14971, IEC 60601-1-6 | Demonstrated compliance, deemed safe and effective. |
Mesh Proximity Test (Treatment Planning) | Accurate computation of shortest distance between 3D shapes (treatment zone & ROIs). | Algorithm developed and validated for accurate proximity measurements. |
Margin of ROIs & Positive Biopsies | Accurate addition of margins to ROIs and positive biopsies; consistency across scenarios. | Software capability validated. |
Virtual Ablation Zone Display | Accurate display/alignment with needle manufacturer specifications (3D within prostate model, 2D projection). | Accuracy validated. |
Mechanical Testing (New Cart/Components) | Performance and functionality compliance with defined requirements (e.g., stability, function, vibration, temperature, load). | Demonstrated compliance, ensures device safety/effectiveness not impacted by hardware changes. |
Note: The "acceptance criteria" and "reported device performance" are inferred based on the statement that these tests were conducted to "validate the changes" and ensure "compliance with defined requirements," ultimately supporting the conclusion of substantial equivalence and safety/effectiveness.
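As an illustration of what the Mesh Proximity Test's computation can look like, here is a hedged sketch that approximates the shortest distance between two surfaces from their vertex sets with a k-d tree; UC-Care's actual algorithm is not described in the letter:

```python
import numpy as np
from scipy.spatial import cKDTree

def min_surface_distance(verts_a: np.ndarray, verts_b: np.ndarray) -> float:
    """Approximate shortest distance between two triangulated surfaces,
    e.g. a virtual ablation zone and an anatomic ROI, from their (N, 3)
    vertex arrays. Denser meshes tighten the estimate, since true
    point-to-triangle distances are not computed here."""
    nearest, _ = cKDTree(verts_b).query(verts_a)
    return float(nearest.min())
```

Combined with the ablation-zone sampling sketched earlier, min_surface_distance(zone_points, roi_vertices) would yield the kind of proximity readout the table refers to.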
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not explicitly stated for any of the bench tests. The description of tests like "Mesh Proximity Test" or "Tests for Margin of ROIs" suggests computational validation rather than a fixed number of physical samples.
- Data Provenance: The tests are described as "Bench Testing" performed by UC-Care. This indicates a controlled, laboratory-type setting. There is no mention of country of origin for test data, but the company is based in Israel. The tests are non-clinical, so the concept of retrospective or prospective data as typically applied to patient studies does not directly apply.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: Not mentioned.
- Qualifications of Experts: Not mentioned. For bench tests, ground truth would likely be established by engineering specifications, computational models, or known physical properties rather than human experts in the clinical sense.
4. Adjudication Method for the Test Set
- Adjudication Method: Not mentioned. It's improbable that an adjudication method like 2+1 or 3+1 would be applicable for these types of non-clinical bench tests.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study Done? No. The document explicitly states: "No clinical Study was performed for the purpose of this submission." and "Clinical performance data was not required to demonstrate safe and effective use of Navigo workstation 2.3."
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
- Standalone Study Done? Not directly stated as a "standalone study" with specific performance metrics (e.g., AUC, sensitivity, specificity). However, the "Non-Clinical Performance Testing" on specific software features (e.g., Mesh Proximity, Virtual Ablation Zone Display, Margin addition) inherently evaluates the algorithm's performance in a standalone manner, as it's testing the computational output of these features against defined requirements or specifications. The document doesn't provide the quantitative results of these standalone algorithmic evaluations, only that they were performed and validated the changes.
7. Type of Ground Truth Used
- Type of Ground Truth: For the non-clinical bench tests, the ground truth appears to be based on:
- Defined specifications/requirements: For software functionalities (e.g., accurate calculation of shortest distance, accurate margin addition, accurate display of ablation zone aligned with manufacturer specs).
- Recognized consensus standards: For general software, electrical safety, EMC, and risk/usability (IEC 62304, IEC 60601 series, ISO 14971).
- Internal existing test methods: Previously utilized for legally marketed devices by UC-Care.
8. Sample Size for the Training Set
- Training Set Sample Size: Not applicable/not mentioned. This device is a medical image management and processing system with new software features and hardware updates, not an AI/ML device that requires a distinct "training set" in the context of machine learning model development. The document does not describe the use of a machine learning component that would necessitate a training set.
9. How Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: Not applicable, as there is no mention of a training set for machine learning.
(125 days)
LLZ
The RiaspDR software directly controls and acquires general radiographic images of human anatomy (excluding fluoroscopic, angiographic, dental and mammographic applications). The RiaspDR software is designed to work with X-ray images from the Mars1417X detector (K210316).
RIASPDR is a Radiographic Imaging Acquisition Software Platform. The RIASPDR software directly controls and acquires images from the Mars1417X detector (K210316), manufactured by iRay Technology. RIASPDR also processes the acquired images, complies with DICOM standards, and is able to transmit and receive data with the PACS system.
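To illustrate the PACS-facing side, here is a minimal sketch of sending an image via a DICOM C-STORE using pynetdicom; the host, port, and AE titles are placeholders:

```python
import pydicom
from pynetdicom import AE
from pynetdicom.sop_class import ComputedRadiographyImageStorage

ds = pydicom.dcmread("acquired_image.dcm")  # hypothetical acquired image

ae = AE(ae_title="RIASPDR")
ae.add_requested_context(ComputedRadiographyImageStorage)

# Placeholder PACS address and AE title.
assoc = ae.associate("pacs.example.org", 104, ae_title="PACS")
if assoc.is_established:
    status = assoc.send_c_store(ds)
    print(f"C-STORE status: 0x{status.Status:04X}")
    assoc.release()
```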
The provided FDA 510(k) clearance letter and supporting documentation for the Shen Zhen Cambridge-hit Digital Radiographic Imaging Acquisition Software - DR (RiaspDR) do not contain detailed information about the specific acceptance criteria and the comprehensive study that proves the device meets these criteria.
The document states:
- "Software verification and validation testing were conducted and documentation was provided in this 510(k). Results demonstrated that the predetermined acceptance criteria were met."
- "Software Verification and Validation Testing was performed in accordance with internal requirements, international standards and guidance shown below, the safety and effectiveness of RIASPDR were supported, and the substantial equivalence to the predicate device was demonstrated."
- "Clinical testing: Not applicable."
This indicates that acceptance criteria were defined and met through non-clinical testing, but the specifics of these criteria and the methodology of the study are not included in this extract. The document mainly focuses on comparative equivalence to a predicate device (Econsole1, K152172) and adherence to general software validation guidelines and DICOM standards.
Therefore, I cannot provide a detailed answer to your request based solely on the provided text, as the specific information about the acceptance criteria table, sample sizes, expert involvement, adjudication, MRMC study, standalone performance, and ground truth establishment is not present.
However, I can extract the available information and highlight what is missing based on your request:
Device: Digital Radiographic Imaging Acquisition Software - DR (RiaspDR)
Study Type: Non-clinical (Software Verification and Validation Testing)
1. Acceptance Criteria and Reported Device Performance
The document specifies performance deviations for certain measurement functions, which likely serve as a subset of the acceptance criteria. However, a complete table of acceptance criteria and a detailed breakdown of all reported device performance metrics are not provided.
Partial Acceptance Criteria (from "5. Device specification"):
Acceptance Criteria | Reported Device Performance |
---|---|
Deviation of length measurement: |
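For context, length measurement in projection radiography is typically derived from the detector's pixel spacing; this is a generic sketch with a hypothetical file name, not RiaspDR's implementation:

```python
import numpy as np
import pydicom

ds = pydicom.dcmread("dr_image.dcm")  # hypothetical DR image
# Imager Pixel Spacing (0018,1164): physical detector pitch in mm (row, col).
row_mm, col_mm = (float(v) for v in ds.ImagerPixelSpacing)

def length_mm(p0, p1):
    """Distance in mm between two (row, col) pixel coordinates."""
    dr = (p1[0] - p0[0]) * row_mm
    dc = (p1[1] - p0[1]) * col_mm
    return float(np.hypot(dr, dc))

print(length_mm((100, 100), (400, 500)))
```

A measurement-deviation criterion would then bound the difference between such computed lengths and a physical reference object of known size.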