Search Results
Found 7 results
510(k) Data Aggregation
(30 days)
Interactive, image-guided means of localizing targets in surgical procedures. OTS is indicated for use in cranial and spinal surgical procedures in which anatomical landmarks are not clearly visible or where a desired target is close to critical structures.
The device described above consists of a software module that allows a variable-focus microscope to be used with OTS. The position and trajectory of the microscope's focal point are tracked using light-emitting diodes (LEDs) attached to the microscope and viewed by a camera array, in conjunction with digital communication with the microscope.
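For context on the mechanics, the tracking chain described above (LED markers rigidly fixed to the microscope, a camera array reporting the marker pose, and a calibrated offset from the markers to the focal point) reduces to composing rigid transforms. The sketch below illustrates that composition; the 4x4 pose matrix, the calibration offset, and the function name are hypothetical stand-ins, not details from the 510(k) summary.

```python
import numpy as np

def focal_point_in_camera_frame(marker_pose_cam, focal_offset_marker):
    """Map the microscope focal point into camera-array coordinates.

    marker_pose_cam     : 4x4 homogeneous pose of the LED marker frame as
                          reported by the camera array (assumed).
    focal_offset_marker : (3,) focal-point position expressed in the marker
                          frame, obtained from a calibration step (assumed).
    """
    p = np.append(focal_offset_marker, 1.0)   # homogeneous point
    return (marker_pose_cam @ p)[:3]          # (x, y, z) in the camera frame

# Hypothetical example: marker 10 mm from the camera origin, focal point
# 250 mm in front of the marker along its z axis.
pose = np.eye(4)
pose[:3, 3] = [0.0, 0.0, 10.0]
print(focal_point_in_camera_frame(pose, np.array([0.0, 0.0, 250.0])))  # [0. 0. 260.]
```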
The provided text is a 510(k) summary for the Radionics OTS Microscope Module II. This document primarily focuses on establishing substantial equivalence to predicate devices and does not contain information about acceptance criteria, detailed study designs, or performance metrics that would typically be found in a clinical study report.
Therefore, I cannot fulfill your request for:
- A table of acceptance criteria and the reported device performance: This information is not present.
- Sample size used for the test set and the data provenance: Not mentioned.
- Number of experts used to establish the ground truth for the test set and their qualifications: Not mentioned.
- Adjudication method for the test set: Not mentioned.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, the effect size: Not mentioned.
- If a standalone performance (i.e., algorithm only without human-in-the-loop performance) was done: Not mentioned, and this device is an accessory for an optical tracking system, not a standalone diagnostic algorithm.
- The type of ground truth used: Not mentioned.
- The sample size for the training set: Not mentioned.
- How the ground truth for the training set was established: Not mentioned.
Based on the provided text, the device is an accessory (software module) that allows a variable-focus microscope to be used with the Radionics Optical Tracking System (OTS). Its clearance is based on substantial equivalence to previously cleared devices (Radionics Microscope Module I and Zeiss Surgical Microscope Navigator System), rather than a detailed clinical performance study with acceptance criteria.
The 510(k) summary focuses on:
- Device Name: Radionics OTS Microscope Module II
- Common Name: Intraoperative Guidance Device
- Classification Name: Stereotaxic instrument (accessory)
- Intended Use: Interactive, image-guided means of localizing targets in surgical procedures. Indicated for use in cranial and spinal surgical procedures where anatomical landmarks are not clearly visible or where a desired target is close to critical structures.
- Substantial Equivalence: To Radionics Microscope Module I (K981213) and Zeiss Surgical Microscope Navigator System (K965139).
- Technological Characteristics: The device consists of a software module that allows a variable-focus microscope to be used with OTS. The position and trajectory of the microscope's focal point are tracked using LEDs attached to the microscope, viewed by a camera array, in conjunction with digital communication with the microscope. The technological characteristics are stated to be the same as or similar to those of the predicate devices.
To obtain the detailed study information you are asking for, you would typically need to review the full 510(k) submission, which is not provided, or associated clinical trial documentation if any were required for clearance.
(133 days)
The Optical Tracking System (OTS) is a graphical planning tool that allows for pre-operative and operative planning of cranial and spinal surgical procedures. The OTS is indicated for use in surgical procedures in which anatomical landmarks are not clearly visible or where a desired target is close to critical structures.
The Optical Tracking System (OTS) provides an interactive, image-guided means of localizing targets in surgical procedures. A camera array tracks the position of probes and various tools. In addition to the camera array, the OTS consists of a computer workstation, application software, and probes. Upon registration of the system in the surgical environment, the surgeon selects points on the patient using a probe which are interpreted by the computer and related to corresponding points on the image.
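The registration step described above (matching surgeon-selected points on the patient to corresponding points on the image) is typically solved as a paired-point rigid registration. The following is a minimal sketch of the standard SVD-based (Kabsch) solution; the function name and the example coordinates are illustrative assumptions and do not come from the submission.

```python
import numpy as np

def paired_point_registration(patient_pts, image_pts):
    """Least-squares rigid transform (R, t) mapping patient_pts onto image_pts.

    Both arguments are (N, 3) arrays of corresponding points, N >= 3.
    """
    p_mean, q_mean = patient_pts.mean(axis=0), image_pts.mean(axis=0)
    P, Q = patient_pts - p_mean, image_pts - q_mean
    U, _, Vt = np.linalg.svd(P.T @ Q)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Hypothetical check: a pure translation should be recovered with ~zero residual.
pts = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]], dtype=float)
R, t = paired_point_registration(pts, pts + [5.0, -3.0, 2.0])
residual = pts @ R.T + t - (pts + [5.0, -3.0, 2.0])
print(np.sqrt((residual ** 2).sum(axis=1).mean()))  # ~0.0
```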
The provided text does not contain the detailed information required to answer all parts of your request. This document is a 510(k) summary for a medical device (Optical Tracking System) and focuses on establishing substantial equivalence to predicate devices, rather than presenting a detailed study report with specific performance metrics and validation methodologies.
However, I can extract what is available and indicate where information is missing.
Description of Acceptance Criteria and Study to Prove Device Meets Criteria
The provided document, a 510(k) summary for the Optical Tracking System (OTS), does not explicitly define quantitative acceptance criteria for device performance. Instead, it focuses on demonstrating substantial equivalence to predicate devices and describes the safety testing performed. The "study" described is primarily internal system and unit testing rather than a formal clinical or controlled comparative performance study to prove specific acceptance criteria.
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Explicitly Stated in Document) | Reported Device Performance (as per Safety Summary) |
|---|---|
| Not explicitly defined (e.g., target accuracy, precision thresholds). | Functionality Testing: Rigorously tests the functionality of application software. Verifies capabilities to match and map markers on patient scan and patient. Verifies stereotactic CT and MR localizer transformation equations are correctly encoded. Verifies spatial accuracy of the digitizer. Verifies accuracy of transformations from image space to screen space and screen space to image space. |
Missing Information: Specific quantitative acceptance thresholds (e.g., "accuracy must be within X mm").
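The table's mention of verifying transformations from image space to screen space and back suggests a round-trip check: map known points forward and backward and confirm they return within a tolerance. A minimal sketch of such a check is below; the affine mapping, the point set, and the tolerance are assumptions for illustration, not values from the 510(k).

```python
import numpy as np

def round_trip_error(points_image, to_screen, to_image):
    """Maximum round-trip error after mapping image -> screen -> image."""
    recovered = np.array([to_image(to_screen(p)) for p in points_image])
    return np.max(np.linalg.norm(recovered - points_image, axis=1))

# Hypothetical affine image->screen mapping (scale plus offset) and its inverse.
scale, offset = 0.5, np.array([256.0, 256.0, 0.0])
to_screen = lambda p: p * scale + offset
to_image = lambda q: (q - offset) / scale

grid = np.random.default_rng(0).uniform(0, 512, size=(100, 3))
assert round_trip_error(grid, to_screen, to_image) < 1e-9  # assumed tolerance
```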
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not specified. The document refers to "system and unit testing" but does not give details on the number of cases, images, or tests performed.
- Data Provenance: Not specified. Given it's internal "system and unit testing," it's likely internal simulated or lab data, but no details are provided. It does not mention retrospective or prospective studies.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified.
Missing Information: The document does not describe the involvement of external experts or the process of establishing ground truth for any test sets beyond internal software verification.
4. Adjudication Method for the Test Set
- Adjudication Method: Not specified.
Missing Information: There is no mention of an adjudication process, implying that ground truth was likely established through internal validation/verification methods as part of the software testing.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- MRMC Study: No, an MRMC comparative effectiveness study is not mentioned in the provided text. The document focuses on the device's technical functionality and substantial equivalence.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study was done
- Standalone Study: Yes, the described "system and unit testing" appears to be a form of standalone performance evaluation. The testing verifies the software's functionality, accuracy of transformations, and spatial accuracy of the digitizer independently. It's essentially an assessment of the algorithm and hardware's technical capabilities.
7. The Type of Ground Truth Used
- Type of Ground Truth: The document implies that ground truth was established through engineered or calculated accuracy standards for software and hardware components. For example, "spatial accuracy of the digitizer" and "accuracy of the transformations" suggest that known inputs were used to verify calculated outputs, rather than clinical ground truth like pathology or expert consensus on patient data.
8. The Sample Size for the Training Set
- Sample Size for Training Set: Not applicable/Not specified. The document does not describe a machine learning algorithm that requires a training set in the modern sense. It refers to a "graphical planning tool" and "application software" which are likely rule-based or algorithm-driven rather than learned models.
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: Not applicable/Not specified, as no training set for a machine learning model is mentioned.
(83 days)
XPlan-1 is a stereotactic LINAC-based radiation treatment planning system. XPlan-1 localizes lesions to be treated using CT scans, MR scans and digitized angiographic film. XPlan-1 provides a stereotactic planning system for treatment of tumors < 7 cm in diameter at sites such as the base of skull and head. The conformal stereotactic treatments are delivered over multiple fractions.
The XPlan-1 system has the same intended use and similar technological characteristics as the commercially available XKnife-3 Stereotactic Radiation Treatment Planning System. Like XKnife-3, the XPlan-1 system includes the same image acquisition, localizing, contouring and beam planning techniques. In addition, XPlan-1 and XKnife-3 both contain methods of QA verification for targeting setup and delivery. XPlan-1 includes modifications of the XKnife-3 system, such as the ability to use conformal collimation devices and a modification to the dose algorithm to account for the use of conformal collimation devices. XPlan-1 also supports the use of the Laser Angio Target Localizer (LATL) as an additional QA check to verify the orientation and position of the LINAC jaws and gantry, the collimator and couch rotation. Like the XKnife-3 use of the Rectilinear Phantom Pointer (RLPP) and Laser Target Localizer Frame (LTLF), XPlan-1 supports the use of the LATL to confirm system alignment at isocenter and to position the patient at isocenter.
XPlan-1 is also substantially equivalent to the commercially available Conformal Radiation Treatment Planning System, FOCUS. Like FOCUS, XPlan-1 supports the use of LINAC jaws, MLCs and wedges to modify the shape and attenuation of the beam. Both XPlan-1 and FOCUS also have the ability to model the total dose on the target volume and surrounding tissue via a 3D Beam's Eye View and 2D beam visualization, in addition to the dose-volume histogram.
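Since both systems provide a dose-volume histogram, a brief sketch of how a cumulative DVH is typically computed from a dose grid and a structure mask may be useful; the array names, grid size, and bin width below are illustrative assumptions rather than details of the XPlan-1 implementation.

```python
import numpy as np

def cumulative_dvh(dose_grid, structure_mask, bin_width=0.1):
    """Cumulative DVH: fraction of the structure receiving at least each dose level.

    dose_grid      : 3D array of dose values (e.g. Gy) on the planning grid.
    structure_mask : boolean array of the same shape selecting the structure voxels.
    Returns (dose_levels, volume_fraction).
    """
    doses = dose_grid[structure_mask]
    levels = np.arange(0.0, doses.max() + bin_width, bin_width)
    fraction = np.array([(doses >= d).mean() for d in levels])
    return levels, fraction

# Hypothetical example: a uniform 20 Gy dose inside a cubic target region.
dose = np.full((32, 32, 32), 20.0)
mask = np.zeros(dose.shape, dtype=bool)
mask[8:24, 8:24, 8:24] = True
levels, frac = cumulative_dvh(dose, mask, bin_width=1.0)
print(frac[0], frac[-1])  # 1.0 1.0 -- the whole target receives the full dose
```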
The provided document is a 510(k) premarket notification for the XPlan-1 Stereotactic Radiation Treatment Planning System. It primarily focuses on demonstrating substantial equivalence to predicate devices (RSA XKnife-3 System and CMS Focus System), rather than detailing specific acceptance criteria and the results of a primary study.
Therefore, many of the requested details about acceptance criteria, study design, sample sizes, expert involvement, and ground truth establishment are not explicitly present in the provided text. The document refers to "testing" that verifies functionalities, but does not provide the specifics of these tests as a formal study with statistical outcomes.
Here's an attempt to extract the available information and highlight what is missing based on your request:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Inferred from "Safety Summary") | Reported Device Performance (Inferred from "Safety Summary") |
|---|---|
| Correct aperture generation by beam shaping function | XPlan-1 system testing verifies that the beam shaping function generated the correct aperture. |
| Correct dose displays in Geometry Viewer and Dose Summary | [XPlan-1 system testing verifies] that the dose displays in the Geometry Viewer and Dose Summary are correct. |
| Accuracy of PTV generation | [XPlan-1 system testing verifies] that the PTV generation... [is] accurate. |
| Accuracy of Block and MLC generation | [XPlan-1 system testing verifies] that the... Block and MLC generation... [is] accurate. |
| Accuracy of DRR display | [XPlan-1 system testing verifies] that the DRR display... [is] accurate. |
| Accuracy of other new features | [XPlan-1 system testing verifies] that... other new features are accurate. |
| Functionality of QA verification for targeting setup and delivery | XPlan-1 and XKnife-3 both contain methods of QA verification for targeting setup and delivery. |
| Support for conformal collimation devices | XPlan-1 includes modifications of the XKnife-3 system, such as the ability to use conformal collimation devices. |
| Modification to dose algorithm for conformal collimation | XPlan-1 includes a modification to the dose algorithm to account for the use of conformal collimation devices. |
| Support for Laser Angio Target Localizer (LATL) as QA | XPlan-1 also supports the use of the Laser Angio Target Localizer (LATL) as an additional QA check to verify the orientation and position of the LINAC jaws and gantry, the collimator and couch rotation. |
| System alignment and patient positioning confirmation (via LATL) | Like the XKnife-3 use of the Rectilinear Phantom Pointer (RLPP) and Laser Target Localizer Frame (LTLF), XPlan-1 supports the use of the LATL to confirm system alignment at isocenter and to position the patient at isocenter. |
| Support for LINAC jaws, MLCs, and wedges to modify beam | Like FOCUS, XPlan-1 supports the use of LINAC jaws, MLCs and wedges to modify the shape and attenuation of the beam. |
| Ability to model total volume and surrounding tissue via 3D Beam's Eye View and 2D beam visualization | Both XPlan-1 and FOCUS also have the ability to model the total volume and surrounding tissue via a 3D Beam's Eye View and 2D beam visualization. |
| Inclusion of dose-volume histogram | Both systems, also addition to the dose-volume histogram. |
Note: The document uses phrases like "verifies that..." and "are accurate" to describe performance, without providing quantitative metrics or specific pass/fail criteria for what constitutes "correct" or "accurate" in a statistical sense.
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: Not specified. The document states "XPlan-1 system testing verifies," but does not provide details on the number of cases, patients, or data points used during this testing.
- Data Provenance: Not specified. It's unclear if the data used for testing was retrospective or prospective, or its country of origin.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: Not specified.
- Qualifications: Not specified.
4. Adjudication Method for the Test Set
- Adjudication Method: Not specified. There is no mention of expert review, consensus methods (e.g., 2+1, 3+1), or any other adjudication process.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- MRMC Study: No. This document describes a radiation treatment planning system, not an AI-assisted diagnostic or interpretation tool for human readers. Therefore, an MRMC study comparing human reader performance with and without AI assistance is not applicable to the outlined purpose of this device.
6. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Study Was Done
- Standalone Performance Study: The document refers to "XPlan-1 system testing" to verify various functions like beam shaping, dose display, and PTV/MLC generation. This implies a form of standalone testing of the algorithm's outputs. However, specific performance metrics (e.g., accuracy percentages, error margins) from such a standalone study are not provided. The "Safety Summary" paragraph could be interpreted as a high-level summary of a standalone performance evaluation, but it lacks the granular detail of a formal study report.
7. The Type of Ground Truth Used
- Type of Ground Truth: Not explicitly stated. For a treatment planning system, ground truth would typically involve comparisons against established physics models, phantom measurements, or expert-defined optimal plans. The document broadly mentions "accuracy" and "correctness," which suggests a comparison to some reference, but the nature of that reference (e.g., a gold standard derived from established physics, or expert consensus on ideal plan parameters for specific cases) is not detailed.
8. The Sample Size for the Training Set
- Sample Size for Training Set: Not applicable/Not specified. This device is described as a "Stereotactic Radiation Treatment Planning System" which uses algorithms to calculate and plan, rather than an AI/Machine Learning system that typically requires a distinct "training set." The system likely relies on physics-based models and established dosimetry principles, not data-driven learning from a training set in the modern sense.
9. How the Ground Truth for the Training Set was Established
- Ground Truth for Training Set: Not applicable/Not specified, for the same reasons as #8. If the system incorporates any machine learning, this information is not provided. Given the 1997 date of the document, it's highly unlikely that "training set" as understood in current AI contexts applies here.
(185 days)
The Optical Tracking System (OTS) is a graphical planning tool that allows for preoperative and operative planning of surgical procedures. The OTS is indicated for use in surgical procedures in which anatomical landmarks are not clearly visible or where a desired target is close to critical structures.
The Optical Tracking System (OTS), addressed in this premarket notification, has the same intended use and similar technological characteristics as the commercially available Radionics Operating Arm System and the StealthStation Stereotactic System. The OTS is a modification of the Operating Arm System but substitutes a camera array for the operating arm to track the position of probes and various tools. This is the same camera array utilized by StealthStation. The OTS provides an interactive, image-guided means of localizing targets in surgical procedures. In addition to the camera array, the OTS consists of a computer workstation, application software, and probes. Upon calibration of the system in the surgical environment, the surgeon selects points on the patient using a probe which are interpreted by the computer and related to corresponding points on the image.
Here's a breakdown of the acceptance criteria and study information for the Optical Tracking System (OTS), based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Accuracy: The system is correctly encoded for stereotactic CT and MR localizer transformation equations, and all target coordinate input and displays are accurate. | The camera array is accurate to a mean value of 0.3 mm. |
| Stability: Accuracy is maintained over a period of extended operation. | This accuracy is maintained over a period of extended operation. |
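The 0.3 mm mean accuracy reported above is the kind of figure produced by a phantom study: the positions reported by the tracking system are compared against the phantom's known target positions and the localization errors are averaged. A minimal sketch of that comparison follows; the arrays are hypothetical stand-ins, not data from the submission.

```python
import numpy as np

def localization_errors(measured_pts, known_pts):
    """Euclidean error of each measured target against its known phantom position."""
    return np.linalg.norm(measured_pts - known_pts, axis=1)

# Hypothetical phantom data (mm): known target positions plus simulated measurement noise.
rng = np.random.default_rng(1)
known = rng.uniform(-50, 50, size=(20, 3))
measured = known + rng.normal(scale=0.2, size=known.shape)
errors = localization_errors(measured, known)
print(f"mean error {errors.mean():.2f} mm, max {errors.max():.2f} mm")
```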
2. Sample Size Used for the Test Set and Data Provenance
The text explicitly states: "The results from the OTS are compared to a phantom with targets of known position."
- Sample Size for Test Set: The text does not specify a numerical sample size for the targets on the phantom. It only mentions "targets of known position."
- Data Provenance: The data is generated through testing with a phantom (a physical model with known properties), not clinical patient data. Therefore, it is a simulated/laboratory setting, not retrospective or prospective clinical data from a specific country.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: Not applicable.
- Qualifications of Experts: Not applicable.
The ground truth for this device's performance test is established by the known, pre-defined positions of targets on a phantom. Human experts are not involved in establishing this ground truth.
4. Adjudication Method for the Test Set
- Adjudication Method: Not applicable.
Since the ground truth is established by the known positions on a phantom, there is no need for human adjudication of the test results. The device's measurements are directly compared to these known values.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- MRMC Comparative Effectiveness Study: No, an MRMC comparative effectiveness study was not done.
This submission is for an image-guided surgical planning tool, not an AI diagnostic tool. The performance evaluation focuses on the system's accuracy in tracking and localization, not on changes in human reader performance with or without AI assistance.
6. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Study Was Done
- Standalone Performance: Yes, a standalone performance evaluation was done.
The reported accuracy of "0.3 mm" for the camera array appears to be a standalone measurement of the system's ability to track positions against a known phantom. While the system is intended for use with a surgeon ("human-in-the-loop"), the verification of its core accuracy is performed independently of a surgeon's interpretation or action, by comparing its output to predefined phantom targets. The statement "RSA Optical Tracking System (OTS) systems testing verifies that the stereotactic CT and MR localizer transformation equations are correctly encoded into the application software. Further, it verifies that all target coordinate input and displays are accurate. The results from the OTS are compared to a phantom with targets of known position. The camera array is accurate to a mean value of 0.3 mm. This accuracy is maintained over a period of extended operation" describes a standalone evaluation of the device's technical specifications.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: The ground truth used is based on known, pre-defined positions of targets on a phantom. This is a form of physical/engineering ground truth.
8. The Sample Size for the Training Set
- Sample Size for Training Set: The document does not provide information about a training set. This device is described as an "Optical Tracking System," not a machine learning or AI algorithm that would typically require a training set in the modern sense. Its functionality is based on established principles of optical tracking and geometry.
9. How the Ground Truth for the Training Set Was Established
- How Ground Truth for Training Set Was Established: Not applicable, as no training set is mentioned or implied for this type of device in the provided text.
(163 days)
OAS is a surgical planning system which enables the surgeon to plan a surgical procedure so as to optimize his approach. The OAS provides the surgeon with visual information to make assessments during a surgical procedure. OAS is indicated for the operative planning of surgical procedures, for example: tumor resection, ventricular shunt placement, craniotomy, and ENT intranasal ethmoidectomy.
The Operating Arm System, addressed in this premarket notification, has the same intended use and similar technological characteristics as the commercially available Radionics Operating Arm System (with CT images) and the ISG Viewing Wand System. New features described in this submission include the additional support of MR images and Radionics adhesive fiducial markers. The OAS provides an interactive, image-guided means of localizing targets in surgical procedures. It consists of an articulated Operating Arm, a computer workstation, application software, probes, and a Mayfield Attachment. The Operating Arm consists of a series of articulating joints and connecting links combined with an instrument holder. It is a five-jointed, five-degree-of-freedom pointing device used for position (x, y, z) and angle measurement. Upon calibration of the instrument in the surgical environment, the surgeon manipulates the Arm to select points in surgical space which are interpreted by the computer and related to corresponding points in image space. Similarly, the ISG Viewing Wand System consists of an articulated Arm, a computer workstation, application software, probes, and a Mayfield Attachment. It is a six-jointed, six-degree-of-freedom electrogoniometer used for position (x, y, z) and angle measurement.
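An articulated arm of this kind reports its tip position by chaining one rigid transform per joint (forward kinematics). The sketch below composes homogeneous transforms for a planar toy version of such a chain; the joint model, link lengths, and angles are invented for illustration and are not Radionics specifications.

```python
import numpy as np

def joint_transform(theta, link_length):
    """Transform for one revolute joint: rotate about z by theta, then
    translate along the rotated x axis by the link length (planar toy model)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, link_length * c],
                     [s,  c, 0, link_length * s],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def tip_position(joint_angles, link_lengths):
    """Chain the per-joint transforms and return the tip (x, y, z)."""
    T = np.eye(4)
    for theta, length in zip(joint_angles, link_lengths):
        T = T @ joint_transform(theta, length)
    return T[:3, 3]

# Hypothetical five-joint configuration (radians) and link lengths (mm).
print(tip_position([0.1, -0.2, 0.3, 0.0, 0.25], [120, 100, 100, 80, 60]))
```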
The provided text describes the Radionics Operating Arm System (OAS), a surgical planning and guidance system.
Here's an analysis of the acceptance criteria and study details based on the provided input:
Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Stereotactic CT/MR localizer transformation equations correctly encoded and accurate | System testing verifies correct encoding and accuracy of transformation equations. |
| All target coordinate input and displays accurate | System testing verifies accuracy of target coordinate input and displays. |
| Mechanical Arm accuracy for sampling points over clinically significant surgical volume | Mean accuracy: 0.36 ± 0.28 mm (mean ± std dev). |
| Accuracy maintained over extended operation | Accuracy is maintained over a period of extended operation. |
Note: The document does not explicitly state "acceptance criteria" but rather describes the testing performed and the results that presumably demonstrate the device meets the manufacturer's performance specifications for safety and effectiveness.
Study Details
- Sample size used for the test set and the data provenance:
- Test Set Sample Size: Not explicitly stated as a number of patient cases or images, but the testing was done on a phantom. The phantom used had "targets of known position."
- Data Provenance: The test data (phantom) is not from a specific country or patient cohort; it is a controlled test artifact. The study is a prospective engineering validation study using a phantom.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable. The ground truth for the test set was established by the known positions of targets within the physical phantom. No human experts were involved in establishing this particular ground truth.
- Adjudication method for the test set:
- Not applicable. The "known positions" in the phantom serve as the objective ground truth, not requiring human adjudication.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC comparative effectiveness study was done or reported in this document. This device is a surgical planning and guidance system, not an AI-assisted diagnostic tool that aids human readers in interpretation.
- If a standalone (i.e., algorithm only without human-in-the-loop performance) study was done:
- While the system has a human-in-the-loop for surgical procedures, the accuracy testing described for the mechanical arm and software transformation was a standalone algorithm/device performance test using a phantom. The reported accuracy of 0.36 ± 0.28 mm is a measurement of the system's precision against a known standard, not involving a human surgeon's interaction as part of that specific measurement.
- The type of ground truth used:
- Known physical measurements (phantom targets): The ground truth for the primary accuracy claim (0.36 ± 0.28 mm) was the precisely "known positions" of targets within a phantom.
- The sample size for the training set:
- The document does not describe a training set in the context of machine learning or AI model development. The system functions based on predefined transformation equations and an articulated mechanical arm, not a trained AI model. Therefore, a "training set" as understood in AI is not applicable here.
- How the ground truth for the training set was established:
- Not applicable, as there is no described "training set" in the context of an AI model that requires establishing ground truth. The system's underlying mathematical models and engineering design are based on principles of physics, geometry, and mechanical engineering.
(79 days)
The Dental Tray holds an impression of the upper teeth, and is part of the GTC Relocatable Head Holder in stereotactic collimated beam, computer planned, LINAC (linear accelerator) based treatment. The Dental Tray holds the dental impression. The dental impression allows repeat positioning of the stereotactic frame at the same location, e.g., for multiple lower dose fractionated stereotactic radiotherapy treatments.
The Dental Tray holds an impression of the upper teeth as part of the predicate system. This impression allows repeat positioning of the stereotactic frame at the same location, e.g., for multiple lower dose fractionated stereotactic radiotherapy treatments. The material of the GTC Dental Tray has been modified from that described in (K934523). The GTC Dental Tray addressed in this 510(k) PMN has the same intended use, design, and technological characteristics as the Dental Tray included in the commercially available GTC Relocatable Head Holder System (K934523).
The provided text describes a "Dental Tray for use with the Gill-Thomas-Cosman Relocatable Head Holder," a Class II device. However, the text does not contain information about acceptance criteria or a study proving the device meets acceptance criteria. The document is a "Summary of Safety and Effectiveness" which focuses on general information, intended use, and the basis for substantial equivalence to a predicate device.
Specifically, the text:
- Identifies the device as a Gill-Thomas-Cosman (GTC) Dental Tray.
- States its intended use as holding an impression of upper teeth for repeat positioning of a stereotactic frame in LINAC-based stereotactic radiotherapy.
- Mentions a predicate device (K934523) and claims substantial equivalence based on the same intended use, design, and technological characteristics, despite a material modification.
There is no information in the provided excerpts related to:
- A table of acceptance criteria and reported device performance.
- Sample size, data provenance, or details of a test set.
- Number or qualifications of experts for ground truth establishment.
- Adjudication method.
- Multi Reader Multi Case (MRMC) comparative effectiveness studies.
- Standalone (algorithm-only) performance.
- Type of ground truth used (expert consensus, pathology, outcomes data, etc.).
- Sample size for the training set.
- How ground truth for the training set was established.
Therefore, based on the provided text, I cannot answer your request, as the information regarding acceptance criteria, study details, and performance metrics is not present. The document focuses on the general description, intended use, and substantial equivalence claim for a dental tray component rather than detailed performance study results.
(103 days)
Image processing and comparing an MR and a CT image set or two different CT image sets.
The ImageFusion system, addressed in this premarket notification, has the same intended use and technological characteristics as the commercially available StereoPlan system. Like StereoPlan, the ImageFusion system includes an image processing work station used to evaluate, manipulate, and compare MR and CT image data. In addition, ImageFusion software can reconstruct (fuse) nonstereotactic MR images into the image space of a reference CT stereotactic image set for subsequent stereotactic use, eliminating the need for the localizing hardware required in StereoPlan to define stereotactic locations in MR images. Subsequently, fused images can be used in the treatment planning for stereotactic neurosurgery, radiosurgery and radiotherapy procedures in the same way that supplementary stereotactic MR or CT images are utilized in StereoPlan.
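The fusion step described above amounts to applying a computed MR-to-CT transform and resampling the MR volume onto the CT grid. A minimal nearest-neighbour resampling sketch follows, assuming the rigid transform has already been estimated (for example, from matched landmarks); the function name, index convention, and grid sizes are illustrative assumptions only.

```python
import numpy as np

def resample_mr_to_ct(mr_volume, ct_shape, ct_to_mr):
    """Resample an MR volume onto a CT grid by nearest-neighbour lookup.

    ct_to_mr : 4x4 homogeneous transform taking CT voxel indices (z, y, x)
               to MR voxel indices -- the inverse of the MR->CT registration.
    """
    zi, yi, xi = np.indices(ct_shape)
    ct_idx = np.stack([zi.ravel(), yi.ravel(), xi.ravel(), np.ones(zi.size)])
    mr_idx = np.rint(ct_to_mr @ ct_idx)[:3].astype(int)
    inside = np.all((mr_idx >= 0) & (mr_idx.T < mr_volume.shape).T, axis=0)
    fused = np.zeros(int(np.prod(ct_shape)), dtype=mr_volume.dtype)
    fused[inside] = mr_volume[tuple(mr_idx[:, inside])]
    return fused.reshape(ct_shape)

# Hypothetical check: an identity transform copies the MR volume onto a same-sized CT grid.
mr = np.arange(27.0).reshape(3, 3, 3)
assert np.array_equal(resample_mr_to_ct(mr, (3, 3, 3), np.eye(4)), mr)
```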
This document is a Summary of Safety and Effectiveness for a medical device called ImageFusion. It's a premarket notification (K960071) from 1996, which is quite old. As such, the level of detail regarding study design, ground truth establishment, and contemporary AI/ML evaluation metrics (like specific sensitivities, specificities, AUC) is not present. The language reflects the regulatory expectations of that era.
Here's an attempt to extract and infer the requested information based on the provided text, noting where information is explicitly stated, implied, or absent.
Acceptance Criteria and Device Performance
| Criteria | Reported Device Performance |
|---|---|
| Registration Accuracy (MR to CT space) | Average: 1.5 ± 0.6 mm for individual landmarks; maximum: 2.5 mm for individual landmarks |
| Bone Segmentation Accuracy | - Verified as "accurate" during system and unit testing. (No specific numerical metric provided) |
| Landmark Alignment Accuracy | - Verified as "accurate" during system and unit testing. (No specific numerical metric provided) |
Note: The document specifies "registration of MR images in stereotactic CT space is accurate" and provides numerical values for this accuracy. For "bone segmentation and landmark alignment," it only states that these features are "accurate" based on system and unit testing, without providing quantitative metrics or specific acceptance criteria.
Study Information
2. Sample size used for the test set and the data provenance:
- Test Set Sample Size: Not explicitly stated. The text mentions "system testing" and "unit testing" but does not specify the number of cases or landmarks used in these tests.
- Data Provenance: Not specified. It's an older submission, and such details (country of origin, retrospective/prospective nature) were often not explicitly required or documented in this section.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not specified. The document does not describe the involvement of human experts in establishing ground truth for the "system testing" mentioned. It's possible the accuracy was determined by comparing the device's output to a known phantom or a previously established manual registration, but this is not detailed.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not specified. The description of testing is very high-level and does not include details on adjudication methods.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was not done. The device (ImageFusion) is described as an "image processing and comparing" system for MR and CT images used in treatment planning for stereotactic neurosurgery, radiosurgery, and radiotherapy. The focus here is on the accuracy of the image fusion itself, not on aiding human readers in diagnosis or interpretation compared to a baseline. The device's role is to merge images for subsequent use, not to make or assist in making a diagnostic interpretation typically associated with MRMC studies in AI.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) study was done:
- Yes, a standalone performance assessment was done. The described "system testing" verifies the registration accuracy of MR images into CT space (1.5 ± 0.6 mm on average, 2.5 mm maximum) and the accuracy of features like bone segmentation and landmark alignment. This assessment focuses purely on the algorithm's output (the fused accuracy) without explicitly describing a human-in-the-loop component for this specific accuracy verification. The device is used by humans, but the performance metrics provided are for the algorithmic output.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not explicitly stated, but likely based on precise measurements or phantoms. Given the nature of "registration accuracy" for "individual landmarks," the ground truth was most likely established through highly precise physical phantoms (where landmark positions are known) or possibly through meticulous manual registration performed by an expert against a known reference, followed by quantitative measurement of the difference. Pathology or outcomes data are not relevant for assessing image registration accuracy.
8. The sample size for the training set:
- Not applicable / Not specified. The document describes a traditional software system, not a machine learning or AI algorithm in the modern sense that typically requires a distinct "training set." While software development involves testing and iterative refinement, the concept of a separate "training set" for model learning is absent from this type of regulatory submission from 1996. The device's functionality is based on deterministic algorithms for image processing and registration.
9. How the ground truth for the training set was established:
- Not applicable. As a traditional software system, there isn't a "training set" in the machine learning context. Therefore, the establishment of ground truth for such a set is not relevant.