Search Results
Found 12 results
510(k) Data Aggregation
(58 days)
ClearPoint Neuro, Inc.
The ClearPoint System is intended to provide stereotactic guidance for the placement and operation of instruments or devices during planning and operation of neurological procedures within an operating room environment and in conjunction with MR and/or CT imaging. During planning, the system is intended to provide functionality for the automatic identification, labeling, visualization of segmentable brain structures from a set of loaded MR images. The ClearPoint System is intended as an integral part of procedures that have traditionally used stereotactic methodology. These procedures include biopsies, catheter and electrode insertion including deep brain stimulation (DBS) (asleep or awake) lead placement. When used in an MRI environment, the system is intended for use only with 1.5 and 3.0 Tesla MRI scanners and MR Conditional implants and devices.
The updated ClearPoint Software Version 3.0 introduces modifications to support a new clinical workflow using intraoperative CT imaging when compared to the previous ClearPoint Software Version 2.2 (K233243). The ClearPoint System described in this submission is essentially identical from a technological standpoint to the cleared predicate device described in K233243 (ClearPoint System version 2.2). As mentioned above, since the prior clearance, the company has implemented software features to enable usage of the ClearPoint System during CT-guided procedures, in addition to MR-guided procedures supported in the predicate device. The hardware components are unchanged from the device described in K233243 and minor changes were made to the indications for use.
The ClearPoint System is comprised of a workstation laptop with software, the SMARTGrid Planning Grid, the SMARTFrame Trajectory Frame, the SMARTFrame Accessory Kit and the SMARTFrame Thumbwheel Extension. The SMARTGrid and associated Marking Tool are designed to assist the physician to precisely position the entry hole as called out in the trajectory planning software. The SMARTFrame is an Adjustable Trajectory Frame (ATF) that provides the guidance and fixation for neurosurgical tools. The image-visible fluids of the Targeting Cannula along with the fiducial markers in the base of the frame allow for trajectory feedback when the physician views the intraoperatively acquired images, makes changes and confirms with subsequent image acquisitions. Optionally, the ClearPoint System can be used with any head fixation frame to immobilize the patient's head with respect to the scanner table. ClearPoint Neuro also supplies an optional head fixation frame that can be used with the ClearPoint System. The ClearPoint Workstation includes the ClearPoint Workstation Software (for trajectory planning and monitoring) and a Laptop Computer. The hardware components of the current ClearPoint System are the SMARTFrame and Accessories. They are all single use devices that are provided sterile and include the SMARTGrid Planning Grid (Marking Grid, Marking Tool), SMARTFrame Pack (SMARTFrame or SMARTFrame XG, Centering Device and Wharen Centering Guide, Dock, Device Lock, Screwdriver, Roll Lock Screw and Washer), Rescue Screws (Extra Titanium Screws), Thumbwheel Extension, Accessory Kit (Peel-away Sheath, Stylet, Lancet, Depth Stop, Ruler), Scalp Mount Base, and Guide Tubes and Device Guide Packs (Guide Cannulas). In addition, the ClearPoint System is used with the separately cleared or Class I, 510(k) exempt products: SmartTip MRI Hand Drill and Drill Bit Kit, MRI Neuro Procedure Drape, with Marker Pen and Cover, and SmartFrame Fiducial.
The provided document (K243657) is a 510(k) Premarket Notification for the ClearPoint System (Software Version 3.0), which is a stereotaxic instrument. The document primarily focuses on demonstrating substantial equivalence to predicate devices and detailing the non-clinical testing performed.
Based on the provided text, here's a description of the acceptance criteria and the study that proves the device meets the acceptance criteria, addressing each point as much as possible:
1. A table of acceptance criteria and the reported device performance
The document provides accuracy specifications in tables:
Table 1: ClearPoint System Accuracy Specifications - MRI Guidance (Unchanged from predicate)
| ClearPoint System | Mean | Std. Dev. | 99% CI |
|---|---|---|---|
| Positional Error X (mm) | 0.14 | 0.37 | 0.44 |
| Positional Error Y (mm) | 0.16 | 0.54 | 0.60 |
| Positional Error Z (mm) | 0.56 | 0.57 | 0.10 |
| Angular Error (deg.) | 0.32° | 0.17° | 0.46° |
Note: The table layout in the original document for MRI accuracy is a bit unusual with duplicated rows for positional error, and it's not explicitly labelled as "acceptance criteria." However, it presents the validated performance.
Table 2: ClearPoint System Accuracy Specifications - CT Guidance (New for v3.0)
| Axis / Rotation | Precision ME | Accuracy RMS | Accuracy Max |
|---|---|---|---|
| X (mm) | 0.1 | 0.17 | 0.3 |
| Y (mm) | 0.1 | 0.17 | 0.3 |
| Z (mm) | 0.1 | 0.17 | 0.3 |
| Roll (deg.) | 0.1° | 0.17° | 0.3° |
| Pitch (deg.) | 0.1° | 0.17° | 0.3° |
| Yaw (deg.) | 0.1° | 0.17° | 0.3° |
| Metric | Mean | Standard Deviation | 99% CI Upper Bound |
|---|---|---|---|
| Positional Error (mm) | 0.81 | 0.49 | 0.93 |
| Trajectory Angle Error (degrees) | 0.31 | 0.23 | 0.37 |
Explicit Acceptance Criteria (from "Targeting Accuracy" row in Table 3 comparison):
- Targeting Accuracy: ± 1.5 mm @ ≤125mm (This appears to be the primary specified acceptance criterion for overall targeting accuracy, presumably applying across both MRI and CT guidance given its placement in the general comparison table).
Reported Device Performance:
- MRI Guidance: Positional Error (99% CI) 0.44 mm, 0.60 mm, 0.10 mm. Angular Error (99% CI) 0.46°. These values are well within the ± 1.5 mm overall targeting accuracy.
- CT Guidance: Positional Error (99% CI Upper Bound) 0.93 mm. Trajectory Angle Error (99% CI Upper Bound) 0.37°. These values are also well within the ± 1.5 mm overall targeting accuracy.
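As a hedged sanity check on the scale of these numbers (our arithmetic, not a calculation stated in the submission): a 1.5 mm lateral offset at the maximum specified depth of 125 mm corresponds to roughly 0.7° of trajectory angle, the same order of magnitude as the reported angular errors.

```python
import math

# Back-of-the-envelope conversion of the +/-1.5 mm @ <=125 mm targeting spec into an angle.
# The spec values are quoted above; the conversion itself is our illustration.
depth_mm = 125.0
lateral_error_mm = 1.5
angle_deg = math.degrees(math.atan2(lateral_error_mm, depth_mm))
print(f"{lateral_error_mm} mm offset at {depth_mm} mm depth ~= {angle_deg:.2f} deg")  # ~0.69 deg
```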
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document states:
- "Accuracy testing was performed using an MRI scanner to confirm that modifications included in the ClearPoint System 3.0 did not cause any unexpected changes in the accuracy specifications of the software, with successful results."
- "Additionally, accuracy testing was performed in a CT scanner to validate the CT-guided clinical workflow that is new to the ClearPoint 3.0 software and establish new ground-truth accuracy specifications."
However, the document does not specify the sample size for either the MRI or CT accuracy test sets.
The data provenance is also not specified regarding country of origin or whether it was retrospective or prospective. Given the nature of accuracy testing for a stereotaxic device, these are typically phantom-based, prospective tests conducted in a controlled lab or clinical environment, rather than patient data studies.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not mention human experts for establishing ground truth for the accuracy tests. The accuracy testing described appears to be technical validation against a known physical ground truth (e.g., phantom measurements), as is common for stereotaxic instrument validation. Therefore, expert consensus on images is not relevant for this type of accuracy assessment.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable, as the accuracy testing described is a technical validation against a physical ground truth, not a study evaluating human interpretation or a scenario requiring adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance
The document does not mention any MRMC comparative effectiveness study or any evaluation of human readers (even though the device has "automatic identification, labeling, visualization" of structures). The testing detailed is primarily focused on the system's technical accuracy in guidance, not on AI assistance for human image interpretation.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, the accuracy testing described in Section 6, "Non-Clinical Testing," and detailed in Tables 1 and 2, represents standalone (algorithm only) performance testing against a technical ground truth. It evaluates the system's precision and accuracy in positional and angular measurements.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The ground truth used for the accuracy tests appears to be physical measurements from a phantom or test setup, given the context of "Positional Error" and "Angular Error" in millimeters and degrees. The document refers to "establish new ground-truth accuracy specifications" in relation to the CT testing, implying a precise, measurable standard. This is typical for the technical validation of stereotaxic guidance systems.
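To make these metrics concrete, below is a minimal sketch of how positional and angular errors could be computed from phantom measurements and summarized with a one-sided 99% upper bound. The coordinate values, sample size, and the normal-theory t bound are illustrative assumptions; the submission does not state how its statistics were actually calculated.

```python
import numpy as np
from scipy import stats

# Hypothetical phantom data: planned vs. measured target positions (mm) and trajectory directions.
planned_pts = np.array([[10.0, 20.0, 30.0], [12.0, 18.0, 29.0], [11.0, 21.0, 31.0], [9.5, 19.0, 30.5]])
measured_pts = np.array([[10.2, 19.8, 30.1], [12.1, 18.3, 28.8], [10.9, 21.2, 31.2], [9.8, 19.1, 30.2]])
planned_dirs = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0], [0.0, 0.1, 1.0], [0.05, 0.05, 1.0]])
achieved_dirs = np.array([[0.01, 0.0, 1.0], [0.11, 0.01, 1.0], [0.0, 0.09, 1.0], [0.05, 0.06, 1.0]])

# Positional error: Euclidean distance between planned and measured target.
pos_err = np.linalg.norm(measured_pts - planned_pts, axis=1)

# Angular error: angle between planned and achieved trajectory unit vectors.
def unit(v):
    return v / np.linalg.norm(v, axis=1, keepdims=True)

cos_ang = np.clip(np.sum(unit(planned_dirs) * unit(achieved_dirs), axis=1), -1.0, 1.0)
ang_err = np.degrees(np.arccos(cos_ang))

# Summary statistics with a one-sided 99% upper confidence bound on the mean (t distribution).
def summarize(err, label, unit_name):
    n, mean, sd = len(err), err.mean(), err.std(ddof=1)
    upper99 = mean + stats.t.ppf(0.99, n - 1) * sd / np.sqrt(n)
    print(f"{label}: mean={mean:.2f} {unit_name}, sd={sd:.2f}, 99% upper bound={upper99:.2f}")
    return upper99

pos_upper = summarize(pos_err, "positional error", "mm")
summarize(ang_err, "angular error", "deg")
print("within +/-1.5 mm spec:", pos_upper <= 1.5)
```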
8. The sample size for the training set
The document does not specify a sample size for a "training set." The ClearPoint System 3.0 software introduces features like "automatic identification, labeling, visualization, and quantification of segmentable brain structures" and "Algorithms to automatically locate and identify marking grid, targeting frame components, cannula, and device tip from both MR and CT image sets." While these imply the use of machine learning or advanced algorithms that would require training data, the submission focuses on the validation of these features' accuracy, not on the details of their development (including training data specifics).
9. How the ground truth for the training set was established
Since the document does not discuss a training set, there is no information provided on how its ground truth was established. For the algorithms processing anatomical structures or hardware components, the ground truth for training data would typically involve manually annotated medical images by qualified personnel (e.g., radiologists, neurosurgeons, or trained annotators under expert supervision).
(175 days)
ClearPoint Neuro, Inc.
(161 days)
ClearPoint Neuro Inc.
The Bone Anchor is intended to be used with commercially available stereotactic systems for intracranial and neurosurgical procedures which require accurate positioning of compatible small surgical instruments or accessories in the cranium, brain, or nervous systems. It is designed to provide short-term fixation and positioning of compatible neurosurgical instruments or accessories under image-guidance.
The Bone Anchor is a single-use device for temporary short-term fixation of surgical tools or accessories during a single surgical procedure. The Bone Anchor may be used during image-guided or MR interventional procedures. This device is not intended to create a seal or barrier from potential infection and has not been tested for long-term (>24 hours) use. The Bone Anchor components are listed below along with their function:
- Bone Anchor (Skull Device) - A device that is inserted into the skull and used to hold an instrument or accessory in place after it is inserted to the desired position. Part of the Skull Fixation Device protrudes from the skull.
- Bone Anchor Driver (Driver) - An optional device used to position the Skull Fixation Device into the skull.
- Bone Anchor Silicone Seal - A device within the Bone Anchor which is compressed against the inserted instrument or accessory when the Cap is tightened, thereby holding the instrument or accessory in place.
- Bone Anchor Cap - A device which connects to the Bone Anchor and, when tightened, compresses the Silicone Seal against the inserted instrument or accessory.
- Bone Anchor Cover - An optional device which is placed over the Cap when inserting a flexible instrument or other accessory and can be locked into place on the Cap to secure the accessory.
The provided text is a 510(k) summary for the ClearPoint Neuro Bone Anchor, which focuses on demonstrating substantial equivalence to predicate devices rather than presenting a standalone study with specific acceptance criteria and detailed performance metrics. Therefore, many of the requested sections (e.g., sample size for test set, number of experts, adjudication method, MRMC study, ground truth for training set) are not available in this document.
However, based on the information provided, here's what can be extracted:
1. Table of Acceptance Criteria and Reported Device Performance
The document describes performance testing for the Bone Anchor and mentions testing for the predicate and reference devices. However, it does not explicitly state specific acceptance criteria or provide quantitative performance results (e.g., exact load values, push force values, or targeting error measurements) for the Bone Anchor, noting only that the device "meets all test specifications" in demonstrating substantial equivalence.
Acceptance Criteria Category | Reported Device Performance |
---|---|
Load Testing | Not explicitly stated (Met test specifications) |
Push Force Testing | Not explicitly stated (Met test specifications) |
Insertion Force Testing | Not explicitly stated (Met test specifications) |
Retention Force Testing | Not explicitly stated (Met test specifications) |
Removal Force Testing | Not explicitly stated (Met test specifications) |
Driver Pull Force Testing | Not explicitly stated (Met test specifications) |
Note: The document states that the subject device "meets all test specifications" and performs "as well as the predicate device" but does not provide the numerical values for these specifications or the device's performance against them. Similarly, for the predicate Microtable, it lists "Dimensional Stability" and "Load Testing" but no specifics. For the reference device, it lists the same tests as the subject device.
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify the sample size used for the described performance testing. The provenance of the data (e.g., country of origin, retrospective or prospective) is also not mentioned.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
Not applicable. The document describes engineering and mechanical performance testing, not a study involving human experts establishing ground truth for clinical data.
4. Adjudication Method for the Test Set
Not applicable. This is not a clinical study involving human assessment of data.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, an MRMC comparative effectiveness study was not done. The document focuses on bench testing and comparison to predicate devices, not on human reader performance with or without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Not applicable. This device is a mechanical medical instrument, not an AI algorithm.
7. The Type of Ground Truth Used
For the performance testing mentioned (Load, Push Force, Insertion Force, Retention Force, Removal Force, Driver Pull Force Testing), the "ground truth" would be established by the engineering and mechanical testing standards and methodologies used to evaluate the device's physical properties and functionality. It is not based on expert consensus, pathology, or outcomes data in a clinical sense.
8. The Sample Size for the Training Set
Not applicable. This device is a mechanical instrument and does not involve AI/ML requiring a training set.
9. How the Ground Truth for the Training Set Was Established
Not applicable. As noted above, there is no training set for this type of device.
(107 days)
ClearPoint Neuro Inc.
The SmartFrame OR is intended to provide stereotactic guidance for the placement and operation of instruments or devices during planning and operation of neurological procedures performed in conjunction with the use of a compatible optical stereotaxic navigation system using preoperative MR and/or CT imaging. These procedures include biopsies, catheter placement and electrode introduction.
The SmartFrame OR is a hardware-only precision trajectory aiming device for procedures conducted entirely in the traditional operating room and in conjunction with commercially available optical navigation systems such as Medtronic Stealth Station S8, or functionally equivalent optical navigation systems.
The SmartFrame OR Stereotactic System tower consists of three assemblies. The reference bracket arm attaches to the skull mount and the trajectory aiming tower is attached to the mount once it is affixed to the patient skull. The ring assembly of the base is attached to the patient's skull. The socket assembly fits over the two retaining blocks on the ring assembly and is secured with the tower thumbscrews.
The SmartFrame OR Kit consists of the following Components:
- 1x SmartFrame OR Tower
- 1x Device Guide, 2.1mm
- 1x Centering Ring
- 1x Dock
- 1x Lock
- 1x Lock (2.1mm)
- 1x SNS Thumb Wheel Extension
- 1x Thumb Screw Pack
The plastic tower assembly is designed to provide multi-directional orientation adjustments to the Device Guide, which is a guide tube encased in the center of the tower assembly. The Device Guide has a 14-gauge central lumen through which a peel away sheath can be placed and oriented.
The tower, when attached to the base assembly, provides adjustments in the roll, pitch, x, and y directions by turning the appropriate thumb wheels. The thumb wheels on the SMARTFrame are used to maneuver the Device Guide by either direct or indirect physician contact.
Acceptance Criteria & Device Performance:
The primary acceptance criterion for the SmartFrame OR device appears to be its targeting accuracy, both in terms of positional error and trajectory angle error, as compared to the predicate device, Medtronic NexFrame.
Here's a table summarizing the reported device performance against implicitly accepted criteria derived from the predicate device's performance:
Metric | Acceptance Criteria (based on Predicate - Medtronic NexFrame) | Reported Device Performance (SmartFrame OR) |
---|---|---|
Positional Error (Mean) | ≤ 1.48 mm | 1.36 mm |
Positional Error (99% CI Upper Bound) | Not explicitly stated for predicate, but SmartFrame OR's value (1.57 mm) needs to be acceptable | 1.57 mm |
Trajectory Angle Error (Mean) | Not explicitly stated for predicate | 0.67 degrees |
Trajectory Angle Error (99% CI Upper Bound) | Not explicitly stated for predicate | 0.92 degrees |
Study Details:
1. Table of Acceptance Criteria and Reported Device Performance: This has been provided above. The implied acceptance criterion for positional error is that the SmartFrame OR's performance should be at least as good as, if not better than, the predicate device (Medtronic NexFrame). The document states that the NexFrame's targeting accuracy is ±1.48 mm. The SmartFrame OR's mean positional error of 1.36 mm and 99% CI upper bound of 1.57 mm (after adding a 0.25mm measurement tool differential) suggests it performs comparably or better, meeting the implicit acceptance criterion. For trajectory angle error, no direct predicate comparison is given in the table, but the reported values suggest the device performs within acceptable limits for its intended use.

2. Sample Size Used for the Test Set and Data Provenance: The document does not explicitly state the sample size used for the benchtop accuracy testing. The provenance of the data is bench testing conducted by ClearPoint Neuro, Inc., the device manufacturer. It is a controlled, prospective study in a laboratory setting, not real-world patient data.

3. Number of Experts Used to Establish Ground Truth and Qualifications: The document does not mention the involvement of experts in establishing ground truth for the bench testing. Ground truth for the accuracy testing would have been established by the precise design specifications of the device and the accuracy of the measurement tools used in the benchtop environment, not through expert consensus on medical images.

4. Adjudication Method for the Test Set: Not applicable. This was a benchtop accuracy test, not a subjective assessment of medical images requiring adjudication.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study: No, an MRMC comparative effectiveness study was not done. This device is a stereotaxic instrument, a hardware-only guidance system, not an AI or imaging diagnostic tool that would typically involve human readers. Therefore, there is no mention of how much human readers improve with AI assistance.

6. Standalone Performance: Yes, the accuracy testing described is a standalone (algorithm/device only) performance evaluation. It measures the physical accuracy of the SmartFrame OR system itself in a controlled environment, without human intervention in the accuracy measurement process.

7. Type of Ground Truth Used: The ground truth for the benchtop accuracy testing was derived from the physical specifications and measurements of the device's components and its ability to guide instruments to a precisely defined target. This is not expert consensus, pathology, or outcomes data; it is a calculated and measured ground truth based on engineering principles.

8. Sample Size for the Training Set: The document does not mention a training set. This device is a hardware instrument, not a software algorithm or AI model that requires a training set for development.

9. How Ground Truth for the Training Set Was Established: Not applicable, as there was no training set.
(107 days)
ClearPoint Neuro Inc.
The ClearPoint Bone Screw Fiducials are intended to provide fixed reference point(s) in patients requiring stereotactic surgery in conjunction with CT imaging.
The ClearPoint Bone Screw Fiducials are titanium bone screw fiducials that provide fixed reference points during neurosurgical procedures.
The ClearPoint Bone Screw Fiducial Kit is provided sterile and is composed of the following:
- 5x ClearPoint Bone Screw Fiducials
- 1x Bone Screw Fiducial holder
- 1x Screwdriver
The Bone Screw Fiducial is made of titanium and comprised of a threaded portion that is screwed into a patient's skull and a non-threaded, exposed length that protrudes from the patient's head. The head contains a feature for compatibility with a T8 Torx screwdriver and small divot positioned at the center of the spherical head which is used to center the pointer tip of a navigation tool.
The Bone Screw Holder is intended to assist in holding the Bone Screw Fiducials securely to the screwdriver during anchoring. This is accomplished by allowing the screwdriver to mate with the Bone Screw Fiducial without the Bone Screw Fiducial separating from the assembly therefore allowing it to be anchored to the skull.
The provided document is a 510(k) premarket notification for the ClearPoint Bone Screw Fiducials. It does not contain information related to software or AI/ML. The document describes a medical device, specifically bone screw fiducials, and compares them to a predicate device. The testing described is "bench testing" focusing on physical, performance, and safety requirements, and simulated workflow testing.
Therefore, for aspects related to AI/ML or a multi-reader multi-case (MRMC) study, the answer will be that this document does not contain that information.
Here's an analysis based only on the provided text, addressing the points where information is available:
1. A table of acceptance criteria and the reported device performance
The document states: "The results from all testing demonstrated that the ClearPoint Bone Screw Fiducials comply with all design and performance specifications."
It also mentions: "ClearPoint Bone Screw Fiducials have been demonstrated to meet all test specifications and has been shown to be as safe, as effective, and to perform as well as the predicate device."
However, the specific "acceptance criteria" and "reported device performance" metrics (e.g., specific thresholds for physical properties, accuracy in imaging, etc.) are not detailed within this document. The document summarizes that the device met these criteria but does not provide them in a table format.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document describes "bench testing" and "simulated workflow testing." It does not specify sample sizes for these tests or discuss data provenance (country of origin, retrospective/prospective) in the context of clinical data or patient-derived data, as the device is physical hardware.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This is not applicable. Ground truth, in the context of AI/ML, refers to expert-labeled data. The testing described is bench testing of a physical device. There is no mention of experts establishing a "ground truth" for imagery or diagnostic performance here.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This is not applicable. No adjudication method is described, as the testing is for physical device performance, not interpretation of data by multiple readers.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study was conducted or mentioned in this document. This document is for a physical medical device (bone screw fiducials), not a software or AI/ML product.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
No standalone algorithm performance study was done or mentioned. This document is for a physical medical device.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
This is not applicable. For a physical device, "ground truth" typically refers to engineering specifications met through measurements and physical verification. The document states "Design Verification testing was performed relative to these specifications." These specifications serve as the "ground truth" for the device's physical and performance characteristics. There is no biological or diagnostic ground truth described in this submission.
8. The sample size for the training set
This is not applicable. There is no training set mentioned, as this is not an AI/ML device.
9. How the ground truth for the training set was established
This is not applicable. There is no training set mentioned, as this is not an AI/ML device.
Summary regarding AI/ML aspects:
The provided document describes a 510(k) premarket notification for a physical medical device (ClearPoint Bone Screw Fiducials). It relies on bench testing to demonstrate performance and substantial equivalence to a predicate device. The information requested regarding AI/ML-specific testing, such as MRMC studies, training/test set sizes, data provenance, expert ground truth establishment, and algorithm-only performance, is not relevant to this submission and therefore not present in the provided text.
(107 days)
ClearPoint Neuro Inc.
The ClearPointer is intended to be used in conjunction with the SmartFrame OR System and a compatible stereotactic optical navigation system for patient registration and navigation.
The ClearPointer Optical Navigation Wand is an optical tracker intended to be used with the SmartFrame OR Stereotactic System and ClearPoint Bone Screw Fiducials, both of which are the subject of pending 510(k) submissions. The ClearPointer is used in conjunction with a Medtronic StealthStation Navigation System to align the SmartFrame OR tower to the desired target. It is composed of 4 optical spheres and utilizes a sphere positional geometry that is recognizable by the Medtronic StealthStation S8 Optical Navigation System. It is equipped with a Pointer Attachment to allow for image registration and entry point location. The Pointer Attachment can also be removed from the Optical Tracker component, so that the Tracker can be mounted to a SmartFrame OR. The SmartFrame OR Hardware Kit is provided with the following components:
- 1x ClearPointer™
- 1x Pointer Attachment
- 1x Reference Array Bracket Arm
- 1x Torx Screwdriver
- 10x Reflective Spheres
Here's a summary of the acceptance criteria and study information for the ClearPointer Optical Navigation Wand, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance:
Acceptance Criteria | Reported Device Performance |
---|---|
Instrument Error | 0.11 – 0.30 mm |
Registration Error | 0.2 – 0.6 mm |
2. Sample Size Used for the Test Set and Data Provenance:
The document does not specify the exact sample size used for the accuracy testing (benchtop testing). It only states that the testing was performed, but not how many measurements were taken or how many units of the device were tested.
The data provenance is bench testing performed after the device was developed to verify design specifications; this is controlled laboratory testing rather than retrospective or prospective patient data. The country of origin for the data is not explicitly stated, but given that ClearPoint Neuro, Inc. is based in the US and is filing with the FDA, it can be inferred that the testing aligns with US regulatory standards.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
The document does not provide information on the number of experts used or their qualifications for establishing ground truth in the accuracy testing. The accuracy measurements (instrument and registration error) were likely determined through objective, quantifiable methods using precise measurement tools in a controlled benchtop environment, rather than expert interpretation of medical images or patient outcomes.
4. Adjudication Method (e.g., 2+1, 3+1, none) for the Test Set:
The document does not specify any adjudication method. Given that the listed tests are objective accuracy measurements (instrument error, registration error) conducted during bench testing, adjudication by multiple experts is not typically applicable in this context.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance:
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done or reported in this document. The ClearPointer Optical Navigation Wand is a physical navigation instrument, not an AI-assisted diagnostic or interpretative tool for medical images, so an MRMC study involving human readers with and without AI assistance is not relevant to its stated purpose.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done:
Yes, a standalone performance evaluation was conducted in the form of "benchtop accuracy testing." This testing measured the inherent accuracy of the device (instrument error and registration error) independent of human variability in a clinical setting, simulating a workflow with Stealthstation S8 software.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.):
The ground truth used for the benchtop accuracy testing was based on objective, verifiable measurements of instrument and registration error against known or precisely controlled spatial references. It is not an expert consensus, pathology, or outcomes data, as these are typically associated with diagnostic or prognostic medical devices.
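For readers unfamiliar with how "registration error" is typically quantified for an optically tracked pointer: one common approach fits a rigid transform between fiducial locations in image space and the same fiducials touched by the tracked pointer, then reports the residuals. The sketch below uses a standard SVD-based (Kabsch) least-squares fit with invented coordinates; it is not ClearPoint's or Medtronic StealthStation's actual algorithm.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points (Kabsch algorithm)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical fiducial positions: locations in image space (mm) vs. the same points in tracker space (mm).
image_pts = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0], [0.0, 50.0, 0.0], [0.0, 0.0, 50.0]])
true_R = np.array([[0.999, -0.04, 0.0], [0.04, 0.999, 0.0], [0.0, 0.0, 1.0]])  # approximate rotation
tracked_pts = image_pts @ true_R.T + np.array([100.2, -30.1, 55.0])
tracked_pts += np.random.default_rng(0).normal(0.0, 0.2, tracked_pts.shape)    # simulated pointer noise

R, t = rigid_fit(image_pts, tracked_pts)
residuals = np.linalg.norm(image_pts @ R.T + t - tracked_pts, axis=1)
print(f"fiducial registration error: mean={residuals.mean():.2f} mm, max={residuals.max():.2f} mm")
```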
8. The Sample Size for the Training Set:
The document does not mention a training set because this device is a physical medical instrument, not a machine learning or AI algorithm that requires a training set.
9. How the Ground Truth for the Training Set Was Established:
This question is not applicable as there is no training set for this device.
(60 days)
ClearPoint Neuro Inc.
The ClearPoint System is intended to provide stereotactic guidance for the placement and operation of instruments or devices during planning and operation of neurological procedures within the MRI environment and in conjunction with MR imaging. During planning, the system is intended to provide functionality for the automatic identification, labeling, visualization and quantification of segmentable brain structures from a set of loaded MR images. The ClearPoint System is intended as an integral part of procedures that have traditionally used stereotactic methodology. These procedures include biopsies, catheter and electrode insertion including deep brain stimulation (DBS) lead placement. The System is intended for use only with 1.5 and 3.0 Tesla MRI scanners and MR conditional implants and devices.
The updated ClearPoint Software Version 2.2 integrates the ClearPoint Neuro Maestro Brain Model software (K213645) into the previous ClearPoint Software Version 2.1 (K222519). The ClearPoint Maestro™ Brain Model product is a stand-alone software application for automatic labeling, visualization, and quantification of segmentable brain structures from a set of MRI images and has been incorporated into the ClearPoint System software. The ClearPoint System described in this submission is essentially identical from a technological standpoint to the cleared predicate device described in K222519 (ClearPoint System). As mentioned above, since the prior clearance, the company has integrated the Maestro Brain Model into the software of the predicate device. Specifically, the company has released an updated version of software 2.1, which was part of the last clearance, and has now been upgraded to software 2.2.
The ClearPoint System is comprised of a workstation laptop with software, the SMARTGrid™ MRI-Guided Planning Grid, the SMARTFrame™ MRI-Guided Trajectory Frame, the SMARTFrame™ Accessory Kit and the SMARTFrame™ Thumbwheel Extension. The SMARTGrid and associated Marking Tool are designed to assist the physician to precisely position the entry hole as called out in the trajectory planning software. The SMARTFrame is an Adjustable Trajectory Frame (ATF) that provides the guidance and fixation for neurosurgical tools. The MRI visible fluids of the Targeting Cannula along with the fiducial markers in the base of the frame allow for trajectory feedback when the physician views the MRI images, makes changes and confirms with subsequent MR images.
The ClearPoint System can be used with any MRI-compatible head fixation frame to immobilize the patient's head with respect to the scanner table, as well as with any imaging coil(s) (supplied by scanner manufacturers) that meet the physician's desired imaging quality. ClearPoint Neuro also supplies an optional head fixation frame that can be used with the ClearPoint System.
The ClearPoint Workstation includes the following:
- ClearPoint Workstation Software (for trajectory planning and monitoring)
- Laptop Computer
The hardware components of the current ClearPoint System are the SMARTFrame and Accessories. They are all single use devices that are provided sterile and include the following:
- SMARTGrid MRI Planning Grid (interacts with the software to determine the desired location of the burr hole)
  a. Marking Grid
  b. Marking Tool
- SMARTFrame Pack (SMARTFrame or SMARTFrame XG)
  a. SMARTFrame ("ATF") with Base
  b. Centering Device and Wharen Centering Guide
  c. Dock
  d. Device Lock (2 different diameters)
  e. Screwdriver
  f. Roll Lock Screw and Washer
- Rescue Screws (Extra Titanium Screws)
- Thumbwheel Extension
- Accessory Kit
  a. Peel-away Sheath
  b. Stylet
  c. Lancet
  d. Depth Stop
  e. Ruler
- Scalp Mount Base
- Guide Tubes and Device Guide Packs (Guide Cannulas)
In addition, the ClearPoint System is used with the following separately cleared or Class I, 510(k)-exempt products:
SmartTip MRI Hand Drill and Drill Bit Kit
MRI Neuro Procedure Drape, with Marker Pen and Cover
SmartFrame MR Fiducial
Each of the above packs is sold separately and is intended to be used with the ClearPoint Workstation. Each of the components has been described in detail in previous submissions. The ClearPoint System described in this 510(k) is a modification to the company's cleared ClearPoint System (K222519).
The provided document primarily focuses on the substantial equivalence of the ClearPoint System (Software Version 2.2) to its predicate device (ClearPoint System Software Version 2.1) and the integration of functionalities from another cleared device (ClearPoint Maestro Brain Model, K213645). While it mentions "Accuracy testing" and "acceptance criteria," the level of detail provided is insufficient to fully answer all aspects of your request, particularly regarding specific performance metrics for the integrated Maestro Brain Model functionalities, the study design for establishing ground truth, or details of multi-reader studies.
However, based on the information provided, here's what can be extracted and inferred:
1. A table of acceptance criteria and the reported device performance
The document mentions "Accuracy testing was performed to confirm that modifications included in ClearPoint System 2.2 did not cause any unexpected changes in the accuracy specifications of the software, with successful results." The Table 2, "ClearPoint System Accuracy Specifications," appears to represent the device's demonstrated performance against an underlying (but unstated) acceptance criterion for accuracy.
| Acceptance Criterion (Inferred from Predicate Claims) | Reported Device Performance (ClearPoint System Software Version 2.2) |
|---|---|
| Positional error (e.g., within a range such as ±1.5 mm @ ≤125mm, as stated in the comparison table for Targeting Accuracy) | Positional Error (mm): Mean (X, Y, Z) 0.14, 0.16, 0.56; Std. Dev. 0.37, 0.54, 0.57; 99% CI 0.44, 0.60, 0.10 |
| Angular error (e.g., within a certain angular tolerance) | Angular Error (deg.): Mean 0.32°; Std. Dev. 0.17°; 99% CI 0.46° |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not explicitly state the sample size used for "Accuracy testing" or for the validation of the Maestro Brain Model functionalities. It broadly states "ClearPoint Neuro performed extensive Non-Clinical Verification Testing." No information on data provenance (country of origin, retrospective/prospective) is provided.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not provide details on the number or qualifications of experts used to establish ground truth for the test set. It mentions "Workflow for verifying the brain structure segmentation results" for the Maestro Brain Model functionality, which implies a human review process, but no specifics are given.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not specify any adjudication method used for the test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance
The document does not mention any multi-reader multi-case (MRMC) comparative effectiveness study. The focus is on the integration of an existing cleared software and ensuring the overall system's accuracy specifications are maintained. The AI component (automatic segmentation of brain structures) is integrated, but its comparative effectiveness with human readers is not detailed.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, a standalone validation of the "Maestro Brain Model" functionality was likely already done as part of its original 510(k) clearance (K213645). The current submission states, "The inclusion of the Maestro Brain Model functionalities incorporates the functions of a standalone software product that has previously been subject of a cleared 510(k), (K213645)."
While the specific standalone performance metrics for K213645 are not reproduced in this document, the fact that it was a "stand-alone software application for automatic labeling, visualization, and quantification of segmentable brain structures" implies that its performance as an algorithm-only component was evaluated during its initial clearance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The document does not explicitly state the type of ground truth used. For "Accuracy testing" related to positional and angular error, it typically involves phantom studies or precisely measured landmarks. For the brain structure segmentation, the "Maestro Brain Model" clearance (K213645) would have established its own ground truth, likely involving expert-drawn segmentations or anatomical atlases. The current submission only refers to the pre-existing clearance.
8. The sample size for the training set
The document does not provide any information regarding the sample size for the training set for any of the software's components, including the Maestro Brain Model.
9. How the ground truth for the training set was established
The document does not provide any information on how the ground truth for the training set was established. This information would typically be part of the original 510(k) submission for the Maestro Brain Model (K213645), which is referenced as a predicate, but not detailed here.
(63 days)
ClearPoint Neuro, Inc.
The ClearPoint Array System (Version 1.2) is intended to provide stereotactic guidance for the placement and operation of instruments or devices during planning and operation of neurological procedures within the MRI in conjunction with MR imaging. The ClearPoint Array System (Version 1.2) is intended as an integral part of procedures that have traditionally used stereotactic methodology. These procedures include biopsies, catheter and electrode insertion including deep brain stimulation (DBS) lead placement. The System is intended for use only with 1.5 and 3.0 Tesla MRI scanners and MR Conditional implants and devices.
The ClearPoint Array System is comprised of a workstation laptop with workstation software, the SMARTGrid™ MRI-Guided Planning Grid, the SMARTFrame™ Array MRI-Guided Trajectory Frame, SmartFrame Array Reducer Tube Kit, the ClearPoint™ Accessory Kit, the SMARTFrame™ Array Thumb Wheel Extension Set, and the MRI Neuro Procedure Drape. The ClearPoint Array Workstation includes the following:
- ClearPoint Array Workstation Software (for trajectory planning and monitoring)
- Laptop Computer
The hardware components of the ClearPoint Array System are the SMARTFrame Array and accessories, and are listed below. They are all single use devices provided sterile. Beyond the changes described above, there have been no modifications to the hardware compared to the last cleared version of the device (K214040).
- SMARTFrame Array Pack
  a. SMARTFrame Array (adjustable trajectory frame to guide and hold the neurosurgical tools, includes Probe Adapter, Tracker Rod)
  b. SMARTFrame Array Scalp Mount Base (includes fiducials, titanium screws, and titanium standoff pins)
  c. Entry Point Locator
  d. Targeting Stem
  e. Centering Device
  f. Dock
  g. Device Lock (2 different diameters)
  h. Screwdriver
  i. 2.1-mm Guide Tube
  j. 4.5 Center Drill Guide
  k. 4.5 Offset Drill Guide
  l. 3.4-mm Drill Reducer Tube
  m. Center Insertion Guide
  n. Offset Insertion Guide
- SmartFrame Array Thumb Wheel Extension Set for the trajectory frame
- SmartFrame Array Guide Tube Kit
  a. 1.7-mm Guide Tube
  b. 0.5-mm Guide Tube and Device Lock
  c. 3.1-mm Guide Tube and Device Lock
  d. SmartFrame Array Guide Tubes (sold separately)
  e. 7.9mm Center and Offset Device Guides
  f. 5.4mm Center and Offset Device Guides

Common Components to the ClearPoint System are listed below. These components have not been modified since their clearance (K214040, K200097, K100836, K091343).
- SMARTGrid Pack (interacts with the Software to determine the desired location of the burr hole) (K100836)
  a. Marking Grid
  b. Marking Tool
- Accessory Pack (K200097)
  a. Peel away sheath
  b. Stylet
  c. Depth Stop
  d. Ruler
- MRI Neuro Procedure Drape (K091343)
The provided text describes the ClearPoint Array System (Version 1.2) and its non-clinical testing for substantial equivalence to a predicate device. It primarily focuses on software modifications and accuracy specifications.
Here's an analysis of the acceptance criteria and study proving the device meets them, based only on the provided text:
Key Takeaway: The provided document is a 510(k) summary, which focuses on demonstrating substantial equivalence to a legally marketed predicate device through non-clinical testing. It does not describe a clinical study comparing human reader performance with and without AI, or a standalone AI performance study. The "device meets acceptance criteria" refers to the non-clinical performance benchmarks.
1. A table of acceptance criteria and the reported device performance
The document presents performance validation data as part of the non-clinical testing, which serves as the acceptance criteria for the accuracy of the system.
| Performance Validation | Acceptance Criteria (Implicit from Reported Performance) | Reported Device Performance (Mean) | Reported Device Performance (99% CI) |
|---|---|---|---|
| Positional Error (mm), X / Y / Z | Not explicitly stated as a separate criterion; the reported 99% CI implies the acceptable range | X: 0.78, Y: 1.52, Z: -1.41 | X: 1.14, Y: 1.94, Z: -2.08 |
| Angular Error (deg.) | Not explicitly stated as a separate criterion; the reported 99% CI implies the acceptable range | 0.67° | 0.85° |
Note on Acceptance Criteria: The document states, "The results of all testing met the acceptance criteria and demonstrated that the modified ClearPoint Array Software complies with all design specifications and performs as expected." However, the specific numerical acceptance criteria thresholds (e.g., "positional error must be less than X mm") are not explicitly listed in a separate column from the reported performance. Instead, the reported performance (especially the 99% CI) is the validation against which the "acceptance criteria" were met. The implication is that these measured values fell within the pre-defined acceptable limits for each parameter.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: The document mentions "Accuracy testing was performed," but does not specify the sample size (e.g., number of measurements, number of trajectories, or number of cases) used for this testing.
- Data Provenance: Not specified. Given it's non-clinical testing for a medical device 510(k), it's typically laboratory-based simulation/phantom data, not patient data from a specific country or collected retrospectively/prospectively.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable. This was non-clinical accuracy testing of a stereotaxic guidance system, not a study involving human interpretation of medical images or expert adjudication of clinical outcomes. The "ground truth" for positional and angular accuracy would have been established by engineering measurements against known physical references.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not applicable. This was non-clinical accuracy testing, not a study requiring human adjudication of results.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance
- No, an MRMC comparative effectiveness study was not conducted or described. The document focuses on the mechanical/software accuracy of the stereotactic guidance system, not on AI assistance for human image readers. The "AI" concept as typically understood in medical image analysis (e.g., for detection or diagnosis) is not relevant to this device's described function as a stereotaxic instrument.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- A form of "standalone" testing was done, but it's for the software components of a stereotaxic guidance system, not a diagnostic AI algorithm. The document states:
- "ClearPoint Neuro performed extensive Non-Clinical Verification Testing to evaluate the safety and performance of the software components of ClearPoint Array System (Version 1.2)."
- Specific tests included: "Automated Verification," "Integrated System Verification," "Localization Verification," and "Regression Test Verification."
- "Accuracy testing was performed to confirm that modifications included in ClearPoint Array 1.2 did not cause any unexpected changes in the accuracy specifications of the software, with successful results."
- This "accuracy testing" (Table 5-1) represents the "algorithm only" performance for positional and angular accuracy of the guidance system's calculations.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- The ground truth for the non-clinical accuracy testing would have been physical measurements or known engineering specifications from a controlled test environment (e.g., phantom studies with precisely known target locations and trajectories). It would not be expert consensus, pathology, or outcomes data, as this is a device for guidance during neurological procedures, not for diagnosis.
8. The sample size for the training set
- Not applicable. The document describes non-clinical verification testing of a stereotaxic guidance system's software and hardware, not a machine learning model that requires a "training set."
9. How the ground truth for the training set was established
- Not applicable. As no training set was described, there's no ground truth establishment for it.
(28 days)
ClearPoint Neuro, Inc.
The ClearPoint System is intended to provide stereotactic guidance for the placement and operation of instruments or devices during planning and operation of neurological procedures within the MRI environment and in conjunction with MR imaging. The ClearPoint System is intended as an integral part of procedures that have traditionally used stereotactic methodology. These procedures include biopsies, catheter and electrode insertion including deep brain stimulation (DBS) lead placement. The System is intended for use only with 1.5 and 3.0 Tesla MRI scanners and MR Conditional implants and devices.
The ClearPoint System is comprised of a workstation laptop with software, the SMARTGrid™ MRI-Guided Planning Grid, the SMARTFrame™ MRI-Guided Trajectory Frame, the SMARTFrame™ Accessory Kit and the SMARTFrame™ Thumbwheel Extension. The SMARTGrid and associated Marking Tool are designed to assist the physician to precisely position the entry hole as called out in the trajectory planning software. The SMARTFrame is an Adjustable Trajectory Frame (ATF) that provides the guidance and fixation for neurosurgical tools. The MRI visible fluids of the Targeting Cannula along with the fiducial markers in the frame allow for trajectory feedback when the physician views the MRI images, makes changes and confirms with subsequent MR images. The modified ClearPoint Software is used to provide stereotactic guidance for the insertion of one or more devices into the brain within a magnetic resonance imaging (MRI) environment, using hardware provided by ClearPoint Neuro, Incorporated. The software will guide the end user through a set of discrete workflow steps for identifying localization hardware mounted onto the patient, planning one or more trajectory paths into the brain, guiding the alignment of one or more stereotactic frames along each of the planned trajectories, and monitoring the insertion of one or more devices into the brain. The software also supports a workflow for creating pre-operative plans prior to carrying out the intraoperative procedure. The software will be installed on a physical laptop computer situated inside the MRI Suite during the intra-operative procedure. There, it will be used in conjunction with an MRI scanner, the SMARTFrame adjustable trajectory frame (ATF), and associated disposable hardware kits provided by ClearPoint Neuro to guide the user through the insertion of one or more devices into the brain. Throughout the procedure, in instances where specific scans are required, the software application will prescribe scan plane parameters detailing the position and angulation of a desired image acquisition necessary to proceed with the workflow. In these cases, users are required to enter the parameters prescribed by the software manually on the MRI scanner console to carry out the appropriate image acquisition. The ClearPoint System can be used with any MRI-compatible head fixation frame to immobilize the patient's head with respect to the scanner table, as well as with any imaging coil(s) (supplied by scanner manufacturers) that meet the physician's desired imaging quality. ClearPoint Neuro also supplies an optional head fixation frame that can be used with the ClearPoint System.
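For orientation only, here is a rough geometric sketch of what "scan plane parameters detailing the position and angulation of a desired image acquisition" amounts to: a plane position and orientation derived from the planned trajectory. The coordinates and the particular plane construction are assumptions for illustration, not the ClearPoint software's actual prescription logic.

```python
import numpy as np

# Hypothetical entry and target points in scanner coordinates (mm).
entry = np.array([32.0, 55.0, 80.0])
target = np.array([12.0, 10.0, 20.0])

traj = target - entry
traj_unit = traj / np.linalg.norm(traj)

# Construct an oblique plane that contains the trajectory:
# pick a helper axis not parallel to the trajectory, then build two orthonormal in-plane axes.
helper = np.array([0.0, 0.0, 1.0]) if abs(traj_unit[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
in_plane = np.cross(traj_unit, helper)
in_plane /= np.linalg.norm(in_plane)
normal = np.cross(traj_unit, in_plane)   # plane normal is perpendicular to the trajectory

center = (entry + target) / 2.0          # position the plane midway along the planned path
print("plane center (mm):", np.round(center, 1))
print("plane normal (unit vector):", np.round(normal, 3))
print("trajectory length (mm):", round(float(np.linalg.norm(traj)), 1))
```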
The provided document describes a 510(k) premarket notification for a software update (version 2.1) to the ClearPoint System. The primary focus of the submission is to demonstrate that the updated software is substantially equivalent to the predicate device (ClearPoint System with software 2.0).
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Key Finding: The document explicitly states that the hardware and Indications for Use are unchanged from the predicate device. The updates are limited to the software of the ClearPoint System. Therefore, the "study" described focuses on software verification and validation, primarily demonstrating that the new software does not degrade performance or introduce new risks compared to the previous version, rather than establishing direct clinical effectiveness or an acceptance criteria for a new device.
1. Table of Acceptance Criteria and Reported Device Performance
Given that this submission is for a software update to an already cleared device, the acceptance criteria are framed in terms of ensuring the continued performance and safety of the device, rather than establishing initial performance benchmarks against a specific clinical threshold. The core acceptance criterion for this 510(k) is substantial equivalence to the predicate device.
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
No Regression in Existing Functionality | Head-To-Head Comparison: ClearPoint 2.1 passed successfully, verifying "no unintended changes have been introduced from the previous version in essential features that are not expected to have changed." |
Image Registration Unit Test: ClearPoint 2.1 passed successfully, verifying "users will not be required to perform large manual registrations more often than in the previous version." | |
Integrity of Core Algorithms (Segmentation, Low-Level Math) | Automated Testing (Segmentation & Unit Tests): "All automated tests were executed with no failures and no incidental observations." This confirms the underlying algorithms perform as expected. |
Maintenance of Defined Accuracy Specifications (with MRI scanner) | Integrated System Testing: "All tests were executed and pass results were obtained," verifying "the ClearPoint 2.1 software is able to guide placement of a device within the defined accuracy specifications of the system." (The document states the system's targeting accuracy is ±1.5mm @ ≤125mm, implying this was maintained.) |
Compliance with Software Standards (IEC 62304) and Risk Management (ISO 14971:2019) | "Results of the software verification and validation activities demonstrate compatibilities with the requirements of the IEC 62304 standard. Risk analysis activities were also performed in compliance with requirements of ISO 14971:2019." |
No New Safety or Effectiveness Issues | Concluded that "The minor differences between the subject and predicate device do not raise any new issues of safety and effectiveness when the device is used as labeled, and the design controls and data collected ensure no adverse impact on safety or effectiveness." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: The document describes "test cases" and "data inputs" used for various software tests, but does not provide a specific number of cases or patients/procedures used in these tests. It refers to "previously acquired test image data" for manual verification tests.
- Data Provenance: Not explicitly stated, but the mention of "previously acquired test image data" suggests the use of existing, likely retrospective, data for some manual testing. No specific country of origin is mentioned. The integrated system testing involved "reproducing clinical usage" with an MRI scanner, implying simulated or real-world data akin to clinical scenarios.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: Not applicable in the traditional sense of clinical experts establishing ground truth for a diagnostic AI. The document describes "two different testers" for manual verification tests.
- Qualifications of Experts: The qualifications of these "testers" are not specified beyond their role in executing tests. Given the nature of a software update, these would likely be software quality assurance or engineering personnel rather than medical professionals.
4. Adjudication Method for the Test Set
- Adjudication Method: Not explicitly described as an adjudication method for ground truth, as the testing focuses on software functionality rather than clinical interpretation. For "manual verification bench tests," it states "Tests were executed independently by two different testers using the same build of software." This implies a form of independent verification but not a consensus-based adjudication of a medical "ground truth."
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done
- MRMC Study: No, a Multi Reader Multi Case (MRMC) comparative effectiveness study was not performed. This submission is for a software update to a stereotaxic guidance system, not a diagnostic AI intended to assist human readers in interpreting medical images. Therefore, a study to measure human reader improvement with AI assistance is not relevant to this type of device and submission.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Performance: The testing described focuses on the software's functional correctness and its ability to maintain the system's "defined accuracy specifications" when integrated with hardware and MRI scanners.
- Automated Tests: These "exercise all the underlying segmentation algorithms in isolation outside the application against a range of data inputs" and "exercise low-level math and other libraries in isolation." This broadly represents a form of standalone performance testing for specific algorithmic components. However, this is not a measurement of diagnostic accuracy but rather functional validation.
- The "Targeting Accuracy" of the system is stated as "±1.5mm @ ≤125mm," which is a performance specification for the entire system, not just the software in isolation. The integrated system tests verify that the software contributes to maintaining this accuracy.
7. The Type of Ground Truth Used
- Type of Ground Truth: For the software verification and validation, the "ground truth" is implied by the expected output of the software (e.g., consistency with predicate software, known expected results for algorithms, or adherence to the system's accuracy specifications). It is not based on expert consensus, pathology, or outcomes data in a clinical sense. It's a technical "ground truth" derived from software requirements and direct comparison to the previous, cleared software version.
8. The Sample Size for the Training Set
- Training Set Sample Size: Not applicable/Not provided. This document describes a software update for a medical device that provides stereotactic guidance. It is not an AI/ML device that requires a "training set" in the context of machine learning model development. The software updates are described as "updates" and "upgraded," implying feature changes or bug fixes rather than re-training of a learned model.
9. How the Ground Truth for the Training Set was Established
- Ground Truth for Training Set: Not applicable. As stated above, this is a software update for a medical device, not an AI/ML application that involves a training set and associated ground truth establishment for model learning.
(263 days)
ClearPoint Neuro, Inc.
ClearPoint Maestro™ Brain Model is intended for automatic labeling, visualization, volumetric and shape quantification of segmentable brain structures from a set of MR images. This software is intended to automate the process of identifying, labelling, and quantifying the volume and shape of brain structures visible in MRI images.
The ClearPoint Maestro™ Brain Model provides automated image processing of brain structures from T1-weighted MR images. Specifically, the device automates the manual process of identifying, labeling, and quantifying the volume and shape of subcortical structures to simplify the workflow for MRI segmentation.
The ClearPoint Maestro™ Brain Model consists of the following key functional modules.
- DICOM Read Module
- Segmentation Module
- Visualization Module
- Exporting Module
The segmented brain structures are color coded and overlaid onto the MR images or displayed as a 3-D triangular mesh representation. The viewing capabilities of the device also provide anatomic orientation labels (left, right, inferior, superior, anterior, posterior), image slice selection, standard image manipulation such as contrast adjustment, rotation, zoom, and the ability to adjust the transparency of the image overlay.
The output from ClearPoint Maestro™ Brain Model can also be exported as a report in PDF format. The report also provides a comparison of segmented volume to normative values of brain structures based on reference data.
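As a rough illustration of the "comparison of segmented volume to normative values" step described above, the sketch below computes a structure's volume from a label map and voxel spacing and checks it against a normative interval. The label IDs, spacing, mask geometry, and normative range are all invented for the example and do not come from the submission.

```python
import numpy as np

# Illustrative label map: 0 = background, 1 = an assumed subcortical structure of interest.
label_map = np.zeros((64, 64, 64), dtype=np.uint8)
label_map[20:40, 22:38, 25:42] = 1            # synthetic segmentation of one structure

spacing_mm = (1.0, 1.0, 1.2)                  # assumed voxel spacing (e.g., from the image header)
voxel_volume_ml = float(np.prod(spacing_mm)) / 1000.0   # mm^3 per voxel -> mL

structure_volume_ml = (label_map == 1).sum() * voxel_volume_ml

normative_range_ml = (5.0, 9.0)               # assumed normative interval for this structure
lo, hi = normative_range_ml
status = "within" if lo <= structure_volume_ml <= hi else "outside"
print(f"structure volume: {structure_volume_ml:.2f} mL ({status} normative range {lo}-{hi} mL)")
```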
Here's a breakdown of the acceptance criteria and study details for the ClearPoint Maestro™ Brain Model, based on the provided FDA 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Segmentation Accuracy: | |
Dice coefficient >0.7 | Met: Mean Dice coefficients for 21 segmented brain structures in 101 subjects were significantly greater than 70%. The only exception was the third ventricle, attributed to manual labeling variability rather than device performance. |
Relative volume difference 70% and mean relative volume differences |
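Both metrics in this table are standard segmentation-quality measures. A minimal sketch of how the Dice coefficient and a relative volume difference are computed from binary masks is shown below; the masks are synthetic and the helper names are ours, not from the Maestro submission.

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def relative_volume_difference(pred, ref):
    """Signed relative volume difference of a predicted mask vs. a reference mask."""
    return (pred.sum() - ref.sum()) / ref.sum()

# Synthetic example: a manually labeled reference vs. an automatic segmentation of one structure.
reference = np.zeros((30, 30, 30), dtype=bool)
reference[8:22, 8:22, 8:22] = True
automatic = np.zeros_like(reference)
automatic[9:22, 8:23, 8:22] = True

print(f"Dice = {dice_coefficient(automatic, reference):.3f}")          # acceptance threshold in the table: > 0.7
print(f"Relative volume difference = {relative_volume_difference(automatic, reference):+.1%}")
```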