510(k) Data Aggregation (63 days)
ClearPoint Array System (Version 1.2)
The ClearPoint Array System (Version 1.2) is intended to provide stereotactic guidance for the placement and operation of instruments or devices during planning and operation of neurological procedures within the MRI in conjunction with MR imaging. The ClearPoint Array System (Version 1.2) is intended as an integral part of procedures that have traditionally used stereotactic methodology. These procedures include biopsies, catheter and electrode insertion including deep brain stimulation (DBS) lead placement. The System is intended for use only with 1.5 and 3.0 Tesla MRI scanners and MR Conditional implants and devices.
The ClearPoint Array System is comprised of a workstation laptop with workstation software, the SMARTGrid™ MRI-Guided Planning Grid, the SMARTFrame™ Array MRI-Guided Trajectory Frame, SmartFrame Array Reducer Tube Kit, the ClearPoint™ Accessory Kit, the SMARTFrame™ Array Thumb Wheel Extension Set, and the MRI Neuro Procedure Drape. The ClearPoint Array Workstation includes the following:
- ClearPoint Array Workstation Software (for trajectory planning and monitoring)
- Laptop Computer
The hardware components of the ClearPoint Array System are the SMARTFrame Array and accessories, listed below. They are all single-use devices provided sterile. Beyond the changes described above, there have been no modifications to the hardware compared to the last cleared version of the device (K214040).
- SMARTFrame Array Pack
a. SMARTFrame Array (adjustable trajectory frame to guide and hold the neurosurgical tools, includes Probe Adapter, Tracker Rod)
b. SMARTFrame Array Scalp Mount Base (includes fiducials, titanium screws, and titanium standoff pins)
c. Entry Point Locator
d. Targeting Stem
e. Centering Device
f. Dock
g. Device Lock (2 different diameters)
h. Screwdriver
i. 2.1-mm Guide Tube
j. 4.5 Center Drill Guide
k. 4.5 Offset Drill Guide
l. 3.4-mm Drill Reducer Tube
m. Center Insertion Guide
n. Offset Insertion Guide
- SmartFrame Array Thumb Wheel Extension Set for the trajectory frame
- SmartFrame Array Guide Tube Kit
a. 1.7-mm Guide Tube
b. 0.5-mm Guide Tube and Device Lock
c. 3.1-mm Guide Tube and Device Lock
d. SmartFrame Array Guide Tubes (sold separately)
e. 7.9mm Center and Offset Device Guides
f. 5.4mm Center and Offset Device Guides
Components common to the ClearPoint System are listed below. These components have not been modified since their clearances (K214040, K200097, K100836, K091343).
- SMARTGrid Pack (interacts with the Software to determine the desired location of the burr hole) (K100836)
a. Marking Grid
b. Marking Tool
- Accessory Pack (K200097)
a. Peel away sheath
b. Stylet
c. Depth Stop
d. Ruler
- MRI Neuro Procedure Drape (K091343)
The provided text describes the ClearPoint Array System (Version 1.2) and its non-clinical testing for substantial equivalence to a predicate device. It primarily focuses on software modifications and accuracy specifications.
Here's an analysis of the acceptance criteria and study proving the device meets them, based only on the provided text:
Key Takeaway: The provided document is a 510(k) summary, which focuses on demonstrating substantial equivalence to a legally marketed predicate device through non-clinical testing. It does not describe a clinical study comparing human reader performance with and without AI, or a standalone AI performance study. Here, "the device meets acceptance criteria" refers to meeting the non-clinical performance benchmarks.
1. A table of acceptance criteria and the reported device performance
The document presents performance validation data as part of the non-clinical testing; these data serve as the acceptance criteria for the accuracy of the system.
| Performance Validation | Acceptance Criteria (Implicit from Reported Performance) | Reported Device Performance (Mean) | Reported Device Performance (99% CI) |
| --- | --- | --- | --- |
| Positional Error (mm): X, Y, Z | Not explicitly stated as a separate "acceptance criteria" column; the reported 99% CI implies the acceptable range. | X: 0.78, Y: 1.52, Z: -1.41 | X: 1.14, Y: 1.94, Z: -2.08 |
| Angular Error (deg.) | Not explicitly stated as a separate "acceptance criteria" column; the reported 99% CI implies the acceptable range. | 0.67° | 0.85° |
Note on Acceptance Criteria: The document states, "The results of all testing met the acceptance criteria and demonstrated that the modified ClearPoint Array Software complies with all design specifications and performs as expected." However, the specific numerical acceptance thresholds (e.g., "positional error must be less than X mm") are not listed in a column separate from the reported performance. Instead, the reported performance (particularly the 99% CI) is the evidence that the acceptance criteria were met; the implication is that these measured values fell within the pre-defined acceptable limits for each parameter.
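To illustrate the relationship between a mean error and its 99% confidence interval, the short calculation below derives both from a set of repeated measurements and compares the upper bound against a spec limit. This is a minimal sketch: the error values, sample size, and acceptance limit are hypothetical, and the document does not disclose the statistical method actually used.

```python
# Illustrative only: deriving a mean error and a 99% confidence interval
# from repeated phantom measurements. The values below are invented; the
# 510(k) summary does not report the raw data or the exact method used.
import statistics

errors_mm = [0.62, 0.81, 0.74, 0.90, 0.69, 0.85, 0.77, 0.88]  # hypothetical per-axis errors (mm)

n = len(errors_mm)
mean = statistics.mean(errors_mm)
sem = statistics.stdev(errors_mm) / n ** 0.5        # standard error of the mean

# z-quantile for a two-sided 99% interval (normal approximation; a
# t-quantile would be more appropriate for a sample this small)
z = statistics.NormalDist().inv_cdf(0.995)
ci_low, ci_high = mean - z * sem, mean + z * sem

ACCEPTANCE_LIMIT_MM = 1.5                            # hypothetical spec limit
print(f"mean = {mean:.2f} mm, 99% CI = ({ci_low:.2f}, {ci_high:.2f}) mm")
print("within spec:", ci_high <= ACCEPTANCE_LIMIT_MM)
```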
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: The document mentions "Accuracy testing was performed," but does not specify the sample size (e.g., number of measurements, number of trajectories, or number of cases) used for this testing.
- Data Provenance: Not specified. Given it's non-clinical testing for a medical device 510(k), it's typically laboratory-based simulation/phantom data, not patient data from a specific country or collected retrospectively/prospectively.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., a radiologist with 10 years of experience)
- Not applicable. This was non-clinical accuracy testing of a stereotaxic guidance system, not a study involving human interpretation of medical images or expert adjudication of clinical outcomes. The "ground truth" for positional and angular accuracy would have been established by engineering measurements against known physical references, as sketched below.
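For context, accuracy against a known physical reference is typically quantified as the Euclidean distance between planned and measured target points (positional error) and the angle between planned and measured trajectory vectors (angular error). The sketch below shows one plausible formulation; the coordinates are hypothetical, and the document does not describe the actual test setup or measurement procedure.

```python
# Hypothetical illustration of positional and angular error against a known
# phantom reference; the coordinates are invented and the document does not
# describe the actual test setup.
import math

def positional_error_mm(planned, measured):
    """Euclidean distance (mm) between planned and measured target points."""
    return math.dist(planned, measured)

def angular_error_deg(planned_vec, measured_vec):
    """Angle (degrees) between planned and measured trajectory vectors."""
    dot = sum(p * m for p, m in zip(planned_vec, measured_vec))
    norm = math.hypot(*planned_vec) * math.hypot(*measured_vec)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

planned_target, measured_target = (10.0, 25.0, 40.0), (10.8, 26.5, 38.6)
planned_dir, measured_dir = (0.0, 0.0, 1.0), (0.01, 0.02, 0.999)

print(f"positional error: {positional_error_mm(planned_target, measured_target):.2f} mm")
print(f"angular error:    {angular_error_deg(planned_dir, measured_dir):.2f} deg")
```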
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not applicable. This was non-clinical accuracy testing, not a study requiring human adjudication of results.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No, an MRMC comparative effectiveness study was not conducted or described. The document focuses on the mechanical/software accuracy of the stereotactic guidance system, not on AI assistance for human image readers. The "AI" concept as typically understood in medical image analysis (e.g., for detection or diagnosis) is not relevant to this device's described function as a stereotaxic instrument.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- A form of "standalone" testing was done, but it's for the software components of a stereotaxic guidance system, not a diagnostic AI algorithm. The document states:
- "ClearPoint Neuro performed extensive Non-Clinical Verification Testing to evaluate the safety and performance of the software components of ClearPoint Array System (Version 1.2)."
- Specific tests included: "Automated Verification," "Integrated System Verification," "Localization Verification," and "Regression Test Verification."
- "Accuracy testing was performed to confirm that modifications included in ClearPoint Array 1.2 did not cause any unexpected changes in the accuracy specifications of the software, with successful results."
- This "accuracy testing" (Table 5-1) represents the "algorithm only" performance for positional and angular accuracy of the guidance system's calculations.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- The ground truth for the non-clinical accuracy testing would have been physical measurements or known engineering specifications from a controlled test environment (e.g., phantom studies with precisely known target locations and trajectories). It would not be expert consensus, pathology, or outcomes data, as this is a device for guidance during neurological procedures, not for diagnosis.
8. The sample size for the training set
- Not applicable. The document describes non-clinical verification testing of a stereotaxic guidance system's software and hardware, not a machine learning model that requires a "training set."
9. How the ground truth for the training set was established
- Not applicable. As no training set was described, there's no ground truth establishment for it.