Search Results
Found 6 results
510(k) Data Aggregation
(266 days)
CAScination AG
OTOPLAN is intended to be used by otologists and neurotologists as a software interface allowing the display, segmentation, and transfer of medical image data from medical CT, MR, and XA imaging systems to investigate anatomy relevant for the preoperative planning and postoperative assessment of otological and neurotological procedures (e.g., cochlear implantation).
OTOPLAN is Software as a Medical Device (SaMD) that consolidates a DICOM viewer, ruler function, and calculator function into one software platform. The user can
- import DICOM-conform medical images, fuse supported images and view these images.
- navigate through the images and segment ENT relevant structures (semi-automatic/automatic), which can be highlighted in the 2D images and 3D view.
- use a virtual ruler to geometrically measure distances and a calculator to apply established formulae to estimate cochlear length and frequency.
- create a virtual trajectory, which can be displayed in the 2D images and 3D view.
- identify electrode array contacts, lead, and housing of a cochlear implant to assess electrode insertion and position.
- input audiogram-related data that were generated during audiological testing with a standard audiometer and visualize them in OTOPLAN.
OTOPLAN allows the visualization of third-party information, that is, cochlear implant electrodes, implant housings and audio processors.
The information provided by OTOPLAN is solely assistive and for the benefit of the user. All tasks performed with OTOPLAN require user interaction; OTOPLAN does not alter data sets but constitutes a software platform to perform tasks that are otherwise performed manually. Therefore, the user is required to have clinical experience and judgment.
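As background to the "calculator" functions described above, the kind of established formula such tools apply can be illustrated with the Greenwood place-frequency function, which maps a position along the human cochlea to its characteristic frequency. The sketch below is illustrative only and is not drawn from OTOPLAN's implementation; the constants are Greenwood's published values for the human cochlea.

```python
def greenwood_frequency(x: float, A: float = 165.4, a: float = 2.1, k: float = 0.88) -> float:
    """Greenwood place-frequency map for the human cochlea.

    x is the normalized distance from the cochlear apex (0.0 = apex, 1.0 = base);
    the return value is the characteristic frequency in Hz.
    """
    return A * (10 ** (a * x) - k)

# Characteristic frequency at the apex, midpoint, and base:
for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f}: {greenwood_frequency(x):8.1f} Hz")  # ~19.8 Hz up to ~20677 Hz
```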
The provided document describes the acceptance criteria and the study that proves the device (OTOPLAN version 3.1) meets these criteria for several new functionalities.
Here's the breakdown:
Acceptance Criteria and Device Performance Study for OTOPLAN v3.1
1. Table of Acceptance Criteria and Reported Device Performance
The document describes performance tests for several new automatic functions introduced in OTOPLAN v3.1. These are broadly categorized into Temporal Bone, Skin, and Inner Ear segmentation and thickness mapping, and CT-CT and CT-MR Image Fusion.
Table: Acceptance Criteria and Reported Device Performance
Functionality Tested | Acceptance Criteria | Reported Device Performance | Pass/Fail
---|---|---|---
Temporal Bone Thickness Mapping | Mean Absolute Difference (MAD) ≤ 0.6 mm; 95% Confidence Interval (CI) upper limit ≤ 0.8 mm | MAD: 0.17–0.20 mm; CI: 0.19–0.22 mm | Pass
Temporal Bone 3D Reconstruction | Mean DICE coefficient ≥ 0.85; 95% CI lower limit ≥ 0.85 | DICE (R1): 0.88 [CI: 0.87–0.89]; (R2): 0.86 [CI: 0.85–0.87]; (R3): 0.89 [CI: 0.88–0.90] | Pass
Skin Thickness Mapping | MAD ≤ 0.6 mm; 95% CI upper limit ≤ 0.8 mm | MAD: 0.21–0.23 mm; CI: 0.23–0.26 mm | Pass
Skin 3D Reconstruction | Mean DICE coefficient ≥ 0.68; 95% CI lower limit ≥ 0.68 | DICE (R1): 0.89 [CI: 0.88–0.90]; (R2): 0.87 [CI: 0.86–0.88]; (R3): 0.86 [CI: 0.84–0.88] | Pass
Scala Tympani 3D Reconstruction | Mean DICE coefficient ≥ 0.65; 95% CI lower limit ≥ 0.65 | DICE: 0.76 [CI: 0.75–0.77] | Pass
Inner Ear (Cochlea, Semi-circular canals, internal auditory canal) 3D Reconstruction (CT) | Mean DICE coefficient ≥ 0.80; 95% CI lower limit ≥ 0.80 | DICE (R1): 0.82 [CI: 0.81–0.83]; (R2): 0.84 [CI: 0.83–0.85]; (R3): 0.85 [CI: 0.84–0.86] | Pass
Inner Ear (Cochlea, Semi-circular canals, internal auditory canal) 3D Reconstruction (MR) | Mean DICE coefficient ≥ 0.80; 95% CI lower limit ≥ 0.80 | DICE (R1): 0.81 [CI: 0.80–0.82]; (R2): 0.83 [CI: 0.82–0.84]; (R3): 0.84 [CI: 0.83–0.85] | Pass
Cochlear Parameters (CT) | Mean absolute error (MAE) of CDLoc measurement ≤ 1.5 mm | MAE (±SD) for CDLoc: R1: 0.59 ± 0.37 mm; R2: 0.64 ± 0.44 mm; R3: 0.62 ± 0.39 mm | Pass
Cochlear Parameters (MR) | Mean absolute error (MAE) of CDLoc measurement ≤ 1.5 mm | MAE (±SD) for CDLoc: R1: 0.56 ± 0.42 mm; R2: 0.70 ± 0.39 mm; R3: 0.64 ± 0.43 mm | Pass
Image Fusion (CT-CT) - Semitones | Maximum mean absolute semitone error per electrode contact | |
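Each criterion above pairs a mean threshold with a 95% confidence-interval bound, so a function passes only if both hold. A minimal sketch of how such a pass/fail rule can be checked from per-case scores is shown below; it assumes a t-distribution confidence interval and a simple DICE implementation, neither of which is stated in the submission.

```python
import numpy as np
from scipy import stats

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DICE = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def meets_criterion(scores, mean_threshold, ci_bound, upper=False):
    """Pass if the mean meets the threshold and the relevant 95% CI limit does too:
    the upper limit for error metrics like MAD, the lower limit for DICE."""
    scores = np.asarray(scores, dtype=float)
    mean = scores.mean()
    ci_low, ci_high = stats.t.interval(
        0.95, len(scores) - 1, loc=mean, scale=stats.sem(scores))
    if upper:  # e.g., MAD <= 0.6 mm with 95% CI upper limit <= 0.8 mm
        return mean <= mean_threshold and ci_high <= ci_bound
    # e.g., mean DICE >= 0.85 with 95% CI lower limit >= 0.85
    return mean >= mean_threshold and ci_low >= ci_bound
```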
(250 days)
CAScination AG
CAS-One IR is a user controlled, stereotactic accessory intended to assist in planning, navigation and manual advancement of one or more instruments, as well as in verification of instrument position and performance during Computed Tomography (CT) guided procedures.
In planning, the desired needle configuration and performance is defined relative to the target anatomy.
In navigation, the instrument position is displayed relative to the patient and guidance for needle alignment is provided while respiratory levels are monitored.
In verification, the achieved instrument configuration and performance are displayed relative to the previously defined plan through an overlay of the pre- and post- treatment image data.
CAS-One IR is indicated for use with rigid straight instruments such as needles and probes used in CT guided interventional procedures performed by physicians trained for CT procedures.
CAS-One IR is intended to be used for patients older than 18 years and eligible for CT-guided percutaneous interventions.
The system consists of the following main components:
- A mobile navigation platform: this platform can be moved in and out of radiology rooms and is positioned next to the patient in front of the CT scanner. The platform includes two touch screens, a camera, and a computer.
- Instruments: The instrument set comprises a guide arm, aiming device and a navigational pointer that are connected to each other and assist the user in aligning and positioning a needle trajectory relative to the patient. After positioning the aiming device using the guide arm, the aiming device is aligned with respect to the desired entry point (translational alignment) and rotationally oriented to the desired insertion angle.
- CAS-One IR software: The software provides the step-by-step workflow assistance for needle navigation. It provides a means for users to precisely plan a single or multiple needle trajectories, navigate a needle to this exact position and validate the inserted needle's position to the planned position.
Let's break down the information regarding the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) summary for CAS-One IR (K232022).
First, it's important to note that this 510(k) submission primarily focuses on demonstrating substantial equivalence to a predicate device (CAS-One IR, K152473). Therefore, the "study" described is a non-clinical performance testing and algorithm validation study, specifically addressing the differences and new features of the updated device. It is not an MRMC comparative effectiveness study or a typical standalone performance study with clinical endpoints.
Here's a breakdown of the requested information:
1. A table of acceptance criteria and the reported device performance
The document explicitly mentions acceptance criteria for the segmentation algorithms.
Structure / Metric | Acceptance Criterion (Mean DICE Coefficient) | Reported Device Performance
---|---|---
Liver | 0.9 | Passed
Tumor | 0.8 | Passed
Effective Treatment Volume | 0.8 | Passed
Kidney | 0.85 | Passed
Lung | 0.9 | Passed
Liver Vessels (Mean Centerline DICE) | 0.6 | Passed
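The summary does not define the "Mean Centerline DICE" used for liver vessels. One commonly used formulation for thin tubular structures is the clDice metric of Shit et al. (2021), sketched below using scikit-image's skeletonization; this is an assumption about the kind of metric meant, not a statement of CAScination's actual implementation.

```python
import numpy as np
from skimage.morphology import skeletonize

def cl_dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Centerline DICE: harmonic mean of topology precision and sensitivity,
    computed from mask skeletons; suited to thin, branching structures."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    skel_pred, skel_truth = skeletonize(pred), skeletonize(truth)
    tprec = (skel_pred & truth).sum() / max(skel_pred.sum(), 1)   # prediction skeleton inside truth
    tsens = (skel_truth & pred).sum() / max(skel_truth.sum(), 1)  # truth skeleton inside prediction
    return 2 * tprec * tsens / max(tprec + tsens, 1e-9)
```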
For instrument detection algorithms, the performance is generally described as "reliability was gauged by analyzing the ground truth positions and the positions identified by the algorithm," and "These validation efforts provide a robust foundation for asserting the accuracy and effectiveness of the algorithms." Specific quantitative performance metrics for instrument detection are not provided in this summary, but it states they were assessed against ground truth.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The document does not specify the sample size for the test set used for algorithm validation. It also doesn't provide information about the data provenance (e.g., country of origin, retrospective or prospective nature). It only mentions "ground truth data annotated by personnel considered expert in the domain."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
The document states that the ground truth data was "annotated by personnel considered expert in the domain." It does not specify the number of experts or their specific qualifications (e.g., years of experience, specific medical specialty like radiologist).
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not specify any adjudication method for establishing the ground truth for the test set. It simply states "annotated by personnel considered expert in the domain."
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not conducted. The document explicitly states: "Clinical testing was not required to demonstrate the safety and effectiveness of the device." The studies performed were non-clinical performance and algorithm validation.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
Yes, a standalone algorithm validation was performed. The "Algorithm validation" section describes testing the segmentation algorithms (comparing mean DICE coefficient with state-of-the-art algorithms) and instrument detection algorithms (gauging reliability by comparing algorithm-identified positions with ground truth). These are evaluations of the algorithm's performance in isolation.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth used for algorithm validation was expert annotation/segmentation. The document states, "Test protocols were systematically executed to assess the performance of the algorithmic validation procedures involved comparisons with ground truth data annotated by personnel considered expert in the domain." This implies the ground truth for segmentation and instrument positions was established by human experts.
8. The sample size for the training set
The document does not provide the sample size for the training set. It focuses on the validation of the algorithms rather than their development or training data.
9. How the ground truth for the training set was established
The document does not provide information on how the ground truth for the training set was established. Given the focus on substantial equivalence and non-clinical testing, this level of detail about training data is typically not required in a 510(k) summary if the primary claim relies on equivalence and validation of specific new features.
(142 days)
Cascination AG
OTOPLAN is intended to be used by otologists and neurotologists as a software interface allowing the display, segmentation, and transfer of medical image data from medical CT, MR, and XA imaging systems to investigate anatomy relevant for the preoperative planning and postoperative assessment of otological procedures (e.g., cochlear implantation).
OTOPLAN consolidates a DICOM viewer, ruler function, and calculator function into one software platform. The user can
- import DICOM-conform medical images and view these images.
- navigate through the images and segment ENT-relevant structures (semi-automatic), which can be highlighted in the 2D images and 3D view.
- use a virtual ruler to geometrically measure distances and a calculator to apply established formulae to estimate cochlear length and frequency.
- create a virtual trajectory, which can be displayed in the 2D images and 3D view.
- identify electrode array contacts of a cochlear implant to assess electrode insertion and position.
- input audiogram-related data that were generated during audiological testing with a standard audiometer and visualize them in OTOPLAN.

OTOPLAN allows the visualization of third-party information, that is, a cochlear implant electrode array portfolio. The information provided by OTOPLAN is solely assistive and for the benefit of the user. All tasks performed with OTOPLAN require user interaction; OTOPLAN does not alter data sets but constitutes a software platform to perform tasks that are otherwise performed manually. Therefore, the user is required to have clinical experience and judgment.

OTOPLAN is designed to run on a PC and requires the 64-bit Microsoft Windows 10 operating system. A PDF Reader such as Adobe Acrobat is recommended to access the instructions for use. For computation and usability purposes, the software is designed to be executed on a computer with touch screen capabilities.
The provided text discusses the OTOPLAN device (v2.0) and its substantial equivalence to a predicate device (OTOPLAN v1.3). The information regarding acceptance criteria and a detailed study proving the device meets these criteria is not fully presented in a standalone format as requested for all fields. However, based on the available text, I can extract and infer the following:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of acceptance criteria with specific numerical targets and performance metrics for the OTOPLAN v2.0 device itself. Instead, it focuses on demonstrating substantial equivalence to the predicate device, OTOPLAN v1.3, and verifying the new features.
However, for the new feature of "Electrode Contact Identification," performance testing was conducted. While specific numerical acceptance criteria (e.g., accuracy percentages) are not explicitly stated in a table, the conclusion states that the "testing demonstrated that the algorithm can accurately identify the electrode contacts."
Since the document stresses "substantial equivalence" and the safety/effectiveness of the updated device, the implicit acceptance criteria are that the OTOPLAN v2.0 performs at least as well as and does not adversely affect the safety and effectiveness compared to the predicate device, and for new features, they perform "accurately."
Feature/Metric | Acceptance Criteria (Implicit) | Reported Device Performance
---|---|---
All Existing Functions | Substantially equivalent to OTOPLAN v1.3; does not adversely affect safety and effectiveness. Software design verification and validation, hazard analysis, and established moderate level of concern. | OTOPLAN v2.0 maintains the same intended use and functions as OTOPLAN v1.3 for cochlear parametrization, audiogram, virtual trajectory planning, postoperative quality checks, and export report. Existing 3D reconstruction functions (temporal bone, incus, malleus, stapes, facial nerve, chorda tympani, external ear canal) are also the same. Performance is demonstrated through internal testing and software validation.
New 3D Reconstruction Functions (Cochlea, Sigmoid sinus, Cochlear bony overhang, Cochlear round window) | Same technological characteristics as functions in the predicate device (e.g., uses similar reconstruction methods). Safety and performance demonstrated through software validation activities and documentation. | These functions use the same reconstruction methods and processes as existing functions in the predicate device; for example, the cochlea uses the same method as temporal bone reconstruction. This was verified through software validation.
New 3D Reconstruction Function (Electrode contacts - automatic detection) | Accurate identification of electrode contacts. Does not adversely affect the safety and effectiveness of the subject device. | "The testing demonstrated that the algorithm can accurately identify the electrode contacts." Performance was demonstrated through specific non-clinical performance testing and software validation using human temporal bone cadaver specimens.
Overall Safety and Effectiveness | Substantially equivalent to the predicate device with regard to intended use, safety, and effectiveness. | The subject device is concluded to be substantially equivalent to the predicate device based on comparison of intended use, technological characteristics, and non-clinical performance testing (Software Verification and Validation, Human Factors and Usability Validation, Internal Test Standards).
2. Sample Size Used for the Test Set and Data Provenance
For the specific new feature of "Electrode Contact Identification":
- Sample Size for Test Set: "human temporal bone cadaver specimens" (the exact number is not specified).
- Data Provenance: The specimens were "scanned with a Micro CT" (for ground truth) and "clinical CTs" (for test datasets). This implies a laboratory or research setting. The country of origin is not explicitly stated. The study is likely retrospective as it uses pre-existing or specially prepared cadaver specimens rather than living patients.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- The document does not explicitly state the number or qualifications of experts used to establish the ground truth for the "Electrode Contact Identification" test set. It only states that electrode contacts were "marked for the ground truth dataset."
4. Adjudication Method for the Test Set
- The document does not describe an explicit adjudication method (e.g., 2+1, 3+1). It only mentions that electrode contacts were "marked for the ground truth dataset" for the micro CT scans.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- No, an MRMC comparative effectiveness study was not done. The document primarily focuses on demonstrating substantial equivalence to a predicate device and verifying new features, not on the comparative effectiveness of human readers with vs. without AI assistance. The device is described as "assistive" and requiring "user interaction," but no study on human performance improvement is detailed. Human Factors and Usability Validation was performed on the predicate device, not a comparative effectiveness study with AI assistance.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Assessment Was Done
- Yes, a standalone performance test was done for the "Electrode Contact Identification" algorithm. The text states: "The electrode contact identification algorithm has been applied on the test dataset. The testing demonstrated that the algorithm can accurately identify the electrode contacts." This confirms standalone algorithm testing. The user then "reviews the result and can manually adjust the contacts points," indicating the human-in-the-loop aspect during clinical use, but the initial detection was algorithm-only.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
- For the "Electrode Contact Identification" feature: The ground truth was established by "electrode contacts marked" on "human temporal bone cadaver specimens" scanned with a Micro CT. This suggests expert marking/annotation on high-resolution imaging (Micro CT is considered a gold standard for anatomical detail beyond clinical CT).
8. The Sample Size for the Training Set
- The document does not provide information on the sample size for the training set for any of the algorithms or features. It focuses on the validation of the new features.
9. How the Ground Truth for the Training Set Was Established
- Since the sample size for the training set is not provided, the method for establishing its ground truth is also not described in this document.
(266 days)
CAScination AG
OTOPLAN is intended to be used by otologists and neurotologists as a software interface allowing the display, segmentation, and transfer of medical image data from medical CT, MR, and XA imaging systems to investigate anatomy relevant for the preoperative planning and postoperative assessment of otological procedures (e.g., cochlear implantation).
OTOPLAN consolidates a DICOM viewer, ruler function, and calculator function into one software platform. The user can
- import DICOM-conform medical images and view these images.
- navigate through the images and segment ENT-relevant structures (semi-automatic), which can be highlighted in the 2D images and 3D view.
- use a virtual ruler to geometrically measure distances and a calculator to apply established formulae to estimate cochlear length and frequency.
- create a virtual trajectory, which can be displayed in the 2D images and 3D view.
- identify electrode array contacts of a cochlear implant to assess electrode insertion and position.
- input audiogram-related data that were generated during audiological testing with a standard audiometer and visualize them in OTOPLAN.
OTOPLAN allows the visualization of third-party information, that is, a cochlear implant electrode array portfolio.
The information provided by OTOPLAN is solely assistive and for the benefit of the user. All tasks performed with OTOPLAN require user interaction; OTOPLAN does not alter data sets but constitutes a software platform to perform tasks that are otherwise performed manually. Therefore, the user is required to have clinical experience and judgment.
OTOPLAN is designed to run on a PC and requires the 64-bit Microsoft Windows 10 operating system. A PDF Reader such as Adobe Acrobat is recommended to access the instructions for use.
For computation and usability purposes, the software is designed to be executed on a computer with touch screen capabilities. The minimum hardware requirements are:
- 12.3 in wide screen
- 8 GB of RAM
- 2-core CPU (such as a 5th-generation i5 or i7)
- dedicated GPU with OpenGL 4.0 capabilities
- 250 GB hard drive
The provided text is a 510(k) summary for the OTOPLAN device. This document primarily focuses on demonstrating substantial equivalence to a predicate device rather than providing a detailed clinical study report with specific acceptance criteria and performance metrics for an AI/algorithm component.
Based on the provided text, OTOPLAN is described as a software interface for displaying, segmenting, and transferring medical image data for pre-operative planning and post-operative assessment. It does include functions like semi-automatic segmentation and calculations based on manual 2D measurements, but it largely appears to be a tool that assists human users and does not replace their judgment or perform fully autonomous diagnostics. Therefore, it's unlikely to have the kind of acceptance criteria typically seen for AI/ML diagnostic algorithms (e.g., sensitivity, specificity, AUC).
The document states that "Clinical testing was not required to demonstrate the safety and effectiveness of OTOPLAN. This conclusion is based upon a comparison of intended use, technological characteristics, and nonclinical performance data (Software Verification and Validation Testing, Human Factors and Usability Validation, and Internal Test Standards)." This explicitly means there was no clinical study of the type that would prove the device meets acceptance criteria related to diagnostic performance.
However, I can extract information related to the closest aspects of "acceptance criteria" and "study that proves the device meets the acceptance criteria" from the provided text, focusing on the software's functional performance and usability. Since this is not a diagnostic AI/ML device in the sense of making independent clinical decisions, the "acceptance criteria" will be related to its intended functions and safety.
Here's a breakdown based on the information available:
1. A table of acceptance criteria and the reported device performance
The document does not provide a formal table of specific, quantifiable performance acceptance criteria (e.g., segmentation accuracy, measurement precision) with numerical results as one would expect for an AI diagnostic algorithm. Instead, the "performance" is demonstrated through various validation activities.
Category | Acceptance Criteria (Implied from Testing Focus) | Reported Device Performance
---|---|---
Software Functionality | Software functions as intended; outputs are accurate and reliable (e.g., correct calculation of cochlear length, correct display of information, accurate 2D measurements). Software is a "moderate" level of concern. | "All tests have been passed and demonstrate that no question on safety and effectiveness is raised by this technological difference." "The internal tests demonstrate that the subject device can fulfill the expected performance characteristics and no questions of safety or performance were raised." (Referencing comparison with known dimensions.)
Human Factors & Usability | Device is safe and effective for intended users, uses, and use environments; users can successfully perform tasks and there are no critical usability errors. Conformance to FDA guidance and AAMI/ANSI/IEC 62366-1:2015. | "OTOPLAN has been found to be safe and effective for the intended users, uses and use environments."
Safety and Effectiveness | No questions of safety or effectiveness are raised by technological differences or overall device operation. | "The subject device is equivalent to the predicate device with regard to intended use, safety and efficacy." "The subject device is substantially equivalent to the predicate device with regard to device performance."
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Software Verification and Validation Testing & Internal Test Standards:
- The document mentions "tests with known dimensions which were loaded into OTOPLAN." No specific "sample size" of medical images or data is mentioned for these internal software tests, nor is the provenance of this "known dimension" data explicitly stated (e.g., synthetic, real anonymized clinical data). Given it's internal testing of software functionality rather than clinical performance, it's likely proprietary test cases.
- Human Factors and Usability Validation:
- Sample Size: "15 users from each user group." (User groups are not specified, but typically refer to the intended users like otologists and neurotologists).
- Data Provenance: "to be carried out in the US". This implies prospective usability testing with human users.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- Software Verification and Validation & Internal Test Standards: The concept of "ground truth" as established by experts for medical image interpretation is not directly applicable here for these functional tests. The ground truth refers to "known dimensions" or expected calculation results, which are determined by the software developers and internal quality processes rather than expert radiologists.
- Human Factors and Usability Validation: No "ground truth" in the diagnostic sense is established by experts for this type of testing. The "ground truth" for usability testing relates to whether users can successfully complete tasks and if the device performs as expected according to the user.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
Not applicable. "Adjudication" methods (like 2+1 or 3+1 consensus) are used to establish ground truth in clinical image interpretation studies, typically when there's ambiguity or disagreement among expert readers. Since no clinical study involving image interpretation by multiple readers in this manner was performed (as explicitly stated that clinical testing was not required), no such adjudication method was used.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
No. The document explicitly states: "Clinical testing was not required to demonstrate the safety and effectiveness of OTOPLAN." Therefore, no MRMC comparative effectiveness study was conducted.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Performance: The documentation focuses on the software's functional correctness. It states that OTOPLAN "does not alter data sets but constitutes a software platform to perform tasks that are otherwise performed manually." It emphasizes that "All tasks performed with OTOPLAN require user interaction" and "the user is required to have clinical experience and judgment."
- The internal tests seem to evaluate the standalone computational aspects (e.g., "correct calculation according to the published formula and display of the information," "tests with known dimensions which were loaded into OTOPLAN and results compared to the known dimension"). This validates the algorithm's performance for specific computational tasks but not its overall clinical diagnostic performance in a "standalone" fashion that replaces human judgment.
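To make the "known dimensions" style of verification concrete, here is a minimal sketch of such a test: a synthetic segment of known physical length is "measured" and compared to the expected value within a tolerance. The voxel spacing, dimensions, and tolerance are hypothetical; the submission does not state them.

```python
import numpy as np

def test_ruler_against_known_dimension():
    """Verification-style test: measure a synthetic object of known size and
    require the result to match the known dimension within a tolerance."""
    voxel_spacing_mm = 0.5          # hypothetical isotropic spacing
    known_length_mm = 25.0          # ground truth by construction
    p1 = np.array([10, 10, 10])     # segment endpoints in voxel indices,
    p2 = np.array([10, 10, 60])     # spanning 50 voxels along one axis
    measured_mm = np.linalg.norm((p2 - p1) * voxel_spacing_mm)
    assert abs(measured_mm - known_length_mm) <= 0.1, measured_mm

test_ruler_against_known_dimension()  # passes: 50 voxels * 0.5 mm = 25.0 mm
```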
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Software Verification and Validation & Internal Test Standards: "Known dimensions" and "published formulas" for calculations. This indicates a ground truth based on pre-defined, mathematically verifiable inputs and outputs.
- No ground truth from expert consensus, pathology, or outcomes data was used, as no clinical study was performed.
8. The sample size for the training set
The document describes OTOPLAN as a software interface with functions like segmentation and measurement, often based on user interaction or published formulas. It does not describe a machine learning or deep learning model that requires a "training set" in the conventional sense. The "semi-automatic" segmentation is mentioned, but if it uses algorithms that learn from data, no information is provided about such a training set size. This device appears to be a software tool with algorithmic functions rather than a continuously learning AI model.
9. How the ground truth for the training set was established
Not applicable, as no "training set" for a machine learning model is described in the document.
(142 days)
CAScination AG
CAS-One IR is a user controlled, stereotactic accessory intended to assist in planning, navigation and manual advancement of one or more instruments, as well as in verification of instrument position and performance during Computed Tomography (CT) guided procedures.
In planning, the desired needle configuration and performance is defined relative to the target anatomy.
In navigation, the instrument position is displayed relative to the patient and guidance for needle alignment is provided while respiratory levels are monitored.
In verification, the achieved instrument configuration and performance are displayed relative to the previously defined plan through an overlay of the pre- and post- treatment image data.
CAS-One IR is indicated for use with rigid straight instruments such as needles and probes used in CT guided interventional procedures performed by physicians trained for CT procedures.
The system consists of the following main components:
- A mobile navigation platform can be moved in and out of radiology rooms and is positioned next to the patient in front of the CT scanner. The platform includes two touch screens, a camera and a computer.
- Aiming device with trackable aiming insert: To aim the needles to their correct locations, the system uses an aiming device. The aiming device is attached to a multi-axis mechanical arm that can align the position of the aiming device around the expected needle entry position. The aiming device is first aligned to the desired entry point (translational alignment) and then alignment to the desired needle insertion angle is performed using a remote center of rotation principle (rotational alignment). There are two possible configurations of the aiming device.
- Instrument adapter clamp with trackable marker shield: As an alternative to the aiming device, trackable marker shields can be attached directly to rigid needles by means of an instrument adapter. Calibration of the needle geometry is performed with a calibration unit supplied by CAScination.
- CAS-One IR software: The software provides the step-by-step workflow assistance for needle navigation. It provides a means for users to precisely plan a single or multiple needle trajectories, navigate a needle to this exact position and validate the inserted needle's position to the planned position.
The provided document is a 510(k) summary for the CAS-One IR device, detailing its intended use, description, and substantial equivalence to predicate devices. It mentions various tests performed for performance data, but does not explicitly state specific acceptance criteria or provide the detailed results of a study that directly proves the device meets those criteria in a quantitative manner.
Therefore, I cannot populate a table of acceptance criteria and reported device performance directly from this document. However, I can infer the types of performance data collected and the general conclusions drawn, addressing the other points as much as possible.
Here's a breakdown of the available information:
1. A table of acceptance criteria and the reported device performance
As stated, specific numerical acceptance criteria and their corresponding reported performance values are not provided in this document. The document primarily focuses on demonstrating substantial equivalence to predicate devices through various tests and evaluations.
However, based on the "Performance Data" section, we can infer the aspects of performance that were evaluated and the high-level conclusions:
Performance Aspect Evaluated (Inferred Acceptance Criteria) | Reported Device Performance (General Conclusion) |
---|---|
Positional Accuracy (bench test, compared to predicate) | Substantially equivalent to predicate technology |
Patient Registration Method Safety & Effectiveness | Safe and effective method of registering |
Integrated Clinical Workflow | Safe and effective (benchmarked against predicates) |
Accuracy of Needle Insertion Configurations (phantom study) | Accurate and as safe/effective as predicate devices |
Clinical Accuracy and Safety (post-clinical evaluation) | Accurate and as safe/effective on patients as predicate devices |
Usability (risk management & human factors) | Easy and accurate to use for both novice and experienced users |
2. Sample size used for the test set and the data provenance
- Sample size for test set: The document does not specify the sample size for any of the performance tests (e.g., number of cases in the phantom study, number of patients in the post-clinical evaluation, number of users in usability studies).
- Data provenance: Not specified (e.g., country of origin, retrospective or prospective nature of clinical evaluation). The document mentions a "post-clinical evaluation of interventions conducted with CAS-One IR," which implies real-world clinical data, likely retrospective if not a formal prospective clinical trial structured for regulatory submission.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- This information is not provided in the document. The document mentions "physicians trained for CT procedures" as the intended users, and "qualified users of varying degrees of experience" for usability studies, but no details on who established ground truth for performance evaluations.
4. Adjudication method for the test set
- This information is not provided in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
- An MRMC comparative effectiveness study is not mentioned in the document. The device is described as a "user controlled, stereotactic accessory intended to assist in planning, navigation and manual advancement," which implies human-in-the-loop operation rather than a standalone AI diagnostic tool. The performance description focuses on the accuracy and usability of the system as a whole, not specifically on the impact of an AI component on human reader performance improvement.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- A standalone (algorithm-only) performance evaluation is not explicitly mentioned for the CAS-One IR. The device is presented as an assistive system for user-controlled procedures. The "Software Verification and Validation" section confirms the software's moderate level of concern due to potential for minor injury, and that "Verification testing appropriate to the software classification was carried out." However, this relates to software quality assurance, not a standalone performance assessment of an AI algorithm in a diagnostic context.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- The document does not explicitly state the type of ground truth used for performance evaluations. For the "accuracy test that evaluated all needle insertion configurations... on a phantom," the ground truth would likely be the known physical positions on the phantom. For the "post-clinical evaluation," the ground truth for "accuracy" would likely relate to the achieved needle placement relative to the planned placement as observed during the procedure or from follow-up imaging, which could be considered a form of outcome data or expert assessment during the intervention.
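For the phantom accuracy testing described above, typical bench metrics for a stereotactic accessory are the needle tip error and the angular deviation between planned and achieved trajectories. The sketch below illustrates those common metrics; the submission does not define which metrics CAScination actually used.

```python
import numpy as np

def trajectory_errors(planned_tip, planned_dir, achieved_tip, achieved_dir):
    """Tip error (mm) and angular deviation (degrees) between a planned and an
    achieved needle trajectory, each given as a tip point and a direction vector."""
    tip_error = np.linalg.norm(np.asarray(achieved_tip, float) - np.asarray(planned_tip, float))
    u = np.asarray(planned_dir, float)
    u /= np.linalg.norm(u)
    v = np.asarray(achieved_dir, float)
    v /= np.linalg.norm(v)
    angle_deg = np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))
    return tip_error, angle_deg

# Hypothetical phantom measurement:
tip_err, ang = trajectory_errors([0, 0, 0], [0, 0, 1], [0.8, 0.3, 0.2], [0.02, 0.01, 1.0])
print(f"tip error: {tip_err:.2f} mm, angular deviation: {ang:.2f} deg")
```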
8. The sample size for the training set
- This information is not provided in the document. The CAS-One IR is described as a "user controlled, stereotactic accessory" for navigation. It's not explicitly presented as an AI/ML device that requires a large training set in the typical sense (e.g., for image classification or segmentation). While it has software, its core function is guidance, not predictive analytics based on historical data. If there are any learning components, their training set details are not disclosed.
9. How the ground truth for the training set was established
- As the sample size and nature of a training set (in the context of AI/ML) are not provided, information on how its ground truth was established is also not available in this document.
(189 days)
CAScination AG
The CAS-One Liver system is indicated for open liver surgical procedures where image guidance may be appropriate and where the patient can tolerate long apneic periods under general anesthesia.
The system visualizes the position and pose of surgical instruments relative to a three-dimensional model of the patient's liver in real time.
The provided text describes the CAS-One Liver system, its indications for use, and a summary of performance data submitted for its 510(k) premarket notification. However, it does not contain specific acceptance criteria for device performance or a detailed study proving the device meets those criteria.
The document states:
- "Bench testing to show the accuracy and reproducibility was conducted and shown to meet the defined acceptance criteria for various functionality of the system (such as calibration, tracking and registration)."
This indicates that acceptance criteria were defined and testing was performed, but the document does not report what those criteria were, nor does it provide the results of the tests against those criteria.
Therefore, I cannot provide the requested information about acceptance criteria, reported device performance, sample sizes, ground truth, or details of a comparative effectiveness study.
Here's what can be extracted, based on the absence of the requested information:
1. Table of acceptance criteria and reported device performance:
- The document states "Bench testing to show the accuracy and reproducibility was conducted and shown to meet the defined acceptance criteria for various functionality of the system (such as calibration, tracking and registration)."
- CRITERIA NOT SPECIFIED.
- REPORTED PERFORMANCE NOT SPECIFIED.
2. Sample size used for the test set and the data provenance:
- NOT SPECIFIED. The document only mentions "Bench testing."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- NOT SPECIFIED, as details of the "Bench testing" are not provided.
4. Adjudication method for the test set:
- NOT SPECIFIED.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size:
- NO, an MRMC comparative effectiveness study is not mentioned. The device is a navigation system for surgical procedures, and the testing described is "Bench testing" for accuracy and reproducibility, not a study of human reader improvement with AI assistance.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- The "Bench testing" for "accuracy and reproducibility" of "calibration, tracking and registration" could be considered standalone performance, as it tests the system's technical function. However, the exact details of whether and how "human-in-the-loop" involvement was excluded or included are NOT SPECIFIED.
7. The type of ground truth used:
- For "Bench testing" of a navigation system, ground truth would typically be established through precise measurements from calibrated instruments, phantoms, or simulated environments. However, the specific type of ground truth used for the CAS-One Liver system's bench testing is NOT SPECIFIED.
8. The sample size for the training set:
- The document implies a "virtual surgery planning tool" (MeVis Medical Solutions, Bremen, Germany) that processes CT/MRI scans. This tool likely uses algorithms that would have been "trained," but the details of any training set for the CAS-One Liver system's own components (if applicable for navigation algorithms) are NOT SPECIFIED. The document focuses on the end-use navigation system's performance.
9. How the ground truth for the training set was established:
- NOT SPECIFIED.
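As context for what "registration" bench testing typically quantifies in surgical navigation, below is a minimal sketch of point-based rigid registration (the Kabsch/Horn SVD method) and its fiducial registration error; this is a generic illustration, not CAScination's implementation.

```python
import numpy as np

def rigid_register(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (rotation R, translation t) mapping src
    fiducial points onto dst points (Kabsch/Horn method via SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def fiducial_registration_error(src, dst, R, t):
    """RMS distance between transformed source fiducials and their targets."""
    residuals = (R @ src.T).T + t - dst
    return np.sqrt((residuals ** 2).sum(axis=1).mean())
```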