Search Results
Found 5 results
(103 days)
Therenva SAS
EndoNaut is an image fusion software solution and computerized navigational system intended to assist X-ray fluoroscopy-guided procedures in the positioning of surgical instruments and endovascular devices.
EndoNaut is indicated for use by Physicians for patients undergoing a fluoroscopy X-ray guided procedure in the chest, abdomen, pelvis, neck and lower limbs, such as aneurysm repair, artery/vein embolization, or peripheral artery disease treatment.
The information provided by the software or system is in no way intended to substitute for, in whole or in part, the surgeon's judgment and analysis of the patient's condition.
It is mandatory to check the real-time anatomy with a suitable imaging technique, such as a contrast-enhanced angiography, before deploying any invasive medical device.
EndoNaut is a computerized navigation system consisting of a software part, which carries the controlled medical features and technologies and can be installed on a hardware part that enables the medical device to be used in accordance with its intended purpose.
EndoNaut is intended to assist X-ray fluoroscopy-guided procedures in the positioning of surgical instruments and endovascular devices.
EndoNaut Software parts are supported by hardware and software accessories that enable image display and interaction with the user.
The Software part is interoperable with EndoSize, a standalone software designed and developed by Therenva to enable case-planning strategy and device (endoprosthesis) selection before an endovascular procedure. EndoSize is used by practitioners (in the preparation phase of the operating procedure) or by endoprosthesis manufacturers to visualize vascular structures and/or extract the vascular structure from the preoperative CT scan. EndoSize is medical device software that obtained a substantial equivalence determination and FDA clearance through the CDRH premarket notification process (510(k)) (No. K160376).
The provided text is a 510(k) Summary for the medical device EndoNaut. It details the device, its intended use, and a comparison to a predicate device to establish substantial equivalence. However, it does not contain information about a specific study proving the device meets acceptance criteria related to its performance in terms of AI model accuracy, such as sensitivity, specificity, or reader study outcomes.
The document primarily focuses on demonstrating that the new version of EndoNaut (the subject device) has a similar intended use, functionalities, and safety/effectiveness profile as its previously cleared predicate device ([K212383](https://510k.innolitics.com/search/K212383)), despite some architectural changes (e.g., from standalone software to a server/client model). It emphasizes design verification and validation activities rather than clinical performance studies for AI accuracy.
Here's a breakdown of why the requested information cannot be fully extracted and what can be inferred:
Unable to provide a table of acceptance criteria and reported device performance related to AI model accuracy. The document states:
- "The subject of this premarket submission did not require clinical studies to support equivalence." (Page 17)
- It mentions "Verification and validation activities have demonstrated that the EndoNaut (Server) software variant performs equally as the EndoNaut (Standalone) predicate software by providing reliable results, without functional regression and moreover, offers robust safety/security mechanisms." (Page 18)
- It refers to "Features which call machine-learning algorithms: Registration 3D/2D, Motion detection, Contrast injection detection" (Page 6). It states that "[t]he algorithms have not been changed. Only the way they are implemented is different... These implementation changes do not change the purpose of the algorithms (they do what they did before)." (Page 6)
This implies that the "performance" validated was primarily related to the continued functionality and safety of the existing algorithms within the new architecture, rather than a re-evaluation of their diagnostic accuracy or impact on human performance in a clinical setting. Thus, there are no specific accuracy metrics (e.g., sensitivity, specificity, AUC) or reader study results reported for the device in this document.
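The document names these machine-learning features only by purpose, with no implementation detail. Purely as an illustration of what a motion-detection check on consecutive fluoroscopy frames could look like, and emphatically not Therenva's actual algorithm, here is a minimal frame-differencing sketch (all thresholds and the test data are assumptions):

```python
# Illustrative only: a naive frame-differencing motion check on a
# fluoroscopy feed. This is NOT Therenva's algorithm; the document only
# states that an ML-based "motion detection" feature exists.
import numpy as np

def motion_detected(prev_frame: np.ndarray, frame: np.ndarray,
                    pixel_thresh: int = 25, area_frac: float = 0.02) -> bool:
    """Flag motion when enough pixels change between consecutive frames.

    prev_frame, frame: 2D uint8 grayscale fluoroscopy frames.
    pixel_thresh: minimum per-pixel intensity change to count as "moved".
    area_frac: fraction of the image that must change to report motion.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = (diff > pixel_thresh).mean()
    return changed > area_frac

# Toy check: a static frame vs. a shifted copy of itself.
rng = np.random.default_rng(0)
f0 = (rng.random((512, 512)) * 255).astype(np.uint8)
f1 = np.roll(f0, 8, axis=0)          # simulate table/patient motion
print(motion_detected(f0, f0))       # False
print(motion_detected(f0, f1))       # True
```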
Based on the provided text, the following information is either explicitly stated, can be inferred, or is explicitly absent:
- A table of acceptance criteria and the reported device performance:
  - Acceptance Criteria for AI Performance (e.g., sensitivity, specificity): Not provided in the document. The submission focuses on substantial equivalence to a predicate device and verification of existing algorithms within a new architecture, rather than establishing new performance benchmarks for AI accuracy.
  - Reported Device Performance: No quantitative performance metrics (e.g., sensitivity, specificity, accuracy, AUC) related to the AI algorithms' diagnostic capabilities are reported. The performance discussion centers on "functional regression" checks and demonstrating "reliable results" and "robust safety/security mechanisms" for the software change (Page 18).
- Sample size used for the test set and the data provenance: Not specified in the context of AI performance evaluation. The document mentions "Simulated use testing (Validation)" (Page 18) but does not detail the dataset used, its size, or provenance.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not specified. As no clinical studies or specific accuracy evaluations are detailed, the ground truth establishment for such performance metrics is not discussed.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set: Not specified.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance: Not done/reported. The document explicitly states: "The subject of this premarket submission did not require clinical studies to support equivalence." (Page 17)
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done: Not explicitly detailed with performance metrics. The document confirms the presence of machine learning algorithms for "Registration 3D/2D, Motion detection, Contrast injection detection" and states that "The algorithms have not been changed. Only the way they are implemented is different... These implementation changes do not change the purpose of the algorithms (they do what they did before)." (Page 6). This suggests that the algorithms themselves were part of the predicate device and their function was maintained, but a formal standalone performance study for this specific submission is not presented. The verification activities might have included internal functional testing of these algorithms.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not specified regarding the AI features. For the general functionality, the "ground truth" would be the expected behavior and output based on design specifications and the predicate device's performance.
- The sample size for the training set: Not specified. The document states that the machine learning algorithms "have not been changed" from the predicate device (Page 6), implying that any training would have occurred for the predicate device, but details are not provided here.
- How the ground truth for the training set was established: Not specified.
Summary based on the document:
This 510(k) submission for EndoNaut (K222070) is primarily concerned with demonstrating substantial equivalence to a previously cleared predicate device (EndoNaut K212383) following changes in software architecture and minor hardware updates. It relies on:
- Design verification and validation (V&V) activities: These include risk assessments, usability reviews, requirement reviews, design reviews, clinical evaluation report reviews, testing on unit level, integration testing, interoperability testing, performance testing, safety testing, and simulated use testing (Page 18).
- No new clinical studies: The submission explicitly states that clinical studies were not required to support equivalence (Page 17).
- Consistency of AI algorithms: The document clarifies that the underlying machine learning algorithms for features like 3D/2D registration, motion detection, and contrast injection detection have not been changed but their implementation differs due to the new software architecture. The validation activities focused on ensuring these algorithms perform "equally as the EndoNaut (Standalone) predicate software" (Page 18) without "functional regression" and maintaining safety and security.
Therefore, the document does not provide the detailed acceptance criteria or performance study results typically associated with AI model accuracy (e.g., sensitivity, specificity, reader studies) for a new AI function or a significant modification that would necessitate re-evaluation of diagnostic performance. Instead, it attests to the maintained functionality and safety of existing features within an updated system.
(29 days)
Therenva SAS
EndoNaut is indicated for the treatment of patients with endovascular diseases who need procedures such as, but not limited to, the following:
- endovascular aortic aneurysm repair (AAA and TAA),
- angioplasty,
- stenting,
- embolization in iliac arteries and corresponding veins.
EndoNaut is indicated for endovascular procedures in the thorax, abdomen, pelvis and lower limbs.
The EndoNaut system is an imaging solution providing intraoperative navigation and guidance for endovascular procedures (aorto-iliac and peripheral).
EndoNaut provides localization assistance by combining 3D preoperative scans and 2D intra-operative fluoroscopy imagery to help position guides, catheters and other vascular devices.
EndoNaut Software is interoperable with EndoSize, a standalone software designed and developed by Therenva to enable case-planning strategy and device (endoprosthesis) selection before an endovascular procedure.
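To make the fusion described above concrete: overlaying 3D preoperative anatomy onto 2D fluoroscopy reduces, geometrically, to projecting registered 3D points through a C-arm (pinhole) model onto the image plane. The sketch below illustrates only this generic geometry under assumed intrinsics and pose; EndoNaut's actual registration pipeline is not disclosed in the document:

```python
# A minimal sketch of the geometry behind 2D/3D fusion: projecting 3D
# preoperative vessel points onto a 2D fluoroscopy image plane with a
# pinhole (C-arm) model. Assumed for illustration only.
import numpy as np

def project_points(points_3d: np.ndarray, K: np.ndarray,
                   R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Project Nx3 world points (mm) to Nx2 pixel coordinates.

    K: 3x3 intrinsic matrix (source-to-detector geometry).
    R, t: 3x3 rotation and 3-vector translation (patient-to-C-arm pose,
    i.e. what a 3D/2D registration would estimate).
    """
    cam = points_3d @ R.T + t          # world -> C-arm coordinates
    uvw = cam @ K.T                    # perspective projection
    return uvw[:, :2] / uvw[:, 2:3]    # homogeneous divide -> pixels

# Toy example: a few centerline points, identity pose, simple intrinsics.
K = np.array([[1000.0, 0, 256], [0, 1000.0, 256], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 800.0])   # 800 mm from the source
pts = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0]])
print(project_points(pts, K, R, t))
```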
The provided document is a 510(k) summary for a modified medical device, EndoNaut, and focuses on demonstrating substantial equivalence to a predicate device (also named EndoNaut, K171829). It does not contain detailed acceptance criteria, device performance metrics, or study design information typically found in a clinical study report for proving device effectiveness.
The document primarily outlines minor changes in hardware components and software naming conventions, and reaffirms compliance with relevant standards. It explicitly states that "No new risks or possible errors were detected or identified. New clinical data were not necessary."
Therefore, I cannot provide detailed acceptance criteria or a study that proves the device meets specific acceptance criteria in the way you requested. The document does not describe such a study.
However, I can extract the information related to the device's validation and verification activities, which are general statements of compliance rather than specific performance metrics against pre-defined acceptance criteria.
Based on the provided document, here's what can be extracted regarding acceptance criteria and "study" (referring to verification and validation activities):
The document does not detail specific, quantitative acceptance criteria for device performance. Instead, it relies on demonstrating that the modified device (EndoNaut, subject device) is substantially equivalent to the predicate device (EndoNaut, K171829) and continues to meet general safety and effectiveness requirements. The "study" in this context refers to the design verification and validation activities conducted as part of the quality assurance system.
1. A table of acceptance criteria and the reported device performance
No specific, quantitative acceptance criteria are provided in the document, nor are specific performance metrics reported against such criteria. The document states "These tests showed that the EndoNaut Workstation TS1CA2DS1-2 meets the design specification and performed as intended." The "acceptance criteria" are implied to be conformance with "design specification" and "performed as intended," as well as compliance with various standards.
| Acceptance Criterion (Implied) | Reported Device Performance |
|---|---|
| Conforms to design specifications | Meets design specifications and performs as intended. |
| Safe and effective | Deemed safe and effective (based on substantial equivalence). |
| ISO 14971 compliance | Applied |
| IEC 62304 compliance | Applied |
| IEC 62366 compliance | Applied |
| IEC 60601-1 compliance (Workstation) | Yes (for CENELEC countries) |
| ANSI AAMI ES60601-1 compliance (Workstation) | Yes (covers US deviations from IEC 60601-1) |
| IEC 60601-2 compliance (Workstation) | Yes |
| IEC 60601-1-6 compliance (Workstation) | Yes |
2. Sample size used for the test set and the data provenance
The document does not specify a "test set" and thus no sample size for such a set. The validation and verification activities are generally performed internally without a distinct "test set" from real patient data in the context of a 510(k) for minor modifications. The document does not provide a sample size or data provenance.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not describe a clinical study involving experts establishing ground truth. Therefore, this information is not available.
4. Adjudication method for the test set
Not applicable, as no external test set or adjudication process is described in the provided document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
The document does not describe an MRMC study or any study evaluating human reader improvement with AI assistance. The device is described as an "imaging solution for intraoperative navigation and guidance tool," and the AI module mentioned pertains to 2D-3D fusion and semi-automatic registration, not diagnostic interpretation by human readers.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
The document does not explicitly state an "algorithm only" standalone performance study. The device is intended for "intraoperative navigation and guidance," implying human-in-the-loop use. The "Performance testing (Verification)" and "Simulated use testing (Validation)" would cover the device's functional integrity, but no details on standalone performance metrics are provided.
7. The type of ground truth used
The document does not mention specific "ground truth" data used for performance evaluation in the context of clinical accuracy (e.g., pathology, outcomes data). The verification and validation activities likely used internal reference standards and simulated use cases to confirm the device's functional performance and safety.
8. The sample size for the training set
Not applicable. The document is for a 510(k) submission for a modified device, primarily addressing hardware and software updates, and reaffirming substantial equivalence. It does not describe the development or training of an AI model, only mentions an "AI module" in the context of 2D-3D fusion and registration. Therefore, no training set size is provided.
9. How the ground truth for the training set was established
Not applicable, as no training set is described.
(93 days)
Therenva SAS
EndoNaut provides image guidance by overlaying preoperative 3D vessel anatomy onto live fluoroscopic images in order to assist in the positioning of the guidewires, catheters and other endovascular devices.
EndoNaut is intended to assist endovascular procedures in the thorax, abdomen, neck, pelvis and lower limbs. Suitable procedures include (but not limited to) endovascular aortic aneurysm repair (AAA and TAA), angioplasty, stenting and embolization in iliac arteries and corresponding veins.
EndoNaut is not intended for use in the X-ray guided procedures in the liver, kidneys or pelvic organs.
EndoNaut is a stand-alone software medical device that runs on a Windows based computer that meets the minimum requirements.
EndoNaut software provides navigation tools for image-guided endovascular surgery. The device enables registration of intra-operative X-ray images with pre-operative data.
The device is operated by the physician or a trained operator. The client machine captures a live fluoroscopy video feed from the X-ray machine's external video port, either in digital or in analog format. Any machine that meets the hardware and software requirements can be supported.
The EndoNaut will be marketed as a software-only solution.
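For context on the capture step described above: grabbing a live feed from an external video port typically goes through a frame-grabber exposed as a standard capture device. The sketch below shows a generic OpenCV capture loop; the device index and the use of OpenCV are assumptions, as the document does not specify EndoNaut's capture stack:

```python
# A hedged sketch of capturing a live video feed from an external video
# port via a frame-grabber, using OpenCV. Device index 0 and the grabber
# itself are assumptions, not EndoNaut's documented implementation.
import cv2

cap = cv2.VideoCapture(0)           # hypothetical frame-grabber device index
if not cap.isOpened():
    raise RuntimeError("no capture device found")

while True:
    ok, frame = cap.read()          # one fluoroscopy frame (BGR image)
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow("live fluoroscopy", gray)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # quit on 'q'
        break

cap.release()
cv2.destroyAllWindows()
```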
Below is a summary of the acceptance criteria and study information for the EndoNaut device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Performance Metric | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Registration Accuracy | Registration error less than 3 mm | Results meet the acceptance criteria |
| Panorama Creation Error | Maximum acceptable error of 10 mm for peripheral artery surgery | Mean errors below the acceptance criteria |
| Measuring Functions (Distance) | No error for distance measurement along the centerline (compared to the planning tool) | Results meet the acceptance criteria |
| Measuring Functions (Length on Image) | Maximum error of 1 mm for length measurements on images (compared to a visible ruler) | Results meet the acceptance criteria |
| Software Compliance | Compliance with ISO 14971, IEC 62304, IEC 62366, and FDA guidance for software contained in medical devices | Software verification and validation testing performed, demonstrating compliance with these standards, plus usability testing with clinical users |
| Clinical Feasibility | Feasibility of fusion imaging during aortic endovascular procedures (primary endpoint) | The clinical study conclusion confirms the device is safe and effective and supports the indications for use, implying feasibility was achieved (specific percentage not provided) |
| Clinical Efficiency | Evaluation of efficiency in deploying infrarenal aortic stent grafts to treat unruptured atheromatous aneurysms (secondary endpoint: radiation dose) | The clinical study conclusion confirms the device is safe and effective and supports the indications for use (specific improvement in radiation dose not provided, but implies efficiency was found acceptable) |
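The registration-accuracy row above implies a bench test that compares an estimated transform against a reference and checks the error against the 3 mm threshold. The following sketch shows how such a check could be scored as a target registration error (TRE) against the "gold standard transformation matrix" mentioned later in this summary; the 4x4 transform convention and target points are assumptions, not the submission's actual protocol:

```python
# A hedged sketch of scoring a rigid registration against a gold-standard
# transform via target registration error (TRE), using the document's
# "registration error less than 3 mm" criterion. Illustrative only.
import numpy as np

def tre_mm(T_est: np.ndarray, T_gold: np.ndarray,
           targets_mm: np.ndarray) -> float:
    """Mean distance (mm) between anatomical targets mapped by the
    estimated vs. gold-standard 4x4 homogeneous transforms."""
    homo = np.hstack([targets_mm, np.ones((len(targets_mm), 1))])
    p_est = (homo @ T_est.T)[:, :3]
    p_gold = (homo @ T_gold.T)[:, :3]
    return float(np.linalg.norm(p_est - p_gold, axis=1).mean())

# Toy check: a 2 mm translation offset between estimate and gold standard.
T_gold = np.eye(4)
T_est = np.eye(4)
T_est[:3, 3] = [2.0, 0.0, 0.0]
targets = np.array([[0.0, 0, 0], [50, 0, 0], [0, 50, 50]])
err = tre_mm(T_est, T_gold, targets)
print(f"TRE = {err:.1f} mm ->", "PASS" if err < 3.0 else "FAIL")
```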
2. Sample Sizes and Data Provenance
- Test Set (for Technical Performance - Registration Accuracy):
  - Sample Size: 5000 different registrations on 100 pre-operative CT-scan images from 50 patients.
  - Data Provenance: Not explicitly stated (e.g., country of origin). Appears to be retrospective, using existing CT-scan images.
- Test Set (for Technical Performance - Panorama Creation):
  - Sample Size: 7 cases (6 patients and 1 phantom).
  - Data Provenance: Not explicitly stated. The use of "in-vivo data" suggests patient data, likely retrospective or collected specifically for this purpose.
- Test Set (for Clinical Study):
  - Sample Size: Not explicitly stated, though it is described as a "single-centre, prospective feasibility pilot study."
  - Data Provenance: Prospective, single-center study. Location not specified.
3. Number of Experts and Qualifications for Ground Truth (Test Set)
- Technical Performance (Registration Accuracy): Not applicable. The ground truth was a "gold standard transformation matrix," which suggests a numerically derived or calculated reference, not directly established by human experts in this context.
- Technical Performance (Panorama Creation): One "perfect panorama (manually corrected)" was used as ground truth. No specific number or qualifications of the individual who manually corrected it are provided.
- Clinical Study: Not applicable for establishing ground truth of the device's performance. The clinical endpoints (feasibility, radiation dose) are objective measures.
4. Adjudication Method for the Test Set
- Technical Performance: Not applicable as the ground truth was either a numerical matrix or a manually corrected panorama, without a listed multi-reader adjudication process.
- Clinical Study: No explicit adjudication method for clinical endpoints (e.g., disagreement resolution for radiation dose measurements) is described.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study is mentioned, nor is there any information about the effect size of human readers improving with AI vs. without AI assistance. The clinical study focused on feasibility and efficiency of the device itself.
6. Standalone Performance
- Yes, performance evaluations were conducted for the algorithm in a standalone manner. The technical performance tests (registration accuracy, panorama creation, measuring functions) demonstrate the algorithm's capabilities without direct human interaction beyond setting up the test scenarios. The device itself is described as a "stand-alone software medical device."
7. Type of Ground Truth Used
- Technical Performance (Registration Accuracy): "Gold standard transformation matrix."
- Technical Performance (Panorama Creation): "Perfect panorama (manually corrected)."
- Technical Performance (Measuring Functions): Comparison with measurements from EndoSize (the planning tool) and a visible ruler on the fluoroscopic image.
- Clinical Study: Clinical endpoints such as "feasibility rate of fusion" and "radiation dose as measured by fluoroscopy time, dose-area product and air kerma" were used as measures of outcome.
8. Sample Size for the Training Set
- The document does not explicitly state the sample size for any training set. The performance data section describes testing on CT-scan images and patient cases but does not distinguish between training and test data or provide details on the training methodology.
9. How the Ground Truth for the Training Set Was Established
- Since no information about a distinctive training set or its sample size is provided, there is no description of how ground truth for a training set was established.
(62 days)
Therenva SAS
EndoSize is a software solution that is intended to provide Physicians and Clinical Specialists with additional information to assist them in reading and interpreting DICOM CT scan images of structures of the heart and vessels.
EndoSize enables the user to visualize and measure (diameters, lengths, volumes, angles) structures of the heart and vessels.
Indications for Use:
EndoSize enables visualization and measurement of the heart and vessels for preoperational planning and sizing for cardiovascular interventions and surgery, and for postoperative evaluation.
General functionalities are provided such as:
- Segmentation of cardiovascular structures
- Automatic and manual centerline detection (see the sketch after this list)
- Visualization of CT scan images in all planes, 2D review, 3D reconstruction, Volume Rendering, MPR, Stretched CMPR
- Measurement and annotation tools
- Reporting tools
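As a rough illustration of the segmentation and centerline items above (and not EndoSize's actual method, which the document does not describe), the sketch below skeletonizes a binary vessel mask on a single 2D slice and estimates local vessel diameter from the distance transform. The synthetic mask, pixel spacing, and use of scikit-image/SciPy are all assumptions:

```python
# An assumed sketch of centerline extraction on a 2D slice: skeletonize a
# binary vessel mask, then estimate local diameter from the distance
# transform. Illustrative only; not EndoSize's disclosed algorithm.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

# Synthetic "vessel": a horizontal tube 9 px wide in a 64x64 mask.
mask = np.zeros((64, 64), dtype=bool)
mask[28:37, 5:60] = True

skeleton = skeletonize(mask)               # 1-px-wide centerline
radius_map = distance_transform_edt(mask)  # distance to vessel wall (px)

pixel_mm = 0.5                             # assumed in-plane spacing
diameters = 2 * radius_map[skeleton] * pixel_mm
print(f"centerline points: {skeleton.sum()}")
print(f"mean diameter: {diameters.mean():.1f} mm")
```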
EndoSize is a stand-alone software application that runs on any standard Windows or Mac OS X based computer. It enables Physicians and Clinical Specialists to select patient CT scan studies from various data sources, view them, and process the images using a comprehensive set of tools. EndoSize is intended to provide a clinical decision support system during the preoperative planning of endovascular surgery.
EndoSize contains five modules dedicated to different types of endovascular interventions: EndoSize EVAR, EndoSize FEVAR, EndoSize TEVAR, EndoSize TAVI, and EndoSize Peripheral. These modules can be marketed in combination or as separate solutions. It is also possible to market custom versions of EndoSize to stent manufacturers, based on the modules listed above. The differences between EndoSize and a custom version of EndoSize (user interface, manufacturer logo, manufacturer stent catalogue in the software, optional features of a generic module) modify neither the functioning nor the safety of the software.
One custom version of EndoSize is marketed under the trademark "Intelix". This version includes the modules Intelix AFX and/or Intelix AFX2 and/or Intelix NELLIX which are customized versions of EndoSize EVAR module for specific endografts.
EndoSize enables assessment and measurement of different vascular structures such as vessels, valves, aneurysms, and other anomalies. It provides simple techniques to assess the feasibility of endovascular procedures. EndoSize can combine 2D scan slices into comprehensive 3D models of the patient and can display supporting DICOM CT scan data. The software accurately represents different types of tissue, making it easier to diagnose anomalies in scans. It works with DICOM CT scan images and can access multiple DICOM data files and PACS servers.
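The "combine 2D scan slices into comprehensive 3D models" step can be illustrated with a minimal, assumed sketch that stacks a DICOM CT series into a 3D Hounsfield-unit volume using pydicom; the directory path is hypothetical, and EndoSize's internal pipeline (including its PACS access) is not described at this level in the document:

```python
# A hedged sketch of stacking a DICOM CT series into a single 3D volume
# with pydicom. Illustrative only; not EndoSize's documented pipeline.
from pathlib import Path
import numpy as np
import pydicom

def load_ct_volume(series_dir: str) -> np.ndarray:
    """Read all slices in a directory, sort by z-position, return an HU volume."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    volume = np.stack([ds.pixel_array for ds in slices]).astype(np.float32)
    # Convert stored pixel values to Hounsfield units via the rescale tags.
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept

# volume = load_ct_volume("/data/ct_series")   # hypothetical path
# print(volume.shape)                          # (n_slices, rows, cols)
```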
The provided document does not detail specific acceptance criteria or a comprehensive study demonstrating that the device meets these criteria in a quantitative manner. Instead, it is a 510(k) summary for a software update (EndoSize version 3.1, K160376), asserting substantial equivalence to a previously cleared device (EndoSize version 3.0, K141475).
The document focuses on:
- Product description and intended use: EndoSize is software for visualizing and measuring heart and vessel structures from CT scans for pre-operational planning, sizing, and post-operative evaluation in cardiovascular interventions and surgery.
- Changes from predicate device: Primarily updates to catalogs, minor UI/UX improvements, and new tools like calcium estimation, C-arm angle recording, and NASCET value calculation, which are based on existing functionalities.
- Performance Data (Conformance): The device is stated to conform to DICOM standards, ISO 14971 (risk management), and IEC 62304 (software life-cycle).
- Bench Testing: It states that every specification is validated by bench tests, including importation, patient management, display, processing, module functioning, measurement, and report creation/exportation. Any modifications undergo the same bench testing and regression testing.
Therefore, many of the requested details about acceptance criteria, specific study design, sample sizes, expert involvement, and ground truth establishment are not present in this regulatory submission document. This type of 510(k) relies on the argument of substantial equivalence, where the new features are described as based on existing, cleared technology, and the changes do not raise new questions of safety or effectiveness. The "performance data" section focuses on software development life cycle processes and internal validation via bench testing, rather than a formal clinical performance study with defined acceptance criteria and human readers.
Here's a breakdown of what can be inferred or is explicitly stated from the document regarding your questions:
1. A table of acceptance criteria and the reported device performance
Not explicitly provided. The document states:
- "Every specification of the EndoSize software is validated by a bench test before release."
- "Bench testing includes: Tests of Importation of DICOM images, Patient Manager tests, Tests of image display and processing, Functioning tests of the different modules..., Measurement tests, Reports creation and exportation tests."
- "Every modification to the EndoSize software is validated by the same bench testing as described above."
This implies that acceptance criteria are defined for these bench tests, but the specific numerical or qualitative criteria and the results (e.g., "99% of images imported successfully," "measurements were within X% of ground truth") are not publicly disclosed in this summary. The "reported device performance" is essentially that it "successfully undergone bench testing" and "performs as well or better than the predicate device."
2. Sample sizes used for the test set and the data provenance
Not specified. The document mentions "bench testing" which usually refers to internal laboratory testing on a set of pre-defined test cases, not necessarily a large patient image dataset for clinical validation. The provenance of any data used for these internal bench tests (e.g., country of origin, retrospective/prospective) is not disclosed.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not specified. Given the nature of a software update (special 510(k)) and the focus on "bench testing" of software functionalities, it's highly probable that ground truth for internal validation was established by software engineers and potentially clinical experts employed by the manufacturer, but no details on their number or qualifications are provided.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
Not specified. This level of detail on ground truth establishment is not present.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
No, an MRMC study was not detailed or performed for this submission. The device is not presented as an AI-assistant for human interpretation in the sense of a diagnostic aid that changes reader performance. It's an image processing and measurement tool. The primary purpose of this 510(k) (a 'Special 510(k)') is to demonstrate substantial equivalence of updated software, not to prove clinical utility with a new type of performance study.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
Not explicitly detailed as a formal study for this submission. The document describes the software's capabilities (segmentation, centerline detection, measurements, etc.) and states these were validated via "bench testing." This implies internal, automated, or semi-automated tests of the algorithms' outputs, which aligns with "standalone" performance, but not in the context of a rigorous, independently assessed clinical study.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
Not specified. For internal bench tests on image processing and measurement software, ground truth likely involves:
- Known input data: Using CT scans with pre-defined anatomical structures or simulated data where "true" measurements are known.
- Manual measurements: Highly precise, manually performed measurements by trained personnel (possibly clinical experts) on images to serve as a reference for comparison with software's automated measurements.
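As an illustration of the second bullet, a measurement bench test of this kind could be expressed as a simple tolerance check of automated measurements against manual reference values. All names, values, and the 1 mm tolerance below are hypothetical; the 510(k) summary discloses no such numbers:

```python
# An assumed sketch of a measurement bench test: compare the software's
# automated measurements against manually established reference values
# with a fixed tolerance. All figures are hypothetical.
REFERENCE_MM = {"neck_diameter": 24.1, "aneurysm_max": 55.3}

def check_measurements(measured: dict, tolerance_mm: float = 1.0) -> bool:
    """Return True if every automated measurement is within tolerance
    of its manual reference value."""
    ok = True
    for name, ref in REFERENCE_MM.items():
        delta = abs(measured[name] - ref)
        status = "PASS" if delta <= tolerance_mm else "FAIL"
        print(f"{name}: measured {measured[name]:.1f} mm, "
              f"ref {ref:.1f} mm, delta {delta:.2f} mm -> {status}")
        ok &= delta <= tolerance_mm
    return ok

check_measurements({"neck_diameter": 24.4, "aneurysm_max": 55.0})
```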
8. The sample size for the training set
Not specified. This document focuses on the software update and its validation (bench testing), not the development of a machine learning model, so a "training set" in that context is not discussed. If any machine learning components were present, the original K141475 submission might have briefly touched upon it, but this document contains no information.
9. How the ground truth for the training set was established
Not applicable/Not specified. As there's no mention of a machine learning training set for this specific submission, this information is not provided.
(57 days)
THERENVA SAS
EndoSize is a software solution that is intended to provide Physicians and Clinical Specialists with additional information to assist them in reading and interpreting DICOM CT scan images of structures of the heart and vessels.
EndoSize enables the user to visualize and measure (diameters, lengths, volumes, angles) structures of the heart and vessels.
EndoSize enables visualization and measurement of the heart and vessels for pre-operational planning and sizing for cardiovascular interventions and surgery, and for postoperative evaluation.
General functionalities are provided such as:
- Segmentation of cardiovascular structures
- Automatic and manual centerline detection
- Visualization of CT scan images in all planes, 2D review, 3D reconstruction, Volume Rendering, MPR, Stretched CMPR
- Measurement and annotation tools
- Reporting tools
EndoSize is a stand-alone software application that runs on any standard Windows or Mac OS X based computer that meets the minimum requirements. It enables Physicians and Clinical Specialists to select patient CT scan studies from various data sources, view them, and process the images using a comprehensive set of tools. EndoSize is intended to provide a clinical decision support system during the preoperative planning of endovascular surgery.
EndoSize contains five modules dedicated to different types of endovascular interventions: EVAR, FEVAR, TEVAR, TAVI, and Peripheral. These modules can be marketed in combination or as separate solutions. It is also possible to market custom versions of EndoSize to stent manufacturers, based on the modules listed above. The differences between EndoSize and a custom version of EndoSize (user interface, manufacturer logo, manufacturer catalogue included in the software, optional features of a generic module) modify neither the functioning nor the safety of the software.
EndoSize enables assessment and measurement of different vascular structures such as vessels, valves, aneurysms, and other anomalies. It provides simple techniques to assess the feasibility of endovascular procedures. EndoSize can combine 2D scan slices into comprehensive 3D models of the patient, and can display supporting DICOM CT scan data. The software accurately represents different types of tissue, making it easier to diagnose anomalies and plan interventional procedures. It works with DICOM CT scan images and can access multiple DICOM data files and PACS server.
The provided text describes the EndoSize software, its intended use, and its equivalence to a predicate device but does not contain detailed acceptance criteria or a specific study proving the device meets quantitative performance metrics. Instead, it refers to "bench tests" for validation.
Here's an attempt to answer your questions based only on the provided text, highlighting what is missing:
1. A table of acceptance criteria and the reported device performance
The document states: "Every specification of the EndoSize software is validated by a bench test before release."
It then lists the types of tests conducted:
- Tests of Importation of DICOM images
- Patient Manager tests
- Tests of image display and processing
- Functioning tests of the different modules EVAR, TEVAR, FEVAR, TAVI and Peripheral
- Measurement tests
- Reports creation and exportation tests
However, specific quantitative acceptance criteria (e.g., accuracy thresholds for measurements, speed requirements, specific segmentation performance metrics) and the numerical results of these tests (the "reported device performance") are NOT provided in this document. The document only states that the software "successfully undergone every bench testing designed to simulate clinical use."
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
This information is not provided in the document. It only mentions "bench tests" but does not detail the nature, size, or provenance of any testing datasets.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided. The document makes no mention of expert involvement in establishing ground truth for any test set. It does state that "The information and measurements displayed, exported or printed are validated and interpreted by Physicians," but this refers to end-user interpretation, not ground truth establishment for software validation.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
This information is not provided. The document does not describe any MRMC studies or human reader performance evaluations.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
The document refers to "bench tests" that validate "every specification of the EndoSize software." This implies a standalone performance evaluation of the software's functionalities. However, no specific quantitative results or methodology for such a standalone performance study are detailed. The listed tests (image importation, display, processing, module functioning, measurements, reports) suggest a system-level functional test, but not a rigorous clinical performance study.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
This information is not provided. Given that the listed tests are functional ("importation," "display," "processing," "measurement," "reporting"), the ground truth for these might be internal reference standards or expected outputs rather than clinical data ground truth.
8. The sample size for the training set
This information is not provided. The document focuses on validation/testing and does not describe the development or training of any machine learning components, although features like "Automatic segmentation" and "Automatic centerline" suggest some underlying algorithms.
9. How the ground truth for the training set was established
This information is not provided.