Search Results
Found 2 results
510(k) Data Aggregation
(25 days)
EFAI PACS PRO is intended to be used as a Digital Imaging and Communications in Medicine (DICOM) and non-DICOM information and data management system. The EFAI PACS PRO displays, processes, stores, and transfers medical data from original equipment manufacturers (OEMs) that support the DICOM standard, with the exception of mammography. It provides the capability to store images and patient information from OEM equipment, and perform filtering, digital manipulation and quantitative measurements. The client software is designed to run on standard personal and business computers. The product is intended to be used by trained medical professionals, including but not limited to radiologists, oncologists, and physicians. It is intended to provide image and related information that is interpreted by a trained professional to render findings and/or diagnosis, but it does not directly generate any diagnosis or potential findings.
The software is stand-alone software as a medical device (Stand-alone SaMD) that lets clinicians use web browsers at client stations to search for and view the medical data of selected patients stored in the software. The software also provides visualization, annotation and quantification functionalities that can be applied to the images in the web browser at client stations.
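The submission describes this search-and-view workflow only at the level of intended use; it does not say how EFAI implements it. For orientation, a patient or study search against a DICOM archive is commonly performed with the DICOM Query/Retrieve C-FIND service. The sketch below uses the third-party pynetdicom library with placeholder host, port, AE titles, and patient ID; it illustrates that general mechanism, not anything documented for EFAI PACS PRO.

```python
# Minimal DICOM Patient Root C-FIND sketch (pydicom + pynetdicom).
# Host, port, AE titles, and PatientID are illustrative placeholders,
# not values documented for EFAI PACS PRO.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import PatientRootQueryRetrieveInformationModelFind

ae = AE(ae_title="CLIENT_SCU")  # the query client (SCU)
ae.add_requested_context(PatientRootQueryRetrieveInformationModelFind)

# Query identifier: search at STUDY level for one patient's studies.
query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.PatientID = "12345"      # hypothetical patient ID
query.StudyDate = ""           # empty attributes are filled in by the archive
query.StudyDescription = ""

assoc = ae.associate("pacs.example.local", 104, ae_title="PACS_SCP")
if assoc.is_established:
    responses = assoc.send_c_find(
        query, PatientRootQueryRetrieveInformationModelFind
    )
    for status, identifier in responses:
        # Pending statuses (0xFF00, 0xFF01) each carry one matching study.
        if status and status.Status in (0xFF00, 0xFF01):
            print(identifier.StudyDate, identifier.StudyDescription)
    assoc.release()
else:
    print("Could not associate with the PACS")
```

A browser-based client such as the one described would typically sit behind a web service that issues requests like this (or their DICOMweb equivalents) and renders the results, but the submission gives no detail at that level.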
The provided text describes a 510(k) premarket notification for the EFAI PACS PRO device. However, it does not contain specific acceptance criteria, detailed performance data, or information regarding a study design (like sample sizes, ground truth establishment, expert qualifications, or MRMC studies) that would typically be associated with proving a device meets acceptance criteria.
The document primarily focuses on demonstrating substantial equivalence to a predicate device (EFAI PACS K211257) based on its intended use, technological characteristics, and conformance to general software and usability standards.
Here's a breakdown of the requested information based on the provided text, highlighting what is present and what is missing:
1. Table of Acceptance Criteria and Reported Device Performance
This information is not explicitly stated in the provided document in the form of a table with specific quantitative acceptance criteria or reported performance metrics (e.g., accuracy, sensitivity, specificity).
The "Performance Data - Non-Clinical" section states: "Results confirm that the design inputs and performance specifications for the device are met. The EFAI PACS PRO passed the testing in accordance with internal requirements, national standards, and international standards shown below, supporting its safety and effectiveness, and its substantial equivalence to the predicate device."
However, it does not detail:
- What those "design inputs and performance specifications" are (i.e., the acceptance criteria).
- The actual "results" or specific performance values achieved by the device against these criteria.
2. Sample size used for the test set and the data provenance
This information is not provided in the document. The document mentions "non-clinical tests" but does not detail the datasets used for these tests.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not provided. The document does not describe the establishment of a ground truth for a test set, nor does it mention any experts involved in such a process.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
This information is not provided. There is no description of an adjudication method, as no specific test set and ground truth establishment process are detailed.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance
This information is not provided. The document makes no mention of an MRMC study or any assessment of human reader performance improvement with AI assistance. The device is described as a PACS system for displaying, processing, storing, and transferring medical data, and does not directly generate diagnoses or findings, suggesting it may not involve an AI component that directly aids in human reader interpretation for diagnostic tasks in the way an AI-CAD device might.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
This information is not provided. The device is described as a "stand-alone software as medical device" (SaMD) and offers functionalities like visualization, annotation, and quantification. However, it is a PACS system; it is not an algorithm designed to perform diagnostic tasks independently. Thus, a "standalone" performance evaluation in the context of an AI algorithm predicting an outcome is not applicable here. The "standalone" refers to the entire software system existing on its own, not an AI component's diagnostic performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
This information is not provided. There is no mention of ground truth in the document, as no specific diagnostic performance study is detailed.
8. The sample size for the training set
This information is not provided. Since no AI/ML algorithm requiring a training set for diagnostic purposes is described, details about a training set are absent.
9. How the ground truth for the training set was established
This information is not provided for the same reason as point 8.
Summary of what the document does provide regarding "performance":
The document indicates that adherence to the following standards supports its safety and effectiveness and substantial equivalence:
- Software verification and validation per IEC 62304/FDA Guidance
- Application of usability engineering to medical devices Part 1 per IEC 62366-1
- Guidance on the application of usability engineering to medical devices per IEC 62366-2
This implies that the "acceptance criteria" were met through compliance with these general software development, validation, and usability standards, rather than specific quantitative diagnostic performance metrics. The device is a "Medical Image Management And Processing System" (PACS), not a CAD (Computer-Aided Detection/Diagnosis) device, which would typically require extensive clinical performance studies with specific accuracy metrics.
(137 days)
EFAI PACS PICTURE ARCHIVING AND COMMUNICATION SYSTEM is intended to be used as a Digital Imaging and Communications in Medicine (DICOM) and non-DICOM information and data management system. The EFAI PACS PICTURE ARCHIVING AND COMMUNICATION SYSTEM displays, processes, stores, and transfers medical data from original equipment manufacturers (OEMs) that support the DICOM standard, with the exception of mammography. It provides the capability to store images and patient information from OEM equipment, and perform filtering, digital manipulation and quantitative measurements. The client software is designed to run on standard personal and business computers. The product is intended to be used by trained medical professionals, including but not limited to radiologists, oncologists, and physicians. It is intended to provide image and related information that is interpreted by a trained professional to render findings and/or diagnosis, but it does not directly generate any diagnosis or potential findings.
The software is stand-alone software as a medical device (Stand-alone SaMD) that lets clinicians use a web browser at client stations to search for and view the medical data of selected patients stored in the software. The software also provides the following visualization, annotation and quantification functionalities, which can be applied to the images in the web browser at client stations:
Visualization Functionalities:
- a) 2D image view
- b) Zoom ratio adjust
- c) Pan
- d) Window Level and Window Width adjust
- e) Positive & Negative film effect
- f) Mirror Flip
- g) Multi-screen display
- h) Cine Loop Play
Annotation Functionality:
- Point Annotation
Quantification Functionalities:
- a) Distance measurement
- b) Area measurements
- c) Angle measurement
- d) Single Point measurement
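The submission does not describe how these measurements are computed, but the arithmetic behind such tools is standard: pixel coordinates scaled by the image's physical pixel spacing, plus a linear intensity mapping for the window level/width adjustment listed under visualization. The sketch below illustrates that underlying geometry under the assumption of a DICOM-style PixelSpacing of (row spacing, column spacing) in millimetres; it is not EFAI's implementation.

```python
# Illustrative geometry behind typical PACS measurement tools.
# Points are (row, col) pixel indices; pixel_spacing is the DICOM-style
# (row spacing, column spacing) in millimetres. Not EFAI's actual code.
import math

def distance_mm(p1, p2, pixel_spacing):
    """Euclidean distance between two pixel points, in millimetres."""
    dr = (p1[0] - p2[0]) * pixel_spacing[0]
    dc = (p1[1] - p2[1]) * pixel_spacing[1]
    return math.hypot(dr, dc)

def angle_deg(vertex, p1, p2):
    """Angle at `vertex` formed by the rays to p1 and p2, in degrees."""
    a1 = math.atan2(p1[0] - vertex[0], p1[1] - vertex[1])
    a2 = math.atan2(p2[0] - vertex[0], p2[1] - vertex[1])
    ang = abs(math.degrees(a1 - a2)) % 360
    return min(ang, 360 - ang)

def polygon_area_mm2(points, pixel_spacing):
    """Shoelace area of a closed polygon of pixel points, in mm^2."""
    pts = [(r * pixel_spacing[0], c * pixel_spacing[1]) for r, c in points]
    area = 0.0
    for (r1, c1), (r2, c2) in zip(pts, pts[1:] + pts[:1]):
        area += c1 * r2 - c2 * r1
    return abs(area) / 2.0

def apply_window(value, level, width):
    """Map a raw pixel value to the 0-255 display range via level/width."""
    low, high = level - width / 2.0, level + width / 2.0
    clipped = min(max(value, low), high)
    return round((clipped - low) / (high - low) * 255)

# Examples: a 10 mm distance on 0.5 mm isotropic pixels, and a value at the
# centre of a 40/400 window mapping to the middle of the display range.
print(distance_mm((0, 0), (0, 20), (0.5, 0.5)))  # 10.0
print(apply_window(40, level=40, width=400))     # 128
```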
The provided document, a 510(k) Premarket Notification for the EFAI PACS Picture Archiving and Communication System, primarily focuses on demonstrating substantial equivalence to a predicate device (Arterys Viewer K171544) rather than presenting a detailed study proving the device meets specific acceptance criteria through extensive empirical performance testing.
The document explicitly states on page 7, Section 5.8 "Performance Data - Clinical": "EFAI PACS did not require clinical study since substantial equivalence to the currently market and predicate device was demonstrated with the following attribute: Design features; Indications for Use; Fundamental scientific technology; Non-clinical performance testing; Safety and effectiveness."
Therefore, a table of acceptance criteria and reported device performance based on a dedicated clinical study cannot be extracted from this document, as such a study was not performed for this 510(k) submission.
Instead, the submission relies on the successful completion of non-clinical tests (software verification and validation, usability engineering) and a clinical evaluation based on existing literature and adverse event data to support substantial equivalence.
Here's an analysis of the provided information concerning the device's acceptance, focusing on what can be extracted:
1. Table of Acceptance Criteria and Reported Device Performance (as inferred from Non-Clinical Testing and Equivalence Claim):
Since no specific "performance criteria" for a clinical study are provided, the acceptance criteria are implicitly those of demonstrating substantial equivalence to the predicate device and meeting software/usability standards.
| Acceptance Criteria (Implied) | Reported Device Performance (as stated in document) |
|---|---|
| Non-Clinical Performance: | |
| Software Verification & Validation (IEC 62304/FDA Guidance) | "Results confirm that the design inputs and performance specifications for the device are met. The EFAI PACS passed the testing in accordance with internal requirements, national standards, and international standards shown below, supporting its safety and effectiveness, and its substantial equivalence to the predicate device." (Page 7, Section 5.7) "In Compliance with." (Page 7, Section 5.7) |
| Usability Engineering (IEC 62366-1 & 62366-2) | "In Compliance with." (Page 7, Section 5.7) |
| Clinical Performance (via Equivalence and Evaluation): | |
| Substantial Equivalence to Predicate Device (Arterys Viewer K171544) for indicated uses and technological characteristics, without raising new safety/effectiveness questions. | "EFAI PACS did not require clinical study since substantial equivalence to the currently market and predicate device was demonstrated with the following attribute: Design features; Indications for Use; Fundamental scientific technology; Non-clinical performance testing; Safety and effectiveness." (Page 7, Section 5.8) "The EFAI PACS has the same intended use as the Arterys Viewer, and the same or similar technological characteristics. The differences in technological characteristics do not raise new or different questions of safety and effectiveness. Performance testing has demonstrated the EFAI PACS is as safe and effective as the predicate device. Therefore, the EFAI PACS is substantially equivalent to the predicate device." (Page 8, Section 5.9) |
2. Sample size used for the test set and the data provenance:
- Sample Size: Not applicable in the context of a prospective clinical performance study. The document refers to "non-clinical tests" and a "clinical evaluation" which relied on existing data.
- Data Provenance: Not explicitly stated as a sampled test set. The clinical evaluation involved reviewing "adverse event data base search results" and "three clinical articles mentioned in the CEP and CER" (Page 8, Section 5.8). The country of origin for these databases or articles is not specified. This was a retrospective analysis of existing information.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable. No new test set requiring expert ground truth establishment was used for performance validation in a clinical study. The clinical evaluation relied on existing literature and adverse event data.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not applicable. No test set requiring adjudication of findings was used for a clinical study.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study was not performed. The device is a PACS system for image management, display, and basic manipulation, not an AI-assisted diagnostic tool. Its functionality explicitly states it "does not directly generate any diagnosis or potential findings" (Page 4, Section 5.4).
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- The device itself is a "Stand-alone Software as Medical Device (SaMD)" (Page 4, Section 5.5). However, a standalone performance test in the sense of an algorithm producing diagnostic output was not done because the device's function is as an image management system, not a diagnostic AI. The non-clinical testing verified its functional performance (e.g., display, measurement accuracy, DICOM compliance).
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For functional non-clinical tests: the "ground truth" would be the design specifications and expected outputs for image display, processing, storage, and measurement tools, established through internal verification and validation procedures (a minimal illustration of such a check follows this list).
- For the clinical evaluation: It relied on available "adverse event data base search results" and data from "three clinical articles," which represent existing clinical information rather than a newly established ground truth for device performance.
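As an illustration of what checking a measurement tool against its design specification can look like, a verification test might compare the tool's output on a synthetic input with a known expected value and tolerance, as in the hypothetical unit test below. The distance_mm helper and the +/-0.1 mm tolerance are assumptions made for the example, not items drawn from EFAI's verification records.

```python
# Hypothetical verification-style unit test: the "ground truth" is the design
# specification (a known geometric distance and an assumed tolerance), not a
# clinical reference standard. Not taken from EFAI's actual test protocol.
import math
import unittest

def distance_mm(p1, p2, pixel_spacing):
    """Euclidean distance between two pixel points, in millimetres."""
    dr = (p1[0] - p2[0]) * pixel_spacing[0]
    dc = (p1[1] - p2[1]) * pixel_spacing[1]
    return math.hypot(dr, dc)

class TestDistanceMeasurement(unittest.TestCase):
    def test_known_distance_within_tolerance(self):
        # 3-4-5 triangle: 30 pixels down and 40 pixels across at 1.0 mm
        # spacing must measure 50 mm, within an assumed +/-0.1 mm tolerance.
        measured = distance_mm((0, 0), (30, 40), (1.0, 1.0))
        self.assertAlmostEqual(measured, 50.0, delta=0.1)

if __name__ == "__main__":
    unittest.main()
```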
8. The sample size for the training set:
- Not applicable. This is a PACS system, not a machine learning algorithm that learns from training data for diagnostic or predictive tasks.
9. How the ground truth for the training set was established:
- Not applicable, as there is no training set for this type of device.
In summary, the EFAI PACS 510(k) submission primarily demonstrates safety and effectiveness through substantial equivalence to an existing predicate device and compliance with relevant software and usability engineering standards, rather than through a dedicated clinical performance study with defined acceptance criteria and a test set.