Search Results
Found 2 results
510(k) Data Aggregation
(132 days)
The Diagnostic Ultrasound System Aplio 500 Model TUS-A500, Aplio 400 Model TUS-A400, and Aplio 300 Model TUS-A300 is indicated for the visualization of structures and dynamic processes within the human body using ultrasound, and to provide image information for diagnosis in the following clinical applications: fetal, abdominal, intra-operative (abdominal), pediatric, small organs, trans-vaginal, trans-rectal, neonatal cephalic, adult cephalic, cardiac (both adult and pediatric), peripheral vascular, transesophageal, and musculo-skeletal (both conventional and superficial).
The Aplio 500 Model TUS-A500, Aplio 400 Model TUS-A400, and Aplio 300 Model TUS-A300 are mobile diagnostic ultrasound systems. These systems are Track 3 devices that employ a wide array of probes, including flat linear array, convex linear array, and sector array, with frequencies ranging from approximately 2 MHz to 12 MHz.
This is a 510(k) premarket notification for modifications to an ultrasound system, not for an AI device. The document describes the device as the "Aplio 500 Model TUS-A500, Aplio 400 Model TUS-A400 and Aplio 300 Model TUS-A300" diagnostic ultrasound systems. The submission is for "Modification of a cleared device" that "improves upon existing features including the image visualization of blood flow."
Therefore, the prompt's request for "acceptance criteria and the study that proves the device meets the acceptance criteria" in the context of an AI device, along with details like "sample size used for the test set," "number of experts used to establish the ground truth," "adjudication method," "MRMC comparative effectiveness study," "standalone performance," and "training set," is not applicable to this document.
The document does not describe an Artificial Intelligence (AI) / Machine Learning (ML) enabled device. It is a traditional medical device modification.
Here's what can be extracted regarding performance testing, although it's not in the context of AI acceptance criteria:
1. A table of acceptance criteria and the reported device performance:
This document does not provide specific quantitative acceptance criteria or detailed performance metrics in the format typically seen for AI device evaluations. The submission states:
- Acceptance Criteria (Implicit): The device modifications meet the requirements for improved/added features. The device is safe and effective for its intended use.
- Reported Device Performance: The modifications improve existing features, specifically "the image visualization of blood flow." The document also lists the clinical applications and modes of operation for which the system and its transducers are indicated (e.g., fetal, abdominal, cardiac, and peripheral vascular applications; B-mode, M-mode, PWD, CWD, Color Doppler, and other modes). However, it does not provide quantitative results such as sensitivity, specificity, or image quality scores for these improvements or listed functionalities, as would be expected for an AI device. (A generic, illustrative sketch of color-flow velocity estimation follows this list.)
2. Sample size used for the test set and the data provenance:
- Sample Size: Not specified for any test set.
- Data Provenance: "acquisition of representative clinical images" was conducted as part of the testing. No country of origin is mentioned, and the document does not state whether the images were collected retrospectively or prospectively.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable as this is not an AI device submission requiring expert human ground truth for algorithm performance evaluation. Testing involved "bench testing and the acquisition of representative clinical images."
4. Adjudication method for the test set:
- Not applicable as this is not an AI device submission requiring adjudication of human expert annotations or ground truth.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with AI versus without AI assistance:
- No MRMC comparative effectiveness study was done, as this is not an AI device.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Not applicable, as this is not an AI device.
7. The type of ground truth used:
- For the "acquisition of representative clinical images", the ground truth is implicitly the clinical reality captured by the ultrasound imaging, verified by standard clinical interpretation and potentially other diagnostic methods. However, the document does not elaborate on how this "ground truth" was formally established or used to evaluate the new features of the device (like improved blood flow visualization) beyond stating that the features met requirements.
8. The sample size for the training set:
- Not applicable, as this is not an AI device and thus has no training set in the AI/ML sense.
9. How the ground truth for the training set was established:
- Not applicable, as this is not an AI device and thus has no training set.
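As flagged under item 1, here is a generic illustration of what "image visualization of blood flow" involves computationally: color-flow imaging conventionally estimates blood velocity from the phase of the lag-one autocorrelation of the pulse-to-pulse (slow-time) IQ signal, the textbook Kasai estimator. The sketch below uses synthetic data and assumed parameters; the 510(k) does not disclose the algorithm actually used in the Aplio systems.

```python
# Illustrative, textbook color-flow velocity estimation (Kasai lag-one
# autocorrelation) on synthetic data. The 510(k) does not describe the
# actual blood-flow visualization algorithm used in the Aplio systems.
import numpy as np

rng = np.random.default_rng(0)

n_pulses, prf = 8, 4000.0        # ensemble length, pulse repetition freq (Hz)
f0, c = 5e6, 1540.0              # transmit frequency (Hz), speed of sound (m/s)
v_true = 0.2                     # assumed axial blood velocity (m/s)

f_d = 2 * v_true * f0 / c        # Doppler shift, ~1300 Hz (below PRF/2 Nyquist)
t = np.arange(n_pulses) / prf
iq = np.exp(2j * np.pi * f_d * t)                       # slow-time IQ signal
iq += 0.05 * (rng.standard_normal(n_pulses)
              + 1j * rng.standard_normal(n_pulses))     # receiver noise

# Kasai estimator: the phase of the lag-one autocorrelation gives the mean
# Doppler frequency, which maps back to axial velocity.
r1 = np.mean(iq[1:] * np.conj(iq[:-1]))
v_est = np.angle(r1) * prf / (2 * np.pi) * c / (2 * f0)
print(f"true {v_true:.3f} m/s, estimated {v_est:.3f} m/s")
```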
(33 days)
This software is intended for displaying and analyzing ultrasound images for medical diagnosis in cardiac and general examinations.
UltraExtend USWS-900A v2.1 and v3.1 is a software package that can be installed on a general-purpose personal computer (PC), enabling data acquired from Aplio diagnostic ultrasound systems (Aplio XG, Aplio MX, Aplio Artida, Aplio 300, Aplio 400, and Aplio 500) to be loaded onto the PC for image processing with other application software products. UltraExtend USWS-900A v2.1 and v3.1 is post-processing software that implements functionality and operability equivalent to that of the diagnostic ultrasound system the data was acquired from, providing a seamless image reading environment from examination on the diagnostic ultrasound system to diagnosis on the PC.
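As a concrete (and hypothetical) illustration of this PC-side workflow: ultrasound studies are commonly exchanged as DICOM objects, so loading an exported cine loop for review might look like the sketch below. The document does not specify the actual transfer format or API, so the file name and DICOM assumption are invented for illustration.

```python
# Hypothetical PC-side loading of an exported ultrasound study.
# Assumes DICOM export; the 510(k) summary does not specify the format.
import pydicom
import matplotlib.pyplot as plt

ds = pydicom.dcmread("aplio_study.dcm")   # hypothetical exported file
pixels = ds.pixel_array                   # cine loops load as (frames, rows, cols)

# Show the first frame; a real reading application would add cine playback,
# measurement tools, and the CHI-Q/TDI-Q style analysis the document mentions.
frame = pixels[0] if "NumberOfFrames" in ds else pixels
plt.imshow(frame, cmap="gray")
plt.axis("off")
plt.show()
```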
The provided document is a 510(k) Pre-market Notification for a software product called "UltraExtend USWS-900A v2.1 and v3.1." This submission is for a modification of an already cleared device and does not include a study proving device performance against acceptance criteria in the typical sense of a clinical trial for a novel device.
Instead, the submission focuses on demonstrating substantial equivalence to predicate devices. This means that the device is shown to function similarly and be intended for the same use as legally marketed devices.
Therefore, many of the requested categories for a study proving device performance are not directly applicable or are addressed differently in this type of submission.
Here's a breakdown based on the provided text:
Acceptance Criteria and Reported Device Performance
The document states that "Risk Analysis, Verification/Validation testing conducted through bench testing, as well as software validation documentation... demonstrate that the device meets established performance and safety requirements and is therefore deemed safe and effective." However, it does not provide a table of specific acceptance criteria or quantitative performance metrics for those criteria. The "performance" being evaluated is primarily the functional equivalence and safety of the software modifications.
| Acceptance Criteria (Implied) | Reported Device Performance (Implied) |
|---|---|
| **Functional Equivalence:** The software should perform key functionalities (displaying and analyzing ultrasound images, accessing data from specific ultrasound systems, running applications such as CHI-Q and TDI-Q, 2D wall motion tracking) in a manner equivalent to the predicate devices and the diagnostic ultrasound systems from which the data is acquired. | "UltraExtend USWS-900A v2.1 and v3.1 is a post-processing software that implements functionality and operability equivalent to that of the diagnostic ultrasound system the data was acquired from, providing a seamless image reading environment..." The modifications make data from the Aplio 300, 400, and 500 systems accessible, and new applications (CHI-Q, TDI-Q) and features (2D wall motion tracking) were added. |
| **Safety:** The modifications should not introduce new safety concerns, and the device should comply with relevant regulations and standards. | "Risk Analysis, Verification/Validation testing conducted through bench testing... demonstrate that the device meets established performance and safety requirements and is therefore deemed safe and effective." The device was designed and manufactured under Quality System Regulations (21 CFR §820 and ISO 13485), and IEC 62304 processes were implemented. |
| **Compatibility:** The software should be compatible with the specified operating systems (Windows XP for v2.1, Windows 7 for v3.1) and able to access data from the listed Aplio diagnostic ultrasound systems. | UltraExtend USWS-900A v2.1 runs under Windows XP and v3.1 runs under Windows 7. Data acquired by the Aplio 300, Aplio 400, and Aplio 500 Diagnostic Ultrasound Systems is accessible. |
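Although the submission reports only that bench testing was performed, functional-equivalence claims of this kind are typically exercised with automated regression tests that compare the PC software's output against reference output produced on the ultrasound system itself. A minimal sketch of that idea follows; the `postprocess` placeholder, the synthetic frame, and the tolerance are all hypothetical, since no acceptance thresholds are published in the 510(k).

```python
# Minimal sketch of a functional-equivalence bench check. `postprocess` is a
# hypothetical stand-in for the PC software's processing chain, and the
# "reference" stands in for output captured on the ultrasound system itself.
import numpy as np

def postprocess(raw: np.ndarray) -> np.ndarray:
    """Hypothetical PC-side processing chain (placeholder: identity map)."""
    return raw.copy()

rng = np.random.default_rng(0)
raw = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)  # exported frame
reference = raw.copy()                                       # on-system output

result = postprocess(raw)

# Equivalence criterion: identical geometry, pixel values within tolerance.
assert result.shape == reference.shape, "output geometry must match"
max_err = np.max(np.abs(result.astype(int) - reference.astype(int)))
assert max_err <= 1, f"deviates from on-system reference (max error {max_err})"
print(f"functional-equivalence check passed (max pixel error {max_err})")
```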
Study Details (Based on the document, many are not applicable for a 510(k) modification without clinical studies)
1. Sample size used for the test set and the data provenance:
- Test Set Sample Size: Not explicitly stated as a separate "test set" in the context of a clinical study. The validation involved "bench testing" and "software validation documentation." This typically means testing against a variety of use cases and scenarios, but the number of cases or the specific data used for this internal validation is not provided.
- Data Provenance: Not specified. As no clinical studies were performed, there's no mention of country of origin or retrospective/prospective data for a clinical test set. The data would likely be internally generated or from existing Aplio systems.
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable as no clinical study with expert-established ground truth was conducted. The "ground truth" for software validation would be adherence to functional specifications and absence of bugs, verified by software engineers and quality assurance personnel.
3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable as no clinical study with adjudicated results was conducted.
4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with AI versus without AI assistance:
- No MRMC study was done. The document explicitly states: "UltraExtend USWS-900A v2.1 and v3.1 did not require clinical studies to support substantial equivalence." This is a software for displaying and analyzing images, not an AI diagnostic tool requiring MRMC evaluation for reader improvement.
5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Not directly applicable in the sense of an algorithmic diagnostic performance study. The "standalone" performance here refers to the software's ability to correctly process and display images, and run its embedded applications. This was assessed through "Risk Analysis, Verification/Validation testing conducted through bench testing, as well as software validation documentation."
6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For software validation, the "ground truth" is the software requirements and specifications. The validation process verifies that the software functions as designed and meets these predefined requirements, rather than clinical ground truth such as pathology or expert consensus. (A sketch of what such requirements-traced testing can look like follows this list.)
7. The sample size for the training set:
- Not applicable. This is not a machine learning or AI device that requires a separate "training set" in the context of developing a diagnostic algorithm. It's a software package for image post-processing and display.
8. How the ground truth for the training set was established:
- Not applicable, as no training set (in the ML context) was used.
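As flagged in item 6 above, when the ground truth is the requirements specification, validation evidence usually takes the form of tests traced to requirement IDs. Below is a minimal pytest-style sketch under that assumption; the requirement IDs and the `load_study` stub are invented for illustration, as the actual requirements specification is not part of the public summary.

```python
# Hypothetical requirements-traced validation tests (pytest style). The
# requirement IDs and the load_study stub are invented for illustration.
import numpy as np

def load_study(path: str) -> np.ndarray:
    """Stub loader returning a (frames, rows, cols) cine array."""
    return np.zeros((10, 480, 640), dtype=np.uint8)

def test_req_001_loads_aplio_cine_loops():
    """REQ-001: data acquired from Aplio systems shall be loadable on the PC."""
    frames = load_study("example_aplio_study")
    assert frames.ndim == 3 and frames.shape[0] >= 1

def test_req_002_display_preserves_frame_geometry():
    """REQ-002: displayed frames shall retain the acquired image geometry."""
    frames = load_study("example_aplio_study")
    assert frames.shape[1:] == (480, 640)
```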