Search Results
Found 4 results
510(k) Data Aggregation
(132 days)
The Diagnostic Ultrasound System Aplio 500 Model TUS-A500, Aplio 400 Model TUS-A400 and Aplio 300 Model TUS-A300 is indicated for the visualization of structures and dynamic processes within the human body using ultrasound and to provide image information for diagnosis in the following clinical applications: fetal, abdominal, intra-operative (abdominal), pediatric, small organs, trans-vaginal, trans-rectal, neonatal cephalic, adult cephalic, cardiac (both adult and pediatric), peripheral vascular, transesophageal, and musculo-skeletal (both conventional and superficial).
The Aplio 500 Model TUS-A500, Aplio 400 Model TUS-A400 and Aplio 300 Model TUS-A300 are mobile diagnostic ultrasound systems. These systems are Track 3 devices that employ a wide array of probes, including flat linear array, convex linear array, and sector array, with frequency ranges from approximately 2 MHz to 12 MHz.
This is a 510(k) premarket notification for modifications to an ultrasound system, not for an AI device. The document describes the device as the "Aplio 500 Model TUS-A500, Aplio 400 Model TUS-A400 and Aplio 300 Model TUS-A300" diagnostic ultrasound systems. The submission is for "Modification of a cleared device" that "improves upon existing features including the image visualization of blood flow."
Therefore, the prompt's request for "acceptance criteria and the study that proves the device meets the acceptance criteria" in the context of an AI device, along with details like "sample size used for the test set," "number of experts used to establish the ground truth," "adjudication method," "MRMC comparative effectiveness study," "standalone performance," and "training set," is not applicable to this document.
The document does not describe an Artificial Intelligence (AI) / Machine Learning (ML) enabled device. It is a traditional medical device modification.
Here's what can be extracted regarding performance testing, although it's not in the context of AI acceptance criteria:
1. A table of acceptance criteria and the reported device performance:
This document does not provide specific quantitative acceptance criteria or detailed performance metrics in the format typically seen for AI device evaluations. The submission states:
- Acceptance Criteria (Implicit): The device modifications meet the requirements for improved/added features. The device is safe and effective for its intended use.
- Reported Device Performance: The modifications improve existing features, specifically "the image visualization of blood flow." The document also lists the various clinical applications and modes of operation for which the system and its transducers are indicated (e.g., Fetal, Abdominal, Cardiac, Peripheral Vascular, etc., and B-mode, M-mode, PWD, CWD, Color Doppler, etc.). However, it does not provide quantitative results like sensitivity, specificity, or image quality scores for these improvements or listed functionalities, as would be expected for an AI device.
2. Sample size used for the test set and the data provenance:
- Sample Size: Not specified for any test set.
- Data Provenance: "acquisition of representative clinical images" was conducted as part of the testing. No country of origin is mentioned, and the document does not state whether the images were acquired retrospectively or prospectively.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable as this is not an AI device submission requiring expert human ground truth for algorithm performance evaluation. Testing involved "bench testing and the acquisition of representative clinical images."
4. Adjudication method for the test set:
- Not applicable as this is not an AI device submission requiring adjudication of human expert annotations or ground truth.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC comparative effectiveness study was done, as this is not an AI device.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Not applicable, as this is not an AI device.
7. The type of ground truth used:
- For the "acquisition of representative clinical images", the ground truth is implicitly the clinical reality captured by the ultrasound imaging, verified by standard clinical interpretation and potentially other diagnostic methods. However, the document does not elaborate on how this "ground truth" was formally established or used to evaluate the new features of the device (like improved blood flow visualization) beyond stating that the features met requirements.
8. The sample size for the training set:
- Not applicable, as this is not an AI device and thus has no training set in the AI/ML sense.
9. How the ground truth for the training set was established:
- Not applicable, as this is not an AI device and thus has no training set.
(42 days)
The Diagnostic Ultrasound System Aplio 500 Model TUS-A500, Aplio 400 Model TUS-A400 and Aplio 300 Model TUS-A300 is indicated for the visualization of structures and dynamic processes within the human body using ultrasound and to provide image information for diagnosis in the following clinical applications: fetal, abdominal, intraoperative (abdominal), pediatric, small organs, trans-rectal, neonatal cephalic, adult cephalic, cardiac (both adult and pediatric), peripheral vascular, transesophageal, and musculo-skeletal (both conventional and superficial).
The Aplio 500 Model TUS-A500, Aplio 400 Model TUS-A400 and Aplio 300 Model TUS-A300 are mobile diagnostic ultrasound systems. These systems are Track 3 devices that employ a wide array of probes, including flat linear array, convex linear array, and sector array, with frequency ranges from approximately 2 MHz to 12 MHz.
The provided text is a 510(k) summary for a Diagnostic Ultrasound System (Aplio 500, 400, 300 V3.0). It primarily details the device description, intended uses, and safety standards it complies with. However, it does not include information about specific acceptance criteria related to a study proving the device meets performance metrics, nor does it describe such a study. The "Testing" section mentions "Verification/Validation testing conducted through bench testing," but provides no details on the methodology, sample sizes, or results of these tests, particularly concerning clinical performance or AI integration.
Therefore, I cannot fulfill most of the requested information regarding acceptance criteria and a study to prove device performance because the provided document does not contain that level of detail. It is a regulatory submission focused on substantial equivalence to a predicate device and compliance with general safety and performance standards rather than a performance study report.
Here's what can be extracted based on the provided text, and where information is missing:
Acceptance Criteria and Study for Device Performance (Based on available information and typical assumptions for such submissions):
The document details compliance with various standards and states that "Verification/Validation testing conducted through bench testing...demonstrates that the requirements for the improved/added features have been met." However, specific numerical or qualitative acceptance criteria for these tests or the detailed performance metrics are not provided in this summary.
Given that this is a 510(k) for a modification of an existing diagnostic ultrasound system, the acceptance criteria would typically revolve around demonstrating that the modified device performs equivalently to the predicate device and continues to meet established safety and effectiveness standards for diagnostic ultrasound imaging. These standards are general for an ultrasound system and are linked to the capabilities of the various transducers across different clinical applications (e.g., Fetal, Abdominal, Cardiac, Musculo-skeletal).
For example, implicit acceptance would involve:
- Image Quality: Resolution, contrast, penetration, and artifact levels being equivalent to or better than the predicate device.
- Doppler Accuracy: Accurate measurement of blood flow velocities.
- Safety: Compliance with acoustic output limits and electrical safety standards (a hypothetical check of this item is sketched after this list).
- Functionality: All advertised imaging modes (B-mode, M-mode, PWD, CWD, Color Doppler, THI, Dynamic Flow, Power, CHI 2D, 4D, etc.) function as intended.
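As a purely illustrative aid, the sketch below shows how the safety item above (acoustic output within limits) might be checked on the bench. The limit values, function name, and measurements are assumptions for illustration; none of them are taken from the 510(k) document.

```python
# Hypothetical bench check of acoustic output against commonly cited FDA
# Track 3 display limits. The limit values below are assumed illustrative
# figures; verify them against the applicable FDA guidance.

MAX_MI = 1.9              # assumed mechanical index limit
MAX_ISPTA_MW_CM2 = 720.0  # assumed derated ISPTA limit, mW/cm^2

def acoustic_output_ok(mi: float, ispta_mw_cm2: float) -> bool:
    """True if both measured acoustic output indices fall within the assumed limits."""
    return mi <= MAX_MI and ispta_mw_cm2 <= MAX_ISPTA_MW_CM2

# Example: one transducer/mode combination measured on the bench (hypothetical values).
print(acoustic_output_ok(mi=1.2, ispta_mw_cm2=430.0))  # True
```

The same pattern, a measured value compared against a fixed limit or specification, would apply to the other conformance checks listed above.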
Study Proving Device Meets Acceptance Criteria:
The document mentions "Verification/Validation testing conducted through bench testing." This refers to internal company testing to ensure the device performs as designed and meets regulatory requirements. It is not a clinical study designed to statistically prove the device's diagnostic performance against a ground truth in real patient scenarios.
1. Table of Acceptance Criteria and Reported Device Performance:
| Acceptance Criteria (Implied/General) | Reported Device Performance (Summary Statement) |
|---|---|
| Compliance with general safety and performance standards (e.g., IEC) | "This device is in conformance with the applicable parts of the IEC60601-1, IEC 60601-1-1, IEC 60601-1-2, IEC 60601-1-4, IEC 60601-2-37, IEC 62304, NEMA UD3 Output Display and ISO 10993-1 standards." (Section 14) |
| Device functions as intended and improves existing features | "Verification/Validation testing conducted through bench testing...demonstrates that the requirements for the improved/added features have been met." (Section 15) |
| Image quality and functionality equivalent to predicate device | "The Aplio 500 Model TUS-A500 Version 3.0, Aplio 400 Model TUS-A400 Version 3.0 and Aplio 300 Model TUS-A300 Version 3.0, functions in a manner similar to and is intended for the same use as the predicate device." (Section 13) |
2. Sample size used for the test set and the data provenance:
- Sample Size: Not specified. "Bench testing" generally implies testing on phantoms, cadavers, or simulated environments, rather than a specific sample size of human subjects for clinical performance evaluation as would be seen in an AI/software study.
- Data Provenance: Not specified. Given it's "bench testing," it would be internally generated data, likely from controlled laboratory environments. No indication of country of origin, or retrospective/prospective acquisition.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable/Not specified. For "bench testing" of an ultrasound system, "ground truth" would be established through physical measurements, known properties of test objects (phantoms), and engineering specifications rather than expert human interpretation of images for diagnostic accuracy.
4. Adjudication method for the test set:
- Not applicable/Not specified. Adjudication is typically for human interpretations or highly subjective measurements, not standard engineering bench tests of a diagnostic imaging system's technical performance.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No. This submission (2013) predates widespread AI-assisted diagnostic ultrasound systems and is for a general diagnostic ultrasound system, not an AI-specific device or feature. Therefore, no MRMC study or AI assistance effect size is mentioned.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Not applicable. This is a diagnostic ultrasound system, which inherently requires human operation and interpretation. There is no "algorithm only" performance reported in this context.
7. The type of ground truth used:
- For "bench testing," the ground truth would typically be:
- Engineering Specifications: Measured values compared against design specifications.
- Physical Phantoms: Known properties (e.g., size, density, flow rate) of objects within a phantom.
- Reference Devices: Comparison against calibrated reference measurement equipment.
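To make the list above concrete, here is a minimal sketch of a bench comparison against one such ground truth: the known flow velocity of a calibrated Doppler phantom, checked against an assumed accuracy specification. The phantom value, tolerance, and function name are all hypothetical and not taken from the document.

```python
# Illustrative only: comparing a measured Doppler velocity against the known
# (ground-truth) flow velocity of a calibrated phantom, using an assumed
# accuracy specification.

PHANTOM_FLOW_VELOCITY_CM_S = 50.0  # known phantom setting (ground truth, assumed)
SPEC_MAX_RELATIVE_ERROR = 0.10     # assumed 10% accuracy specification

def doppler_velocity_within_spec(measured_cm_s: float) -> bool:
    """True if the measured velocity agrees with the phantom's known flow
    velocity within the assumed accuracy specification."""
    relative_error = abs(measured_cm_s - PHANTOM_FLOW_VELOCITY_CM_S) / PHANTOM_FLOW_VELOCITY_CM_S
    return relative_error <= SPEC_MAX_RELATIVE_ERROR

print(doppler_velocity_within_spec(48.0))  # True: within 10% of 50 cm/s
```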
8. The sample size for the training set:
- Not applicable/Not specified. This document describes a diagnostic ultrasound hardware and software system, not an AI/machine learning algorithm that requires a "training set" in the context of deep learning models.
9. How the ground truth for the training set was established:
- Not applicable, as there is no mention of a "training set" in the AI/machine learning sense.
(94 days)
The DIAGNOSTIC ULTRASOUND SYSTEM APLIO 500 MODEL TUS-A500, APLIO 400 MODEL TUS-A400 and APLIO 300 MODEL TUS-A300 is indicated for the visualization of structures and dynamic processes within the human body using ultrasound and to provide image information for diagnosis in the following clinical applications: fetal, abdominal, intra-operative (abdominal), pediatric, small organs, trans-vaginal, trans-rectal, neonatal cephalic, adult cephalic, cardiac (both adult and pediatric), peripheral vascular, transesophageal, and musculoskeletal (both conventional and superficial).
The DIAGNOSTIC ULTRASOUND SYSTEM APLIO 500 MODEL TUS-A500, APLIO 400 MODEL TUS-A400 and APLIO 300 MODEL TUS-A300 are mobile systems. These systems are Track 3 devices that employ a wide array of probes that include flat linear array, convex linear array, and sector array with a frequency range of approximately 2 MHz to 12 MHz.
The provided text is a 510(k) summary for the Toshiba DIAGNOSTIC ULTRASOUND SYSTEM APLIO 500 MODEL TUS-A500, APLIO 400 MODEL TUS-A400, and APLIO 300 MODEL TUS-A300 Version 2.1. It primarily focuses on demonstrating substantial equivalence to a predicate device (Toshiba DIAGNOSTIC ULTRASOUND SYSTEM APLIO 500 MODEL TUS-A500 / APLIO 400 MODEL TUS-A400 / APLIO 300 MODEL TUS-A300 V2.0; 510(k) control number K110870) and outlining the intended uses for various transducers.
Based on the provided document, the information requested in your prompt about acceptance criteria and a study proving performance (specifically in the context of AI/ML or new clinical parameters) is not present. This document predates the widespread regulatory submissions for AI/ML devices in medical imaging, and its focus is on general ultrasound system functionality and traditional clinical applications.
Therefore, I cannot provide a table of acceptance criteria and reported device performance for AI features, nor can I provide details about sample sizes for test/training sets, data provenance, number/qualifications of experts, adjudication methods, MRMC comparative effectiveness studies, or standalone algorithm performance studies related to AI.
The document does include detailed tables for each transducer showing the "Intended Use: Diagnostic ultrasound imaging or fluid flow analysis of the human body as follows:", listing various clinical applications and modes of operation. For each application, it indicates whether the indication is "P" (previously cleared by FDA), "E" (added under this appendix), or, for a very few, "N" (new indication). This is essentially a declaration of the intended uses, but it does not specify performance acceptance criteria or provide study results to demonstrate performance for these applications.
The document mentions compliance with several standards, such as IEC 60601-1 and its parts, IEC 62304, and AIUM-NEMA UD2/UD3 standards. These are general safety and performance standards for medical electrical equipment and ultrasound output measurement/display. They are compliance standards, not specific acceptance criteria for AI or diagnostic performance in patient studies.
In summary, the provided content is a regulatory submission for device clearance based on substantial equivalence to a predicate device, focusing on intended uses and compliance with general safety and performance standards. It does not contain the kind of detailed information about acceptance criteria and performance studies you're asking for, particularly concerning AI or specific diagnostic efficacy metrics.
(23 days)
The intended use of this system is to visualize structures, characteristics, and dynamic processes within the human body using ultrasound and to provide image information for diagnosis in cardiac and vascular applications.
The DIAGNOSTIC ULTRASOUND SYSTEM APLIO ARTIDA (Model SSH-880CV) is intended to be used for the following types of studies: cardiac, transesophageal, abdominal and peripheral vascular.
Diagnostic ultrasound imaging or fluid flow analysis of the human body as follows: Cardiac Adult, Cardiac Pediatric, Trans-esoph. (Cardiac), Peripheral vessel, Abdominal, Small Organ (Specify) (1), Musculo-skeletal (Conventional), Musculo-skeletal (Superficial).
The APLIO ARTIDA SSH-880CV is a mobile Ultrasound Diagnostic System for cardiology and vascular imaging. It is capable of providing real-time 3D images of the heart as well as 2D images. The system consists of a main console, a color LCD display and several transducers. The compatible transducers are linear array, curved linear and phased array with a frequency range of 2.5 MHz to 7.5 MHz. Accordingly, it has various software options for cardiac and vascular examinations.
Here's an analysis of the acceptance criteria and study information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The submission primarily focuses on the substantial equivalence of the APLIO ARTIDA MODEL SSH-880CV V3.0 (subject device) to the APLIO ARTIDA MODEL SSH-880CV V2.0 (predicate device), with the addition of a new feature: Activation Imaging (AI) - 3D Wall Motion Tracking (3D WMT). The acceptance criterion for this new feature is qualitative and tied to its ability to display activation timing.
| Acceptance Criterion | Reported Device Performance |
|---|---|
| AI images provide activation timing. | The result of the clinical evaluation satisfied a pass criterion. |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 10 subjects.
- Data Provenance: The text states, "A clinical evaluation of Activation Imaging (AI) was conducted at a evaluation site for the validation of AI." No specific country of origin is mentioned, but the submitter's address is in Tustin, CA, USA, and the device manufacturer is Toshiba Medical Systems Corporation, Japan. This suggests it could be a US-based study or an international study. The study was prospective in nature, as indicated by "scheduled for routine Echocardiographic Evaluation."
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
The document does not explicitly state the number of experts used to establish ground truth or their specific qualifications (e.g., "radiologist with 10 years of experience"). It only mentions that subjects were "scheduled for routine Echocardiographic Evaluation by their physician," implying that physicians (likely cardiologists or specialized sonographers) were involved in the standard diagnostic process which would inform the assessment of "dyssynchrony."
4. Adjudication Method for the Test Set
The document does not specify an adjudication method (e.g., 2+1, 3+1, none) for the test set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
No MRMC comparative effectiveness study is mentioned. The study was a clinical evaluation of the AI feature itself, not a comparison of human readers with vs. without AI assistance. The focus was on whether the AI images provided activation timing, not on how it improved human reader performance.
6. If a Standalone (Algorithm-Only Without Human-in-the-Loop Performance) Was Done
Yes, a standalone evaluation was performed. The "pass/fail criterion was used to determine if the AI images provided the activation timing." This directly assesses the algorithm's output without requiring human interpretation for its performance evaluation against a specific criterion.
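The summary gives only the overall outcome (the pass criterion was satisfied), not how individual subjects were scored. The sketch below is therefore only an assumed illustration of a per-subject pass/fail tally over the 10 evaluated subjects; the aggregation rule, function name, and results are hypothetical.

```python
# Illustrative only: the 510(k) summary states that a pass/fail criterion was
# applied to whether Activation Imaging (AI) images provided activation timing.
# The per-subject scoring and the "all subjects must pass" rule below are
# assumptions, not details taken from the document.

from typing import List

def activation_imaging_passes(per_subject_results: List[bool]) -> bool:
    """True if every evaluated subject's AI images showed activation timing
    (assumed aggregation rule)."""
    return all(per_subject_results)

# Example with the 10 subjects reported in the clinical evaluation,
# using hypothetical per-subject outcomes.
subject_results = [True] * 10
print(activation_imaging_passes(subject_results))  # True
```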
7. The Type of Ground Truth Used
The ground truth was established by assessing if the AI images "provided the activation timing" for subjects with "dyssynchrony." This implies a clinical assessment of myocardial movement from the acquired 3D images, likely evaluated against established medical understanding of cardiac dyssynchrony and activation timing. While not explicitly stated as "expert consensus," the nature of the evaluation for a diagnostic ultrasound feature inherently relies on clinical judgment and established diagnostic criteria within the medical field. It's an implicit clinical ground truth based on the physician's evaluation rather than a pathology result or outcome data.
8. The Sample Size for the Training Set
The document does not provide any information regarding a training set or its sample size. The focus is solely on the clinical evaluation of the new feature.
9. How the Ground Truth for the Training Set Was Established
Since no training set is mentioned, there is no information on how its ground truth was established.