Search Results
Found 105 results
510(k) Data Aggregation
(108 days)
TOSHIBA AMERICA MEDICAL SYSTEMS INC
Vantage Elan systems are indicated for use as a diagnostic imaging modality that produces cross-sectional transaxial, coronal, sagittal, and oblique images that display anatomic structures of the head or body. Additionally, this system is capable of non-contrast enhanced imaging, such as MRA.
MRI (magnetic resonance imaging) images correspond to the spatial distribution of protons (hydrogen nuclei) that exhibit nuclear magnetic resonance (NMR). The NMR properties of body tissues and fluids are:
- Proton density (PD) (also called hydrogen density)
- Spin-lattice relaxation time (T1)
- Spin-spin relaxation time (T2)
- Flow dynamics
- Chemical shift
Contrast agent use is restricted to the approved drug indications. When interpreted by a trained physician, these images yield information that can be useful in diagnosis.
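For context, the submission does not spell out how the tissue properties listed above combine into image contrast; a standard illustration (not taken from the document) is the spin-echo signal approximation, in which repetition time (TR) and echo time (TE) are chosen to weight the image toward T1, T2, or proton density:

```latex
% Approximate spin-echo signal from a voxel with proton density PD,
% longitudinal relaxation time T1, and transverse relaxation time T2:
S \;\propto\; \mathrm{PD}\,\bigl(1 - e^{-\mathrm{TR}/T_1}\bigr)\,e^{-\mathrm{TE}/T_2}
```

Short TR and TE emphasize T1 differences, long TR and TE emphasize T2 differences, and long TR with short TE yields a proton-density-weighted image.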
The Vantage Elan (Model MRT-2020) is a 1.5 Tesla Magnetic Resonance Imaging (MRI) System. The Vantage Elan uses a short (1.4 m), lightweight (4.1 t) magnet and includes Toshiba Pianissimo™Σ technology (scan noise reduction technology). The design of the gradient coil and the whole-body coil of the Vantage Elan provides a maximum field of view of 55 x 50 cm. The Model MRT-2020/A1 is configured without a secondary cooling system, and the Model MRT-2020/A2 is configured with one. The Vantage Elan MRI System is comparable to the current 1.5T EXCELART Vantage Titan MRI System (K120638), cleared Jun 1, 2012, with the following modifications.
This document is a 510(k) premarket notification for the Vantage Elan (Model MRT-2020) Magnetic Resonance Imaging (MRI) System. The focus of the submission is to demonstrate substantial equivalence to a predicate device (EXCELART Vantage Titan MRI System, K120638) despite hardware and software changes.
The document does not describe a study to prove acceptance criteria for a device with machine learning or AI components. Instead, it outlines the regulatory review process for a conventional medical imaging device. The "acceptance criteria" and "device performance" in this context refer to the device's conformance to established safety regulations, technical standards, and its ability to produce diagnostic-quality images.
Therefore, for your request, I will interpret "acceptance criteria" as the performance and safety standards the device must meet, and "device performance" as how the Vantage Elan compared to these standards or to its predicate device. Since it's a traditional medical device submission, many of your requested items (like sample sizes for test sets, data provenance, ground truth establishment for AI models, MRMC studies, or standalone algorithm performance) are not applicable or explicitly mentioned in the context you've provided.
Here is an analysis based on the provided text, addressing your points where possible:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria (Implied by Regulatory Standards & Predicate Comparison) | Reported Device Performance (Vantage Elan vs. Predicate) |
---|---|
Safety Parameters: | |
Whole-body maximum SAR (Specific Absorption Rate) conforming to IEC 60601-2-33 (2010), ≤ 4 W/kg (1st operating mode) | 4 W/kg for whole body (1st operating mode specified in IEC 60601-2-33 (2010)) |
Maximum dB/dt conforming to IEC 60601-2-33 (2010) (1st operating mode) | - |
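For readers unfamiliar with the quantity behind the 4 W/kg limit, SAR is the RF power absorbed per unit of tissue mass. A conventional definition (included here for context only; it is not stated in the submission) is:

```latex
% Local SAR from the induced RF electric field E, tissue conductivity sigma, and mass density rho:
\mathrm{SAR}(\mathbf{r}) = \frac{\sigma(\mathbf{r})\,\lvert E(\mathbf{r})\rvert^{2}}{2\,\rho(\mathbf{r})}
% Whole-body SAR is the total absorbed RF power divided by the patient mass M
% (IEC 60601-2-33 applies the limit to a time-averaged value):
\mathrm{SAR}_{\mathrm{wb}} = \frac{1}{M}\int_{\mathrm{body}} \frac{\sigma(\mathbf{r})\,\lvert E(\mathbf{r})\rvert^{2}}{2}\,dV = \frac{P_{\mathrm{abs}}}{M}
```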
(206 days)
TOSHIBA AMERICA MEDICAL SYSTEMS, INC.
- Intended to be used as a universal diagnostic imaging system for radiographic and fluoroscopic examinations, including general R&F and pediatric examinations.
- Intended for use by a qualified/trained doctor or technologist on both adult and pediatric subjects, taking diagnostic and fluoroscopic exposures of the whole body, skull, spinal column, extremities, and other body parts. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position.
The main function of the KALARE (DREX-KL80) is to perform fluoroscopic/radiographic examinations of the gastrointestinal tract, support for endoscopy, nonvascular contrast studies, general abdominal radiography, and general skeletal radiography. Using the fluorescent scintillation effects of X-rays that have passed through the patient's body, image information is obtained for medical diagnosis and treatment.
This 510(k) submission (K133553) describes a modification to an existing device, the KALARE (DREX-KL80), which involves the addition of a previously cleared Flat Panel Detector (FPD). As such, the study performed is a justification of substantial equivalence rather than a detailed performance study against specific acceptance criteria for a novel device.
Here's an analysis of the provided information, addressing your points where possible:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Safety | The device is designed and manufactured under Quality System Regulations (21 CFR § 820 and ISO 13485); conforms with applicable parts of the IEC 60601-1, IEC 60601-2-7, IEC 60601-2-28, and IEC 60601-2-32 standards; and meets all requirements of the Federal Diagnostic Equipment Standard (21 CFR §1020). |
Effectiveness | Bench testing confirmed that the installation of the detector met the stated specifications of the component manufacturer. Comparative testing compared image quality, artifacts, and motion/dynamic capabilities between the predicate device and the modified device. Conclusion: the testing demonstrated that substantial equivalence to the predicate device could be proven without the use of clinical images. |
Indications for Use Unchanged | The modifications incorporated into the KALARE do not change the indications for use or the intended use of the device. |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: Not applicable in the traditional sense of a clinical test set. The study primarily involved bench testing and comparative non-clinical performance evaluations.
- Data Provenance: The testing was conducted in accordance with applicable standards published by the International Electrotechnical Commission (IEC). The device's manufacturing site is in Japan (Toshiba Medical Systems Corporation). The specific location or "provenance" of the bench testing is not explicitly stated beyond "Testing of the modified system was conducted...". The study type is a verification and validation study focused on demonstrating substantial equivalence through non-clinical means.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- This information is not provided because the study did not involve a clinical test set with a need for expert-established ground truth. The evaluation was primarily technical and comparative against the predicate device's established performance and component specifications.
4. Adjudication Method for the Test Set
- Not applicable, as no external expert adjudication of clinical cases was performed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No, an MRMC comparative effectiveness study was not done. The submission explicitly states: "The conclusion of this testing demonstrated that substantial equivalence to the predicate device could be proven without the use of clinical images."
6. Standalone (Algorithm Only) Performance Study
- No, a standalone (algorithm only) performance study was not done for a new or modified algorithm. The device is an X-ray system that incorporates a (modified) detector, not an AI algorithm.
7. Type of Ground Truth Used
- The "ground truth" for this submission would be defined by the technical specifications of the previously cleared FPD component, the established performance characteristics of the predicate device, and the applicable industry standards (IEC, 21 CFR §1020). The testing aimed to show that the new configuration met these technical and performance benchmarks.
8. Sample Size for the Training Set
- Not applicable. This is a submission for a modification to a medical device (an X-ray system), not for a machine learning or AI algorithm that would require a training set.
9. How the Ground Truth for the Training Set Was Established
- Not applicable, as there was no training set for an AI algorithm.
(60 days)
TOSHIBA AMERICA MEDICAL SYSTEMS, INC.
The DIAGNOSTIC ULTRASOUND SYSTEM APLIO ARTIDA (Model SSH-880CV) is intended to be used for the following types of studies: cardiac, transesophageal, abdominal and peripheral vascular.
The APLIO ARTIDA, Model SSH-880CV is a mobile diagnostic ultrasound system. This is a Track 3 device that employs a wide array of probes including convex, pencil, flat linear array and sector array, with a frequency range of approximately 2.0 MHz to 7.5 MHz. This system supports basic measurements including distance, time, angle, and trace, as well as combinations of some basic measurements.
The provided 510(k) summary for the Toshiba Aplio Artida (SSH-880CV), V3.2, describes modifications to an already cleared diagnostic ultrasound system. It does not contain information about a study proving the device meets specific acceptance criteria in the typical sense of a clinical performance study with defined metrics.
Instead, the submission focuses on demonstrating substantial equivalence to a predicate device (Aplio Artida (SSH-880CV), V3.0) and confirming that "modifications to the cleared device which improves upon existing features including contour tracing and expansion of the sector display function for two existing transducers and use of a proprietary 4D render" do not change the indications for use or intended use.
Therefore, many of the requested categories for acceptance criteria and study details cannot be directly extracted from this document, as the submission relies on bench testing, software validation, risk management, and design controls to demonstrate safety and effectiveness for the modified features.
Here's a breakdown of the information that can be extracted or inferred, and what is explicitly not present:
1. A table of acceptance criteria and the reported device performance:
This document does not provide a table with specific quantitative acceptance criteria (e.g., sensitivity, specificity, accuracy targets) for clinical performance, nor does it report device performance against such criteria. The "performance" demonstrated is that the modified features function as intended and do not compromise safety or effectiveness compared to the predicate device.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
No information on a "test set" in the context of clinical performance data is provided. The testing mentioned is "bench testing" and "software validation," which are likely internal engineering verification and validation activities.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
Not applicable, as no external clinical test set and ground truth establishment by experts is described for this submission.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
Not applicable, as no external clinical test set and adjudication process is described for this submission.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
Not applicable. This submission is for modifications to a diagnostic ultrasound system (hardware/software features), not an AI algorithm intended to assist human readers in interpretation. There is no mention of AI or MRMC studies.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
Not applicable. This is not an AI algorithm submission. The device is a diagnostic ultrasound system operated by a human user.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
Not applicable in the context of clinical performance. The "ground truth" for the modified features would be their proper functioning as verified through engineering tests and adherence to specifications.
8. The sample size for the training set:
Not applicable, as this is not an AI/machine learning device that involves a training set.
9. How the ground truth for the training set was established:
Not applicable, as this is not an AI/machine learning device.
Summary of Device Acceptance and Study as Per Document:
- Acceptance Criteria (Implicit):
- The modified features (contour tracing, expanded sector display for two transducers, proprietary 4D render) function as intended.
- The modifications do not change the indications for use or the intended use of the device.
- The device continues to comply with relevant safety standards (e.g., IEC60601-1, IEC 60601-1-2, IEC 60601-2-37, IEC 62304, NEMA UD3).
- The device is designed and manufactured under Quality System Regulations (21 CFR § 820 and ISO 13485 Standards).
- Study That Proves Device Meets Acceptance Criteria:
- The submission refers to "Risk Analysis and verification testing conducted through bench testing" to demonstrate that "requirements for the improved/added features have been met."
- "Software Documentation for a Moderate Level of Concern" and "software validation" were included to support the software modifications.
- Testing was conducted in accordance with applicable IEC standards for Medical Devices.
- The conclusion states: "Based upon bench testing and successful completion of software validation, application of risk management and design controls, it is concluded that this device is safe and effective for its intended use."
In essence, this 510(k) submission for the Toshiba Aplio Artida V3.2 is for a modification to an existing device, and the "study" described is a series of engineering and software verification and validation activities rather than a clinical performance study with predefined acceptance metrics against patient data. The primary goal was to establish substantial equivalence and ensure the modifications did not introduce new safety or efficacy concerns.
(127 days)
TOSHIBA AMERICA MEDICAL SYSTEMS, INC.
The Angio Workstation is used in combination with an interventional angiography system (Infinix-i series systems and INFX-8000V and INFX-8000C systems) to provide 2D and 3D imaging and Dose Tracking System functions in selective catheter angiography for the heart, chest, and abdomen.
The XIDF-AWS801 Angio Workstation is a post-processing workstation whose software displays images in 2D or 3D format to provide additional information to the clinician. The software on this device remains unchanged with the exception of the XIDF-DTS802 software.
The dose tracking system (DTS) is an application software package intended to provide the estimated dose distribution information during radiographic and fluoroscopic procedures. The dose tracking system (DTS) calculates the radiation dose of the patient's skin using the exposure technique parameters and exposure geometry obtained from the x-ray imaging system and presents the cumulative results in a color mapping on a 3D graphic of the patient model.
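To make the description above concrete, here is a minimal sketch of the kind of calculation a skin-dose tracking application performs. It is illustrative only: the function and parameter names, the cone-shaped beam model, and the inverse-square scaling of a calibrated reference output are assumptions made for this sketch, not details taken from the Toshiba submission, and a real system applies additional corrections (table/pad attenuation, backscatter, kVp- and filtration-dependent output).

```python
import numpy as np

def estimate_skin_dose_map(skin_points, exposure_events,
                           reference_dose_rate, reference_distance=100.0):
    """Accumulate an estimated entrance skin dose (mGy) at each patient-model vertex.

    skin_points:         (N, 3) vertex coordinates of a 3D patient model, in cm.
    exposure_events:     per-pulse technique/geometry records from the X-ray system.
    reference_dose_rate: calibrated output (mGy/s) at reference_distance (cm).
    """
    dose = np.zeros(len(skin_points))
    for ev in exposure_events:
        focal_spot = np.asarray(ev["focal_spot_xyz"])         # source position
        beam_axis = np.asarray(ev["beam_axis_unit_vector"])    # central-ray direction
        half_angle = np.radians(ev["collimation_half_angle_deg"])

        vecs = skin_points - focal_spot
        dist = np.linalg.norm(vecs, axis=1)
        cos_angle = (vecs @ beam_axis) / dist
        in_beam = cos_angle >= np.cos(half_angle)              # vertices inside the cone

        # Inverse-square scaling of the calibrated output, times pulse duration.
        rate = reference_dose_rate * ev["relative_output"] * (reference_distance / dist) ** 2
        dose[in_beam] += rate[in_beam] * ev["pulse_duration_s"]
    return dose  # per-vertex cumulative dose, ready to color-map onto the 3D model
```

The returned per-vertex values correspond to the cumulative results that such a system would render as a color map on the 3D patient graphic.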
This 510(k) submission (K132106) describes a modification to the Toshiba XIDF-AWS801 Angio Workstation, specifically the addition of the XIDF-DTS802 Dose Tracking Software. The submission states that the added functionality does not change the intended use of the DTS and that the test methodology for verification remains unchanged from what was previously reported to the FDA (K123097). Therefore, the acceptance criteria and study details are primarily referred to in previous submissions.
Based on the provided document, here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria in a table format for the XIDF-DTS802. Instead, it refers to prior testing and general verification. The performance is summarized as follows:
Feature/Metric | Acceptance Criteria | Reported Device Performance |
---|---|---|
Dose Tracking System (DTS) Accuracy | Not explicitly stated in this document. Implied adherence to previous validation in K123097. | "Based upon this testing the accuracy of the displayed estimated dose was determined and is included in the user information." (No specific values provided in this document). |
Expanded Functionality (General Angiography, Radiography, C-arm positions) | Not explicitly stated. Verified through testing. | Functionality has been added and verified with the same test methodology as previously reported to the Agency (K123097). |
Conformance to Standards | IEC60601-1, 21 CFR §820 (Quality System Regulations), ISO 13485, 21 CFR §1020 (Federal Diagnostic Equipment Standard for X-rays). | The device is designed, manufactured, and conforms to applicable parts of these standards. |
2. Sample Size Used for the Test Set and the Data Provenance
- Sample Size: Not explicitly stated in this document. The testing involved "anthropomorphic phantoms and Lexan phantoms." The number of phantoms used or the quantity of data generated from them is not specified.
- Data Provenance: Not specified, but likely proprietary data generated by Toshiba America Medical Systems, Inc. through their testing. The document refers to "anthropomorphic phantoms and Lexan phantoms," indicating a laboratory testing environment rather than clinical data from human subjects.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
- Not stated. The ground truth for dose estimation in phantom studies is typically established through direct measurement with dosimeters, which serves as the "true" dose for comparison. Clinical experts are usually involved in designing such studies and interpreting the results, but establishing the ground truth (i.e., the actual dose delivered to the phantom) does not rely on human "readers" the way imaging interpretation studies do.
4. Adjudication Method for the Test Set
- Not applicable/Not stated. The testing methodology focuses on technical accuracy of dose estimation using phantoms rather than interpretation of clinical images by human readers requiring adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- No. This information is not mentioned in the document. The device is a dose tracking system, not an AI-powered diagnostic image interpretation tool that would typically undergo MRMC studies to assess reader improvement.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done
- Yes, implicitly. The testing described as using "anthropomorphic phantoms and Lexan phantoms to verify and validate the performance of the system" to determine the "accuracy of the displayed estimated dose" refers to the standalone performance of the DTS algorithm. The "human-in-the-loop" aspect for a dose tracking system would typically involve the clinician observing the dose display during a procedure, but the accuracy validation itself is a standalone assessment of the calculated dose versus a measured dose.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
- The ground truth for the test set was based on measured radiation doses in anthropomorphic and Lexan phantoms. For dose estimation systems, physical measurements using calibrated dosimetry equipment (e.g., ionization chambers, thermoluminescent dosimeters) provide the ground truth for comparison.
8. The Sample Size for the Training Set
- Not specified. The document does not provide details about a distinct "training set" as one might find for a machine learning algorithm. The DTS calculates dose based on exposure parameters and geometry, which are physics-based models, not typically "trained" on a large dataset in the same way a deep learning model would be. Calibration and underlying physics models would be developed and validated, but a separate "training set" as defined for statistical or AI models is not indicated.
9. How the Ground Truth for the Training Set Was Established
- Not applicable (based on the lack of a distinct "training set" as described above). The underlying models and calculations within the DTS would have been developed and verified through established physics principles and possibly empirical measurements, which serve as the foundation for the software's accuracy.
(33 days)
TOSHIBA AMERICA MEDICAL SYSTEMS, INC.
This software is intended for displaying and analyzing ultrasound images for medical diagnosis in cardiac and general examinations.
UltraExtend USWS-900A v2.1 and v3.1 is a software package that can be installed on a general-purpose personal computer (PC) to enable data acquired from Aplio diagnostic ultrasound systems (Aplio XG, Aplio MX, Aplio Artida, Aplio 300, Aplio 400 and Aplio 500) to be loaded onto the PC for image processing with other application software products. UltraExtend USWS-900A v2.1 and v3.1 is post-processing software that implements functionality and operability equivalent to that of the diagnostic ultrasound system the data was acquired from, providing a seamless image reading environment from examination using the diagnostic ultrasound system to diagnosis using the PC.
The provided document is a 510(k) Pre-market Notification for a software product called "UltraExtend USWS-900A v2.1 and v3.1." This submission is for a modification of an already cleared device and does not include a study proving device performance against acceptance criteria in the typical sense of a clinical trial for a novel device.
Instead, the submission focuses on demonstrating substantial equivalence to predicate devices. This means that the device is shown to function similarly and be intended for the same use as legally marketed devices.
Therefore, many of the requested categories for a study proving device performance are not directly applicable or are addressed differently in this type of submission.
Here's a breakdown based on the provided text:
Acceptance Criteria and Reported Device Performance
The document states that "Risk Analysis, Verification/Validation testing conducted through bench testing, as well as software validation documentation... demonstrate that the device meets established performance and safety requirements and is therefore deemed safe and effective." However, it does not provide a table of specific acceptance criteria or quantitative performance metrics for those criteria. The "performance" being evaluated is primarily the functional equivalence and safety of the software modifications.
Acceptance Criteria (Implied) | Reported Device Performance (Implied) |
---|---|
Functional Equivalence: The software should perform key functionalities (displaying, analyzing ultrasound images, accessing data from specific ultrasound systems, running applications like CHI-Q and TDI-Q, 2D wall motion tracking) in a manner equivalent to the predicate devices and the diagnostic ultrasound systems from which the data is acquired. | "UltraExtend USWS-900A v2.1 and v3.1 is a post-processing software that implements functionality and operability equivalent to that of the diagnostic ultrasound system the data was acquired from, providing a seamless image reading environment..." Modifications allow data from Aplio 300, 400, 500 systems to be accessible, and new applications (CHI-Q, TDI-Q) and features (2D wall motion tracking) were added. |
Safety: The modifications should not introduce new safety concerns and the device should comply with relevant regulations and standards. | "Risk Analysis, Verification/Validation testing conducted through bench testing... demonstrate that the device meets established performance and safety requirements and is therefore deemed safe and effective." Device designed and manufactured under Quality System Regulations (21 CFR §820 and ISO 13485 Standards) and IEC 62304 processes were implemented. |
Compatibility: The software should be compatible with specified operating systems (Windows XP for v2.1, Windows 7 for v3.1) and able to access data from the listed Aplio diagnostic ultrasound systems. | UltraExtend USWS-900A v2.1 runs under Windows XP and v3.1 runs under Windows 7. Allows data acquired by Aplio 300, Aplio 400 and Aplio 500 Diagnostic Ultrasound Systems to be accessible. |
Study Details (Based on the document, many are not applicable for a 510(k) modification without clinical studies)
- Sample size used for the test set and the data provenance:
- Test Set Sample Size: Not explicitly stated as a separate "test set" in the context of a clinical study. The validation involved "bench testing" and "software validation documentation." This typically means testing against a variety of use cases and scenarios, but the number of cases or the specific data used for this internal validation is not provided.
- Data Provenance: Not specified. As no clinical studies were performed, there's no mention of country of origin or retrospective/prospective data for a clinical test set. The data would likely be internally generated or from existing Aplio systems.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable as no clinical study with expert-established ground truth was conducted. The "ground truth" for software validation would be adherence to functional specifications and absence of bugs, verified by software engineers and quality assurance personnel.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable as no clinical study with adjudicated results was conducted.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC study was done. The document explicitly states: "UltraExtend USWS-900A v2.1 and v3.1 did not require clinical studies to support substantial equivalence." This is a software for displaying and analyzing images, not an AI diagnostic tool requiring MRMC evaluation for reader improvement.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Not directly applicable in the sense of an algorithmic diagnostic performance study. The "standalone" performance here refers to the software's ability to correctly process and display images, and run its embedded applications. This was assessed through "Risk Analysis, Verification/Validation testing conducted through bench testing, as well as software validation documentation."
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For software validation, the "ground truth" would be the software requirements and specifications. The validation process verifies that the software functions as designed and meets these predefined requirements, rather than clinical ground truth (like pathology or expert consensus).
- The sample size for the training set:
- Not applicable. This is not a machine learning or AI device that requires a separate "training set" in the context of developing a diagnostic algorithm. It's a software package for image post-processing and display.
- How the ground truth for the training set was established:
- Not applicable, as no training set (in the ML context) was used.
(134 days)
TOSHIBA AMERICA MEDICAL SYSTEMS, INC.
DTS is intended to display an approximation of both skin dose distribution and skin dose rate in real time during fluoroscopic interventional procedures of cardiac angiography. This software is intended for use on the Toshiba INFX-8000F CSi cardiac labs.
The dose tracking system (DTS) is an application software package intended to provide the estimated dose distribution information during X-ray fluoroscopic procedures. The dose tracking system (DTS) calculates the radiation dose of the patient's skin using the exposure technique parameters and exposure geometry obtained from the x-ray imaging system (Toshiba Infinix-i) and presents the cumulative results in a color mapping on a 3D graphic of the patient model.
Toshiba America Medical Systems, Inc. Pre-Market Notification 510(k) XIDF-DTS801; Dose Tracking Software
Here's an analysis of the provided 510(k) summary regarding the Dose Tracking System (DTS):
1. A table of acceptance criteria and the reported device performance
The provided document (K123097) is a 510(k) summary for a dose tracking system. It does not explicitly state specific acceptance criteria with numerical thresholds or reported device performance metrics in a direct table format as one might find in a detailed clinical study report. Instead, it generally states that "Testing was performed using anthropomorphic phantoms and Lexan phantoms to verify and validate the performance of the system. Based upon this testing the accuracy of the displayed estimated dose was determined and is included in the user information."
Without access to the "user information" or a more detailed test report, specific acceptance criteria and detailed performance metrics cannot be tabulated from this document. The existing information only confirms that some form of testing was conducted to verify accuracy.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The document does not specify the exact sample size for the test set. It mentions "anthropomorphic phantoms and Lexan phantoms" were used.
The data provenance is not explicitly stated regarding country of origin or whether it was retrospective/prospective. Given the nature of phantom testing, it's likely that the data was generated specifically for this pre-market notification (prospective) and would have occurred at the manufacturer's facility or a contracted testing facility.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
The document does not provide any information about experts being used to establish ground truth for the test set, nor their qualifications. Given that the testing involved phantoms, the ground truth would likely be based on physical measurements or known properties of the phantoms and applied radiation, rather than expert interpretation of medical images.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not describe any adjudication method for the test set. This is consistent with testing on phantoms where objective measurements are typically compared against the device's output, rather than subjective human interpretation requiring adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not conducted or described in this 510(k) summary. The device in question is a "Dose Tracking System" that calculates and displays estimated skin dose, not an AI diagnostic tool intended to assist human readers in image interpretation. Therefore, a study to measure improvement in human reader performance with or without AI assistance is not relevant to this type of device.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
Yes, the testing described appears to be a standalone (algorithm only) performance evaluation. The device (DTS software) calculates and displays dose information based on exposure parameters and geometry from the X-ray system. The "testing was performed using anthropomorphic phantoms and Lexan phantoms to verify and validate the performance of the system," which implies evaluating the accuracy of the system's dose estimations against known phantom conditions, independent of human interaction beyond operating the system.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The type of ground truth used would have been based on physical measurements or known radiation dose characteristics applied to the phantoms. For instance, dosimeters might have been used to precisely measure actual dose delivery to the phantoms under specific conditions, which would then serve as the ground truth to evaluate the accuracy of the DTS's estimated dose. It would not be expert consensus, pathology, or outcomes data, as these are typically relevant for diagnostic or clinical decision-making devices.
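As a purely illustrative sketch of how such an accuracy evaluation is typically summarized (the site readings and the ±20% tolerance below are invented for the example and do not come from the 510(k)), the estimated doses displayed by the software would be compared against the calibrated dosimeter readings at the same phantom locations:

```python
# Hypothetical comparison of DTS-estimated skin dose vs. dosimeter ground truth.
estimated_mGy = [412.0, 188.5, 96.3, 540.2]   # values displayed by the dose-tracking software
measured_mGy = [395.0, 201.0, 90.1, 512.8]    # calibrated TLD / ion-chamber readings

errors_pct = [100.0 * (est - meas) / meas
              for est, meas in zip(estimated_mGy, measured_mGy)]
mean_abs_error_pct = sum(abs(e) for e in errors_pct) / len(errors_pct)

print("Per-site error (%):", [round(e, 1) for e in errors_pct])
print("Mean absolute error (%):", round(mean_abs_error_pct, 1))

# A validation protocol would compare these errors to a pre-specified tolerance and
# report the resulting accuracy statement in the user information.
assert all(abs(e) <= 20.0 for e in errors_pct), "outside the assumed ±20% tolerance"
```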
8. The sample size for the training set
The document does not mention a training set or any details related to machine learning model training. The DTS calculates dose based on "exposure technique parameters and exposure geometry obtained from the x-ray imaging system" and "a reference dose table." This suggests a deterministic calculation based on established physics principles and possibly pre-calibrated data (reference dose table), rather than a machine learning approach requiring a distinct training set.
9. How the ground truth for the training set was established
As no training set is described or implied for a machine learning model, the question of how its ground truth was established is not applicable to this 510(k) submission. If the "reference dose table" is considered a form of pre-established data, its ground truth would have been established through physicist-derived radiation dose models and calibration measurements.
(108 days)
TOSHIBA AMERICA MEDICAL SYSTEMS, INC.
The MRI system is indicated for use as a diagnostic imaging modality that produces cross-sectional transaxial, coronal, sagittal, and oblique images that display anatomic structures of the head or body. In addition, this system supports non-contrast MRA. MRI (magnetic resonance imaging) images correspond to the spatial distribution of protons (hydrogen nuclei) that exhibit nuclear magnetic resonance (NMR). The NMR properties of body tissues and fluids are:
- Proton density (PD) (also called hydrogen density),
- Spin-lattice relaxation time (T1),
- Spin-spin relaxation time (T2),
- Flow dynamics,
- Chemical shift.
Contrast agent use is restricted to the approved drug indications. When interpreted by a trained physician, these images yield information that can be useful in diagnosis.
The Vantage Titan with Helios gradient (Model MRT-1504/U5) is a 1.5 Tesla Magnetic Resonance Imaging (MRI) System. The Vantage Titan with Helios gradient uses the same magnet as the Vantage Titan (K120638). The gradient performance was modified using the same gradient amplifier and gradient coil as the Vantage Titan HSR (K112003).
The provided text describes a 510(k) premarket notification for a modified MRI device (Vantage Titan with Helios gradient, model MRT-1504/U5). However, it does not contain the detailed information required to answer all parts of your request, specifically regarding acceptance criteria for device performance studies and the specifics of those studies (e.g., sample sizes for test/training sets, expert qualifications, adjudication methods, MRMC studies, or standalone algorithm performance).
The submission focuses on establishing substantial equivalence to previously cleared predicate devices (K120638: Vantage Titan, K112003: Vantage Titan HSR) by highlighting hardware and software modifications and confirming safety parameters and imaging performance are maintained.
Here's a breakdown of what can be answered based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state "acceptance criteria" for imaging performance in a quantifiable way beyond affirming that "Imaging Quality metrics utilizing phantoms are provided in this submission." Instead, it compares safety parameters to the predicate devices and states "No change from the previous predicate submission (K120638)" regarding imaging performance parameters.
Acceptance Criteria (Implied from Predicate Equivalence) | Reported Device Performance (Subject Device: Vantage Titan with Helios gradient, MRT-1504/U5) |
---|---|
Safety Parameters (Compared to Predicates) | |
Static field strength: 1.5T | 1.5T |
Peak and A-weighted acoustic noise: Comparable to predicates (Predicate K120638: 106.2 dB (A-weighted), 115.4 dB (peak); Predicate K112003: 113.0 dB (A-weighted), 121.6 dB (peak)) | 112.0 dB (A-weighted), 122.9 dB (peak) |
Operational modes: 1st operating mode | 1st operating mode |
Safety parameter display: SAR, dB/dt | SAR, dB/dt |
Operating mode access requirements: Allows screen access to 1st level operating mode | Allows screen access to 1st level operating mode |
Maximum SAR: 4W/kg for whole body (1st operating mode, IEC 60601-2-33 (2002)) | 4W/kg for whole body (1st operating mode, IEC 60601-2-33 (2002)) |
Maximum dB/dt: | |
(183 days)
TOSHIBA AMERICA MEDICAL SYSTEMS, INC.
The UltraExtend FX (TUW-U001S) is designed to allow the user to observe images and perform analysis using the examination data acquired with specified diagnostic ultrasound systems Aplio 500, Aplio 400 and Aplio 300.
UltraExtend FX is a software package that can be installed on a general-purpose personal computer (PC) to enable data acquired from Aplio diagnostic ultrasound systems (Aplio 300, Aplio 400 and Aplio 500) to be loaded onto the PC for image processing with other application software products. UltraExtend FX is post-processing software that implements functionality and operability equivalent to that of the diagnostic ultrasound system, providing a seamless image reading environment from examination using the diagnostic ultrasound system to diagnosis using the PC.
The provided document does not contain details about specific acceptance criteria, a study proving the device meets those criteria, or quantitative performance metrics. It primarily focuses on the device description, regulatory information, and a substantial equivalence determination to a predicate device.
Therefore, many of the requested sections (Table of acceptance criteria and reported device performance, Sample size used for the test set, Number of experts used, Adjudication method, MRMC study details, Standalone performance, Type of ground truth, Training set sample size, and Ground truth establishment for training set) cannot be extracted from this document.
However, based on the provided text, here's what can be inferred or stated:
- Device Name: UltraExtend FX, TUW-U001S
- Device Type: Software package for post-processing ultrasound images.
- Purpose: To enable observation and analysis of images acquired from specified diagnostic ultrasound systems (Aplio 300, Aplio 400, and Aplio 500).
Here's a response addressing the available information and noting the missing details for the requested categories:
Based on the provided K121076 510(k) summary for the Toshiba America Medical Systems UltraExtend FX, TUW-U001S:
This submission focuses on establishing substantial equivalence to a predicate device rather than presenting a detailed study with specific quantitative acceptance criteria and performance data for a novel algorithm or diagnostic aid. The device is a post-processing software for ultrasound images, and the validation activities appear to relate to ensuring the software functions as intended and is safe and effective in its role as an image viewer and analysis tool, in line with its predicate.
1. A table of acceptance criteria and the reported device performance:
Specific quantitative acceptance criteria and corresponding reported device performance metrics (e.g., sensitivity, specificity, accuracy, precision) for a diagnostic output are not provided in this document. The document primarily describes the functionality and comparative aspects for substantial equivalence.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
Details regarding the sample size, type, or provenance of any test sets (e.g., number of images, patient cases) used for formal performance evaluation are not explicitly mentioned in this document. The document states "Verification and validations tests were conducted on the subject device through bench testing to confirm device safety and effectiveness," which generally refers to software functionality testing, but not typically to clinical image-based performance studies in the way a diagnostic algorithm would be evaluated.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):
Information regarding experts, ground truth establishment, or their qualifications for any clinical test set is not present in the provided text.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
No information on adjudication methods for a test set is available in this document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study and any reported effect sizes for human reader improvement are not mentioned in this document. This submission pertains to an image processing workstation, not a device providing diagnostic assistance via AI.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
The document describes the UltraExtend FX as a "post-processing software that implements functionality and operability equivalent to that of the diagnostic ultrasound system, providing a seamless image reading environment from examination using the diagnostic ultrasound system to diagnosis using the PC." It's an analysis tool for images with human interpretation, not a standalone algorithm making diagnoses. Therefore, a standalone performance evaluation in the context of an autonomous AI algorithm is not applicable and not reported.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
Given the nature of the device as an image processing workstation, the concept of "ground truth" for diagnostic accuracy (e.g., pathology) is not directly applicable to its evaluation in the manner typically seen for AI diagnostic algorithms. For functional validation, the "ground truth" would likely relate to the correct display and measurement capabilities compared to the original system or expected outputs. Specifics are not detailed.
8. The sample size for the training set:
As this is an image processing software and not an AI/machine learning model, the concept of a "training set" in that context is not applicable, and no information is provided.
9. How the ground truth for the training set was established:
Not applicable, as there is no mention of an AI/machine learning model or training set.
(22 days)
TOSHIBA AMERICA MEDICAL SYSTEMS, INC.
This device is indicated as a general radiography device. It is capable of providing digital images of the head, neck, spine, chest, abdomen, and limbs by converting x-rays to digital images. Excluded indications include mammography, fluoroscopy and angiography studies.
The RADREX-i is a general-purpose x-ray system that employs Solid State Imager(s), SSXI, which convert x-rays directly into electrical signals which can, after appropriate processing, be displayed on LCD monitors or printed to a medical-grade image printer. The system console is a PC-based device that allows for worklist management, image storage, image processing, image exporting, and image printing. The system may be equipped with a table and/or vertical wall unit, is configurable with up to two x-ray tubes, and has an auto-stitching function.
The provided documents do not contain a detailed study report with quantitative acceptance criteria and device performance metrics in the format requested. The submission is a 510(k) for a modification (Special 510(k)) to an already cleared device, primarily adding new flat panel detectors.
Therefore, many of the requested elements (e.g., sample sizes, ground truth establishment, expert qualifications, MRMC studies, standalone performance with specific metrics like sensitivity/specificity) are not present in the provided text.
Specifically, the document states:
- "Image Quality metrics utilizing phantoms are provided in this submission." (Section 18. TESTING)
- "Safety and effectiveness have been verified via risk management and application of design controls to the modifications." (Section 19. CONCLUSION)
These statements indicate that testing was performed, but the specific results and methodology are not detailed in the provided summary. The submission focuses on demonstrating substantial equivalence to a predicate device, and for modifications like this, a detailed clinical study with human readers might not have been deemed necessary by the submitter or requested by the FDA if phantom testing and engineering verification were sufficient to establish equivalence for the new components.
Given the available information, here's what can be extracted, with explicit notes for missing information:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Specific Metric (if available) | Acceptance Criteria | Reported Device Performance |
---|---|---|---|
Image Quality (Phantoms) | (Not specified) | (Not specified) | Provided in submission (specific values not detailed) |
Safety | Compliance | 21 CFR § 820, ISO 13485, IEC60601-1, IEC 60601-2-32, IEC60601-2-28, 21 CFR §1020 | Device is designed and manufactured in conformance with these standards. |
Effectiveness | (Not specified) | (Not specified) | Verified via risk management and design controls. |
2. Sample size used for the test set and the data provenance:
- Sample Size (Test Set): Not specified. The document primarily refers to "Image Quality metrics utilizing phantoms," implying phantom-based testing rather than a clinical human-subject test set.
- Data Provenance: Not specified. Based on phantom testing, geographic origin is not relevant. The testing would be considered prospective for the device modification.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not applicable. Ground truth for phantom-based image quality metrics is typically established by physical measurements or known characteristics of the phantom, not by expert consensus on clinical images.
- Qualifications of Experts: Not applicable.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Adjudication Method: Not applicable, as detailed multi-reader clinical testing with human subjects is not described for the test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- MRMC Study: No. This submission does not describe an AI device or an MRMC study comparing human readers with and without AI assistance. The device is a general radiography system, not an AI diagnostic tool.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Standalone Performance Test: Not applicable. The device is an imaging system, not an algorithm, and its performance is assessed via image quality and safety standards for the hardware/software.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Type of Ground Truth: For the "Image Quality metrics utilizing phantoms," the ground truth would be based on the known, reproducible properties of the phantom and objective physical measurements (e.g., spatial resolution, contrast-to-noise ratio, MTF, DQE). If any limited subjective evaluation was done, it is not described.
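For a sense of what those objective phantom metrics involve (the submission does not include the actual protocol or values, so the region choices and the CNR convention below are assumptions made for illustration), contrast-to-noise ratio is typically computed directly from pixel statistics in signal and background regions of interest:

```python
import numpy as np

def contrast_to_noise_ratio(image, signal_roi, background_roi):
    """CNR between a contrast insert and the background of a phantom image.

    signal_roi / background_roi are (row_slice, col_slice) pairs defined by the test
    protocol; difference of ROI means over pooled ROI noise is one common convention.
    """
    sig = image[signal_roi]
    bkg = image[background_roi]
    noise = np.sqrt((sig.std(ddof=1) ** 2 + bkg.std(ddof=1) ** 2) / 2.0)
    return abs(sig.mean() - bkg.mean()) / noise

# Synthetic data standing in for a phantom acquisition.
rng = np.random.default_rng(0)
img = rng.normal(loc=100.0, scale=5.0, size=(256, 256))
img[100:140, 100:140] += 20.0  # low-contrast insert
cnr = contrast_to_noise_ratio(img,
                              (slice(100, 140), slice(100, 140)),
                              (slice(10, 50), slice(10, 50)))
print(f"CNR = {cnr:.1f}")  # roughly 20 / 5 = 4 for this synthetic example
```

MTF and DQE evaluations follow similarly standardized measurement procedures (e.g., edge or slit methods for MTF), again producing objective numbers rather than expert judgments.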
8. The sample size for the training set:
- Sample Size (Training Set): Not applicable. This is not an AI/ML device that requires a training set of data in the conventional sense. The "base software" is stated to remain unchanged, implying previous development and validation.
9. How the ground truth for the training set was established:
- Ground Truth (Training Set): Not applicable. See point 8.
(66 days)
TOSHIBA AMERICA MEDICAL SYSTEMS, INC.
This device is indicated to acquire and display cross sectional volumes of the whole body, to include the head, with the capability to image whole organs in a single rotation. Whole organs include but are not limited to brain, heart, pancreas, etc.
The Aquilion ONE has the capability to provide volume sets of the entire organ. These volume sets can be used to perform specialized studies, using indicated software/hardware, of the whole organ by a trained and qualified physician.
The Aquilion ONE Vision, TSX-301C/1, v4.90 is a whole body CT scanner. This device captures cross sectional volume data sets. The device consists of a gantry, patient couch (table) and peripheral cabinets used for data processing and display.
The provided text is a 510(k) summary for the Toshiba Aquilion ONE Vision CT scanner. It focuses on demonstrating substantial equivalence to a predicate device rather than presenting a study to prove acceptance criteria for a new feature or algorithm. Therefore, many of the requested sections (sample size, expert qualifications, adjudication method, MRMC study, standalone performance, ground truth details, training set size) are not applicable or cannot be extracted from this type of regulatory submission.
Here's an analysis of the available information:
Acceptance Criteria and Device Performance
The submission focuses on demonstrating substantial equivalence to a predicate device by comparing technical specifications. The "acceptance criteria" here are implicitly that the new device meets or exceeds the performance of the predicate device for critical technical specifications, which in turn supports the claim that the indications for use and safety/effectiveness remain unchanged.
Table of Acceptance Criteria and Reported Device Performance
Item | Acceptance Criteria (Predicate Device K113466) | Reported Device Performance (Aquilion ONE Vision, TSX-301C/1) |
---|---|---|
Gantry Rotational Speed | 0.35 Seconds | 0.275 Seconds |
View Rate (number of views transferred per second) | 2572 | 2910 |
X-ray Generator Output Power | 70kW Maximum | 90kW Maximum |
X-ray Tube angle | 11 degrees | 10 degrees |
Computer System | Dual Core Xeon based | Quad Core Xeon based |
Image reconstruction (maximum speed) | 30 images per second | 50 images per second |
Gantry Opening | 720mm | 780mm |
Summary of Changes:
- Increased rotational speed from 350 ms to 275 ms.
- X-ray Generator changed to match dose at new speed.
- Tube has hardware enhancements to allow for higher rotational speed.
- View rates have been increased.
Study Proving Device Meets Acceptance Criteria:
The submission does not describe a clinical study. Instead, it relies on technical testing and comparison to a predicate device to demonstrate substantial equivalence and adherence to safety standards.
- 17. TESTING: "Image Quality metrics utilizing phantoms are provided in this submission. Additionally, testing of the modified system was conducted in accordance with the applicable standards published by the International Electrotechnical Commission (IEC) for Medical Devices and CT Systems."
This indicates that internal performance testing, likely using phantoms, was conducted to verify the changes and ensure image quality, and that the system conforms to relevant IEC standards for safety and performance. The specific details or results of these phantom tests are not included in this summary.
Additional Information Not Available in the Provided Text:
- Sample size used for the test set and the data provenance: Not applicable, as no human subject test set or clinical study is described. The performance data is derived from technical specifications and phantom testing.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable, as there is no human subject test set requiring expert ground truth.
- Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance: Not applicable. This device is a CT scanner, not an AI-powered diagnostic tool requiring reader performance evaluation.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done: Not applicable. The submission focuses on hardware and core software modifications of a CT scanner, not on a standalone algorithm.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc): Not applicable, as the "ground truth" for this submission are the technical specifications and performance of the predicate device, against which the modifications are compared.
- The sample size for the training set: Not applicable, as this is a CT scanner modification, not an AI model training.
- How the ground truth for the training set was established: Not applicable.