510(k) Data Aggregation
(265 days)
TeraRecon Cardiovascular.Calcification.CT is intended to provide an automatic 3D segmentation of calcified plaques within the coronary arteries and outputs a mask for calcium scoring systems to use for calculations. The results of TeraRecon Cardiovascular.Calcification.CT are intended to be used in conjunction with other patient information by trained professionals who are responsible for making any patient management decision per the standard of care. TeraRecon Cardiovascular.Calcification.CT is a software as a medical device (SaMD) deployed as a containerized application. The device inputs are CT heart without contrast DICOM images. The device outputs are DICOM result files which may be viewed utilizing DICOM-compliant systems. The device does not alter the original input data and does not provide a diagnosis.
TeraRecon Cardiovascular.Calcification.CT is indicated to generate results from Calcium Score CT scans taken of adult patients, 30 years and older, except patients with pre-existing cardiac devices, electrodes, previous and established ischemic diseases (IMA, bypass grafts, stents, PTCA), and thoracic metallic devices. The device is not specific to any gender, ethnic group, or clinical condition. The device's use should be limited to CT scans acquired on General Electric (GE) or Siemens Healthcare or their subsidiaries (e.g., GE Healthcare) equipment. Use of the device with CT scans from other manufacturers is not recommended.
The TeraRecon Cardiovascular.Calcification.CT algorithm is an image processing software device that can be deployed as a containerized application (e.g., Docker container) that runs on off-the-shelf hardware or on a cloud platform. The device provides an automatic 3D segmentation of the coronary calcifications.
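The device itself outputs only the calcification mask; the Agatston calculation is left to downstream calcium scoring systems. As an illustration of that downstream step only (a minimal sketch using the conventional 130 HU threshold and density weighting, not TeraRecon's implementation), such a calculation might look like:

```python
import numpy as np

def density_weight(peak_hu: float) -> int:
    """Conventional Agatston density factor for a lesion's peak attenuation."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    return 1 if peak_hu >= 130 else 0

def agatston_score(hu: np.ndarray, mask: np.ndarray, pixel_area_mm2: float) -> float:
    """Sum per-slice scores: lesion area (mm^2) x density factor.

    hu and mask are (slices, rows, cols); mask is the device's binary
    calcification output. Simplified: each slice's masked voxels are treated
    as a single lesion, and minimum-lesion-area / slice-thickness conventions
    are omitted.
    """
    total = 0.0
    for z in range(hu.shape[0]):
        lesion = mask[z].astype(bool) & (hu[z] >= 130)
        if lesion.any():
            total += lesion.sum() * pixel_area_mm2 * density_weight(hu[z][lesion].max())
    return total
```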
When TeraRecon Cardiovascular.Calcification.CT results are used in external viewer devices such as TeraRecon's Intuition or Eureka Clinical AI medical devices, all the standard features offered by the external viewer are employed.
The TeraRecon Cardiovascular.Calcification.CT algorithm is not intended to replace the skill and judgment of a qualified medical practitioner and should only be used by individuals that have been trained in the software's function, capabilities, and limitations.
Here's a breakdown of the acceptance criteria and study details for the TeraRecon Cardiovascular.Calcification.CT device, based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Agatston Classification Accuracy: At least 80% accuracy for the 4 revised Agatston classes (0-10, 11-100, 101-400, >400), with a lower bound 95% confidence interval (CI) of at least 75%. | Passed. Mean accuracies exceeded 94% across Agatston categories, with 95% CI lower bounds above 75%. |
| Vessel Calcification Classification (Dice Similarity Coefficient): At least 80% DICE with a lower bound 95% confidence interval of at least 75% for segmentation of calcifications by vessel (LM, LAD, LCX, RCA). | Passed. Segmentation performance, measured by Dice similarity coefficient against expert annotations, consistently exceeded the predefined acceptance criteria (≥80% Dice with lower 95% CI ≥75%). |
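Neither metric's computation is given in the submission; as an illustrative sketch only, the Dice coefficient and the exact (Clopper-Pearson) lower 95% confidence bound used in criteria of this form could be computed as follows (assuming NumPy and SciPy; the counts in the example are hypothetical):

```python
import numpy as np
from scipy.stats import beta

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def clopper_pearson_lower(k: int, n: int, alpha: float = 0.05) -> float:
    """Lower limit of the two-sided exact binomial (Clopper-Pearson) CI."""
    return 0.0 if k == 0 else float(beta.ppf(alpha / 2, k, n - k + 1))

# Hypothetical example: 400 of 422 test cases placed in the correct Agatston category.
accuracy = 400 / 422
ci_lower = clopper_pearson_lower(400, 422)
meets_criterion = accuracy >= 0.80 and ci_lower >= 0.75
```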
Study Details
1. Sample Size Used for the Test Set:
The test set included 422 adult patients.
2. Data Provenance (Country of Origin, Retrospective/Prospective):
- Retrospective cohort study.
- At least 50% of the ground truth data is from the US, divided between multiple locations.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
- 3 annotators (experts) were used for each study to segment coronary vessels and apply thresholds to create calcification masks within the vessels.
- Qualifications of experts: Not explicitly stated in the provided text.
4. Adjudication Method for the Test Set:
- Majority vote (2+1 method): The final calcification ground truth for the calcification segmentation masks was attained if a voxel was part of at least 2 of the masks defined by the three annotators.
- The ground-truth vessel assignment for each calcification was likewise attained by majority vote among the 3 annotators (a minimal sketch of this 2-of-3 fusion appears after this list).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- No MRMC comparative effectiveness study involving human readers with and without AI assistance was mentioned. The study focused on standalone device performance against expert-established ground truth.
6. Standalone Performance:
- Yes, a standalone (algorithm only without human-in-the-loop performance) study was conducted. The results reported are directly attributed to the TeraRecon Cardiovascular.Calcification.CT device's performance against ground truth.
7. Type of Ground Truth Used:
- Expert consensus based on annotations from three experts. The experts segmented coronary vessels and applied thresholds to create calcification masks. The final ground truth was established by majority vote among these annotators for both the calcification mask and the vessel classification.
8. Sample Size for the Training Set:
- The document does not explicitly state the sample size used for the training set. It only describes the test set.
9. How the Ground Truth for the Training Set was Established:
- The document does not explicitly state how the ground truth for the training set was established. It only describes the ground truth establishment for the test set.
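To make the 2-of-3 majority vote described in item 4 concrete, a minimal sketch of fusing three annotator masks and vessel labels (illustrative only, not the sponsor's tooling) might be:

```python
from collections import Counter

import numpy as np

def consensus_mask(masks: list[np.ndarray]) -> np.ndarray:
    """A voxel is ground-truth calcification if it appears in at least 2 of the 3 annotator masks."""
    votes = np.stack([m.astype(np.uint8) for m in masks], axis=0).sum(axis=0)
    return votes >= 2

def consensus_vessel(labels: list[str]) -> str | None:
    """Vessel assignment (LM/LAD/LCX/RCA) agreed on by at least two of the three annotators."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else None
```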
(115 days)
TeraRecon Aorta.CT is intended to provide an automatic 3D segmentation and label anatomical landmarks of the Aorta. The results of TeraRecon Aorta.CT are intended to be used in conjunction with other patient information by trained professionals who are responsible for making any patient management decision per the standard of care. TeraRecon Aorta.CT is a software as a medical device (SaMD) deployed as a containerized application. The device inputs are CT Angiography with contrast DICOM images. The device outputs are DICOM result files which may be viewed utilizing DICOM-compliant systems. The device does not alter the original input data and does not provide a diagnosis.
TeraRecon Aorta.CT is indicated to generate results from aortic CT Angiography scans taken of adult patients except patients with pre-existing aortic device, bicuspid aortic valve anomaly, aortic dissection, aortic rupture, and abdominal metallic devices. The device is not specific to any gender, ethnic group, or clinical condition.
The TeraRecon Aorta.CT algorithm is an image processing software device that can be deployed as a containerized application (e.g., Docker container) that runs on off-the-shelf hardware or on a cloud platform.
The device provides an automatic 3D segmentation of the aorta and landmarks of important aortic anatomy. When TeraRecon Aorta.CT results are used in external viewer devices such as TeraRecon's Intuition or Eureka Clinical AI medical devices, all the standard features offered by the external viewer are employed.
The TeraRecon Aorta.CT algorithm is not intended to replace the skill and judgment of a qualified medical practitioner and should only be used by individuals that have been trained in the software's function, capabilities, and limitations.
Here's a summary of the acceptance criteria and study details for the TeraRecon Aorta.CT (1.1.0) device:
1. Table of Acceptance Criteria and Reported Device Performance
| Feature | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Lumen Segmentation | Mean DICE score >= 80% | Mean DICE score: 88% (Passed) |
| Aorta Segmentation | Mean DICE score >= 80% | Mean DICE score: 90% (Passed) |
| Landmarking (Overall) | Each of the 22 landmarks independently pass class-specific criteria in 80% of cases. Lower bound of the 95% exact binomial confidence interval >= 70%. | All landmarks passed the acceptance criteria, all 95% confidence intervals were at least 70%. (Passed) |
| Landmarking (Specific Criteria) | | |
| Common Left/Right Iliac Arteries, Left/Right Femoral Arteries (4 landmarks) | Correct identification of the vessel in accordance with ground truth. | Not explicitly stated with individual percentages, but included in the overall "all landmarks passed" statement. |
| Remaining 17 Landmarks (except aortic bifurcation) | Euclidean distance between ground truth annotation and medical device output locations within 5mm. | Not explicitly stated with individual percentages, but included in the overall "all landmarks passed" statement. |
| Aortic Bifurcation (1 landmark) | Euclidean distance between ground truth annotation and medical device output locations within 2cm. | Not explicitly stated with individual percentages, but included in the overall "all landmarks passed" statement. |
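As with the segmentation criteria, no code is given in the submission; a minimal sketch of how the landmark rules in the table above might be checked (5 mm tolerance for most landmarks, 2 cm for the aortic bifurcation, then an 80% pass rate with an exact binomial lower bound of at least 70%) is shown below. The per-case counts are hypothetical.

```python
import numpy as np
from scipy.stats import beta

def within_tolerance(pred_mm: np.ndarray, truth_mm: np.ndarray, tol_mm: float) -> bool:
    """True if the Euclidean distance between predicted and ground-truth points is within tolerance."""
    return float(np.linalg.norm(pred_mm - truth_mm)) <= tol_mm

def exact_lower_bound(k: int, n: int, alpha: float = 0.05) -> float:
    """Clopper-Pearson lower limit of the two-sided 95% binomial CI."""
    return 0.0 if k == 0 else float(beta.ppf(alpha / 2, k, n - k + 1))

# Hypothetical: 66 of 70 annotatable cases pass for one landmark at the 5 mm tolerance.
passes, n_cases = 66, 70
rate = passes / n_cases
landmark_ok = rate >= 0.80 and exact_lower_bound(passes, n_cases) >= 0.70
```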
2. Sample Size Used for the Test Set and Data Provenance
- Initial Sample Size for Test Set: 170 CTA scans for segmentation and landmarking.
- Adjusted Sample Size for Landmarking: 170 initial studies + 29 supplemental studies = 199 studies (to achieve a target of 70 annotatable landmarks per target).
- Data Provenance: Retrospective cohort study. At least 50% of the ground truth data is from US patients across 3 geographical regions in the United States. The validation data was enriched with data from patients with clinical diagnosis of aortic dilation/aneurysm and/or aortic valve disease. The final manufacturer distribution of scanner types was 77 Siemens, 33 GE, 35 Philips, and 25 Canon.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts
- Number of Experts: Not explicitly stated as a count, but referred to as "annotators" for landmarking and a "US board certified radiologist" for checking collected datasets.
- Qualifications of Experts: The individual who checked collected datasets was a "US board certified radiologist, who is currently practicing in the United States and reads similar scans." The qualifications of the "annotators" for landmarking are not specified beyond their task.
4. Adjudication Method for the Test Set
The document does not explicitly state an adjudication method (e.g., 2+1, 3+1). It implies that ground truth was established by experts (radiologist and annotators) and then compared to the device output. There is no mention of a process for resolving discrepancies among multiple experts or between expert and device output outside of direct comparison.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done
No, an MRMC comparative effectiveness study that involves human readers with and without AI assistance was not explicitly described or presented in the provided text. The study focuses on evaluating the standalone performance of the AI device against ground truth.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
Yes, a standalone study was performed. The described study evaluates the Aorta.CT device's performance (segmentation DICE scores and landmarking accuracy) directly against expert-established ground truth. There is no mention of human-in-the-loop performance evaluation in this specific study.
7. The Type of Ground Truth Used
- Segmentation (Lumen and Aorta Wall): Expert annotations as described by "Comparison of Aorta lumen segmentation results from the medical device to aorta lumen segmentation from ground truth" and "This aorta wall to wall segmentation includes aorta lumen + wall for the comparison."
- Landmarking: Expert annotations as described by "We will also examine the subject device landmarking locations compared to each ground truth annotation."
8. The Sample Size for the Training Set
The sample size for the training set is not provided in the given text. The document focuses on the validation study and its test set.
9. How the Ground Truth for the Training Set was Established
The method for establishing ground truth for the training set is not provided in the given text. The document only describes how ground truth was established for the retrospective cohort test set.
(186 days)
The TeraRecon Neuro Algorithm is an algorithm for use by trained professionals, including but not limited to physicians, surgeons and medical clinicians.
The TeraRecon Neuro Algorithm is a standalone image processing software device that can be deployed as a Microsoft Windows executable on off-the-shelf hardware or as a containerized application (e.g., a Docker container) that runs on off-the-shelf hardware or on a cloud platform. Data and images are acquired via DICOM compliant imaging devices. DICOM results may be exported, combined with, or utilized by other DICOM-compliant systems and results.
The TeraRecon Neuro Algorithm provides analysis capabilities for functional, dynamic, and derived imaging datasets acquired with CT or MRI. It can be used for the analysis of dynamic brain perfusion image data, showing properties of changes in contrast over time. This functionality includes calculation of parameters related to brain tissue perfusion, vascular assessment, tissue blood volume, and other parametric maps with or without the ventricles included in the calculation. The algorithm also includes volume reformatting in various orientations and a rotational MIP 3D batch while removing the skull. This "tumble view" allows qualitative review of vascular structure in direct correlation to the perfusion maps for comprehensive review.
The results of the TeraRecon Neuro Algorithm can be delivered to the end-user through image viewers such as TeraRecon's Aquarius Intuition system, TeraRecon's Eureka AI Results Explorer, TeraRecon's Eureka Clinical AI Platform, or other image viewing systems like PACS that can support DICOM results generated by the TeraRecon Neuro Algorithm.
The TeraRecon Neuro Algorithm results are designed for use by trained healthcare professionals and are intended to assist the physician in diagnosis, who is responsible for making all final patient management decisions.
The TeraRecon Neuro algorithm version 2.0.0 is a modification of the predicate device Neuro.AI Algorithm (K200750), which in turn was a modification of Intuition-TDA, TVA, Parametric Mapping (cleared under K131447). The predicate device Intuition-TDA, TVA, Parametric Mapping is an optional module/workflow for the Intuition system (K121916). The TeraRecon Neuro algorithm is an image processing software device that can be deployed as a Microsoft Windows executable on off-the-shelf hardware or as a containerized application (e.g., Docker container) that runs on off-the-shelf hardware or on a cloud platform. The device has limited network connectivity or external medical support.
TeraRecon Neuro allows motion correction and processes, calculates and outputs brain perfusion analysis results for functional, dynamic, and derived imaging datasets acquired with CT or MRI. TeraRecon Neuro results are used for the analysis of dynamic brain perfusion image data, showing properties of changes in contrast over time. This functionality includes calculation of parameters related to brain tissue perfusion, vascular assessment and tissue blood volume.
Outputs include parametric map of measurements including time to peak (TTP), take off time (TOT), recirculation time (RT), mean transit time (MTT), blood volume (BV/CBV), blood flow (BF/CBF), time to maximum (Tmax) and penumbra/umbra maps that are derived from combinations of measurement parameters, such as mismatch maps and hypoperfusion maps with volumes and ratios, as well as 2D and 3D visualization of brain tissues and brain blood vessels (Note: Tmax, mismatch and hypoperfusion maps are only available for images of CT modality).
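The submission does not describe how these maps are computed; as a heavily simplified illustration of what two of them represent, TTP can be read from each voxel's contrast curve and a relative blood-volume map from its area under the curve. CBF, MTT, Tmax, and the derived mismatch maps additionally require deconvolution with an arterial input function and are omitted from this sketch.

```python
import numpy as np

def simple_perfusion_maps(dyn: np.ndarray, t_sec: np.ndarray):
    """dyn: (time, z, y, x) baseline-subtracted contrast-enhancement curves.

    Returns a voxelwise time-to-peak map (seconds) and an unnormalized
    relative blood-volume map (area under the tissue curve). Simplified:
    no arterial input function, no deconvolution, no vessel exclusion.
    """
    ttp = t_sec[np.argmax(dyn, axis=0)]                       # time at peak enhancement
    dt = np.diff(t_sec).reshape(-1, 1, 1, 1)
    rel_bv = ((dyn[1:] + dyn[:-1]) / 2 * dt).sum(axis=0)      # trapezoidal area under the curve
    return ttp, rel_bv
```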
When TeraRecon Neuro results are used in external viewer devices such as TeraRecon's Intuition or Eureka medical devices, all the standard features offered by Intuition or Eureka are employed such as image manipulation tools like drawing the region of interest, manual or automatic segmentation of structures, tools that support creation of a report, transmitting and storing this report in digital form, and tracking historical information about the studies analyzed by the software.
The TeraRecon Neuro algorithm outputs can be used by physicians to aid in the diagnosis and for clinical decision support including treatment planning and post treatment evaluation. The software is not intended to replace the skill and judgment of a qualified medical practitioner and should only be used by individuals that have been trained in the software's function, capabilities and limitations. The device is intended to provide supporting analytical tools to a physician, to speed decision-making and to improve communication, but the physician's judgment is paramount, and it is normal practice for physicians to validate theories and treatment decisions multiple ways before proceeding with a risky course of patient management.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Software Acceptance Criteria | All pre-defined acceptance criteria for the Neuro.AI Algorithm were met, and all software test cases passed during software development and testing in accordance with IEC 62304:2006/A1:2015. |
| Qualitative Clinical User Evaluation | The generated maps of TeraRecon Neuro were confirmed through qualitative assessment to be at least 85% substantially equivalent or better than the predicate and reference devices. |
| Quantitative Tmax Measurement Accuracy | Subject device limit of agreement for both absolute error and absolute percent error (of Tmax measurements compared to ground truth, defined as the average Tmax of two reference devices) was less than or equal to the limit of agreement of each predicate device compared to the ground truth. |
| Safety and Effectiveness | The TeraRecon Neuro device meets its qualified requirements, performs as intended, and is as safe and effective as the predicate device. No new or different questions of safety or efficacy have been raised. All risks were analyzed, and there are no new risks or modified risks that could result in significant harm which are not effectively mitigated in the predicate device. The device is determined to be Substantially Equivalent to the predicate device in terms of safety, efficacy, and performance. |
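The Tmax criterion is stated in terms of limits of agreement against a ground truth defined as the average of two reference devices. The exact statistics are not reproduced in the document; a minimal Bland-Altman-style sketch of such a comparison, with hypothetical ROI values, is:

```python
import numpy as np

def limits_of_agreement(measured: np.ndarray, reference: np.ndarray):
    """95% limits of agreement: mean difference +/- 1.96 * SD of the paired differences."""
    diff = measured - reference
    mean, sd = diff.mean(), diff.std(ddof=1)
    return mean - 1.96 * sd, mean + 1.96 * sd

# Hypothetical Tmax values (seconds) per ROI from two reference devices and the subject device.
ref_a = np.array([6.1, 8.4, 5.2, 7.0])
ref_b = np.array([6.5, 8.0, 5.6, 7.4])
ground_truth = (ref_a + ref_b) / 2
subject = np.array([6.3, 8.3, 5.1, 7.3])
low, high = limits_of_agreement(subject, ground_truth)
```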
Study Details
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state the numerical sample size for the test set used in the qualitative clinical user evaluation or the quantitative Tmax measurement accuracy study. It refers to "comparison maps generated by the subject device, the predicate device and two additional reference devices." Without specific numbers, it is impossible to determine the precise number of test cases.
Regarding data provenance, the document does not provide details on the country of origin or whether the data was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: One expert was used.
- Qualifications: Dr. Robert Falk, MD. No additional details about his specific experience or sub-specialty (e.g., radiologist with X years of experience) are provided in the text.
4. Adjudication Method for the Test Set
The adjudication method used for the clinical user evaluation was not explicitly specified as 2+1, 3+1, or any other formal method. The study involved a single evaluator (Dr. Robert Falk, MD) who was "asked to confirm through qualitative assessment." This suggests a single-expert review, rather than a multi-expert adjudication process.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of Improvement with AI vs. Without AI Assistance
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly described. The evaluation involved a single expert providing a qualitative assessment. The study was focused on demonstrating substantial equivalence to predicate and reference devices, not on measuring the improvement of human readers with AI assistance. Therefore, there is no reported effect size of how much human readers improve with AI vs. without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance evaluation was conducted for the quantitative Tmax measurement. The acceptance criteria for Tmax accuracy were based on comparing the subject device's measurements directly against the ground truth (average of reference devices) in ROIs, without explicit human intervention in the measurement process for the test cases. While the "ground truth" itself is derived from other devices (which are used by humans), the comparison of the algorithm's output to this ground truth represents a standalone assessment of the algorithm's quantitative accuracy.
7. The Type of Ground Truth Used
- Qualitative Clinical User Evaluation: The ground truth for this evaluation appears to be the performance of the predicate and reference devices, as the subject device's maps were compared to these for substantial equivalence. It's a comparative assessment rather than an absolute ground truth (e.g., pathology).
- Quantitative Tmax Measurement Accuracy: The ground truth for Tmax measurements was defined as the average Tmax measurement of the two reference devices (GE Medical Systems FastStroke CT Perfusion 4D (K193289) and ISchemaView RAPID (K182130)) for a given ROI.
8. The Sample Size for the Training Set
The document does not provide any information regarding the sample size used for the training set for the TeraRecon Neuro algorithm.
9. How the Ground Truth for the Training Set Was Established
The document does not provide any information on how the ground truth for the training set was established. Training set details are not discussed.
(228 days)
The Neuro.AI Algorithm is an algorithm for use by trained professionals, including but not limited to physicians, surgeons and medical clinicians.
The Neuro.AI Algorithm is a standalone image processing software device that can be deployed as a Microsoft Windows® executable on off-the-shelf hardware or as a containerized application (e.g., a Docker container) that runs on off-the-shelf hardware or on a cloud platform. Data and images are acquired via DICOM compliant imaging devices. DICOM results may be exported, combined with or utilized by other DICOM-compliant systems and results.
The Neuro.AI algorithm provides analysis capabilities for static, functional, dynamic and derived imaging datasets acquired with CT or MRI. It can be used for the analysis of dynamic brain image data, showing properties of changes in contrast over time. This functionality includes calculation of parameters related to brain tissue perfusion, vascular assessment and tissue blood volume and other parametric maps with or without the ventricles included in the calculation. The algorithm also includes volume reformatting in various orientations and a rotational MIP 3D batch while removing the skull. This "tumble view" allows qualitative review of vascular structure in direct correlation to the perfusion maps for comprehensive review.
The results of the Neuro.AI Algorithm can be delivered to the end-user through image viewers such as TeraRecon's Aquarius iNtuition system, TeraRecon's Northstar AI Results Explorer, or other image viewing systems like PACS that can support DICOM results generated by Neuro.AI.
The Neuro.AI Algorithm results are designed for use by trained healthcare professionals and are intended to assist the physician in diagnosis, who is responsible for making all final patient management decisions.
The Neuro.AI Algorithm is a modification of the predicate device, iNtuition-TDA, TVA, Parametric Mapping, which was cleared under K131447. The predicate device is an optional module/workflow for the iNtuition system (K121916). The Neuro.AI Algorithm is a standalone image processing software device that can be deployed as a Microsoft® Windows executable on off-the-shelf hardware or as a containerized application (e.g., Docker container) that runs on off-the-shelf hardware or on a cloud platform. The device has limited network connectivity or external medical support.
The Neuro.AI Algorithm allows motion correction and processes, calculates and outputs brain perfusion analysis results for static, functional, dynamic and derived imaging datasets acquired with CT or MRI. Neuro.AI results are used for visualization and analysis of dynamic brain perfusion image data, showing properties of changes in contrast over time. This functionality includes calculation of parameters related to brain tissue perfusion, vascular assessment displayed in rotational Maximum Intensity Projection (MIP) called the tumble view, and tissue blood volume and other parametric maps with or without brain ventricles included in the calculation.
Outputs include text and parametric map displays of measurements including time to peak (TTP), take off time (TOT), recirculation time (RT), mean transit time (MTT), blood volume (BV/CBV), blood flow (BF/CBF), classification maps, reformatted images and rotational MIPs for 2D and 3D visualization of brain tissues and blood vessels, and for correlation to the perfusion maps.
The results of the Neuro.AI Algorithm can be delivered to the end-user through image viewers such as TeraRecon's iNtuition system, TeraRecon's Northstar AI Results Explorer ("Northstar"), or other third-party image viewing systems like PACS that can display the DICOM results generated by Neuro.AI. The output does not depend on the viewing system's capabilities, as the results are self-contained and the only interface is through DICOM.
When the Neuro.AI Algorithm results are used on iNtuition, all the standard features offered by iNtuition are employed such as image manipulation tools like drawing the region of interest, manual or automatic segmentation of structures, tools that support creation of a report, transmitting and storing this report in digital form, and tracking historical information about the studies analyzed by the software.
The Neuro.AI algorithm can be used by physicians to aid in the diagnosis. The software is not intended to replace the skill and judgment of a qualified medical practitioner and should only be used by individuals that have been trained in the software's function, capabilities and limitations. The device is intended to provide supporting analytical tools to a physician, to speed decision-making and to improve communication, but the physician's judgment is paramount, and it is normal practice for physicians to validate theories and treatment decisions multiple ways before proceeding with a risky course of patient management.
The provided document describes the Neuro.AI Algorithm and its substantial equivalence to a predicate device, iNtuition-TDA, TVA, Parametric Mapping (K131447). However, it does not contain a detailed performance study with specific acceptance criteria and reported device performance in the format requested. The document focuses on regulatory compliance, outlining the device's indications for use, technological characteristics, and a general statement about software verification and validation.
Therefore, many of the requested items cannot be extracted directly from this document.
Here's a breakdown of what can and cannot be answered based on the provided text:
1. A table of acceptance criteria and the reported device performance
Not provided in the document. The text states: "During software testing, all predefined acceptance criteria for the Neuro.AI Algorithm were met and all software test cases passed." However, it does not specify what those acceptance criteria were or provide a table of performance metrics.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
Not provided in the document. The document mentions "software testing and performance evaluation" but does not detail the test set's sample size or data provenance.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not provided in the document. The document describes the device's intended use by "trained professionals, including but not limited to physicians, surgeons and medical clinicians" but doesn't specify how ground truth was established for testing.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not provided in the document.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
Not provided in the document. The document does not describe a comparative effectiveness study involving human readers with and without AI assistance. The focus is on the device's substantial equivalence to a predicate device.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, implicitly. The document describes the "Neuro.AI Algorithm as a standalone image processing software device." The testing mentioned ("software testing and performance evaluation") would inherently be evaluating the algorithm's standalone performance against its predefined acceptance criteria, even if those criteria aren't explicitly detailed. The statement "The Neuro.AI Algorithm is as safe and effective as the predicate device" implies standalone testing for functional equivalence.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
Not explicitly provided in the document. While the device assists in diagnosis, the method for establishing ground truth for testing is not described.
8. The sample size for the training set
Not provided in the document. The document describes a "510(k) summary," which focuses on demonstrating substantial equivalence to a predicate device rather than detailing AI model development specifics like training set size.
9. How the ground truth for the training set was established
Not provided in the document. Similar to the training set size, the method for establishing ground truth for training is not included in this regulatory summary.
(218 days)
iNtuition-TDA, TVA, and Parametric Mapping are software modules which support assessment of time-dependent behavior of image intensity, dimensions or volume of regions of interest over time, for volumetric or planar dynamic image types such as CT or MR. Parametric mapping tools encode in color various parameters derived from the temporal or spatial characteristics of the planar or volumetric data.
Support is provided for digital image processing to derive metadata or new images from input image sets, for internal use or for forwarding to other devices using the DICOM protocol. Image processing tools are provided to extract metadata to derive parametric images from combinations of multiple input images.
iNtuition-TDA, TVA and Parametric Mapping are iNtuition-based software features with dedicated workflows and basic tools and thus support post-processing, displaying and manipulation of reports and medical images from acquisition devices and visualization in 2D, 3D and 4D for single or multiple datasets, or combinations thereof.
iNtuition-TDA, TVA, Parametric Mapping are designed for use by healthcare professionals and are intended to assist the physician in diagnosis, who is responsible for making all final patient management decisions.
iNtuition-TDA, TVA, Parametric Mapping are post-processing modules, part of iNtuition, which is a software device generally used with off-the-shelf hardware, offered in various configurations, with the simplest configuration being a stand-alone workstation capable of image review, communications, archiving, database maintenance, remote review, reporting and basic 3D capabilities. It can also be configured as a server with some, all, or none of its optional features disabled. A fully-configured iNtuition system is capable of various image processing and visualization functions to support the physician in medical image reviewing.
The intended use of iNtuition-TDA, TVA, Parametric Mapping is to provide solutions to various medical image analysis and viewing problems, which come about as modalities generate more and more images. They also support image distribution over networks, and are DICOM compliant.
iNtuition Time-Dependent Analysis (TDA) and Time-Volume Analysis (TVA) features can obtain quantitative information relating to the evolution of the intensity, density or dimensions of certain regions of CT, MR or other images over time. Statistical analysis such as a histogram representation of the image density values in an image is supported, as is analysis of changes in volume over time from multi-phase volumetric images; for example, ejection fraction and stroke volume calculation can be performed using the Time-Volume Analysis tools.
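For example (standard definitions, not text from the submission), the ejection fraction and stroke volume mentioned above reduce to simple arithmetic on the end-diastolic and end-systolic volumes measured from the multi-phase images:

```python
def stroke_volume(edv_ml: float, esv_ml: float) -> float:
    """Stroke volume (mL) = end-diastolic volume minus end-systolic volume."""
    return edv_ml - esv_ml

def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Ejection fraction as a percentage of end-diastolic volume."""
    return 100.0 * stroke_volume(edv_ml, esv_ml) / edv_ml

# e.g. EDV = 120 mL, ESV = 50 mL  ->  SV = 70 mL, EF = 58.3%
```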
iNtuition Parametric Mapping tools encode in color various parameters derived from the temporal or spatial characteristics of the planar or volumetric data.
iNtuition - TDA, TVA and Parametric Mapping are iNtuition-based optional features, and employ all standard features offered by iNtuition, such as convenient tools to support creation of a report, transmitting and storing this report in digital form, and tracking historical information about the studies analyzed with the software.
These three modules can be sold separately or as a part of the bigger iNtuition package.
The provided text does not contain detailed acceptance criteria for quantitative device performance or a study explicitly proving the device meets such criteria. Instead, it focuses on demonstrating substantial equivalence to predicate devices and adherence to internal company procedures and voluntary industry standards.
Here's an analysis based on the information available:
1. Table of Acceptance Criteria and Reported Device Performance:
No specific, measurable acceptance criteria with corresponding performance metrics are reported in the document. The general acceptance is that the device "fully satisfies all expected and previously defined system requirements and features" and is "substantially equivalent to and perform as well as the predicate devices."
| Acceptance Criteria (Not Explicitly Stated as Measurable Metrics) | Reported Device Performance |
|---|---|
| Satisfies all expected and previously defined system requirements and features | "Test results support the conclusion that actual device performance satisfies the design intent and is equivalent to its predicate devices." |
| Substantially equivalent to predicate devices for intended use and technological characteristics | "In all material aspects, iNtuition-TDA, TVA, Parametric Mapping is substantially equivalent to the predicate devices." |
| No new significant safety and effectiveness concerns | "The introduction of iNtuition-TDA, TVA, Parametric Mapping has no significant concerns of safety and efficacy." |
| Adheres to internal company procedures for software testing and validation | "Performance testing was carried out according to internal company procedures. Software testing and validation were done according to written test protocols established before testing was conducted." |
| Adheres to voluntary standards (e.g., DICOM) | "voluntary standards such as DICOM, various in-house standard operating procedures are in place and utilized in the production of the software." |
2. Sample size used for the test set and the data provenance:
- Sample size for the test set: Not specified. The document states "Software testing and validation were done according to written test protocols." It doesn't mention a specific test set size (e.g., number of cases or images).
- Data provenance: Not specified. Since clinical studies were not required, it's unlikely that the "test set" involved patient data in a formal clinical trial sense. It likely refers to internal software testing using simulated or previously acquired anonymized data that were part of the predicate device's validation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
Not applicable. The document states that clinical studies were not required. The "ground truth" for the software testing would have been based on the expected outputs as defined by the design requirements, rather than expert interpretation of medical images for diagnostic accuracy. "Test results were reviewed by designated technical professionals." Their qualifications are not specified beyond "technical professionals."
4. Adjudication method for the test set:
Not applicable. No mention of an adjudication method, as it wasn't a study involving human interpretation of medical images with a diagnostic endpoint.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
No, an MRMC comparative effectiveness study was not done. The document explicitly states: "The subject of this traditional 510k notification, iNtuition-TDA, TVA, Parametric Mapping, did not require clinical studies to show safety and effectiveness of the software." Therefore, there is no information on the effect size of human reader improvement with or without AI assistance.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
The performance testing described is likely a standalone assessment of the software's functionality and accuracy against its design specifications. The document states, "Test results support the conclusion that actual device performance satisfies the design intent and is equivalent to its predicate devices." However, it doesn't quantify this performance in medical terms (e.g., sensitivity, specificity for a particular pathology), but rather in terms of meeting functional requirements. The device is intended to "assist the physician in diagnosis, who is responsible for making all final patient management decisions," implying it is not a standalone diagnostic device.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
Given the lack of clinical studies, the "ground truth" for the software's internal testing would have been defined by the expected computational results based on the software's design and algorithms. For example, if the software calculates ejection fraction, the ground truth would be the mathematically correct ejection fraction from a given input, based on a reference method or calculation. It would not be based on expert medical consensus, pathology, or outcomes data in a clinical validation context.
8. The sample size for the training set:
Not applicable. This is a post-processing software module, not a machine learning or AI algorithm that typically requires a large training set for model development. The document does not mention any machine learning components, and thus, no training set or its size is provided. The comparison is based on "similar technological characteristics" to predicate devices.
9. How the ground truth for the training set was established:
Not applicable, as there is no mention of a training set for machine learning.
(274 days)
To receive, store, transmit, post-process, display and allow manipulation of reports and medical images from acquisition devices, including optical or other non-DICOM format images, DICOM images with modality type XA, US, CR, DR, SPECT, NM and MG, and images from volumetric medical scanning devices such as EBT, CT, PET or MRI. To provide access to images, derived data and derived images via client-server software, web browser and mobile technology.
Visualization in 2D, 3D and 4D are supported for single or multiple datasets, or combinations thereof. Tools are provided to define and edit paths through structures such as centerlines, which may be used to analyze cross-sections of structures, or to provide flythrough visualizations rendered along such a centerline. Segmentation of regions of interest and quantitative analysis tools are provided, for images of vasculature, pathology and morphology, including distance, angle, volume, histogram, ratios thereof, and tracking of quantities over time. A database is provided to track and compare results using published comparison techniques such as RECIST and WHO. Calcium scoring for quantification of atherosclerotic plaque is supported.
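As an illustration of the longitudinal comparisons such a database supports (standard RECIST 1.1 target-lesion thresholds, not wording from this submission), a simplified response classification from sums of longest diameters might be:

```python
def recist_target_response(baseline_mm: float, current_mm: float, nadir_mm: float) -> str:
    """Simplified RECIST 1.1 target-lesion response from sums of longest diameters (mm)."""
    if current_mm == 0:
        return "complete response"
    if current_mm <= 0.70 * baseline_mm:
        return "partial response"        # at least a 30% decrease from baseline
    if current_mm >= 1.20 * nadir_mm and (current_mm - nadir_mm) >= 5:
        return "progressive disease"     # at least a 20% and 5 mm increase from nadir
    return "stable disease"
```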
Support is provided for digital image processing to derive metadata or new images from input image sets, for internal use or for forwarding to other devices using the DICOM protocol. Image processing tools are provided to extract metadata to derive parametric images from combinations of multiple input images, such as temporal phases, or images co-located in space but acquired with different imaging parameters, such as different MR pulse sequences, or different CT image parameters (e.g. dual energy).
iNtuition is designed for use by healthcare professionals and is intended to assist the physician in diagnosis, who is responsible for making all final patient management decisions.
Interpretation of mammographic images or digitized film screen images is supported only when the software is used without compression and with an FDA-Approved monitor that offers at least 5Mpixel resolution and meets other technical specifications reviewed and accepted by the FDA.
iNtuitionMOBILE provides wireless and portable access to medical images. This device is not intended to replace full workstations and should be used only when there is no access to a workstation. Not intended for diagnostic use when used via a web browser or mobile device.
iNtuition is a software device generally used with off-the-shelf hardware, offered in various configurations, with the simplest configuration being a stand-alone workstation capable of image review, communications, archiving, database maintenance, remote review, reporting and basic 3D capabilities described elsewhere in this document. The system can also be configured as a server with some, all, or none of its optional features disabled. Whether provided as a workstation or a server, the iNtuition software is designed to provide access by a local user physically sitting at the computer hosting the iNtuition server software, and/or by one or more remote users who concurrently connect to the server using a freely-downloadable thin client application (with conference capabilities). iNtuition supports the physician in medical image viewing.
A fully-configured iNtuition system is capable of various image processing and visualization functions, including full-color Volume Rendering, Calcium Scoring, Segmentation Analysis and Tracking (SAT), Vessel Analysis, Flythrough, Multi-phase review, CT/CTA Subtraction, Lobular Decomposition (LD), iGENTLE, Maxillo-Facial, Volumetric Histogram, Findings Workflow, Fusion CT/MR/PET/SPECT, MultiKV, etc. Each of these features may be offered as an independent upgrade option to the basic configuration.
The intended use of the device is to provide solutions to various medical image analysis and viewing problems, which come about as modalities generate more and more images. It also supports image distribution over networks, and is DICOM compliant.
The provided 510(k) summary for the iNtuition device (K121916) explicitly states that no clinical studies were required or performed to prove the safety and effectiveness of the software. This is a critical piece of information. The assessment relies on non-clinical performance tests and a comparison to predicate devices to establish substantial equivalence.
Therefore, many of the requested categories related to clinical studies and ground truth establishment will be "Not Applicable" or "Not Reported" based on the provided document.
Here's the breakdown based on the given text:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Compliance with internal company procedures | Performance testing carried out according to internal company procedures. |
| Compliance with voluntary standards (e.g., DICOM) | Voluntary standards such as DICOM are in place and utilized in the production of the software. |
| Software testing and validation according to written test protocols | Software testing and validation were done according to written test protocols established before testing was conducted. |
| Software fully satisfies all expected and previously defined system requirements and features | Test results were reviewed by designated technical professionals, ensuring the software fully satisfies all expected and previously defined system requirements and features. |
| Actual device performance satisfies design intent | Test results support the conclusion that actual device performance satisfies the design intent. |
| Substantial equivalence to predicate devices | Device is substantially equivalent to predicate devices (Aquarius Workstation (K011142), AquariusNET Server (K012086), AquariusAPS Server (K061214), VitreaView (K122136), IQQA-Liver Software (K061696)). |
| No significant concerns of safety and efficacy | "The introduction of iNtuition has no significant concerns of safety and efficacy." |
| Performs as well as predicate devices | "iNtuition... performs as well as the predicate devices." |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not reported. The document states "no clinical studies were required to show safety and effectiveness of the software." Performance testing was non-clinical.
- Data Provenance: Not reported, as no clinical data was used for direct safety and effectiveness demonstrations. Non-clinical performance tests would likely use synthetic or internal test data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not applicable/Not reported. Ground truth in a clinical sense was not established for non-clinical performance tests. "Designated technical professionals" reviewed test results for software validation, but their qualifications are not specified beyond that title.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not applicable/Not reported. This relates to clinical studies for establishing ground truth, which were not performed.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- MRMC Study: No. The summary explicitly states: "The subject of this traditional 510k notification, iNtuition, did not require clinical studies to show safety and effectiveness of the software." Therefore, no MRMC study comparing human readers with or without AI assistance was performed.
- Effect Size: Not applicable.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Performance: Not explicitly detailed as a separate study. The device is a "software device" and "offers convenient tools to support creation of a report," but its performance metrics are established through non-clinical software validation and comparison to predicate devices, not through a standalone performance study with specific metrics like sensitivity/specificity against a gold standard. The device is "intended to assist the physician in diagnosis, who is responsible for making all final patient management decisions," implying a human-in-the-loop context for diagnostic use.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: Not applicable/Not reported for demonstrating safety and effectiveness. For non-clinical software performance tests, the "ground truth" would be established by the expected output based on the defined system requirements and internal test protocols.
8. The sample size for the training set
- Sample Size for Training Set: Not applicable/Not reported. The device is a general medical imaging system, not an AI/ML device in the modern sense that requires a specific training set to learn from data for a particular diagnostic task. Its functionality is based on established algorithms and image processing techniques.
9. How the ground truth for the training set was established
- Ground Truth for Training Set: Not applicable/Not reported, as there is no mention of a training set for an AI/ML model.
(13 days)
The AquariusAPS server receives medical images from medical imaging acquisition devices adhering to the DICOM protocol for image transfer such as EBT, CT, MRI, and other volumetric or planar medical imaging modalities, and performs digital image processing to derive certain information or new images from these image sets. The information or new images thus derived is transmitted using the DICOM protocol to other devices supporting this standard protocol.
Lossy compressed mammographic images and digitized film screen images must not be reviewed for primary image interpretations. Mammographic images may only be interpreted using an FDA approved monitor that offers at least 5 Mpixel resolution and meets other technical specifications reviewed and accepted by FDA.
The intended use of the device is to provide time-saving pre-processing of images to remove the need for an image review system to perform these activities while a user is waiting for processing to complete, to optimize the use of the user's time.
The AquariusAPS Server utilizes standard "off the shelf" personal computer systems as its hardware platform. The software requires the use of the Windows 2000 operating system, and a Pentium III-class processor or equivalent.
The provided text is a 510(k) Premarket Notification for the TeraRecon AquariusAPS Server. It is a regulatory document and does not contain information about acceptance criteria or a specific study designed to prove the device meets those criteria.
The document primarily focuses on:
- Device identification: Trade name, common name, classification, establishment name, and contact information.
- Substantial Equivalence: Listing equivalent devices (predicates) and claiming equivalence based on basic design, features, and intended use.
- Device Description: What the device does (receives and processes medical images, transmits derived information).
- Intended Use Statement: To provide time-saving pre-processing of images.
- Hardware & Software Information: Operating system, processor requirements, and compliance with FDA guidance for software.
- Feature Comparison Table: A table comparing the AquariusAPS Server's features to those of the predicate devices. This table highlights what features the AquariusAPS Server possesses, but it doesn't provide performance metrics or acceptance criteria for those features.
- FDA Approval Letter: Officially confirming the substantial equivalence determination.
Therefore, I cannot extract the requested information regarding acceptance criteria and a study proving device performance because that data is not present in the provided text. The document is primarily a statement of equivalence for regulatory purposes, not a performance study report.
(72 days)
The AquariusNET Server acquires, stores, transmits, and enables compatible computers on a network to display medical images from medical scanning devices and patient reports of various types. Teleradiology, such as MRI, CT or NM and archiving, image manipulation, 3D and 4D visualization are supported. Calcium scoring from whole body computed tomography derived measurements, for non-invasive detection and quantification of atherosclerotic plaque. Tools for histogram analysis of the density distribution of certain regions of interest are provided. A database management and report generation tool is included.
AquariusNET is a device consisting of a DICOM server that receives and stores images from a PACS or other image giving modalities. It archives images in a scalable storage medium and delivers them in response to DICOM Query/Retrieve requests from other DICOM devices on the network (not part of AquariusNET). It also serves image requests to its remote "thin clients", which act as the graphical user interface to the AquariusNET server. The server can host multiple concurrent sessions from remote "thin clients". AquariusNET features an integrated 2D/3D streaming engine which allows regular PCs or notebooks to control the server, and to review 2D images and 3D reconstructions interactively over a network. AquariusNET is capable of image review, communications, archiving, database maintenance, reporting and basic 3D capabilities described elsewhere in this document. It is also capable of full-color Volume Rendering and Calcium Scoring.
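As a rough illustration of the DICOM Query/Retrieve interaction such a server supports (a sketch assuming the open-source pynetdicom/pydicom libraries and a hypothetical server at 127.0.0.1:11112, not TeraRecon's software):

```python
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import PatientRootQueryRetrieveInformationModelFind

ae = AE(ae_title="CLIENT")
ae.add_requested_context(PatientRootQueryRetrieveInformationModelFind)

assoc = ae.associate("127.0.0.1", 11112)   # hypothetical DICOM Q/R SCP
if assoc.is_established:
    query = Dataset()
    query.QueryRetrieveLevel = "STUDY"
    query.PatientID = "12345"              # hypothetical patient ID
    query.StudyInstanceUID = ""            # ask the server to return this attribute
    for status, identifier in assoc.send_c_find(query, PatientRootQueryRetrieveInformationModelFind):
        if status and status.Status in (0xFF00, 0xFF01) and identifier:
            print(identifier.StudyInstanceUID)
    assoc.release()
```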
The provided text is a 510(k) Premarket Notification Summary for the AquariusNET Server. It describes the device, its features, and its intended use, focusing on its substantial equivalence to predicate devices. However, the document does not contain the following information typically found in a study proving a device meets acceptance criteria:
- A table of acceptance criteria and the reported device performance: The document includes a "Feature Comparison Table" that lists features present in the AquariusNET Server and its predicate devices. This table serves to demonstrate functional equivalence, not performance against specific, quantitative acceptance criteria.
- Sample size used for the test set and the data provenance: No information about a test set, its size, or data origin is provided.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: No mention of experts or ground truth establishment for a test set.
- Adjudication method for the test set: Not applicable as no test set evaluation is described.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done: No MRMC study is mentioned.
- If a standalone (i.e., algorithm only without human-in-the-loop performance) was done: The device is an image communication and storage system with advanced visualization; its performance is described in terms of features rather than an algorithmic standalone performance study.
- The type of ground truth used: Not applicable as no ground truth is mentioned for performance evaluation.
- The sample size for the training set: No training set is mentioned as this is not a machine learning algorithm claim in the modern sense.
- How the ground truth for the training set was established: Not applicable.
Summary of the document's content relevant to acceptance criteria and study:
The document demonstrates substantial equivalence to predicate devices (Imatron Ultra Access Workstation with Cardiac Software Extensions (K972903) and TeraRecon IiVS™ Integrated Image Viewing Station (K994329)) by comparing features.
Acceptance Criteria (Implied by Substantial Equivalence):
The implied acceptance criteria are that the AquariusNET Server possesses equivalent features and performance to the predicate devices for its intended use. The "Feature Comparison Table" acts as the primary evidence for meeting these implied criteria.
| Feature | AquariusNET Server Performance (Reported as Present) | Predicate Device 1 (Imatron K972903) | Predicate Device 2 (IiVS K994329) |
|---|---|---|---|
| 2D Image Review | Yes | Yes | Yes |
| Multiplanar reformatting | Yes | Yes | Yes |
| 3D Volume Rendering | Yes | Yes | Yes |
| Maximum Intensity Projection | Yes | Yes | Yes |
| Image Archiving | Yes | Yes | Yes |
| Image Filming | Yes | Yes | Yes |
| Image Transfer or Network Connectivity | Yes | Yes | Yes |
| Examination of 2D image data from a calcium scan | Yes | Yes | Yes |
| Examination of calcium scan as a 3D volume | Yes | Yes | Yes |
| Semi-automated identification of regions considered calcium | Yes | Yes | No |
| User override of automatically identified regions | Yes | Yes | No |
| Automatic calculation of calcium score | Yes | Yes | No |
| Ability to measure CT numbers on a 2D image | Yes | Yes | Yes |
| Saving of calcium data with patient exam data | Yes | Yes | No |
| Creation of a paper calcium report | Yes | Yes | No |
| Comparison of multiple scans | Yes | Yes | Yes |
| Indications for use - general medical imaging workstations | Yes | Yes | Yes |
Study Proving Acceptance Criteria (as presented in the 510(k) Summary):
The "study" is a feature comparison study against two predicate devices. The document implies that by demonstrating the presence of these features, the device is "substantially equivalent" to legally marketed devices, thereby meeting the regulatory requirements for market clearance.
- Sample Size for Test Set & Data Provenance: Not applicable. The submission relies on a feature-by-feature comparison rather than performance testing on a specific dataset.
- Number of Experts & Qualifications for Ground Truth: Not applicable. No ground truth for a test set is established.
- Adjudication Method: Not applicable.
- MRMC Comparative Effectiveness Study: Not performed or reported.
- Standalone Performance Study: The document focuses on the functional capabilities of the system rather than an isolated algorithmic performance evaluation.
- Type of Ground Truth: Not applicable.
- Sample Size for Training Set: Not applicable, as this is not an AI/ML model in the modern sense.
- Ground Truth for Training Set: Not applicable.
In essence, the "study" demonstrating acceptance is the comparison table itself, showing that the AquariusNET Server either matches or exceeds the features of its predicate devices, especially regarding calcium scoring functionalities compared to the IiVS, and matches the Imatron Ultra Access Workstation. The FDA's issuance of the 510(k) clearance acts as the formal acceptance that the device is substantially equivalent based on this comparison.
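Since the comparison turns in part on "Automatic calculation of calcium score", it may help to recall the arithmetic behind the standard Agatston method that such features implement: within each slice, connected regions at or above 130 HU are measured, and each region's area in mm² is multiplied by a density weight of 1, 2, 3, or 4 for peak values of 130-199, 200-299, 300-399, or ≥400 HU. The sketch below is a generic illustration of that published formula, assuming `numpy` and `scipy`; it is not the submission's implementation.

```python
import numpy as np
from scipy import ndimage

def agatston_slice_score(hu_slice, pixel_area_mm2, threshold=130, min_area_mm2=1.0):
    """Agatston contribution of one axial slice (standard published method)."""
    mask = hu_slice >= threshold
    labels, n = ndimage.label(mask)          # connected calcified regions
    score = 0.0
    for region in range(1, n + 1):
        region_mask = labels == region
        area = region_mask.sum() * pixel_area_mm2
        if area < min_area_mm2:               # ignore tiny specks, per common convention
            continue
        peak = hu_slice[region_mask].max()
        if peak >= 400:
            weight = 4
        elif peak >= 300:
            weight = 3
        elif peak >= 200:
            weight = 2
        else:
            weight = 1
        score += area * weight
    return score

# The total score is the sum over all slices of the calcium scan, e.g.:
# total = sum(agatston_slice_score(s, pixel_area_mm2) for s in slices)
```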
(156 days)
The device acquires, stores, transmits, and displays medical images and patient reports of various types, and supports teleradiology image acquisition, distribution, archiving, and viewing.
The IiVS™ Integrated Image Viewing Station is a product family that comes in two versions: DiVS, a DICOM viewer, and IiVS, a 3D viewer. The intended use of the devices is to provide solutions to the medical image-viewing problems that arise as modalities generate ever larger numbers of images. The devices also support image distribution over networks and are DICOM conformant. Finally, the IiVS™ Integrated Image Viewing Station family supports the radiologist in writing a report and in transmitting and storing that report in digital form.
The provided text describes a 510(k) premarket notification for the TeraRecon IiVS™ Integrated Image Viewing Station (K994329). This document primarily focuses on demonstrating substantial equivalence to a predicate device rather than providing a detailed study proving the device meets specific acceptance criteria with quantifiable performance metrics.
Therefore, much of the requested information regarding acceptance criteria, study details, sample sizes, ground truth establishment, and MRMC studies is not explicitly available in the provided text.
However, I can extract and infer some information based on the document's content, particularly from the "Performance Standards" section and the "Comparison Table."
Here's the information that can be extracted and inferred:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a table of acceptance criteria with corresponding reported performance metrics in the way a typical clinical study would. Instead, it relies on demonstrating compliance with standards and comparing features to a predicate device.
Inferred "Acceptance Criteria" (based on compliance and comparison):
| Acceptance Criteria Category | Specific Criteria (Inferred from documentation) | Reported Device Performance (Inferred from documentation) |
|---|---|---|
| Voluntary Standards | Compliance with DICOM 3 Query and Retrieve | "Compliance with DICOM 3 Query and Retrieve" |
| | Compliance with ISO/IEC 12207:1995 (Software Life Cycle) | Software designed to control and manipulate images follows this standard. |
| Hardware Requirements | Execute on Silicon Graphics SGI-320 and 540 and VW 320/540 workstations | Device is configured to run on these specified workstations with detailed CPU, RAM, and storage. |
| | Memory requirement of 640 MB | VW 320/540 configurations specify 640 MB of SDRAM. |
| User Interface | Standard keyboard and wheel-mouse input | "User interacts with the system through a standard keyboard and a wheel-mouse." |
| | Buttons marked with commonly understood symbols or English | "All buttons are marked with commonly understood symbols or English language notation." |
| Data Input | Max. file size: 512 MB or 1000 slices | "Max. file size: 512 MB or 1000 slices" |
| | NTSC video input at 30 frames per second | "NTSC video input signals at 30 frames per second." |
| Data Output | User defined output as DICOM file | "Data output is user defined and is one of 3 options: 1. DICOM file" |
| | User defined output as JPEG compressed image file | "Data output is user defined and is one of 3 options: 2. JPEG compressed image file" |
| | User defined output as BMP file | "Data output is user defined and is one of 3 options: 3. BMP file" |
| Video Output | NTSC-compatible video output as an option | "NTSC-compatible video output as an option." |
| Report Functionality | Provide preset image format for included images, easily modifiable | "Provide preset image format for included images. This format has to be easily modified." |
| | Overlay of figures and characters to annotate findings | "Overlay of figures and characters on top of the images to annotate findings." |
| | Placing report on a network as DICOM, BMP, or JPEG | "Provide for placing the report on a network as DICOM files or BMP or JPEG format." |
| | Provide a print function | "Provide a print function." |
| Comparison to Predicate | Teleradiology image acquisition, distribution, archival and 3D viewing | "SAME" as predicate AccuImage™ Viewer Products. |
| | Network Connectivity: Ethernet 100 Base T | Different from predicate (GPIB IEEE488), but considered equivalent ("Yes" in SE column). |
| | Computer Platform: SGI VW-320 and -540 | Different from predicate (PC), but considered equivalent ("Yes" in SE column). |
| | Lossy Image Compression: JPEG | Different from predicate (BMP), but considered equivalent ("Yes" in SE column). |
| | DICOM Compliant: YES | "YES" (same as predicate) |
| | Image Display: Color, Grey scale, 1600x1200 | Higher resolution than predicate (512x512), considered equivalent ("Yes" in SE column). |
| | Image Edit: 8 object segmentation clipping planes in double oblique orientation | More advanced than predicate (Manual and threshold segmentation), considered equivalent ("Yes" in SE column). |
| | Volume Rendering: Voxel transmission, parallel and perspective ray casting | Different algorithms than predicate (MIP, Surface rendering, Depth encoded surface), considered equivalent ("Yes" in SE column). |
| | 2D/3D Integration: Automatic display of orthogonal planes | More advanced than predicate (Display of basic 2D views), considered equivalent ("Yes" in SE column). |
| | Operating System: SGI320, SGI-540 Windows NT | Different from predicate (Win95/Win98), but considered equivalent ("Yes" in SE column). |
Note on "Reported Device Performance": For a 510(k) submission like this, "reported performance" often means demonstrating that the device meets its design specifications, is compliant with relevant standards, and is substantially equivalent to a legally marketed predicate device. Actual quantitative performance metrics from a dedicated study are typically not included unless addressing a specific performance claim.
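The "Data Output" rows above list DICOM, JPEG, and BMP as the user-selectable export formats. As a rough illustration of what such an export involves, the sketch below windows a CT slice to 8 bits and writes JPEG and BMP copies; it assumes `pydicom`, `numpy`, and `Pillow`, none of which are named in the submission, and the file names and window settings are hypothetical.

```python
import numpy as np
import pydicom
from PIL import Image

ds = pydicom.dcmread("ct_slice.dcm")          # hypothetical input file
hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)

# Apply an example display window (center 40 HU, width 400 HU),
# then scale to the 0-255 range expected by 8-bit image formats.
center, width = 40.0, 400.0
lo, hi = center - width / 2, center + width / 2
scaled = np.clip((hu - lo) / (hi - lo), 0.0, 1.0)
img = Image.fromarray((scaled * 255).astype(np.uint8))

img.save("ct_slice.jpg")   # lossy JPEG export
img.save("ct_slice.bmp")   # uncompressed BMP export
# The original DICOM object can be passed along unchanged for the DICOM option.
```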
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not describe a specific "test set" for performance evaluation in the context of a clinical study or image dataset. The submission focuses on technical specifications, compliance with standards (DICOM, ISO/IEC 12207), and a feature-by-feature comparison to a predicate device. Therefore, information about sample size, data provenance, and retrospective/prospective nature is not provided.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Since there is no described "test set" or clinical study involving image interpretation, there is no information about experts establishing ground truth.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
As no test set involving expert review is described, there is no information on an adjudication method.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
The document does not describe an MRMC comparative effectiveness study, nor does it refer to AI assistance. This device is an image viewing and communication system, not an AI-powered diagnostic aid.
6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done
The device is an image viewing and communication system designed to be used by human radiologists. It is not an algorithm intended for standalone diagnostic performance. Therefore, analysis of "standalone" algorithm performance is not applicable or described.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
Since no clinical performance study involving diagnostic interpretation is described, there is no mention of the type of ground truth used. The ground truth for this type of device relates to its technical functionality (e.g., a DICOM file is correctly transferred and displayed, a report is correctly generated).
8. The sample size for the training set
The document does not describe a "training set" as it is not an AI/machine learning device. The software development follows ISO/IEC 12207, implying standard software testing and validation, but not machine learning training.
9. How the ground truth for the training set was established
As there is no "training set" in the context of machine learning, this question is not applicable and no information is provided.