(39 days)
Presentation-quality mammography images sent via the DICOM standard can be displayed using features that are native to Emageon UV, Inc.'s software. Standard features include intuitive tools for real-time pan, zoom, window/level, and scroll; drag-and-drop series thumbnails for intuitive navigation; a comprehensive set of grayscale and pseudo-color lookup tables; a fully configurable magnifying glass; customizable window/level and zoom presets stored by modality and user; image rotation in 90-degree increments; horizontal and vertical image flipping; customizable display of on-screen user and patient demographics; sharpen, edge, and blur filters; key-image creation for communicating with other physicians; and multiple-display support. Images can be displayed in any configuration or format the user specifies and associates with that user's profile. All display protocols and user-configurable settings follow the user throughout the enterprise.
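As an illustration of the window/level operation such a viewer exposes, here is a minimal sketch using the pydicom and NumPy libraries. It uses a simplified linear form of the DICOM VOI transform; the file path and the fallback window values are illustrative, and the multi-valued window attributes and presentation-state handling a cleared viewer performs are omitted.

```python
import numpy as np
import pydicom

def apply_window_level(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map raw pixel values to 8-bit display values with a simplified
    linear window/level (VOI) transform."""
    lower = center - width / 2.0
    upper = center + width / 2.0
    scaled = (pixels.astype(np.float64) - lower) / (upper - lower)
    return (np.clip(scaled, 0.0, 1.0) * 255.0).astype(np.uint8)

# "mammo.dcm" is a placeholder path for any DICOM image object.
ds = pydicom.dcmread("mammo.dcm")
# Fallback values below are illustrative; real datasets may carry
# multi-valued WindowCenter/WindowWidth, which this sketch does not handle.
center = float(ds.get("WindowCenter", 2047))
width = float(ds.get("WindowWidth", 4096))
display = apply_window_level(ds.pixel_array, center, width)
```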
Lossy-compressed or digitized screen-film mammographic images must not be reviewed for primary image interpretation. Mammographic images may only be interpreted using an FDA-approved monitor that offers at least 5-megapixel resolution and other technical specifications reviewed and accepted by FDA.
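The lossy-compression condition can be checked programmatically from a DICOM file's metadata. The sketch below, using the pydicom library, is one hedged illustration: it inspects the Lossy Image Compression attribute (0028,2110) and the transfer syntax UID. The set of lossy transfer syntaxes shown is representative rather than exhaustive, and a production viewer would enforce this condition in its own way.

```python
import pydicom

# Representative (not exhaustive) transfer syntaxes that imply lossy compression.
LOSSY_TRANSFER_SYNTAXES = {
    "1.2.840.10008.1.2.4.50",  # JPEG Baseline (Process 1)
    "1.2.840.10008.1.2.4.51",  # JPEG Extended (Process 2 & 4)
    "1.2.840.10008.1.2.4.81",  # JPEG-LS Near-Lossless
    "1.2.840.10008.1.2.4.91",  # JPEG 2000 (lossy allowed)
}

def suitable_for_primary_interpretation(path: str) -> bool:
    """Return False if the image shows evidence of lossy compression."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    # (0028,2110) Lossy Image Compression: "01" means lossy compression was applied.
    if ds.get("LossyImageCompression", "00") == "01":
        return False
    if str(ds.file_meta.TransferSyntaxUID) in LOSSY_TRANSFER_SYNTAXES:
        return False
    return True
```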
Emageon UV, Inc.'s Ultravisual™ software is an integrated client-server software package comprising the features previously cleared in the Vortex™ 510(k) K012097. The main difference is that the software will now allow display of presentation-quality digital mammography images sent via the DICOM standard, making viewing of these images more convenient for the user.
Here's a breakdown of the acceptance criteria and study information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Not explicitly stated as specific performance metrics; the underlying claim is substantial equivalence to a cleared predicate device. | The Ultravisual™ software is stated to have the features previously cleared in Vortex™ (K012097) and now additionally allows display of presentation-quality digital mammography images sent via the DICOM standard, making viewing more convenient for the user. |
| The device must not pose new issues of safety and effectiveness compared to its predicate. | The device is deemed substantially equivalent to the original Vortex 130 software, 510(k) K012097, and does not pose any new issues of safety and effectiveness. |
| Compliance with general controls and regulations (registration, listing, GMP, labeling, etc.). | The FDA's substantial equivalence determination allows marketing, subject to general controls and existing major regulations. |
| Lossy-compressed or digitized screen-film mammographic images must not be reviewed for primary image interpretation. | Explicitly stated as a condition for use. |
| Mammographic images may only be interpreted using an FDA-approved monitor that offers at least 5-megapixel resolution and other technical specifications reviewed and accepted by FDA. | Explicitly stated as a condition for use. |
2. Sample Size Used for the Test Set and Data Provenance
The document does not describe a test set or any study evaluating the Ultravisual™ software against defined performance criteria. The submission relies on substantial equivalence to a previously cleared device, so sample size and data provenance for a direct performance study are neither applicable nor provided.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
Since no new performance study with a test set is described, this information is not available.
4. Adjudication Method for the Test Set
As no specific test set or performance study is described, adjudication method information is not available.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
No MRMC comparative effectiveness study is described in the provided text. The submission focuses on substantial equivalence based on the predicate device's features and the addition of digital mammography display capability. Therefore, this information is not available.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
No standalone performance study for the Ultravisual™ software is described. The device's primary function is an image processing, communication, and visualization workstation intended for human interpretation. Therefore, this information is not available.
7. The Type of Ground Truth Used
As no new performance study is described, information on the type of ground truth used is not available. The basis for clearance is substantial equivalence.
8. The Sample Size for the Training Set
The document does not describe a training set as it relies on substantial equivalence and does not detail the development of a de novo algorithm requiring new training data.
9. How the Ground Truth for the Training Set Was Established
Since no training set is described, this information is not available. The device's validation and effectiveness are explicitly linked to the procedures documented in the predicate device's 510(k) (K012097), suggesting that software development and testing followed a similar approach focused on functionality and safety rather than on a discriminative algorithm requiring a separate training set and ground truth.
(58 days)
PracticeBuilder 1-2-3 is a PACS and teleradiology system used to receive DICOM images, scheduling information and textual reports, organize and store them in an internal format, and to make that information available across a network via web and customized user interfaces.
PracticeBuilder 1-2-3 is for use in hospitals, imaging centers, radiologist reading practices and any user who requires and is granted access to patient image, demographic and report information.
PracticeBuilder 1-2-3 is a PACS system comprised of acquisition components (GatewayServer and SendServer), a central system manager component (SmartServer), a diagnostic workstation component (Workstation and Viewer), and an archiving component (ArchiveServer).

The data flow is as follows. Patient and procedure information is optionally delivered to the central system manager, followed by acquisition of the image objects directly from the image sources or by one of the acquisition components. After receiving the procedure information or the image objects, the central system manager searches for and retrieves relevant prior procedure data from the archive component. Once the central system manager registers the acquired image objects and the retrieved prior procedure data, a user can access the information by selecting the item from the operator worklist. The image data is transmitted to and rendered on the user's workstation using the diagnostic workstation components. After using the workstation to view the images, the user optionally dictates a report into the system, after which a user can play back the dictation and transcribe it to text. Once PracticeBuilder 1-2-3's central system manager registers a report, the report is available for access by the referring physician, or it can be exported into an information system. At a configured point in time, the image data and report information are delivered to the archiving component for backup and long-term storage.
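To make the acquisition step concrete, here is a hedged sketch of the kind of DICOM receive service a component like GatewayServer provides, written with the open-source pynetdicom library. The AE title, port, and spool directory are illustrative assumptions, and this is not PracticeBuilder's actual implementation.

```python
from pathlib import Path

from pynetdicom import AE, AllStoragePresentationContexts, evt

STORAGE_DIR = Path("incoming")  # illustrative spool directory
STORAGE_DIR.mkdir(exist_ok=True)

def handle_store(event):
    """Persist each received image object, as an acquisition component
    would before handing it off to the central system manager."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    ds.save_as(STORAGE_DIR / f"{ds.SOPInstanceUID}.dcm", write_like_original=False)
    return 0x0000  # DICOM Success status

ae = AE(ae_title="GATEWAY")  # illustrative AE title
ae.supported_contexts = AllStoragePresentationContexts
# Listen for C-STORE requests from modalities or other image sources.
ae.start_server(("0.0.0.0", 11112), block=True,
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```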
The provided text is a 510(k) summary for the PracticeBuilder 1-2-3 PACS system. It focuses on demonstrating substantial equivalence to predicate devices and does not contain information about specific acceptance criteria, performance studies with quantitative metrics, or details about ground truth establishment, sample sizes for training/test sets, or expert qualifications for a standalone AI algorithm or MRMC study.
Here's a breakdown of what is and is not available in the provided document, based on your request:
1. Table of Acceptance Criteria and Reported Device Performance:
- Not Available. The document does not specify quantitative acceptance criteria (e.g., sensitivity, specificity, AUC, image quality metrics) or report device performance against such criteria. The "Validation and Effectiveness" section only states that "Extensive testing of the software package has been performed by programmers, by non-programmers, quality control staff, and by potential customers." This is a general statement about testing practices, not a detailed performance study.
2. Sample Size for the Test Set and Data Provenance:
- Not Available. The document does not mention any specific test set, its sample size, or the provenance of any data used for testing.
3. Number of Experts Used to Establish Ground Truth and Qualifications:
- Not Available. Ground truth establishment, the number of experts involved, or their qualifications are not mentioned.
4. Adjudication Method:
- Not Available. Since no ground truth establishment is described, no adjudication method is mentioned.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- Not Available. There is no mention of an MRMC study or any comparison of human reader performance with or without AI assistance. This device is a PACS system, not an AI diagnostic aid.
6. Standalone (Algorithm Only) Performance Study:
- Not Available. The device is a PACS system designed for image management and display, not a standalone AI algorithm. Therefore, no standalone performance study in the context of an AI algorithm is described.
7. Type of Ground Truth Used:
- Not Available. Given the nature of the device (PACS), the concept of "ground truth" for a diagnostic algorithm is not applicable here in the way it would be for an AI diagnostic device. The document focuses on the functional equivalence of the PACS system.
8. Sample Size for the Training Set:
- Not Available. Not applicable: the device is a PACS system, not an AI algorithm that requires a training set.
9. How Ground Truth for the Training Set Was Established:
- Not Available. Same reasoning as point 8.
In summary:
This 510(k) submission for the PracticeBuilder 1-2-3 PACS system relies on establishing substantial equivalence to existing legally marketed predicate devices (Stentor iSite, Agfa IMPAX, Ultravisual Vortex). The "effectiveness" is primarily demonstrated through the functional equivalence of its components (acquisition, central manager, workstation, archive, teleradiology capabilities) to these predicates. The review focuses on software development practices and general safety and effectiveness concerns rather than specific quantitative performance metrics for a diagnostic algorithm.