RADIN, MODEL V 3.0
RADIN can be used whenever digital images and associated data acquired or generated by different third-party modalities have to be accepted, displayed, transmitted, stored, distributed, processed, and archived so that they are available to professional healthcare personnel. RADIN is not intended to assist healthcare personnel in diagnosis. RADIN can be used together with appropriately and properly installed computer platforms according to the recommendations made in the labeling.
Lossy-compressed mammographic images and digitized film-screen images must not be reviewed for primary image interpretation. Mammographic images may only be interpreted using an FDA-approved monitor that offers at least 5-megapixel resolution and meets other technical specifications reviewed and accepted by the FDA.
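This restriction is the kind of condition a viewing client can enforce in software by inspecting the image header before display. A minimal sketch, assuming the pydicom library; the file path and function name are illustrative, not part of the submission:

```python
# Illustrative only (not from the submission): flag DICOM images whose
# headers indicate lossy compression, which must not be used for primary
# mammographic interpretation. The file path is hypothetical.
import pydicom

def unsuitable_for_primary_read(path: str) -> bool:
    """Return True if the image's header marks it as lossy-compressed."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    # Lossy Image Compression (0028,2110): '01' means lossy compression
    # was applied at some point in the image's history.
    return ds.get("LossyImageCompression", "00") == "01"

if unsuitable_for_primary_read("mammo_study/img001.dcm"):
    print("Warning: lossy-compressed image; not for primary interpretation.")
```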
Typical users are trained healthcare professionals, including but not limited to physicians, licensed practitioners, and nurses.
RADIN 3.0 is a system to distribute medical images and reports within and outside of healthcare environments. It is available as a stand-alone software package. RADIN consists of the following set of software modules: RADIN.online, RADIN.web, RADIN.archive. RADIN offers three types of clients: RADIN.Classic Client, RADIN.Expert Client, RADIN.Expert dual monitor Client. RADIN requires standard PC hardware.
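For context, the DICOM acceptance-and-archive role that a module such as RADIN.archive plays can be sketched with the pynetdicom library. This is an illustrative sketch, not RADIN's implementation; the AE title, port, and storage directory are assumptions:

```python
# Minimal DICOM storage service (C-STORE SCP): accepts images sent by
# modalities or other DICOM nodes and writes them to disk.
import os
from pynetdicom import AE, evt, StoragePresentationContexts

STORAGE_DIR = "archive"  # hypothetical archive location

def handle_store(event):
    """Persist each received DICOM object, keyed by its SOP Instance UID."""
    ds = event.dataset
    ds.file_meta = event.file_meta  # reattach file meta for a valid Part 10 file
    os.makedirs(STORAGE_DIR, exist_ok=True)
    ds.save_as(os.path.join(STORAGE_DIR, f"{ds.SOPInstanceUID}.dcm"),
               write_like_original=False)
    return 0x0000  # Success status

ae = AE(ae_title="RADIN_SCP")  # hypothetical AE title
ae.supported_contexts = StoragePresentationContexts  # standard storage SOP classes
ae.start_server(("0.0.0.0", 11112), block=True,
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```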
The provided document is a 510(k) Premarket Submission for the SOHARD RADIN 3.0 device, which is a Picture Archiving and Communications System (PACS). The submission focuses on establishing substantial equivalence to a predicate device and ensuring compliance with quality system regulations. It does not describe an AI/algorithm-driven diagnostic device, and therefore, many of the requested elements for acceptance criteria and study design (like ground truth, expert adjudication, MRMC studies, or standalone performance) are not applicable.
Here's a breakdown of the available information based on your request:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria or performance metrics for the RADIN device in terms of diagnostic accuracy or clinical effectiveness. Instead, the "acceptance criteria" are implied by its substantial equivalence to a predicate device (Thinking Systems ThinkingNet (K010271)) and compliance with various regulatory standards and quality systems.
The reported "performance" focuses on its functionality and technical characteristics, demonstrating its ability to distribute, store, and display medical images and reports, mirroring the predicate device.
| Feature/Characteristic | Acceptance Criteria (Implied by Substantial Equivalence & Compliance) | Reported Device Performance (RADIN 3.0) |
|---|---|---|
| Intended Use | Equivalent to the predicate device; distribution of medical images and reports within and outside healthcare environments; not for primary interpretation of lossy-compressed mammograms. | Distributes medical images and reports within and outside of healthcare environments; receives DICOM data from the hospital network; transfers data to clients (intranet/Internet); integrates with HIS/RIS/CIS; displays images and reports in a web browser; offers image manipulation and measurements. Lossy-compressed mammographic images and digitized film-screen images must not be reviewed for primary image interpretation; mammographic images may only be interpreted using an FDA-approved monitor. |
| Technological Characteristics | Equivalent to the predicate device (e.g., networking, DICOM compliance, platform, operating system, compression, security, client features). | Networking: TCP/IP. Image acquisition/communication: DICOM compliant, DICOM 3.0 file formats. Imaging modalities: multi-modality (CR, CT, DR, DS, DX, ES, GM, IO, MG, MR, NM, PT, OT, RF, RT, US, XA, XC). Platform: PC, Windows OS. Data compression: original format, JPEG lossless, JPEG lossy (5-100%), wavelet (5-100%). Security: user authentication, SSL encryption/VPN for data transmission, user management (accounts, groups, levels). Viewing clients: RADIN.Classic, RADIN.Expert, RADIN.Expert Dual Monitor. Image manipulation: zoom, quick zoom, magnifying glass, pan, window leveling, edge enhancement, grayscale inversion, rotating, flipping. Measurements: distance, angulation, grayscale density (probe), manual distance calibration. Workflow: database filters, DICOM query/retrieve, patient assignment changes, multiple series loading, preloading, study availability, display with reports, RIS/HIS integration, Windows copy/print. Archiving: DVD-R jukebox, hard-disk RAID, data verification, manipulation detection, database consistency check. |
| Safety & Effectiveness | No new safety or effectiveness issues compared to the predicate; compliance with quality systems and regulations; mitigation of identified hazards. | Risk analysis performed; hazards controlled by a risk management plan; verification and validation tests performed; evaluations by hospitals; no software components expected to result in death or injury; requirement tracing, integration testing, and decision reviews ensure fulfillment of requirements. All potential hazards classified as minor. |
| Regulatory Compliance | Compliance with relevant standards and regulations. | Complies with 21 CFR 820, ISO 9001:2000, ISO 13485:2000, 93/42/EEC, and IEC 60601-1-4. |
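As an illustration of the distance-measurement feature listed above, converting pixel coordinates to physical length rests on the DICOM Pixel Spacing attribute (0028,0030); manual calibration simply substitutes a user-derived spacing. A hedged sketch using pydicom, with a hypothetical file path and measurement points:

```python
# Illustrative only: how a viewer's distance tool converts pixel
# coordinates to millimetres using Pixel Spacing (0028,0030).
import math
import pydicom

def distance_mm(ds: pydicom.Dataset, p1: tuple, p2: tuple) -> float:
    """Euclidean distance in mm between two (row, col) pixel coordinates."""
    # Pixel Spacing (0028,0030) is [row spacing, column spacing] in mm.
    row_mm, col_mm = (float(v) for v in ds.PixelSpacing)
    return math.hypot((p2[0] - p1[0]) * row_mm, (p2[1] - p1[1]) * col_mm)

ds = pydicom.dcmread("study/img001.dcm")  # hypothetical path
print(f"{distance_mm(ds, (100, 120), (260, 400)):.1f} mm")
```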
2. Sample Size Used for the Test Set and Data Provenance
The document does not describe a "test set" in the sense of an algorithm's performance evaluation against ground truth. The verification and validation activities mention "evaluations by hospitals" and "integration and system testing including full testing of hazard mitigation," but no sample size of medical cases or data provenance is given for these evaluations in this submission.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
Not applicable. As this is not an AI/algorithm-driven device requiring diagnostic performance evaluation, there is no mention of experts establishing ground truth for a test set.
4. Adjudication Method for the Test Set
Not applicable. No test set requiring expert adjudication is described.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. without AI Assistance
Not applicable. The RADIN device is a PACS system for image distribution and viewing, not an AI-assisted diagnostic tool.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done
Not applicable. The RADIN device is a standalone software package, but its "standalone performance" refers to its functionality as a PACS, not its performance as a diagnostic algorithm independently of human review. The document explicitly states: "A physician, providing ample opportunity for competent human intervention interprets images and information delivered by RADIN." And for primary image interpretation, it emphasizes that "The final decision regarding diagnoses, however, lies with the doctors and/or their medical staff in their very own responsibility."
7. The Type of Ground Truth Used
Not applicable in the context of diagnostic accuracy. The "ground truth" for this device is its adherence to DICOM standards; successful image transfer, display, storage, and retrieval; and compliance with general software quality and safety regulations, all verified implicitly through the testing and validation activities described (e.g., the integration test plan and hazard analysis).
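Such functional "ground truth" is typically exercised by round-trip integrity tests. Below is a minimal sketch of the kind of check an integration test plan might contain; store_to_archive and retrieve_from_archive are hypothetical stand-ins for the system under test:

```python
# Sketch of a store/retrieve integrity check such as a PACS integration
# test plan might include. The helpers and fixture path are hypothetical.
import hashlib
import pydicom

def pixel_sha256(path: str) -> str:
    """Hash of the raw pixel data, used to detect any alteration in transit."""
    return hashlib.sha256(pydicom.dcmread(path).PixelData).hexdigest()

def test_round_trip_preserves_pixel_data(store_to_archive, retrieve_from_archive):
    original = "fixtures/ct_slice.dcm"           # hypothetical test fixture
    sop_uid = store_to_archive(original)         # send to the archive
    retrieved = retrieve_from_archive(sop_uid)   # query/retrieve it back
    # Losslessly archived pixel data must hash identically on return.
    assert pixel_sha256(retrieved) == pixel_sha256(original)
```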
8. The Sample Size for the Training Set
Not applicable. This is not an AI/machine learning device that requires a training set.
9. How the Ground Truth for the Training Set Was Established
Not applicable. As noted above, this is not an AI/machine learning device, so no training set (and hence no training ground truth) exists.
Summary:
The SOHARD RADIN 3.0 submission details a PACS system. Its acceptance criteria are primarily based on demonstrating substantial equivalence to a legally marketed predicate device (Thinking Systems ThinkingNet, K010271) and adherence to established quality system regulations (e.g., 21 CFR 820, ISO standards). The "study" proving it meets these criteria consists of software development processes including verification and validation tests, risk analysis, and compliance with relevant standards. The document does not describe an AI/algorithm-driven diagnostic device and thus lacks information related to specific clinical performance metrics, ground truth establishment, expert review, or machine learning-related study designs.