Search Results
Found 2 results
510(k) Data Aggregation
(159 days)
The ThinkingNet is a Medical Image Management and Review System, commonly known as PACS. ThinkingNet, made by Thinking Systems Corporation, Florida, USA, is indicated for acceptance, transmission, storage, archival, reading, interpretation, clinical review, analysis, annotation, distribution, printing, editing and processing of digital images and data acquired from DICOM-compatible diagnostic devices by healthcare professionals, including radiologists, cardiologists, physicians, technologists and clinicians.
- With the ThinkingWeb option it can be used to access diagnostic information remotely with full workstation functionality, or to collaborate with other users. The client device is cross-platform for all but the thick-client ThinkingNet.Net option.
- With the molecular imaging option it can be used for processing and interpreting nuclear medicine and other molecular imaging studies.
- With image co-registration and fusion option it can be used for processing and interpreting PET-CT, PET-MRI, SPECT-CT and other hybrid imaging studies.
- With the Mammography option it can be used for screening and diagnosis (with MG "For Presentation" images only) from FDA-approved modalities, in softcopy (using FDA-cleared displays for mammography) and printed formats.
- With the cardiology option it can be used for reading, interpreting and reporting cardiac studies, such as nuclear cardiac, PET cardiac, echocardiographic, X-ray angiographic and CTA studies.
- With the Orthopedic option it can be used to perform common orthopedic measurements of the hip, knee, spine, etc.
- With the 3D/MPR option it can be used for volumetric image data visualization: MIP, MPR, VR and triangulation.
- With the Quality Assurance option it can be used by PACS administrators or clinicians to perform quality control activities related to patient and image data.
ThinkingNet is a multi-modality PACS/RIS with applications optimized for each individual imaging modality. The image data and applications can be accessed locally or remotely. ThinkingNet workstation software is designed as diagnostic reading and processing software packages, which may be marketed as software only, as well as packaged with standard off-the-shelf computer hardware.
The base functions include receiving, transmitting, storing, archiving and displaying images from all imaging modalities. When enabled, the system allows remote access to image data and applications over a local or wide area network, using a Web browser, thick-client, thin-client or cloud-based remote application deployment method.
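The receive-and-store function described above corresponds to the standard DICOM storage service (C-STORE). As an illustration only, and not Thinking Systems' implementation, a minimal storage receiver (SCP) can be sketched in Python with the open-source pynetdicom library; the port, AE title and storage directory below are arbitrary assumptions.

```python
# Minimal sketch of a DICOM Storage SCP (C-STORE receiver) using pynetdicom.
# Illustrative only; the port, AE title and storage path are arbitrary
# assumptions, not values from the ThinkingNet submission.
from pathlib import Path

from pynetdicom import AE, evt, AllStoragePresentationContexts

STORAGE_DIR = Path("received_dicom")
STORAGE_DIR.mkdir(exist_ok=True)

def handle_store(event):
    """Write each received DICOM instance to disk, keyed by SOP Instance UID."""
    ds = event.dataset
    ds.file_meta = event.file_meta  # keep transfer syntax / file meta information
    ds.save_as(STORAGE_DIR / f"{ds.SOPInstanceUID}.dcm")
    return 0x0000  # DICOM Success status

ae = AE(ae_title="TOY_PACS")
ae.supported_contexts = AllStoragePresentationContexts
# Blocks and listens for incoming C-STORE requests from modalities or workstations.
ae.start_server(("0.0.0.0", 11112), block=True,
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```

In a real PACS the handler would also index the instance in a database and forward it to archive and routing services; the sketch only shows the protocol-level receive step.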
Options allow for additional capability, including modality specific applications, quantitative postprocessing, modality specific measurements, multi-planar reformatting and 3D visualization.
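The multi-planar reformatting and projection operations behind the 3D/MPR option can be illustrated generically. The sketch below is not the vendor's code; it assumes an isotropic volume already loaded as a NumPy array (axes ordered slice, row, column), whereas a production viewer would also handle anisotropic voxel spacing and patient orientation.

```python
# Illustrative sketch of two common volume-visualization operations: orthogonal
# multi-planar reformatting (MPR) and maximum-intensity projection (MIP).
import numpy as np

def orthogonal_mpr(volume: np.ndarray, index: int):
    """Return axial, coronal and sagittal planes through the given index."""
    axial = volume[index, :, :]     # slice plane
    coronal = volume[:, index, :]   # front-to-back plane
    sagittal = volume[:, :, index]  # left-to-right plane
    return axial, coronal, sagittal

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum-intensity projection along one axis (e.g. for vascular studies)."""
    return volume.max(axis=axis)

# Example with a synthetic 64x64x64 volume:
vol = np.random.randint(0, 1000, size=(64, 64, 64), dtype=np.int16)
ax, co, sa = orthogonal_mpr(vol, index=32)
projection = mip(vol, axis=0)
print(ax.shape, co.shape, sa.shape, projection.shape)
```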
The ThinkingNet Molecular Imaging modules offer, through MDStation and ThinkingWeb, image processing functionality with the same indications as the predicate modality workstations. They deliver image processing and review tools for functional imaging modalities such as nuclear medicine, PET, PET/CT, SPECT/CT and PET/MRI.
ThinkingNet Mammo module is a diagnostic softcopy breast imaging workstation with diagnostic print capability.
- It displays and prints regionally approved DICOM DR Digital Mammography images (MG SOP class) with a default or user-defined mammography hanging protocol (a minimal check of the image type is sketched after this list).
- It displays and prints regionally approved DICOM CR Digital Mammography images (CR SOP class) with a default or user-defined mammography hanging protocol.
- It displays adjunct breast imaging modality studies (i.e. Breast MR, Breast PET and Breast gamma camera) for comparison.
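The restriction of softcopy mammography reading to "For Presentation" images is encoded in DICOM metadata (the SOP Class UID and the PresentationIntentType attribute). A minimal pre-display check, sketched with the open-source pydicom library, is shown below; the file name is a placeholder and the check is not part of the submission.

```python
# Sketch of a pre-display check that a mammography object is a "For Presentation"
# image, using pydicom. The file name is a placeholder.
import pydicom

MG_FOR_PRESENTATION = "1.2.840.10008.5.1.4.1.1.1.2"    # Digital Mammography X-Ray Image Storage - For Presentation
MG_FOR_PROCESSING = "1.2.840.10008.5.1.4.1.1.1.2.1"    # Digital Mammography X-Ray Image Storage - For Processing

ds = pydicom.dcmread("example_mammo.dcm", stop_before_pixels=True)

sop_class = str(ds.SOPClassUID)
intent = getattr(ds, "PresentationIntentType", "")  # tag (0008,0068), when present

if sop_class == MG_FOR_PRESENTATION or intent == "FOR PRESENTATION":
    print("Softcopy display for reading is permitted for this object.")
elif sop_class == MG_FOR_PROCESSING or intent == "FOR PROCESSING":
    print("Raw 'For Processing' image: not intended for softcopy interpretation.")
else:
    print("Not a digital mammography image; handle per its own SOP class.")
```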
The ThinkingWeb modules offer comprehensive remote image and application access methods that allow clinicians to review and process images remotely. The following modules are available:
- ThinkingWebLite: Clientless image distribution via a simple Web browser (see NuWEB in K010271). It is primarily a referring physician's portal, not intended for primary reading.
- ThinkingNet.Net: A thick-client implementation using an existing image review module (NuFILM) with a proprietary image streaming mechanism.
- ThinkingWeb: Cross-platform thin-client remote application access based on the existing MDStation software and off-the-shelf remote computing technology.
- ThinkingWeb Extreme: A cloud-based remote application deployment implementation based on the existing MDStation software and off-the-shelf cloud computing technologies.
Except for ThinkingNet.Net, all ThinkingWeb products support cross-platform client devices; ThinkingNet.Net requires a Windows-based client computer.
The provided 510(k) summary for "ThinkingNet Modality Applications and Web Extensions" does not contain specific acceptance criteria or performance study results in the typical format of a clinical or technical validation study.
Instead, the submission primarily focuses on demonstrating substantial equivalence to predicate devices. This means that, for this type of submission, the manufacturer asserts that their device is as safe and effective as existing legally marketed devices, rather than needing to prove new performance metrics against predefined acceptance criteria. The "Performance testing" mentioned refers to internal verification and validation activities to ensure the new features (like Web options and modality-specific modules) still perform as expected and maintain equivalence to the predicate devices.
Therefore, many of the requested details about acceptance criteria, specific performance numbers, sample sizes for test sets, expert qualifications, and ground truth methodologies for a new performance claim are not present in the provided document.
Here's an analysis of the available information:
1. A table of acceptance criteria and the reported device performance
- Acceptance Criteria: Not explicitly stated as quantifiable metrics for a new claim. The "acceptance criteria" here are implicitly linked to demonstrating substantial equivalence with the listed predicate devices. This means the device must function similarly in terms of image management, communication, archiving, and processing capabilities.
- Reported Device Performance: No specific numerical performance metrics (e.g., sensitivity, specificity, accuracy) are reported for the device in the context of a clinical study designed to establish new performance claims. The document states that "Performance testing was conducted to show that ThinkingNet is safe and effective," but does not provide details of these tests or their results.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size: Not specified.
- Data Provenance: Not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study: No MRMC study is mentioned. This device is a PACS/image management system, not an AI-assisted diagnostic tool for which such studies are typically conducted.
- Effect Size: Not applicable as no MRMC study was performed.
6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done
- Standalone Performance: Not applicable/not specified. The device is an image management and review system, inherently designed for human interaction.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: Not specified. For a PACS, the "ground truth" would generally relate to the accurate display, storage, and retrieval of medical images and data, ensuring data integrity and functionality mirroring predicate devices.
8. The sample size for the training set
- Sample Size for Training Set: Not applicable/not specified. This is a PACS system, not a machine learning model that typically requires a discrete training set.
9. How the ground truth for the training set was established
- Ground Truth for Training Set: Not applicable.
Summary of available information regarding the "study" (internal V&V) and acceptance criteria:
The "study" referenced in the document is the internal verification and validation (V&V) process conducted by Thinking Systems, following their ISO 13485 and FDA 21 CFR Part 820 compliant Quality System.
- Acceptance Criteria (Implicit): The primary acceptance criterion is that ThinkingNet, with its Web options and modality-specific modules, be as safe and as effective as, and have performance substantially equivalent to, the predicate device(s). This means the device must meet the functional and performance characteristics established by the legally marketed predicate devices.
- Proof of Meeting Criteria:
- Performance Testing: "Performance testing was conducted to show that ThinkingNet is safe and effective." This would have involved internal tests to ensure the system's various functions (receiving, transmission, storage, display, advanced processing, remote access) operate correctly and reliably. These tests would validate that the device's features, especially the new Web options and modality-specific applications, perform comparably to or within the established safety and effectiveness parameters of the predicate devices.
- Quality Assurance Measures: The document lists several QA measures applied to the development, including Requirements Specification, Design Specification, Hazard and Risk Analysis, Modular Testing, Verification Testing, Validation Testing, and Integration Testing. These processes ensure that the device was designed and built to meet its intended purpose and functions properly.
- Substantial Equivalence Argument: The core of the submission is the detailed argument for substantial equivalence, comparing ThinkingNet's intended use, indications, target populations, and technical characteristics with multiple predicate devices across various modalities (PACS, molecular imaging, mammography, cardiology, RECIST, echocardiography). The differences identified (e.g., more built-in modality-specific applications, server-side processing for Web clients, cross-platform support) are then argued not to affect safety and effectiveness, supported by the internal performance testing.
In essence, for this 510(k) submission, the "acceptance criteria" were met by demonstrating that the device functions comparably to existing cleared devices, and the "study" was the manufacturer's well-documented internal verification and validation process designed to ensure this equivalence and the device's safety and effectiveness. No independent clinical efficacy study with specific numerical performance targets was required or presented for this type of device and submission pathway.
(42 days)
RADIN can be used whenever digital images and associated data acquired or generated by different third-party modalities have to be accepted, displayed, transmitted, stored, distributed, processed and archived in order to be available for professional health care personnel. RADIN is not intended to assist healthcare personnel in diagnosis. RADIN can be used together with appropriate and properly installed computer platforms according to the recommendations made in the labeling.
Lossy compressed mammographic images and digitized film screen images must not be reviewed for primary image interpretations. Mammographic images may only be interpreted using an FDA approved monitor that offers at least 5 Mpixel resolution and meets other technical specifications reviewed and accepted by FDA.
Typical users are trained healthcare professionals including but not limited to physicians, licensed practitioners, and nurses.
RADIN 3.0 is a system to distribute medical images and reports within and outside of health care environments. It is available as a stand-alone software package. RADIN consists of the following set of software modules: RADIN.online, RADIN.web, RADIN.archive. RADIN offers three types of clients: RADIN.Classic Client, RADIN.Expert Client, RADIN.Expert dual monitor Client. RADIN requires standard PC-Hardware.
The provided document is a 510(k) Premarket Submission for the SOHARD RADIN 3.0 device, which is a Picture Archiving and Communications System (PACS). The submission focuses on establishing substantial equivalence to a predicate device and ensuring compliance with quality system regulations. It does not describe an AI/algorithm-driven diagnostic device, and therefore, many of the requested elements for acceptance criteria and study design (like ground truth, expert adjudication, MRMC studies, or standalone performance) are not applicable.
Here's a breakdown of the available information based on your request:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria or performance metrics for the RADIN device in terms of diagnostic accuracy or clinical effectiveness. Instead, the "acceptance criteria" are implied by its substantial equivalence to a predicate device (Thinking Systems ThinkingNet (K010271)) and compliance with various regulatory standards and quality systems.
The reported "performance" focuses on its functionality and technical characteristics, demonstrating its ability to distribute, store, and display medical images and reports, mirroring the predicate device.
| Feature/Characteristic | Acceptance Criteria (Implied by Substantial Equivalence & Compliance) | Reported Device Performance (RADIN 3.0) |
|---|---|---|
| Intended Use | Equivalent to predicate device; distribution of medical images/reports within and outside healthcare environments; not for primary image interpretation of lossy compressed mammograms. | Distributes medical images and reports within and outside of healthcare environments; receives DICOM data from the hospital network; transfers data to clients (Intranet/Internet); integrates with HIS/RIS/CIS; displays images and reports in a web browser; offers image manipulation and measurements. Lossy compressed mammographic images and digitized film screen images must not be reviewed for primary image interpretation; mammographic images may only be interpreted using an FDA-approved monitor. |
| Technological Characteristics | Equivalent to predicate device (e.g., networking, DICOM compliance, platform, operating system, compression, security, client features). | Networking: TCP/IP. Image acquisition/communication: DICOM compliant, DICOM 3.0 file formats. Imaging modalities: multi-modality (CR, CT, DR, DS, DX, ES, GM, IO, MG, MR, NM, PT, OT, RF, RT, US, XA, XC). Platform: PC, Windows OS. Data compression: original format, JPEG lossless, JPEG lossy (5-100%), wavelet (5-100%). Security: user authentication, SSL encryption/VPN for data transmission, user management (accounts, groups, levels). Viewing clients: RADIN.Classic, RADIN.Expert, RADIN.Expert Dual Monitor. Image manipulation: zoom, quick zoom, magnifying glass, pan, window leveling, edge enhancement, grayscale inversion, rotating, flipping. Measurements: distance, angulation, grayscale density (probe), manual distance calibration. Workflow: database filters, DICOM query/retrieve, patient assignment changes, multiple series loading, preloading, study availability, display with reports, RIS/HIS integration, Windows copy/print. Archiving: DVD-R jukebox, hard disk RAID, data verification, manipulation detection, database consistency check. |
| Safety & Effectiveness | No new safety or effectiveness issues compared to predicate; compliance with quality systems and regulations; mitigates identified hazards. | Risk analysis performed; hazards controlled by the risk management plan; verification and validation tests performed; evaluations by hospitals; no software components expected to result in death or injury; requirement tracing, integration testing, and decision reviews ensure fulfillment of requirements. All potential hazards classified as minor. |
| Regulatory Compliance | Compliance with relevant standards and regulations. | Complies with 21 CFR 820, ISO 9001:2000, ISO 13485:2000, 93/42/EEC, and IEC 60601-1-4. |
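Two of the viewing functions listed in the table above, window leveling and pixel-spacing-based distance measurement, can be illustrated generically. The sketch below is not SOHARD's implementation; the window values and pixel spacing are made up for the example.

```python
# Generic illustration of two viewer operations from the table above:
# window/level mapping to 8-bit display values, and a distance measurement
# based on DICOM PixelSpacing. All numeric values are made up for the example.
import numpy as np

def apply_window_level(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map stored pixel values to 0-255 display grayscale for the given window."""
    low = center - width / 2.0
    scaled = (pixels.astype(np.float64) - low) / width
    return (np.clip(scaled, 0.0, 1.0) * 255.0).astype(np.uint8)

def distance_mm(p1, p2, pixel_spacing) -> float:
    """Euclidean distance between two (row, col) points, in millimetres."""
    row_mm, col_mm = pixel_spacing  # DICOM PixelSpacing: (row spacing, column spacing)
    dy = (p1[0] - p2[0]) * row_mm
    dx = (p1[1] - p2[1]) * col_mm
    return float(np.hypot(dx, dy))

ct_slice = np.random.randint(-1000, 2000, size=(512, 512), dtype=np.int16)
display = apply_window_level(ct_slice, center=40, width=400)   # soft-tissue window
length = distance_mm((100, 120), (180, 260), pixel_spacing=(0.7, 0.7))
print(display.dtype, round(length, 1))
```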
2. Sample Size Used for the Test Set and Data Provenance
The document does not describe a "test set" in the context of an algorithm's performance evaluation against ground truth. The verification and validation activities mention "evaluations by hospitals" and "integration and system testing including full testing of hazard mitigation," but no specific sample size of medical cases or data provenance is provided for these evaluations in this submission.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
Not applicable. As this is not an AI/algorithm-driven device requiring diagnostic performance evaluation, there is no mention of experts establishing ground truth for a test set.
4. Adjudication Method for the Test Set
Not applicable. No test set requiring expert adjudication is described.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs without AI Assistance
Not applicable. The RADIN device is a PACS system for image distribution and viewing, not an AI-assisted diagnostic tool.
6. If a Standalone (i.e. Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done
Not applicable. The RADIN device is a standalone software package, but its "standalone performance" refers to its functionality as a PACS, not its performance as a diagnostic algorithm independently of human review. The document explicitly states: "A physician, providing ample opportunity for competent human intervention interprets images and information delivered by RADIN." And for primary image interpretation, it emphasizes that "The final decision regarding diagnoses, however, lies with the doctors and/or their medical staff in their very own responsibility."
7. The Type of Ground Truth Used
Not applicable in the context of diagnostic accuracy. The "ground truth" for this device would be its adherence to DICOM standards, successful image transfer, display, storage, and retrieval, and compliance with general software quality and safety regulations, which are implicitly verified through testing and validation activities mentioned (e.g., integration test plan, hazard analysis).
8. The Sample Size for the Training Set
Not applicable. This is not an AI/machine learning device that requires a training set.
9. How the Ground Truth for the Training Set Was Established
Not applicable. This is not an AI/machine learning device that requires a training set.
Summary:
The SOHARD RADIN 3.0 submission details a PACS system. Its acceptance criteria are primarily based on demonstrating substantial equivalence to a legally marketed predicate device (Thinking Systems ThinkingNet, K010271) and adherence to established quality system regulations (e.g., 21 CFR 820, ISO standards). The "study" proving it meets these criteria consists of software development processes including verification and validation tests, risk analysis, and compliance with relevant standards. The document does not describe an AI/algorithm-driven diagnostic device and thus lacks information related to specific clinical performance metrics, ground truth establishment, expert review, or machine learning-related study designs.