Search Results

Found 2 results

510(k) Data Aggregation

    K Number: K021913
    Device Name: CALSCOR SOFTWARE
    Manufacturer: 3D MED CO., LTD.
    Date Cleared: 2002-09-06 (87 days)
    Regulation Number: 892.2050
    Predicate Device: AccuScore/Accuview Software (K990241)

    Intended Use

    The CalScoR Software is intended to be used with the Rapidia software by a trained physician for the review and analysis of CT images as an aid in cardiac analysis.

    Device Description

    CalScoR is a Windows-compatible software program that runs with the previously cleared Rapidia Software (K012290). The Rapidia Software is a fast, practical, and accurate tool for 3D (three dimensional) and 2D (two dimensional) viewing and manipulation of CT and MRI images using the most advanced graphics-rendering technology.

    CalScoR can specify the location of calcium in images, calculate the calcium score based on Agatston's method, display the score's percentile on a graph, and issue reports.
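
    For context, the Agatston method sums each calcified lesion's area weighted by a factor set by the lesion's peak attenuation (130-199 HU gives weight 1, 200-299 weight 2, 300-399 weight 3, and 400 or above weight 4). Below is a minimal sketch of that published convention; the function, names, and small-lesion cutoff are illustrative and not CalScoR's actual implementation.

        # Minimal Agatston-style scoring for one axial CT slice (illustrative only).
        import numpy as np
        from scipy import ndimage

        def agatston_slice_score(hu_slice: np.ndarray, pixel_area_mm2: float) -> float:
            """Sum lesion area times a peak-density weight for a single slice."""
            mask = hu_slice >= 130                  # conventional calcium threshold (HU)
            labels, n_lesions = ndimage.label(mask) # connected calcified lesions
            score = 0.0
            for lesion in range(1, n_lesions + 1):
                region = hu_slice[labels == lesion]
                area_mm2 = region.size * pixel_area_mm2
                if area_mm2 < 1.0:                  # skip sub-millimetre specks (common practice)
                    continue
                peak = region.max()
                weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
                score += area_mm2 * weight
            return score

        # The total score for a scan is the sum of the per-slice scores.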

    AI/ML Overview

    Here's a summary of the acceptance criteria and study details for the 3D Med Co., Ltd. CalScoR software, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    Acceptance Criterion: Scores obtained from CalScoR and the predicate device (AccuScore) are similar; linear regression slope = 1, intercept = 0.
    Reported Performance: Linear regression demonstrated that the scores were similar, with a calculated slope of 1 and an intercept of 0.

    Acceptance Criterion: Scores obtained from CalScoR and the predicate device (AccuScore) are essentially identical.
    Reported Performance: An F-test yielded an F-value of 13742853.39, showing the two scores are identical within a 0.0001 (0.01%) error level.

    Acceptance Criterion: CalScoR performs all functions according to the functional requirements specified in the Software Requirements Specification.
    Reported Performance: Testing demonstrated that CalScoR performs all functions according to those requirements.
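
    The submission reports only these summary statistics, not the underlying paired scores. As a hypothetical illustration of this kind of comparison, the sketch below regresses one device's scores on the other's with SciPy and computes the standard simple-regression F-statistic, F = r^2(n-2)/(1-r^2); the score arrays are invented placeholders.

        import numpy as np
        from scipy import stats

        predicate = np.array([0.0, 12.5, 88.0, 145.2, 410.7, 1033.4])  # hypothetical AccuScore outputs
        candidate = np.array([0.1, 12.4, 88.2, 145.0, 411.1, 1033.8])  # hypothetical CalScoR outputs

        fit = stats.linregress(predicate, candidate)
        n = len(predicate)
        f_stat = fit.rvalue**2 * (n - 2) / (1 - fit.rvalue**2)  # regression F-statistic

        print(f"slope={fit.slope:.4f} intercept={fit.intercept:.4f} F={f_stat:.1f}")
        # Agreement would show slope ~ 1, intercept ~ 0, and a very large F, as reported above.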

    Study Information

    1. Sample size used for the test set and the data provenance: Not explicitly stated. The document mentions "Tests were performed to compare the accuracy," but does not specify the number of cases or the origin of the data.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable, as the comparison was against a predicate device's output, not expert-established ground truth.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set: Not applicable, as the comparison was against a predicate device's output.
    4. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done: No, an MRMC study was not done. The study focused on comparing the algorithm's output to a predicate device's output.
    5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done: Yes, the testing compared the CalScoR software's output directly to the predicate device's output, indicating a standalone algorithm comparison.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): The "ground truth" for this study was the output of a legally marketed predicate device, AccuScore/Accuview Software (K990241), which also calculates calcium scores based on Agaston's method.
    7. The sample size for the training set: Not applicable. The document does not describe a machine learning model that would require a distinct training set. The device's algorithm for calculating the calcium score is stated to be based on Agatston's method.
    8. How the ground truth for the training set was established: Not applicable.

    Overall, the study focused on demonstrating substantial equivalence to a predicate device by comparing the outputs of the two software programs using statistical methods (linear regression and F-test), rather than directly validating against clinical outcomes or a human expert standard.


    K Number: K012290
    Device Name: RAPIDIA
    Manufacturer: 3D MED CO., LTD.
    Date Cleared: 2001-09-28 (70 days)
    Regulation Number: 892.2050
    Predicate Device: Plug'n View 3D (K993654)

    Intended Use

    Rapidia® is a software package intended for viewing and manipulating DICOM-compliant medical images from CT (computerized tomography) and MR (magnetic resonance) scanners. Rapidia can be used for real-time viewing, image manipulation, segmentation, 3D volume and surface rendering, virtual endoscopy, and reporting.

    Device Description

    Rapidia® is a fast, practical, and accurate tool for 3D (three-dimensional) and 2D (two-dimensional) viewing and manipulation of CT and MRI images using the most advanced graphics-rendering technology. The proposed software provides volume rendering (VR), 3D maximum/minimum intensity projection (MIP/MinIP), surface shaded display (SSD), multi-planar reconstruction (MPR), virtual endoscopy, and 2D image editing and segmentation, and issues reports.
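
    As a rough sketch of two of the renderings named above (not Rapidia's implementation), MIP/MinIP reduce a CT volume along a viewing axis, and orthogonal MPR reslices extract planes from the same array. The `volume` array here is a placeholder of random Hounsfield-unit values.

        import numpy as np

        rng = np.random.default_rng(0)
        volume = rng.integers(-1000, 1500, size=(120, 256, 256)).astype(np.int16)  # (slices, rows, cols)

        mip_axial = volume.max(axis=0)      # MIP: brightest voxel along the slice axis
        minip_axial = volume.min(axis=0)    # MinIP: darkest voxel (useful for airways)

        coronal = volume[:, volume.shape[1] // 2, :]   # MPR: mid-coronal reslice
        sagittal = volume[:, :, volume.shape[2] // 2]  # MPR: mid-sagittal reslice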

    AI/ML Overview

    Here's an analysis of the provided text regarding the Rapidia® device, focusing on acceptance criteria and study details.

    Executive Summary:

    The provided 510(k) summary for the Rapidia® device offers very limited information regarding explicit acceptance criteria and a detailed study proving its performance. The primary focus of the document is on establishing substantial equivalence to a predicate device (Plug'n View 3D K993654) and demonstrating conformance to DICOM standards and internal functional requirements.

    Acceptance Criteria and Device Performance:

    Acceptance Criterion: Conformance to DICOM Version 3.
    Reported Performance: "The proposed Rapidia® software conforms to DICOM (Digital Imaging and Communications in Medicine) Version 3."

    Acceptance Criterion: Performance of all input functions, output functions, and required actions as specified in the Software Requirements Specification (SRS).
    Reported Performance: "Validation testing was provided that confirms that Rapidia® performs all input functions, output functions, and required actions according to the functional requirements specified in the Software Requirements Specification (SRS)."
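
    To make the DICOM conformance claim concrete, here is a hedged sketch of reading a DICOM Version 3 file with the pydicom library; this is not part of the submission, and "ct_slice.dcm" is a placeholder path.

        import pydicom

        ds = pydicom.dcmread("ct_slice.dcm")
        print(ds.file_meta.TransferSyntaxUID)   # encoding, per DICOM Part 5
        print(ds.Modality, ds.SOPClassUID)      # e.g. "CT" and the CT Image Storage class

        pixels = ds.pixel_array                 # decoded pixel data as a NumPy array
        hu = pixels * float(ds.RescaleSlope) + float(ds.RescaleIntercept)  # CT values in HU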

    Study Details:

    The document describes a "Validation testing" but provides very few specifics about its methodology or results in terms of concrete performance metrics.

    1. Sample size used for the test set and the data provenance: Not specified. The document only mentions "Validation testing was provided." We don't know the number of images, patient cases, or the origin (country, retrospective/prospective) of the data.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not specified. The document does not mention any expert involvement in establishing ground truth for testing.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set: Not specified.
    4. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with versus without AI assistance: No, an MRMC comparative effectiveness study is not mentioned. The device is purely an image processing and visualization tool, not an AI-assisted diagnostic aid, in the context of this 2001 submission.
    5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done: Yes, the "Validation testing" appears to be a standalone evaluation against the Software Requirements Specification (SRS), focusing on the software's functionality. It is algorithm-only testing in the sense that it verifies the software's ability to execute its programmed functions.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): The "ground truth" for this device's validation appears to be its internal "Software Requirements Specification (SRS)." The testing confirmed the software performed "according to the functional requirements specified in the Software Requirements Specification (SRS)." This is essentially a black-box functional testing approach, not clinical ground truth.
    7. The sample size for the training set: Not applicable/specified. This device, submitted in 2001, is described as an "Image processing and 3D visualization system." It does not appear to employ machine learning or AI that would require a "training set" in the modern sense. Its functionality is based on explicit programming for rendering and manipulation.
    8. How the ground truth for the training set was established: Not applicable, as there's no mention of a training set or machine learning.
