510(k) Data Aggregation (368 days)
The SureTouch Mobile Pressure Mapping System is intended to produce a surface pressure map of the breast as an aid in documenting palpable breast lesions identified during a clinical breast examination. The SureTouch Mobile Pressure Mapping System is intended for use by a qualified healthcare professional trained in its use and is not for home use.
The SureTouch Mobile Pressure Mapping System ("SureTouch") is a computer-based device that produces a pressure map, called a tactile image, of specific areas of the breast as an aid to document lesions detected during a clinical breast exam. SureTouch utilizes a rechargeable, battery-powered hand-held wand (sensor unit) that incorporates a 30 x 40 mm array of pressure sensing elements to collect tactile data as the device is moved across the breast. Data collected using the wand are wirelessly transferred to the tablet display, where they are used to generate tactile images and provide information on a lesion's size, shape, and hardness. The final report includes a tactile image of each lesion along with its user-entered location. The SureTouch System also includes a calibration and training phantom, a scale to ensure correct force is applied during calibration procedures, and a holder for the wand.
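To make the data flow above concrete, here is a minimal sketch, assuming a 30 x 40 element grid over the 30 x 40 mm sensing area (roughly 1 mm pitch) and an arbitrary background threshold, of how a single pressure frame could be summarized into size and hardness estimates. This is an illustration only, not the manufacturer's actual algorithm; the array geometry, threshold, and metrics are assumptions.

```python
import numpy as np

# Assumed geometry: 30 x 40 mm sensing area sampled on a 30 x 40 element grid,
# i.e. roughly 1 mm pixel pitch. These values are illustrative, not from the 510(k).
PIXEL_PITCH_MM = 1.0
BACKGROUND_THRESHOLD = 0.2  # fraction of peak pressure treated as background

def summarize_frame(pressure_frame: np.ndarray) -> dict:
    """Return simple size/hardness descriptors for one tactile frame.

    pressure_frame: 2-D array of calibrated pressure readings (e.g. kPa)
    from the wand's sensing elements.
    """
    peak = float(pressure_frame.max())
    if peak <= 0:
        return {"lesion_detected": False}

    # Treat elements above a fraction of the peak as the lesion footprint.
    footprint = pressure_frame >= BACKGROUND_THRESHOLD * peak
    area_mm2 = float(footprint.sum()) * PIXEL_PITCH_MM ** 2

    # A crude "hardness" proxy: mean pressure over the footprint relative
    # to the mean background pressure.
    background = pressure_frame[~footprint]
    contrast = float(pressure_frame[footprint].mean() / (background.mean() + 1e-9))

    return {
        "lesion_detected": True,
        "peak_pressure": peak,
        "footprint_area_mm2": area_mm2,
        "hardness_contrast": contrast,
    }

# Example: a synthetic frame with a Gaussian "lesion" near the center.
yy, xx = np.mgrid[0:30, 0:40]
frame = 5.0 * np.exp(-((yy - 15) ** 2 + (xx - 20) ** 2) / 50.0) + 0.5
print(summarize_frame(frame))
```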
Here's an analysis of the provided text, broken down by the specific points you asked for.
Please Note: The provided text is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device. It usually does not contain a detailed report of clinical study results, particularly for a device like this that is for documentation rather than diagnosis or treatment. Therefore, some of your requested information, especially regarding clinical performance metrics (like sensitivity, specificity, or reader improvement with AI), will not be explicitly available in this document.
1. A table of acceptance criteria and the reported device performance
The document mentions that various tests "met all acceptance criteria" but does not explicitly list the quantitative acceptance criteria for most of the performance tests. It also doesn't provide specific numerical performance results beyond stating that criteria were met.
| Test Type | Acceptance Criteria (Explicitly Stated) | Reported Device Performance |
|---|---|---|
| Software Information | Met recommendations for minor level of concern software (FDA guidance) | Met recommendations |
| Cybersecurity Information | Met recommendations (FDA guidance) | Met recommendations |
| Electrical Safety | Per ANSI/AAMI EN60601-1:2006+A11:2013+A12:2014 | Met all acceptance criteria |
| Electromagnetic Compatibility | Per IEC 60601-1-2:2007 (3rd edition) | Met all acceptance criteria |
| Wireless Technology Information | Met recommendations (FDA guidance) | Met recommendations |
| Intra- and Inter-observer, and Inter-system Accuracy & Reproducibility Testing | Described in FDA "Class II Special Controls Guidance Document: Breast Lesion Documentation System" | Met all acceptance criteria |
| Force Gauge Validation | Not explicitly stated in document | Met all acceptance criteria (accuracy and reproducibility) |
| Algorithm Output Sensitivity | Not explicitly stated in document | Met all acceptance criteria (sensitivity to calibration errors) |
| Phantom Testing (Sensor Measurement Accuracy) | Not explicitly stated in document | Met all acceptance criteria (under uniform calibration force) |
| Phantom Testing (Aging of Calibration/Training Pad) | Not explicitly stated in document | Met all acceptance criteria (did not impact device calibration results) |
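The summary does not describe how the intra-/inter-observer and inter-system accuracy and reproducibility row was quantified. A common way to evaluate this kind of criterion is to compute agreement statistics on repeated measurements of the same phantom lesions; the sketch below uses hypothetical repeated size measurements and the coefficient of variation purely to illustrate the analysis, not to describe the actual protocol.

```python
import numpy as np

# Hypothetical repeated lesion-size measurements (mm), indexed as
# measurements[observer][lesion] -> list of repeated readings.
# None of these numbers come from the 510(k) summary.
measurements = {
    "observer_A": {"lesion_1": [10.1, 9.8, 10.3], "lesion_2": [15.2, 15.6, 14.9]},
    "observer_B": {"lesion_1": [10.5, 10.0, 10.2], "lesion_2": [15.0, 15.4, 15.1]},
}

def coefficient_of_variation(values):
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean()

# Intra-observer reproducibility: variability of each observer's repeated readings.
for observer, lesions in measurements.items():
    for lesion, repeats in lesions.items():
        cv = coefficient_of_variation(repeats)
        print(f"intra  {observer} {lesion}: CV = {cv:.1%}")

# Inter-observer reproducibility: variability of per-observer means for each lesion.
for lesion in ["lesion_1", "lesion_2"]:
    per_observer_means = [np.mean(measurements[o][lesion]) for o in measurements]
    print(f"inter  {lesion}: CV = {coefficient_of_variation(per_observer_means):.1%}")
```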
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not provide details on the sample size used for any specific test sets or the provenance of the data (country, retrospective/prospective). This type of information is typically found in more detailed study reports, not a 510(k) summary.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not provide this information. It refers to "intra and inter-observer" testing, which implies multiple observers, but does not specify their number, qualifications, or how ground truth was established for these tests. For a Breast Lesion Documentation System rather than a diagnostic device, "ground truth" might refer to the actual physical properties of simulated lesions or consensus on manual palpation findings, rather than a diagnostic gold standard like pathology.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not provide this information.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
The document does not describe an MRMC comparative effectiveness study involving human readers with and without AI assistance to improve diagnostic performance. The device is a "documentation system," not an AI diagnostic aid in the traditional sense discussed in MRMC studies.
6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done
The document describes "Algorithm output sensitivity" testing, which suggests standalone algorithm performance was evaluated for sensitivity to calibration errors. However, it does not provide details on standalone diagnostic performance (e.g., the sensitivity/specificity of the algorithm alone in detecting lesions), because the device's intended use is to document palpable lesions identified by a healthcare professional, not to independently detect them.
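For context, a sensitivity-to-calibration-error test of this kind is often performed by perturbing the calibration parameters within assumed error bounds and checking how much the algorithm's output shifts. The sketch below does this for a stand-in output metric; the gain/offset error bounds, the synthetic frame, and the "hardness" metric are all assumptions for illustration, not the device's actual algorithm or test protocol.

```python
import numpy as np

def reported_hardness(pressure_frame: np.ndarray) -> float:
    """Stand-in for the device's output metric (illustrative only):
    peak calibrated pressure over mean frame pressure."""
    return float(pressure_frame.max() / pressure_frame.mean())

def apply_calibration_error(raw_frame, gain_error=0.0, offset_error=0.0):
    """Simulate a mis-calibrated sensor whose gain/offset are off by the given amounts."""
    return raw_frame * (1.0 + gain_error) + offset_error

# Synthetic raw frame with a central "lesion" (illustrative data only).
yy, xx = np.mgrid[0:30, 0:40]
raw = 5.0 * np.exp(-((yy - 15) ** 2 + (xx - 20) ** 2) / 50.0) + 0.5

baseline = reported_hardness(raw)
for gain_err in (-0.05, 0.0, 0.05):          # +/- 5% gain error (assumed bound)
    for offset_err in (-0.1, 0.0, 0.1):      # +/- 0.1 unit offset error (assumed bound)
        perturbed = apply_calibration_error(raw, gain_err, offset_err)
        delta = (reported_hardness(perturbed) - baseline) / baseline
        print(f"gain {gain_err:+.0%}, offset {offset_err:+.1f}: "
              f"output change {delta:+.1%}")
```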
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The document does not explicitly state the type of ground truth used for its performance testing. Given the "phantom testing" and "intra and inter-observer" testing in the context of a "documentation system" for palpable lesions, the ground truth likely involved:
- Physical characteristics of simulated lesions in phantoms: For tests related to accuracy of sensor measurements or calibration pad aging.
- Expert consensus on palpable characteristics: For tests related to intra- and inter-observer reproducibility of documenting palpable lesions.
It is highly unlikely that pathology or outcomes data would be used as ground truth for a device whose indication is documentation of palpable findings.
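If phantom physical characteristics did serve as ground truth, the accuracy analysis would reduce to comparing measured values against the phantoms' nominal values within a tolerance. The minimal sketch below illustrates that comparison with hypothetical numbers; neither the values nor the tolerance come from the 510(k) summary.

```python
# Illustrative comparison of device measurements against phantom ground truth.
# The nominal phantom values and the tolerance are assumptions, not from the 510(k).
phantom_truth_mm = {"inclusion_A": 5.0, "inclusion_B": 10.0, "inclusion_C": 15.0}
measured_mm = {"inclusion_A": 5.3, "inclusion_B": 9.6, "inclusion_C": 15.4}
tolerance_mm = 1.0  # hypothetical acceptance criterion

for inclusion, truth in phantom_truth_mm.items():
    error = measured_mm[inclusion] - truth
    status = "PASS" if abs(error) <= tolerance_mm else "FAIL"
    print(f"{inclusion}: truth {truth} mm, measured {measured_mm[inclusion]} mm, "
          f"error {error:+.1f} mm -> {status}")
```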
8. The sample size for the training set
The document is a 510(k) summary for a medical device that processes sensor data rather than being a deep learning AI model that requires a "training set" in the machine learning sense. The term "training set" is not mentioned, and thus no sample size is provided. The device described appears to be a tactile sensor system, not an AI imaging interpretation system.
9. How the ground truth for the training set was established
As there is no mention of a "training set" in the context of machine learning, this information is not applicable and not provided.