510(k) Data Aggregation (24 days)
The Sleuth AT system is an implantable, patient- and automatically-activated monitoring system that records subcutaneous ECG and is indicated for:
- Patients with clinical syndromes or situations at increased risk of cardiac arrhythmias
- Patients who experience transient symptoms that may suggest a cardiac arrhythmia
The Sleuth AT Implantable Cardiac System is an electrocardiogram (ECG) monitoring system that includes an implantable component and provides continuous ECG monitoring and episodic or segmented ECG recording. The Sleuth AT Implantable Cardiac System comprises three interrelated components: the Implantable Loop Recorder (ILR), the Personal Diagnostic Manager (PDM), and the Base Station.
The PDM software was updated from Version 4.2 to Version 4.3. Version 4.3 incorporates a Service Menu in the PDM and minor software bug fixes. The Service Menu includes time zone set-up, a data review option, and a transfer log. No other changes to the system were made.
The provided text describes a 510(k) submission for the Sleuth AT Implantable Cardiac Monitoring System. The submission is a "Special 510(k)" indicating minor changes to an already cleared device, where the changes do not affect the intended use or fundamental scientific technology.
Based on the provided information, the following answers can be given:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Software design verification and validation testing demonstrating substantial equivalence to the predicate device. | "The substantial equivalence of Sleuth AT System has been demonstrated via software design verification and validation testing." "Based on the information provided above, the Sleuth AT System incorporating the Service Menu is substantially equivalent to the predicate Sleuth AT System." |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not provide details on the sample size used for the test set or the data provenance. This submission is for a software update with minor bug fixes and a new "Service Menu," not for a new device requiring extensive clinical or performance data. The "test set" here refers to software verification and validation, not a clinical study on patients.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not provide information on the number or qualifications of experts used to establish ground truth for any testing. Given the nature of a Special 510(k) for a software update and bug fixes, expert review would likely be internal (e.g., software engineers, quality assurance personnel) rather than external clinical experts.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not provide information on any adjudication method used for a test set. This type of detail is typically relevant for clinical studies involving interpretation of medical images or data by multiple readers.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
A multi-reader multi-case (MRMC) comparative effectiveness study was not done. The device is an Implantable Cardiac Monitoring System, and the submission pertains to a software update, not an AI-assisted diagnostic tool. Therefore, there is no mention of human readers improving with AI assistance.
6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done
The provided text does not describe a standalone algorithm performance study. The device is an "Implantable Cardiac Monitoring System" with continuous ECG monitoring and recording capabilities. The "performance" described relates to software functionality and substantial equivalence to a predicate device, not an algorithm's standalone diagnostic accuracy.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The document does not explicitly state the type of "ground truth" used. For software verification and validation, ground truth would typically refer to expected software behaviors, defined specifications, and absence of identified bugs, rather than clinical outcomes or pathology.
8. The sample size for the training set
The document does not provide information on a training set sample size. This submission is for a software update and bug fixes, not for a newly developed machine learning or AI algorithm that would typically involve a "training set."
9. How the ground truth for the training set was established
The document does not provide information on how ground truth for a training set was established, as a training set for a new algorithm is not relevant to this Special 510(k) submission.