K Number
K241620
Device Name
ChestView US
Manufacturer
Gleamer SAS
Date Cleared
2025-02-27

(267 days)

Product Code
MYN
Regulation Number
892.2070
Reference & Predicate Devices
Predicate For
N/A
Intended Use

ChestView US is a radiological Computer-Assisted Detection (CADe) software device that analyzes frontal and lateral chest radiographs of patients presenting with symptoms (e.g. dyspnea, cough, pain) or suspected for findings related to regions of interest (ROIs) in the lungs, airways, mediastinum/hila and pleural space. The device uses machine learning techniques to identify and produces boxes around the ROIs. The boxes are labeled with one of the following radiographic findings: Nodule, Pleural space abnormality, Mediastinum/Hila abnormality, and Consolidation.

ChestView US is intended for use as a concurrent reading aid for radiologists and emergency medicine physicians. It does not replace the role of radiologists and emergency medicine physicians or of other diagnostic testing in the standard of care. ChestView US is for prescription use only and is indicated for adults only.

Device Description

ChestView US is a radiological Computer-Assisted Detection (CADe) software device intended to analyze frontal and lateral chest radiographs for suspicious regions of interest (ROIs): Nodule, Consolidation, Pleural Space Abnormality and Mediastinum/Hila Abnormality.

The nodule ROI category was developed from images with focal nonlinear opacity with a generally spherical shape situated in the pulmonary interstitium.

The consolidation ROI category was developed from images with an area of increased attenuation of lung parenchyma due to the replacement of air in the alveoli.

The pleural space abnormality ROI category was developed from images with:

  • Pleural Effusion that is an abnormal presence of fluid in the pleural space
  • Pneumothorax that is an abnormal presence of air or gas in the pleural space that separates the parietal and the visceral pleura

The mediastinum/hila abnormality ROI category was developed from images with enlargement of the mediastinum or the hilar region with a deformation of its contours.

ChestView US can be deployed in the cloud and connected to several computing platforms and X-ray imaging platforms such as radiographic systems, or PACS. More precisely, ChestView US can be deployed in the cloud connected to a DICOM Source/Destination with a DICOM Viewer, i.e. a PACS.

After the acquisition of the radiographs on the patient and their storage in the DICOM Source, the radiographs are automatically received by ChestView US from the user's DICOM Source through intermediate DICOM node(s) (for example, a specific Gateway, or a dedicated API). The DICOM Source can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems).

Once received by ChestView US, the radiographs are automatically processed by the AI algorithm to identify regions of interest. Based on the processing result, ChestView US generates result files in DICOM format. These result files consist of annotated images with boxes drawn around the regions of interest on a copy of all images (as an overlay). ChestView US does not alter the original images, nor does it change the order of original images or delete any image from the DICOM Source.

Once available, the result files are sent by ChestView US to the DICOM Destination through the same intermediate DICOM node(s). Similar to the DICOM Source, the DICOM Destination can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems). The DICOM Source and the DICOM Destination are not necessarily identical.

The DICOM Destination can be used to visualize the result files provided by ChestView US or to transfer the results to another DICOM host for visualization. The users use them as a concurrent reading aid when providing their diagnosis.

For each exam analyzed by ChestView US, a DICOM Secondary Capture is generated.
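The "annotated copy, originals untouched" behavior described above can be sketched with simplified stand-in types. Nothing here is a real DICOM implementation; the `Image`/`Study` classes and the `detector` callback are illustrative assumptions only.

```python
# Toy sketch of the routing invariant described above: processing produces
# a separate annotated copy (akin to the Secondary Capture) and never
# mutates or removes the original images. Simplified stand-ins, not DICOM.
from dataclasses import dataclass

@dataclass(frozen=True)
class Image:
    uid: str
    overlay: tuple = ()  # bounding boxes exist only on the annotated copy

@dataclass
class Study:
    images: list

def analyze(study: Study, detector) -> Study:
    """Return a new Study whose images carry detector overlays; the input
    study is left untouched (original images are never altered)."""
    return Study([Image(img.uid, tuple(detector(img))) for img in study.images])

original = Study([Image("1.2.3"), Image("1.2.4")])
result = analyze(original, detector=lambda img: [("Nodule", "solid")])
```

The frozen `Image` type makes the invariant explicit: the only way to attach an overlay is to construct a new copy, mirroring how the device writes results to a separate DICOM object.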

If any ROI is detected by ChestView US, the output DICOM image includes a copy of the original images of the study and the following information:

  • Above the images, a header with the text "CHESTVIEW ROI" and the list of the findings detected in the image.
  • Around the ROI(s), a bounding box with a solid or dotted line depending on the confidence of the algorithm, and the type of ROI written above the box:
    • Dotted-line Bounding Box: identified region of interest when the confidence degree of the AI algorithm associated with the possible finding is above the high-sensitivity operating point and below the high-specificity operating point; displayed as a dotted bounding box around the area of interest.
    • Solid-line Bounding Box: identified region of interest when the confidence degree of the AI algorithm associated with the finding is above the high-specificity operating point; displayed as a solid bounding box around the area of interest.
  • Below the images, a footer with:
    • The scope of ChestView US, so the user always has available the list of ROI types that are in the indications for use of the device, avoiding any risk of confusion or misinterpretation of the types of ROI detected by ChestView US.
    • The total number of regions of interest identified by ChestView US on the exam (sum of solid-line and dotted-line bounding boxes).

If no ROI is detected by ChestView US, the output DICOM image includes a copy of the original images of the study and the text "NO CHESTVIEW ROI", together with the scope of ChestView US, so the user always has available the list of ROI types that are in the indications for use of the device, avoiding any risk of confusion or misinterpretation of the types of ROI detected by ChestView US.

Finally, if ChestView US cannot process the exam, because it is outside the indications for use of the device or because information needed for processing is missing, the output DICOM image includes a copy of the original images of the study and, in a header, the text "OUT OF SCOPE" with a caution message explaining why no result was provided by the device.
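The dual operating-point display logic above can be sketched as follows. The numeric thresholds are hypothetical placeholders: the actual operating points of ChestView US are not published in this summary.

```python
# Sketch of the dual operating-point display logic described above.
# Thresholds below are hypothetical, not the device's actual values.
from typing import Optional

# Hypothetical (high-sensitivity, high-specificity) operating points per finding.
OPERATING_POINTS = {
    "Nodule": (0.30, 0.70),
    "Consolidation": (0.35, 0.75),
}

def box_style(finding: str, confidence: float) -> Optional[str]:
    """Solid box above the high-specificity point, dotted box between the
    two points, no box below the high-sensitivity point."""
    hi_sens, hi_spec = OPERATING_POINTS[finding]
    if confidence >= hi_spec:
        return "solid"
    if confidence >= hi_sens:
        return "dotted"
    return None

def render_header(detections) -> str:
    """Header text: "CHESTVIEW ROI" plus the findings shown, or
    "NO CHESTVIEW ROI" when nothing crosses an operating point."""
    shown = sorted({f for f, c in detections if box_style(f, c)})
    if not shown:
        return "NO CHESTVIEW ROI"
    return "CHESTVIEW ROI: " + ", ".join(shown)
```

The two thresholds give readers an implicit confidence cue: a dotted box flags a lower-confidence candidate worth a second look, while a solid box signals a finding the algorithm is more certain about.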

AI/ML Overview

Here's a breakdown of the acceptance criteria and study details for ChestView US:

1. Table of Acceptance Criteria and Reported Device Performance

Standalone Performance (ChestView US)

| ROI | AUC [95% bootstrap CI] | Sensitivity @ high-sensitivity OP [95% CI] | Specificity @ high-sensitivity OP [95% CI] | Sensitivity @ high-specificity OP [95% CI] | Specificity @ high-specificity OP [95% CI] |
|---|---|---|---|---|---|
| Nodule | 0.93 [0.921; 0.938] | 0.829 [0.801; 0.86] | 0.956 [0.948; 0.963] | 0.482 [0.455; 0.518] | 0.994 [0.99; 0.996] |
| Mediastinum/Hila Abnormality | 0.922 [0.91; 0.934] | 0.793 [0.739; 0.832] | 0.975 [0.971; 0.98] | 0.535 [0.475; 0.592] | 0.992 [0.99; 0.994] |
| Consolidation | 0.952 [0.947; 0.957] | 0.853 [0.822; 0.879] | 0.946 [0.938; 0.952] | 0.61 [0.583; 0.643] | 0.985 [0.981; 0.989] |
| Pleural Space Abnormality | 0.973 [0.97; 0.975] | 0.892 [0.87; 0.911] | 0.965 [0.958; 0.971] | 0.87 [0.85; 0.896] | 0.975 [0.97; 0.981] |

(Numerical acceptance criteria were not explicitly stated for any of these metrics.)
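The reported AUCs summarize the algorithm's ranking performance across cases. As a point of reference, here is a minimal sketch of how an empirical AUC is computed from per-case scores and binary ground-truth labels; the data below is synthetic, not from the study.

```python
# Minimal sketch of empirical AUC (equivalent to the normalized
# Mann-Whitney U statistic): the probability that a randomly chosen
# positive case scores higher than a randomly chosen negative case,
# with ties counting half. Synthetic data, for illustration only.

def empirical_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0,   0]
auc = empirical_auc(scores, labels)  # 11 of 12 positive/negative pairs ranked correctly
```

Because AUC is threshold-free, it complements the per-operating-point sensitivity/specificity figures, which depend on where the two thresholds are set.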

MRMC Study Acceptance Criteria and Reported Performance (Improvement with AI Aid)

| ROI Category | Reader Type | Reported AUC Improvement | 95% CI | P-value |
|---|---|---|---|---|
| Nodule | Emergency Medicine Physicians | 0.136 | [0.107, 0.17] | < 0.001 |
| Nodule | Radiologists | 0.038 | [0.026, 0.052] | < 0.001 |
| Mediastinum/Hila Abnormality | Emergency Medicine Physicians | 0.158 | [0.14, 0.178] | < 0.001 |
| Mediastinum/Hila Abnormality | Radiologists | 0.057 | [0.039, 0.077] | < 0.001 |
| Consolidation | Emergency Medicine Physicians | 0.099 | [0.083, 0.116] | < 0.001 |
| Consolidation | Radiologists | 0.059 | [0.038, 0.079] | < 0.001 |
| Pleural Space Abnormality | Emergency Medicine Physicians | 0.127 | [0.078, 0.18] | < 0.001 |
| Pleural Space Abnormality | Radiologists | 0.034 | [0.019, 0.049] | < 0.001 |

(No numerical acceptance threshold was stated; the criterion was that reader AUC be "significantly improved" with AI aid.)

The acceptance criteria for the standalone performance are implied by the presentation of high AUC, sensitivity, and specificity metrics, suggesting that these values met an internal performance threshold deemed acceptable by the manufacturer and the FDA for market clearance. The MRMC study explicitly states that "Reader AUC estimates for both specialties significantly improved for all four categories (p-values < 0.001)," which serves as the acceptance criterion for the human-in-the-loop performance.
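The "95% Bootstrap CI" figures attached to each metric come from case-level resampling. A sketch of the nonparametric bootstrap, using a sensitivity metric and synthetic data; the replicate count and percentile method are assumptions, since the summary does not describe the exact resampling scheme.

```python
# Sketch of a nonparametric bootstrap CI: resample cases with replacement,
# recompute the metric on each replicate, and take the 2.5th/97.5th
# percentiles. Synthetic data; replicate count is an assumption.
import random

def sensitivity(preds, labels):
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    return tp / (tp + fn)

def bootstrap_ci(preds, labels, metric, n_boot=2000, seed=0):
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample cases with replacement
        stats.append(metric([preds[i] for i in idx], [labels[i] for i in idx]))
    stats.sort()
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]

# Synthetic reads: 30 positive and 30 negative cases, point sensitivity 24/30 = 0.8.
labels = [1] * 30 + [0] * 30
preds = [1] * 24 + [0] * 6 + [0] * 28 + [1] * 2
```

Resampling whole cases (rather than individual findings) preserves within-case correlation, which is why case-level bootstrap is the usual choice for reader and CADe studies.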

2. Sample Size Used for the Test Set and Data Provenance

  • Sample Size for Standalone Test Set: 3,884 chest radiograph cases.
  • Data Provenance (Standalone and MRMC): "representative of the intended use population." While the document does not explicitly state the country of origin or whether the data was retrospective or prospective, most such studies use retrospective data from diverse patient populations to represent real-world clinical scenarios. The use of "U.S. board-certified radiologists" for ground truth suggests U.S. data sources are likely.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

  • Number of Experts: A "panel of U.S. board-certified radiologists" was used. The exact number is not specified.
  • Qualifications of Experts: U.S. board-certified radiologists. No specific experience levels (e.g., "10 years of experience") are mentioned.

4. Adjudication Method for the Test Set

The document does not explicitly state the adjudication method (e.g., 2+1, 3+1). It only mentions that a "panel of U.S. board-certified radiologists" assessed the presence or absence of ROIs. This typically implies a consensus-based approach, but the specific mechanics (e.g., how disagreements were resolved) are not provided.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

  • Yes, an MRMC comparative effectiveness study was done.
  • Effect Size of Human Readers Improvement with AI vs. without AI Assistance (Difference in AUC):
    • Nodule Detection:
      • Emergency Medicine Physicians: 0.136 (95% CI [0.107, 0.17])
      • Radiologists: 0.038 (95% CI [0.026, 0.052])
    • Mediastinum/Hila Abnormality Detection:
      • Emergency Medicine Physicians: 0.158 (95% CI [0.14, 0.178])
      • Radiologists: 0.057 (95% CI [0.039, 0.077])
    • Consolidation Detection:
      • Emergency Medicine Physicians: 0.099 (95% CI [0.083, 0.116])
      • Radiologists: 0.059 (95% CI [0.038, 0.079])
    • Pleural Space Abnormality Detection:
      • Emergency Medicine Physicians: 0.127 (95% CI [0.078, 0.18])
      • Radiologists: 0.034 (95% CI [0.019, 0.049])

6. Standalone (Algorithm Only without Human-in-the-Loop Performance)

  • Yes, a standalone clinical performance study was done. The results are presented in Table 2 (AUC) and Table 3 (Specificity/Sensitivity).

7. Type of Ground Truth Used

  • Expert Consensus: The ground truth for both the standalone and MRMC studies was established by a "panel of U.S. board-certified radiologists" who assessed the presence or absence of ROIs.

8. Sample Size for the Training Set

The document does not specify the sample size used for the training set. It only mentions the "standalone clinical performance study on 3,884 chest radiograph cases representative of the intended use population" for testing.

9. How the Ground Truth for the Training Set Was Established

The document does not provide details on how the ground truth for the training set was established. It only describes the establishment of ground truth for the test set by a panel of U.S. board-certified radiologists.



February 27, 2025

Gleamer SAS
c/o Antoine Tournier, Chief Compliance Officer
47bis rue des Vinaigriers
75010 Paris, FRANCE

Re: K241620

Trade/Device Name: ChestView US
Regulation Number: 21 CFR 892.2070
Regulation Name: Medical Image Analyzer
Regulatory Class: Class II
Product Code: MYN
Dated: January 27, 2025
Received: January 28, 2025

Dear Antoine Tournier:

We have reviewed your section 510(k) premarket notification of intent to market the device referenced above and have determined the device is substantially equivalent (for the indications for use stated in the enclosure) to legally marketed predicate devices marketed in interstate commerce prior to May 28, 1976, the enactment date of the Medical Device Amendments, or to devices that have been reclassified in accordance with the provisions of the Federal Food, Drug, and Cosmetic Act (the Act) that do not require approval of a premarket approval application (PMA). You may, therefore, market the device, subject to the general controls provisions of the Act.

Although this letter refers to your product as a device, please be aware that some cleared products may instead be combination products. The 510(k) Premarket Notification Database available at https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm identifies combination product submissions.

The general controls provisions of the Act include requirements for annual registration, listing of devices, good manufacturing practice, labeling, and prohibitions against misbranding and adulteration. Please note: CDRH does not evaluate information related to contract liability warranties. We remind you, however, that device labeling must be truthful and not misleading.

If your device is classified (see above) into either class II (Special Controls) or class III (PMA), it may be subject to additional controls. Existing major regulations affecting your device can be found in the Code of Federal Regulations, Title 21, Parts 800 to 898. In addition, FDA may publish further announcements concerning your device in the Federal Register.


Additional information about changes that may require a new premarket notification are provided in the FDA guidance documents entitled "Deciding When to Submit a 510(k) for a Change to an Existing Device" (https://www.fda.gov/media/99812/download) and "Deciding When to Submit a 510(k) for a Software Change to an Existing Device" (https://www.fda.gov/media/99785/download).

Your device is also subject to, among other requirements, the Quality System (QS) regulation (21 CFR Part 820), which includes, but is not limited to, 21 CFR 820.30, Design controls; 21 CFR 820.90, Nonconforming product; and 21 CFR 820.100, Corrective and preventive action. Please note that regardless of whether a change requires premarket review, the QS regulation requires device manufacturers to review and approve changes to device design and production (21 CFR 820.30 and 21 CFR 820.70) and document changes and approvals in the device master record (21 CFR 820.181).

Please be advised that FDA's issuance of a substantial equivalence determination does not mean that FDA has made a determination that your device complies with other requirements of the Act or any Federal statutes and regulations administered by other Federal agencies. You must comply with all the Act's requirements, including, but not limited to: registration and listing (21 CFR Part 807); labeling (21 CFR Part 801); medical device reporting (reporting of medical device-related adverse events) (21 CFR Part 803) for devices or postmarketing safety reporting (21 CFR Part 4, Subpart B) for combination products (see https://www.fda.gov/combination-products/guidance-regulatory-information/postmarketing-safety-reportingcombination-products); good manufacturing practice requirements as set forth in the quality systems (QS) regulation (21 CFR Part 820) for devices or current good manufacturing practices (21 CFR Part 4, Subpart A) for combination products; and, if applicable, the electronic product radiation control provisions (Sections 531-542 of the Act); 21 CFR Parts 1000-1050.

All medical devices, including Class I and unclassified devices and combination product device constituent parts are required to be in compliance with the final Unique Device Identification System rule ("UDI Rule"). The UDI Rule requires, among other things, that a device bear a unique device identifier (UDI) on its label and package (21 CFR 801.20(a)) unless an exception or alternative applies (21 CFR 801.20(b)) and that the dates on the device label be formatted in accordance with 21 CFR 801.18. The UDI Rule (21 CFR 830.300(a) and 830.320(b)) also requires that certain information be submitted to the Global Unique Device Identification Database (GUDID) (21 CFR Part 830 Subpart E). For additional information on these requirements, please see the UDI System webpage at https://www.fda.gov/medical-device-advicecomprehensive-regulatory-assistance/unique-device-identification-system-udi-system.

Also, please note the regulation entitled, "Misbranding by reference to premarket notification" (21 CFR 807.97). For questions regarding the reporting of adverse events under the MDR regulation (21 CFR Part 803), please go to https://www.fda.gov/medical-device-safety/medical-device-reportingmdr-how-report-medical-device-problems.

For comprehensive regulatory information about medical devices and radiation-emitting products, including information about labeling regulations, please see Device Advice (https://www.fda.gov/medicaldevices/device-advice-comprehensive-regulatory-assistance) and CDRH Learn (https://www.fda.gov/training-and-continuing-education/cdrh-learn). Additionally, you may contact the Division of Industry and Consumer Education (DICE) to ask a question about a specific regulatory topic. See the DICE website (https://www.fda.gov/medical-device-advice-comprehensive-regulatory


assistance/contact-us-division-industry-and-consumer-education-dice) for more information or contact DICE by email (DICE@fda.hhs.gov) or phone (1-800-638-2041 or 301-796-7100).

Sincerely,

Lu Jiang

Lu Jiang, Ph.D.
Assistant Director
Diagnostic X-Ray Systems Team
DHT8B: Division of Radiological Imaging Devices and Electronic Products
OHT8: Office of Radiological Health
Office of Product Evaluation and Quality
Center for Devices and Radiological Health

Enclosure


Indications for Use

Submission Number (if known)

K241620

Device Name

ChestView US

Indications for Use (Describe)

ChestView US is a radiological Computer-Assisted Detection (CADe) software device that analyzes frontal and lateral chest radiographs of patients presenting with symptoms (e.g. dyspnea, cough, pain) or suspected for findings related to regions of interest (ROIs) in the lungs, airways, mediastinum/hila and pleural space. The device uses machine learning techniques to identify and produces boxes around the ROIs. The boxes are labeled with one of the following radiographic findings: Nodule, Pleural space abnormality, Mediastinum/Hila abnormality, and Consolidation.

ChestView is intended for use as a concurrent reading aid for radiologists and emergency medicine physicians. It does not replace the role of radiologists and emergency medicine physicians or of other diagnostic testing in the standard of care. ChestView is for prescription use only and is indicated for adults only.

Type of Use (Select one or both, as applicable)

Prescription Use (Part 21 CFR 801 Subpart D)

Over-The-Counter Use (21 CFR 801 Subpart C)

CONTINUE ON A SEPARATE PAGE IF NEEDED.

This section applies only to requirements of the Paperwork Reduction Act of 1995.

DO NOT SEND YOUR COMPLETED FORM TO THE PRA STAFF EMAIL ADDRESS BELOW.

The burden time for this collection of information is estimated to average 79 hours per response, including the time to review instructions, search existing data sources, gather and maintain the data needed and complete and review the collection of information. Send comments regarding this burden estimate or any other aspect of this information collection, including suggestions for reducing this burden, to:

Department of Health and Human Services
Food and Drug Administration
Office of Chief Information Officer
Paperwork Reduction Act (PRA) Staff
PRAStaff@fda.hhs.gov

"An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB number."


510(k) Summary
GLEAMER - ChestView US


Date prepared: January 27th, 2025

In accordance with 21 CFR 807.87(h) and 21 CFR 807.92 the 510(k) Summary for ChestView US is provided below.

1. Submitter

Submitter: GLEAMER SAS, 47bis, rue des Vinaigriers, 75010 Paris, FRANCE
Primary Contact Person: Antoine Tournier, Chief Compliance Officer
Tel: 0033 6 15 81 23 45
Email: antoine.tournier@gleamer.ai (alternate: qara@gleamer.ai)

2. Device

Trade Name: ChestView US
510(k) Reference: K241620
Common Name: Medical Image Analyzer
Classification Name: Medical Image Analyzer
Regulation: 21 CFR 892.2070
Product Code: MYN
Classification: Class II

3. Predicate Device

Predicate Device: Aorta-CAD
510(k) Reference: K213353

4. Device Description

ChestView US is a radiological Computer-Assisted Detection (CADe) software device intended to analyze frontal and lateral chest radiographs for suspicious regions of interest (ROIs): Nodule, Consolidation, Pleural Space Abnormality and Mediastinum/Hila Abnormality.

The nodule ROI category was developed from images with focal nonlinear opacity with a generally spherical shape situated in the pulmonary interstitium.

The consolidation ROI category was developed from images with an area of increased attenuation of lung parenchyma due to the replacement of air in the alveoli.

The pleural space abnormality ROI category was developed from images with:



  • Pleural Effusion that is an abnormal presence of fluid in the pleural space
  • Pneumothorax that is an abnormal presence of air or gas in the pleural space that separates the parietal and the visceral pleura

The mediastinum/hila abnormality ROI category was developed from images with enlargement of the mediastinum or the hilar region with a deformation of its contours.

ChestView US can be deployed in the cloud and connected to several computing platforms and X-ray imaging platforms such as radiographic systems, or PACS. More precisely, ChestView US can be deployed in the cloud connected to a DICOM Source/Destination with a DICOM Viewer, i.e. a PACS.

After the acquisition of the radiographs on the patient and their storage in the DICOM Source, the radiographs are automatically received by ChestView US from the user's DICOM Source through intermediate DICOM node(s) (for example, a specific Gateway, or a dedicated API). The DICOM Source can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems).

Once received by ChestView US, the radiographs are automatically processed by the AI algorithm to identify regions of interest. Based on the processing result, ChestView US generates result files in DICOM format. These result files consist of annotated images with boxes drawn around the regions of interest on a copy of all images (as an overlay). ChestView US does not alter the original images, nor does it change the order of original images or delete any image from the DICOM Source.

Once available, the result files are sent by ChestView US to the DICOM Destination through the same intermediate DICOM node(s). Similar to the DICOM Source, the DICOM Destination can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems). The DICOM Source and the DICOM Destination are not necessarily identical.

The DICOM Destination can be used to visualize the result files provided by ChestView US or to transfer the results to another DICOM host for visualization. The users use them as a concurrent reading aid when providing their diagnosis.

For each exam analyzed by ChestView US, a DICOM Secondary Capture is generated.

If any ROI is detected by ChestView US, the output DICOM image includes a copy of the original images of the study and the following information:

  • Above the images, a header with the text "CHESTVIEW ROI" and the list of the findings detected in the image.
  • Around the ROI(s), a bounding box with a solid or dotted line depending on the confidence of the algorithm, and the type of ROI written above the box:
    • Dotted-line Bounding Box: identified region of interest when the confidence degree of the AI algorithm associated with the possible finding is above the high-sensitivity operating point and below the high-specificity operating point; displayed as a dotted bounding box around the area of interest.
    • Solid-line Bounding Box: identified region of interest when the confidence degree of the AI algorithm associated with the finding is above the high-specificity operating point; displayed as a solid bounding box around the area of interest.
  • Below the images, a footer with:
    • The scope of ChestView US, so the user always has available the list of ROI types that are in the indications for use of the device, avoiding any risk of confusion or misinterpretation of the types of ROI detected by ChestView US.
    • The total number of regions of interest identified by ChestView US on the exam (sum of solid-line and dotted-line bounding boxes).

If no ROI is detected by ChestView US, the output DICOM image includes a copy of the original images of the study and the text "NO CHESTVIEW ROI", together with the scope of ChestView US, so the user always has available the list of ROI types that are in the indications for use of the device, avoiding any risk of confusion or misinterpretation of the types of ROI detected by ChestView US.

Finally, if ChestView US cannot process the exam, because it is outside the indications for use of the device or because information needed for processing is missing, the output DICOM image includes a copy of the original images of the study and, in a header, the text "OUT OF SCOPE" with a caution message explaining why no result was provided by the device.

5. Intended use/ Indications for Use

ChestView US is a radiological Computer-Assisted Detection (CADe) software device that analyzes frontal and lateral chest radiographs of patients presenting with symptoms (e.g. dyspnea, cough, pain) or suspected for findings related to regions of interest (ROIs) in the lungs, airways, mediastinum/hila and pleural space. The device uses machine learning techniques to identify and produces boxes around the ROIs. The boxes are labeled with one of the following radiographic findings: Nodule, Pleural space abnormality, Mediastinum/Hila abnormality, and Consolidation.

ChestView US is intended for use as a concurrent reading aid for radiologists and emergency medicine physicians. It does not replace the role of radiologists and emergency medicine physicians or of other diagnostic testing in the standard of care. ChestView US is for prescription use only and is indicated for adults only.

6. Substantial equivalence

The predicate device for ChestView US is Aorta-CAD (K213353). Aorta-CAD has the following FDA-cleared Indications for Use:

Aorta-CAD is a computer-assisted detection (CADe) software device that analyzes chest radiograph studies for suspicious regions of interest (ROIs). The device uses a deep learning algorithm to identify ROIs and produces boxes around the ROIs. The boxes are labeled with one of the following radiographic findings: Aortic calcification or Dilated aorta. Aorta-CAD is intended for use as a concurrent reading aid for physicians looking for ROIs with radiographic findings suggestive of Aortic Atherosclerosis or Aortic Ectasia. It does not replace the role of the physician or of other diagnostic testing in the standard of care. Aorta-CAD is indicated for adults only.

Table 1 provides a comparison of the Indications for Use and Technological Characteristics of ChestView US to the predicate Aorta-CAD.


| Features and Characteristics | Subject Device: Gleamer ChestView US | Predicate Device: Imagen Inc. Aorta-CAD (K213353) |
|---|---|---|
| Regulation Information | | |
| Classification regulation | 21 CFR 892.2070 – Medical image analyzer | Same |
| Product Code | MYN | Same |
| Regulation Description | Medical image analyzers, including computer-assisted/aided detection (CADe) devices for mammography breast cancer, ultrasound breast lesions, radiograph lung nodules, and radiograph dental caries detection, is a prescription device that is intended to identify, mark, highlight, or in any other manner direct the clinicians' attention to portions of a radiology image that may reveal abnormalities during interpretation of patient radiology images by the clinicians. This device incorporates pattern recognition and data analysis capabilities and operates on previously acquired medical images. | Same |
| Indications for Use | | |
| Image Modality | X-ray | Same |
| Study type | Chest | Same |
| Clinical Output | When regions of interest (ROIs) are detected: identify and mark ROIs on chest radiographs and label the box around the ROI as one of the following: Nodule, Pleural space abnormality, Mediastinum/Hila abnormality, and Consolidation. When no ROI is detected: mark the chest radiographs as "NO CHESTVIEW ROI" and provide the scope of the indications for use of ChestView US. | When regions of interest (ROIs) are detected: identify and mark ROIs on chest radiographs and label the box around the ROI as one of the following: Aortic calcification or Dilated aorta. When no ROI is detected: mark the chest radiographs as "No Aorta-CAD ROI(s)". |
| Intended Users | Radiologists/emergency medicine physicians | Physicians |
| Intended User Workflow | Device intended for use as a concurrent reading aid for users interpreting chest radiographs | Same |
| Patient population | Adults with chest radiographs | Same |
| Technological Information | | |
| Machine Learning Methodology | Supervised deep learning | Same |
| Deployment Platform | Secure cloud-based processing and delivery of chest radiographs | Same |
| Image Source | Digital X-ray | Same |
| Image Viewing | Image displayed on PACS system | Same |


Table 1: Indications for Use and Technological Comparison between ChestView US and Aorta-CAD

As outlined in the table above, ChestView US shares similar technological features with Aorta-CAD. ChestView US and Aorta-CAD both analyze chest radiographs and both detect, identify and categorize ROIs. ChestView US and Aorta-CAD are indicated for use as a concurrent reading aid. Both devices are intended as an aid to the user and not intended to replace the role of the user or of other diagnostic testing in the standard of care.

ChestView US differs from Aorta-CAD in that ChestView US identifies and categorizes ROIs as one of four categories (Nodule, Pleural space abnormality, Mediastinum/Hila abnormality, and Consolidation) compared to Aorta-CAD which identifies and categorizes ROIs as one of two categories (Aortic calcification or Dilated aorta). Despite these differences in application, the primary purpose of both devices is to identify ROIs on chest X-rays for further consideration by the intended users. The differences in indications for use do not constitute a new intended use (as both devices are intended to assist physicians by identifying and marking ROIs in chest radiographs) and do not raise different questions of safety and effectiveness. Thus, ChestView US is considered substantially equivalent to its predicate device, Aorta-CAD.


7. Performance data

7.1. Software Verification and Validation Testing

Product verification and validation testing were conducted and documented per the requirements of the FDA guidance "Content of Premarket Submissions for Device Software Functions" for a Basic Documentation Level. Nonclinical tests included unit-, integration-, and system-level testing of the final software version. ChestView US performed as intended, and all observed results were as expected. All software requirements and the risk analysis were successfully verified and traced.

7.2. Bench testing

Gleamer conducted a standalone clinical performance study on 3,884 chest radiograph cases representative of the intended use population. The results of the standalone testing demonstrated that ChestView US detects ROIs with a high Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve, high sensitivity, and high specificity:

| ROIs | AUC | 95% Bootstrap CI |
|---|---|---|
| NODULE | 0.93 | [0.921; 0.938] |
| MEDIASTINUM/HILA ABNORMALITY | 0.922 | [0.91; 0.934] |
| CONSOLIDATION | 0.952 | [0.947; 0.957] |
| PLEURAL SPACE ABNORMALITY | 0.973 | [0.97; 0.975] |

Table 2: AUC of the ROC Curve for ChestView US Model Predictions by ROI
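The 95% bootstrap confidence intervals reported above can be estimated by resampling cases with replacement and recomputing the AUC on each resample. The following is a minimal, generic sketch of that technique, not the study's actual analysis code, using hypothetical labels and scores:

```python
import numpy as np

def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a negative one."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return float((greater + 0.5 * ties) / (len(pos) * len(neg)))

def bootstrap_auc_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample whole cases with replacement."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        l, s = labels[idx], scores[idx]
        if l.min() == l.max():  # resample must contain both classes
            continue
        stats.append(auc(l, s))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Hypothetical example (4 cases, not the study dataset)
labels = np.array([0, 0, 1, 1])
scores = np.array([0.10, 0.40, 0.35, 0.80])
print(auc(labels, scores))  # 0.75
```

A percentile bootstrap is one common way to obtain such intervals; the exact interval construction used in the study is not detailed in this summary.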

Table 3: Specificity/Sensitivity (with 95% Bootstrap CI, at both operating points) of ChestView US at the exam/patient level on the standalone dataset

| ROIs | High-sensitivity point (POSSIBLE FINDINGS): Specificity [95% CI] | High-sensitivity point: Sensitivity [95% CI] | High-specificity point (FINDINGS): Specificity [95% CI] | High-specificity point: Sensitivity [95% CI] |
|---|---|---|---|---|
| NODULE | 0.956 [0.948; 0.963] | 0.829 [0.801; 0.86] | 0.994 [0.99; 0.996] | 0.482 [0.455; 0.518] |
| MEDIASTINUM/HILA ABNORMALITY | 0.975 [0.971; 0.98] | 0.793 [0.739; 0.832] | 0.992 [0.99; 0.994] | 0.535 [0.475; 0.592] |
| CONSOLIDATION | 0.946 [0.938; 0.952] | 0.853 [0.822; 0.879] | 0.985 [0.981; 0.989] | 0.61 [0.583; 0.643] |
| PLEURAL SPACE ABNORMALITY | 0.965 [0.958; 0.971] | 0.892 [0.87; 0.911] | 0.975 [0.97; 0.981] | 0.87 [0.85; 0.896] |
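The two operating points correspond to two different thresholds applied to the model's continuous output score: a lower threshold yields the high-sensitivity point and a higher threshold the high-specificity point. A minimal sketch of how sensitivity and specificity fall out of a chosen threshold, on hypothetical data (the device's actual thresholds and scores are not public):

```python
import numpy as np

def sens_spec_at(labels, scores, threshold):
    """Sensitivity and specificity when flagging cases with score >= threshold."""
    flagged = scores >= threshold
    sens = (flagged & (labels == 1)).sum() / (labels == 1).sum()
    spec = (~flagged & (labels == 0)).sum() / (labels == 0).sum()
    return float(sens), float(spec)

# Hypothetical ground truth and model scores for 8 cases
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
scores = np.array([0.1, 0.2, 0.3, 0.6, 0.4, 0.7, 0.8, 0.9])

# Lower threshold: catches more true positives at the cost of specificity
print(sens_spec_at(labels, scores, 0.35))  # (1.0, 0.75)
# Higher threshold: fewer false positives at the cost of sensitivity
print(sens_spec_at(labels, scores, 0.65))  # (0.75, 1.0)
```

Sweeping the threshold over all score values traces out the ROC curve whose AUC is reported in Table 2.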

7.3. Clinical Testing

Gleamer conducted a fully-crossed multiple reader, multiple case (MRMC) retrospective reader study to determine the impact of ChestView US on reader performance in detecting Regions of Interest from the ChestView US's indications for use on chest radiograph cases.


The primary objective of this study was to determine whether the accuracy of readers (radiologists and emergency medicine physicians) aided by ChestView US ("Aided") was superior to the accuracy of readers when unaided by ChestView US ("Unaided") as determined by the case-level Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve. The sensitivity for each of the four categories of ROI detected by ChestView US was also reported at the case-level and ROI level.

Clinical readers each evaluated 240 cases in ChestView US's Indications for Use under both Aided and Unaided conditions. Each case was previously evaluated by a panel of U.S. board-certified radiologists who assessed the presence or absence of ROI from the indications for use of ChestView US.

The MRMC study consisted of two independent reading sessions separated by a washout period of at least 1 month in order to avoid memory bias.

For each case, each reader was required to determine whether there was at least one ROI from the indications for use of ChestView US in the exam and to provide a confidence score representing their certainty. The accuracy of both reader specialties, radiologists and emergency medicine physicians, in the intended use population was superior when aided by ChestView US than when unaided, for each category, as calculated with the 95% Bootstrap confidence interval modeling approach.

The results of the study found that the diagnostic accuracy of readers in the intended use population was superior when aided by ChestView US than when unaided by ChestView US.

The results of the clinical study are shown in Figure 1 to Figure 4 below.

Figure 1: Reader Study Results - Aided and Unaided ROC Curves for Pleural space abnormality

Figure 2: Reader Study Results - Aided and Unaided ROC Curves for Consolidation

[ROC plots of sensitivity vs. 1 - specificity for the standalone AI curve and for radiologists and emergency medicine physicians, with and without AI]


Figure 3: Reader Study Results - Aided and Unaided ROC Curves for Mediastinum/hila abnormality

Figure 4: Reader Study Results - Aided and Unaided ROC Curves for Nodule

[ROC plots of sensitivity vs. 1 - specificity for the standalone AI curve (with its operating point marked) and for radiologists and emergency medicine physicians, with and without AI]

In particular, the clinical study results demonstrated improvements when Aided versus Unaided:

  • Reader AUC estimates for both specialties significantly improved for all four categories (p-values < 0.001).
  • More precisely:
    • For Nodule Detection, the difference in AUC at the exam/patient level between aided and unaided reads was 0.136 (95% CI [0.107, 0.17]) for emergency medicine physicians and 0.038 (95% CI [0.026, 0.052]) for radiologists.
    • For Mediastinum/Hila Abnormality Detection, the difference in AUC at the exam/patient level between aided and unaided reads was 0.158 (95% CI [0.14, 0.178]) for emergency medicine physicians and 0.057 (95% CI [0.039, 0.077]) for radiologists.
    • For Consolidation Detection, the difference in AUC at the exam/patient level between aided and unaided reads was 0.099 (95% CI [0.083, 0.116]) for emergency medicine physicians and 0.059 (95% CI [0.038, 0.079]) for radiologists.
    • For Pleural Space Abnormality Detection, the difference in AUC at the exam/patient level between aided and unaided reads was 0.127 (95% CI [0.078, 0.18]) for emergency medicine physicians and 0.034 (95% CI [0.019, 0.049]) for radiologists.
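The aided-minus-unaided AUC differences above, each with a bootstrap confidence interval, can be illustrated with a simple per-case percentile bootstrap that resamples cases while keeping each case's aided and unaided scores paired. This is only a generic sketch on synthetic data; a full MRMC analysis would additionally model reader variability (e.g., Obuchowski-Rockette methods), which is beyond this snippet:

```python
import numpy as np

def _auc(labels, scores):
    """Mann-Whitney AUC (ties count half)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

def delta_auc_ci(labels, aided, unaided, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for AUC(aided) - AUC(unaided).
    Cases are resampled as units so the aided/unaided pairing is kept."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    deltas = []
    while len(deltas) < n_boot:
        idx = rng.integers(0, n, n)
        y = labels[idx]
        if y.min() == y.max():  # need both classes in the resample
            continue
        deltas.append(_auc(y, aided[idx]) - _auc(y, unaided[idx]))
    lo, hi = np.percentile(deltas, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Synthetic demo: aided scores separate the classes better than unaided
rng = np.random.default_rng(1)
labels = np.array([0] * 50 + [1] * 50)
unaided = labels + rng.normal(0.0, 1.0, 100)
aided = 2 * labels + rng.normal(0.0, 1.0, 100)
print(delta_auc_ci(labels, aided, unaided, n_boot=300))
```

If the interval for the difference excludes zero, the aided reads are superior at that confidence level, which is the form of evidence the bullet points above report.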

8. Conclusion

The conclusions drawn from the standalone and clinical studies demonstrate that ChestView US is as safe, as effective, and performs as well as Aorta-CAD. The special controls for the Medical Image Analyzer (CADe) 21 CFR 892.2070 regulation are satisfied by demonstrating effectiveness of the device in both the standalone testing and the clinical testing, showing superiority of Aided reads in the clinical testing, and communicating testing results in the labeling. ChestView US's technological characteristics, including but not limited to the intended end-users, imaging modality, output display on X-ray studies, and assistive functionality during chest radiograph interpretation workflows, are similar to those of Aorta-CAD. Thus, ChestView US is substantially equivalent to Aorta-CAD for the intended use of computer-assisted detection.

§ 892.2070 Medical image analyzer.

(a) Identification. Medical image analyzers, including computer-assisted/aided detection (CADe) devices for mammography breast cancer, ultrasound breast lesions, radiograph lung nodules, and radiograph dental caries detection, is a prescription device that is intended to identify, mark, highlight, or in any other manner direct the clinicians' attention to portions of a radiology image that may reveal abnormalities during interpretation of patient radiology images by the clinicians. This device incorporates pattern recognition and data analysis capabilities and operates on previously acquired medical images. This device is not intended to replace the review by a qualified radiologist, and is not intended to be used for triage, or to recommend diagnosis.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Design verification and validation must include:

(i) A detailed description of the image analysis algorithms including a description of the algorithm inputs and outputs, each major component or block, and algorithm limitations.

(ii) A detailed description of pre-specified performance testing methods and dataset(s) used to assess whether the device will improve reader performance as intended and to characterize the standalone device performance. Performance testing includes one or more standalone tests, side-by-side comparisons, or a reader study, as applicable.

(iii) Results from performance testing that demonstrate that the device improves reader performance in the intended use population when used in accordance with the instructions for use. The performance assessment must be based on appropriate diagnostic accuracy measures (e.g., receiver operator characteristic plot, sensitivity, specificity, predictive value, and diagnostic likelihood ratio). The test dataset must contain a sufficient number of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, concomitant diseases, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals of the device for these individual subsets can be characterized for the intended use population and imaging equipment.

(iv) Appropriate software documentation (e.g., device hazard analysis; software requirements specification document; software design specification document; traceability analysis; description of verification and validation activities including system level test protocol, pass/fail criteria, and results; and cybersecurity).

(2) Labeling must include the following:

(i) A detailed description of the patient population for which the device is indicated for use.

(ii) A detailed description of the intended reading protocol.

(iii) A detailed description of the intended user and user training that addresses appropriate reading protocols for the device.

(iv) A detailed description of the device inputs and outputs.

(v) A detailed description of compatible imaging hardware and imaging protocols.

(vi) Discussion of warnings, precautions, and limitations must include situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality or for certain subpopulations), as applicable.

(vii) Device operating instructions.

(viii) A detailed summary of the performance testing, including: test methods, dataset characteristics, results, and a summary of sub-analyses on case distributions stratified by relevant confounders, such as lesion and organ characteristics, disease stages, and imaging equipment.