Fig. 2 | BMC Pulmonary Medicine


From: Deep learning diagnostic and severity-stratification for interstitial lung diseases and chronic obstructive pulmonary disease in digital lung auscultations and ultrasonography: clinical protocol for an observational case–control study


Overview of the DeepBreath binary classification model. Top to bottom: Data collection. Every patient will have 10 lung audio recordings, one per anatomical site (LAS, RAS: Left and Right Anterior Superior; LAI, RAI: Left and Right Anterior Inferior; LPS, RPS: Left and Right Posterior Superior; LPI, RPI: Left and Right Posterior Inferior; Left and Right Lateral [not shown in the figure]). Pre-processing. A band-pass filter is applied to clips before transformation to log-mel spectrograms, which are batch-normalised, augmented, and then fed into an audio classifier. Here, a CNN outputs both segment-level predictions and attention values, which are aggregated into a single clip-wise output for each site. These clip-wise outputs are concatenated into a feature vector for every patient, which is evaluated by a logistic regression. Finally, patient-level classification is performed by thresholding to obtain a binary output. The segment-wise outputs of the audio classifier are extracted for further analysis. Used with permission from Heitmann et al. (https://doi.org/10.1038/s41746-023-00838-3, npj Digital Medicine) (Swiss Federal Institute of Technology EPFL, Lausanne, Switzerland)
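The aggregation steps described in the caption (attention-weighted pooling of segment predictions into a clip-wise score per site, concatenation across sites, then logistic regression and thresholding) can be sketched as follows. This is a minimal illustration with hypothetical segment scores and logistic-regression weights, not the authors' implementation; the number of segments per clip and the regression parameters here are arbitrary placeholders.

```python
import numpy as np

def clip_prediction(segment_probs, attention_logits):
    """Aggregate segment-level predictions into one clip-wise score:
    a softmax over the attention logits weights each segment's output."""
    w = np.exp(attention_logits - attention_logits.max())  # stable softmax
    w /= w.sum()
    return float(np.dot(w, segment_probs))

def patient_probability(clip_scores, weights, bias):
    """Logistic regression over the concatenated per-site feature
    vector; thresholding this probability gives the binary label."""
    z = np.dot(weights, clip_scores) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical example: 10 sites, each clip split into 5 segments.
rng = np.random.default_rng(0)
clip_scores = np.array([
    clip_prediction(rng.random(5), rng.standard_normal(5))
    for _ in range(10)
])
prob = patient_probability(clip_scores, weights=np.ones(10), bias=-5.0)
label = int(prob >= 0.5)  # patient-level binary output
```

With uniform attention logits the clip-wise score reduces to the plain mean of the segment predictions, which is a quick sanity check on the pooling step.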
