A KL Divergence-Based Loss for In Vivo Ultrafast Ultrasound Image
Enhancement with Deep Learning
Abstract
Ultrafast ultrasound (US) imaging is a pioneering imaging modality that
achieves higher frame rates than traditional US imaging, enabling the
visualization and analysis of fast dynamics in tissues and flows.
Nevertheless, images produced by this technique suffer from low
quality. Recently, convolutional neural networks (CNNs) have
demonstrated great potential for reducing image artifacts and recovering
speckle patterns without compromising the frame rate. As yet, CNNs have
been mostly trained on large datasets of simulated or in vitro phantom
images, but their performance on in vivo images remains suboptimal. In
the current study, we present a method to enhance the image quality of
single unfocused acquisitions by relying on a CNN. We introduce a
training loss function that accounts for the high dynamic range of the
radio frequency data and uses the Kullback–Leibler (KL) divergence to
preserve the probability distributions of the echogenicity values. We
conduct an extensive performance analysis of our approach using a new
large in vivo dataset of 20,000 images. The predicted images are
compared qualitatively to the target images obtained from the coherent
compounding of 87 plane waves (PW). The structural similarity index
measure, peak signal-to-noise ratio, and KL divergence are used to
quantitatively analyze the performance of our method. Our results
demonstrate significant improvements in the image quality of single-PW
acquisitions, substantially reducing artifacts.
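To illustrate the idea of comparing echogenicity distributions with the KL divergence, the following is a minimal sketch, not the paper's exact loss: it histograms log-compressed (B-mode) pixel values of a predicted and a target image and computes the KL divergence between the two empirical distributions. The function names, bin count, and dB range are illustrative assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(p || q) between two discrete distributions.

    A small epsilon is added before normalization to avoid log(0)
    when a histogram bin is empty.
    """
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))

def echogenicity_kl_loss(pred_bmode, target_bmode,
                         n_bins=64, db_range=(-60.0, 0.0)):
    """Illustrative loss term: KL divergence between the empirical
    distributions of echogenicity (log-compressed, in dB) values of
    the predicted and target images. Bin count and dynamic range are
    assumptions, not values from the paper."""
    bins = np.linspace(db_range[0], db_range[1], n_bins + 1)
    target_hist, _ = np.histogram(target_bmode, bins=bins)
    pred_hist, _ = np.histogram(pred_bmode, bins=bins)
    # D(target || prediction): penalize the prediction for failing
    # to cover echogenicity values present in the target.
    return kl_divergence(target_hist.astype(float),
                         pred_hist.astype(float))
```

In a training setting, a differentiable analogue (e.g. soft histograms) would be needed for backpropagation; the NumPy version above only demonstrates the metric itself, where identical images yield a loss of zero and distribution shifts yield a positive value.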