Detection of breast lesions using an anchorless network from ultrasound images with segmentation-based enhancement


We evaluated the performance of our breast lesion detection system on several datasets and compared it with a range of enhancement methods and detection networks. The performance metrics and experimental results are described below.

Overview of Datasets and Breast Lesion Detection System

Datasets

In this study, we used three public datasets, namely the Breast Ultrasound dataset (BUS)21, the Breast Ultrasound Image Dataset (BUSI)22, and the Breast Ultrasound Image Segmentation Dataset (BUSIS)23. BUS was collected at the UDIAT Diagnostic Center of the Parc Taulí Corporation, Sabadell, Spain, and contains 163 breast ultrasound images, of which 109 are benign and 54 are malignant. BUSI was collected at Baheya Hospital for Early Detection and Treatment of Women's Cancer, Cairo, Egypt, from 600 patients aged 25 to 75 years; it contains 437 benign images, 210 malignant images, and 133 normal breast images, for a total of 780 breast ultrasound images. BUSIS was collected from the Second Affiliated Hospital of Harbin Medical University, the Affiliated Hospital of Qingdao University, and the Second Hospital of Hebei Medical University, and contains 562 images of women between 26 and 78 years old. These datasets may contain multiple images from the same patient. Specific information about the datasets is presented in Table 1. In terms of image labels, BUS and BUSI include lesion shape labels as well as benign/malignant classification labels (as shown in Fig. 2a,b), while BUSIS contains only lesion shape labels. In this study, we used BUSIS for image preprocessing and BUS and BUSI for breast lesion detection.

Figure 2

(a) Original ultrasound images; (b) ground truth as a binary mask, where the yellow dots mark the upper-left and lower-right corners of the ground truth; (c) the bounding box constructed from the yellow dots.

Labels

The task of breast lesion detection is to identify and localize lesions: identification classifies lesions as benign or malignant, and localization provides the position of the lesion area. The BUS and BUSI datasets provide lesion category labels but no coordinate information for the lesions. We therefore propose a method to derive lesion coordinates from the lesion shape labels. As shown in Fig. 2b, we loop through all non-zero pixels of the binary mask and find the smallest and largest horizontal and vertical coordinates $x_{\min}$, $x_{\max}$, $y_{\min}$, $y_{\max}$ among them. From these we obtain the upper-left point $p_{ul} = (x_{\min}, y_{\min})$ and the lower-right point $p_{lr} = (x_{\max}, y_{\max})$ of the lesion area; the width of the lesion area is $w = x_{\max} - x_{\min}$ and its height is $h = y_{\max} - y_{\min}$. We can then construct a bounding box for the lesion (Fig. 2c). Finally, we use the five pieces of information $p_{ul}$, $p_{lr}$, $w$, $h$, and the lesion category as the label for breast lesion detection. Because the BUSIS dataset does not indicate lesion categories, it cannot serve as breast lesion detection data; we instead use BUSIS in the image preprocessing step, as detailed in the next section.
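This procedure reduces to a few lines of NumPy. The following is a minimal sketch (the function name is ours, not taken from the original implementation):

```python
import numpy as np

def mask_to_bbox(mask):
    """Derive a lesion bounding box from a binary shape label.

    mask: 2D array in which non-zero pixels belong to the lesion.
    Returns the corner points and the box width and height.
    """
    ys, xs = np.nonzero(mask)            # coordinates of all non-zero pixels
    x_min, x_max = xs.min(), xs.max()    # smallest/largest horizontal coordinate
    y_min, y_max = ys.min(), ys.max()    # smallest/largest vertical coordinate
    p_ul = (x_min, y_min)                # upper-left corner
    p_lr = (x_max, y_max)                # lower-right corner
    return p_ul, p_lr, x_max - x_min, y_max - y_min
```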

Table 1 A comparison of BUS, BUSI and BUSIS.

Breast Lesion Detection System Overview

Our system consists of two parts: image preprocessing and breast lesion detection. First, in the image preprocessing part, we use a new image enhancement method called segmentation-based enhancement (SBE): a deep learning model segments the breast lesion region, and the segmented image is multiplied with the original image to obtain an enhanced image. Second, we feed the enhanced image into an anchorless object detection network, the fully convolutional one-stage object detector (FCOS)24, to detect the breast lesion.
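In essence, the SBE step is an element-wise product between the original image and the segmentation output. A minimal sketch, assuming the segmentation network produces a per-pixel probability map in [0, 1] (the function name is ours):

```python
import numpy as np

def segmentation_based_enhancement(image, seg_prob):
    """Multiply the original ultrasound image by the segmentation output.

    image: grayscale ultrasound image, shape (H, W), uint8 values in [0, 255].
    seg_prob: predicted lesion probability map, shape (H, W), values in [0, 1].
    Returns an image in which the lesion region is emphasized and the
    surrounding tissue is suppressed.
    """
    enhanced = image.astype(np.float32) * seg_prob
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```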

Performance indicators

We used precision, recall, and mean average precision (mAP) as performance metrics in our experiments. The calculation of precision, recall, and mAP depends on the following parameters.

  • IoU In medical image analysis, the intersection over union (IoU) is also known as the Jaccard similarity index or simply the Jaccard index. It is defined by:

    $$\text{IoU} = \dfrac{\text{Area of overlap}}{\text{Area of union}}.$$

    (1)

    Here, the area of overlap is the area where the predicted bounding box (BBox) overlaps the ground-truth BBox, and the area of union is the union of the predicted BBox and the ground-truth BBox. Using the IoU as a criterion, for each class we can calculate the following parameters (see the code sketch after this list):

  • Confidence The predicted probability of each class.

  • True positives (TP) Predicted BBoxes with $\text{IoU} > 0.5$ that meet the confidence threshold of the category.

  • False positives (FP) Predicted BBoxes with $\text{IoU} \le 0.5$ that meet the confidence threshold of the category.

  • False negatives (FN) Ground-truth boxes with $\text{IoU} = 0$, i.e., lesions for which no prediction overlaps.
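For reference, Eq. (1) translates directly into code. A minimal sketch for axis-aligned boxes in $(x_{\min}, y_{\min}, x_{\max}, y_{\max})$ format (the helper name is ours):

```python
def iou(box_a, box_b):
    """Compute IoU between two boxes given as (x_min, y_min, x_max, y_max)."""
    # Intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)   # zero if the boxes do not overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```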

According to the above parameters, we have

$$\text{Precision} = \dfrac{\text{TP}}{\text{TP} + \text{FP}},$$

(2)

$$\text{Recall} = \dfrac{\text{TP}}{\text{TP} + \text{FN}}.$$

(3)

By sweeping the category confidence threshold, we can obtain the precision-recall (PR) curve. Average precision (AP) is the area under the PR curve, and mAP is the average of the APs over all categories. We have

$$\text{mAP} = \dfrac{\sum_{c=1}^{N}\text{AP}_c}{N},$$

(4)

where $N$ is the total number of classes.
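As a sketch of how Eqs. (2)-(4) combine in practice, the per-class AP can be obtained by numerically integrating the PR curve; the arrays below would come from sweeping the confidence threshold as described above (helper names are ours):

```python
import numpy as np

def average_precision(recall, precision):
    """Area under the PR curve via trapezoidal integration.

    recall, precision: 1D arrays sampled at successive confidence
    thresholds, with recall sorted in ascending order.
    """
    return float(np.trapz(precision, recall))

def mean_average_precision(ap_per_class):
    """mAP as in Eq. (4): the mean of the per-class AP values."""
    return sum(ap_per_class) / len(ap_per_class)
```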

Table 2 Comparison of experimental results with enhancement using SBE (proposed), Attention U-Net and R2U-Net.

Results

Comparison of experimental results with different image enhancement methods

We compared different enhancement methods (our proposed SBE; the recurrent residual convolutional neural network based on U-Net (R2U-Net)25; Attention U-Net26; and the traditional method contrast-limited adaptive histogram equalization (CLAHE)27), testing them on the individual datasets and on a combined dataset (BUS+BUSI). The experimental results are shown in Tables 2 and 3, and the PR curves are shown in Fig. 5. Our method achieved the best mAP in 8 of the 9 sets of comparative experiments. In terms of malignant lesion recall (M-Recall), we obtained the best results in all cases. Note that the boundary of malignant tumors is usually irregular and the contrast between malignant tumors and normal tissue is weak, so malignant tumors are not easy to detect. With our proposed SBE, however, the contrast is greatly improved, which facilitates the detection of malignant tumors. Example detection results are shown in Fig. 3. We also found that during SBE, some breast lesions were not segmented (Fig. 4b) and some incorrect segmentations occurred (Fig. 4f,j); nevertheless, our method still correctly detected the lesion areas, as shown in Fig. 4, demonstrating robust detection performance. Finally, for visualization, predicted benign tumors are outlined with a green box and predicted malignant tumors with a red box.
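For context, the CLAHE baseline is available directly in OpenCV; a typical invocation looks as follows (the file path is a placeholder, and the clip limit and tile size are common defaults rather than the exact settings of Ref.27):

```python
import cv2

# Placeholder path; any grayscale breast ultrasound image works here.
image = cv2.imread("breast_us.png", cv2.IMREAD_GRAYSCALE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(image)
```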

Figure 3

(a,b) Results of detected benign lesions, including multiple lesions; (c,d) results of detected malignant lesions.

Figure 4

(a,e,i) Original images. (b) The lesion area was not segmented; (f,j) the lesion area was segmented incorrectly. (c,g,k) Results after SBE. (d,h,l) FCOS detection results.

Figure 5

(a,b) PR curves for the BUS+BUSI dataset; (c,d) PR curves for the BUSI dataset; (e,f) PR curves for the BUS dataset.

Table 3 Comparison of breast lesion detection results using different enhancement methods.

Comparison of experimental results with different detection networks

To further verify the performance of our proposed method (i.e., FCOS combined with SBE), we compared it with the ultrasound breast cancer detection method proposed by Mo et al.28 in 2020. That method used YOLO V3 as the detection network and made two changes to the original YOLO V3. First, Ref.28 adopted the K-Means++ and K-Medoids algorithms in place of the original K-Means algorithm to set the anchor sizes. Second, the residual structure of the original YOLO V3 was modified into a new residual network based on ResNet and DenseNet29. We implemented the method of Ref.28 on our datasets, obtained three sets of anchor sizes via K-Means++ and K-Medoids, and named the network with modified anchor sizes YOLO V3-anchor. The three sets of anchor sizes are (34, 45), (40, 45), (40, 54), (60, 80), (66, 109), (88, 99), (90, 99), (94, 217), (164, 220) for BUS+BUSI; (25, 50), (35, 69), (76, 62), (89, 128), (95, 100), (107, 192), (164, 220), (187, 341), (196, 208) for BUSI; and (26, 27), (29, 59), (31, 78), (40, 54), (48, 57), (60, 80), (62, 134), (162, 134), (201, 361) for BUS. We also reproduced the new residual structure according to Ref.28 and named it YOLO V3-res. The experimental results are shown in Table 4. Our method is not the best in every case; however, as Table 4 shows, it achieves the best precision and recall for malignant lesion detection and, more importantly, the best result on the mAP performance measure. A sketch of the anchor-size clustering step is given below.
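A minimal sketch of that clustering step with scikit-learn's KMeans (k-means++ initialization; the K-Medoids refinement of Ref.28 is omitted, and note that YOLO-style anchor clustering often uses an IoU-based distance rather than the plain Euclidean distance used here):

```python
import numpy as np
from sklearn.cluster import KMeans

def anchor_sizes(box_whs, k=9):
    """Cluster (width, height) pairs of ground-truth boxes into k anchor sizes.

    box_whs: array of shape (n, 2), one (w, h) row per labeled lesion.
    """
    km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0)
    km.fit(box_whs)
    anchors = np.round(km.cluster_centers_).astype(int)
    # Sort by anchor area, matching the ascending order reported above.
    return anchors[np.argsort(anchors.prod(axis=1))]
```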

Table 4 Comparison of breast cancer detection results between our method and Ref.28.