Research Article

Ship detection and classification based on cascaded detection of hull and wake from optical satellite remote sensing imagery

Article: 2196159 | Received 21 Nov 2022, Accepted 22 Mar 2023, Published online: 06 Apr 2023

ABSTRACT

Satellite remote sensing provides a cost- and time-effective tool for ship monitoring at sea. Most existing approaches focus on extracting ship locations using either the hull or the wake. In this paper, a method of cascaded detection of ship hull and wake is proposed to locate and classify ships using high-resolution satellite imagery. Candidate hulls were rapidly located through the phase spectrum of the Fourier transform. A hull refining module was then executed to acquire accurate shapes of the candidate hulls. False alarms were removed through the shape features and textures of candidate hulls. The probability that a candidate hull was determined as a real one increased with the presence of wakes. After true ships were determined, ship classification was conducted using a fuzzy classifier combining both hull and wake information. The proposed method was applied to Gaofen-1 panchromatic and multispectral (PMS) imagery and showed good performance for ship detection, with recall, precision, overall accuracy, and specificity of 90.1%, 88.1%, 98.8%, and 99.3%, respectively, better than other state-of-the-art coarse-to-fine ship detection methods. Ship classification was successfully achieved for ships with detected wakes. The proportion of correct classification was 83.8% while that of false classification was 1.0%. Factors influencing the accuracy of the developed method, including the combination of texture features and classifiers and key parameters of the method, are also discussed.

1. Introduction

Maritime transport is the backbone of international trade and the global economy and involves various types of ships. Taking the world merchant fleet as an example, the number of registered ships amounted to 116,857 in 2018 (EMSA Citation2018). As an important task for maritime security, fishery management, vessel salvage, and transportation surveillance, ship monitoring has drawn a lot of attention. Although the automatic identification system (AIS) allows the identification and tracking of ships, quite a few ships may switch off AIS or offer false information. At present, there are no freely available AIS data, whereas commercial AIS data are very expensive. In view of its coverage over large spatial and temporal scales, satellite remote sensing provides an effective and economical tool for ship monitoring. Synthetic aperture radar (SAR) imagery has been widely utilized to monitor ships for its capacity independent of weather and illumination conditions. Numerous methods have been proposed for ship detection based on SAR imagery, including traditional methods (Xing et al. Citation2013; Leng et al. Citation2015; Xu, Zhang, and Zhang) and deep learning methods (Lin et al. Citation2019; Sun et al. Citation2021; Kang et al. Citation2017; Xiong et al. Citation2022; Xu et al. Citation2022; Zhang and Zhang Citation2022a, Citation2022c; Zhang et al. Citation2019; Zhang and Zhang Citation2019; Xu, Zhang, and Zhang Citation2022; Zhang et al. Citation2021). Ship classification by SAR imagery has also been investigated (Zhang and Zhang Citation2022b, Citation2022d). However, the boundaries of ship hulls in SAR images are usually vague owing to imaging characteristics, such as the dihedral reflector composed of hull and water and hull deformation caused by motion. In addition, most high-resolution SAR imagery is not freely available and is expensive.
Despite being affected by clouds, high-resolution optical satellite sensors have attracted increasing attention for ship detection and monitoring in recent years, since they can provide larger data quantities than SAR sensors. Furthermore, detailed information on ship hulls and wakes can be obtained from optical imagery. This makes them complementary to SAR for maritime ship monitoring.

Hull and wake represent two intrinsic features of a moving ship. In this scenario, two types of remote-sensing-based methods for ship detection were developed, i.e. hull detection and wake detection. A ship hull is usually a salient target surrounded by dark water in optical images. Hull detection is similar to other target detection in the field of computer vision, which has been developed for decades. Most previous research focused on hull detection, for which the traditional coarse-to-fine approach was usually exploited, since it has low hardware requirements and can be applied to extensive devices with less time consumption than other approaches (Zhu et al. Citation2010; Kanjir, Greidanus, and Oštir Citation2018; Nie et al. Citation2020; Shi et al. Citation2014). Candidate hulls were first rapidly searched across the imagery. Shape and texture features of the candidate hulls were then extracted to distinguish true hulls from false alarms through classifiers.

Recently, hull detection based on deep learning has become a hot topic. There are mainly two schemes: the two-stage framework and the one-stage framework. In the two-stage framework, typified by Faster R-CNN, regions of interest (ROIs) that may contain ship hulls are first obtained through a region proposal network (RPN), and then refined classification of ROIs and regression of hull boundaries are conducted by other network branches (Ren et al. Citation2015; Zhang, Guo et al. Citation2020; Liu et al. Citation2021; Guo et al. Citation2020). In order to improve detection efficiency, the one-stage framework abandons the RPN and directly outputs confidence scores and coordinate offsets of boundary boxes (Wei et al. Citation2016; Redmon et al. Citation2016; Zhang et al. Citation2020). In addition, anchor-free frameworks and semantic segmentation networks were also proposed for hull detection (Chen et al. Citation2020; Ma et al. Citation2019; Wang et al. Citation2021; Cui et al. Citation2021).

Compared with ship hulls, ship wakes present more remarkable features in satellite images since they can reach tens of thousands of meters, which is much larger than the size of ship hulls. Ship wakes can be divided into two forms based on their structures in remote sensing images (Liu and Deng Citation2018). One is linear wake, such as turbulent wake, Kelvin arm, and internal wake. The other is striped wake with periodic structure, mainly including transverse and divergent waves of Kelvin wakes. Different approaches are required for the detection of these two types of wakes.

Striped wakes can be well captured in high-resolution optical images despite their infrequent appearance. A literature review identifies few studies on the detection of striped wakes (Kuo and Chen Citation2003; Tian et al. Citation2019) and shows that linear wake detection constitutes the mainstream of ship wake detection. Among the approaches to achieve this goal, the Radon transform (Radon Citation1986), the Hough transform (Hough Citation1962), and the scan curve (Eldhuset Citation1996) are most commonly used. They transform line detection into extreme point detection. However, background noise on the sea surface poses challenges, since real wakes rarely meet the assumption of ideal lines with uniform brightness. Different solutions were proposed to mitigate the influence of noise, such as enhancing the linear features of wakes through image processing and optimizing point detection in the transform space (Rey et al. Citation1990; Courmontagne Citation2005; Aggarwal and Karl Citation2006; Ai et al. Citation2011; Li, Qu, and Peng Citation2016; Biondi Citation2018, Citation2019; Karakuş, Rizaev, and Achim Citation2020; Graziano, D'errico, and Rufino Citation2016). These approaches were applied to SAR data for a few special cases and their applicability to optical satellite imagery has not yet been tested. Liu et al. (Citation2021) proposed a novel approach to detect ship wakes from optical imagery, whose effectiveness was verified through application to multi-sensor satellite imagery.

Although different ship detection methods have been proposed to handle various scenarios, some problems remain unsolved. Firstly, ship hull detection and wake detection are independent of each other in existing methods, although they can be coupled for complementary purposes. Secondly, valuable and comprehensive information from combining hull and wake detection is inevitably missed if only one of them is employed, as done in most previous studies. Thirdly, classification of ships is scarcely conducted despite the great desire to meet both civil and military needs, especially for small ships, since most existing methods focus on extracting ship locations.

In this study, we aim to (1) propose a method of cascaded ship hull and wake detection to provide a solution for the challenges mentioned above, (2) classify ships based on hull and wake information from optical imagery, and (3) increase ship detection accuracy and improve classification accuracy via combination of hull and wake. The rest of this paper is organized as follows. Section 2 presents the data and method in detail. The experiment results of our developed approach and comparison with the state-of-the-art detection methods are described in Section 3. Factors influencing the accuracy of the developed method are discussed in Section 4. Finally, conclusions are made in Section 5.

2. Data and method

2.1 Satellite data

Hull and wake detection involve satellite imagery of different spatial resolutions. The shape and texture features of ship hulls are more distinguishable in high-resolution satellite imagery than in low-resolution imagery. This also holds true for striped wakes, since their wavelength is usually short (Liu and Deng Citation2018). In contrast, a moderate resolution is needed for linear wake detection to reduce the influence of noise and striped wakes, on the premise that as many wakes as possible can be identified (Liu, Zhao, and Qin Citation2021). For these reasons, satellite imagery collected by the Gaofen-1 (GF-1) instrument was used in this study, which is equipped with two cameras (PMS) providing 2-m resolution panchromatic and 8-m resolution multispectral imagery. The technical specifications of the GF-1 sensor are listed in Table 1.

Table 1. Technical specifications for the GF-1 sensor.

Examples of ships in GF-1 imagery of different wavebands are depicted in Figure 1. The multispectral data were all rescaled to 0–1. Through the comparison of panchromatic and multispectral bands, it can be found that ship hulls and striped wakes are more discernible in the panchromatic band of 2-m resolution, while turbulent wakes are smoother with less noise and the Kelvin arm presents as a more easily detectable line in the multispectral bands of 8-m resolution. In addition, the contrast between background water and ship wakes is stronger in the infrared band than in other bands due to the strong water absorption at infrared wavelengths. Therefore, ship hulls and striped wakes were detected from 2-m panchromatic images while linear wakes were extracted from 8-m near-infrared (NIR) images, following the approach proposed by Liu et al. (Citation2021). GF-1 data were downloaded from the Guangdong Data and Application Center for the High-resolution Earth Observation System. Surface reflectance was produced using the ENVI software. The built-in Fast Line-of-Sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) module was exploited to remove aerosol contributions to the multispectral signals. Ocean-land separation was done using coastline vector data and ship detection was conducted only in ocean areas. The locations and categories of 312 ships from 11 GF-1 images were recorded manually to test the proposed method.

Figure 1. Some ship examples in different bands of the GF-1 imagery.


2.2 Cascaded detection of ship hull and wake

In this study, ship location and classification were accomplished through cascaded detection of ship hull and wake. The flowchart is shown in Figure 2. Candidate ship hulls were extracted ahead of wakes since the latter were delineated starting from ship hulls, without azimuth shift in optical imagery. Isotropic descriptors were employed to recognize candidate ship hulls. Hull shapes, a crucial parameter for wake detection, were directly extracted using a traditional coarse-to-fine method. Candidate hulls were rapidly located through a visual saliency detection method called phase spectrum of Fourier transform (PFT) (Guo, Ma, and Zhang Citation2008). A hull refining module was executed to generate accurate shapes of candidate hulls, and false alarms were eliminated according to shape features. Texture features were then extracted to further distinguish true hulls from false alarms with a Gaussian process (GP) classifier. Both low- and high-resolution subimages were clipped with each candidate hull at the center for the detection of striped and linear wakes, respectively. Striped wakes were detected in the Fourier transform space of the high-resolution subimage. Meanwhile, linear wakes were detected in the filtered subimage using the method proposed by Liu et al. (Citation2021). Finally, the categories of candidate ships were determined by expert decision-making rules according to hull and wake information. The details are described in the following subsections.

Figure 2. Flow chart of cascaded detection of ship hull and wake in this study.


2.2.1 Hull detection

Ship hulls present a remarkable discrepancy from the adjacent water while occupying only a small proportion of the whole image. In view of these characteristics, ship hulls can be readily captured through visual inspection and regarded as salient targets, which is suitable for preliminary hull identification. Although satellite imagery of high resolution contains tens to hundreds of millions of pixels, PFT has low computational complexity with good accuracy and can generate the saliency map of the whole image in a single calculation. Therefore, PFT was employed to obtain the saliency map given its high processing efficiency. The detailed steps are summarized below (Guo, Ma, and Zhang Citation2008).

(1) F(x, y) = F[I(x, y)]
(2) P(x, y) = P[F(x, y)]
(3) S(x, y) = g(x, y) * || F^{-1}[exp(i·P(x, y))] ||^2

where I(x, y) is the image intensity; F and F^{-1} denote the Fourier transform and inverse Fourier transform, respectively; P(F) represents the phase spectrum of the transformed image F; g(x, y) is a 2D Gaussian filter; || · || denotes the modulus. Salient targets were extracted as candidate hulls through threshold segmentation. The threshold (Th) was calculated from:

(4) Th = m + kσ

where m and σ are the mean and standard deviation of S(x, y), respectively. k is a constant and was empirically set to 1.5 to keep as many targets as possible.
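The PFT steps above (Eqs. 1-4) can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' code: the Gaussian width sigma and the toy image are illustrative assumptions, while k = 1.5 follows the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pft_saliency(image, k=1.5, sigma=3.0):
    """PFT saliency (Eqs. 1-3) followed by threshold segmentation (Eq. 4)."""
    F = np.fft.fft2(image)                       # Eq. 1: Fourier transform
    phase = np.angle(F)                          # Eq. 2: phase spectrum
    recon = np.fft.ifft2(np.exp(1j * phase))     # phase-only reconstruction
    saliency = gaussian_filter(np.abs(recon) ** 2, sigma)  # Eq. 3: smoothed squared modulus
    th = saliency.mean() + k * saliency.std()    # Eq. 4: Th = m + k*sigma
    return saliency, saliency > th

# toy scene: a bright "hull" on dark water
image = np.zeros((128, 128))
image[60:68, 50:80] = 1.0
saliency, mask = pft_saliency(image)
```

Because only the phase spectrum is kept, uniform regions are suppressed and the compact bright target dominates the saliency map, which is why a single global threshold suffices at this coarse stage.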

Salient targets may contain adjacent bright noises since a global threshold was used for the entire image. To recover the actual shapes of salient targets, a hull refining module was designed based on the approach proposed by Liu et al. (Citation2021). The schematic diagram for the process is shown in Figure 3 and described below.

  1. A subimage was segmented from the whole image through dilating by 200 pixels from the bounding box of a candidate hull in four directions (left, right, upward, and downward), which ensured that the hull was totally inside the subimage with enough water pixels around.

  2. A series of hulls was achieved through binarization as the threshold grew from the initial value calculated by the Otsu method (Otsu Citation1979) to the maximum value of each subimage.

  3. The shape index (SI) of each hull was calculated as the summation of the convexness and the rectangularity based on the assumption that the hull shape in high-resolution images can be approximated as a rectangle. Each SI corresponds to a threshold Ti.

  4. The Ti corresponding to the first peak of SI that exceeded 1.4 was used to binarize the subimage to obtain the final refined hull, in order to avoid retaining only the brightest parts of a hull. The final refined hulls must intersect with the unprocessed candidate hulls.

  5. If all candidate hulls were processed, the hull refining ended. Otherwise, the above steps would be repeated.
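The threshold sweep in steps 2-4 can be sketched as below. This is a hedged approximation: SI is taken as solidity plus extent (scikit-image's stand-ins for convexness and rectangularity), and the number of threshold steps and the toy subimage are assumptions, not values from the paper.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def refine_hull(subimage, si_min=1.4, n_steps=50):
    """Sweep thresholds from the Otsu value upward, track the shape index
    SI = solidity + extent of the largest component, and keep the mask at
    the first SI peak above si_min (steps 2-4 of the refining module)."""
    thresholds = np.linspace(threshold_otsu(subimage), subimage.max(), n_steps)
    best_mask, prev_si = None, -np.inf
    for t in thresholds:
        mask = subimage > t
        lbl = label(mask)
        regions = regionprops(lbl)
        if not regions:
            break
        r = max(regions, key=lambda p: p.area)   # largest candidate component
        si = r.solidity + r.extent               # convexness + rectangularity
        if prev_si >= si_min and si < prev_si:   # passed the first SI peak
            break
        if si >= si_min:
            best_mask = lbl == r.label
        prev_si = si
    return best_mask                             # None if SI never reached si_min

# toy subimage: uniform water (0.1) with a 20 x 40 pixel bright hull (0.8)
subimage = np.full((100, 100), 0.1)
subimage[40:60, 30:70] = 0.8
hull_mask = refine_hull(subimage)
```

Raising the threshold progressively strips away adjacent bright noise, and SI peaks when the remaining component is most rectangle-like, which is the behavior the module exploits.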

Figure 3. The process of hull refining module.


Finer shapes of candidate hulls were obtained by hull refining. False alarms were then eliminated based on shape features, including area, length, width, and length-width ratio. Their ranges for real ships were experimentally determined according to ground truth and are listed in Table 2. The biggest ship in the world is about 400 m long. Given the turbulent regions beside and behind ships, the largest length and width were set to 600 m and 100 m, respectively. The shapes of small ships tend to appear as circles rather than rectangles in satellite imagery due to the limitation of image resolution, which is inconsistent with reality. Therefore, the minimum length-width ratio was set small enough to retain as many small candidate targets as possible.

Table 2. Minimum and maximum of hull shape features for real ships.

However, false alarms may still survive after shape filtering due to the complexity of the rough sea surface. Texture features were then used to separate true hulls from false ones. Since ships may travel in any direction, making it difficult to obtain an accurate hull orientation, especially for small ships, rotation-invariant features were introduced. Local binary patterns (LBP) (Ojala, Pietikainen, and Maenpaa Citation2002), the region covariance descriptor (RCD) (Tuzel, Porikli, and Meer Citation2006; Dong, Liu, and Fang Citation2018) and KAZE features (Alcantarilla, Bartoli, and Davison Citation2012) were exploited to generate rotation-invariant features of candidate hulls.
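Of the three descriptors, LBP is the simplest to sketch. The (P, R) settings and the uniform-pattern mapping below are common defaults from Ojala et al. (2002), not values stated in the paper; the histogram of uniform codes is one plausible way to turn per-pixel codes into a rotation-invariant feature vector.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, P=8, R=1):
    """Rotation-invariant uniform LBP histogram for a candidate-hull patch."""
    codes = local_binary_pattern(patch, P, R, method="uniform")
    interior = codes[R:-R, R:-R]               # drop border pixels
    hist, _ = np.histogram(interior, bins=np.arange(P + 3), density=True)
    return hist                                # P + 2 rotation-invariant bins

# the histogram is (near-)identical under rotation of the patch
rng = np.random.default_rng(0)
patch = rng.random((32, 32))
h1 = lbp_histogram(patch)
h2 = lbp_histogram(np.rot90(patch))
```

Because the uniform mapping depends only on the number of set bits and circular transitions in the neighbor pattern, rotating the patch leaves the histogram essentially unchanged, which is exactly the property needed when hull orientation is unknown.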

Candidate hulls were labeled as true ships or false alarms according to texture features in typical coarse-to-fine methods. In this study, wake information was combined to improve the classification accuracy, as described in Section 2.2.3. First, the probability that each candidate hull is a real ship was needed. The powerful machine-learning-based GP was employed as the classifier since it can deliver a clear probabilistic interpretation for label predictions (Rasmussen Citation2004). In addition, GP can automatically determine the hyperparameters and covariance from the training set without cross-validation and considers the predictive variance during the decision procedure. A comparison between the performance of GP and other representative classifiers is discussed in Section 4.

The derivation of the GP classifier is briefly described here; please refer to Rasmussen (Citation2004) and Bazi and Melgani (Citation2010) for more details. The training and test datasets built from the texture feature matrices are denoted by x = [x_1; x_2; …; x_N] ∈ R^(N×M) and x* = [x*_1; x*_2; …; x*_(N*)] ∈ R^(N*×M), respectively. Their corresponding label vectors are y = [y_1; y_2; …; y_N] ∈ R^N and y* = [y*_1; y*_2; …; y*_(N*)] ∈ R^(N*), with y_i, y*_i ∈ {−1, +1}.

Different from the regression scenario, labels in classification do not satisfy the prior hypothesis for GP. Therefore, two latent variables f and f* associated with the posterior probabilities of y and y* are introduced to bridge the gap. In this paper, the numerical connection between the latent variables and the posterior probability was described by the logistic function:

(5) F(f) = 1 / (1 + exp(−f))

The probability of y* = +1 can then be calculated from:

(6) P(y* = +1 | x, y, x*) = ∫ F(f*) P(f* | x, y, x*) df*

The posterior distribution P(f* | x, y, x*) should be derived before estimating the posterior probability of y*. According to the GP principle, the latent variables are assumed to follow the joint normal distribution:

(7) P(f, f* | x, x*) = N( [0; 0], [K(x, x), K(x, x*); K(x*, x), K(x*, x*)] )

where K(·, ·) is the kernel function. P(f* | x, y, x*) can be obtained through marginalization over f:

(8) P(f* | x, y, x*) = ∫ P(f* | x, x*, f) P(f | x, y) df

P(f* | x, x*, f) in Eq. 8 follows a normal distribution and can be calculated as the conditional distribution of the joint distribution in Eq. 7:

(9) P(f* | x, x*, f) ~ N(M(f*), V(f*))
(10) M(f*) = K(x*, x) K(x, x)^{-1} f
(11) V(f*) = K(x*, x*) − K(x*, x) K(x, x)^{-1} K(x, x*)

P(f | x, y) in Eq. 8 does not follow a normal distribution. The Laplace approximation was introduced to approximate P(f | x, y) by an optimal normal distribution Q(f | x, y):

(12) P(f | x, y) ≈ Q(f | x, y) ∝ exp( −(1/2) (f − f̂)^T Σ_f̂^{-1} (f − f̂) )

where f̂ and Σ_f̂ denote the mean vector and covariance matrix, respectively. They are given by:

(13) f̂ = argmax_f P(f | x, y)
(14) Σ_f̂ = −( ∇∇ log P(f | x, y) |_{f = f̂} )^{-1}

In order to calculate f̂ and Σ_f̂, P(f | x, y) can be factored using Bayes' theorem:

(15) P(f | x, y) = P(y | f) P(f | x) / P(y | x)

Only P(y | f) and P(f | x) depend on f and thus enter the derivation of f̂. P(f | x) follows the normal distribution below:

(16) P(f | x) ∝ exp( −(1/2) f^T Σ^{-1} f )

where Σ = K(x, x).

Through the above manipulation, Eq. 13 was further converted to the following formula:

(17) f̂ = argmax_f [ log P(y | f) − (1/2) f^T Σ^{-1} f ]

However, the above equation can hardly be solved analytically. The Newton method was utilized based on the following iterative equation:

(18) f^(t+1) = f^(t) − ( ∇∇ log P(y | f^(t)) − Σ^{-1} )^{-1} ( ∇ log P(y | f^(t)) − Σ^{-1} f^(t) )

The covariance matrix was then achieved via

(19) Σ_f̂ = ( −∇∇ log P(y | f̂) + Σ^{-1} )^{-1}

Once the calculations of f̂ and Σ_f̂ were finished, P(f | x, y) was estimated from Eq. 12. Finally, the probability of y* = +1 was achieved according to Eqs. 6 and 8.
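The Newton iteration of Eq. 18 can be sketched directly in NumPy. This is an illustrative sketch under assumptions: the kernel matrix, labels, and iteration count are toy values, and a production implementation would use the numerically stable formulation of Rasmussen & Williams (Algorithm 3.1) rather than explicit inverses.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def laplace_mode(K, y, n_iter=30):
    """Newton iteration (Eq. 18) for the Laplace mode f_hat.

    K: n x n kernel matrix; y: labels in {-1, +1} with a logistic link."""
    n = len(y)
    f = np.zeros(n)
    K_inv = np.linalg.inv(K + 1e-10 * np.eye(n))   # jitter for stability
    for _ in range(n_iter):
        pi = sigmoid(f)
        grad = (y + 1) / 2.0 - pi     # gradient of log P(y|f)
        W = pi * (1.0 - pi)           # negative (diagonal) Hessian of log P(y|f)
        # Newton step: f <- f + (W + K^-1)^-1 (grad - K^-1 f)
        f = f + np.linalg.solve(np.diag(W) + K_inv, grad - K_inv @ f)
    return f

# toy problem: independent latents (diagonal kernel), two labels per class
K = np.eye(4) * 2.0
y = np.array([1.0, 1.0, -1.0, -1.0])
f_hat = laplace_mode(K, y)
```

At convergence the gradient of Eq. 17 vanishes, i.e. ∇log P(y|f̂) = Σ^{-1} f̂, which is also the fixed point of Eq. 18.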

The classifier should be trained in advance using ship hull and false alarm samples. The samples were extracted using the candidate hull detection method introduced above from 18 GF-1 images acquired from 2013 to 2015 over the coastal regions of Guangdong Province, China. The detected candidate hulls were labeled manually as ship hulls or false alarms. In order to increase the robustness of the classifier, each sample was rotated 7 times with an interval of 45º. In total, 4064 ship hull and 4192 false alarm samples were generated.
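As a practical note, scikit-learn's GaussianProcessClassifier implements this same scheme (a logistic link with the Laplace approximation), so training such a classifier can be sketched as below. The synthetic two-cluster features stand in for the real LBP/RCD/KAZE vectors and are purely illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Synthetic stand-in for the texture-feature matrices: hulls (+1) and
# false alarms (-1) as two well-separated clusters in a 3-D feature space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (40, 3)), rng.normal(3.0, 0.5, (40, 3))])
y = np.array([1] * 40 + [-1] * 40)

# Logistic link + Laplace approximation (Eqs. 5-19); kernel hyperparameters
# are fitted from the training set alone, without cross-validation.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X, y)
p_hull = gpc.predict_proba([[0.0, 0.0, 0.0]])   # columns ordered as gpc.classes_
```

The `predict_proba` output is the calibrated P(y* = +1) of Eq. 6, which is the quantity later combined with wake evidence in Section 2.2.3.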

2.2.2 Wake detection

Ship wakes were detected in subimages with ship hulls at the center. Linear wake detection followed the method proposed by Liu et al. (Citation2021). Given their periodic wave structure, striped wakes were detected in the Fourier transform domain, in which they appear as bright contrasts with respect to the surrounding water. Wake detection then turned into peak point detection. Natural waves may also present periodic structures and hamper the detection of striped wakes. However, the characteristics of striped wakes and natural waves can be easily distinguished. Striped wakes usually form a cone shape (Figure 4(a)) in the image before the Fourier transform and a crescent shape (Figure 4(b)) after it. In contrast, natural waves exhibit no regular shape in the original image (Figure 4(c)) but a nearly circular pattern when the Fourier transform is applied (Figure 4(d)). Based on this, a striped wake detection method was designed; the entire procedure is shown in Figure 5. The following steps were carried out.

Figure 4. Simulated images with the presence of striped wakes (a) and natural waves (c). The corresponding images after Fourier transform are shown in (b) and (d).


Figure 5. Flow chart for striped wake detection.

  1. A subimage was obtained by cropping the original image with a candidate hull at the center.

  2. The frequency image was generated using the fast Fourier transform and shifting the zero-frequency component to the center.

  3. Connected components were acquired by threshold segmentation using the Otsu method (Otsu Citation1979). Whether a component corresponded to a striped wake was determined by its shape features, including length, maximum width, and their ratio. The threshold of each shape feature was obtained from statistics of the shape features of connected components in the frequency domain, computed over wakes simulated under different imaging conditions using the method proposed in Liu, Deng, and Zhao (Citation2019). Gaussian noise was added to the simulations to approximate actual circumstances. Connected components whose length, width, and length-width ratio exceeded the thresholds were then eliminated. If more than one pair of connected components was left, the pair with the largest area was reserved.

  4. Based on the principle of the fast Fourier transform, the wave number kt and the propagation direction Φt for the center of the transverse wake were calculated after a striped wake was detected through

(20) k_t = 2π √( (x_k / (NΔx))^2 + (y_k / (MΔy))^2 )
(21) Φ_t = arctan( (y_k / (MΔy)) / (x_k / (NΔx)) )

where x_k and y_k are the coordinates of the pixel with the maximum value in the connected component corresponding to the striped wake; M and N are the numbers of rows and columns, respectively; Δx and Δy are the resolutions in the row and column directions, respectively, both of which are 2 m in this study. The striped wake in the far region can be considered a freely propagating gravity wave (Liu and Deng Citation2018). The propagation velocity was calculated according to the dispersion relationship of gravity waves in deep water via c_t = √(g / k_t), where g is the gravity acceleration. The ship velocity was taken as equal to c_t according to the linear model of Kelvin wakes (Zilman, Zapolski, and Marom Citation2015; Oumansour, Wang, and Saillard Citation1996).
  5. Fourier transform images were centrally symmetric. This means that wakes may propagate in a direction of Φt or Φt + 180º. In order to confirm the propagation direction, the subimage was divided equally into two parts. If −45º < Φt < 45º, left and right parts were obtained; otherwise, upper and lower parts were obtained. The striped wake detection was repeated for each part, and the propagation direction was determined from the part in which the striped wake was observed. If striped wakes existed in both parts, they were actually natural waves and were identified as false alarms.

It should be noted that striped wakes were mainly used to distinguish between cargo ships and warships, as depicted below. Although some weak striped wakes generated by slow ships may not present a crescent shape after the Fourier transform and were thus missed, this does not influence ship classification.
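Eqs. 20-21 and the deep-water dispersion relation combine into a short helper. One assumption in this sketch: x_k and y_k are taken as offsets of the peak pixel from the shifted zero-frequency center, which is one plausible reading of the coordinate convention.

```python
import numpy as np

G = 9.81   # gravity acceleration (m/s^2)

def transverse_wake_params(xk, yk, N, M, dx=2.0, dy=2.0):
    """Wave number (Eq. 20), propagation direction (Eq. 21) and phase speed
    of the transverse wake from the FFT peak pixel (xk, yk)."""
    kx = xk / (N * dx)                       # cycles per meter, column direction
    ky = yk / (M * dy)                       # cycles per meter, row direction
    kt = 2.0 * np.pi * np.hypot(kx, ky)      # Eq. 20, rad/m
    phi = np.degrees(np.arctan2(ky, kx))     # Eq. 21, four-quadrant form
    ct = np.sqrt(G / kt)                     # deep-water dispersion: c = sqrt(g/k)
    return kt, phi, ct

# a 100-m transverse wavelength in a 500 x 500 subimage at 2 m/pixel
kt, phi, ct = transverse_wake_params(xk=10, yk=0, N=500, M=500)
```

For this 100-m wavelength the phase speed comes out near 12.5 m/s (roughly 24 kt), which is the ship-speed estimate used for classification.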

2.2.3 Ship detection and classification

Most previous studies focused on the determination of true ships from candidate hulls using machine learning or deep learning classifiers. In this study, hull classification and wake presence were coupled to improve detection accuracy. The classifier produced the probability that a candidate hull belonged to a true ship, rather than a hard classification label. This probability was then raised according to the number and type of detected wakes. As a rule of thumb, the wakes increased the probability: (1) by 0.1 if only one turbulent wake existed, (2) by 0.2 if one turbulent wake and one Kelvin arm existed, (3) by 0.4 if one turbulent wake and two Kelvin arms existed, or the striped wake was detected, or internal waves appeared. Finally, candidate hulls with a probability of more than 0.5 were identified as true ships. Furthermore, ship information was more complete when wakes were involved and some false alarms could be eliminated through expert decision.
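The rule of thumb above can be sketched as a small fusion function; the wake tag names ("turbulent", "kelvin", "striped", "internal") are hypothetical labels for illustration, not identifiers from the paper.

```python
def fuse_hull_and_wake(p_hull, wakes):
    """Raise the GP hull probability with wake evidence (rule of thumb above).

    p_hull: probability from the GP classifier; wakes: list of detected wake tags."""
    n_kelvin = wakes.count("kelvin")
    has_turbulent = "turbulent" in wakes
    if "striped" in wakes or "internal" in wakes or (has_turbulent and n_kelvin >= 2):
        boost = 0.4
    elif has_turbulent and n_kelvin == 1:
        boost = 0.2
    elif has_turbulent:
        boost = 0.1
    else:
        boost = 0.0
    return min(p_hull + boost, 1.0)

# candidates with a fused probability above 0.5 are kept as true ships
keep = fuse_hull_and_wake(0.35, ["turbulent", "kelvin"]) > 0.5
```

A borderline hull (here 0.35) is rescued by the wake evidence, which is exactly how the cascade lets wakes compensate for an ambiguous hull signature.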

After ship detection, targets identified as true ships were further classified as fishing vessels, motorboats, cargo ships, or warships. The comprehensive information on ship hulls and wakes was extracted from visual inspection and statistical analysis of a large number of ships with various Froude numbers and backgrounds, as reported in our previous works (Liu and Deng Citation2018; Liu, Zhao, and Qin Citation2021). The results showed that each ship category had characteristic wakes and that there were significant differences among the categories. The hull and wake features for different kinds of ships are summarized in Table 3.

Table 3. Hull and wake features for different kinds of ships.

It should be noted that the four ship types were categorized by their hull and wake features as presented in optical images, rather than by the actual types of ships. These criteria classified ships using features shown in optical remote-sensing imagery and ensured that most ships were included (Liu and Deng Citation2018). From Table 3, it can be seen that the same kind of ship may present different hull and wake features while the same hull and wake feature may correspond to different types of ships. A fuzzy classifier following the soft voting of ensemble learning, which returns the sum of predicted probabilities, was designed to address this situation. Each hull or wake feature was treated as a classifier, and the probability that a ship belonged to each category was predicted through expert decision and listed in Table 4, where wake features denote wake existence. The final class label was then derived from the category with the largest averaged probability.

Table 4. Probability that a ship was classified as one of the categories based on hull and wake features.

In this study, the bright turbulent regions right after ship hulls were treated as part of ship hulls. Turbulent wakes were excluded from the voting since they can be generated by all moving ships (except when internal waves appeared), so they had no influence on the classification. The hull length of fishing vessels and motorboats is relatively small while cargo ships and warships are relatively long. Thresholds of hull length and length-width ratio were experimentally set to 30 m and 4, respectively. A ship with a detected length larger than 30 m may still be a motorboat, since motorboats can generate long bright turbulent regions. Warships are more likely than cargo ships to produce classical Kelvin arms. A higher probability can thus be expected for a ship to be classified as a warship than as a cargo ship when classical Kelvin arms exist. Narrow V-shape arms can be observed when a ship moves very fast. The speed of motorboats with small hulls can easily exceed 30 kt, producing most of the narrow V-shape arms. The speed of warships can also reach 30 kt, but only in a limited number of cases. The vote was therefore set to 0.8 and 0.2 for motorboats and warships, respectively. The striped wakes detected in this study were obvious and generated by fast-moving ships; only warships and some cargo ships, such as container ships, met this condition and got the votes. Only cargo ships can generate internal waves, so they got 100% of the vote when internal waves appeared.

Each feature was treated equally and assigned the same weight. Wakes that were not detected were dropped during the final probability calculation. The category with the highest averaged probability was taken as the final class label for each ship. It should be noted that each feature was entitled to exercise a veto with only one vote; in other words, if a feature yielded a probability of 0 for a category, the ship to be classified was excluded from that category. In addition, if a ship was classified as a warship, there must in most cases be narrow V-shape arms or striped wakes as a result of its large velocity.
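The soft-voting-with-veto scheme can be sketched as follows. The vote vectors in the example are illustrative stand-ins for Table 4 (only the 0.8/0.2 split for narrow V-shape arms is stated in the text; the hull-length votes are assumed).

```python
import numpy as np

CATEGORIES = ["fishing vessel", "motorboat", "cargo ship", "warship"]

def classify_ship(feature_votes):
    """Soft-voting fuzzy classifier with veto.

    feature_votes: one probability vector over CATEGORIES per observed hull
    or wake feature; undetected wakes are simply not included."""
    votes = np.asarray(feature_votes, dtype=float)
    avg = votes.mean(axis=0)                  # equal weight for every feature
    avg[(votes == 0.0).any(axis=0)] = 0.0     # veto: one zero vote excludes a class
    return CATEGORIES[int(np.argmax(avg))]

# a hull longer than 30 m plus narrow V-shape Kelvin arms
label = classify_ship([
    [0.0, 0.1, 0.5, 0.4],   # hull length > 30 m (assumed votes)
    [0.0, 0.8, 0.0, 0.2],   # narrow V-shape Kelvin arms (0.8/0.2 per text)
])
```

Here the cargo-ship class, although favored by the long hull, is vetoed by the zero vote of the narrow V-shape arms, so the averaged probabilities select the motorboat class.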

2.3 Evaluation criteria

Ship detection is a binary classification problem in essence. The predicted samples were divided into four categories, namely, (1) True positive (TP): The samples predicted as ships were true ships; (2) True negative (TN): The samples predicted as false alarms were true false alarms; (3) False positive (FP): The samples predicted as ships were false alarms; (4) False negative (FN): The samples predicted as false alarms were ships.

Several metrics were then calculated to assess the performance of the proposed method in this study. They were defined as follows:

(22) R = TP / (TP + FN)
(23) P = TP / (TP + FP)
(24) A = (TP + TN) / (TP + TN + FP + FN)
(25) S = TN / (TN + FP)

where R, P, A, and S represent recall, precision, overall accuracy, and specificity, respectively. The larger these metrics, the better the performance.

The F-measure (F) can provide a way of combining recall and precision and was also employed to comprehensively evaluate the detection performance (Chinchor and Sundheim Citation1993):

(26) F = ((β² + 1) × Recall × Precision) / (β² × Recall + Precision)

where β² denotes the relative importance of recall over precision. It was set to 1, meaning that recall and precision are of equal weight. The larger F, the better the performance.
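Equations (22)-(26) translate directly into code; a minimal sketch of the metric computation from the confusion-matrix counts (the function name `detection_metrics` is ours):

```python
def detection_metrics(tp, tn, fp, fn, beta=1.0):
    """Recall, precision, overall accuracy, specificity, and F-measure
    (Eqs. 22-26) from true/false positive and negative counts."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    specificity = tn / (tn + fp)
    b2 = beta ** 2  # beta = 1 weights recall and precision equally
    f_measure = (b2 + 1) * recall * precision / (b2 * recall + precision)
    return recall, precision, accuracy, specificity, f_measure
```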

3. Results

Authentic ship hull shapes are essential for false alarm elimination, wake detection, and ship classification from satellite imagery. In this regard, the PFT algorithm and the hull refining module were utilized to obtain the shapes and locations of candidate targets. Examples of ship hulls from visual inspection are shown in Figure 6 to illustrate the effectiveness of PFT and hull refining. The ship hulls were accurately located by the PFT algorithm and the clutter noise was suppressed. On the other hand, areas adjacent to the ship hulls were also included in the salient regions, resulting in dilated candidate hulls with irregular boundaries. After hull refining, the shapes of the ship hulls were well restored.

Figure 6. The saliency detection and hull refining results of ship hulls.


Representative results of ship detection and classification are shown in Figure 7. Although many speckle noises and wind-induced waves existed, anchored ships were accurately recognized, as displayed in . The brightness of the ship hulls shows discrepancies. There was more than one ship in . The small ship in the middle of the image was successfully detected against a complicated background with only one turbulent wake () and classified as a fishing vessel. The other ship was not the focus of attention and its detection result is not shown. As shown in , the ship hull was originally small but covered by surrounding bright turbulent regions, which led to a large detected hull. The ship was correctly classified as a motorboat thanks to the narrow V-shape Kelvin arms. A ship longer than 30 m with a turbulent wake and a Kelvin arm can be clearly seen in . It was classified as a cargo ship. A small turbulent region beside the ship was excluded from the detected hull. Similarly, show another cargo ship with a turbulent wake and a Kelvin arm in the other direction; the lattice-like hull indicates a container ship. The Kelvin arm in the 2 m-resolution image appears as cusp waves with distinct crests and troughs; the crests dominate the reflectance, resulting in a bright line in the 8 m-resolution NIR image. The ship in was identified as a false target by the hull alone, which was corrected by the detected turbulent region shown in . The ship was finally classified as a cargo ship by combining the hull information. illustrate a fast-moving ship with an obvious transverse wave. The striped wake was nearly eliminated by frequency-domain filtering while the ship hull remained unchanged in the subimage, demonstrating that the striped wake was detected by searching for peak regions in the Fourier transform domain. Combined with its large length and length-width ratio, the ship was classified as a warship.
There were two moving ships close to each other, with one on the Kelvin arm of the other, as illustrated in . The hull and wake detection of the two ships were only weakly influenced by each other, and both ships were classified as cargo ships. Some transverse waves with alternating peaks and troughs could be seen near the right Kelvin arm of the lower ship, which made that Kelvin arm invisible in the NIR image where linear wakes were detected.

Figure 7. Ship detection and classification results for different scenarios. The green polygons are the boundaries of ship hulls. Blue and white lines denote dark and bright linear wakes, respectively.


To the best of our knowledge, this is the first time that ship wakes have been used to improve hull detection accuracy and to classify ships. Ship detection was conducted separately to compare with state-of-the-art coarse-to-fine ship detection methods using optical images of similar resolution, as reported in Shi et al. (2014), Yang, Xu, and Li (2017), Dong, Liu, and Fang (2018), and Nie et al. (2020). The algorithms and parameters for candidate target extraction differ among them; most used the Otsu method (Otsu 1979) to obtain thresholds for binarization of saliency maps, which is unsuitable for imagery covering a large area. To focus the comparison on false alarm elimination, the candidate targets extracted in this study were fed into the four state-of-the-art algorithms. Furthermore, the training samples in this paper were collected for traditional machine learning methods and are far from sufficient for deep learning methods; a comparison with deep learning methods is therefore beyond the scope of this study. The overall performance of each method is listed in Table 5. The method proposed in this paper shows the best performance, as demonstrated by the largest recall, precision, F, overall accuracy, and specificity. It is also worth noting that recall and precision for the proposed method increased by 1.6% and 9.5%, respectively, when wake information was included in addition to hull information, which demonstrates that wakes help to increase the accuracy of ship detection.

Table 5. Comparison between the performance of the method proposed in this study and state-of-the-art methods.

The statistical results for the length, length-width ratio, and area of targets identified as true ships are shown in Figure 8, where 1 pixel corresponds to 2 m. The area of the detected true ships ranged from 41 to 8373 pixels with a mean of 649.3 pixels and a standard deviation of 1035.9 pixels. The maximum count is located in the initial bins, and more than half of the ships fell into the first three bins. The length of the detected true ships ranged from 10 to 240 pixels with a mean of 46.0 pixels and a standard deviation of 38.5 pixels. The distribution of length is similar to that of area, except that the maximum count was found in the 20–25 pixel bin. More than 80% of the ships were shorter than 65 pixels. The length-width ratio of the detected true ships ranged from 1.1 to 7.7 with a mean of 3.25 and a standard deviation of 1.34. The length-width ratio was close to a normal distribution, with the maximum count in the 2.6–2.75 bin. The overall length-width ratios were lower than the true values because the turbulent regions around the ships were identified as parts of the hulls.

Figure 8. Statistics of the length, length-width ratio and area of targets identified as true ships.


Most ships in the high-resolution satellite images were small- to medium-sized according to the distributions of length and area. Meanwhile, most false alarms in the images were also small, such as white foam caused by breaking waves, buoys, and floating debris. Focusing on small targets therefore leaves many false alarms, and the tradeoff is difficult to balance. In the original method proposed by Yang, Xu, and Li (2017), targets with areas of less than 100 pixels were all removed, resulting in a high precision of 87.8% but a very low recall of 36.9%. Using the shape filters employed in this paper, many small ships were retained, as indicated by the increased recall of 75.0%; on the other hand, the precision decreased significantly to 42.9% owing to the many surviving false alarms. The other two methods had no shape filtering process and did not effectively remove small false alarms. Our proposed method balanced recall and precision well even for small ships: the length and area of detected ships were as low as 10 pixels and 41 pixels, respectively. The statistical analysis shows that most small- to medium-sized ships can be identified with high accuracy.

Targets identified as true ships with detected wakes were further classified based on the combination of hull and wake features. The classification results are listed in Table 6. According to manual statistics, there were 99 moving ships with wakes that could be classified. Among them, 83 ships were correctly classified and only 1 ship was falsely classified; 15 were omitted. The overall accuracy of correct classification was 83.8%, while the proportion of false classification was 1.0%. Manual validation showed that the omitted ships were excluded from the classification process because their wakes were not detected. The falsely classified ship was a motorboat with a turbulent wake and only one obvious Kelvin arm, and it was regarded as a cargo ship. All ships with correctly detected wakes were classified successfully, and the misclassification was caused by a false wake, which confirms the effectiveness of the fuzzy classifier proposed in this paper.

Table 6. Classification results of ships with detected wakes.

4. Discussion

4.1 Feature and classifier selection

False alarm elimination using texture features is a key procedure for ship detection. Based on the extracted features, the ship detection problem can be treated as a traditional classification task. According to whether algorithm development relies on labeled samples provided by the user, classification algorithms can be broadly categorized into supervised and unsupervised approaches. Supervised classification algorithms deliver more deterministic results and were employed in most ship detection methods. However, no uniform texture features and classifiers have been adopted in existing detection methods, and different combinations of texture features and classifiers can produce entirely different results. To obtain the optimal texture features and classifier, typical rotation-invariant texture features and classifiers were combined and tested here.

Moment invariants (MI) (Hu 1962), local binary pattern (LBP) (Ojala, Pietikainen, and Maenpaa 2002), radial gradient transform (RGT) (Dalal and Triggs 2005; Takacs et al. 2013), region covariance descriptor (RCD) (Tuzel, Porikli, and Meer 2006; Dong, Liu, and Fang 2018), speeded-up robust features (SURF) (Bay, Tuytelaars, and Van Gool 2006), and KAZE features (Alcantarilla, Bartoli, and Davison 2012) have proven effective and are widely used as rotation-invariant texture features for target classification. The considered classifiers covered various types of supervised learning methods, including the artificial neural network (ANN) (Glorot and Bengio 2010), the hyperplane-based support vector machine (SVM) with a linear kernel function (Christianini and Shawe-Taylor 2000), the decision tree-based random forest (RF) (Breiman 2001), and the probabilistic Gaussian process (GP) (Rasmussen 2004). The texture features and classifiers are summarized in Table 7.

Table 7. Summarization of invariant texture feature and classifiers.

All potential combinations of the texture features were tested to train each classifier for ship hull detection. The test was implemented on only part of the sample images to ensure the robustness of the algorithms. The optimal texture feature combination for each classifier, obtained by exhaustive search, is listed in Table 8. The largest F was achieved by the GP classifier trained with the LBP, RCD, and KAZE features, which was therefore used in this paper. The most frequently selected texture features were RGT, RCD, and KAZE; note that RCD appears for every classifier, whereas the MI feature was not selected by any classifier. The number of optimal texture features for each classifier varied between 3 and 4. Too few texture features would hinder distinguishing true ships from false alarms, while too many would degrade performance; a few effective texture features were sufficient for binary ship classification.

Table 8. The optimal texture feature combinations for different classifiers.
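The exhaustive search over feature subsets can be sketched as below; `evaluate` stands in for the actual train-and-score step (training a classifier on a feature subset and returning its F-measure), which the paper does not specify in code form, and the feature names simply mirror Table 7.

```python
from itertools import combinations

FEATURES = ["MI", "LBP", "RGT", "RCD", "SURF", "KAZE"]

def best_combination(evaluate, features=FEATURES):
    """Exhaustively score every non-empty feature subset and return the
    combination with the largest F-measure reported by `evaluate`."""
    best_f, best_combo = -1.0, None
    for k in range(1, len(features) + 1):
        for combo in combinations(features, k):
            f = evaluate(combo)
            if f > best_f:
                best_f, best_combo = f, combo
    return best_combo, best_f
```

With 6 candidate features this is only 63 subsets per classifier, so exhaustive enumeration is cheap compared with the training itself.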

4.2 Effects of key parameters

The adjustable parameters for linear wake detection were tuned according to Liu et al. (2021). The shape parameters for striped wake detection were obtained from simulation samples and required no optimization, since striped wakes were mainly used to distinguish cargo ships from warships and high precision suffices for that purpose. Only the parameters involved in hull detection need to be discussed, namely SI and the shape filtering thresholds. The minima and maxima of the shape features used for shape filtering were not tuned, to avoid over-fitting and to ensure robustness. Therefore, the key parameter for hull detection, SI, is assessed below.

The threshold for SI (referred to as TSI hereafter) was determined by whether a refined hull corresponded to the shape of a ship. A small TSI usually led to swollen hulls containing turbulent regions and foam, resulting in false hulls being retained. Conversely, with a large TSI the refined hulls may include only the brightest parts of the ships and thus provide false shape features. To determine the optimal TSI, different values were tested on sample images with ships at the center. The minimum enclosing rectangle of each ship was recorded manually. The Intersection-over-Union (IoU) between the detected hull, H, and the minimum enclosing rectangle, R, was employed to assess the performance. It was calculated from

(27) IoU = (H ∩ R) / (H ∪ R)

where H ∩ R and H ∪ R are the intersection and union areas of H and R, respectively. The larger the IoU, the more optimal the TSI.
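Equation (27) can be evaluated directly on binary masks of the detected hull and the manually recorded rectangle; a minimal sketch with NumPy (the function name `iou` is ours):

```python
import numpy as np

def iou(hull_mask, rect_mask):
    """Intersection-over-Union (Eq. 27) between two binary masks of equal shape."""
    h = np.asarray(hull_mask, dtype=bool)
    r = np.asarray(rect_mask, dtype=bool)
    union = np.logical_or(h, r).sum()
    if union == 0:
        return 0.0  # both masks empty: define IoU as 0
    return float(np.logical_and(h, r).sum() / union)
```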

The mean IoU over all samples was calculated to test the performance of each TSI and is referred to simply as IoU for convenience. The variation curve is depicted in Figure 9. With a small TSI, the shape feature filtering condition SI > TSI is easily satisfied by candidate hulls, and the refined hulls were derived from the binary segmentation with the initial threshold; as expected, the IoU remained unchanged with TSI. The shape feature filtering results started to vary as TSI increased, and the IoU peaked at TSI = 1.4. The IoU then decreased sharply to almost 0 with a further increase in TSI, which can be explained by the fact that only the brightest parts of the candidate hulls were left when TSI was too large.
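The TSI sweep described above can be sketched as follows; `refine` and `iou` are hypothetical callables standing in for the paper's hull refining module and the IoU of Equation (27), and the function name `select_tsi` is ours.

```python
import numpy as np

def select_tsi(samples, refine, iou, tsi_values):
    """Sweep TSI over a grid and keep the value maximizing the mean IoU.

    samples: list of (image_patch, rect_mask) pairs, where rect_mask is the
      manually recorded minimum enclosing rectangle of the ship;
    refine(patch, tsi) -> binary hull mask (the hull refining step);
    iou(mask_a, mask_b) -> Intersection-over-Union of two binary masks.
    """
    best_tsi, best_score = None, -1.0
    for tsi in tsi_values:
        scores = [iou(refine(patch, tsi), rect) for patch, rect in samples]
        mean_iou = float(np.mean(scores))
        if mean_iou > best_score:
            best_tsi, best_score = tsi, mean_iou
    return best_tsi, best_score
```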

Figure 9. The minimum IoU of all samples with the change of TSI.


5. Conclusion

A method for cascaded detection of ship hull and wake was proposed in this paper. First, the locations and shapes of candidate ship hulls were acquired by the PFT algorithm and the hull refining module from 2 m-resolution panchromatic images. Obvious false hulls were preliminarily removed using shape features composed of area, length, width, and length-width ratio. Texture features of the candidate hulls were then extracted and fed into a Gaussian process classifier to obtain the probability of each candidate hull being a ship. Meanwhile, linear wakes and striped wakes were detected around all candidate hulls using 8 m-resolution NIR images and 2 m-resolution panchromatic images, respectively. The probability of a candidate being identified as a ship increased when wakes were present. Finally, candidates with probabilities higher than 0.5 were regarded as ships. True ships with wakes were further classified as fishing vessels, motorboats, cargo ships, or warships using a fuzzy classifier following the soft voting of ensemble learning. The proposed method was implemented on multispectral high-resolution GF-1 PMS images. The recall, precision, overall accuracy, and specificity of hull detection amounted to 90.1%, 88.1%, 98.8%, and 99.3%, respectively, demonstrating better performance than other state-of-the-art coarse-to-fine ship detection methods. To the best of our knowledge, this is the first time that ship hull and wake detection have been cascaded for ship detection and that wakes have been used to improve hull detection accuracy and classify ships. Factors influencing the accuracy of the developed method, including the combination of texture features and classifiers and the key parameters of the method, were also discussed.

The combination of hull and wake offers an opportunity to classify ships using images of relatively low resolution. The proposed method can also be applied to other satellite sensors of both high and moderate resolution. It should be noted that the ship classification was designed for ships traveling normally and is not applicable to exceptional cases, such as low-speed motorboats, which is a considerable limitation of the proposed method. Nonetheless, it is of great practical significance, since static ships are mainly anchored in ports under supervision, while moving ships in the open sea show higher uncertainty. In the future, a method that separates ship hulls from bright turbulent regions will be designed to obtain precise ship shapes, and more ship categories will be distinguished. The newly designed method could then be operationally implemented for real-time ship monitoring.

Disclosure statement

No potential conflict of interest was reported by the authors.

Data availability statement

The data that support the findings of this study are from Guangdong Data and Application Center for High-resolution Earth Observation System at http://gdgf.gd.gov.cn/GDGF_Portal/index.jsp.

Additional information

Funding

This work is supported by the Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) Grant [number SML2021SP308]; the Innovation Group Project of Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) Grant [number 311020004]; the National Natural Science Foundation of China Grant [number 42176173, 41901352, 41071230]; the China Postdoctoral Science Foundation Grant [number 2020M672938]; and the Guangdong Basic and Applied Basic Research Foundation Grant [number 2019A1515111151]. We would like to thank Guangdong Data and Application Center for High-resolution Earth Observation System for providing GF-1 data.

References

  • Aggarwal, N., and W. C. Karl. 2006. “Line Detection in Images Through Regularized Hough Transform.” IEEE Transactions on Image Processing 15 (3): 582–19. doi:10.1109/TIP.2005.863021.
  • Ai, J., Q. Xiangyang, Y. Weidong, Y. Deng, F. Liu, L. Shi, and Y. Jia. 2011. “A Novel Ship Wake CFAR Detection Algorithm Based on SCR Enhancement and Normalized Hough Transform.” IEEE Geoscience and Remote Sensing Letters 8 (4): 681–685. doi:10.1109/LGRS.2010.2100076.
  • Alcantarilla, P. F., A. Bartoli, and A. J. Davison. 2012. “KAZE Features.” In Proceedings of the European Conference on Computer Vision (ECCV), Berlin, Heidelberg: Springer.
  • Bay, H., T. Tuytelaars, and L. Van Gool. 2006. “SURF: Speeded Up Robust Features.” In Proceedings of the European Conference on Computer Vision (ECCV), Berlin, Heidelberg: Springer.
  • Bazi, Y., and F. Melgani. 2010. “Gaussian Process Approach to Remote Sensing Image Classification.” IEEE Transactions on Geoscience and Remote Sensing 48 (1): 186–197. doi:10.1109/TGRS.2009.2023983.
  • Biondi, F. 2018. “Low-Rank Plus Sparse Decomposition and Localized Radon Transform for Ship-Wake Detection in Synthetic Aperture Radar Images.” IEEE Geoscience and Remote Sensing Letters 15 (1): 117–121. doi:10.1109/LGRS.2017.2777264.
  • Biondi, F. 2019. “A Polarimetric Extension of Low-Rank Plus Sparse Decomposition and Radon Transform for Ship Wake Detection in Synthetic Aperture Radar Images.” IEEE Geoscience and Remote Sensing Letters 16 (1): 75–79. doi:10.1109/lgrs.2018.2868365.
  • Breiman, L. 2001. “Random Forest.” Machine Learning 45: 5–32. doi:10.1023/A:1010933404324.
  • Chen, J., F. Xie, Y. Lu, and Z. Jiang. 2020. “Finding Arbitrary-Oriented Ships from Remote Sensing Images Using Corner Detection.” IEEE Geoscience and Remote Sensing Letters 17 (10): 1712–1716. doi:10.1109/LGRS.2019.2954199.
  • Chinchor, N., and B. Sundheim. 1993. “MUC-5 Evaluation Metrics.” In Proceedings of the 5th conference on Message understanding, Baltimore, Maryland: Association for Computational Linguistics. 69–78.
  • Christianini, N., and J. C. Shawe-Taylor. 2000. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. UK: Cambridge University Press.
  • Courmontagne, P. 2005. “An Improvement of Ship Wake Detection Based on the Radon Transform.” Signal Processing 85 (8): 1634–1654. doi:10.1016/j.sigpro.2005.02.013.
  • Cui, Z., J. Leng, Y. Liu, T. Zhang, P. Quan, and W. Zhao. 2021. “SKNet: Detecting Rotated Ships as Keypoints in Optical Remote Sensing Images.” IEEE Transactions on Geoscience and Remote Sensing 59 (10): 8826–8840. doi:10.1109/TGRS.2021.3053311.
  • Dalal, N., and B. Triggs. 2005. “Histograms of Oriented Gradients for Human Detection.” Paper presented at the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), 20-25 June 2005.
  • Dong, C., J. Liu, and X. Fang. 2018. “Ship Detection in Optical Remote Sensing Images Based on Saliency and a Rotation-Invariant Descriptor.” Remote Sensing 10 (3): 400. doi:10.3390/rs10030400.
  • Eldhuset, K. 1996. “An Automatic Ship and Ship Wake Detection System for Spaceborne SAR Images in Coastal Regions.” IEEE Transactions on Geoscience and Remote Sensing 34 (4): 1010–1019. doi:10.1109/36.508418.
  • EMSA. 2018. The World Merchant Fleet in 2018. Statistics from Equasis.
  • Glorot, X., and Y. Bengio. 2010. “Understanding the Difficulty of Training Deep Feedforward Neural Networks.” In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, edited by T. Y. Whye and T. Mike, Proceedings of Machine Learning Research: PMLR. 249–256.
  • Graziano, M., M. D’errico, and G. Rufino. 2016. “Wake Component Detection in X-Band SAR Images for Ship Heading and Velocity Estimation.” Remote Sensing 8 (6): 498. doi:10.3390/rs8060498.
  • Guo, C., Q. Ma, and L. Zhang. 2008. “Spatio-Temporal Saliency Detection Using Phase Spectrum of Quaternion Fourier Transform.” Paper presented at the 2008 IEEE Conference on Computer Vision and Pattern Recognition, 23-28 June 2008.
  • Guo, H., X. Yang, N. Wang, B. Song, and X. Gao. 2020. “A Rotational Libra R-CNN Method for Ship Detection.” IEEE Transactions on Geoscience and Remote Sensing 58 (8): 5772–5781. doi:10.1109/TGRS.2020.2969979.
  • Hough, P. V. C. 1962. Method and Means for Recognizing Complex Patterns. U.S. Patent 3,069,654.
  • Hu, M. 1962. “Visual Pattern Recognition by Moment Invariants.” IRE Transactions on Information Theory 8 (2): 179–187. doi:10.1109/TIT.1962.1057692.
  • Kang, M., J. Kefeng, X. Leng, and Z. Lin. 2017. “Contextual Region-Based Convolutional Neural Network with Multilayer Fusion for SAR Ship Detection.” Remote Sensing 9 (8): 860. doi:10.3390/rs9080860.
  • Kanjir, U., H. Greidanus, and K. Oštir. 2018. “Vessel Detection and Classification from Spaceborne Optical Images: A Literature Survey.” Remote Sensing of Environment 207: 1–26. doi:10.1016/j.rse.2017.12.033.
  • Karakuş, O., I. Rizaev, and A. Achim. 2020. “Ship Wake Detection in SAR Images via Sparse Regularization.” IEEE Transactions on Geoscience and Remote Sensing 58 (3): 1665–1677. doi:10.1109/TGRS.2019.2947360.
  • Kuo, J. M., and K. Chen. 2003. “The Application of Wavelets Correlator for Ship Wake Detection in SAR Images.” IEEE Transactions on Geoscience and Remote Sensing 41 (6): 1506–1511. doi:10.1109/TGRS.2003.811998.
  • Leng, X., K. Ji, K. Yang, and H. Zou. 2015. “A Bilateral CFAR Algorithm for Ship Detection in SAR Images.” IEEE Geoscience and Remote Sensing Letters 12 (7): 1536–1540. doi:10.1109/LGRS.2015.2412174.
  • Lin, Z., K. Ji, X. Leng, and G. Kuang. 2019. “Squeeze and Excitation Rank Faster R-CNN for Ship Detection in SAR Images.” IEEE Geoscience and Remote Sensing Letters 16 (5): 751–755. doi:10.1109/LGRS.2018.2882551.
  • Li, J., C. Qu, and S. Peng. 2016. “Localized Ridgelet Transform-Based Detection of Ship Wakes in SAR Images.” Paper presented at the 2016 IEEE 13th International Conference on Signal Processing (ICSP), Nov 6–10.
  • Liu, Y., and R. Deng. 2018. “Ship Wakes in Optical Images.” Journal of Atmospheric and Oceanic Technology 35 (8): 1633–1648. doi:10.1175/jtech-d-18-0021.1.
  • Liu, Y., R. Deng, and J. Zhao. 2019. “Simulation of Kelvin Wakes in Optical Images of Rough Sea Surface.” Applied Ocean Research 89: 36–43. doi:10.1016/j.apor.2019.05.006.
  • Liu, Q., X. Xiang, Z. Yang, Y. Hu, and Y. Hong. 2021. “Arbitrary Direction Ship Detection in Remote-Sensing Images Based on Multitask Learning and Multiregion Feature Fusion.” IEEE Transactions on Geoscience and Remote Sensing 59 (2): 1553–1564. doi:10.1109/TGRS.2020.3002850.
  • Liu, Y., J. Zhao, and Y. Qin. 2021. “A Novel Technique for Ship Wake Detection from Optical Images.” Remote Sensing of Environment 258: 112375. doi:10.1016/j.rse.2021.112375.
  • Ma, J., Z. Zhou, B. Wang, H. Zong, and W. Fei. 2019. “Ship Detection in Optical Satellite Images via Directional Bounding Boxes Based on Ship Center and Orientation Prediction.” Remote Sensing 11 (18): 2173. doi:10.3390/rs11182173.
  • Nie, T., X. Han, H. Bin, L. Xiansheng, H. Liu, and B. Guoling. 2020. “Ship Detection in Panchromatic Optical Remote Sensing Images Based on Visual Saliency and Multi-Dimensional Feature Description.” Remote Sensing 12 (1): 152. doi:10.3390/rs12010152.
  • Ojala, T., M. Pietikainen, and T. Maenpaa. 2002. “Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns.” IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (7): 971–987. doi:10.1109/TPAMI.2002.1017623.
  • Otsu, N. 1979. “A Threshold Selection Method from Gray-Level Histograms.” IEEE Transactions on Systems, Man, and Cybernetics 9 (1): 62–66. doi:10.1109/TSMC.1979.4310076.
  • Oumansour, K., Y. Wang, and J. Saillard. 1996. “Multifrequency SAR Observation of a Ship Wake.” IEE Proceedings - Radar, Sonar and Navigation 143 (4): 275–280. doi:10.1049/ip-rsn:19960402.
  • Radon, J. 1986. “On the Determination of Functions from Their Integral Values Along Certain Manifolds.” IEEE Transactions on Medical Imaging 5 (4): 170–176. doi:10.1109/TMI.1986.4307775.
  • Rasmussen, C. E. 2004. “Gaussian Processes in Machine Learning.” In Advanced Lectures on Machine Learning: ML Summer Schools 2003, Canberra, Australia, February 2 - 14, 2003, Tübingen, Germany, August 4 - 16, 2003, Revised Lectures, edited by O. Bousquet, U. von Luxburg, and G. Rätsch, 63–71. Berlin, Heidelberg: Springer Berlin Heidelberg.
  • Redmon, J., S. Divvala, R. Girshick, and A. Farhadi. 2016. “You Only Look Once: Unified, Real-Time Object Detection.” Paper presented at the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 27-30 June 2016.
  • Ren, S., K. He, R. Girshick, and J. Sun. 2015. “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.” IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (6): 1137–1149. doi:10.1109/TPAMI.2016.2577031.
  • Rey, M. T., J. K. Tunaley, J. T. Folinsbee, P. A. Jahans, J. A. Dixon, and M. R. Vant. 1990. “Application of Radon Transform Techniques to Wake Detection in Seasat-A SAR Images.” IEEE Transactions on Geoscience and Remote Sensing 28 (4): 553–560. doi:10.1109/TGRS.1990.572948.
  • Shi, Z., X. Yu, Z. Jiang, and B. Li. 2014. “Ship Detection in High-Resolution Optical Imagery Based on Anomaly Detector and Local Shape Feature.” IEEE Transactions on Geoscience and Remote Sensing 52 (8): 4511–4523. doi:10.1109/TGRS.2013.2282355.
  • Sun, Z., M. Dai, X. Leng, Y. Lei, B. Xiong, K. Ji, and G. Kuang. 2021. “An Anchor-Free Detection Method for Ship Targets in High-Resolution SAR Images.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14: 7799–7816. doi:10.1109/JSTARS.2021.3099483.
  • Sun, Z., X. Leng, Y. Lei, B. Xiong, J. Kefeng, and G. Kuang. 2021. “BiFA-YOLO: A Novel YOLO-Based Method for Arbitrary-Oriented Ship Detection in High-Resolution SAR Images.” Remote Sensing 13 (21): 4209. doi:10.3390/rs13214209.
  • Takacs, G., V. Chandrasekhar, S. S. Tsai, D. Chen, R. Grzeszczuk, and B. Girod. 2013. “Fast Computation of Rotation-Invariant Image Features by an Approximate Radial Gradient Transform.” IEEE Transactions on Image Processing 22 (8): 2970–2982. doi:10.1109/TIP.2012.2230011.
  • Tian, M., Z. Yang, X. Huajian, G. Liao, and Y. Sun. 2019. “A Detection Method for Near-Ship Wakes Based on Interferometric Magnitude, Phase and Physical Shape in ATI-SAR Systems.” International Journal of Remote Sensing 40 (11): 4401–4415. doi:10.1080/01431161.2018.1563839.
  • Tuzel, O., F. Porikli, and P. Meer. 2006. “Region Covariance: A Fast Descriptor for Detection and Classification.” In Proceedings of the 9th European conference on Computer Vision - Volume Part II, 589–600. Graz, Austria: Springer-Verlag.
  • Wang, Z., Y. Zhou, F. Wang, S. Wang, and X. Zhiyu. 2021. “SDGH-Net: Ship Detection in Optical Remote Sensing Images Based on Gaussian Heatmap Regression.” Remote Sensing 13 (3): 499. doi:10.3390/rs13030499.
  • Wei, L., D. Anguelov, D. Erhan, C. Szegedy, S. Reed, F. Cheng-Yang, and A. C. Berg. 2016. SSD: Single Shot MultiBox Detector. Paper presented at the European Conference on Computer Vision, Cham.
  • Xing, X., K. Ji, H. Zou, W. Chen, and J. Sun. 2013. “Ship Classification in TerraSAR-X Images with Feature Space Based Sparse Representation.” IEEE Geoscience and Remote Sensing Letters 10 (6): 1562–1566. doi:10.1109/LGRS.2013.2262073.
  • Xiong, B., Z. Sun, J. Wang, X. Leng, and J. Kefeng. 2022. “A Lightweight Model for Ship Detection and Recognition in Complex-Scene SAR Images.” Remote Sensing 14 (23): 6053. doi:10.3390/rs14236053.
  • Xu, X., X. Zhang, Z. Shao, J. Shi, S. Wei, T. Zhang, and T. Zeng. 2022. “A Group-Wise Feature Enhancement-And-Fusion Network with Dual-Polarization Feature Enrichment for SAR Ship Detection.” Remote Sensing 14 (20): 5276. doi:10.3390/rs14205276.
  • Xu, X., X. Zhang, and T. Zhang. 2022. “Lite-YOLOv5: A Lightweight Deep Learning Detector for On-Board Ship Detection in Large-Scene Sentinel-1 SAR Images.” Remote Sensing 14 (4): 1018. doi:10.3390/rs14041018.
  • Yang, F., Q. Xu, and B. Li. 2017. “Ship Detection from Optical Satellite Images Based on Saliency Segmentation and Structure-LBP Feature.” IEEE Geoscience and Remote Sensing Letters 14 (5): 602–606. doi:10.1109/LGRS.2017.2664118.
  • Zhang, Y., L. Guo, Z. Wang, Y. Yang, X. Liu, and X. Fang. 2020. “Intelligent Ship Detection in Remote Sensing Images Based on Multi-Layer Convolutional Feature Fusion.” Remote Sensing 12 (20): 3316. doi:10.3390/rs12203316.
  • Zhang, Y., W. Sheng, J. Jiang, N. Jing, Q. Wang, and Z. Mao. 2020. “Priority Branches for Ship Detection in Optical Remote Sensing Images.” Remote Sensing 12 (7): 1196. doi:10.3390/rs12071196.
  • Zhang, T., and X. Zhang. 2019. “High-Speed Ship Detection in SAR Images Based on a Grid Convolutional Neural Network.” Remote Sensing 11 (10): 1206. doi:10.3390/rs11101206.
  • Zhang, T., and X. Zhang. 2022a. “A Mask Attention Interaction and Scale Enhancement Network for SAR Ship Instance Segmentation.” IEEE Geoscience and Remote Sensing Letters 19: 1–5. doi:10.1109/LGRS.2022.3189961.
  • Zhang, T., and X. Zhang. 2022b. “Squeeze-And-Excitation Laplacian Pyramid Network with Dual-Polarization Feature Fusion for Ship Classification in SAR Images.” IEEE Geoscience and Remote Sensing Letters 19: 1–5. doi:10.1109/LGRS.2021.3119875.
  • Zhang, T., and X. Zhang. 2022c. “HTC+ for SAR Ship Instance Segmentation.” Remote Sensing 14 (10): 2395.
  • Zhang, T., and X. Zhang. 2022d. “A Polarization Fusion Network with Geometric Feature Embedding for SAR Ship Classification.” Pattern Recognition 123: 108365. doi:10.1016/j.patcog.2021.108365.
  • Zhang, T., X. Zhang, C. Liu, J. Shi, S. Wei, I. Ahmad, X. Zhan, et al. 2021. “Balance Learning for Ship Detection from Synthetic Aperture Radar Remote Sensing Imagery.” ISPRS Journal of Photogrammetry and Remote Sensing 182: 190–207. doi:10.1016/j.isprsjprs.2021.10.010.
  • Zhang, T., X. Zhang, J. Shi, and S. Wei. 2019. “Depthwise Separable Convolution Neural Network for High-Speed SAR Ship Detection.” Remote Sensing 11 (21): 2483. doi:10.3390/rs11212483.
  • Zhu, C., H. Zhou, R. Wang, and J. Guo. 2010. “A Novel Hierarchical Method of Ship Detection from Spaceborne Optical Image Based on Shape and Texture Features.” IEEE Transactions on Geoscience and Remote Sensing 48 (9): 3446–3456. doi:10.1109/TGRS.2010.2046330.
  • Zilman, G., A. Zapolski, and M. Marom. 2015. “On Detectability of a Ship’s Kelvin Wake in Simulated SAR Images of Rough Sea Surface.” IEEE Transactions on Geoscience and Remote Sensing 53 (2): 609–619. doi:10.1109/TGRS.2014.2326519.