Research Article

Monocular vision based on YOLOv7 and coordinate transformation for precise vehicle positioning

Article: 2166903 | Received 21 Oct 2022, Accepted 05 Jan 2023, Published online: 28 Jan 2023

Abstract

Logistics tracking and positioning is a critical part of the discrete digital workshop and is widely applied in many fields (e.g. industry and transport). However, such workshops are characterised by dispersed manufacturing machinery, frequent material flows, and complex noise environments, which severely degrade the accuracy of conventional radio-frequency positioning approaches. Recent panoramic vision positioning technology relies on binocular cameras and cannot be applied to the monocular cameras common in industrial scenarios. This paper proposes a monocular vision positioning method based on YOLOv7 and coordinate transformation to solve the positioning accuracy problem in the digital workshop. Positioning beacons are placed on top of the moving vehicles at a uniform height. The coordinates of the beacon in the image are obtained with a YOLOv7 model trained by transfer learning, and a coordinate transformation is then applied to obtain the real spatial coordinates of the vehicle. Experimental results show that the proposed monocular vision system improves the positioning accuracy of the digital workshop. The code and pre-trained models are available at https://github.com/ZS520L/YOLO_Positioning.

1. Introduction

The smart workshop has attracted extensive interest due to its highly digital, informative, and intelligent characteristics and is increasingly seen as the future development path for discrete production workshops. Positioning technology for workshop logistics trolleys and related mobile devices is a fundamental part of realising the smart workshop. The positioning of logistics trolleys in the discrete workshop was a problem the authors encountered while implementing the digital workshop of a smart factory; this paper grew out of the need to resolve that practical problem and its significance for logistics-trolley positioning in discrete workshops.

In recent years, indoor positioning has become a fundamental requirement for mobile users, driven by indoor location-based services (ILBS). With the rise of the Internet of Things (IoT), heterogeneous smartphones and wearable devices are becoming ubiquitous. However, ILBS for heterogeneous IoT devices faces significant challenges, such as differences in received signal strength (RSS) caused by hardware heterogeneity, multipath reflections in complex environments, and positioning times limited by computational resources (Ye et al., Citation2021).

At present, indoor positioning technologies mainly rely on Wi-Fi (Yang & Shao, Citation2015), radio frequency identification (RFID) (Merenda et al., Citation2021), wireless sensor networks (WSN) (Gao et al., Citation2021; Huang et al., Citation2022), and ultra-wideband (UWB) (Yao et al., Citation2021). However, these methods require appropriate equipment to be deployed in advance and, because of signal interference and attenuation, are only suitable for indoor positioning with low accuracy requirements. Vision-based indoor localisation techniques rely on a priori maps and feature descriptors for image retrieval and image matching. In enclosed, semi-enclosed, or multi-layered interior scenes with strong electromagnetic interference, they can localise with a high degree of accuracy. In addition, vision-based positioning is an accurate and cost-effective solution for indoor positioning: it relies on cameras to collect information about the building structure, texture differences and static objects (doors, windows, etc.) to confirm the position, avoiding the interference from reflection and refraction that radio signals suffer when obstacles are encountered (Li et al., Citation2020). As a result, vision-based indoor localisation systems are well suited to multistorey enclosed or semi-enclosed indoor spaces. However, current vision-based positioning approaches require high deployment costs and cannot adapt to simple changes in the environment. There is therefore great potential for indoor positioning techniques with minimal equipment costs and straightforward deployment.

Existing vision-based indoor localisation techniques include mobile camera-based methods (Chen & Chen, Citation2021), mainly based on image matching (Xia et al., Citation2018), in which the position is calculated by matching the current photograph against photographs stored in an image database. Such methods suffer from two problems. On the one hand, because the offline database contains so many photographs, image retrieval takes a long time. On the other hand, the localisation accuracy is unsatisfactory: there are many mismatched pairs between the query image and the matched image, which makes it difficult to establish an accurate coordinate transformation relationship (Jia et al., Citation2021).

To address the problems mentioned above, this paper proposes an indoor localisation scheme based on target detection and image-to-space mapping, which can achieve centimetre-level localisation accuracy. Prior to this, our team proposed a mine video monitoring system based on cloud-side collaboration and a real-time video processing system for underground coal mines based on an edge-cloud collaboration framework. The main contributions of this paper can be summarised as follows:

  1. The idea of two-dimensionalising space is put forward by unifying the placement height of the positioning beacons, and a three-point approach is developed for creating a coordinate mapping from the two-dimensional image to the two-dimensional space.

  2. The target detection algorithm is applied to the indoor movable object localisation problem and successfully transforms the accuracy problem of spatial object localisation into the validity problem of image target detection.

  3. The proposed localisation system is training-free across different scenarios: because the pre-trained target detection model is stable, no secondary training is required as long as the localisation beacon is unchanged. The system can also work with existing security surveillance systems, which dramatically reduces the deployment cost and makes it easy to promote.

2. Related work

With the rapid development of computer vision and deep learning technologies, it has become possible to perform real-time target detection and obtain location information from images. In addition, image-based visual localisation provides excellent visualisation, is highly interference-resistant, and is rich in contextual information. As a result, researchers worldwide have paid considerable attention to visual localisation techniques. Based on whether or not artificial markers are used, current visual localisation techniques fall into two basic groups.

2.1. Indoor positioning techniques that rely only on pre-existing environments

Three steps are typically involved in this type of approach: extraction of environmental information to construct a feature database (Liao et al., Citation2019), image retrieval to find the best-matching feature map, and an image-to-space coordinate transformation to determine the camera's position (Zhang et al., Citation2021). For example, Yu et al. (Citation2021) analysed and converted image data into mobile-phone movement distance and pose by a coordinate transformation method (a four-parameter fitting model); Zhou et al. (Citation2022) used an improved convolutional neural network based on monocular vision for indoor localisation; and Jung et al. (Citation2021) used point-cloud and RGB feature information to accurately reconstruct indoor 3D space. Chae et al. (Citation2016) used a stereo vision system to find the saliency map, calculate the parallax and distance from the stereo images, and then derive the absolute distance from camera characteristics (e.g. focal length) and the parallax caused by the viewpoint difference between the cameras.

Such methods, however, necessitate constructing a sizeable database of environmental features for image retrieval beforehand and are sensitive to changes in the environment, making them unsuitable for large-scale extension. A large database makes retrieval slow, and the mainstream solution is to divide the database according to semantics (Dai et al., Citation2019; Jia et al., Citation2021; Zatout & Larabi, Citation2022). For example, Dai et al. (Citation2019) proposed a semantic and content-based image retrieval (SCBIR) approach: by dividing the offline database into semantic databases of different types, the search space is narrowed and the retrieval time is reduced. Jia et al. (Citation2021) also proposed a semantic-based indoor visual localisation method in which representative infrastructure objects are first selected using semantic extraction and classification to build a semantic-based offline database; a semantic-constraint-based feature point selection method is then used during image retrieval, and the best-matching images are used to estimate the user's location. Clustering-based classification databases have also been used (Jia et al., Citation2020).

The second issue with image retrieval is that it struggles to adapt to environmental changes. Extracting key semantic features is an effective solution to this problem (Jia et al., Citation2021), and Wen et al. (Citation2018) propose lifelong learning with iterative compression to obtain reliable features. The third problem of image retrieval is the low accuracy of matching, for which the image resolution can be increased and the image deblurred to facilitate feature extraction (Jia et al., Citation2022). Jia et al. (Citation2021) suggested matching multiple images by considering the contextual information of the environment. There are also processing strategies such as wavelet denoising (Wang et al., Citation2019) and foreground–background separation (Zheng et al., Citation2021).

2.2. Indoor positioning method relying on pre-arranged beacons

Beacon-based methods rely on artificial markers arranged in the environment in advance. For example, BookMark (Pearson et al., Citation2017) supports scanning the barcodes of library books to obtain the current location within the library, and Robinson et al. (Citation2014) demonstrate the potential of barcodes for localisation in a real, large library scenario. Kunhoth et al. (Citation2019) propose a system that uses QR codes and BLE beacons to locate the user's position. The use of mobile robots to identify ceiling features is also an active research area, because ceilings are not easily obscured (Xu et al., Citation2009; Zhang et al., Citation2018). Tyukin et al. (Citation2016) proposed an image-processing-based robot navigation and positioning system consisting of a simple monocular camera and non-illuminated coloured beacons. However, these techniques require a sizeable number of beacons to be arranged in advance, are sensitive to moved or missing beacons, and do not guarantee localisation accuracy.

2.3. Indoor localisation method based on target detection

From the perspective of image processing, the above methods all fall into the category of image classification (Shereena & David, Citation2014). Wang et al. (Citation2018) proposed a binocular visual localisation method based on a region of interest; inspired by this, this paper introduces monocular-camera target detection to the indoor localisation problem for the first time. A coordinate mapping from the two-dimensional image to the two-dimensional space is first established by the three-point method; target detection is then performed on beacons placed at a uniform height; and finally an image-to-space coordinate transformation of the detected centroid yields the exact spatial coordinates of the object to be located. Without the need for extra hardware, this solution can be implemented on top of already installed security surveillance systems. In addition, as the positioning beacon is placed on top of the object to be located, line-of-sight obstruction is rarely a problem and the method is not affected by changes in the environment.

3. Proposed system

In this section, the proposed method is first described in detail. The key modules and algorithms are then analysed, including the initialisation of the system parameters, YOLOv7-based target detection, and the mapping from image to spatial coordinates.

3.1. System architecture

We have designed an indoor movable object condition monitoring system based on a single image, which is capable of achieving centimetre-level positioning accuracy.

The system consists of three parts, as shown in Figure 1.

Figure 1. Discrete shop floor logistics forklift positioning method.


The process consists of (a) initialisation of the system parameters and (b) the image-to-space positioning model.

The system parameters are initialised as shown in Figure 1a. First, the camera is placed in a suitable position; then the 3D real-world coordinates and the corresponding 2D pixel coordinates of markers at three different positions are collected, which requires human assistance. From these coordinate pairs we obtain the camera's coordinate transformation parameters, as described below.

The image-to-position model is shown in Figure 1b. The coordinates of the locating beacon's centroid in the image are obtained through the YOLOv7 model, and the location of the forklift in the real world is then obtained through the image-to-space coordinate mapping.
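To make this two-stage flow concrete, the sketch below (our illustration, not the authors' released code) loads a beacon detector through the torch.hub interface of the official YOLOv7 repository and converts the detected box centre into workshop coordinates. The weight file name, the calibration values, and the metres-per-pixel convention are assumptions.

```python
import torch

# Hypothetical fine-tuned beacon detector; 'beacon_best.pt' is a placeholder name.
# The torch.hub 'custom' entry point of the official YOLOv7 repository is assumed here.
model = torch.hub.load('WongKinYiu/yolov7', 'custom', 'beacon_best.pt')

# Placeholder system parameters from the three-point initialisation (Section 3.2):
# reference origin (metres) and per-axis metres-per-pixel scaling factors.
OCX, OCY = -1.20, -0.85
KX, KY = 0.021, 0.019

def locate(image_path):
    """Return the estimated workshop coordinates (Xw, Yw) of the first beacon
    detected in the image, or None if no beacon is found."""
    det = model(image_path).xywh[0]              # one row per box: x, y, w, h, conf, cls
    if det.shape[0] == 0:
        return None
    xp, yp = det[0, 0].item(), det[0, 1].item()  # beacon centre in pixels
    # Image-to-space conversion (cf. Section 3.4).
    return OCX + KX * xp, OCY + KY * yp

print(locate('frame.jpg'))
```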

3.2. System parameters

In this section, the method of initialising the system parameters is described. The real-world coordinates of the beacon centroid and the corresponding image coordinates can always be obtained manually with the beacon at different positions; the input to the system is three such pairs of corresponding coordinates that are not co-linear.

According to the principle of pinhole imaging, the midpoint of a line segment in the real world corresponds to the midpoint of that segment in the image. In a planar coordinate system, any vector can be represented by two linearly independent vectors, so only three non-collinear corresponding points are needed to determine the parameters required by the system. Assuming the camera images without distortion, Figures 2 and 3 show the rules for establishing the spatial and image coordinate systems respectively.

Figure 2. Spatial coordinate system.


Figure 3. Image coordinate system.


As shown in Figure 2, we construct a world coordinate system with a corner of a wall in real space as the origin, the directions parallel to the walls as the X and Y axes, and the direction perpendicular to the ground as the Z axis. The camera coordinate system takes the camera optical centre as the origin, with its X and Y axes parallel to the X and Y axes of the image coordinate system (shown in Figure 3). For the experiments, the auxiliary positioning beacons were placed at a uniform height on top of the machines. Since the logistics vehicles in the workshop are always of equal height, the spatial dimension occupied by the height can be ignored, which simplifies the problem. As shown in Figure 2, the point O is the midpoint of the segment AB, and the three points A, B and C are the inputs to the system, i.e. their real-world coordinates $(X_w, Y_w)$ are known. The spatial coordinates of point O are calculated by Equation (1):

(1) $\begin{pmatrix} X_{wo} \\ Y_{wo} \end{pmatrix} = \begin{pmatrix} X_{wa} & X_{wb} \\ Y_{wa} & Y_{wb} \end{pmatrix} \begin{pmatrix} 1/2 \\ 1/2 \end{pmatrix}$

where $(X_{wo}, Y_{wo})$, $(X_{wa}, Y_{wa})$ and $(X_{wb}, Y_{wb})$ are the coordinates of points O, A and B in space, respectively.

Considering the linear relationship between the three points A, O and B, Equation (2) can be obtained:

(2) $\begin{cases} X_{wo} = X_{wa} + \dfrac{X_{co} - X_{ca}}{X_{cb} - X_{ca}}\,(X_{wb} - X_{wa}) \\ Y_{wo} = Y_{wa} + \dfrac{Y_{co} - Y_{ca}}{Y_{cb} - Y_{ca}}\,(Y_{wb} - Y_{wa}) \end{cases}$

where $(X_{co}, Y_{co})$, $(X_{ca}, Y_{ca})$ and $(X_{cb}, Y_{cb})$ are the coordinates of points O, A and B in the image, respectively.

The midpoint calculation shows that a relative reference point and a per-axis scaling ratio are needed, which leads first to the two key parameters of the image coordinate system, $K_{cwx}$ and $K_{cwy}$, calculated by Equation (3):

(3) $\begin{cases} K_{cwx} = \left| \dfrac{X_{wb} - X_{wa}}{X_{cb} - X_{ca}} \right| \\ K_{cwy} = \left| \dfrac{Y_{wb} - Y_{wa}}{Y_{cb} - Y_{ca}} \right| \end{cases}$

where $K_{cwx}$ and $K_{cwy}$ are the image-to-space scaling ratios on the X and Y axes, respectively.

Finally, the point $O_c$ is chosen as the relative reference point, given by Equation (4):

(4) $\begin{cases} O_{cx} = X_{wa} - K_{cwx} X_{ca} \\ O_{cy} = Y_{wa} - K_{cwy} Y_{ca} \end{cases}$

In the above derivation, two points whose connecting line is not parallel to either image coordinate axis are in fact sufficient. To ensure that the system parameters initialise reliably, it is recommended to choose three non-collinear points so that the errors balance out; the formula for determining the system parameters from any two of the points is the same as above. When the special case arises in which AC or AB is parallel to an image coordinate axis (Figure 2), the system uses the midpoint O of BC for the calculation.
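As an illustration of this initialisation, the sketch below (ours, not the authors' code) estimates the scaling factors and the reference origin from calibration point pairs. It is implemented as an independent least-squares line fit per axis, which reduces to Equations (3)-(4) for two points and averages out annotation error over three or more, as the text recommends; the factors are kept signed rather than taking the absolute value so that a flipped image axis is handled automatically.

```python
import numpy as np

def init_system_params(world_pts, image_pts):
    """Estimate per-axis scaling factors (metres per pixel) and the reference
    origin from corresponding beacon positions, cf. Equations (3) and (4)."""
    w = np.asarray(world_pts, dtype=float)   # (N, 2) real-world coordinates, metres
    c = np.asarray(image_pts, dtype=float)   # (N, 2) pixel coordinates

    kx, ocx = np.polyfit(c[:, 0], w[:, 0], 1)   # Xw ≈ kx * Xc + Ocx
    ky, ocy = np.polyfit(c[:, 1], w[:, 1], 1)   # Yw ≈ ky * Yc + Ocy
    return kx, ky, ocx, ocy

# Example with made-up calibration values (three non-collinear beacon positions).
kx, ky, ocx, ocy = init_system_params([(0.0, 0.0), (4.0, 0.0), (0.0, 2.0)],
                                      [(112, 640), (498, 643), (109, 452)])
```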

3.3. YOLOv7-based target detection

In this section, the rationale for model selection is explained, the process of constructing the training dataset is analysed, and the transfer learning strategy introduced to speed up training is described.

As an end-to-end target detection model, YOLOv7 is 509% faster and 2% more accurate than the Transformer-based detector SWIN-L Cascade-Mask R-CNN, and 551% faster and 0.7% more accurate than the convolution-based detector ConvNeXt-XL Cascade-Mask R-CNN (Wang et al., Citation2022).

The YOLOv7 series includes YOLOv7-E6E, YOLOv7-D6, YOLOv7-E6, YOLOv7-W6, YOLOv7-X, YOLOv7 and YOLOv7-tiny-SiLU. Among these, YOLOv7-tiny-SiLU has the fewest parameters and reaches a GPU inference speed of up to 286 FPS. YOLOv7-tiny-SiLU was therefore selected to meet the real-time requirements of industrial application scenarios, and its accuracy in testing met expectations.

Traditional neural networks require learning parameters from large amounts of data. Although small-sample training methods such as that proposed by Liu et al. (Citation2021) have emerged, they require modifying the model. Methods such as dropout and regularisation also require changing the model structure to reduce model complexity, and they all constrain the distribution of model parameters, making the model harder to interpret. Data augmentation, by contrast, does not reduce the complexity of the network, nor does it increase the computational complexity or tuning effort; it is an implicit regularisation method. It is more meaningful in practical applications and reflects the central role of data. This paper therefore uses data augmentation to expand the dataset.

Data augmentation is a machine learning technique that improves the performance of a model by adding new samples to the training data. It can improve the generalisation of the model, making it more adaptable to new data; it reduces the risk of overfitting, improving robustness; and it effectively expands the training dataset, providing richer information to the model. Data augmentation therefore has numerous advantages that can effectively improve model performance.

After applying varying degrees of rotation, translation, blurring, noise addition and colour interference, an augmented dataset 81 times larger than the manually acquired dataset was obtained. Rotation and translation are applied as the first step, and blurring, noise addition and colour interference are applied on top of them, as shown in Figure 4. Figure 5 shows a before-and-after comparison.

Figure 4. Data enhancement process.


Figure 5. Comparison of before and after image processing.


The main advantages of the above augmentation methods are as follows: rotating the image increases the robustness of the model to object orientation; translating the image increases its robustness to object position; adjusting the contrast increases its ability to adapt to changes in brightness; and adding noise increases its robustness to disturbances. In summary, these augmentations effectively improve the model's ability to generalise, making it more adaptable to new data. A code sketch of the augmentation pipeline is given below.
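The following minimal sketch of the two-stage augmentation (geometric transform first, photometric perturbations on top) uses OpenCV and NumPy; the parameter values are illustrative only, not the ones used to build the 81-fold dataset.

```python
import cv2
import numpy as np

def augment(img, angle_deg=5.0, shift=(12, -8), blur_ksize=3,
            noise_sigma=8.0, gain=1.1, bias=-10.0):
    """Apply rotation + translation first, then blur, Gaussian noise and a
    simple brightness/contrast (colour interference) perturbation."""
    h, w = img.shape[:2]

    # 1. Geometric step: rotate about the image centre, then shift by `shift` pixels.
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    m[:, 2] += shift
    out = cv2.warpAffine(img, m, (w, h))

    # 2. Photometric step on the geometrically transformed image.
    out = cv2.GaussianBlur(out, (blur_ksize, blur_ksize), 0)
    noise = np.random.normal(0.0, noise_sigma, out.shape)
    out = np.clip(out.astype(np.float32) * gain + bias + noise, 0, 255)
    return out.astype(np.uint8)
```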

The dataset for target detection differs from that for image classification in that the network also requires the centre coordinates (x, y) of the beacon to be detected and the height h and width w of the anchor box. To reduce the manual annotation workload, we designed a method to automatically generate the corresponding labels for the augmented data. The original dataset is first annotated manually with the anchor-box centre (x, y); if an image is shifted by x0 pixels in the horizontal direction and y0 pixels in the vertical direction, the anchor-box centre after the shift is (x + x0, y + y0).

Rotating the image not only changes the centre coordinates of the anchor box but also renders w and h meaningless. Assuming the centre of rotation is $(x_1, y_1)$, the rotation angle is $\theta$ and the scale is $\beta$, the parameters of the anchor box after rotation are calculated as follows:

(5) $\alpha = \arctan\left( \dfrac{y - y_1}{x - x_1} \right) + \theta$

where a new coordinate system is set up with $(x_1, y_1)$ as the origin, and $\alpha$ is the angle, relative to the x-axis of the new coordinate system, of the line connecting the centre of rotation and the anchor-box centre after rotation.

(6) $d = \beta \sqrt{(x - x_1)^2 + (y - y_1)^2}$

where $d$ is the distance from the centre of rotation to the anchor-box centre after rotation and scaling.

(7) $f(x, y) = (d \cos\alpha + x_1,\ d \sin\alpha + y_1)$

where $f(x, y)$ is the rotational coordinate transformation function.

(8) $(w_1, h_1) = f(x, y)_{\max} - f(x, y)_{\min}$

where $f(x, y)_{\max}$ and $f(x, y)_{\min}$ are the maximum and minimum horizontal and vertical coordinates of the four boundary points of the anchor box after rotation. Blurring, noise addition and colour interference do not affect the anchor-box parameters, so those labels remain unchanged.
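The label update of Equations (5)-(8) can be sketched as follows (our illustration; `np.arctan2` replaces the quadrant-ambiguous arctangent of Equation (5), and the new box is taken as the axis-aligned envelope of the four rotated corners):

```python
import numpy as np

def rotate_label(cx, cy, w, h, centre, theta, beta=1.0):
    """Recompute an anchor-box label (cx, cy, w, h) after the image has been
    rotated by `theta` radians about `centre` and scaled by `beta`,
    following Equations (5)-(8)."""
    x1, y1 = centre

    def rotate_point(px, py):
        alpha = np.arctan2(py - y1, px - x1) + theta            # Eq. (5), quadrant-safe
        d = beta * np.hypot(px - x1, py - y1)                   # Eq. (6)
        return d * np.cos(alpha) + x1, d * np.sin(alpha) + y1   # Eq. (7)

    new_cx, new_cy = rotate_point(cx, cy)                       # rotated box centre

    # Eq. (8): width and height from the extremes of the four rotated corners.
    corners = [(cx - w / 2, cy - h / 2), (cx + w / 2, cy - h / 2),
               (cx - w / 2, cy + h / 2), (cx + w / 2, cy + h / 2)]
    xs, ys = zip(*(rotate_point(px, py) for px, py in corners))
    return new_cx, new_cy, max(xs) - min(xs), max(ys) - min(ys)
```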

To speed up model training, a popular choice is to initialise from existing model weights; in this paper, training starts from weights pre-trained on the COCO dataset.

3.4. Image to space coordinate mapping

In Section 3.2, we constructed a coordinate mapping from the 2D space to the camera image in order to obtain the transformation parameters of the system. This section describes in detail the inverse mapping from the image back to the space.

The coordinates of the centroid of the localisation beacon, denoted $(X_p, Y_p)$, are obtained from the pre-trained target detection model, and their conversion to the two-dimensionalised space is calculated by Equation (9):

(9) $\begin{cases} X_w = O_{cx} + X_p / K_{cwx} \\ Y_w = O_{cy} + Y_p / K_{cwy} \end{cases}$

where $(X_w, Y_w)$ are the two-dimensionalised spatial coordinates, $(O_{cx}, O_{cy})$ is the reference origin determined during initialisation, and $K_{cwx}$ and $K_{cwy}$ are the scaling coefficients in the x and y directions, respectively.
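A one-line sketch of this inverse mapping is shown below. Note that Equation (9) divides by the scaling coefficient, which corresponds to defining it as pixels per metre; the sketch keeps the metres-per-pixel convention of the initialisation sketch in Section 3.2 and therefore multiplies instead. Either convention works provided it is used consistently.

```python
def pixel_to_world(xp, yp, kx, ky, ocx, ocy):
    """Map a detected beacon centre (xp, yp) in pixels to the two-dimensionalised
    workshop coordinates (Xw, Yw) in metres, cf. Equation (9)."""
    return ocx + kx * xp, ocy + ky * yp

# e.g. with the parameters returned by init_system_params():
# xw, yw = pixel_to_world(305.4, 512.8, kx, ky, ocx, ocy)
```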

4. Experimental results

To test the effectiveness and reliability of the proposed method, both simulation and field deployment experiments were carried out. Error tests were also performed for different placement heights of the positioning markers and for both static and moving states.

4.1. Simulation model

In the experimental setup, the 3D modelling software Solid Edge ST8 was used to build the simulation scene shown in Figure 6. The site is 20 m long and 8 m wide, the positioning beacon measures 30 cm × 30 cm, and its height is unified at 1.5 m above the ground. To calculate the system parameters, the centre of the positioning beacon is treated as a through-hole, which helps to collect the coordinate parameters needed for the three-point method.

Figure 6. Simulation of the experimental environment.


With the camera viewpoint kept fixed, the object to be positioned was moved to simulate observation from the camera's viewpoint. We used the grid method with a grid size of 2 m × 2 m and collected 27 experimental sample points within the field of view; the distribution of sample points is shown in Figure 7.

Figure 7. Distribution of sample points.


To verify the reliability of the three-point method, three sets of data required for initialising the system parameters were collected; the corresponding results are given in Table 1.

Table 1. Initialisation of system parameters.

Without changing the camera angle, almost identical reference origins and scaling factors are obtained for any three pairs of corresponding coordinate points, verifying the correctness of the system parameter initialisation method.

4.2. Analysis

The experimental platform is a desktop computer with an 11th Gen Intel(R) Core i7-11700F (2.50 GHz) CPU, an NVIDIA GeForce RTX 3060 GPU and 32 GB RAM, running Windows 10 64-bit. The software tools used include CUDA 10.2, OpenCV 4.6, Python 3.7.12 and PyTorch 1.10.0.

The results of the target detection are shown in Figure 8. As the locator beacon is relatively small within the camera's field of view, the error between the centre of the detection box and the centre of the locator beacon is also relatively small.

Figure 8. Target detection results.


We fed the 270 collected sample points into the pre-trained model; the average error profile is shown in Figure 9.
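For reference, the average positioning error over such a test set can be computed in a few lines; the sketch below assumes the ground-truth world coordinates and the model's estimated coordinates are available as two equal-length arrays.

```python
import numpy as np

def mean_positioning_error(true_xy, pred_xy):
    """Mean Euclidean distance (metres) between ground-truth and predicted
    beacon positions; true_xy and pred_xy are (N, 2) arrays."""
    diff = np.asarray(pred_xy, dtype=float) - np.asarray(true_xy, dtype=float)
    return float(np.mean(np.linalg.norm(diff, axis=1)))
```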

Figure 9. Test sample error curve.


The true distribution of the sample points and the corresponding predicted results are compared in Figure 10.

Figure 10. Sampling points versus predicted points.


4.3. Error and reliability analysis

The requirement that the positioning beacons be placed at a uniform height is a major drawback of the system. To assess the impact of height differences, we designed the following four sets of experiments, the results of which are shown in Table 2 below.

Table 2. Positioning errors corresponding to different heights.

Without changing the system parameters, only the height of the positioning beacon was modified. Unsurprisingly, the positioning error increased significantly in both cases, but even though the modifications were of the same magnitude, the errors differed because the directions were opposite. After analysis, we attribute this to the fact that target detection itself is subject to a certain amount of error, and here the errors partially cancel out. In addition, a change in beacon height corresponds to a shift on the image with x and y unchanged, so the error is theoretically affected by the camera placement angle, which is not discussed further here. Although a height difference introduces a small error, the system can still be used to locate personnel, who only need to wear a helmet as a positioning beacon.

Although the simulation experiments achieved good results, they only prove the feasibility of the theory, so a field test is necessary. To deploy the model, the layout of the workshop must be taken into account so that a suitable coordinate origin and coordinate system can be chosen. Figure 11 shows a uniformly scaled plan of the workshop. A plane coordinate system is first established with the top-left corner as the origin, the long side as the x-axis and the short side as the y-axis. The cart is parked in the auxiliary marker area, and the image coordinates are obtained by manually clicking on the centre of the marker at the monitoring end to obtain the camera's coordinate transformation factors.

Figure 11. Plan of the workshop (1:1000).


To improve the accuracy of target detection models, a common idea (Sun et al., Citation2022) is to design novel feature extraction networks that generate high-quality feature representations. Xia et al. (Citation2020) propose an efficient framework for salient target detection based on distribution-edge guidance and iterative Bayesian optimisation, taking full account of colour, spatial and edge information. Inspired by this, we propose a new idea in this paper. As shown in Figure 12, the sign to be detected is placed on top of the logistics cart at a uniform height of 2.1 m above the ground. When manually labelling the dataset, it was found that one subconsciously locates the logistics cart first, so a joint detection box was designed to exploit the association between the sign and the cart, characterised by the fact that the sign box always lies in the upper interior region of the cart box.

Figure 12. Joint detection box.


In the training process of the YOLOv7 model, the number of training epochs is set to 300, the batch size to 64, the Adam optimiser is used, and the momentum parameter is set to 0.999. Figure 13 shows the model training results, where the mAP_0.5, precision and recall metrics are all close to 1; the training results are as expected.
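For reproducibility, a fine-tuning run with these settings could be launched roughly as follows against the official YOLOv7 repository; the dataset YAML, model config and pretrained weight file names are our assumptions, and the exact flag names should be checked against that repository's train.py.

```python
import subprocess

# Assumed file names: data/beacon.yaml describes the custom beacon dataset,
# yolov7-tiny.pt holds the COCO-pretrained weights used for transfer learning.
subprocess.run([
    "python", "train.py",
    "--data", "data/beacon.yaml",
    "--cfg", "cfg/training/yolov7-tiny.yaml",
    "--weights", "yolov7-tiny.pt",
    "--epochs", "300",
    "--batch-size", "64",
    "--adam",                    # Adam optimiser, as used in the paper
    "--img-size", "640", "640",
], check=True)
```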

Figure 13. Model training results.


Figure 14 shows the target detection results: an appropriate enclosing box is consistently chosen, so the centroid of the detection box is very close to the centroid of the sign in the image. The positioning errors at the test points are given in Table 3.

Figure 14. Target detection results.


Table 3. Positioning errors for different test points.

The industrial scenario in which the experiments were conducted requires an accuracy of 0.2 m or better; the model meets this requirement well, and our positioning accuracy is better than that of traditional positioning methods (Huang et al., Citation2022).

In addition to the static error test, we also designed a movement error test, in which the cart trajectory estimated by the model is compared with the real trajectory while the cart is kept travelling in a straight line, as shown in Figure 15.

Figure 15. Track Comparison Chart.


Figure 16. Detection results of the model at different light intensities.


The above results show that there is some error both when the cart is at rest and when it is moving, but the overall error is within an acceptable range. Compared with systems that require deploying a large number of devices to obtain high-precision positioning, the system proposed in this paper is built on top of existing security monitoring systems; its near-zero deployment cost gives it much greater deployment potential.

The brightness or darkness of the environment is an important factor affecting the accuracy of target detection. To test the adaptability of the pre-trained model to different light intensities, we selected scenes at the same location at different times of day. The results are shown in Figure 16: the pre-trained model has good detection accuracy in both brighter and darker scenes, which shows that the model is highly resistant to interference and adapts well to the environment.

5. Conclusions

Obtaining three-dimensional spatial coordinates requires a more complex system design, which remains a rather difficult problem at present. In this paper, we propose the idea of two-dimensionalising space: space is simplified into a plane, so that only a plane-to-plane coordinate transformation needs to be constructed, which is relatively easy to achieve; the experiments also show that the resulting error is within an acceptable range.

The approach proposed in this paper shifts the burden of the indoor localisation problem onto the accuracy of target detection. As the localisation beacons can be reused, the model does not require redundant retraining, making the system easy to deploy and generalise. The experiments validate the feasibility of treating space as two-dimensional and show that a certain height difference between beacons does not cause excessive error; indeed, the height variation may even have a positive effect because of the inherent error in the centroid coordinates obtained through target detection. This means that it is feasible, for example, to locate a person precisely via the head. The system can be effectively combined with security monitoring systems to make the best use of existing infrastructure; in addition, the precise positioning provides a basis for motion tracking and path planning.

A limitation of this paper is that, although the coordinate conversion process is reliable, there is an unavoidable error in the coordinates before conversion. To address this, the localisation beacon could be improved so that target detection is more accurate. To further improve detection accuracy, our next step is to investigate and validate how multiple positioning beacons can be combined.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by National Natural Science Foundation of China: [grant no 51874010]; University of Science and Technology of China: [grant no 51874010].

References

  • Chae, K. H., Moon, Y. S., & Ko, N. Y. (2016). Visual tracking of objects for unmanned surface vehicle navigation. 2016 16th International Conference on Control, Automation and Systems (ICCAS).
  • Chen, R., & Chen, L. (2021). Smartphone-based indoor positioning technologies. In Urban informatics (pp. 467–490). Singapore: Springer.
  • Dai, J., Ma, L., Qin, D., & Tan, X. (2019). High accurate and efficient image retrieval method using semantics for visual indoor positioning. International Conference in Communications, Signal Processing, and Systems.
  • Gao, Y., Lou, W., & Lu, H. (2021). An indoor positioning and prewarning system based on wireless sensor network routing algorithm. Journal of Sensors, 2021.
  • Huang, X., Han, D., Weng, T. H., Wu, Z., Han, B., Wang, J., Cui, M., & Li, K. C. (2022). A localization algorithm for DV-Hop wireless sensor networks based on manhattan distance. Telecommunication Systems, 81(2), 207–224. https://doi.org/10.1007/s11235-022-00943-w
  • Jia, S., Ma, L., Tan, X., & Qin, D. (2020). Bag-of-visual words based improved image retrieval algorithm for vision indoor positioning. 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring).
  • Jia, S., Ma, L., Yang, S., & Qin, D. (2021). Semantic and context based image retrieval method using a single image sensor for visual indoor positioning. IEEE Sensors Journal, 21(16), 18020–18032. https://doi.org/10.1109/JSEN.2021.3084618
  • Jia, S., Ma, L., Yang, S., & Qin, D. (2022). A novel visual indoor positioning method with efficient image deblurring. IEEE Transactions on Mobile Computing.
  • Jung, T.-W., Jeong, C.-S., Kwon, S.-C., & Jung, K.-D. (2021). Point-graph neural network based novel visual positioning system for indoor navigation. Applied Sciences, 11(19), 9187. https://doi.org/10.3390/app11199187
  • Kunhoth, J., Karkar, A., Al-Maadeed, S., & Al-Attiyah, A. (2019). Comparative analysis of computer-vision and BLE technology based indoor navigation systems for people with visual impairments. International Journal of Health Geographics, 18(1), 1–18. https://doi.org/10.1186/s12942-019-0193-9
  • Li, M., Chen, R., Liao, X., Guo, B., Zhang, W., & Guo, G. (2020). A precise indoor visual positioning approach using a built image feature database and single user image from smartphone cameras. Remote Sensing, 12(5), 869. https://doi.org/10.3390/rs12050869
  • Liao, X., Chen, R., Li, M., Guo, B., Niu, X., & Zhang, W. (2019). Design of a smartphone indoor positioning dynamic ground truth reference system using robust visual encoded targets. Sensors, 19(5), 1261. https://doi.org/10.3390/s19051261
  • Liu, J., Wang, T., & Qiao, Y. (2021). The unified framework of deep multiple kernel learning for small sample sizes of training samples. In Advances in Intelligent Information Hiding and Multimedia Signal Processing (pp. 485–493). Singapore: Springer.
  • Merenda, M., Catarinucci, L., Colella, R., Della Corte, F. G., & Carotenuto, R. (2021). Exploiting RFID technology for indoor positioning. 2021 6th International Conference on Smart and Sustainable Technologies (SpliTech).
  • Pearson, J., Robinson, S., & Jones, M. (2017). BookMark: Appropriating existing infrastructure to facilitate scalable indoor navigation. International Journal of Human-Computer Studies, 103, 22–34. https://doi.org/10.1016/j.ijhcs.2017.02.001
  • Robinson, S., Pearson, J. S., & Jones, M. (2014). A billion signposts: Repurposing barcodes for indoor navigation. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
  • Shereena, V. B., & David, J. M. (2014). Content based image retrieval: Classification using neural networks. The International Journal of Multimedia & Its Applications, 6(5), 31. https://doi.org/10.5121/ijma.2014.6503
  • Sun, Y., Xia, C., Gao, X., Yan, H., Ge, B., & Li, K. C. (2022). Aggregating dense and attentional multi-scale feature network for salient object detection. Digital Signal Processing, 130, 103747. https://doi.org/10.1016/j.dsp.2022.103747
  • Tyukin, A. L., Priorov, A. L., & Lebedev, I. M. (2016). Research and development of an indoor navigation system based on the digital processing of video images. Pattern Recognition and Image Analysis, 26(1), 221–230. https://doi.org/10.1134/S1054661816010260
  • Wang, A., Hao, X., Zhang, X., Wang, A., & Hu, P. (2018). A dynamic target visual positioning method based on ROI. 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC).
  • Wang, C. Y., Bochkovskiy, A., & Liao, H. Y. M. (2022). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv preprint arXiv:2207.02696.
  • Wang, Z., Wang, G., & Zhang, G. (2019). Research on image retrieval based on wavelet denoising in visual indoor positioning algorithm. International Conference in Communications, Signal Processing, and Systems.
  • Wen, H., Clark, R., Wang, S., Lu, X., Du, B., Hu, W., & Trigoni, N. (2018). Efficient indoor positioning with visual experiences via lifelong learning. IEEE Transactions on Mobile Computing, 18(4), 814–829. https://doi.org/10.1109/TMC.2018.2852645
  • Xia, C., Gao, X., Li, K. C., Zhao, Q., & Zhang, S. (2020). Salient object detection based on distribution-edge guidance and iterative Bayesian optimization. Applied Intelligence, 50(10), 2977–2990. https://doi.org/10.1007/s10489-020-01691-7
  • Xia, Y., Xiu, C., & Yang, D. (2018). Visual indoor positioning method using image database. 2018 Ubiquitous Positioning, Indoor Navigation and Location-Based Services (UPINLBS).
  • Xu, D., Han, L., Tan, M., & Li, Y. F. (2009). Ceiling-based visual positioning for an indoor mobile robot with monocular vision. IEEE Transactions on Industrial Electronics, 56(5), 1617–1628. https://doi.org/10.1109/TIE.2009.2012457
  • Yang, C., & Shao, H. R. (2015). WiFi-based indoor positioning. IEEE Communications Magazine, 53(3), 150–157. https://doi.org/10.1109/MCOM.2015.7060497
  • Yao, L., Yao, L., & Wu, Y. W. (2021). Analysis and Improvement of indoor positioning accuracy for UWB sensors. Sensors, 21(17), 5731. https://doi.org/10.3390/s21175731
  • Ye, Q., Bie, H., Li, K. C., Fan, X., Gong, L., He, X., & Fang, G. (2021). EdgeLoc: A robust and real-time localization system toward heterogeneous IoT devices. IEEE Internet of Things Journal, 9(5), 3865–3876. https://doi.org/10.1109/JIOT.2021.3101368
  • Yu, M., Yu, J., Li, H., Li, H., & Guo, H. (2021). Indoor positioning and navigation methods based on mobile phone camera. International Conference on Multimedia Technology and Enhanced Learning.
  • Zatout, C., & Larabi, S. (2022). Semantic scene synthesis: Application to assistive systems. The Visual Computer, 38(8), 2691–2705. https://doi.org/10.1007/s00371-021-02147-w
  • Zhang, L., Xia, H., Liu, Q., Wei, C., Fu, D., & Qiao, Y. (2021). Visual positioning in indoor environments using RGB-D images and improved vector of local aggregated descriptors. ISPRS International Journal of Geo-Information, 10(4), 195. https://doi.org/10.3390/ijgi10040195
  • Zhang, X., Zhu, S., Wang, Z., & Li, Y. (2018). Hybrid visual natural landmark–based localization for indoor mobile robots. International Journal of Advanced Robotic Systems, 15(6), 1729881418810143. https://doi.org/10.1177/1729881418810143
  • Zheng, P., Qin, D., Han, B., Ma, L., & Berhane, T. M. (2021). Research on feature extraction method of indoor visual positioning image based on area division of foreground and background. ISPRS International Journal of Geo-Information, 10(6), 402. https://doi.org/10.3390/ijgi10060402
  • Zhou, T., Ku, J., Lian, B., & Zhang, Y. (2022). Indoor positioning algorithm based on improved convolutional neural network. Neural Computing and Applications, 34(9), 6787–6798. https://doi.org/10.1007/s00521-021-06112-5