Article

Time-Interval-Based Collision Detection for 4WIS Mobile Robots in Human-Shared Indoor Environments

1 Department of Autonomous Mobility, Korea University, Sejong 2511, Republic of Korea
2 Mobile Robotics Research and Development Center, FieldRo Co., Ltd., Sejong 2511, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2025, 25(3), 890; https://doi.org/10.3390/s25030890
Submission received: 10 January 2025 / Revised: 23 January 2025 / Accepted: 27 January 2025 / Published: 31 January 2025
(This article belongs to the Section Sensors and Robotics)
Figure 1. The 4WIS mobile robot platform used in this study.
Figure 2. Hardware architecture of mobile robot.
Figure 3. One of the primary driving methods employed by the 4WIS mobile robot. (a) Illustration of the Parallel mode; (b) illustration of the bicycle model for Parallel mode.
Figure 4. Wheel model in 3D coordinate system.
Figure 5. Kinematic model of Parallel mode.
Figure 6. Kinematic model of Ackermann mode.
Figure 7. Flowchart for controlling a mobile robot.
Figure 8. Quadrant-based steering angle determination for wheel control.
Figure 9. Example of mobile robot trajectory calculated 3 s ahead under different speed settings.
Figure 10. Overall structure of mobile robot trajectory calculation.
Figure 11. (a) Preset human size parameters; (b) real-time human detection results.
Figure 12. Human detection algorithm architecture.
Figure 13. Visualization of real-time trajectory prediction results.
Figure 14. Trajectory prediction system architecture.
Figure 15. Collision detection approaches: (a) Traditional path-based detection; (b) Conventional method limitations; (c) Unnecessary collision responses; (d) Proposed trajectory prediction method.
Figure 16. Collision detection system visualization.
Figure 17. Overall flowchart of proposed collision detection system.
Figure 18. HD map of experimental environment. (a) A 3D point cloud visualization of the test environment; (b) Top-view representation of the indoor space showing the corridor and room layouts.
Figure 19. Environment mapping and localization system: (a) NDT matching visualization for real-time localization; (b) generated path planning overlay on the constructed HD map.
Figure 20. First experimental setup and real-time visualization of the mobile robot path.
Figure 21. Second experimental setup and real-time visualization of the mobile robot path.
Figure 22. Comparison between command velocities and actual velocities of a 4WIS mobile robot in Ackermann mode during autonomous driving: (a) 2 km/h, (b) 3 km/h, (c) 4 km/h, and (d) 5 km/h. The red lines represent command values and the blue lines show the actual robot response. The results demonstrate increasing trajectory tracking errors as the autonomous driving speed increases, particularly noticeable in both the linear velocity (linear.x) and angular velocity (angular.z) measurements.
Figure 23. Human detection performance in Parallel and Ackermann modes at various speeds.
Figure 24. Time-to-collision prediction for multiple human positions in Parallel mode.
Figure 25. Human tracking results: (a) mobile robot speed of 2 km/h, (b) mobile robot speed of 3 km/h.
Figure 26. Comparison of collision detection activation time between conventional and proposed methods at different speeds of the mobile robot.

Abstract

The recent growth in e-commerce has significantly increased the demand for indoor delivery solutions, highlighting challenges in last-mile delivery. This study presents a time-interval-based collision detection method for Four-Wheel Independent Steering (4WIS) mobile robots operating in human-shared indoor environments, where traditional path following algorithms often create unpredictable movements. By integrating kinematic-based robot trajectory calculation with LiDAR-based human detection and Kalman filter-based prediction, our system enables more natural robot–human interactions. Experimental results demonstrate that our parallel driving mode achieves superior human detection performance compared to conventional Ackermann steering, particularly during cornering and high-speed operations. The proposed method’s effectiveness is validated through comprehensive experiments in realistic indoor scenarios, showing its potential for improving the efficiency and safety of indoor autonomous navigation systems.

1. Introduction

The rapid growth of e-commerce and increasing urbanization have intensified challenges in urban transportation systems, particularly in last-mile delivery [1,2]. As online customers demand more frequent and faster deliveries, the concentration of delivery services in urban areas has led to increased pollution, congestion, and operational costs [3,4,5]. Last-mile delivery, accounting for 40% of total supply chain costs, represents the most inefficient segment of logistics services [6]. Many researchers have explored innovative solutions to improve last-mile delivery efficiency [7], with autonomous mobile robots emerging as a promising technology [8,9]. These systems aim to address key challenges in sustainability, customer service, and cost reduction. While aerial delivery systems like drones offer speed advantages, they are limited in carrying capacity [10]. Ground-based mobile robots present a more practical solution for indoor environments, capable of handling multiple parcels per delivery [11,12].
In this study, we utilize a 4WIS mobile robot that addresses several limitations of conventional systems. Traditional mobile robots with fixed or car-like steering mechanisms suffer from restricted maneuverability in confined spaces, unstable sensor orientation during turns, and inefficient trajectory adjustments in dynamic environments [13]. To overcome these limitations, our 4WIS configuration enables enhanced maneuverability through independently controlled wheels. Each wheel’s individual actuation allows precise trajectory adjustments and stable sensor orientation, which is particularly beneficial for indoor navigation [14,15,16].
In order to utilize 4WIS mobile robots in various environments, extensive research on driving modes has been conducted [17,18]. The unique capability of 4WIS robots to steer each wheel independently has led to the development of diverse driving modes, each with distinct characteristics. To provide adequate controllability, researchers have designed a four-mode steering strategy: Ackermann steering, Parallel steering, Crab steering, and Spinning. Ackermann steering, the most commonly used type in mobile robots, mimics car-like systems [19,20,21]. It allows for smooth turns by angling the wheels differently, making it ideal for traditional navigation scenarios. Parallel steering, on the other hand, aligns all wheels in the same direction, enabling sideways movement without changing the robot’s orientation, a valuable feature in tight spaces [22,23].
While each of these modes offers unique advantages, their effectiveness varies depending on the specific environment and task. For indoor environments, Parallel mode demonstrates particular promise due to its ability to maintain stable sensor orientation while maneuvering in confined spaces. However, the implementation of effective navigation strategies in indoor environments presents additional challenges, particularly in collision avoidance.
Traditional reactive collision avoidance behaviors often result in sudden movements and unnecessary path adjustments by robots when sharing spaces with humans. These abrupt changes in robot motion and unexpected speed variations reduce the efficiency and reliability of the system. Conventional collision detection methods, focusing primarily on immediate collision avoidance, lead to jarring movements and path deviations that make the robot’s behavior appear unnatural and unpredictable, ultimately undermining trust in the autonomous navigation system.
To address this limitation, we propose a time-interval-based collision detection method integrating Light Detection and Ranging (LiDAR)-based sensing [24,25,26] with kinematic prediction. Similar to how insects use their antennae to sense and navigate their environment before physical contact, our system proactively predicts potential collisions by extending its perception into the near future. This predictive approach enables the robot to anticipate and respond to potential obstacles well before physical proximity triggers reactive behaviors.
This anticipatory approach combines three key elements for effective collision prediction. First, a kinematic model computes the robot’s future positions based on current state and control inputs. Second, LiDAR sensors provide high-resolution spatial data for human detection [27,28]. Third, Kalman filter-based estimation enables the accurate prediction of human trajectories [29], essential for proactive collision avoidance. The main contributions of this research are as follows:
  • Development of a time-interval-based collision detection system that integrates robot kinematics with human trajectory prediction, enabling proactive obstacle avoidance;
  • Implementation of a stable human tracking method utilizing 4WIS Parallel mode capabilities, which maintains consistent sensor orientation during navigation;
  • Experimental validation of the system’s effectiveness in realistic indoor scenarios, demonstrating improved navigation efficiency and reliability.
The remainder of this paper is organized as follows. Section 2 reviews related works in autonomous delivery robots, 4WIS systems, human detection, and collision avoidance. Section 3 presents the system architecture and methodologies. Section 4 demonstrates the experimental results validating our approach. Finally, Section 5 concludes with discussions on system effectiveness and future improvements.

2. Related Works

2.1. Indoor Delivery Robot Systems

Research in autonomous indoor delivery systems has focused on optimizing navigation and interaction capabilities in human-shared environments [7,8]. While various robotic platforms have been developed, ground-based mobile robots have emerged as the preferred solution for indoor environments [11,12,30], offering advantages in carrying capacity and operational stability. However, traditional mobile robot platforms often struggle with the complex requirements of indoor navigation, particularly in dynamic human-shared spaces.

2.2. 4WIS Mobile Robot Systems

4WIS systems represent a significant advancement in mobile robot technology [13,14,15], offering enhanced maneuverability and control precision critical for indoor operations. Researchers have made significant progress in improving the stability and performance of these systems through various control strategies. The integration of Direct Yaw-moment Control (DYC) with Active Steering (AS) has significantly improved vehicle lateral dynamics [31,32], while its combination with Model Predictive Control (MPC) has optimized system performance by reducing the impact of DYC on the longitudinal velocity [33]. Recent research has established four primary steering strategies for 4WIS robots as follows [17,18]:
  • Ackermann steering for traditional car-like navigation [19,20,21];
  • Parallel steering enabling lateral movement while maintaining orientation [22,23];
  • Crab steering for combined forward and lateral motion [34];
  • Spinning mode for in-place rotation [16].
Particularly significant for indoor applications is the parallel steering mode, which ensures consistent sensor orientation during movement [22,23]. This stability is crucial for maintaining reliable human detection and tracking, addressing a key limitation of conventional steering systems in dynamic indoor environments.

2.3. Human Detection and Tracking

LiDAR-based human detection has become fundamental for autonomous systems in human-shared environments [24,25,26], offering superior performance across various lighting conditions [35,36,37]. Recent advances in point cloud processing have significantly improved detection accuracy and reliability [38,39]. The integration of Kalman filtering for trajectory prediction [29,40,41] has enhanced real-time human tracking capabilities [42,43,44]. Recent research has focused on developing more sophisticated motion models [45], though challenges remain in predicting human movement patterns in unstructured indoor environments.

2.4. Collision Avoidance in Indoor Environments

Traditional collision avoidance approaches typically rely on reactive strategies based on immediate proximity detection [46,47,48]. While these methods ensure basic safety through threshold-based responses [49,50], they often result in inefficient robot behavior and reduced system reliability in shared spaces. Recent research has explored predictive approaches [51,52], attempting to anticipate potential collisions by considering future trajectories [53,54,55]. However, integrating human trajectory prediction with robot path planning remains challenging [56,57], particularly in dynamic indoor environments where human behavior can be unpredictable. While advanced mapping technologies [58,59] and motion planning techniques [60,61] have improved navigation capabilities, balancing efficient operation with system reliability remains an active research challenge [62,63,64]. Conventional methods particularly struggle with multiple moving obstacles [65,66], highlighting the need for more sophisticated approaches to collision prediction and avoidance [46,47,48].
These existing approaches demonstrate the need for an integrated solution that combines stable sensing capabilities with predictive collision detection for effective human–robot interaction in indoor environments.

3. Materials and Methods

3.1. Modelings of 4WIS Mobile Robot

Figure 1 shows the mobile robot used in this study. The 4WIS mobile robot was designed with specific characteristics optimized for indoor navigation. A key characteristic of the 4WIS system is that each wheel features independent steering and drive capabilities through individual in-wheel motors. The independent control of each wheel enables multiple driving modes, including conventional Ackermann steering for efficient straight-line motion and omnidirectional movement for complex maneuvers, allowing the robot to adapt to various navigation scenarios encountered in indoor environments.

3.2. Hardware Architecture

Figure 2 shows the hardware architecture of the mobile robot used in this experiment. The robot, powered by a 24 V battery, is equipped with a LiDAR, a camera, and an IMU as its primary sensors, together with a CAN bus interface, all connected to the robot's MCU. The local PC acts as a data intermediary between the driving components and the MCU, performing sensor fusion across the various sensors. These data are used by the ROS2 algorithms to drive the actuators and to exchange feedback data with the MCU.

3.3. Driving Mode of 4WIS Mobile Robot

Figure 3a illustrates the parallel driving mode, one of the key steering strategies of 4WIS mobile robots. In Parallel mode, there is no rotational movement, which distinguishes it from other modes like Ackermann steering. Instead, only longitudinal (X-direction of the robot) and transverse (Y-direction of the robot) movements are involved, allowing for precise control in two dimensions. The most notable feature of this mode is that all four wheels steer in the same direction simultaneously. Given that the front and rear wheels are situated on the same horizontal plane, the motion vector of the mobile robot in Parallel mode can be derived through the application of the bicycle model [67]:
\dot{V} = \begin{bmatrix} V_X \\ V_Y \\ 0 \end{bmatrix} \quad (1)
The bicycle model is a simplified representation commonly used in vehicle dynamics and control. In the context of 4WIS mobile robots operating in Parallel mode, this model is particularly useful due to the uniform steering of all wheels. As shown in Figure 3b, the bicycle model reduces the four-wheel system to a two-wheel equivalent, with one wheel representing the front axle and another representing the rear axle. In Parallel mode, all four independent wheels are steered at the same angle. Consequently, the front and rear wheels of the bicycle model in Parallel mode are also steered at the same angle:
\delta_f = \delta_r \quad (2)
In this case, the steering angles of the wheels can be determined as follows:
\theta_1 = \theta_2 = \theta_3 = \theta_4 = \tan^{-1}\!\left(\frac{V_Y}{V_X}\right) \quad (3)
This simplified model allows for easier analysis and control of the robot’s motion while still capturing its essential kinematic behavior.
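As a concrete illustration of Equation (3), the minimal Python sketch below evaluates the common steering angle that all four wheels adopt in Parallel mode from a commanded velocity. The function name and the use of atan2 (to avoid division by zero when V_X = 0) are our own choices, not details taken from the authors' implementation.

```python
import math

def parallel_steering_angle(v_x: float, v_y: float) -> float:
    """Common steering angle (rad) of all four wheels in Parallel mode.

    Implements Eq. (3): theta_1 = ... = theta_4 = atan(V_Y / V_X).
    atan2 is used so a zero longitudinal command does not divide by zero.
    """
    return math.atan2(v_y, v_x)

# Example: a purely lateral command steers the wheels to 90 degrees
print(math.degrees(parallel_steering_angle(0.0, 0.5)))  # 90.0
```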

3.4. Kinematics of Parallel Mode

To develop the kinematic-based future trajectory prediction algorithm, we now introduce the process of obtaining a kinematic model for Parallel mode. This model is crucial for understanding and predicting the motion of the 4WIS mobile robot in indoor environments. When considering a wheel in a three-dimensional coordinate system, as shown in Figure 4, several key parameters define its motion. These parameters include the following:
  • r: the radius of the wheel;
  • φ̇: the speed of the wheel;
  • θ: the direction of the wheel;
  • v: the velocity of the wheel;
  • ω: the rotational speed of the wheel.
The kinematic model of a single wheel can then be written as
\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} r\cos(\theta) & 0 \\ r\sin(\theta) & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \dot{\phi} \\ \omega \end{bmatrix} \quad (4)
Building upon the single-wheel kinematic model, we can extend our analysis to the entire 4WIS mobile robot in Parallel mode [68,69]. In this instance, the velocity of each wheel, v, is approximately equal to rφ̇, which allows us to generalize the model as follows:
\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} \cos(\theta) & 0 \\ \sin(\theta) & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} v \\ \omega \end{bmatrix} \quad (5)
This approximation simplifies our model while maintaining its accuracy for the purposes of trajectory prediction.
In accordance with Equation (5), we can express s, which represents the velocity of the mobile robot, as follows:
s = v\cos(\theta) \quad (6)
This equation describes the overall motion of the 4WIS mobile robot in Parallel mode, taking into account the contributions of all four wheels. By utilizing these equations, we can effectively model and predict the motion of the 4WIS mobile robot in Parallel mode. This kinematic model serves as the foundation for our trajectory prediction algorithm, enabling us to accurately anticipate the robot’s movement in various indoor environments.
Figure 4. Wheel model in 3D coordinate system.
Building upon our earlier equations, we can now develop a comprehensive kinematic model for the 4WIS mobile robot operating in Parallel mode. Figure 5 illustrates a representation of the Parallel mode drive, where θ represents the angle of rotation on the X-Y coordinate plane. This visual representation helps us understand how the robot’s orientation relates to its movement in two-dimensional space. The kinematic model of the Parallel mode can be represented as follows:
\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} \cos(\theta) & 0 \\ \sin(\theta) & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} v \\ \omega \end{bmatrix} \quad (7)
This model encapsulates the unique motion characteristics of the 4WIS robot in Parallel mode, allowing for precise control and trajectory prediction. To fully appreciate the advantages of this model, it is instructive to compare it with the kinematic model of the more traditional Ackermann steering system. Therefore, we briefly introduce the kinematic model for Ackermann mode used in this paper.
Under normal circumstances, vehicles typically operate in the Ackermann mode, which mimics the driving style of a car. The Ackermann mode is a crucial component in modern automobiles as well as in autonomous and robotic vehicles.
Figure 6 illustrates the kinematic model for the Ackermann mode. The kinematic model of the Ackermann mode is expressed as follows:
\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} \cos(\theta) & 0 \\ \sin(\theta) & 0 \\ \tan(\delta_f)/l & 1 \end{bmatrix} \begin{bmatrix} s \\ \omega \end{bmatrix} \quad (8)
where \delta_f is the front-wheel steering angle and l is the wheelbase.
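To make the two kinematic models concrete, the sketch below integrates Equations (7) and (8) forward by one time step. The explicit Euler step, the function names, and the treatment of θ in Equation (7) as the commanded steering direction are our assumptions rather than details of the authors' code.

```python
import math

def step_parallel(x, y, heading, v, theta, dt):
    """One Euler step of the Parallel-mode model (Eq. 7):
    the robot translates along the steering direction theta without rotating."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    return x, y, heading  # heading is unchanged in Parallel mode

def step_ackermann(x, y, heading, s, delta_f, wheelbase, dt, omega=0.0):
    """One Euler step of the Ackermann-mode model (Eq. 8):
    a bicycle model whose yaw rate is s * tan(delta_f) / l (+ omega)."""
    x += s * math.cos(heading) * dt
    y += s * math.sin(heading) * dt
    heading += (s * math.tan(delta_f) / wheelbase + omega) * dt
    return x, y, heading
```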

3.5. Control System Architecture

Figure 7 shows a flowchart of the control system for the mobile robot. The mobile robot accepts two command formats for velocity control: ( linear . x , linear . y ) and ( linear . x , angular . z ). The ROS2 internal algorithm on a local PC receives these data and is responsible for interpreting the command format, calculating the turning radius or steering angle, and switching the mobile robot to the appropriate driving mode. Previously, the driving mode was preset based on the signal format: if the signal included lateral movement ( linear . y ), the robot operated in Parallel mode; if it included rotational movement ( angular . z ), it switched to Ackermann driving mode.
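A minimal sketch of this mode-selection logic is shown below, assuming a ROS2 Twist-style command in which a non-zero linear.y selects Parallel mode and a non-zero angular.z selects Ackermann mode; the threshold value and function name are illustrative and not taken from the authors' implementation.

```python
def select_driving_mode(linear_x: float, linear_y: float, angular_z: float,
                        eps: float = 1e-3) -> str:
    """Map a velocity command onto a 4WIS driving mode.

    Commands of the form (linear.x, linear.y) drive Parallel mode,
    while commands of the form (linear.x, angular.z) drive Ackermann mode.
    """
    if abs(linear_y) > eps:
        return "parallel"
    if abs(angular_z) > eps:
        return "ackermann"
    return "ackermann"  # straight-line motion handled as Ackermann with zero steering
```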

3.6. Mobile Robot Trajectory Calculation

Our method determines the appropriate steering angles and rotation directions based on the received command signals. We use a simple yet effective set of criteria to translate commands into wheel configurations.
Figure 8 and Table 1 illustrate how we determine the steering angle and wheel rotation direction based on the control signals and the quadrant in which the wheel is located. This system’s simplicity allows for rapid computation, enabling real-time adjustments even in fast-changing environments.
For example, if Linear.x > 0 and Linear.y > 0, the wheel is located in the first quadrant. In this case, according to Table 1, the steering angle is positive (+), and the wheel direction is front. This means the wheel is oriented toward the positive x- and y-directions and rotates forward. Here, the rotation direction of the wheel is determined by Linear.x, while the steering angle is determined by Linear.y. Based on this relationship, we can rewrite Equation (3) using our control signal format as follows:
\theta_1 = \theta_2 = \theta_3 = \theta_4 = q \times \tan^{-1}\!\left(\frac{V_Y}{V_X}\right) \quad (9)
where q represents the sign of steering angle from Table 1, and V X , V Y are the velocity components corresponding to Linear.x and Linear.y, respectively.
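The sketch below illustrates one possible reading of the quadrant rules in Figure 8 and Table 1: the wheel's rotation direction follows the sign of Linear.x and the steering-angle sign q follows the sign of Linear.y, as stated above. Since Table 1 itself is not reproduced here, the exact sign convention encoded in the code is an assumption.

```python
import math

def wheel_command(v_x: float, v_y: float):
    """Return (steering_angle_rad, direction) for all wheels in Parallel mode.

    direction: +1 = forward rotation, -1 = reverse rotation (assumed encoding).
    Implements Eq. (9): theta = q * atan(|V_Y| / |V_X|), with q taken from
    the sign of the lateral command.
    """
    q = 1.0 if v_y >= 0 else -1.0      # steering-angle sign (from Linear.y)
    direction = 1 if v_x >= 0 else -1  # wheel rotation direction (from Linear.x)
    steering = q * math.atan2(abs(v_y), abs(v_x))
    return steering, direction

# First-quadrant example from the text: Linear.x > 0, Linear.y > 0
print(wheel_command(0.5, 0.3))  # positive steering angle, forward rotation
```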
This approach offers a significant advantage: it potentially decreases the time required for wheel adjustments during direction changes. This improvement in responsiveness can be crucial in dynamic indoor environments where quick adaptations are often necessary. Moreover, the reduced adjustment time can lead to smoother overall motion and potentially lower energy consumption.
Having established our efficient method for determining wheel configurations, we now turn our attention to predicting the robot’s trajectory over time. While our innovative approach to steering angle and rotation direction determination provides precise and immediate control over the robot’s movement, we must further develop this into a comprehensive system for collision avoidance. In the following sections, we will detail our kinematic model and demonstrate its effectiveness in predicting robot trajectories and avoiding collisions in various indoor scenarios.
To begin this analysis, we present a kinematic approach to predict and visualize the future position of a mobile robot. This approach facilitates the detection of collisions between the robot and people. Kinematics is a mathematical approach to describing the motion of a robot, enabling the representation of state variables such as position and velocity as a function of time. In this study, we assume motion in a two-dimensional plane and model the robot’s position using the following equations:
x(t) = x_0 + v_x \times t \quad (10)
y(t) = y_0 + v_y \times t \quad (11)
where x ( t ) and y ( t ) are the position of the robot after time t, x 0 and y 0 are the current position, and v x and v y are the velocities along the x and y axes, respectively.
Building upon these kinematic principles, we can determine the values of v x and v y for our specific robot configuration. These values are calculated as follows:
v_x = s\cos(\theta) \quad (12)
v_y = s\sin(\theta) \quad (13)
By applying these equations, we can accurately estimate the robot’s future position by comparing the predicted values with the actual movement values obtained through the robot’s internal encoder:
x(t) = x_0 + s\cos\!\left(q \times \tan^{-1}\frac{V_Y}{V_X}\right) \times t \quad (14)
y(t) = y_0 + s\sin\!\left(q \times \tan^{-1}\frac{V_Y}{V_X}\right) \times t \quad (15)
A key feature of our kinematic model is the inclusion of the time parameter t. This parameter provides our system with the flexibility to predict the robot’s position at various future time points. By adjusting the value of t, we can anticipate potential collisions not just in the immediate future but at multiple time intervals ahead. This capability is particularly crucial during path tracking, where the robot must dynamically respond to potential collisions while following a pre-generated path, ensuring safe human–robot interaction in indoor environments.
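Equations (14) and (15) translate directly into a small prediction routine. The sketch below evaluates the robot's expected position at several look-ahead times; the chosen time intervals, the nominal speed value, and the default steering sign q = +1 are illustrative assumptions.

```python
import math

def predict_robot_position(x0, y0, s, v_x_cmd, v_y_cmd, t, q=1.0):
    """Predicted robot center position after t seconds (Eqs. 14-15)."""
    theta = q * math.atan2(v_y_cmd, v_x_cmd)
    return x0 + s * math.cos(theta) * t, y0 + s * math.sin(theta) * t

# Look-ahead positions at several future time intervals (illustrative values,
# s = 0.83 m/s, i.e., roughly 3 km/h)
for t in (1.0, 2.0, 3.0, 4.0):
    print(t, predict_robot_position(0.0, 0.0, s=0.83, v_x_cmd=0.8, v_y_cmd=0.2, t=t))
```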
As illustrated in Figure 9, our trajectory calculation algorithm visually distinguishes between the current and predicted positions of the mobile robot. The current location of the robot is represented by a red circle, while the predicted future location is shown as a blue circle. This intuitive visualization allows users to easily understand the anticipated movement of the robot. In the figure, each grid cell represents an area of 100 cm by 100 cm, providing a clear scale for the robot’s movement.
Figure 10 illustrates the overall structure of the mobile robot trajectory prediction. The algorithm receives the velocity command sent from the controller in the form of (Linear.x, Linear.y) and uses the kinematic equations of the mobile robot to return the coordinates of the expected position a few seconds ahead, computed from the current position ( x 0 , y 0 ) to the predicted center position of the robot.
To conclude, this section presented a steering control strategy using kinematics and steering rules, along with a trajectory calculation algorithm that can predict the mobile robot’s position at desired time intervals. The proposed method effectively reduces the response time for path adjustments while maintaining precise control through our quadrant-based steering system.

3.7. Human Detection

The LiDAR-based human detection system, demonstrated in Figure 11, operates within a 5 m radius from the robot’s center, providing optimal coverage for indoor environments. This specific distance is chosen as an optimal balance between safety and operational efficiency. It takes into account typical human walking speeds, the mobile robot’s maximum speed, and provides sufficient time for the robot to react and adjust its trajectory if needed.
To enhance the accuracy of the recognition process, we predefine the range of human size parameters to filter potential human candidates [38,39]. As shown in Figure 11a, these preset values for human detection are a width of 0.5∼1.0 m, a length of 1.0∼1.2 m, and a height of 1.5∼1.7 m. The point cloud data obtained from the LiDAR sensor are then clustered based on these parameters to detect people.
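As a sketch of this size-based candidate filtering, the function below keeps only clusters whose bounding boxes fall inside the preset human-size ranges quoted above; the cluster representation (a simple dictionary of bounding-box dimensions) is our own simplification of the point cloud clustering output.

```python
# Preset human-size ranges from the text (meters)
HUMAN_WIDTH = (0.5, 1.0)
HUMAN_LENGTH = (1.0, 1.2)
HUMAN_HEIGHT = (1.5, 1.7)

def is_human_candidate(cluster: dict) -> bool:
    """Accept a LiDAR cluster only if its bounding box matches the preset human dimensions."""
    def in_range(value, bounds):
        return bounds[0] <= value <= bounds[1]
    return (in_range(cluster["width"], HUMAN_WIDTH)
            and in_range(cluster["length"], HUMAN_LENGTH)
            and in_range(cluster["height"], HUMAN_HEIGHT))

# Usage: candidates = [c for c in clusters if is_human_candidate(c)]
```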
For detecting human objects (LiDAR-based 3D objects) in point cloud data, we employ the Pillar Feature Net algorithm, as shown in Figure 12, which transforms irregular 3D LiDAR points into an organized grid structure through vertical discretization (pillars) [70,71]. The algorithm encodes point-wise features and spatial relationships within each pillar, converting the point cloud data into a pseudo-image format that can be efficiently processed by conventional 2D convolutional networks for real-time human detection.

3.8. Human Trajectory Prediction

The integration of LiDAR detection with Kalman filter-based prediction [29,40,41] enables robust human trajectory estimation. Figure 13 demonstrates our system’s ability to track and predict human movements in real-time, with yellow rectangles indicating current positions and yellow circles showing predicted future locations [42,43]. This prediction framework, combined with our Parallel mode’s stable sensor orientation [63], enables more reliable human tracking compared to conventional approaches [28,45].
The Kalman filter operates through an iterative two-step process:
  • Prediction step: estimates future states based on the current state and system model;
  • Update step: refines these predictions using actual measurements.
This continuous cycle of prediction and correction allows our system to accurately track and predict pedestrian movements. By combining robust human detection with the Kalman filter’s predictive capabilities, our system provides reliable trajectory predictions for pedestrians in the robot’s vicinity, enabling safer human–robot interaction in indoor environments. A demonstration of the proposed method in real-world scenarios is provided in Supplementary Video S1, available in the Supplementary Materials Section.
In this study, we implement a linear Kalman filter to track and predict the positions of humans detected in a point cloud environment [43,44]. The formulation and application of the Kalman filter in our method can be summarized as follows:
\mathbf{x} = \begin{bmatrix} x_{pos} \\ y_{pos} \\ x_{vel} \\ y_{vel} \end{bmatrix} \quad (16)
This state vector encapsulates both position and velocity information of the detected objects.
The state transition matrix A is defined as
A = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (17)
This model predicts the current state based on the previous state, incorporating changes in position due to velocity.
The measurement model is defined as
H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \quad (18)
However, in our system, the measurement matrix extracts only position information, neglecting velocity data. This is because simultaneously and continuously tracking both position and velocity for each person is challenging in real-world scenarios. Therefore, we adopt an alternative approach, which is detailed in this section.
In the prediction step, the predicted state and covariance of the algorithm at time step k, based on the estimates at step (k − 1), are
\hat{x}_k^{-} = A\hat{x}_{k-1} + B\mu_k \quad (19)
P_k^{-} = AP_{k-1}A^{T} + Q \quad (20)
The control input matrix typically represents the effect of external control inputs on the state. In many scenarios, such as the one presented here, this could include forces or acceleration acting on the detected object. However, in the current implementation, B is not explicitly utilized, as no control inputs are defined or applied.
In the update step, the Kalman gain K is calculated using Equation (21), where R represents the measurement noise covariance matrix. The matrix R, defined in Equation (22), reflects the uncertainty in our measurements. When the sensor measurements are highly accurate, R values can be set lower, while less accurate measurements require higher R values. This allows our system to be adaptable to different sensor characteristics and environmental conditions:
K = P^{-}H^{T}\left(HP^{-}H^{T} + R\right)^{-1} \quad (21)
R = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \quad (22)
According to these equations, the predicted state can be updated using the measurement vector z, which contains the observed position coordinates:
\hat{x}_k = \hat{x}_k^{-} + K\left(z - H\hat{x}_k^{-}\right) \quad (23)
z = \begin{bmatrix} z_x \\ z_y \end{bmatrix} \quad (24)
where z x and z y represent the measured positions in the x and y directions, respectively. These measurements are crucial for updating the state estimates in the Kalman filter, allowing us to correct our predictions based on actual observations.
Figure 14 illustrates the overall structure of our trajectory prediction system. To address the limitations of velocity measurements, our system collects 10 consecutive position observations at 10 ms intervals over a 100 ms period. The algorithm processes these position data through a state transition model and observation updates, continuously improving prediction accuracy through iterative refinement. This approach simplifies the tracking process while maintaining prediction accuracy by focusing solely on position data, which has proven effective in dynamic indoor environments where continuous velocity tracking is challenging.
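The matrices defined above map onto a compact NumPy implementation, sketched below. The matrices A, H, and R follow Equations (17), (18), and (22); the process noise Q, the initial covariance P, the unit time step implied by A, and the buffering loop with placeholder measurements are our own illustrative assumptions.

```python
import numpy as np

class HumanKalmanTracker:
    """Linear Kalman filter over the state [x_pos, y_pos, x_vel, y_vel] (Eq. 16)."""

    def __init__(self):
        self.A = np.array([[1, 0, 1, 0],   # state transition, Eq. (17), unit time step
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],   # position-only measurement, Eq. (18)
                           [0, 1, 0, 0]], dtype=float)
        self.R = np.eye(2)                 # measurement noise, Eq. (22)
        self.Q = 0.01 * np.eye(4)          # process noise (assumed value)
        self.P = np.eye(4)                 # initial covariance (assumed value)
        self.x = np.zeros(4)

    def predict(self):
        self.x = self.A @ self.x                      # Eq. (19), no control input B*u applied
        self.P = self.A @ self.P @ self.A.T + self.Q  # Eq. (20)
        return self.x[:2]                             # predicted (x, y)

    def update(self, z_x, z_y):
        z = np.array([z_x, z_y])
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Eq. (21)
        self.x = self.x + K @ (z - self.H @ self.x)   # Eq. (23)
        self.P = (np.eye(4) - K @ self.H) @ self.P

# The text accumulates 10 position observations (10 ms apart) before predicting ahead:
tracker = HumanKalmanTracker()
for z in [(0.00, 0.00), (0.02, 0.01)] * 5:  # placeholder measurements
    tracker.predict()
    tracker.update(*z)
print(tracker.predict())                     # predicted future human position
```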

3.9. Time Interval Collision Detection

In this section, we present our proposed collision detection algorithm that combines kinematic-based trajectory calculation and Kalman filter-based human trajectory prediction. Traditional collision detection approaches typically follow a sequential process: first generating a path on a map [46,47,48], then having the mobile robot navigate along this path while continuously correcting its position through localization [51,52]. During this process, motion planning enables the mobile robot to predict its position along the generated path at different time intervals.
Collision detection in mobile robotics can be categorized into two distinct scenarios. The first scenario occurs when an object is detected directly on the robot’s generated path [54,55]. In this case, traditional approaches either generate a new collision-free path or temporarily halt the robot until the moving object passes. This approach is relatively intuitive for humans sharing the space, as they can easily recognize that the robot is aware of their presence through its proactive responses and predict its next movements as shown in Figure 15a.
However, the second scenario, where potential collisions occur outside the generated path, has been handled less effectively by conventional methods [65,66] as illustrated in Figure 15b. Traditional approaches simply rely on LiDAR sensor’s detection radius, reducing speed or stopping whenever an object comes within a certain distance, regardless of its direction or movement pattern. This simplistic approach often results in unnecessary robot movements and stops, potentially causing discomfort and inconvenience to people sharing the same space, as the robot’s movements appear unnatural from a human perspective.
While traditional approaches may be appropriate when people are walking directly towards the robot’s planned path or the robot itself, they become problematic and unnatural when people are moving away from the path or the robot. In such cases, the robot’s reactive movements are unnecessary and can create awkward interactions as shown in Figure 15c. This not only leads to inefficient navigation but also undermines overall trust in the autonomous navigation system, as humans observe seemingly irrational responses to non-threatening situations.
To address these limitations in conventional path following, we propose integrating our prediction-based algorithm into existing systems to enable smoother operation [46,47,48]. The collision detection system combines the trajectory prediction algorithms of both the mobile robot and humans. Both algorithms return predicted positions in ( x , y ) coordinates, allowing comparison between the mobile robot’s anticipated path and the predicted trajectories of nearby pedestrians. A potential collision is identified when the predicted positions from both algorithms are within a 1 m radius of each other. This system allows for real-time collision risk assessment and proactive avoidance. The 1 m threshold is chosen based on the robot’s dimensions and typical human personal space. Figure 15d illustrates this collision detection algorithm.
Figure 16 shows the real-time visualization of our collision detection system in action. The sequence of three frames demonstrates how the system tracks and predicts human movement. The mobile robot’s position is indicated by the cyan cross at the center, while the detected human is represented by the yellow marker with coordinates displayed above. A cyan circle around the robot and a separate circle around the human indicate their respective detection and prediction zones. The visualization includes coordinate information ( x , y ) for precise position tracking, enabling the real-time assessment of potential collision risks [53,54,55]:
Z = \left\{ (x, y) \,\middle|\, \sqrt{(x_r - x_h)^2 + (y_r - y_h)^2} \le d \right\} \quad (25)
where Z is the collision risk zone, ( x r , y r ) is the robot position, ( x h , y h ) is the human position, and d is the safety threshold distance (1 m).
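The collision test of Equation (25) reduces to a distance comparison between the two predicted trajectories. The sketch below evaluates it at a set of future look-ahead instants and returns the earliest predicted time-to-collision; the trajectory representation and function name are our own choices.

```python
import math

SAFETY_RADIUS_M = 1.0  # threshold d from Eq. (25)

def time_to_collision(robot_traj, human_traj, d=SAFETY_RADIUS_M):
    """Earliest look-ahead time at which the predicted positions are within d meters.

    robot_traj, human_traj: iterables of (t, x, y) sampled at the same future instants.
    Returns the time-to-collision in seconds, or None if no collision is predicted.
    """
    for (t, xr, yr), (_, xh, yh) in zip(robot_traj, human_traj):
        if math.hypot(xr - xh, yr - yh) <= d:
            return t
    return None
```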

3.10. System Integration and Workflow

Figure 17 illustrates the complete workflow of our proposed system, which integrates path following, human detection, and collision prediction. The process consists of two parallel streams: robot control and human tracking. On the robot control side, the system determines the appropriate driving mode (Ackermann or Parallel) based on whether the robot is starting from a corner. In Parallel mode, the system utilizes steering angle and kinematic equations to calculate the robot’s trajectory.
Simultaneously, the LiDAR-based human detection system continuously monitors the environment. When a person is detected, the system accumulates 10 consecutive position measurements before initiating Kalman filter predictions. These two streams converge in the collision detection module, which compares the predicted trajectories of both the robot and detected humans.
The collision response operates in two distinct scenarios. When a collision is predicted on the predefined path, the system immediately activates conventional collision avoidance procedures. However, for potential collisions detected outside the path, the system employs a more nuanced approach, implementing a graduated speed reduction based on the predicted time-to-collision. Specifically, the robot reduces its speed to 80% when a collision is predicted 4 s ahead, further decreases to 60% at 3 s, drops to 30% at 2 s, and comes to a complete stop if a collision is predicted within 1 s. This graduated response system enables smooth and predictive collision avoidance while maintaining efficient path following operations. The effectiveness of this integrated system was validated through comprehensive experiments, which will be presented in the following section.
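The graduated response for off-path collision risks maps naturally onto a small lookup: the speed-scaling factors (80%, 60%, 30%, full stop) and their time thresholds are taken from the text above, while the function signature and the behavior when no collision is predicted are our own framing.

```python
def speed_scale_from_ttc(ttc_s):
    """Speed multiplier applied to the nominal command for an off-path collision risk.

    ttc_s: predicted time-to-collision in seconds, or None if no collision is predicted.
    """
    if ttc_s is None:
        return 1.0   # no predicted collision: keep nominal speed
    if ttc_s <= 1.0:
        return 0.0   # complete stop
    if ttc_s <= 2.0:
        return 0.3
    if ttc_s <= 3.0:
        return 0.6
    if ttc_s <= 4.0:
        return 0.8
    return 1.0
```

On-path collisions are handled separately by the conventional avoidance procedure, as described above; this scaling applies only to the predicted off-path case.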

4. Results

4.1. Experiments Setup

To validate our proposed collision detection and avoidance system, we conducted experiments in indoor environments by integrating our algorithm with an autonomous navigation system. We first created a detailed 3D vector map of the indoor environment using LIO-SAM (Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping) technology. This high-definition map served as the foundation for planning experimental paths that would test our collision avoidance capabilities [58,59]. The experimental environment, as shown in Figure 18, consists of corridors and rooms typical of indoor spaces where mobile robots might operate. Figure 18a presents the 3D point cloud visualization of our test environment, while Figure 18b provides a top-view representation that clearly shows the layout of corridors and rooms. Figure 19a shows the NDT matching process used for real-time robot localization, while Figure 19b illustrates the pre-planned path overlaid on the HD map, with white arrows indicating the robot’s intended movement direction through the space [25,60]. The specifications of the 4WIS mobile robot are shown in Table 2. The experimental specifications and parameters are shown in Table 3.
The experiments were conducted on a 4.40 GHz Intel® Core™ i5-1240P laptop with 16 GB of RAM, interfacing with AgileX’s Ranger Mini platform. This setup allowed us to process the sensor data and run our algorithms in real time while controlling the mobile robot.
Figure 20 illustrates our experimental setup for comparing the Parallel and Ackermann modes. Figure 20a shows the experiment conducted in Parallel mode, while Figure 20b depicts the same path followed in Ackermann mode. In both cases, the mobile robot followed a preset path: moving 5.5 m straight forward, then turning left slightly before continuing forward. This path was designed to simulate typical situations that a mobile robot might encounter while navigating indoor environments [38,39,72].
The second set of experiments was conducted to validate the performance of our proposed algorithm. Figure 21a,b demonstrate our testing setup for evaluating the time-to-collision (TTC) detection capabilities at varying distances, with each setup designed to test different aspects of the system’s performance. For both experimental setups, we conducted tests at two different mobile robot speeds: 2 km/h and 3 km/h.
In Figure 21a, we position the human subjects at three distances from where the robot’s avoidance movements start, to verify the different TTC results for different positions:
  • Blue position: 0.5 m away;
  • Green position: 1.75 m away;
  • Orange position: 3.5 m away.
Figure 21b is designed to evaluate the time-to-collision (TTC) detection efficiency at varying distances, particularly focusing on how the robot performs in Parallel mode when human paths intersect with the robot’s trajectory. Human subjects walk 3 m while maintaining different initial distances from the robot’s Y-directional movement start point:
  • Blue position: 1.45 m away;
  • Green position: 3.25 m away;
  • Orange position: 5.0 m away.
This setup allows us to evaluate not only the robot’s collision detection capabilities but also its potential for interaction with humans. The system assesses the possibility of collision over time, testing its performance with humans at various proximities to its path.

4.2. Experiment 1

The experimental results shown in Figure 22 demonstrate the velocity tracking performance of the 4WIS mobile robot operating in Ackermann mode at different speeds. The linear velocity (linear.x) and angular velocity (angular.z) data reveal a clear correlation between driving speed and control accuracy. At lower speeds (2 km/h and 3 km/h), the robot shows relatively stable tracking with minimal deviation from the command values. However, as the speed increases to 4 km/h and 5 km/h, there is a noticeable increase in the tracking errors, particularly during directional changes.
This degradation in tracking accuracy can be attributed to factors such as increased dynamic effects, mechanical limitations, and control system response times at higher speeds. The angular velocity plots specifically show larger oscillations and delayed responses at higher speeds, indicating challenges in maintaining precise steering control as velocity increases. These unstable motions and oscillations can significantly impact the performance of human detection systems, as the inconsistent sensor orientation during high-speed operations may lead to degraded detection reliability and reduced tracking accuracy. This finding emphasizes the importance of maintaining stable sensor orientation for reliable human detection and tracking in dynamic indoor environments.
Figure 23a illustrates the detection rates for human object 1 over time for both Parallel and Ackermann modes at different speeds. In Parallel mode, the detection rates remain similar for both the 2 km/h and 3 km/h speeds. However, in Ackermann mode, we observe a drop in the detection rate as the speed increases, especially when the left turn starts. The detection rate is obtained by calculating the ratio of human detection time to elapsed time.
The 0 km/h speed serves as a control group, showing the detection rates when the mobile robot is stationary. The 2 km/h and 3 km/h speeds are chosen, as they represent the typical speeds used by mobile robots in indoor environments for safety reasons. This comparison reveals that Parallel mode maintains consistent detection performance across different speeds, while Ackermann mode’s performance degrades at higher speeds. This finding suggests that Parallel mode may be more suitable for maintaining reliable human detection in dynamic indoor environments.
Figure 23b presents the comparison results for human object detection with two people present. In Parallel mode (a), the mobile robot demonstrates stable detection of both human objects throughout its entire path. In contrast, Ackermann mode (b) exhibits difficulties in detecting human objects, particularly when executing a left turn. These results highlight the superior performance of Parallel mode in maintaining consistent human detection in indoor environments. The ability to reliably detect multiple human objects, even during turns, suggests that Parallel mode is better suited for navigation in dynamic indoor spaces where humans are present.

4.3. Experiment 2

Figure 24 presents the experimental results for the setup shown in Figure 21. These findings demonstrate that our proposed algorithm can not only detect potential collisions with humans at different positions but also calculate the time remaining until a collision might occur. This capability is expected to enable more natural interactions with humans by allowing the robot to define its reactions based on the time-to-collision information.
Figure 25 demonstrates the mobile robot’s human tracking performance as test subjects moved along a predetermined path covering 3 m, starting from three different initial positions. The experiments were conducted at two different robot speeds: 2 km/h (a) and 3 km/h (b). The results show successful tracking across all test cases, with the robot maintaining consistent tracking performance regardless of the humans’ starting positions at 1.43 m (Case 1), 3.22 m (Case 2), and 5.0 m (Case 3) from the start line.
Figure 26 demonstrates the comparison between conventional LiDAR-based detection and the proposed prediction-based detection algorithm. While the conventional method activates collision detection whenever obstacles enter the predetermined sensor range, our proposed method predicts the trajectory of moving obstacles and calculates potential collision points. This enables more precise detection timing and selective activation of the collision detection algorithm. As shown in the results, the proposed method significantly reduces unnecessary detection time compared to the conventional method (by up to 83% at 3 km/h), indicating its potential to minimize unnecessary speed adjustments while maintaining safety. This improved efficiency is particularly notable as the starting position of the moving obstacle increases. Additionally, in cases where no collision is predicted to occur (as shown in the case of 5.0 m starting position at 2 km/h), the proposed algorithm does not activate at all, further demonstrating its efficiency in avoiding unnecessary detection processes.
This feature is particularly crucial in indoor environments. Unlike road systems for vehicles, indoor spaces lack standardized traffic regulations, making it challenging to define clear behavioral guidelines or codes of conduct for robots. Consequently, representing interactions between robots and people in these environments is complex. Our algorithm addresses this challenge by providing a dynamic, time-based approach to human–robot interaction.

5. Discussion and Conclusions

As transportation costs have increased over the years, especially in last-mile delivery, many researchers are striving to reduce these costs by utilizing mobile robots [30]. Numerous studies have been conducted on stabilizing 4WIS mobile robots’ driving performance and analyzing their kinematics and driving modes [34]. However, the use of appropriate driving modes for 4WIS in specific situations and environments remains understudied. The results of our experiments reveal several significant findings regarding the effectiveness of 4WIS mobile robots in indoor environments.
Our experimental results showed that Parallel mode outperforms Ackermann mode in human detection and stability, especially at higher speeds and during turns. This suggests that Parallel mode is well suited for navigating dynamic indoor environments with frequent and unpredictable human presence. Its superior human detection and tracking capabilities make it particularly valuable for indoor applications where safety and reliability are paramount. The implemented time-to-collision prediction system represents a significant advancement in collision detection for mobile robots in human-shared environments. The algorithm’s ability to detect potential collisions and calculate the remaining time until a collision occurs enables greater operational efficiency based on human trajectory patterns. The system demonstrated up to an 83% reduction in unnecessary detection time compared to conventional methods at 3 km/h, with improved efficiency particularly evident as the starting position of moving obstacles increased. This enhancement allows the robot to maintain efficient operations while ensuring safety in shared spaces.
However, several limitations need to be addressed in future work. In contrast to outdoor environments, where vehicles interact with pedestrians using standardized traffic rules and indicators such as turn signals and tail lamps, mobile robots lack such conventional communication methods [56,57]. In indoor settings, the absence of standardized traffic rules further complicates the establishment of consistent behavioral guidelines. Our time-based approach offers a flexible solution for defining robot behavior based on the remaining time to potential collisions, but further refinement is necessary. Unpredictable human movement patterns in indoor spaces pose significant challenges for collision prediction [45,63]. Current models may require further refinement to accommodate diverse human behavioral patterns and complex scenarios. Developing more sophisticated and context-aware interaction models will be crucial for enhancing the system’s effectiveness in real-world applications.
Future work should focus on extending our current dynamic path adjustment system in several ways. While our research has established the foundation for leveraging human trajectory data, there are opportunities to enhance the system’s sophistication and adaptability. Specifically, future research could explore the following:
  1. Integration of machine learning approaches to improve the prediction accuracy of human movement patterns in complex scenarios [62];
  2. Development of more sophisticated behavioral models that can account for group dynamics and social interactions in crowded indoor spaces [63,64,73].
These advancements would build upon our current time-based collision prediction framework while addressing the unique challenges of indoor human–robot interaction.

Supplementary Materials

The following supporting information can be downloaded at: https://drive.google.com/file/d/1w0vg-pFqLHeoHLKDXLTVhG0Q2xO4u6E4/view?usp=sharing, accessed on 9 January 2025. Video S1: Demonstration of the proposed 4WIS collision detection method in a human-shared indoor environment.

Author Contributions

Conceptualization, S.K.; methodology, S.K.; software, S.K. and H.J.; validation, S.K. and Y.S.; formal analysis, S.K.; investigation, S.K., H.J., J.H. and D.L.; resources, S.K. and Y.H.; data curation, S.K.; writing—original draft preparation, S.K.; writing—review and editing, S.K. and Y.S.; visualization, S.K.; supervision, Y.S.; project administration, Y.S.; funding acquisition, Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the “Regional Innovation Strategy (RIS)” program through the National Research Foundation of Korea, funded by the Ministry of Education (2021RIS-004). Additionally, it was supported by a grant from the Korea Institute for Advancement of Technology (KIAT), funded by the Korea Government (MOTIE), through the National Innovation Cluster R&D program (P0025248: Development of a cloud-based autonomous driving special automobile integrated control solution).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions, as they contain sensitive experimental information involving human interaction and specific proprietary algorithms used for collision detection and trajectory prediction.

Conflicts of Interest

Author Yeongho Ha was employed by the company FieldRo. However, this employment did not influence the design, execution, interpretation, or reporting of this study. The remaining authors declare no conflicts of interest.

References

  1. Jain, V.; Malviya, B.; Arya, S. An overview of electronic commerce (e-Commerce). J. Contemp. Issues Bus. Gov. 2021, 27, 665–670. [Google Scholar]
  2. Sun, L.; Chen, J.; Li, Q.; Huang, D. Dramatic uneven urbanization of large cities throughout the world in recent decades. Nat. Commun. 2020, 11, 5366. [Google Scholar] [CrossRef] [PubMed]
  3. Zhang, X.; Han, L.; Wei, H.; Tan, X.; Zhou, W.; Li, W.; Qian, Y. Linking urbanization and air quality together: A review and a perspective on the future sustainable urban development. J. Clean. Prod. 2022, 46, 130988. [Google Scholar] [CrossRef]
  4. Ventriglio, A.; Torales, J.; Castaldelli-Maia, J.; De Berardis, D.; Bhugra, D. Urbanization and emerging mental health issues. CNS Spectrums 2021, 26, 43–50. [Google Scholar] [CrossRef]
  5. Pradhan, R.P.; Arvin, M.B.; Nair, M. Urbanization, transportation infrastructure, ICT, and economic growth: A temporal causal analysis. Cities 2021, 115, 102343. [Google Scholar] [CrossRef]
  6. Mangiaracina, R.; Perego, A.; Seghezzi, A.; Tumino, A. Innovative solutions to increase last-mile delivery efficiency in B2C e-commerce: A literature review. Int. J. Phys. Distrib. Logist. Manag. 2019, 49, 901–920. [Google Scholar] [CrossRef]
  7. Boysen, N.; Fedtke, S.; Schwerdfeger, S. Last-Mile Delivery Concepts: A Survey from an Operational Research Perspective. OR Spectrum 2021, 43, 1–58. [Google Scholar] [CrossRef]
  8. Gu, Q.; Fan, T.; Pan, F.; Zhang, C. A vehicle-UAV operation scheme for instant delivery. Comput. Ind. Eng. 2020, 149, 106772. [Google Scholar] [CrossRef]
  9. Bakach, I.; Campbell, A.M.; Ehmke, J.F. A two-tier urban delivery network with robot-based deliveries. Networks 2021, 78, 461–483. [Google Scholar] [CrossRef]
  10. Pamucar, D.; Gokasar, I.; Ebadi Torkayesh, A.; Deveci, M.; Martínez, L.; Wu, Q. Prioritization of unmanned aerial vehicles in transportation systems using the integrated stratified fuzzy rough decision-making approach with the Hamacher operator. Inf. Sci. 2023, 622, 374–404. [Google Scholar] [CrossRef]
  11. Harrington, B.D.; Voorhees, C. The Challenges of Designing the Rocker-Bogie Suspension for the Mars Exploration Rover; Technical Report; NASA Center for AeroSpace Information (CASI): Cleveland, OH, USA, 2004. [Google Scholar]
  12. Alakshendra, V.; Chiddarwar, S.S. Adaptive robust control of Mecanum-wheeled mobile robot with uncertainties. Nonlinear Dyn. 2017, 87, 2147–2169. [Google Scholar] [CrossRef]
  13. Chang-Soo, H.; Sang-Ho, L.; Sung-Kyu, H.; Un-Koo, L. Four-Wheel Independent Steering (4WIS) System for Vehicle Handling Improvement by Active Rear Toe Control. J. Automot. Eng. 1999, 42, 947. [Google Scholar]
  14. Zheng, H.; Yang, S. A Trajectory Tracking Control Strategy of 4WIS/4WID Electric Vehicle with Adaptation of Driving Conditions. Appl. Sci. 2019, 9, 168. [Google Scholar] [CrossRef]
  15. Liu, X.; Wang, W.; Li, X.; Liu, F.; He, Z.; Yao, Y.; Ruan, H.; Zhang, T. MPC-based high-speed trajectory tracking for 4WIS robot. ISA Trans. 2022, 123, 413–424. [Google Scholar] [CrossRef]
  16. Danwei, W.; Feng, Q. Trajectory planning for a four-wheel-steering vehicle. In Proceedings of the 2001 IEEE International Conference on Robotics and Automation (ICRA), Seoul, Republic of Korea, 21–26 May 2001; pp. 3320–3325. [Google Scholar]
  17. Yim, S. Comparison among Active Front, Front Independent, 4-Wheel and 4-Wheel Independent Steering Systems for Vehicle Stability Control. Electronics 2020, 9, 798. [Google Scholar] [CrossRef]
  18. Ye, Y.; He, L.; Zhang, Q. Steering Control Strategies for a Four-Wheel-Independent-Steering Bin Managing Robot. IFAC-PapersOnLine 2016, 49, 39–44. [Google Scholar] [CrossRef]
  19. Mitchell, W.C.; Staniforth, A.; Scott, I. Analysis of Ackermann Steering Geometry; SAE Technical Paper; SAE International: Warrendale, PA, USA, 2006. [Google Scholar]
  20. Zhao, J.S.; Liu, X.; Feng, Z.J.; Dai, J.S. Design of an Ackermann-type steering mechanism. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2013, 227, 2549–2562. [Google Scholar]
  21. Malu, S.K.; Majumdar, J. Kinematics, localization and control of differential drive mobile robot. Glob. J. Res. Eng. 2014, 14, 1–9. [Google Scholar]
  22. Choi, M.W.; Park, J.S.; Lee, B.S.; Lee, M.H. The performance of independent wheels steering vehicle (4WS) applied Ackermann geometry. In Proceedings of the 2008 International Conference on Control, Automation and Systems, Seoul, Republic of Korea, 14–17 October 2008; pp. 197–202. [Google Scholar]
  23. Wang, J.; Wang, Q.; Jin, L.; Song, C. Independent wheel torque control of 4WD electric vehicle for differential drive assisted steering. Mechatronics 2011, 21, 63–76. [Google Scholar]
  24. Hutabarat, D.; Rivai, M.; Purwanto, D.; Hutomo, H. Lidar-based Obstacle Avoidance for the Autonomous Mobile Robot. In Proceedings of the 2019 12th International Conference on Information and Communication Technology and System (ICTS), Surabaya, Indonesia, 18 July 2019; pp. 197–202. [Google Scholar]
  25. Petrie, G. An introduction to the technology: Mobile mapping systems. GeoInformatics 2010, 13, 32–43. [Google Scholar]
  26. Zou, Q.; Sun, Q.; Chen, L.; Nie, B.; Li, Q. A Comparative Analysis of LiDAR SLAM-Based Indoor Navigation for Autonomous Vehicles. IEEE Trans. Intell. Transp. Syst. 2022, 23, 6907–6921. [Google Scholar] [CrossRef]
  27. Premebida, C.; Ludwig, O.; Nunes, U. LIDAR and vision-based pedestrian detection system. J. Field Robot. 2009, 26, 696–711. [Google Scholar] [CrossRef]
  28. Rudenko, A.; Palmieri, L.; Herman, M.; Kitani, K.M.; Gavrila, D.M.; Arras, K.O. Human motion trajectory prediction: A survey. Int. J. Robot. Res. 2020, 39, 895–935. [Google Scholar] [CrossRef]
  29. Welch, G. An Introduction to the Kalman Filter; Technical Report; University of North Carolina: Chapel Hill, NC, USA, 1995. [Google Scholar]
  30. Olsson, J.; Hellström, D.; Pålsson, H. Framework of Last Mile Logistics Research: A Systematic Review of the Literature. Sustainability 2019, 11, 7131. [Google Scholar] [CrossRef]
  31. Geng, C.; Mostefai, L.; Denai, M.; Hori, Y. Direct Yaw-Moment Control of an In-Wheel-Motored Electric Vehicle Based on Body Slip Angle Fuzzy Observer. IEEE Trans. Ind. Electron. 2009, 56, 1411–1419. [Google Scholar] [CrossRef]
  32. Liang, Y.; Li, Y.; Yu, Y.; Zheng, L.; Li, W. Integrated lateral control for 4WID/4WIS vehicle in high-speed condition considering the magnitude of steering. Veh. Syst. Dyn. 2020, 58, 1711–1735. [Google Scholar] [CrossRef]
  33. Song, Y.; Shu, H.; Chen, X. Chassis integrated control for 4WIS distributed drive EVs with model predictive control based on the UKF observer. Sci. China Technol. Sci. 2020, 63, 397–409. [Google Scholar] [CrossRef]
  34. Cariou, C.; Lenain, R.; Thuilot, B.; Berducat, M. Automatic guidance of a four-wheel-steering mobile robot for accurate field operations. J. Field Robot. 2009, 26, 504–518. [Google Scholar] [CrossRef]
  35. Jaboyedoff, M.; Oppikofer, T.; Abellán, A.; Derron, M.H.; Loye, A.; Metzger, R.; Pedrazzini, A. Use of LIDAR in landslide investigations: A review. Nat. Hazards 2012, 61, 5–28. [Google Scholar] [CrossRef]
  36. Taipalus, T.; Ahtiainen, J. Human detection and tracking with knee-high mobile 2D LIDAR. In Proceedings of the 2011 IEEE International Conference on Robotics and Biomimetics (ROBIO), Karon Beach, Thailand, 7–11 December 2011; pp. 1672–1677. [Google Scholar]
  37. Kobilarov, M.; Sukhatme, G.; Hyams, J.; Batavia, P. People tracking and following with mobile robot using an omnidirectional camera and a laser. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA), Orlando, FL, USA, 15–19 May 2006; pp. 557–562. [Google Scholar]
  38. Yan, Z.; Duckett, T.; Bellotto, N. Online learning for 3D LiDAR-based human detection: Experimental analysis of point cloud clustering and classification methods. Auton. Robot. 2020, 44, 147–164. [Google Scholar] [CrossRef]
  39. Yan, Z.; Duckett, T.; Bellotto, N. Online learning for human classification in 3D LiDAR-based tracking. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 864–871. [Google Scholar]
  40. Wang, Y.; Luo, G.; Zhang, H.; Wang, L.; Fang, D.; Jiang, Y. Extended Kalman Filter Trajectory Prediction Method Based on Constant Angular Velocity and Velocity Kinematics Model. In Proceedings of the 2023 42nd Chinese Control Conference (CCC), Tianjin, China, 24–26 July 2023; pp. 4357–4362. [Google Scholar]
  41. Abbas, M.T.; Jibran, M.A.; Song, W.C.; Afaq, M. An adaptive approach to vehicle trajectory prediction using multimodel Kalman filter. Trans. Emerg. Telecommun. Technol. 2020, 31, e3932. [Google Scholar] [CrossRef]
  42. Xie, G.; Gao, H.; Qian, L.; Huang, B.; Li, K.; Wang, J. Vehicle Trajectory Prediction by Integrating Physics- and Maneuver-Based Approaches Using Interactive Multiple Models. IEEE Trans. Ind. Electron. 2018, 65, 5999–6008. [Google Scholar] [CrossRef]
  43. Simon, D. Kalman filtering with state constraints: A survey of linear and nonlinear algorithms. IET Control Theory Appl. 2010, 4, 1303–1318. [Google Scholar] [CrossRef]
  44. Li, Q.; Li, R.; Ji, K.; Dai, W. Kalman Filter and Its Application. In Proceedings of the 2015 8th International Conference on Intelligent Networks and Intelligent Systems (ICINIS), Tianjin, China, 1–3 November 2015; pp. 74–77. [Google Scholar]
  45. D’Arco, M.; Fratelli, L.; Graber, G.; Guerritore, M. Detection and Tracking of Moving Objects Using a Roadside LiDAR System. IEEE Instrum. Meas. Mag. 2024, 27, 49–56. [Google Scholar] [CrossRef]
  46. Gasparetto, A.; Boscariol, P.; Lanzutti, A.; Vidoni, R. Path Planning and Trajectory Planning Algorithms: A General Overview; Springer International Publishing: Berlin/Heidelberg, Germany, 2015; Volume 29. [Google Scholar]
  47. Zhang, H.y.; Lin, W.m.; Chen, A.x. Path Planning for the Mobile Robot: A Review. Symmetry 2018, 10, 450. [Google Scholar] [CrossRef]
  48. Sánchez-Ibáñez, J.R.; Pérez-Del-Pulgar, C.J.; García-Cerezo, A. Path Planning for Autonomous Mobile Robots: A Review. Sensors 2021, 21, 7898. [Google Scholar] [CrossRef]
  49. Czubenko, M.; Kowalczuk, Z. A Simple Neural Network for Collision Detection of Collaborative Robots. Sensors 2021, 21, 4235. [Google Scholar] [CrossRef]
  50. Das, N.; Yip, M. Learning-Based Proxy Collision Detection for Robot Motion Planning Applications. IEEE Trans. Robot. 2020, 36, 1096–1114. [Google Scholar] [CrossRef]
  51. Qiangqiang, Y.; Tian, Y.; Wang, Q.; Wang, S. Control Strategies on Path Tracking for Autonomous Vehicle: State of the Art and Future Challenges. IEEE Access 2020, 8, 161211–161222. [Google Scholar]
  52. Peng, H.; Wang, W.; An, Q.; Xiang, C.; Li, L. Path Tracking and Direct Yaw Moment Coordinated Control Based on Robust MPC with the Finite Time Horizon for Autonomous Independent-Drive Vehicles. IEEE Trans. Veh. Technol. 2020, 69, 6053–6066. [Google Scholar] [CrossRef]
  53. Mohammed, A.; Schmidt, B.; Wang, L. Active collision avoidance for human–robot collaboration driven by vision sensors. Int. J. Comput. Integr. Manuf. 2017, 30, 970–980. [Google Scholar] [CrossRef]
  54. Zeng, L.; Bone, G.M. Mobile robot collision avoidance in human environments. Int. J. Adv. Robot. Syst. 2013, 10, 41. [Google Scholar] [CrossRef]
  55. Almasri, M.M.; Alajlan, A.M.; Elleithy, K.M. Trajectory planning and collision avoidance algorithm for mobile robotics systems. IEEE Sens. J. 2016, 16, 5021–5028. [Google Scholar] [CrossRef]
  56. Onnasch, L.; Roesler, E. A taxonomy to structure and analyze human–robot interaction. Int. J. Soc. Robot. 2021, 13, 833–849. [Google Scholar] [CrossRef]
  57. Coronado, E.; Kiyokawa, T.; Ricardez, G.A.G.; Ramirez-Alpizar, I.G.; Venture, G.; Yamanobe, N. Evaluating quality in human-robot interaction: A systematic search and classification of performance and human-centered factors, measures, and metrics towards an Industry 5.0. J. Manuf. Syst. 2022, 63, 392–410. [Google Scholar] [CrossRef]
  58. Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-Coupled Lidar Inertial Odometry via Smoothing and Mapping. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 5135–5142. [Google Scholar]
  59. Yang, B.; Liang, M.; Urtasun, R. HDNET: Exploiting HD Maps for 3D Object Detection. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020. [Google Scholar]
  60. Patle, B.K.; Pandey, A.; Parhi, D.R.K.; Jagadeesh, A.J.D.T. A review: On path planning strategies for navigation of mobile robots. Def. Technol. 2019, 15, 582–606. [Google Scholar] [CrossRef]
  61. Pandey, A.; Pandey, S.; Parhi, D.R. Mobile robot navigation and obstacle avoidance techniques: A review. Int. Robot. Autom. J. 2017, 2, 22. [Google Scholar] [CrossRef]
  62. Gao, J.; Ye, W.; Guo, J.; Li, Z. Deep Reinforcement Learning for Indoor Mobile Robot Path Planning. Sensors 2020, 20, 5493. [Google Scholar] [CrossRef]
  63. Li, Z.; Deng, J.; Lu, R.; Xu, Y.; Bai, J.; Su, C.Y. Trajectory-Tracking Control of Mobile Robot Systems Incorporating Neural-Dynamic Optimized Model Predictive Approach. IEEE Trans. Syst. Man Cybern. Syst. 2016, 46, 740–749. [Google Scholar] [CrossRef]
  64. Xiao, X.; Liu, B.; Warnell, G.; Stone, P. Motion Planning and Control for Mobile Robot Navigation Using Machine Learning: A Survey. Auton. Robot. 2022, 46, 569–597. [Google Scholar] [CrossRef]
  65. Yasin, J.N.; Mohamed, S.A.S.; Haghbayan, M.; Heikkonen, J.; Tenhunen, H.; Plosila, J. Unmanned Aerial Vehicles (UAVs): Collision Avoidance Systems and Approaches. IEEE Access 2020, 8, 105139–105155. [Google Scholar] [CrossRef]
  66. Sato, M.; Mikawa, M.; Fujisawa, M.; Hiiragi, W. Social Norm-Based Collision Avoidance in Human-Robot Coexistence Environment. In Proceedings of the IECON 2018—44th Annual Conference of the IEEE Industrial Electronics Society, Washington, DC, USA, 21–23 October 2018; pp. 3801–3806. [Google Scholar]
  67. Polack, P.; Altché, F.; d’Andrea Novel, B.; de La Fortelle, A. The kinematic bicycle model: A consistent model for planning feasible trajectories for autonomous vehicles. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 812–818. [Google Scholar]
  68. Gfrerrer, A. Geometry and kinematics of the Mecanum wheel. Comput. Aided Geom. Des. 2008, 25, 784–791. [Google Scholar] [CrossRef]
  69. Chakraborty, N.; Ghosal, A. Kinematics of wheeled mobile robots on uneven terrain. Mech. Mach. Theory 2004, 39, 1273–1287. [Google Scholar] [CrossRef]
  70. Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; Beijbom, O. PointPillars: Fast Encoders for Object Detection from Point Clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 12689–12697. [Google Scholar] [CrossRef]
  71. Li, J.; Luo, C.; Yang, X. PillarNeXt: Rethinking Network Designs for 3D Object Detection in LiDAR Point Clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 17567–17572. [Google Scholar] [CrossRef]
  72. Akai, N.; Morales, L.Y.; Takeuchi, E.; Yoshihara, Y.; Ninomiya, Y. Robust Localization Using 3D NDT Scan Matching with Experimentally Determined Uncertainty and Road Marker Matching. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 1356–1363. [Google Scholar]
  73. Lemardelé, C.; Estrada, M.; Pagès, L.; Bachofner, M. Potentialities of Drones and Ground Autonomous Delivery Devices for Last-Mile Logistics. Transp. Res. Part E 2021, 149, 102289. [Google Scholar] [CrossRef]
Figure 1. The 4WIS mobile robot platform used in this study.
Figure 2. Hardware architecture of mobile robot.
Figure 3. One of the primary driving methods employed by the 4WIS mobile robot. (a) Illustration of the Parallel mode; (b) illustration of the bicycle model for Parallel mode.
Figure 5. Kinematic model of Parallel mode.
Figure 6. Kinematic model of Ackermann mode.
Figure 7. Flowchart for controlling a mobile robot.
Figure 8. Quadrant-based steering angle determination for wheel control.
Figure 9. Example of mobile robot trajectory calculation 3 s later at different speed settings.
Figure 10. Overall structure of mobile robot trajectory calculation.
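As a rough illustration of the trajectory calculation summarized in Figures 9 and 10, the sketch below projects the robot's pose over a fixed 3 s horizon from its commanded linear and angular velocities. The function name, integration step, and the simple unicycle-style forward integration are assumptions for illustration, not the authors' exact implementation.

```python
import math

def project_trajectory(x, y, yaw, v, omega, horizon=3.0, dt=0.1):
    """Project future robot poses over a fixed time horizon.

    x, y, yaw : current pose (m, m, rad)
    v         : commanded linear velocity (m/s)
    omega     : commanded angular velocity (rad/s)
    horizon   : prediction window in seconds (3 s in Figure 9)
    dt        : integration step in seconds
    Returns a list of (t, x, y, yaw) samples.
    """
    poses = []
    t = 0.0
    while t <= horizon:
        poses.append((t, x, y, yaw))
        # Simple unicycle-style forward integration of the commanded twist.
        x += v * math.cos(yaw) * dt
        y += v * math.sin(yaw) * dt
        yaw += omega * dt
        t += dt
    return poses

# Example: a robot at the origin with a gentle turn rate of 0.1 rad/s;
# higher speeds sweep out a longer arc over the same 3 s window.
for speed_kmh in (2, 3, 4, 5):
    path = project_trajectory(0.0, 0.0, 0.0, speed_kmh / 3.6, 0.1)
    print(speed_kmh, "km/h ->", path[-1])
```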
Figure 11. (a) Preset human size parameters; (b) real-time human detection results.
Figure 12. Human detection algorithm architecture.
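The human detection step in Figures 11 and 12 relies on preset human size parameters applied to LiDAR clusters. The sketch below shows one way such a size gate could look; the Cluster fields and the numeric thresholds are hypothetical placeholders, not the values used in the paper.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    width: float   # m, extent along x
    depth: float   # m, extent along y
    height: float  # m, extent along z

# Hypothetical size gates; Figure 11a presets human dimensions,
# but these exact ranges are illustrative assumptions.
HUMAN_WIDTH = (0.2, 1.0)
HUMAN_DEPTH = (0.2, 1.0)
HUMAN_HEIGHT = (1.0, 2.0)

def is_human_candidate(c: Cluster) -> bool:
    """Accept a LiDAR cluster as a human candidate if its bounding box
    falls inside the preset size ranges."""
    return (HUMAN_WIDTH[0] <= c.width <= HUMAN_WIDTH[1]
            and HUMAN_DEPTH[0] <= c.depth <= HUMAN_DEPTH[1]
            and HUMAN_HEIGHT[0] <= c.height <= HUMAN_HEIGHT[1])

clusters = [Cluster(0.5, 0.4, 1.7), Cluster(2.5, 0.8, 1.2)]
print([is_human_candidate(c) for c in clusters])  # [True, False]
```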
Figure 13. Visualization of real-time trajectory prediction results.
Figure 14. Trajectory prediction system architecture.
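For the trajectory prediction stage in Figures 13 and 14, a minimal constant-velocity propagation of a tracked human state is sketched below, assuming a state vector [x, y, vx, vy] such as a Kalman filter tracker would maintain; the state layout, horizon, and sampling step are assumptions.

```python
import numpy as np

def predict_positions(state, horizon=3.0, dt=0.5):
    """Propagate a constant-velocity state [x, y, vx, vy] forward in time.

    Returns predicted (t, x, y) samples over the horizon. A full tracker
    would also propagate covariance; only the mean is shown here.
    """
    F = lambda t: np.array([[1, 0, t, 0],
                            [0, 1, 0, t],
                            [0, 0, 1, 0],
                            [0, 0, 0, 1]], dtype=float)
    out = []
    t = dt
    while t <= horizon + 1e-9:
        x = F(t) @ state
        out.append((t, x[0], x[1]))
        t += dt
    return out

# Example: a person 4 m ahead walking toward the robot at about 0.56 m/s (2 km/h).
print(predict_positions(np.array([4.0, 0.0, -0.56, 0.0])))
```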
Figure 15. Collision detection approaches: (a) Traditional path-based detection; (b) Conventional method limitations; (c) Unnecessary collision responses; (d) Proposed trajectory prediction method.
Figure 16. Collision detection system visualization.
Figure 17. Overall flowchart of proposed collision detection system.
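The core idea behind the proposed method in Figures 15–17 is that a collision is flagged only when the robot and a detected human are expected to occupy the same region during overlapping time intervals, not merely when their paths cross. A minimal sketch of such a time-interval check is given below; the grid-cell discretization, safety radii, and time margin are illustrative assumptions.

```python
def occupancy_intervals(samples, radius, cell=0.5):
    """Map timestamped positions to grid cells with their occupancy intervals.

    samples : iterable of (t, x, y)
    radius  : safety radius around the agent (m)
    cell    : grid resolution (m)
    Returns {cell: (t_min, t_max)}.
    """
    cells = {}
    reach = int(radius / cell) + 1
    for t, x, y in samples:
        cx, cy = int(round(x / cell)), int(round(y / cell))
        for i in range(cx - reach, cx + reach + 1):
            for j in range(cy - reach, cy + reach + 1):
                # Mark every cell whose center lies within the safety radius.
                if (i * cell - x) ** 2 + (j * cell - y) ** 2 <= radius ** 2:
                    lo, hi = cells.get((i, j), (t, t))
                    cells[(i, j)] = (min(lo, t), max(hi, t))
    return cells

def time_interval_collision(robot_samples, human_samples,
                            robot_radius=0.5, human_radius=0.4, margin=0.5):
    """Flag a collision only if both agents occupy a shared cell during
    overlapping (margin-padded) time intervals, rather than whenever the
    paths merely intersect in space."""
    robot_cells = occupancy_intervals(robot_samples, robot_radius)
    human_cells = occupancy_intervals(human_samples, human_radius)
    for c, (r0, r1) in robot_cells.items():
        if c in human_cells:
            h0, h1 = human_cells[c]
            if r0 - margin <= h1 and h0 <= r1 + margin:
                return True
    return False

# Example: robot heading along +x at ~0.56 m/s while a human crosses its lane
# at about the same time -> overlapping intervals -> collision flagged.
robot = [(0.1 * k, 0.56 * 0.1 * k, 0.0) for k in range(31)]
human = [(0.1 * k, 1.0, 1.0 - 0.56 * 0.1 * k) for k in range(31)]
print(time_interval_collision(robot, human))  # True
```

Combined with the robot trajectory projection and the human motion prediction sketched earlier, this yields a per-cycle collision flag that ignores path crossings separated in time, which is the distinction Figure 15 draws against path-only detection.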
Figure 18. HD map of experimental environment. (a) A 3D point cloud visualization of the test environment; (b) Top-view representation of the indoor space showing the corridor and room layouts.
Figure 19. Environment mapping and localization system: (a) NDT matching visualization for real-time localization; (b) generated path planning overlay on the constructed HD map.
Figure 20. First experimental setup and real-time visualization of the mobile robot path.
Figure 21. First experimental setup and real-time visualization of mobile robot path.
Figure 22. Comparison between command velocities and actual velocities of a 4WIS mobile robot in Ackermann mode during autonomous driving: (a) 2 km/h, (b) 3 km/h, (c) 4 km/h, and (d) 5 km/h. The red lines represent command values and blue lines show the actual robot response. The results demonstrate increasing trajectory tracking errors as the autonomous driving speed increases, particularly noticeable in both the linear velocity (linear.x) and angular velocity (angular.z) measurements.
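The comparison in Figure 22 between command and measured velocities can be reproduced offline from logged velocity samples. The sketch below computes a simple RMSE between the two signals; the log format and the choice of RMSE as the error metric are assumptions, and in a ROS setup the samples would typically come from the command velocity (linear.x, angular.z) and odometry streams.

```python
import math

def tracking_rmse(cmd, actual):
    """Root-mean-square error between commanded and measured velocity
    samples, assumed to be time-aligned lists of equal length."""
    assert len(cmd) == len(actual) and cmd, "logs must be aligned and non-empty"
    return math.sqrt(sum((c - a) ** 2 for c, a in zip(cmd, actual)) / len(cmd))

# Illustrative logs: linear.x held at 2 km/h (~0.56 m/s) vs. a noisier
# measured response; in Figure 22 the gap grows with commanded speed.
cmd_linear = [0.56] * 5
act_linear = [0.52, 0.55, 0.58, 0.50, 0.54]
print(round(tracking_rmse(cmd_linear, act_linear), 3))
```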
Figure 23. Human detection performance in Parallel and Ackermann modes at various speeds.
Figure 24. Time-to-collision prediction for multiple human positions in Parallel mode.
Figure 25. Human tracking results: (a) mobile robot speed of 2 km/h, (b) mobile robot speed of 3 km/h.
Figure 26. Comparison of collision detection activation time between conventional and proposed methods at different mobile robot speeds.
Table 1. Steering angle criteria based on control signal format and wheel quadrant.

Linear.x   Linear.y   Quadrant   Direction   Steering Angle (q)
+          +          1          Front       +
+          −          2          Front       −
−          +          3          Rear        +
−          −          4          Rear        −
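Table 1 can be read as a small lookup from the signs of the commanded linear.x and linear.y components to the wheel quadrant, the leading wheel group, and the sign of the steering angle. A minimal sketch of that lookup follows; the atan2-based magnitude and the handling of zero components are illustrative assumptions, not the controller's exact rule.

```python
import math

def steering_from_command(linear_x, linear_y):
    """Resolve quadrant, leading wheel group, and signed steering angle
    from the signs of the commanded velocity components (cf. Table 1)."""
    if linear_x >= 0 and linear_y >= 0:
        quadrant, direction, sign = 1, "front", 1
    elif linear_x >= 0 and linear_y < 0:
        quadrant, direction, sign = 2, "front", -1
    elif linear_x < 0 and linear_y >= 0:
        quadrant, direction, sign = 3, "rear", 1
    else:
        quadrant, direction, sign = 4, "rear", -1
    # Steering magnitude for Parallel mode taken as atan2(|y|, |x|);
    # this magnitude rule is an illustrative assumption.
    magnitude = math.atan2(abs(linear_y), abs(linear_x))
    return quadrant, direction, sign * magnitude

print(steering_from_command(0.5, 0.2))    # quadrant 1, front wheels, positive angle
print(steering_from_command(-0.5, -0.2))  # quadrant 4, rear wheels, negative angle
```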
Table 2. The specifications of the 4WIS mobile robot.

Size                 738 mm × 500 mm × 338 mm
Wheelbase            494 mm
Tread                364 mm
Ground Clearance     107 mm
Wheel Hub Radius     100 mm
Brake Type           Electronic brake
Suspension Type      Swing arm suspension
Table 3. Experimental specifications and parameters.

LiDAR channel/sampling rate    16 ch, 10 Hz
Human walking speed            2 km/h
Robot velocity                 0–3 km/h
HD map resolution              50 mm/pixel