
1 Introduction

Tech United Eindhoven represents Eindhoven University of Technology at the RoboCup championships. The team participates in the Middle-Size League and the RoboCup@Home league and consists of PhD, MSc, and BSc students, former TU/e students, and academic staff members from different departments. The team started participating in the Middle-Size League in 2006. In 2011 the service robot AMIGO was added to the team to participate in the RoboCup@Home league. Knowledge acquired in designing our soccer robots was used extensively in creating this service robot.

This paper starts with a short introduction to our robot hardware and software platform in Sect. 2, and then elaborates on the main software improvements that enabled us to win the 2016 RoboCup competition.

Section 3 describes improvements in the area of perception. It starts by explaining a method to obtain 3D ball information by combining 2D ball observations from multiple robots. Section 3.2 continues on 3D ball perception and explains the integration of a Kinect v2 camera on our robots to obtain a full 3D image of the environment. Section 3.3 describes the final perception improvement: an enhanced obstacle detection method using omnivision images. Section 4 describes our defensive strategy, which was modified during the tournament, as well as the penalty-blocking strategy of our goalkeeper, which ultimately won us the world championship.

Finally, Sect. 5 elaborates on the tournament results, including match results and match statistics.

2 Robot Platform

Our robots are named TURTLEs (an acronym for Tech United RoboCup Team: Limited Edition). The platform is driven by three omnidirectional wheels and carries an omnivision camera on top for localization. The software on the robot is executed on an industrial Beckhoff PC running Linux.

2.1 Hardware

The current hardware is based on the 2009 generation, with several small redesigns to improve ball handling and robustness, see Fig. 1. Development of these robots started in 2005. During tournaments and numerous demonstrations, this generation of soccer robots has proven to have evolved into a very robust platform. The schematic representation published in the second section of our 2014 team description paper [4] covers the main outline of our robot design. For 2016, the upper body of the robot was redesigned to integrate Kinect v2 cameras and to create a more robust frame for the omnivision unit on top of the robot. This removes the need to recalibrate the mirror parameters when the top of the robot is hit by a ball. A detailed list of hardware specifications, along with CAD files of the base, upper body, ball-handling and shooting mechanism, has been published on the ROP wiki.

Fig. 1. Fifth-generation TURTLE robots, with the goalkeeper robot on the left-hand side.

2.2 Software

The software on the robots is divided into three main processes: Vision, Worldmodel and Motion. These processes communicate with each other through a real-time database (RTDB) designed by the CAMBADA team [7]. The Vision process is responsible for environment perception using omnivision images and provides the locations of the ball, the obstacles, and the robot itself. The Worldmodel combines the ball, obstacle and robot position information provided by Vision with data acquired from the other team members into a unified representation of the world. The Motion process is based on a layered software model: at the top, the strategy layer defines the high-level team strategy based on worldmodel information; the second layer consists of actions, executed by the roles deployed on the TURTLEs, which use a limited set of basic skills such as shooting, dribbling with the ball, or simply driving; the lowest layer contains the motion control of the robot actuators.
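The layered Motion model can be pictured as a chain of dispatches from strategy down to skills. The following Python sketch only mirrors the structure described above; all class and function names (Strategy, AttackerRole, shoot, dribble, drive) are hypothetical placeholders, not our actual implementation.

```python
# Minimal sketch of the layered Motion model: strategy -> roles/actions -> skills.
# All names here are illustrative placeholders, not the actual implementation.

def shoot(world): ...    # basic skill: kick the ball
def dribble(world): ...  # basic skill: drive while keeping the ball
def drive(world): ...    # basic skill: move to a target position

class AttackerRole:
    """A role selects an action; the action maps world state to a skill."""
    def select_skill(self, world):
        if world.has_ball:
            return shoot if world.clear_shot else dribble
        return drive

class Strategy:
    """Top layer: assigns roles to robots based on worldmodel information."""
    def assign_role(self, robot_id, world):
        return AttackerRole()  # placeholder: real assignment is dynamic

def motion_step(strategy, robot_id, world):
    role = strategy.assign_role(robot_id, world)  # strategy layer
    skill = role.select_skill(world)              # action/role layer
    return skill(world)  # skill layer; output feeds the low-level motion control
```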

Inter-robot communication is based on UDP multicast at a fixed message rate of 25 Hz. The communication application sends a small selection of records, written to the real-time database by the three main processes. The communicated information is used by the worldmodel of all robots and to execute multi-robot strategies such as passing. Furthermore, the information can be received by any base station next to the field for diagnostic purposes.
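A minimal sketch of such a fixed-rate multicast sender is shown below; the group address, port, and payload serialization are placeholders, not our actual configuration.

```python
import socket
import time

# Hedged sketch of 25 Hz UDP multicast sharing of RTDB records.
# Group address, port and payload format are placeholder assumptions.
GROUP, PORT, RATE_HZ = "224.16.32.75", 2000, 25

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

def broadcast_shared_records(serialize_records):
    """Send the shared records to all peers at a fixed message rate."""
    period = 1.0 / RATE_HZ
    while True:
        payload = serialize_records()      # shared RTDB records -> bytes
        sock.sendto(payload, (GROUP, PORT))
        time.sleep(period)                 # fixed 25 Hz message rate
```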

3 Improved Perception

The ball and obstacle perception of the robots has been improved in three ways, and this section is structured accordingly. Section 3.1 describes an algorithm for 3D ball position estimation. Section 3.2 describes the implementation of the Kinect image processing and its integration in the robot using RTDB. Section 3.3 elaborates on obstacle detection using omnivision images.

3.1 3D Ball Position Estimation Using Cooperative Sensing

This research was carried out together with the CAMBADA team from Aveiro, Portugal [8]. To detect the position of the ball, most teams have equipped their robots with a catadioptric vision system, also known as omnivision [1, 2, 5]. Currently, the ball position is estimated by projecting the ball found in the 2D image onto the field in the \(x\text {-}y\) plane, assuming that the ball is always on the ground when seen by the omnivision unit. Obtaining the 3D ball position \(\left( x_{b},y_{b},z_{b}\right) \) enables the robot to follow the correct ball position in the \(x\text {-}y\) plane. Moreover, the height \(z_{b}\) of the ball enables the interception of lob passes [2]. Cooperative sensing can be used to determine the ball position in three dimensions by triangulating omnivision camera data from multiple robots, as represented graphically in Fig. 2(a). Here, \(P_1\) and \(P_2\) are the projected ball positions estimated by robots 1 and 2 respectively, and \(P_{ball}\) is the actual ball position.
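The triangulation itself can be illustrated with a small sketch: each robot's camera position \(C_i\) together with its ground projection \(P_i\) defines a line of sight, and the 3D ball position can be approximated by the point closest to both lines. This is a minimal sketch under that assumption, not the actual implementation.

```python
import numpy as np

def ground_projection(cam, ball):
    """Where an omnivision unit at `cam` projects `ball` on the floor:
    the ray cam -> ball, extended to the ground plane z = 0."""
    t = cam[2] / (cam[2] - ball[2])
    return cam + t * (ball - cam)

def triangulate(c1, p1, c2, p2):
    """Midpoint of the shortest segment between the two lines of sight
    (camera c_i through ground projection p_i)."""
    d1 = (p1 - c1) / np.linalg.norm(p1 - c1)
    d2 = (p2 - c2) / np.linalg.norm(p2 - c2)
    n = np.cross(d1, d2)  # common perpendicular of the two lines
    # Solve c1 + t1*d1 + s*n = c2 + t2*d2 for (t1, t2, s).
    t1, t2, _ = np.linalg.solve(np.column_stack((d1, -d2, n)), c2 - c1)
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

# Noise-free example: cameras at 0.8 m height, ball hovering at 0.5 m.
c1, c2 = np.array([0.0, 0.0, 0.8]), np.array([4.0, 0.0, 0.8])
ball = np.array([2.0, 2.0, 0.5])
p1, p2 = ground_projection(c1, ball), ground_projection(c2, ball)
print(triangulate(c1, p1, c2, p2))  # ~ [2.0, 2.0, 0.5]
```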

Fig. 2. 3D ball position estimation using multi-robot triangulation.

3.1.1 Algorithm Structure

A schematic representation of the triangulation algorithm is presented in Fig. 2(b). At every execution, the new information from the robot itself and its peers is stored in a buffer, quantized to time instants defined by the executions. The state of the algorithm as presented in Fig. 2(b) is at time \(t_{n}\); information from peers is delayed by the robot-to-robot communication. For triangulation, the algorithm selects a time instant at which information from multiple robots is available; in the state represented in Fig. 2(b), \(t_{n-4}\) is selected. The available 2D ball projections at this time instant are triangulated, and the obtained 3D ball position is filtered with a Kalman filter, which combines the new measurement with a model of the ball. This yields a (filtered) 3D ball position at time instant \(t_{n-4}\), which is then fast-forwarded to \(t_{n}\) using the ball model.
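A minimal sketch of this buffered update, reusing the triangulate() helper from the previous sketch, is shown below. The buffer layout, the gravity-only ball model, and the `kalman` object with update()/state() methods are assumptions for illustration.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity, m/s^2 (simple ballistic ball model)

def latest_common_instant(buffer, n_robots=2):
    """Pick the most recent buffered instant with data from enough robots."""
    for t in sorted(buffer, reverse=True):
        if len(buffer[t]) >= n_robots:
            return t
    return None

def step(buffer, kalman, t_now, dt):
    """Triangulate at a common past instant, filter, then fast-forward."""
    t_meas = latest_common_instant(buffer)
    if t_meas is None:
        return kalman.state()                 # no new multi-robot measurement
    (c1, p1), (c2, p2) = buffer[t_meas][:2]   # two (camera, projection) pairs
    z = triangulate(c1, p1, c2, p2)           # 3D measurement at t_meas
    kalman.update(z, t_meas)                  # fuse measurement with ball model
    # Fast-forward the filtered state from t_meas to t_now using the model.
    pos, vel = kalman.state()
    for _ in range(int(round((t_now - t_meas) / dt))):
        pos, vel = pos + vel * dt, vel + G * dt   # ballistic prediction step
    return pos, vel
```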

3.1.2 Results

The algorithm presented in Fig. 2(b) has been implemented on the robots, and two kinds of tests have been executed: with a static ball and with a moving ball. Tests with a static ball show that the average accuracy obtained with the algorithm is 10.6 cm. Note that the mapping from camera coordinates to robot coordinates was not calibrated specifically for this test. During the tests with a moving ball, an attempt was made to track the position of the ball from the moment it was kicked by a robot (at 12 m/s). To obtain a good estimate of the ball position once the ball has exceeded the height of the robot, the state of the Kalman filter has to have converged before that moment; this requires receiving enough samples from peer robots. Calculations show that this is satisfied if the robot-to-robot communication is performed at 40 Hz.

3.2 Integration of the Kinect v2 Camera

For three-dimensional ball recognition, we have so far been using the Microsoft Kinect v1. While this device is a great addition to the omnivision unit, it also has drawbacks that make it unreliable and suboptimal. There are four main shortcomings: (i) the CCD has low sensitivity, so the exposure time has to be increased, which causes the Kinect to shoot video at only 15 Hz instead of the theoretical maximum of 30 Hz; (ii) the CCD has poor color quality, making color thresholding hard and tuning cumbersome; (iii) there are many robustness problems, causing one of the image streams to drop out, or the complete device to crash when mounted on a robot; and (iv) the depth range is limited to 6 m, which means that a ball at full speed (12 m/s) arrives 0.5 s after the first possible detection.

A possible solution to the Kinect v1's shortcomings is the Kinect v2 [3]. It has a higher-quality CCD with better color reproduction and improved sensitivity; it is therefore easier to find the ball in the color image, and the camera can always run at 30 Hz. The depth range has increased to 9 m, giving the goalkeeper more time to react. Early tests have also not shown any dropouts of the device or its video streams.

To process the increased amount of data from the Kinect v2, a GPU is required. The robot software runs on an industrial computer which does not have a GPU, nor can it be extended to include one. Therefore a dedicated GPU development board, the Jetson TK1 [6], is used to process all data from the Kinect. This board incorporates a 192-core GPU and a quad-core ARM CPU, which is just enough to process all data coming from one Kinect. The board runs Ubuntu 14.04 with CUDA for easy parallel computation, which enables us to offload some of the image-processing operations to the GPU.

First, the video stream data is processed on the GPU. The ball is then detected using the following steps:

1. The color image is registered to the depth image, i.e., for each pixel in the depth image, the corresponding pixel in the color image is determined.

2. Color segmentation is performed on the color image using a Google-annotated database that contains the probability of an RGB value belonging to a given color.

3. A floodfill algorithm is performed for blob detection (in CUDA).

4. The blobs are sorted based on their size/distance ratio and width/height ratio (see the sketch after this list):

$$\begin{aligned} p = \left[ 1+\alpha (w-h)^2\right] ^{-1}\left[ 1+\alpha ^2(wh-4r^2)^2\right] ^{-1} \end{aligned}$$
(1)

with w and h the width and height of the blob respectively, r the radius of the ball, and \(\alpha \) a scaling factor, all calculated in meters.

5. The found balls are transformed into robot coordinates.
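The blob score of Eq. (1) is cheap to evaluate; the sketch below shows one way to rank blobs with it. The blob representation and the values of \(\alpha \) and the ball radius are illustrative assumptions.

```python
# Hedged sketch of ranking blobs with the score of Eq. (1).
# Blob fields, ALPHA and BALL_RADIUS are illustrative assumptions.
from dataclasses import dataclass

BALL_RADIUS = 0.11  # approximate ball radius in meters (assumption)
ALPHA = 50.0        # scaling factor, tuning parameter (assumption)

@dataclass
class Blob:
    w: float  # blob width in meters
    h: float  # blob height in meters

def score(b: Blob, r: float = BALL_RADIUS, a: float = ALPHA) -> float:
    # p = [1 + a (w-h)^2]^-1 [1 + a^2 (w h - 4 r^2)^2]^-1
    squareness = 1.0 + a * (b.w - b.h) ** 2                # penalize non-square blobs
    area = 1.0 + a ** 2 * (b.w * b.h - 4.0 * r ** 2) ** 2  # penalize wrong area
    return 1.0 / (squareness * area)

def rank(blobs):
    return sorted(blobs, key=score, reverse=True)  # best ball candidate first
```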

The result is an almost 100% detection rate at 30 Hz when the ball is inside the field of view of the camera and closer than 9 m. False positives are uncommon and, when present, are filtered out by the ball model.

3.2.1 RTDB Multi-network Extension

We use the Real-time Database library of CAMBADA (RTDB, [7]) for inter-process as well as inter-robot communication. This database is based on records that can be marked either local or shared. A communication process (comm) runs on every robot, broadcasting the shared records over WiFi using multicast UDP. The same process also receives data to update the local RTDB instance with shared data from peers. This provides a flexibly configurable communication architecture for inter-process and inter-robot communication.

With the introduction of the Jetson TK1 board for the Kinect v2 image processing, the processes on the robot are no longer executed on a single processing unit. As a result, the standard RTDB library can no longer handle all inter-process communication on a robot; therefore RTDB and comm were extended to support multiple networks. The new communication architecture is illustrated in Fig. 3. Each robot PC runs two instances of comm: one broadcasts and listens on the wireless interface for inter-robot communication, while the second is connected to a wired LAN interface leading to the Jetson board.

Fig. 3. Inter-process and inter-robot communication architecture using RTDB.

Modifications have been made to RTDB and comm to enable this new configuration. First, a network configuration file has been introduced. This file describes, for each network, the multicast connection properties, the frequency at which comm should share the agent's shared records, and an option to mark the network as default so as to be fully backwards compatible. Two modifications have been added to RTDB to reduce the network traffic. The first is compression of the data just before a UDP packet is created: the complete payload of the packet, i.e., data and comm header, is compressed using zlib, which on average reduces the payload to about 70% of its original size. With the second modification, the user can specify in the RTDB configuration file which (shared) records have to be broadcast in a given network. For example, the robot PC (agents 1-5) illustrated in Fig. 3 shares data in two networks, configured such that all shared records are broadcast to all peers over the WiFi network, while only a subset is sent to the Jetson board over the LAN network; the Jetson board only needs to know the current robot position, not all team-related information. The implementation is fully backwards compatible: if the network is not specified in the RTDB configuration file, all shared records are broadcast.
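The compression step can be sketched in a few lines. The header and record layout below are placeholders; only the use of zlib on the complete payload reflects the description above.

```python
import zlib

# Hedged sketch of compressing a comm payload before the UDP send.
# The header/record layout is a placeholder; only the zlib step is as described.
def pack_payload(header: bytes, records: bytes) -> bytes:
    payload = header + records        # complete payload: comm header + data
    return zlib.compress(payload)     # reduces size to ~70% on average

def unpack_payload(packet: bytes) -> bytes:
    return zlib.decompress(packet)    # receiver side restores the payload
```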

Tournament Results. One of our weak points in previous years was blocking high balls shot at the goal: either the goalkeeper did not see them, or they were detected too late for it to react. During this tournament the goalkeeper was one of our strengths. Especially in games against teams with a strong attacking strategy, many high balls were shot at the goal, detected by the Kinect camera, and stopped by the goalkeeper. During the final match, Team Water shot twelve high balls at our goalie from a distance of more than four meters; eleven were detected by the Kinect, and eight were stopped.

3.3 Obstacle Detection Enhancements

During past RoboCup tournaments it was observed that the success rate of goal attempts was still too low: for the RoboCup tournament in Hefei in 2015 the success rate was approximately 20%, averaged over all matches, according to the logged game data. By improving the obstacle detection, the goalkeeper's position can be estimated more accurately, which will increase the success rate of shots at the goal.

The current obstacle detection method is a relatively simple approach that uses 200 radial rulers to detect dark objects in the image. The disadvantage of this approach is that the tangential resolution decreases dramatically with distance, so at larger distances only wide obstacles are detected reliably: with 200 rulers, the spacing at an 8 m distance is \(2\pi \cdot 8\,\mathrm{m}/200 \approx 0.25\,\mathrm{m}\). Considering the image resolution, a resolution of 0.03 m at 8 m distance could be achieved, which is about a factor of 8 better. The main improvement of the new algorithm therefore focuses on exploiting the available resolution in the tangential direction. The new method consists of the following steps (a sketch follows the list):

1. Iterate through the radii, starting from the inside outwards;

2. Apply an intensity threshold for each circle;

3. Apply a circular closing filter to fill small holes;

4. Collect candidate obstacles;

5. Split obstacles that are too wide;

6. Check mandatory features (the obstacle is inside the field and is large enough in both the tangential and the radial direction);

7. Collect all valid obstacles;

8. Update the mask with the found obstacles such that no obstacles can be found behind other obstacles.
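The following sketch outlines these steps on a polar intensity image (rows are radii from the robot outwards, columns are tangential angle bins). The thresholds, minimum sizes, the in_field callback, and the simplified mask handling are illustrative assumptions, not our tuned implementation; step 5 (splitting wide obstacles) is omitted for brevity.

```python
import numpy as np

# Hedged sketch of the circular obstacle scan on a polar image.
# DARK, MIN_BINS and CLOSE are illustrative, not tuned values.
DARK, MIN_BINS, CLOSE = 60, 3, 2

def find_runs(bits):
    """Start/end (end-exclusive) indices of consecutive True runs."""
    b = np.r_[False, bits, False].astype(np.int8)
    edges = np.flatnonzero(np.diff(b))
    return list(zip(edges[::2], edges[1::2]))

def detect_obstacles(polar_img, in_field):
    obstacles = []
    mask = np.ones(polar_img.shape[1], dtype=bool)      # angle bins still open
    for r, circle in enumerate(polar_img):              # 1. inside outwards
        dark = (circle < DARK) & mask                   # 2. intensity threshold
        for k in range(1, CLOSE + 1):                   # 3. circular closing
            dark |= np.roll(dark, k) & np.roll(dark, -k)
        for a0, a1 in find_runs(dark):                  # 4. candidate obstacles
            # 6. simplified mandatory checks (size, inside field)
            if a1 - a0 >= MIN_BINS and in_field(r, a0, a1):
                obstacles.append((r, a0, a1))           # 7. valid obstacle
                mask[a0:a1] = False                     # 8. hide what is behind
    return obstacles
```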

Fig. 4. Comparison of the old and the new obstacle detection algorithm.

When comparing the old and the new method on the robot, the results shown in Fig. 4(a) and (b) are obtained. In this experiment, a goalkeeper is positioned at about (−0.5, 9) m, facing forward. The dots in Fig. 4(a) illustrate where the obstacle was seen by the robot with the old and with the new method; the standard deviation is significantly reduced. Figure 4(b) shows that the detection range has also increased: the lines show the trajectory of the moving robot, and the color indicates whether the goalkeeper was detected from that position or not.

Tournament Results. After analyzing the RoboCup 2015 tournament matches, we found that our success rate of shots at the goal was around 20%, mainly caused by not detecting the goalkeeper. Analysis of our matches during RoboCup 2016 showed that the efficiency of shots at the goal was only slightly higher. This is mainly caused by our changed strategy, in which more shooting attempts were made, and probably by more effective defensive actions of our opponents. However, the number of shots directly at an opponent goalkeeper was reduced.

4 Improved Defensive Strategies

Two defensive strategies were improved during 2016. The first, described in Sect. 4.1, is our improved defense algorithm in standard attack situations. The second concerns our goalkeeper stopping penalties; the latter was of great importance during the final match, which ended in penalties after a 3-3 draw.

4.1 Defending Attack Actions

The rules with respect to defending in the RoboCup MSL are strict: when two robots from opposing teams are in a scrum, no other robot is allowed to make direct contact with the scrumming robots. This foul is illustrated in Fig. 5(a). When an opponent comes close to the goal, however, the defending team might want to increase the number of defenders on that opponent. Defending with two robots therefore requires an algorithm that ensures the “two-robots” rule is respected.

Fig. 5. Defending one opponent with multiple defenders. Left: violation of the “two-robots” rule. Right: proposed solution. Cyan: defending team, magenta: attacking team. (Color figure online)

During RoboCup 2016 a solution was implemented that makes defending one opponent with two robots possible without violating the “two-robots” rule. Figure 5(b) shows a situation in which the proposed algorithm controls the positions of the robots. Robot 4 always tries to gain possession of the ball, unless both the ball and robot 4 are in the area denoted by P; if the ball and the target of robot 4 are in area P, robot 4 positions itself on the edge of P, as shown in Fig. 5(b). Robot 2 positions itself on the line between the ball and the mid-point of the goal, close to the opponent with the ball. When robot 4 is already inside area P, robot 2 does not enter P and keeps some distance from robot 4 so as not to violate the “two-robots” rule.
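A minimal sketch of the two positioning rules is given below. Modeling area P as a disc around the goal, as well as the radius and margin values, are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch of the two-defender positioning rules. Area P is modeled
# as a disc around the goal; its radius and the margins are assumptions.
GOAL = np.array([0.0, -9.0])   # mid-point of our goal (field frame, assumption)
P_RADIUS, MARGIN = 2.5, 0.8    # radius of area P, robot-robot clearance

def in_area_p(pos):
    return np.linalg.norm(pos - GOAL) < P_RADIUS

def robot4_target(ball):
    if not in_area_p(ball):
        return ball                              # try to gain possession
    d = (ball - GOAL) / np.linalg.norm(ball - GOAL)
    return GOAL + P_RADIUS * d                   # stay on the edge of P

def robot2_target(ball, robot4_pos):
    d = (GOAL - ball) / np.linalg.norm(GOAL - ball)
    target = ball + MARGIN * d                   # on the ball-goal line
    if in_area_p(robot4_pos) and in_area_p(target):
        d_out = (target - GOAL) / np.linalg.norm(target - GOAL)
        target = GOAL + P_RADIUS * d_out         # keep clear of robot 4's area
    return target
```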

Fig. 6. The implemented solution. Left: preventing violation of the “two-robots” rule. Right: solution active during a match. Cyan: defending team, magenta: attacking team. (Color figure online)

Tournament Results. During the matches at RoboCup 2016 this concept proved to be very effective; Fig. 6(b) shows a still from the match against the Chinese team Water with the solution active. The concept was especially effective against teams with a relatively slow build-up of their attack, because our second defender has an attacking role during our own attack and therefore has to make its way from the other side of the field. An improved role assignment is a possible way to overcome this limitation and get the defense in position even faster.

4.2 Defending Penalties

In a penalty situation in the MSL, a robot shoots the ball from a distance of 3 m at a goal that is 2 m wide. The goalkeeper defending the goal has a maximum width of 0.7 m (\(\sqrt{2}\times 0.5\,\mathrm{m}\)). It is allowed to equip the goalkeeper with a movable frame that extends this width by 10 cm for one second, once every five seconds. An MSL robot can shoot a ball with a velocity of up to 12 m/s, so the reaction time of the goalkeeper is approximately 0.25 s. Within this time, the goalkeeper has to detect the shot direction and move to the right position. Even if the shot-direction estimation is neglected and the complete reaction time of 0.25 s is available for moving the goalkeeper 0.4 m to one side, an acceleration of approximately 14 m/s\(^2\) is required. Hence, in a real situation, it is impossible for the goalkeeper to stop a penalty shot at full speed close to the goal post. Therefore we implemented a basic algorithm that tries to position the goalkeeper at the right spot in the goal before the shot.
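As a back-of-the-envelope check on these numbers (a sketch assuming the keeper starts at rest and accelerates uniformly during the entire reaction time):

$$\begin{aligned} t = \frac{3\,\mathrm{m}}{12\,\mathrm{m/s}} = 0.25\,\mathrm{s}, \qquad d = \tfrac{1}{2}at^{2} \;\Rightarrow \; a = \frac{2d}{t^{2}} = \frac{2\cdot 0.4\,\mathrm{m}}{(0.25\,\mathrm{s})^{2}} \approx 13\,\mathrm{m/s^{2}}, \end{aligned}$$

in line with the approximately 14 m/s\(^2\) quoted above; any time spent estimating the shot direction only increases the required acceleration.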

During gameplay the goalkeeper only looks at the current ball position and, if the ball has a velocity larger than zero, predicts where it will cross the goal line; based on that, it moves to the best position in the goal to stop the ball. During a penalty session, however, the position of the opponent is taken into account as well. Since all MSL robots rotate around the ball to shoot in a certain direction, the goalkeeper can estimate the shot direction from the opponent's position relative to the ball. This is illustrated in Fig. 7(a) to (c).
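The estimate boils down to extending the line from the opponent's center through the ball to the goal line; below is a minimal sketch under that assumption. The goal-line coordinate and goal half-width are placeholders.

```python
import numpy as np

# Hedged sketch: estimate where the shot will cross the goal line by
# extending the opponent-to-ball direction. GOAL_Y and the half-width
# are placeholder values in a field-centered frame.
GOAL_Y, GOAL_HALF_WIDTH = -9.0, 1.0

def predicted_aim(opponent, ball):
    """Intersection of the line opponent -> ball with the goal line y = GOAL_Y."""
    d = ball - opponent                   # shooter is oriented along this vector
    if abs(d[1]) < 1e-6:
        return None                       # shot parallel to the goal line
    t = (GOAL_Y - ball[1]) / d[1]
    if t <= 0:
        return None                       # shot points away from our goal
    x = ball[0] + t * d[0]
    # Clamp to the goal mouth: the keeper never leaves the goal.
    return float(np.clip(x, -GOAL_HALF_WIDTH, GOAL_HALF_WIDTH))
```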

Fig. 7. Defending-penalty strategy: the goalkeeper estimates the shot direction based on the opponent position relative to the ball. (Color figure online)

The opponent robot (magenta 4) grabs the ball (Fig. 7(a)) and starts rotating around the ball to shoot at a goal corner. From the position of the opponent with respect to the ball, it is clear that the shot direction will be the right-hand side of the goal (Fig. 7(b)). The goalkeeper starts moving to that corner and is in time to block the shot.

Tournament Results. During RoboCup 2016, one penalty was awarded to our opponents during the round robins, and it was stopped by the goalkeeper. The final match ended in a draw (3-3) after extra time, and a penalty shoot-out was held to determine the winner. The goalkeeper was able to stop five out of five penalties by positioning itself on the correct side of the goal based on the opponent's position. Hence, the algorithm showed its value at the most important moment of the RoboCup 2016 competition.

5 Tournament Results

The previous sections of this paper elaborated on the improvements to the Tech United software during 2016. With these improvements we managed to win the MSL competition of 2016. In total we played eleven matches during the tournament: in the three round robins, nine matches were played, of which seven were won and two ended in a draw. The semi-final ended 5-0 and the final ended 4-3 after penalties. In total, Tech United scored 85 goals and the opponents scored eight.

During the tournament, the robots in the field drove 43.6 km in total. The goalkeeper, only active inside the goal area, was very active in blocking the goal; the total distance travelled by this robot alone was 3.8 km. During 99.5% of the match time, the omnivision unit or the laser range finder of the goalkeeper found its location; for the other robots a localization percentage of 96.5% was obtained. These numbers, together with a high percentage of match time with five active robots in the field (over 85%), show the robustness of our robot platform.

6 Conclusions

In this paper we have discussed our improvements during the 2015/2016 season, which enabled us to regain the world championship. The paper elaborated on improved perception: combining omnivision information from multiple robots for a more accurate ball position estimate, and integrating Kinect v2 cameras onto the robots, which improved the goalkeeper's perception of high balls. Furthermore, the new obstacle detection algorithm was described; with this algorithm the robots estimate obstacle positions more accurately and detect obstacles at a longer range, which made the attackers more effective in scoring goals. Our updated defensive strategy, during gameplay and during penalty sessions, was described, and the numbers show its effect: the penalty-blocking strategy in particular was very effective, enabling the goalkeeper to block five out of five penalties during the final match. The tournament statistics in the last section demonstrate the robustness of our improved robots.

Altogether, we consider the improvements made during 2016 successful for the tournament, while at the same time maintaining the attractiveness of our competition for a general audience.