WO2013111492A1 - Monitoring system - Google Patents
Monitoring system
- Publication number
- WO2013111492A1 (PCT/JP2012/083465)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- monitoring
- image
- terminal device
- image information
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Definitions
- the present invention relates to a monitoring system.
- a security device is known that detects the occurrence of an abnormality by installing multiple security camera devices in shopping streets, at store entrances, at home entrances, and elsewhere in the city, and monitoring the surrounding images captured by those security camera devices (Patent Document 1).
- An object of the present invention is to provide a monitoring system capable of transmitting monitoring information in real time.
- the present invention attaches cameras to a plurality of moving objects, acquires the position information of each moving object and the image information captured by its camera at a predetermined timing, and achieves the above object by suppressing transmission from surrounding moving objects when position information and image information are transmitted from a specific moving object.
- image information from cameras mounted on a plurality of mobile bodies that travel at random and the position information of those mobile bodies are acquired, and transmission from surrounding mobile bodies is suppressed when this information is transmitted, so the amount of communication data in the communication area is kept appropriate and the monitoring information can be transmitted in real time.
- FIG. 1 is a schematic diagram showing a monitoring system according to an embodiment of the present invention. FIG. 2 is a block diagram showing the monitoring system of FIG. 1. FIG. 3 is a flowchart showing the main control contents on the monitoring terminal device side of the monitoring system of FIG. 1. FIG. 4 is a flowchart showing the main control contents on the central monitoring device side of the monitoring system of FIG. 1. FIG. 5 is a perspective view showing the imaging range of the cameras in the monitoring system of FIG. 1. FIG. 6 is a top view showing an example of the arrangement of the in-vehicle cameras.
- the monitoring system is embodied as a monitoring system 1 that centrally monitors the security of a city at authorities such as a police station or fire station. That is, the position information of each of a plurality of moving bodies, the image information around each moving body, and time information are acquired at a predetermined timing, transmitted via wireless communication, and the position information is displayed on map information while the image information and time information are displayed on a display as necessary. For this purpose, the monitoring system 1 of this example includes, as shown in FIG. 1, a monitoring terminal device 10 that acquires monitoring information such as position information and image information, and a central monitoring device 20 that receives and processes the monitoring information via the telecommunications network 30.
- FIG. 2 is a block diagram illustrating a specific configuration of the monitoring terminal device 10 and the central monitoring device 20.
- in particular, when an incident or accident occurs, the monitoring system 1 of this example temporarily suppresses the transmission of image information from the moving bodies around the moving body that discovered it, thereby securing the communication line capacity needed to transmit the image information from the discoverer at high speed or with high quality, so that the image information from the scene can be monitored in real time.
- the monitoring terminal device 10 is a terminal device mounted on each of the plurality of moving bodies V, and has a position detection function that detects the position information of each of the plurality of moving bodies V, an image generation function that captures images of the periphery of the moving body to generate image information, a time detection function, an information acquisition control function that acquires the position information, image information, and time information at a predetermined timing, a communication function for outputting the position information, image information, and time information to the central monitoring device 20 and for receiving commands from the central monitoring device 20, and a function for reporting the occurrence of an abnormality. For this purpose, a plurality of in-vehicle cameras 11a to 11e, an image processing device 12, a communication device 13, an in-vehicle control device 14, a position detection device 15, and a notification button 16 are provided.
- the time information is mainly used for the post-analysis of the event and may be omitted.
- a mobile body whose transmission has been temporarily suppressed temporarily stores the image information acquired during that period and may transmit it as necessary once the suppression is released; since such information can be used for post-mortem analysis, it is also worthwhile to acquire time information for that purpose.
- the mobile body V on which the monitoring terminal device 10 is mounted is not particularly limited as long as it travels in the target monitoring area, and includes mobile bodies such as automobiles, motorcycles, industrial vehicles, and trams. Private vehicles V2 and emergency vehicles V3 are included, and a taxi or route bus V1 that travels randomly and constantly within a predetermined area is particularly preferable. FIG. 1 illustrates a taxi V1, a private vehicle V2, and an emergency vehicle V3 such as a police car, fire engine, or ambulance; these are collectively referred to as the moving body V or the passenger car V.
- Each moving body V includes a plurality of in-vehicle cameras 11a to 11e (hereinafter collectively referred to as cameras 11), an image processing device 12, a communication device 13, an in-vehicle control device 14, a position detection device 15, and a notification button 16.
- the camera 11 is composed of a CCD camera or the like, images the surroundings of the moving object V, and outputs the image pickup signal to the image processing device 12.
- the image processing device 12 reads an imaging signal from the camera 11 and performs image processing on the image information. Details of this image processing will be described later.
- the position detection device 15 is composed of a GPS device and its correction device, etc., detects the current position of the moving object V, and outputs it to the in-vehicle control device 14.
- the notification button 16 is an input button installed in the passenger compartment, a manual button pressed when the driver or a passenger discovers an incident (an incident related to security, such as an accident, fire, or crime).
- the in-vehicle control device 14 includes a CPU, a ROM, and a RAM. When the notification button 16 is pressed, it controls the image processing device 12, the communication device 13, and the position detection device 15, and outputs the image information generated by the image processing device 12, the position information of the moving body V detected by the position detection device 15, and the time information from the clock built into the CPU to the central monitoring device 20 via the communication device 13 and the telecommunications network 30. It also receives an information acquisition command from the central monitoring device 20 via the telecommunications network 30 and the communication device 13, controls the image processing device 12, the communication device 13, and the position detection device 15 accordingly, and likewise outputs the image information generated by the image processing device 12, the position information of the moving body V detected by the position detection device 15, and the time information from the clock built into the CPU to the central monitoring device 20 via the communication device 13 and the telecommunications network 30. Details of these controls will be described later.
- the communication device 13 is a communication means capable of wireless communication, and exchanges information with the communication device 23 of the central monitoring device 20 via the telecommunication network 30.
- when the telecommunications network 30 is a commercial telephone network, widely available mobile phone communication devices can be used as the communication devices 13 and 23, and when the telecommunications network 30 is a network dedicated to the monitoring system 1 of this example, communication devices 13 and 23 dedicated to it can be used. A wireless LAN, WiFi (registered trademark), WiMAX (registered trademark), Bluetooth (registered trademark), a dedicated wireless line, or the like can also be used.
- the central monitoring device 20 has an information input function for receiving the position information and image information output from the monitoring terminal device 10, and a display function that shows map information from a map database and displays the received position information on that map information.
- the central control device 21 includes a CPU, a ROM, and a RAM, and controls the image processing device 22, the communication device 23, and the display 24, and receives position information, image information, and time information transmitted from the monitoring terminal device 10.
- the image is displayed on the display 24 after being subjected to image processing as necessary.
- the image processing device 22 has a map database, displays map information from the map database on the display 24, and superimposes the position information detected by the position detection device 15 of the monitoring terminal device 10 on the map information. It also performs image processing for displaying, on the display 24, the image information captured by the in-vehicle cameras 11 of the monitoring terminal device 10 and processed by the image processing device 12.
- the display 24 can be composed of, for example, a liquid crystal display device having a size capable of displaying two window screens on one screen or two liquid crystal display devices each displaying two window screens.
- one window screen displays the position information of each moving body V superimposed on the map information (see FIG. 1), and the other window screen displays the image information captured by the in-vehicle cameras 11.
- the input device 25 is composed of a keyboard or a mouse, and is used when outputting an information acquisition command to a desired moving body V or inputting a processing command for various information displayed on the display 24.
- the communication device 23 is a communication means capable of wireless communication, and exchanges information with the communication device 13 of the monitoring terminal device 10 via the telecommunication network 30.
- when the telecommunications network 30 is a commercial telephone network, widely available mobile phone communication devices can be used, and when the telecommunications network 30 is a network dedicated to the monitoring system 1 of this example, communication devices 13 and 23 dedicated to it can be used.
- the central monitoring device 20 of this example has a function of temporarily suppressing image information transmitted from the monitoring terminal device 10.
- image information has a larger data volume than position information, so sufficient communication line capacity must be secured to transmit it at high speed or with high quality. For this reason, transmission of image information from the mobile bodies around the mobile body V that discovered the incident is temporarily suppressed, thereby securing the line capacity needed to transmit the image information from the discovering mobile body V at high speed or with high quality.
- FIG. 13 is a diagram illustrating an example of suppressing transmission of image information.
- Va is a mobile body (hereinafter also referred to as the reporting vehicle) in which an incident or accident was detected and the notification button 16 was pressed, Ta is the communication area of the communication base station to which the monitoring terminal device 10 of the mobile body Va belongs, Tb is the communication area of another communication base station to which the monitoring terminal device 10 of the mobile body Va does not belong, and Vb to Ve indicate other mobile bodies.
- to a moving body Vb existing within the radius Ra (for example, within 50 m) of the reporting vehicle Va, the central monitoring device 20 of this example outputs a command to transmit both position information and image information, because effective image information may be obtained in addition to the image information from the reporting vehicle Va. For a moving body such as Vc outside the radius Ra, although the possibility of obtaining effective information is not zero, priority is given to securing communication line capacity over urgency, and a command to transmit only position information, that is, a command to suppress transmission of image information, is output. In that case, the image information is temporarily stored in the RAM or other memory of the monitoring terminal device 10 of the moving body Vc and is transmitted later as necessary.
- the central monitoring device 20 of this example also suppresses or cancels the transmission of image information depending on the relationship with the communication area Ta of the base station to which the monitoring terminal device 10 of the reporting vehicle Va belongs. That is, a command to suppress transmission of image information is output to the mobile body Vc, which exists within the radius Rb (larger than the radius Ra) from the reporting vehicle Va and belongs to the communication area Ta of the same base station, whereas the suppression is cancelled for the mobile body Vd, which does not belong to the communication area Ta of that base station but to the communication area Tb of another base station, since no line capacity needs to be secured for it. Likewise, transmission of image information is not suppressed for the mobile body Vf, which exists farther than the radius Rb from the reporting vehicle Va and belongs to the communication area Tb of another base station, whereas transmission of image information is suppressed for the mobile body Ve, which exists farther than the radius Rb from the reporting vehicle Va but belongs to the communication area Ta of the same base station.
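- The suppression rules above can be summarized in a small decision routine. The following Python sketch is only an illustration under assumed interfaces: the class, helper function, and coordinates are hypothetical, and only the 50 m example radius Ra and the cell-based rule come from the description.

```python
import math
from dataclasses import dataclass

@dataclass
class MobileBody:
    vehicle_id: str
    lat: float          # latitude in degrees
    lon: float          # longitude in degrees
    base_station: str   # communication area the terminal currently belongs to, e.g. "Ta"

def distance_m(a: MobileBody, b: MobileBody) -> float:
    """Rough planar distance in metres; adequate for radii of tens of metres."""
    dx = (a.lon - b.lon) * 111_320 * math.cos(math.radians(a.lat))
    dy = (a.lat - b.lat) * 111_320
    return math.hypot(dx, dy)

def decide_command(reporter: MobileBody, other: MobileBody,
                   radius_a_m: float = 50.0) -> str:
    """Decide what `other` should do when `reporter` has pressed the notification button.

    Within Ra of the reporter     -> send position and image (may show the scene)
    Same communication area (Ta)  -> suppress images, send position only
    Different communication area  -> no suppression needed (does not share the line)
    The radius Rb in the description only separates the illustrated vehicles;
    the suppression decision itself depends on the cell, not on Rb.
    """
    if distance_m(reporter, other) <= radius_a_m:
        return "SEND_POSITION_AND_IMAGE"
    if other.base_station == reporter.base_station:
        return "SUPPRESS_IMAGE"      # image buffered locally, transmitted later if needed
    return "NO_SUPPRESSION"

# Example: Va reports; Vc is roughly 120 m away in the same communication area Ta.
va = MobileBody("Va", 35.6812, 139.7671, "Ta")
vc = MobileBody("Vc", 35.6822, 139.7675, "Ta")
print(decide_command(va, vc))        # -> SUPPRESS_IMAGE
```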
- the mounting positions and imaging ranges of the in-vehicle cameras 11a to 11e will be described.
- a passenger car V will be described as an example of the moving body V.
- the cameras 11a to 11e are configured using an image sensor such as a CCD, and the four on-vehicle cameras 11a to 11d are installed at different positions outside the passenger car V, respectively, and shoot four directions around the vehicle.
- the in-vehicle camera 11a, installed at a predetermined position at the front of the passenger car V such as the front grille, images objects and the road surface existing in the area SP1 in front of the passenger car V and in the space ahead of it (front view). The in-vehicle camera 11b, installed at a predetermined position on the left side of the passenger car V such as the left side mirror, images objects and the road surface existing in the area SP2 on the left side of the passenger car V and in the surrounding space (left side view). The in-vehicle camera 11c, installed at a predetermined position at the rear of the passenger car V such as the rear finisher or roof spoiler, images objects and the road surface existing in the area SP3 behind the passenger car V and in the space behind it (rear view). The in-vehicle camera 11d, installed at a predetermined position on the right side of the passenger car V such as the right side mirror, images objects and the road surface existing in the area SP4 on the right side of the passenger car V and in the surrounding space (right side view). The remaining in-vehicle camera 11e is installed, for example, on the ceiling of the passenger compartment and images the indoor area SP5 as shown in the figure; it is used for crime prevention in the vehicle or for reporting crimes.
- FIG. 6 is a view of the arrangement of the in-vehicle cameras 11a to 11e as viewed from above the passenger car V.
- the in-vehicle camera 11a that images the area SP1, the in-vehicle camera 11b that images the area SP2, the in-vehicle camera 11c that images the area SP3, and the in-vehicle camera 11d that images the area SP4 are installed along the outer periphery VE of the body in the counterclockwise or clockwise direction. That is, going counterclockwise, the in-vehicle camera 11b is installed to the left of the in-vehicle camera 11a, the in-vehicle camera 11c to the left of the in-vehicle camera 11b, the in-vehicle camera 11d to the left of the in-vehicle camera 11c, and the in-vehicle camera 11a to the left of the in-vehicle camera 11d. Conversely, going clockwise, the in-vehicle camera 11d is installed to the right of the in-vehicle camera 11a, the in-vehicle camera 11c to the right of the in-vehicle camera 11d, the in-vehicle camera 11b to the right of the in-vehicle camera 11c, and the in-vehicle camera 11a to the right of the in-vehicle camera 11b.
- FIG. 7A shows an example of the image GSP1 in which the front in-vehicle camera 11a images the area SP1, FIG. 7B shows an example of the image GSP2 in which the left-side in-vehicle camera 11b images the area SP2, FIG. 7C shows an example of the image GSP3 in which the rear in-vehicle camera 11c images the area SP3, FIG. 7D shows an example of the image GSP4 in which the right-side in-vehicle camera 11d images the area SP4, and FIG. 7E shows an example of the image captured by the indoor in-vehicle camera 11e.
- the size of each image is vertical 480 pixels ⁇ horizontal 640 pixels.
- the image size is not particularly limited, and may be any size as long as a general terminal device can reproduce a moving image.
- the number and position of the in-vehicle camera 11 can be appropriately determined according to the size, shape, detection area setting method, etc. of the passenger car V.
- the plurality of in-vehicle cameras 11 described above are assigned identifiers corresponding to the respective arrangements, and the in-vehicle control device 14 can identify each of the in-vehicle cameras 11 based on each identifier.
- the vehicle-mounted control apparatus 14 can send an imaging command and other commands to a specific vehicle-mounted camera 11 by attaching an identifier to the command signal.
- the in-vehicle control device 14 controls the image processing device 12 to acquire the image signals captured by the in-vehicle cameras 11, and the image processing device 12 processes the imaging signals from the in-vehicle cameras 11 and converts them into the image information shown in FIGS. 7A to 7E.
- the in-vehicle control device 14 generates monitoring image information based on the four pieces of image information shown in FIGS. 7A to 7D (monitoring image generation function), associates with the monitoring image information the mapping information used to project it onto the projection plane set on the side surface of a columnar projection model (mapping information addition function), and outputs the result to the central monitoring device 20.
- the monitoring image generation function and the mapping information addition function will be described in detail.
- the process of generating the monitoring image information based on the four pieces of image information obtained by imaging the periphery of the passenger car V and associating the mapping information with it may be executed by the monitoring terminal device 10, as in this example, or by the central monitoring device 20. In the latter case, the four pieces of image information obtained by imaging the periphery of the passenger car V are transmitted as they are from the monitoring terminal device 10 to the central monitoring device 20, and the image processing device 22 and the central control device 21 of the central monitoring device 20 generate the monitoring image information, associate the mapping information with it, and perform the projection conversion.
- the in-vehicle control device 14 of the monitoring terminal device 10 of the present embodiment controls the image processing device 12 to acquire the imaging signals of the in-vehicle cameras 11a to 11e, and generates one monitoring image in which the image information of the in-vehicle cameras 11a to 11d, installed clockwise or counterclockwise along the outer periphery of the body of the passenger car V, is arranged in the order in which these cameras are installed. As described above, the four in-vehicle cameras 11a to 11d are installed in the order 11a, 11b, 11c, 11d counterclockwise along the outer periphery VE of the body of the passenger car V, so the in-vehicle control device 14 connects the four images captured by the in-vehicle cameras 11a to 11d horizontally, in accordance with the installation order of the cameras (11a → 11b → 11c → 11d), to generate a single monitoring image. In the monitoring image of the present embodiment, each image is arranged so that the ground contact surface (road surface) of the passenger car V is at the bottom, and the images are connected to one another at their sides in the height direction (vertical direction) with respect to the road surface.
- FIG. 8 is a diagram illustrating an example of the monitoring image K.
- in the monitoring image K of the present embodiment, the captured image GSP1 in which the front in-vehicle camera 11a images the area SP1, the captured image GSP2 in which the left-side in-vehicle camera 11b images the area SP2, the captured image GSP3 in which the rear in-vehicle camera 11c images the area SP3, and the captured image GSP4 in which the right-side in-vehicle camera 11d images the area SP4 are arranged in this order in the horizontal direction, along the direction P from the left side to the right side of the drawing, and these four images form one continuous series. Since the monitoring image K generated in this way is viewed in order from the left end to the right with the side corresponding to the road surface (vehicle contact surface) facing down, the supervisor can view it on the display 24 in a manner similar to looking around the periphery of the passenger car V counterclockwise.
- when one monitoring image K is generated, four images acquired at substantially the same photographing timing by the in-vehicle cameras 11a to 11d are used. Since the information contained in the monitoring image K is thereby synchronized, the situation around the vehicle at a given timing can be expressed accurately. The monitoring images K, each generated from captured images with substantially the same imaging timing, may also be stored over time to generate a moving-image monitoring image K containing a plurality of monitoring images K per predetermined unit time; by generating the moving-image monitoring image K from images with matching imaging timing, changes in the situation around the vehicle can be represented accurately.
- a conventional central monitoring device has the disadvantage that it cannot watch images (moving images) in a plurality of directions at the same time and cannot monitor the entire periphery of a vehicle on a single screen. In contrast, the in-vehicle control device 14 of this embodiment generates the single monitoring image K described above, so that the entire periphery of the vehicle can be monitored on one screen.
- the monitoring terminal device 10 of the present embodiment compresses the data amount of the images so that the number of pixels of the monitoring image K is substantially the same as the number of pixels of a single image from the in-vehicle cameras 11a to 11d. While the size of each image shown in FIGS. 7A to 7D is 480 × 640 pixels, compression processing is performed so that the size of the monitoring image K becomes 1280 × 240 pixels, the same total number of pixels, so that a general terminal device can perform image processing and image reproduction.
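- As a minimal sketch of the monitoring-image generation and compression described above (assuming OpenCV and NumPy, which the patent does not specify), the following code concatenates the four 480 × 640 frames horizontally in the installation order 11a → 11b → 11c → 11d and scales the resulting strip to 1280 × 240 so that the total pixel count stays that of a single camera image.

```python
import cv2
import numpy as np

def make_monitoring_image(gsp1: np.ndarray, gsp2: np.ndarray,
                          gsp3: np.ndarray, gsp4: np.ndarray) -> np.ndarray:
    """Concatenate the four 480x640 views horizontally in camera installation
    order (11a -> 11b -> 11c -> 11d, road surface at the bottom edge) and then
    compress the 480x2560 strip to 1280x240, the same pixel count as one frame."""
    for img in (gsp1, gsp2, gsp3, gsp4):
        assert img.shape[:2] == (480, 640), "each camera frame is 480x640 here"
    strip = np.hstack([gsp1, gsp2, gsp3, gsp4])           # shape (480, 2560, 3)
    return cv2.resize(strip, (1280, 240), interpolation=cv2.INTER_AREA)

# Usage with synthetic frames standing in for GSP1..GSP4:
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(4)]
monitoring_image_k = make_monitoring_image(*frames)
print(monitoring_image_k.shape)                           # (240, 1280, 3)
```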
- the in-vehicle control device 14 of the present embodiment can also attach a line figure indicating the boundary of each arranged image to the monitoring image K.
- for example, the in-vehicle control device 14 can attach rectangular partition images Bb, Bc, Bd, Ba, and Ba′ between the arranged images to the monitoring image K as line figures indicating the boundaries between them.
- the partition image functions as a frame of each captured image.
- since image distortion is large near the boundary of each captured image, arranging a partition image at the boundary of the captured images makes it possible to hide the highly distorted region or to indicate that the distortion there is large.
- the in-vehicle control device 14 of this embodiment can also generate the monitoring image K after correcting the distortion of the captured images. Image distortion is likely to occur, and the distortion of the captured images tends to be large, so it is desirable to correct the distortion of the captured images in advance using a predefined image conversion algorithm and correction amount. Alternatively, the in-vehicle control device 14 can read out from the ROM information on the same projection model as that used by the central monitoring device 20 to project the monitoring image K, project the captured images onto the projection plane of that model, and correct in advance the distortion that would arise on the projection surface.
- the image conversion algorithm and the correction amount can be appropriately defined according to the characteristics of the in-vehicle camera 11 and the shape of the projection model. In this way, by correcting in advance the distortion when the image K is projected with respect to the projection plane of the projection model, it is possible to provide the monitoring image K with good visibility with less distortion. Further, by correcting the distortion in advance, it is possible to reduce the positional deviation between the images arranged side by side.
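- The pre-correction described above could, for instance, be an ordinary lens-undistortion step applied to each frame before concatenation. The sketch below uses OpenCV's pinhole distortion model as a stand-in; the intrinsic matrix and coefficients are hypothetical, since the patent does not disclose a concrete conversion algorithm or correction amounts.

```python
import cv2
import numpy as np

# Hypothetical intrinsics and distortion coefficients for one wide-angle
# in-vehicle camera; real values would come from calibrating cameras 11a-11d.
CAMERA_MATRIX = np.array([[350.0,   0.0, 320.0],
                          [  0.0, 350.0, 240.0],
                          [  0.0,   0.0,   1.0]])
DIST_COEFFS = np.array([-0.30, 0.09, 0.0, 0.0, 0.0])      # barrel distortion

def precorrect(frame: np.ndarray) -> np.ndarray:
    """Undistort a 480x640 frame before it is placed in the monitoring image,
    so that seams between adjacent views line up better after projection."""
    return cv2.undistort(frame, CAMERA_MATRIX, DIST_COEFFS)
```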
- the mapping information addition function will be described.
- the in-vehicle control device 14 executes a process of associating with the monitoring image K the mapping information used for projecting the generated monitoring image K onto the projection plane set on the side surface of the columnar projection model M whose bottom surface is the ground contact surface of the passenger car V.
- the mapping information is information for allowing the central monitoring device 20 that has received the monitoring image K to easily recognize the projection reference position.
- FIG. 10 is a diagram illustrating an example of the projection model M of the present embodiment
- FIG. 11 is a schematic cross-sectional view along the xy plane of the projection model M illustrated in FIG.
- the projection model M of this embodiment is a regular octagonal prism body having a regular octagonal bottom surface and a height along the vertical direction (z-axis direction in the figure).
- the shape of the projection model M is not particularly limited as long as it is a columnar body having side surfaces adjacent to one another along the boundary of the bottom surface; a cylinder, a prism such as a triangular, quadrangular, or hexagonal prism, or an antiprism having a polygonal bottom surface and triangular side surfaces can also be used.
- the bottom surface of the projection model M of this embodiment is parallel to the ground contact surface of the passenger car V.
- projection surfaces Sa, Sb, Sc, and Sd (hereinafter collectively referred to as the projection surface S), onto which an image of the surroundings of the passenger car V in contact with the bottom surface of the projection model M is projected, are set on the inner side surfaces of the projection model M. The projection surface S may also be composed of a surface including a part of the projection surface Sa and a part of the projection surface Sb, a surface including a part of the projection surface Sb and a part of the projection surface Sc, a surface including a part of the projection surface Sc and a part of the projection surface Sd, and a surface including a part of the projection surface Sd and a part of the projection surface Sa. The monitoring image K is projected onto the projection surface S as an image of the surroundings of the passenger car V viewed from the viewpoints R (R1 to R8, hereinafter collectively referred to as the viewpoint R) set above the projection model M surrounding the passenger car V.
- in the present embodiment, the in-vehicle control device 14 associates with the monitoring image K, as mapping information, the reference coordinates of the captured images arranged at the ends (right end and left end). Specifically, as mapping information (reference coordinates) indicating the start position or the end position of the monitoring image K when it is projected onto the projection model M, the coordinates A (x, y) of the upper-left vertex of the captured image GSP1 and the coordinates B (x, y) of the upper-right vertex of the captured image GSP2 are attached to the monitoring image K.
- the reference coordinates of the captured image indicating the start position or the end position are not particularly limited, and may be the lower left vertex of the monitoring image K arranged at the left end or the lower right vertex of the monitoring image K arranged at the right end.
- the mapping information may be attached to each pixel of the image data of the monitoring image K, or may be managed as a file different from the monitoring image K.
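- One simple way to realize the "separate file" option above is to write the reference coordinates into a small sidecar record per monitoring image. The field names and JSON format in the sketch below are assumptions for illustration, not part of the disclosure.

```python
import json

def build_mapping_info(frame_index: int,
                       point_a: tuple, point_b: tuple) -> str:
    """Build the mapping information for one monitoring image K as a JSON record
    managed as a file separate from the image, which the description allows.

    point_a, point_b: the reference coordinates A(x, y) and B(x, y) of the
    captured images placed at the two ends of the strip."""
    mapping_info = {
        "frame_index": frame_index,
        "reference_coordinates": {"A": list(point_a), "B": list(point_b)},
        "projection_model": "regular_octagonal_prism",    # model M of the description
    }
    return json.dumps(mapping_info)

print(build_mapping_info(0, (1279, 0), (0, 0)))
```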
- by associating with the monitoring image K, as mapping information, the information indicating its start position or end position, that is, the reference coordinates used in the projection processing, the central monitoring device 20 that receives the monitoring image K can easily recognize the reference position for the projection processing and can therefore sequentially and easily project the monitoring image K, in which the images are arranged in the installation order of the in-vehicle cameras 11a to 11d, onto the projection surface S on the side surface of the projection model M. That is, the captured image GSP1 taken in front of the vehicle can be projected onto the projection surface Sa located in the imaging direction of the in-vehicle camera 11a, the captured image GSP2 onto the projection surface Sb located in the imaging direction of the in-vehicle camera 11b, the captured image GSP3 taken behind the vehicle onto the projection surface Sc located in the imaging direction of the in-vehicle camera 11c, and the captured image GSP4 onto the projection surface Sd located in the imaging direction of the in-vehicle camera 11d. The monitoring image K projected onto the projection model M thus shows an image as if one were looking around the passenger car V: since the monitoring image K, which contains the four images arranged in a horizontal row according to the installation order of the in-vehicle cameras 11a to 11d, is projected onto side surfaces that are likewise arranged horizontally around the columnar projection model M, the image of the surroundings of the passenger car V is reproduced on the projection surface S of the columnar projection model M while maintaining its positional relationships.
- the in-vehicle control device 14 of the present embodiment stores the correspondence relationship between each coordinate value of the monitoring image K and the coordinate value of each projection plane S of the projection model M as mapping information, and attaches it to the monitoring image K.
- alternatively, this correspondence relationship may be stored in the central monitoring device 20 in advance.
- the positions of the viewpoint R and the projection plane S shown in FIGS. 10 and 11 are examples, and can be arbitrarily set.
- the viewpoint R can be changed by the operation of the operator.
- the relationship between the viewpoint R and the projection position of the monitoring image K is defined in advance, and when the position of the viewpoint R is changed, a predetermined coordinate transformation is performed so that the monitoring image K viewed from the newly set viewpoint R can be projected onto the projection surface S (Sa to Sd). A known method can be used for this viewpoint conversion processing.
- as described above, the in-vehicle control device 14 of the present embodiment generates the monitoring image K based on the image information captured at a predetermined timing, associates with it the mapping information (reference coordinates) and the information on the line figures (partition images) indicating the boundaries, and stores these over time according to the imaging timing. The in-vehicle control device 14 may store the monitoring image K as a single moving-image file containing a plurality of monitoring images K per predetermined unit time, or may store the monitoring image K in a form that can be transferred and reproduced by a streaming method.
- the communication device 23 of the central monitoring device 20 receives the monitoring image K transmitted from the monitoring terminal device 10 and the mapping information associated with it; the image information captured by the indoor in-vehicle camera 11e is received separately. In this monitoring image K, the images from the four in-vehicle cameras 11a to 11d, which are installed at different positions on the body of the passenger car V along its outer periphery in the clockwise or counterclockwise direction, are arranged according to their installation order (the clockwise or counterclockwise order along the outer periphery of the body of the passenger car V).
- the monitoring image K is associated with mapping information for projecting the monitoring image K onto the projection plane S of the octagonal prism projection model M.
- the communication device 23 sends the acquired monitoring image K and mapping information to the image processing device 22.
- the image processing device 22 reads out the projection model M stored in advance and, based on the mapping information, generates a display image by projecting the monitoring image K onto the projection surfaces Sa to Sd set on the side surfaces of the octagonal-prism projection model M whose bottom surface is the ground contact surface of the passenger car V shown in FIGS. 10 and 11. Specifically, each pixel of the received monitoring image K is projected onto the corresponding pixel of the projection surfaces Sa to Sd according to the mapping information. When projecting the monitoring image K onto the projection model M, the image processing device 22 recognizes the start point of the monitoring image K (its right end or left end) from the reference coordinates received together with the monitoring image K and performs the projection processing so that this start point coincides with the start point defined in advance on the projection model M (the right end or left end of the projection surface S). Furthermore, when projecting the monitoring image K onto the projection model M, the image processing device 22 arranges line figures (partition images) indicating the boundaries of the individual images on the projection model M.
- the partition image can be attached to the projection model M in advance, or can be attached to the monitoring image K after the projection processing.
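- A minimal sketch of the receiving side's use of the mapping information: given the 1280 × 240 monitoring image K and the reference coordinate marking its start end, the strip can be split back into four views and assigned to the projection surfaces Sa to Sd in installation order. Rendering the textured octagonal prism and the viewpoint R would be done with a graphics API and is omitted; the function below only performs the splitting, under assumed array shapes.

```python
import numpy as np

def split_monitoring_image(k: np.ndarray, start_x: int = 0) -> dict:
    """Split monitoring image K (240 x 1280) back into four views and assign them
    to the projection surfaces Sa-Sd in camera installation order.

    start_x is the reference coordinate received as mapping information: it marks
    where the start-end image begins, so the strip is rotated to start there."""
    aligned = np.roll(k, -start_x, axis=1)     # bring the start end to column 0
    view_w = aligned.shape[1] // 4             # 320 px per view after compression
    return {face: aligned[:, i * view_w:(i + 1) * view_w]
            for i, face in enumerate(["Sa", "Sb", "Sc", "Sd"])}

# Usage: faces = split_monitoring_image(monitoring_image_k, start_x=ref_coord_a_x)
```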
- the display 24 displays the monitoring image K projected on the projection plane S of the projection model M.
- FIG. 12 shows an example of a display image of the monitoring image K.
- using the input device 25, such as a mouse or keyboard, or the display 24 configured as a touch-panel input device 25, the supervisor can freely set and change the viewpoint. Since the correspondence between the viewpoint position and the projection surface S is defined in advance in the image processing device 22 or the display 24 described above, the monitoring image K corresponding to the changed viewpoint can be displayed on the display 24 based on this correspondence.
- FIG. 3 is a flowchart showing the operation on the monitoring terminal device 10 side
- FIG. 4 is a flowchart showing the operation on the central monitoring device 20 side.
- the surrounding video and the indoor video are acquired from the in-vehicle cameras 11 at a predetermined time interval (one routine of FIG. 3) and converted into image information by the image processing device 12 (step ST1), and the current position information of the passenger car V on which the monitoring terminal device 10 is mounted is detected by the position detection device 15 (step ST2). In step ST3, it is determined whether the notification button 16 has been pressed. If it has been pressed, the process proceeds to step ST4, where the image information acquired in step ST1, the position information acquired in step ST2, and the time information of the CPU are associated and transmitted to the central monitoring device 20 via the communication device 13 and the telecommunications network 30, together with abnormality information indicating that an abnormality has occurred. As a result, the occurrence of an abnormality related to security, such as an accident or crime, is automatically transmitted to the central monitoring device 20 together with the position information of the passenger car V and the image information around it, further strengthening monitoring in the city.
- the image information and the position information are acquired in the first steps ST1 and ST2, but the image information and the position information may be acquired at a timing between steps ST3 and ST4.
- if the notification button 16 has not been pressed, the process proceeds to step ST5, where an image transmission command from the central monitoring device 20 is received. If there is an image transmission command from the central monitoring device 20 in step ST6, the process proceeds to step ST7, where the image information acquired in step ST1, the position information acquired in step ST2, and the time information of the CPU are associated and transmitted to the central monitoring device 20 via the communication device 13 and the telecommunications network 30. Thus, even if the passenger of the passenger car V does not press the notification button 16, the required image information can be transmitted whenever the supervisor operating the central monitoring device 20 requests it. In this example the image information and the position information are acquired in the first steps ST1 and ST2, but they may instead be acquired at a timing between steps ST3 and ST4.
- if there is no image transmission command from the central monitoring device 20 in step ST6, the process proceeds to step ST8, where it is determined whether there is an image transmission suppression command from the central monitoring device 20. The image transmission suppression command is a command issued to secure communication line capacity when the notification button 16 of another passenger car V in the vicinity has been pressed and abnormality information is transmitted to the central monitoring device 20. If there is an image transmission suppression command from the central monitoring device 20, the process proceeds to step ST10, where the image information acquired in step ST1, the position information acquired in step ST2, and the time information of the CPU are associated and temporarily stored in the RAM of the monitoring terminal device 10. When there is no image transmission suppression command from the central monitoring device 20 in step ST8, the process proceeds to step ST9, and only the position information acquired in step ST2 is transmitted to the central monitoring device 20 via the communication device 13 and the telecommunications network 30.
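- The terminal-side flow of FIG. 3 (steps ST1 to ST10) can be summarized as the loop below. The device, network, and storage objects are hypothetical placeholders; the patent specifies the order of the decisions, not these interfaces.

```python
import time

def terminal_routine(cameras, gps, comm, report_button, buffer):
    """One pass of the monitoring-terminal loop of FIG. 3 (ST1-ST10).
    cameras, gps, comm, report_button and buffer are hypothetical interfaces."""
    image_info = cameras.capture_and_process()            # ST1: capture and convert
    position = gps.current_position()                     # ST2: detect current position
    timestamp = time.time()                               # time information of the CPU

    if report_button.was_pressed():                       # ST3: notification button?
        comm.send(position, image_info, timestamp, abnormal=True)   # ST4
    elif comm.has_image_request():                        # ST5/ST6: command from centre?
        comm.send(position, image_info, timestamp)        # ST7
    elif comm.has_suppression_command():                  # ST8: suppression in force?
        buffer.store(position, image_info, timestamp)     # ST10: keep locally for later
    else:
        comm.send_position_only(position)                 # ST9: normal operation

def run_terminal(cameras, gps, comm, report_button, buffer, interval_s=1.0):
    while True:
        terminal_routine(cameras, gps, comm, report_button, buffer)
        time.sleep(interval_s)                            # repeat at the predetermined interval
```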
- the central monitoring device 20 displays the current position of each passenger car V on the map information (step ST12 in FIG. 4), so that the current position of each passenger car V can be grasped in a timely manner.
- the central monitoring device 20 acquires position information and abnormality information from all the passenger cars equipped with the monitoring terminal device 10 as shown in FIG. 4 (step ST11). If the communication load is not high, the image information may be acquired at this timing.
- in step ST12, each passenger car V is displayed on the map information of the map database shown on the display 24, as in the upper left of FIG. 1, based on the position information acquired in step ST11. Since the position information of each passenger car V is acquired and transmitted at a predetermined timing in every routine of FIG. 3, the supervisor can grasp the current position of each passenger car V in a timely manner.
- in step ST13, it is determined whether abnormality information notified from the monitoring terminal device 10 of a passenger car V, that is, a notification that an abnormality related to security such as an accident or crime has occurred, has been received. This abnormality information is output when a passenger of the passenger car V presses the notification button 16 of the monitoring terminal device 10. If there is abnormality information, the passenger car V that output the abnormality information is identified in step ST14, image information and time information are received from the monitoring terminal device 10 of that passenger car, and the image information is displayed on the display 24. In addition, as shown in the upper left of FIG. 1, the passenger car displayed on the map information is highlighted, for example by changing its color, so that it can be distinguished from the other passenger cars. The position where the abnormality occurred can thus be recognized visually on the map information, and the content of the abnormality can be grasped on the display 24. Furthermore, image information can be acquired from the passenger cars Vb traveling in the vicinity of the passenger car Va that output the abnormality information, so the content of the abnormality can be grasped in detail from a plurality of pieces of image information in addition to the image information from the passenger car Va itself.
- the abnormality information may also be transmitted to emergency vehicles such as police cars, ambulances, and fire engines, with the image information attached in order to convey the content of the abnormality. In this way an emergency vehicle can be dispatched before a report from the site arrives, enabling a quick response to an accident or crime.
- in step ST16, a command to transmit only position information, that is, a command to suppress transmission of image information, is output to the passenger cars Vc farther from the reporting vehicle Va. This is because, although the image information from these passenger cars Vc cannot be said to be effective in addition to the image information from the reporting vehicle Va, the possibility of obtaining effective information from them is not zero. The presence or absence of the image transmission suppression command output here is determined in step ST8 of FIG. 3 described above; when transmission of image information is suppressed in step ST16, the image information is temporarily stored in the RAM or other memory of the monitoring terminal device 10 of the moving body Vc, as described for step ST10 of FIG. 3, and is transmitted later as necessary.
- the central monitoring device 20 of this example also suppresses or cancels the transmission of image information depending on the relationship with the communication area Ta of the base station to which the monitoring terminal device 10 of the reporting vehicle Va belongs. That is, as shown in FIG. 13, a command to suppress transmission of image information is output to the mobile body Vc, which exists within the radius Rb (larger than the radius Ra) from the reporting vehicle Va and belongs to the communication area Ta of the same base station, whereas the suppression is cancelled for the mobile body Vd, which does not belong to the communication area Ta of that base station but to the communication area Tb of another base station, since no line capacity needs to be secured for it. Likewise, transmission of image information is not suppressed for the mobile body Vf, which exists farther than the radius Rb from the reporting vehicle Va and belongs to the communication area Tb of another base station, whereas transmission of image information is suppressed for the mobile body Ve, which exists farther than the radius Rb from the reporting vehicle Va but belongs to the communication area Ta of the same base station.
- in step ST17, all the position information, image information, and time information received from the monitoring terminal devices 10 are recorded on a recording medium. This record is used to resolve accidents or crimes after the fact. If there is no abnormality information in step ST13, the process returns to step ST11 without performing steps ST14 to ST17.
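- Similarly, the central-device flow of FIG. 4 (steps ST11 to ST17) can be sketched as follows, again with hypothetical interface objects (the geo helper is assumed to provide the distance calculation used in the earlier suppression sketch).

```python
def central_routine(comm, map_display, recorder, dispatch, geo, radius_a_m=50.0):
    """One pass of the central monitoring device flow of FIG. 4 (ST11-ST17).
    comm, map_display, recorder, dispatch and geo are hypothetical interfaces."""
    reports = comm.receive_all()                              # ST11: position / abnormality info
    for r in reports:
        map_display.plot(r.vehicle_id, r.position)            # ST12: plot every vehicle on the map

    for r in (x for x in reports if x.abnormal):              # ST13: any notification received?
        map_display.highlight(r.vehicle_id)                   # ST14: highlight the reporting vehicle
        map_display.show_image(comm.request_image(r.vehicle_id))
        for other in comm.known_vehicles():
            if other.vehicle_id == r.vehicle_id:
                continue
            if geo.distance_m(other.position, r.position) <= radius_a_m:
                comm.send_image_request(other.vehicle_id)     # ST15: nearby vehicles also send images
            elif other.base_station == r.base_station:
                comm.send_image_suppression(other.vehicle_id) # ST16: same cell -> suppress images
        dispatch.notify_emergency(r)                          # forward to police / fire / ambulance
        recorder.save(r)                                      # ST17: record for post-incident analysis
```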
- the monitoring system of the present embodiment has the following effects. (1) Since the monitoring system 1 of this example attaches the in-vehicle cameras 11 to a plurality of passenger cars V, images the surroundings with those cameras, and detects the position information of each passenger car V with the position detection device 15, it can monitor a wide range with a small number of cameras compared with fixed cameras. Furthermore, since the plurality of passenger cars V travel at random, blind spots can be reduced with a small number of in-vehicle cameras compared with fixed cameras. In addition, because the cameras are mounted on the passenger cars V, they are less likely than fixed cameras to be destroyed by someone seeking to evade monitoring. Moreover, since a wide range can be monitored, the patrol work of the supervisors can be reduced.
- since the monitoring system 1 of this example acquires time information in addition to position information and image information, the position information and image information can be arranged along the time axis when a post-mortem analysis of an accident or crime is performed, which can contribute to resolving the case.
- since the monitoring system 1 of this example always acquires and transmits position information but acquires and transmits image information only in an abnormal situation or when requested by the central monitoring device 20, the communication capacity can be kept to a minimum and a decrease in communication speed can be suppressed. The information recording capacity can also be kept to a minimum, yielding an inexpensive and compact system.
- since the notification button 16 is provided in the monitoring terminal device 10 of this example, when a passenger of the passenger car V finds an abnormality, the central monitoring device 20 can be notified immediately together with the position information and the image information. As a result, the content of the abnormality can be grasped more accurately, quickly, and easily than through an explanation by telephone or the like, which contributes to the initial investigation of the incident.
- since the monitoring terminal device 10 of this example transmits the image information together with the position information when it receives an image transmission command from the central monitoring device 20, the video of a desired location can be confirmed at the place where the central monitoring device 20 is installed.
- since the central monitoring device 20 of this example displays the position information received from the monitoring terminal devices on the map information on the display 24, the arrangement of the passenger cars V from which information can be obtained can be grasped. The distribution of areas from which image information can be acquired can thus be understood, which helps the supervisor plan the monitoring. Moreover, since the image information received from the monitoring terminal devices 10 is displayed on the display 24 as necessary, the supervisor can check the video of a desired location at the place where the central monitoring device 20 is installed, without going to the site.
- when the central monitoring device 20 of this example receives abnormality information from a monitoring terminal device 10, it highlights the reporting passenger car V on the display 24 and displays the image information received from that passenger car V on the display 24, so the position and the video can be confirmed immediately and a prompt response to the incident can be expected.
- when the central monitoring device 20 of this example receives abnormality information from a monitoring terminal device 10, it transmits, as shown in FIG. 13, an image transmission command to the passenger cars Vb traveling in the vicinity of the passenger car Va that sent the abnormality information, so image information can be acquired immediately not only from one passenger car V but from a plurality of passenger cars V, and the content of the abnormality can be grasped easily.
- when the central monitoring device 20 of this example receives abnormality information from a monitoring terminal device 10, it transmits an image transmission suppression command as shown in FIG. 13, so the transmission speed of the image information from the reporting vehicle Va can be increased, or a higher-quality image can be transmitted from the reporting vehicle Va instead. In a passenger car V whose image transmission is suppressed, the image information is stored in the RAM of the monitoring terminal device 10 and can therefore be used for post-mortem analysis. In addition, since the image transmission suppression is cancelled for passenger cars Vd belonging to a different communication base station from the reporting vehicle Va, image information can still be gathered at the central monitoring device 20 even when multiple incidents occur at the same time.
- when the central monitoring device 20 of this example receives abnormality information from a monitoring terminal device 10, it transmits the abnormality information to emergency vehicles such as police cars, ambulances, and fire engines, so the incident can be handled quickly. Furthermore, by transmitting the position information and the image information to the emergency vehicle together with the abnormality information, the content of the abnormality can be grasped quickly and accurately on the emergency vehicle side.
- in the embodiment described above, the position information of the passenger car V and the image information from the in-vehicle cameras 11a to 11e are acquired; such information may also be acquired from emergency vehicles such as police cars, ambulances, and fire engines, and image information from a fixed camera 11f installed in the city may be obtained as well.
- as the passenger cars V that acquire position information and image information, it is desirable to use taxis V1 and route buses that travel constantly within a predetermined area. In addition, the number of in-vehicle cameras is not limited to the four cameras 11a to 11d and may be three or fewer, particularly in an environment where image information can be acquired from many passenger cars V, such as a monitoring area with heavy traffic.
- the passenger vehicle V corresponds to a moving body according to the present invention
- the position detection device 15 corresponds to a position detection unit according to the present invention
- the in-vehicle camera 11 and the image processing device 12 correspond to an image generation unit according to the present invention.
- the on-vehicle control device 14 corresponds to the information acquisition control means according to the present invention
- the CPU of the on-vehicle control device 14 corresponds to the time detection means according to the present invention
- the notification button 16 corresponds to the command input means according to the present invention.
- the communication device 13 corresponds to a command receiving means and an information output means according to the present invention
- the communication device 23 corresponds to an information input means, an abnormality information receiving means and a command output means according to the present invention
- the display 24 corresponds to first display control means and second display control means according to the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Closed-Circuit Television Systems (AREA)
- Alarm Systems (AREA)
Abstract
This monitoring system (1) is provided with a monitoring terminal device (10) that acquires monitoring information; and a central monitoring device (20) to which the monitoring information is input via an electronic communication line network (30). The monitoring terminal device is provided with: a position detection means (15) that detects position information for each of a plurality of mobile bodies (V); image generation means (11, 12) that are mounted to each of the plurality of mobile bodies and that generate image information by imaging the vicinity of the corresponding mobile body; and an abnormality information output means (16) that outputs abnormality information to the effect that there is an abnormal state. The central monitoring device is provided with: an information input means (23) to which are input the position information and image information output from the monitoring terminal device; and a command output means (21) that, to the monitoring terminal device of a mobile body present in a predetermined first region that has as the origin point the position of a mobile body that has output the abnormality information, outputs a command that suppresses the transmission of image information.
Description
The present invention relates to a monitoring system.
A security device is known that detects the occurrence of an abnormality by installing multiple security camera devices in shopping streets, at store entrances, at home entrances, and elsewhere in the city, and monitoring the surrounding images captured by those security camera devices (Patent Document 1).
However, covering a wide area with security camera devices installed in the city requires a large number of such devices, and there is the problem that communication takes time when the communication line is congested.
An object of the present invention is to provide a monitoring system capable of transmitting monitoring information in real time.
The present invention achieves the above object by mounting cameras on a plurality of moving bodies, acquiring the position information of each moving body and the image information captured by the cameras at a predetermined timing, and suppressing transmission from surrounding moving bodies when a specific moving body transmits its position information and image information.
According to the present invention, image information from cameras mounted on a plurality of randomly traveling moving bodies and the position information of those moving bodies are acquired, and transmission from surrounding moving bodies is suppressed when this information is transmitted. As a result, the amount of communication data in the communication area is kept appropriate, and monitoring information can be transmitted in real time.
In the embodiment described below, the monitoring system according to the present invention is embodied as a monitoring system 1 in which authorities such as a police station or a fire station centrally monitor the security of a town. That is, the position information of each of a plurality of moving bodies, image information of the surroundings of the moving bodies, and time information are acquired at a predetermined timing, and the position information, image information, and time information are transmitted via wireless communication to a central monitoring device installed at the authority; the position information is displayed on map information, and the image information and time information are displayed on a display as necessary. For this purpose, the monitoring system 1 of this example includes, as shown in FIG. 1, a monitoring terminal device 10 that acquires monitoring information such as position information and image information, and a central monitoring device 20 that receives and processes the monitoring information via a telecommunication network 30. FIG. 2 is a block diagram showing the specific configuration of the monitoring terminal device 10 and the central monitoring device 20. In particular, when an incident or accident occurs, the monitoring system 1 of this example temporarily suppresses the transmission of image information from moving bodies around the moving body that discovered it, thereby securing line capacity for transmitting the image information from the discoverer at high speed or high quality, so that on-site image information can be monitored in real time.
The monitoring terminal device 10 is a terminal device mounted on each of a plurality of moving bodies V, and has a position detection function that detects the position information of each of the plurality of moving bodies V, an image generation function that is mounted on each of the plurality of moving bodies and generates image information by imaging the surroundings of the moving body, a time detection function, an information acquisition control function that acquires position information, image information, and time information at a predetermined timing, a communication function that outputs the position information, image information, and time information to the central monitoring device 20 and receives commands from the central monitoring device 20, and a function for reporting the occurrence of an abnormality. For this purpose, it includes a plurality of in-vehicle cameras 11a to 11e, an image processing device 12, a communication device 13, an in-vehicle control device 14, a position detection device 15, and a notification button 16. The time information is mainly used for post-event analysis and may therefore be omitted; in this example, however, a moving body whose transmission has been temporarily suppressed temporarily stores the image information acquired during that time and may transmit it as necessary for post-event analysis once the suppression is released, so acquiring time information is also effective in that sense.
The moving body V on which the monitoring terminal device 10 is mounted is not particularly limited as long as it travels in the target monitoring area, and includes moving bodies such as automobiles, motorcycles, industrial vehicles, and trams. Automobiles include private automobiles V2 and emergency automobiles V3, and taxis and route buses V1 that travel randomly and constantly in a predetermined area are particularly suitable. FIG. 1 illustrates a taxi V1, a private automobile V2, and an emergency automobile V3 such as a police car, fire engine, or ambulance; these are collectively referred to as the moving body V or the passenger car V.
Each moving body V is equipped with a plurality of in-vehicle cameras 11a to 11e (hereinafter collectively referred to as the camera 11), an image processing device 12, a communication device 13, an in-vehicle control device 14, a position detection device 15, and a notification button 16. The camera 11 is composed of a CCD camera or the like, images the surroundings of the moving body V, and outputs the imaging signal to the image processing device 12. The image processing device 12 reads the imaging signal from the camera 11 and processes it into image information. Details of this image processing will be described later.
The position detection device 15 is composed of a GPS device and its correction device or the like, detects the current position of the moving body V, and outputs it to the in-vehicle control device 14. The notification button 16 is an input button installed in the vehicle cabin, and is a manual button operated when the driver or a passenger discovers an incident (a security-related event such as an accident, fire, or crime).
The in-vehicle control device 14 is composed of a CPU, ROM, and RAM. When the notification button 16 is pressed, it controls the image processing device 12, the communication device 13, and the position detection device 15, and outputs the image information generated by the image processing device 12, the position information of the moving body V detected by the position detection device 15, and the time information from the clock built into the CPU to the central monitoring device 20 via the communication device 13 and the telecommunication network 30. It also accepts an information acquisition command from the central monitoring device 20 received via the telecommunication network 30 and the communication device 13, controls the image processing device 12, the communication device 13, and the position detection device 15, and likewise outputs the image information generated by the image processing device 12, the position information of the moving body V detected by the position detection device 15, and the time information from the clock built into the CPU to the central monitoring device 20 via the communication device 13 and the telecommunication network 30. Details of these controls will also be described later.
The communication device 13 is a communication means capable of wireless communication, and exchanges information with the communication device 23 of the central monitoring device 20 via the telecommunication network 30. When the telecommunication network 30 is a commercial telephone network, a general-purpose mobile phone communication device can be used; when the telecommunication network 30 is a dedicated telecommunication network for the monitoring system 1 of this example, dedicated communication devices 13 and 23 can be used. Instead of the telecommunication network 30, a wireless LAN, WiFi (registered trademark), WiMAX (registered trademark), Bluetooth (registered trademark), a dedicated wireless line, or the like can also be used.
The central monitoring device 20 has an information input function that receives the position information and image information output from the monitoring terminal device 10 described above, a display control function that displays map information from a map database and displays the received position information on that map information, and a display control function that displays the received image information on the display 24. For this purpose, it includes a central control device 21, an image processing device 22, a communication device 23, a display 24, and an input device 25.
The central control device 21 is composed of a CPU, ROM, and RAM, and controls the image processing device 22, the communication device 23, and the display 24 to receive the position information, image information, and time information transmitted from the monitoring terminal device 10, apply image processing as necessary, and display the result on the display 24.
The image processing device 22 has a map database, displays map information from the map database on the display 24, and displays the position information detected by the position detection device 15 of the monitoring terminal device 10 on the map information. It also performs image processing for displaying on the display 24 the image information captured by the in-vehicle camera 11 of the monitoring terminal device 10 and processed by the image processing device 12.
The display 24 can be composed of, for example, a liquid crystal display device large enough to display two window screens on one screen, or two liquid crystal display devices each displaying one of two window screens. One window screen displays a screen in which the position information of each moving body V is superimposed on the map information (see FIG. 1), and the other window screen displays the image information of the video captured by the in-vehicle camera 11.
The input device 25 is composed of a keyboard or a mouse, and is used when outputting an information acquisition command to a desired moving body V or inputting a processing command for various information displayed on the display 24.
The communication device 23 is a communication means capable of wireless communication, and exchanges information with the communication device 13 of the monitoring terminal device 10 via the telecommunication network 30. When the telecommunication network 30 is a commercial telephone network, a general-purpose mobile phone communication device can be used; when the telecommunication network 30 is a dedicated telecommunication network for the monitoring system 1 of this example, dedicated communication devices 13 and 23 can be used.
The central monitoring device 20 of this example has a function of temporarily suppressing the image information transmitted from the monitoring terminal devices 10. That is, since image information has a larger data volume than position information, sufficient line capacity must be secured to transmit it at high speed or high quality. When an incident or accident occurs, the transmission of image information from moving bodies around the moving body V that discovered it is therefore temporarily suppressed, thereby securing line capacity for transmitting the image information from the discovering moving body V at high speed or high quality.
FIG. 13 is a diagram showing an example of suppressing the transmission of image information. Va denotes a moving body that has discovered an incident or accident and pressed the notification button 16 (hereinafter also referred to as the reporting vehicle), Ta denotes the communication area of the communication base station to which the monitoring terminal device 10 of the moving body Va belongs, Tb denotes the communication area of another communication base station to which the monitoring terminal device 10 of the moving body Va does not belong, and Vb to Ve denote other moving bodies.
For a moving body Vb present within a radius Ra (for example, within 50 m) of the reporting vehicle Va, the central monitoring device 20 of this example outputs a command to transmit both position information and image information, because there is a high possibility of obtaining useful image information in addition to that from the reporting vehicle Va. In contrast, for a moving body Vc present in a range farther than the radius Ra from the reporting vehicle Va (for example, beyond a radius Rb = 50 m), although the possibility of obtaining useful image information in addition to that from the reporting vehicle Va is not zero, priority is given to securing line capacity over urgency, and a command to transmit only position information, that is, a command to suppress the transmission of image information, is output. When the transmission of image information is suppressed, that image information is temporarily stored in the RAM or other memory of the monitoring terminal device 10 of the moving body Vc and is transmitted afterwards as necessary.
Furthermore, the central monitoring device 20 of this example also suppresses, or releases the suppression of, image information transmission according to the relationship with the communication area Ta of the base station to which the monitoring terminal device 10 of the reporting vehicle Va belongs. That is, for a moving body Vc that is located at a radius Rb farther than the radius Ra from the reporting vehicle Va and belongs to the communication area Ta of the same base station, a command to suppress the transmission of image information is output; however, for a moving body Vd that does not belong to the communication area Ta of the same base station but belongs to the communication area Tb of another base station, the suppression of image information transmission is released, because there is little need to secure line capacity there. Likewise, the transmission of image information is not suppressed for a moving body Vf that is located farther than the radius Rb from the reporting vehicle Va and belongs to the communication area Tb of another base station, but it is suppressed for a moving body Ve that is located farther than the radius Rb from the reporting vehicle Va and belongs to the communication area Ta of the same base station.
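The suppression rule described above reduces to a decision over two inputs: the distance of a moving body from the reporting vehicle Va and whether it shares the reporting vehicle's base-station communication area Ta. The following Python sketch illustrates that logic under the example threshold given in the text (Ra = 50 m); the function and field names are illustrative and not taken from the patent.

```python
import math

# Example threshold from the text: Ra = 50 m.
RADIUS_RA_M = 50.0

def planar_distance_m(pos_a, pos_b):
    """Rough planar distance between two (x, y) positions given in metres."""
    return math.hypot(pos_a[0] - pos_b[0], pos_a[1] - pos_b[1])

def transmission_command(reporter_pos, reporter_cell, vehicle_pos, vehicle_cell):
    """Return which information a surrounding vehicle should transmit.

    Mirrors the behaviour described for FIG. 13:
      - within Ra of the reporting vehicle: send position and image,
      - farther than Ra but in the same base-station area Ta: position only
        (image transmission suppressed, image stored locally),
      - in a different base-station area: suppression is not applied.
    """
    distance = planar_distance_m(reporter_pos, vehicle_pos)
    if distance <= RADIUS_RA_M:
        return "send_position_and_image"
    if vehicle_cell == reporter_cell:
        return "send_position_only"      # image kept in RAM for later transmission
    return "send_position_and_image"     # other cell: no need to save line capacity

# Example: a vehicle 120 m away but in another cell (like Vd/Vf) is not suppressed.
print(transmission_command((0.0, 0.0), "Ta", (120.0, 0.0), "Tb"))
```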
Next, the mounting positions and imaging ranges of the in-vehicle cameras 11a to 11e will be described. Here, a passenger car V is taken as an example of the moving body V. The cameras 11a to 11e are configured using an image sensor such as a CCD, and the four in-vehicle cameras 11a to 11d are installed at different positions on the exterior of the passenger car V, each capturing one of four directions around the vehicle.
For example, as shown in FIG. 5, the in-vehicle camera 11a installed at a predetermined position at the front of the passenger car V, such as the front grille, captures objects and the road surface in the area SP1 in front of the passenger car V and in the space ahead of it (front view). The in-vehicle camera 11b installed at a predetermined position on the left side of the passenger car V, such as the left side mirror, captures objects and the road surface in the area SP2 on the left side of the passenger car V and in the surrounding space (left side view). The in-vehicle camera 11c installed at a predetermined position at the rear of the passenger car V, such as the rear finisher or the roof spoiler, captures objects and the road surface in the area SP3 behind the passenger car V and in the space behind it (rear view). The in-vehicle camera 11d installed at a predetermined position on the right side of the passenger car V, such as the right side mirror, captures objects and the road surface in the area SP4 on the right side of the passenger car V and in the surrounding space (right side view). Although not shown in FIG. 5, the remaining in-vehicle camera 11e is installed inside the passenger car, for example on the ceiling, and images the interior area SP5 as shown in FIG. 6; it is used for crime prevention or crime reporting, for example against fare evasion or robbery in a taxi.
FIG. 6 is a view of the arrangement of the in-vehicle cameras 11a to 11e as seen from above the passenger car V. As shown in the figure, the four cameras, namely the in-vehicle camera 11a that images the area SP1, the in-vehicle camera 11b that images the area SP2, the in-vehicle camera 11c that images the area SP3, and the in-vehicle camera 11d that images the area SP4, are installed along the outer periphery VE of the body of the passenger car V in the counterclockwise or clockwise direction. That is, when following the outer periphery VE of the body of the passenger car V counterclockwise in the direction of the arrow C in the figure, the in-vehicle camera 11b is installed to the left of the in-vehicle camera 11a, the in-vehicle camera 11c to the left of the in-vehicle camera 11b, the in-vehicle camera 11d to the left of the in-vehicle camera 11c, and the in-vehicle camera 11a to the left of the in-vehicle camera 11d. Conversely, when following the outer periphery VE of the body of the passenger car V clockwise, opposite to the direction of the arrow C, the in-vehicle camera 11d is installed to the right of the in-vehicle camera 11a, the in-vehicle camera 11c to the right of the in-vehicle camera 11d, the in-vehicle camera 11b to the right of the in-vehicle camera 11c, and the in-vehicle camera 11a to the right of the in-vehicle camera 11b.
FIG. 7A shows an example of an image GSP1 in which the front in-vehicle camera 11a has imaged the area SP1, FIG. 7B shows an example of an image GSP2 in which the left-side in-vehicle camera 11b has imaged the area SP2, FIG. 7C shows an example of an image GSP3 in which the rear in-vehicle camera 11c has imaged the area SP3, FIG. 7D shows an example of an image GSP4 in which the right-side in-vehicle camera 11d has imaged the area SP4, and FIG. 7E shows an example of an image GSP5 in which the interior in-vehicle camera 11e has imaged the interior area SP5. The size of each image is 480 pixels high by 640 pixels wide. The image size is not particularly limited, and may be any size that allows a typical terminal device to reproduce moving images.
It should be noted that the number and position of the in-vehicle camera 11 can be appropriately determined according to the size, shape, detection area setting method, etc. of the passenger car V. The plurality of in-vehicle cameras 11 described above are assigned identifiers corresponding to the respective arrangements, and the in-vehicle control device 14 can identify each of the in-vehicle cameras 11 based on each identifier. Moreover, the vehicle-mounted control apparatus 14 can send an imaging command and other commands to a specific vehicle-mounted camera 11 by attaching an identifier to the command signal.
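Because each camera carries an identifier tied to its mounting position, the in-vehicle control device can address a single camera by attaching that identifier to a command. A minimal sketch of such identifier-based dispatch follows; the identifier values and the command format are assumptions for illustration only and are not specified in the patent.

```python
# Hypothetical identifier table: camera id -> mounting position.
CAMERA_POSITIONS = {
    "11a": "front",
    "11b": "left",
    "11c": "rear",
    "11d": "right",
    "11e": "cabin",
}

def build_command(camera_id: str, action: str) -> dict:
    """Build a command addressed to one camera by its identifier."""
    if camera_id not in CAMERA_POSITIONS:
        raise ValueError(f"unknown camera identifier: {camera_id}")
    return {"target": camera_id, "position": CAMERA_POSITIONS[camera_id], "action": action}

# Example: ask only the front camera 11a to capture an image.
print(build_command("11a", "capture"))
```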
The in-vehicle control device 14 controls the image processing device 12 to acquire the imaging signals captured by the in-vehicle cameras 11, and the image processing device 12 processes the imaging signals from the in-vehicle cameras 11 and converts them into the image information shown in FIGS. 7A to 7E. The in-vehicle control device 14 then generates monitoring image information based on the four pieces of image information shown in FIGS. 7A to 7D (monitoring image generation function), associates with the monitoring image information mapping information for projecting it onto the projection plane set on the side surface of a columnar projection model (mapping information addition function), and outputs them to the central monitoring device 20. The monitoring image generation function and the mapping information addition function are described in detail below.
The processing of generating the monitoring image information based on the four pieces of image information capturing the surroundings of the passenger car V and associating the mapping information with it may be executed by the monitoring terminal device 10 as in this example, or by the central monitoring device 20. In the latter case, the four pieces of image information capturing the surroundings of the passenger car V are transmitted as they are from the monitoring terminal device 10 to the central monitoring device 20, and the image processing device 22 and the central control device 21 of the central monitoring device 20 generate the monitoring image information, associate the mapping information with it, and perform the projection conversion.
First, the monitoring image generation function will be described. The in-vehicle control device 14 of the monitoring terminal device 10 of this embodiment controls the image processing device 12 to acquire the imaging signals of the in-vehicle cameras 11a to 11e, and generates a single monitoring image in which the image information of the in-vehicle cameras 11a to 11d, installed clockwise or counterclockwise along the outer periphery of the body of the passenger car V, is arranged in the order in which those cameras are installed.
As described above, in this embodiment the four in-vehicle cameras 11a to 11d are installed in the order 11a, 11b, 11c, 11d counterclockwise along the outer periphery VE of the body of the passenger car V. The in-vehicle control device 14 therefore joins the four images captured by the in-vehicle cameras 11a to 11d horizontally so that they form a single unit in accordance with the installation order of the cameras (in-vehicle cameras 11a → 11b → 11c → 11d), and generates one monitoring image. In the monitoring image of this embodiment, each image is arranged so that the ground contact surface (road surface) of the passenger car V forms its lower side, and the images are connected to one another at their sides in the height (vertical) direction with respect to the road surface.
FIG. 8 is a diagram showing an example of the monitoring image K. As shown in the figure, in the monitoring image K of this embodiment, the captured image GSP1 in which the front in-vehicle camera 11a imaged the area SP1, the captured image GSP2 in which the left-side in-vehicle camera 11b imaged the area SP2, the captured image GSP3 in which the rear in-vehicle camera 11c imaged the area SP3, and the captured image GSP4 in which the right-side in-vehicle camera 11d imaged the area SP4 are arranged horizontally in this order along the direction P from the left side to the right side of the drawing, forming a series of images. By displaying the monitoring image K generated in this way in order from the left end to the right, with the image corresponding to the road surface (the vehicle's ground contact surface) at the bottom, the observer can view the surroundings of the vehicle V on the display 24 as if looking around the vehicle counterclockwise.
When one monitoring image K is generated, four images acquired with the imaging timings of the in-vehicle cameras 11a to 11d made substantially simultaneous are used. This synchronizes the information contained in the monitoring image K, so the situation around the vehicle at the given timing can be represented accurately.
The monitoring images K generated from captured images whose imaging timings are substantially simultaneous may also be stored over time, and a moving-image monitoring image K containing a plurality of monitoring images K per predetermined unit time may be generated. By generating the moving-image monitoring image K from images with the same imaging timing, changes in the situation around the vehicle can be represented accurately.
If, instead, the images of the respective imaging areas were stored over time and a moving-image monitoring image generated for each imaging area were transmitted to the central monitoring device 20, the central monitoring device 20 might not be able to play back multiple moving images simultaneously, depending on its functions. Such a conventional central monitoring device 20 cannot play back and display a plurality of moving images at the same time, so the screen must be switched and the moving images played back one by one. In other words, the conventional central monitoring device 20 has the disadvantage that video in multiple directions cannot be viewed simultaneously and the entire surroundings of the vehicle cannot be monitored on a single screen.
In contrast, the in-vehicle control device 14 of this embodiment generates a single monitoring image K from a plurality of images, so images of different imaging directions can be played back as moving images at the same time regardless of the functions of the central monitoring device 20. That is, by playing back the monitoring images K continuously (moving-image playback), the four images contained in the monitoring image K are played back continuously and simultaneously, and changes in the state of areas in different directions can be monitored on a single screen.
The monitoring terminal device 10 of this embodiment can also generate the monitoring image K by compressing the image data so that the number of pixels of the monitoring image K is substantially the same as the number of pixels of each image from the in-vehicle cameras 11a to 11d. While the size of each image shown in FIGS. 7A to 7D is 480 × 640 pixels, in this embodiment compression processing is performed so that the size of the monitoring image K is 1280 × 240 pixels, as shown in FIG. 8. As a result, the size of the monitoring image K (1280 × 240 = 307,200 pixels) equals the size of each individual image (480 × 640 = 307,200 pixels), so image processing and image playback can be performed regardless of the functions of the central monitoring device 20 that receives the monitoring image K.
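As a concrete illustration of the generation and compression described above, the sketch below joins four 480 × 640 camera frames side by side in the installation order 11a → 11b → 11c → 11d and scales the result so that the finished monitoring image K is 1280 × 240 pixels, the same pixel count as one source frame. NumPy and Pillow stand in for the image processing device here; the function and array names are illustrative, not part of the patent.

```python
import numpy as np
from PIL import Image

def make_monitoring_image(front, left, rear, right):
    """Join four 480x640 frames in installation order and compress to 1280x240.

    Each input is an HxWx3 uint8 array with the road surface at its bottom edge,
    so the frames are joined along their vertical (height-direction) sides.
    """
    strip = np.hstack([front, left, rear, right])     # 480 x 2560 x 3
    img = Image.fromarray(strip).resize((1280, 240))  # Pillow takes (width, height)
    return np.asarray(img)                            # 240 x 1280 x 3

# Dummy frames standing in for GSP1..GSP4.
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(4)]
k = make_monitoring_image(*frames)
print(k.shape, k.shape[0] * k.shape[1])   # (240, 1280, 3) 307200 pixels
```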
Furthermore, the in-vehicle control device 14 of this embodiment can attach to the monitoring image K line figures indicating the boundaries between the arranged images. Taking the monitoring image K shown in FIG. 8 as an example, the in-vehicle control device 14 can attach rectangular partition images Bb, Bc, Bd, Ba, and Ba' between the images as line figures indicating the boundaries between them. By arranging partition images at the boundaries of the four images in this way, each image with a different imaging direction can be recognized separately within the series of monitoring images K; that is, the partition images function as frames for the respective captured images. In addition, since image distortion is large near the boundaries of the captured images, arranging the partition images at those boundaries makes it possible to hide the heavily distorted regions or to indicate that the distortion there is large.
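Functionally, the partition images are thin rectangles drawn over the seams of the joined frames. A minimal sketch, assuming a 1280 × 240 monitoring image like the one above and a fixed seam width, is shown below; the width value and function name are arbitrary choices for illustration.

```python
import numpy as np

def add_partitions(monitor_k: np.ndarray, width: int = 4) -> np.ndarray:
    """Draw vertical partition strips at the edges and seams of the four joined frames."""
    out = monitor_k.copy()
    h, w, _ = out.shape
    for boundary in (0, w // 4, w // 2, 3 * w // 4, w - width):
        out[:, boundary:boundary + width] = 0   # black partition strip
    return out

k = np.full((240, 1280, 3), 255, dtype=np.uint8)   # stand-in monitoring image
print(add_partitions(k).shape)
```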
The in-vehicle control device 14 of this embodiment can also generate the monitoring image K after correcting the distortion that occurs when the four images are projected onto the projection plane set on the side surface of the projection model described later. The peripheral regions of a captured image are prone to distortion, and the distortion tends to be particularly large when the in-vehicle camera 11 uses a wide-angle lens; it is therefore desirable to correct the distortion of the captured images using a predefined image conversion algorithm and correction amount.
Although not particularly limited, the in-vehicle control device 14 can, as shown in FIG. 9, read from the ROM information on the same projection model as the one onto which the monitoring image K is projected in the central monitoring device 20, project the captured images onto the projection plane of that projection model, and correct in advance the distortion that would occur on the projection plane. The image conversion algorithm and correction amount can be defined as appropriate according to the characteristics of the in-vehicle camera 11 and the shape of the projection model. By correcting in advance the distortion that occurs when the image K is projected onto the projection plane of the projection model, a monitoring image K with little distortion and good visibility can be provided. Correcting the distortion in advance also reduces the positional misalignment between the images arranged side by side.
Next, the mapping information addition function will be described. In the monitoring terminal device 10 of this embodiment, the in-vehicle control device 14 executes a process of associating with the monitoring image K mapping information for projecting the generated monitoring image K onto the projection plane set on the side surface of a columnar projection model M whose bottom surface is the ground contact surface of the passenger car V. The mapping information is information that allows the central monitoring device 20 receiving the monitoring image K to easily recognize the projection reference position. FIG. 10 is a diagram showing an example of the projection model M of this embodiment, and FIG. 11 is a schematic cross-sectional view of the projection model M shown in FIG. 10 along the xy plane.
As shown in FIGS. 10 and 11, the projection model M of this embodiment is a regular octagonal prism whose bottom surface is a regular octagon and which has height along the vertical direction (the z-axis direction in the figures). The shape of the projection model M is not particularly limited as long as it is a columnar body having side surfaces adjacent to one another along the boundary of the bottom surface; it may be a cylinder, a prism such as a triangular, quadrangular, or hexagonal prism, or an antiprism having a polygonal bottom surface and triangular side surfaces.
As shown in the same figures, the bottom surface of the projection model M of this embodiment is parallel to the ground contact surface of the passenger car V. Projection planes Sa, Sb, Sc, and Sd (hereinafter collectively referred to as the projection plane S), onto which the video of the surroundings of the passenger car V standing on the bottom surface of the projection model M is projected, are set on the inner side surfaces of the projection model M. The projection plane S can also be composed of a part of the projection plane Sa and a part of the projection plane Sb, a part of the projection plane Sb and a part of the projection plane Sc, a part of the projection plane Sc and a part of the projection plane Sd, and a part of the projection plane Sd and a part of the projection plane Sa. The monitoring image K is projected onto the projection plane S as a video looking down on the passenger car V from viewpoints R (R1 to R8, hereinafter collectively referred to as the viewpoint R) above the projection model M surrounding the passenger car V.
The in-vehicle control device 14 of this embodiment associates the reference coordinates of the captured image arranged at the right end or the left end with the monitoring image K as mapping information. Taking the monitoring image K shown in FIG. 8 as an example, the in-vehicle control device 14 attaches to the monitoring image K, as mapping information (reference coordinates) indicating the start position or end position of the monitoring image K when it is projected onto the projection model M, the coordinates A(x, y) of the upper-left vertex of the captured image GSP1 arranged at the right end and the coordinates B(x, y) of the upper-right vertex of the captured image GSP2 arranged at the left end. The reference coordinates of the captured image indicating the start or end position are not particularly limited, and may instead be the lower-left vertex of the monitoring image K arranged at the left end or the lower-right vertex of the monitoring image K arranged at the right end. The mapping information may be attached to each pixel of the image data of the monitoring image K, or may be managed as a file separate from the monitoring image K.
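In practice the mapping information amounts to a pair of reference coordinates, A and B, carried with the monitoring image so the receiver knows where projection starts and ends. One way to carry it, sketched below, is a small JSON record managed as a sidecar file next to the monitoring image, which the text explicitly allows; the coordinate values, field names, and file name are placeholders for illustration only.

```python
import json

# Illustrative reference coordinates; in the patent these are the vertex
# coordinates A(x, y) and B(x, y) of the frames at the ends of the monitoring image K.
record = {
    "A": {"x": 0, "y": 0},
    "B": {"x": 1279, "y": 0},
    "projection_model": "regular octagonal prism M",
}

# The mapping information may be managed as a file separate from the monitoring
# image K, e.g. a small JSON sidecar written next to the frame data.
with open("monitoring_image_K.mapping.json", "w") as f:
    json.dump(record, f, indent=2)
print(record)
```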
In this way, by associating with the monitoring image K, as mapping information, the information indicating the start or end position of the monitoring image K, that is, the reference coordinates used as the reference in the projection processing, the central monitoring device 20 that receives the monitoring image K can easily recognize the reference position during the projection processing, and can therefore easily and quickly project the monitoring image K, in which the images are arranged in the installation order of the in-vehicle cameras 11a to 11d, onto the projection plane S on the side surfaces of the projection model M in sequence. That is, as shown in FIG. 11, the captured image GSP1 of the area ahead of the vehicle can be projected onto the projection plane Sa located in the imaging direction of the in-vehicle camera 11a, the captured image GSP2 of the right side of the vehicle onto the projection plane Sb located in the imaging direction of the in-vehicle camera 11b, the captured image GSP3 of the area behind the vehicle onto the projection plane Sc located in the imaging direction of the in-vehicle camera 11c, and the captured image GSP4 of the left side of the vehicle onto the projection plane Sd located in the imaging direction of the in-vehicle camera 11d.
As a result, the monitoring image K projected onto the projection model M can present a view as if one were looking around the surroundings of the passenger car V. That is, since the monitoring image K, which contains the four images arranged in a single horizontal row in accordance with the installation order of the in-vehicle cameras 11a to 11d, is projected onto the side surfaces of the columnar projection model M that are likewise arranged horizontally, the video of the surroundings of the passenger car V can be reproduced in the monitoring image K projected onto the projection plane S of the columnar projection model M while maintaining its positional relationships.
The in-vehicle control device 14 of this embodiment can store the correspondence between each coordinate value of the monitoring image K and each coordinate value of the projection planes S of the projection model M as mapping information and attach it to the monitoring image K; alternatively, this correspondence may be stored in the central monitoring device 20 in advance.
The positions of the viewpoint R and the projection plane S shown in FIGS. 10 and 11 are examples and can be set arbitrarily. In particular, the viewpoint R can be changed by the operator's operation. The relationship between the viewpoint R and the projection position of the monitoring image K is defined in advance, and when the position of the viewpoint R is changed, the monitoring image K as seen from the newly set viewpoint R can be projected onto the projection plane S (Sa to Sd) by executing a predetermined coordinate transformation. A known method can be used for this viewpoint conversion processing.
As described above, the in-vehicle control device 14 of this embodiment generates the monitoring image K based on the image information captured at a predetermined timing, associates with this monitoring image K the mapping information, the reference coordinates, and the information on the line figures (partition images) indicating the boundaries, and stores them over time according to the imaging timing. Although not particularly limited, the in-vehicle control device 14 may store the monitoring images K as a single moving-image file containing a plurality of monitoring images K per predetermined unit time, or may store the monitoring images K in a form that allows transfer and playback by streaming.
Meanwhile, the communication device 23 of the central monitoring device 20 receives the monitoring image K transmitted from the monitoring terminal device 10 and the mapping information associated with it. The image information captured by the interior in-vehicle camera 11e is received separately. As described above, in this monitoring image K the images of the four in-vehicle cameras 11 installed at different positions on the body of the passenger car V are arranged according to the installation order of the in-vehicle cameras 11a to 11d installed clockwise or counterclockwise along the outer periphery of the body of the passenger car V (the clockwise or counterclockwise order along the outer periphery of the body of the vehicle V). The monitoring image K is also associated with mapping information for projecting the monitoring image K onto the projection plane S of the octagonal prism projection model M. The communication device 23 sends the acquired monitoring image K and mapping information to the image processing device 22.
The image processing device 22 reads the previously stored projection model M and, based on the mapping information, generates a display image in which the monitoring image K is projected onto the projection planes Sa to Sd set on the side surfaces of the octagonal prism projection model M whose bottom surface is the ground contact surface of the passenger car V shown in FIGS. 10 and 11. Specifically, each pixel of the received monitoring image K is projected onto the pixels of the projection planes Sa to Sd according to the mapping information. When projecting the monitoring image K onto the projection model M, the image processing device 22 recognizes the starting point of the monitoring image K (its right end or left end) based on the reference coordinates received together with the monitoring image K, and performs the projection processing so that this starting point coincides with the starting point defined in advance on the projection model M (the right end or left end of the projection plane S). When projecting the monitoring image K onto the projection model M, the image processing device 22 also arranges on the projection model M the line figures (partition images) indicating the boundaries between the images. The partition images may be attached to the projection model M in advance, or may be attached to the monitoring image K after the projection processing.
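Functionally, the projection step can be thought of as cutting the received monitoring image K back into its four strips, using the reference coordinates to locate the starting edge, and assigning each strip to the corresponding face Sa to Sd of the prism for texturing. The sketch below shows only that bookkeeping (alignment, slicing, and face assignment), not the actual 3D rendering; the function name and the `start_x` parameter are illustrative assumptions.

```python
import numpy as np

FACES = ["Sa", "Sb", "Sc", "Sd"]   # faces in the order the frames were joined

def split_for_projection(monitor_k: np.ndarray, start_x: int = 0) -> dict:
    """Cut the monitoring image K into four equal strips, one per projection face.

    `start_x` plays the role of the reference coordinate telling the central
    monitoring device where the image starts; the strip is rolled so that the
    starting edge lines up with the first face.
    """
    h, w, _ = monitor_k.shape
    rolled = np.roll(monitor_k, -start_x, axis=1)
    quarter = w // 4
    return {face: rolled[:, i * quarter:(i + 1) * quarter] for i, face in enumerate(FACES)}

k = np.zeros((240, 1280, 3), dtype=np.uint8)
parts = split_for_projection(k)
print({face: strip.shape for face, strip in parts.items()})
```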
The display 24 displays the monitoring image K projected onto the projection plane S of the projection model M. FIG. 12 shows an example of the display image of the monitoring image K. By using an input device 25 such as a mouse or keyboard, or by making the display 24 a touch-panel input device 25, the viewpoint can be freely set and changed by the observer's operation. Since the correspondence between the viewpoint position and the projection plane S is defined in advance in the image processing device 22 or the display 24 described above, the monitoring image K corresponding to the changed viewpoint can be displayed on the display 24 based on this correspondence.
Next, the operation of the monitoring system 1 according to this embodiment will be described. FIG. 3 is a flowchart showing the operation on the monitoring terminal device 10 side, and FIG. 4 is a flowchart showing the operation on the central monitoring device 20 side.
As shown in FIG. 3, the monitoring terminal device 10 acquires the surrounding video and the interior video from the in-vehicle cameras 11 at a predetermined time interval (one routine shown in the figure) and converts them into image information with the image processing device 12 (step ST1). It also detects, with the position detection device 15, the current position information of the passenger car V on which the monitoring terminal device 10 is mounted (step ST2).
In step ST3, it is determined whether the notification button 16 has been pressed. If the notification button 16 has been pressed, the process proceeds to step ST4, in which the image information acquired in step ST1, the position information acquired in step ST2, and the time information of the CPU are associated with one another and transmitted, together with abnormality information indicating that an abnormality has occurred, to the central monitoring device 20 via the communication device 13 and the telecommunication network 30. In this way, the occurrence of a security-related abnormality such as an accident or crime is automatically reported to the central monitoring device 20 together with the position information of the passenger car V and the image information of its surroundings, so monitoring of the town is further strengthened. In this example the image information and position information are acquired in the first steps ST1 and ST2, but they may instead be acquired at a timing between steps ST3 and ST4.
Returning to step ST3, if the notification button 16 has not been pressed, the process proceeds to step ST5, where an image transmission command from the central monitoring device 20 is read. If there is an image transmission command from the central monitoring device 20 in step ST6, the process proceeds to step ST7, in which the image information acquired in step ST1, the position information acquired in step ST2, and the time information of the CPU are associated with one another and transmitted to the central monitoring device 20 via the communication device 13 and the telecommunication network 30. In this way, even if the occupants of the passenger car V do not press the notification button 16, the required image information can be transmitted whenever the observer operating the central monitoring device 20 requests it. In this example the image information and position information are acquired in the first steps ST1 and ST2, but they may instead be acquired at a timing between steps ST3 and ST4.
If there is no image transmission command from the central monitoring device 20 in step ST6, the process proceeds to step ST8, where it is determined whether there is an image transmission suppression command from the central monitoring device 20. The image transmission suppression command is a command executed to secure line capacity when the notification button 16 of a passenger car V in the vicinity has been pressed and abnormality information has been transmitted to the central monitoring device 20. If there is an image transmission suppression command from the central monitoring device 20, the process proceeds to step ST10, in which the image information acquired in step ST1, the position information acquired in step ST2, and the time information of the CPU are associated with one another and temporarily stored in the RAM of the monitoring terminal device 10. This prevents the line from the passenger car V whose notification button 16 was pressed to the central monitoring device 20 from becoming congested and secures sufficient line capacity, so the transmission speed of the image information can be increased or the image quality of the image information can be set high.
If there is no image transmission suppression command from the central monitoring device 20 in step ST8, the process proceeds to step ST9, where only the position information acquired in step ST2 is transmitted to the central monitoring device 20 via the communication device 13 and the telecommunication network 30. The central monitoring device 20 then displays the current position of each passenger car V on the map information (step ST12 in FIG. 4), so the current position of each passenger car V can be grasped in a timely manner.
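The terminal-side branching of steps ST1 to ST10 can be summarized in a short sketch. The following Python code is a minimal, simplified illustration of that control flow only; the callable names and data shapes (capture_image, get_position, send, and so on) are hypothetical placeholders and are not part of the disclosed device.

```python
import time
from collections import deque


def run_terminal_routine(capture_image, get_position, send,
                         image_requested, suppression_active,
                         button_pressed, buffer):
    """One pass through the FIG. 3 routine (steps ST1-ST10), as a sketch."""
    image = capture_image()        # ST1: acquire surrounding/interior video as image information
    position = get_position()      # ST2: detect the current position of the vehicle
    record = {"image": image, "position": position, "time": time.time()}

    if button_pressed():                       # ST3 -> ST4: report button pressed
        send({**record, "abnormality": True})
    elif image_requested():                    # ST5/ST6 -> ST7: central device asked for images
        send(record)
    elif suppression_active():                 # ST8 -> ST10: suppression command in effect
        buffer.append(record)                  # keep locally; may be sent afterwards
    else:                                      # ST8 -> ST9: default case, position only
        send({"position": position, "time": record["time"]})


# Toy usage with stand-in callables (all hypothetical):
buf = deque(maxlen=100)
run_terminal_routine(
    capture_image=lambda: b"jpeg-bytes",
    get_position=lambda: (35.0, 139.0),
    send=lambda payload: print("sent", sorted(payload)),
    image_requested=lambda: False,
    suppression_active=lambda: False,
    button_pressed=lambda: False,
    buffer=buf,
)
```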
Meanwhile, as shown in FIG. 4, the central monitoring device 20 acquires position information and abnormality information from all the passenger cars equipped with the monitoring terminal device 10 (step ST11). If the communication load is not high, image information may also be acquired at this timing.
In step ST12, based on the position information acquired in step ST11, each passenger car V is displayed on the map information of the map database shown on the display 24, as illustrated in the upper left of FIG. 1. Since the position information of the passenger car V is acquired and transmitted at a predetermined timing in each routine of FIG. 3, the supervisor can grasp the current position of each passenger car V in a timely manner.
In step ST13, it is determined whether abnormality information reported from the monitoring terminal device 10 of a passenger car V, that is, a report that a security-related abnormality such as an accident or a crime has occurred, has been received. This abnormality information is output when the passenger of the passenger car V presses the report button 16 of the monitoring terminal device 10. If there is abnormality information, the passenger car V that output it is identified in step ST14, image information and time information are received from the monitoring terminal device 10 of that passenger car, and the image information is displayed on the display 24. In addition, as shown in the upper left of FIG. 1, the passenger car displayed on the map information is highlighted, for example by changing its color, so that it can be distinguished from the other passenger cars. As a result, the position where the abnormality occurred can be seen on the map information, and the content of the abnormality can be grasped on the display 24.
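A compact sketch of the central-side handling in steps ST11 to ST14 (receiving reports, plotting vehicles, and highlighting a reporting vehicle) might look as follows. The data structures and the plot_on_map callback are assumptions introduced for illustration, not part of the disclosed device.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple


@dataclass
class VehicleReport:
    vehicle_id: str
    position: Tuple[float, float]      # (latitude, longitude)
    timestamp: float
    abnormality: bool = False
    image: Optional[bytes] = None


@dataclass
class CentralMonitor:
    """Sketch of steps ST11-ST14: collect reports, plot them, highlight a reporting vehicle."""
    vehicles: Dict[str, VehicleReport] = field(default_factory=dict)

    def receive(self, report: VehicleReport) -> None:
        self.vehicles[report.vehicle_id] = report       # ST11: position (and abnormality) information

    def refresh_map(self, plot_on_map) -> None:
        for report in self.vehicles.values():           # ST12: show every vehicle on the map
            # ST13/ST14: a vehicle that reported an abnormality is drawn highlighted
            plot_on_map(report.vehicle_id, report.position,
                        highlighted=report.abnormality)


# Toy usage (all names and coordinates are made up):
monitor = CentralMonitor()
monitor.receive(VehicleReport("Va", (35.01, 139.00), 0.0, abnormality=True))
monitor.receive(VehicleReport("Vb", (35.02, 139.01), 0.0))
monitor.refresh_map(lambda vid, pos, highlighted:
                    print(vid, pos, "highlighted" if highlighted else ""))
```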
In the next step ST15, a passenger car Vb traveling in the vicinity of the passenger car Va that output the abnormality information (for example, within a radius Ra = 50 m of the passenger car Va, as shown in FIG. 13) is detected, and a command to transmit image information and time information is output to that passenger car Vb. Image information can thereby be acquired from the passenger car Vb traveling near the passenger car Va that output the abnormality information, so the content of the abnormality can be grasped in detail from a plurality of items of image information in addition to the image information from the passenger car Va itself. The position information of the passenger car Va that output the abnormality information may also be transmitted to emergency vehicles such as police cars, ambulances, and fire engines. In this case, image information may be attached to the transmission in order to convey the content of the abnormality. This makes it possible to dispatch an emergency vehicle before a report from the scene arrives, enabling a quick response to accidents and crimes.
In step ST16, a command to transmit only position information, that is, a command to suppress transmission of image information, is output to the passenger cars Vc that are not in the vicinity of the passenger car Va that output the abnormality information, namely passenger cars in the range farther than the radius Ra from the reporting car Va shown in FIG. 13 (for example, beyond the radius Rb = 50 m). Although the possibility that image information from these passenger cars Vc would yield useful information in addition to the image information from the reporting car Va is not zero, the intent here is to give priority to securing line capacity. The presence or absence of the image transmission suppression command output here is determined in step ST8 of FIG. 3 described above. When transmission of image information is suppressed in step ST16, the image information is temporarily stored in the RAM or other memory of the monitoring terminal device 10 of that moving body Vc in step ST10 of FIG. 3, as described above, and is transmitted afterwards as necessary.
Further, in step ST16, the central monitoring device 20 of this example also suppresses, or releases the suppression of, image transmission depending on the relationship with the communication area Ta of the base station to which the monitoring terminal device 10 of the reporting car Va belongs. That is, as shown in FIG. 13, a command to suppress transmission of image information is output to a moving body Vc that is located within the radius Rb, farther than the radius Ra from the reporting car Va, and that belongs to the communication area Ta of the same base station, whereas the suppression is released for a moving body Vd that does not belong to the communication area Ta of the same base station but to the communication area Tb of another base station, because securing line capacity is not particularly necessary there. Likewise, transmission of image information is not suppressed for a moving body Vf that is located farther than the radius Rb from the reporting car Va and belongs to the communication area Tb of another base station, but it is suppressed for a moving body Ve that is located farther than the radius Rb and belongs to the communication area Ta of the same base station.
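The distance-band and base-station conditions of steps ST15 and ST16 amount to a small decision table (cases Vb, Vc, Vd, Ve, and Vf of FIG. 13). The sketch below expresses that table directly; the haversine distance helper and the numeric example values are illustrative assumptions, and only the decision logic mirrors the description above.

```python
from math import asin, cos, radians, sin, sqrt

RA_M = 50.0   # inner radius Ra of FIG. 13 (example value from the description)
# The outer radius Rb of FIG. 13 does not change the outcome here: beyond Ra,
# the decision depends only on the base-station communication area.


def distance_m(p1, p2):
    """Great-circle distance between two (lat, lon) points in metres (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))


def image_command(reporter_pos, reporter_cell, vehicle_pos, vehicle_cell):
    """Return 'send', 'suppress' or 'none' for one surrounding vehicle (ST15/ST16)."""
    if distance_m(reporter_pos, vehicle_pos) <= RA_M:
        return "send"          # Vb: near the reporting car -> request its images
    if vehicle_cell == reporter_cell:
        return "suppress"      # Vc, Ve: same base-station area Ta -> save line capacity
    return "none"              # Vd, Vf: other base-station area Tb -> no suppression needed


# Toy check of the FIG. 13 cases (coordinates are made up):
va = (35.0000, 139.0000)
print(image_command(va, "Ta", (35.0003, 139.0000), "Ta"))   # ~33 m, same cell  -> 'send'     (Vb)
print(image_command(va, "Ta", (35.0010, 139.0000), "Ta"))   # ~111 m, same cell -> 'suppress' (Vc/Ve)
print(image_command(va, "Ta", (35.0010, 139.0000), "Tb"))   # other cell        -> 'none'     (Vd/Vf)
```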
In step ST17, all the position information, image information, and time information received from the monitoring terminal devices 10 is recorded on a recording medium. This record is used to help resolve accidents and crimes after they have occurred. If there is no abnormality information in step ST13, the process returns to step ST11 without performing steps ST14 to ST17.
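Step ST17 is essentially an append-only journal of everything received. A minimal way to keep such a record, assuming JSON-serialisable report dictionaries (the file name and field names are illustrative only), is sketched below.

```python
import json
import time


def record_report(report: dict, path: str = "monitoring_log.jsonl") -> None:
    """Append one received report (position, image reference, time) to a journal file (ST17)."""
    entry = {"received_at": time.time(), **report}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")


# Toy usage (values are made up):
record_report({"vehicle_id": "Va", "position": [35.0, 139.0], "image_ref": "img_0001.jpg"})
```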
As described above, the monitoring system of the present embodiment provides the following effects.
(1) In the monitoring system 1 of this example, the in-vehicle cameras 11 are mounted on a plurality of passenger cars V, the surroundings are imaged by these in-vehicle cameras, and the position information of each passenger car V is detected by the position detection device 15, so a wide area can be monitored with fewer cameras than with fixed cameras. Since the plurality of passenger cars V travel at random, blind spots can also be reduced with fewer cameras than with fixed cameras. Because the cameras are mounted on the passenger cars V, they are less likely than fixed cameras to be destroyed for the purpose of obstructing monitoring. Furthermore, because a wide area can be monitored, the patrol work of the supervisor can be reduced.
(2) Since the monitoring system 1 of this example acquires time information in addition to position information and image information, the position information and image information can be organized along the time information when a post-incident analysis of an accident or a crime is performed, which can be expected to contribute to solving the case.
(3) The monitoring system 1 of this example always acquires and transmits position information, whereas image information is acquired and transmitted only in an abnormal state or when requested by the central monitoring device 20. The communication capacity can therefore be kept to the necessary minimum and a drop in communication speed can be suppressed. The recording capacity for the information can also be kept to the necessary minimum, allowing an inexpensive and compact system.
(4) Since the monitoring terminal device 10 of this example is provided with the report button 16, when the passenger of the passenger car V discovers an abnormality, it can be reported to the central monitoring device 20 immediately together with the position information and the image information. As a result, the content of the abnormality can be grasped more accurately, quickly, and easily than with an explanation by telephone or the like, and a contribution to the initial investigation of an incident can be expected.
(5) Since the monitoring terminal device 10 of this example transmits the image information together with the position information upon receiving an image transmission command from the central monitoring device 20, the supervisor can check the video of a desired position at the place where the central monitoring device 20 is installed, without going to the scene.
(6) Since the central monitoring device 20 of this example displays the position information received from the monitoring terminal devices on the map information on the display 24, the arrangement of the passenger cars V from which information can be obtained can be grasped at the central monitoring device 20. As a result, the distribution of the areas from which image information can be acquired can be understood, which can be expected to contribute to the supervisor's deployment planning. In addition, since the image information received from the monitoring terminal devices 10 is displayed on the display 24 as necessary, the supervisor can check the video of a desired position at the place where the central monitoring device 20 is installed, without going to the scene.
(7) When the central monitoring device 20 of this example receives abnormality information from a monitoring terminal device 10, it highlights the reporting passenger car V on the display 24 and displays the image information received from that passenger car V on the display 24, so the position and the video can be checked immediately and a prompt response to the incident can be expected.
(8) When the central monitoring device 20 of this example receives abnormality information from a monitoring terminal device 10, it transmits an image transmission command to the passenger cars Vb traveling in the area near the passenger car Va that transmitted the abnormality information, as shown in FIG. 13. Image information can therefore be obtained immediately not only from one passenger car V but from a plurality of passenger cars V, making it easier to grasp the content of the abnormality.
(9) When the central monitoring device 20 of this example receives abnormality information from a monitoring terminal device 10, it transmits an image transmission suppression command to the passenger cars Vc traveling outside the area near the passenger car Va that transmitted the abnormality information, as shown in FIG. 13. The transmission speed of the image information from the reporting car Va therefore becomes faster, or alternatively a high-quality image can be transmitted from the reporting car Va. In a passenger car V whose image transmission is suppressed, the image information is stored in the RAM of the monitoring terminal device 10, so it can be used for post-incident analysis and the like. Moreover, since the image transmission suppression is released for passenger cars Vd that belong to a different communication base station even if they are not near the reporting car Va, image information can still be gathered at the central monitoring device 20 when multiple accidents or other incidents occur at the same time.
(10) When the central monitoring device 20 of this example receives abnormality information from a monitoring terminal device 10, it transmits the abnormality information to emergency vehicles such as police cars, ambulances, and fire engines, so the incident can be responded to quickly. By transmitting the position information and the image information to the emergency vehicles together with the abnormality information, the content of the abnormality can be grasped quickly and accurately on the emergency vehicle side.
In the embodiment described above, the position information of the passenger car V and the image information from the in-vehicle cameras 11a to 11e are acquired, but this information may also be acquired in combination with image information from the fixed cameras 11f installed around the city, as shown in FIG. 1. As the passenger cars V that acquire position information and image information, it is desirable to use taxis V1 and buses that travel in a predetermined area as shown in FIG. 1, but private cars V2 and emergency vehicles V3 may also be used.
In the embodiment described above, five cameras are mounted on the passenger car V and the 360-degree surrounding video is acquired as image information using four of the in-vehicle cameras 11a to 11d, but the in-vehicle camera 11e for the interior may be omitted. The four in-vehicle cameras 11a to 11d may also be reduced to three or fewer, particularly in an environment in which image information can be acquired from many passenger cars V, such as a monitoring area with heavy traffic.
The passenger car V corresponds to the moving body according to the present invention, the position detection device 15 corresponds to the position detection means according to the present invention, the in-vehicle cameras 11 and the image processing device 12 correspond to the image generation means according to the present invention, the in-vehicle control device 14 corresponds to the information acquisition control means according to the present invention, the CPU of the in-vehicle control device 14 corresponds to the time detection means according to the present invention, the report button 16 corresponds to the command input means according to the present invention, the communication device 13 corresponds to the command reception means and the information output means according to the present invention, the communication device 23 corresponds to the information input means, the abnormality information reception means, and the command output means according to the present invention, and the display 24 corresponds to the first display control means and the second display control means according to the present invention.
DESCRIPTION OF SYMBOLS
1 ... Vehicle monitoring system
10 ... Monitoring terminal device
11, 11a to 11e ... In-vehicle camera
11f ... Fixed street camera
12 ... Image processing device
13 ... Communication device
14 ... In-vehicle control device
15 ... Position detection device
16 ... Report button
20 ... Central monitoring device
21 ... Central control device
22 ... Image processing device
23 ... Communication device
24 ... Display
25 ... Input device
30 ... Telecommunication network
V, V1, V2, V3 ... Moving body
M ... Projection model
S, Sa, Sb, Sc, Sd ... Projection plane
R1 to R8 ... Viewpoint
Claims (10)
- A monitoring system comprising a central monitoring device that receives monitoring information via wireless communication from a monitoring terminal device, the monitoring terminal device comprising position detection means for detecting position information of each of a plurality of moving bodies, image generation means mounted on each of the plurality of moving bodies for imaging the surroundings of the moving body to generate image information, and abnormality information output means for outputting abnormality information indicating an abnormal state, wherein the central monitoring device comprises: information input means for inputting the position information and the image information output from the monitoring terminal device; abnormality information reception means for receiving the abnormality information output from the monitoring terminal device; and command output means for outputting a command to suppress transmission of image information to the monitoring terminal device of a moving body present in a predetermined first area based on the position of the moving body from which the abnormality information was output.
- The monitoring system according to claim 1, wherein the command output means outputs, to the monitoring terminal device of a moving body present in the first area, a command to store the position information and the image information in association with each other.
- The monitoring system according to claim 1 or 2, wherein the command output means releases the suppression of image information transmission for the monitoring terminal device of a moving body that, even within the first area, does not belong to the communication base station to which the monitoring terminal device of the moving body from which the abnormality information was output belongs.
- The monitoring system according to any one of claims 1 to 3, wherein the command output means outputs a command to transmit position information and image information to the monitoring terminal device of a moving body present in a second area closer to the moving body from which the abnormality information was output than the first area.
- The monitoring system according to any one of claims 1 to 4, wherein the command output means outputs a command to suppress transmission of image information to the monitoring terminal device of a moving body that belongs to the communication base station to which the monitoring terminal device of the moving body from which the abnormality information was output belongs, even in a third area farther from the moving body from which the abnormality information was output than the first area.
- A monitoring system comprising a monitoring terminal device that outputs monitoring information to a central monitoring device via wireless communication, wherein the monitoring terminal device comprises: position detection means for detecting position information of each of a plurality of moving bodies; image generation means mounted on each of the plurality of moving bodies for imaging the surroundings of the moving body to generate image information; information acquisition control means for controlling the position detection means and the image generation means to acquire the position information or the image information at a predetermined timing; abnormality information output means for outputting abnormality information indicating an abnormal state; transmission means for transmitting the position information or the image information; and storage means for storing the position information and the image information in association with each other when transmission of the image information is suppressed by a command from the central monitoring device.
- The monitoring system according to claim 6, wherein the monitoring terminal device comprises time detection means for detecting time information, and the storage means stores the time information at which the position information and the image information were acquired in association with the position information and the image information.
- A monitoring system comprising a monitoring terminal device that acquires monitoring information and a central monitoring device that receives the monitoring information via wireless communication, wherein the monitoring terminal device comprises position detection means for detecting position information of each of a plurality of moving bodies, image generation means mounted on each of the plurality of moving bodies for imaging the surroundings of the moving body to generate image information, and abnormality information output means for outputting abnormality information indicating an abnormal state, and the central monitoring device comprises information input means for inputting the position information and the image information output from the monitoring terminal device, abnormality information reception means for receiving the abnormality information output from the monitoring terminal device, and command output means for outputting a command to suppress transmission of image information to the monitoring terminal device of a moving body present in a predetermined first area based on the position of the moving body from which the abnormality information was output.
- A monitoring method comprising: acquiring position information of each of a plurality of moving bodies and image information around the moving bodies at a predetermined timing; transmitting the position information and the image information via wireless communication; outputting abnormality information; and suppressing transmission of image information from the moving bodies present in a predetermined first area based on the position of the moving body that output the abnormality information.
- A monitoring system comprising a monitoring terminal device that acquires monitoring information and a central monitoring device that receives the monitoring information via wireless communication, wherein the monitoring terminal device comprises position detection means for detecting position information of each of a plurality of moving bodies and image generation means mounted on each of the plurality of moving bodies for imaging the surroundings of the moving body to generate image information, the central monitoring device comprises information input means for inputting the position information and the image information output from the monitoring terminal device, and the monitoring system further comprises control means for controlling so that the position information and the image information are output from a specific monitoring terminal device and are not output from monitoring terminal devices other than the specific monitoring terminal device.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012014617 | 2012-01-26 | ||
JP2012-014617 | 2012-01-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013111492A1 true WO2013111492A1 (en) | 2013-08-01 |
Family
ID=48873231
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/083465 WO2013111492A1 (en) | 2012-01-26 | 2012-12-25 | Monitoring system |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2013111492A1 (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007076404A (en) * | 2005-09-12 | 2007-03-29 | Sony Ericsson Mobilecommunications Japan Inc | Vehicle data obtaining system, and vehicle data obtaining device |
JP2007328477A (en) * | 2006-06-07 | 2007-12-20 | Hitachi Ltd | Communication system, communication terminal and information processor |
JP2009205368A (en) * | 2008-02-27 | 2009-09-10 | Denso Corp | Accident notification system and onboard device |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015179333A (en) * | 2014-03-18 | 2015-10-08 | 株式会社日本総合研究所 | Local community watching system and local community watching method using automatic operating traffic system |
EP3599134A1 (en) * | 2018-07-25 | 2020-01-29 | Denso Ten Limited | Accident report device and accident report method |
US20200031299A1 (en) * | 2018-07-25 | 2020-01-30 | Denso Ten Limited | Accident report device and accident report method |
JP2020017077A (en) * | 2018-07-25 | 2020-01-30 | 株式会社デンソーテン | Accident notification device and accident notification method |
US10713829B2 (en) | 2018-07-25 | 2020-07-14 | Denso Ten Limited | Accident report device and accident report method |
JP7168367B2 (en) | 2018-07-25 | 2022-11-09 | 株式会社デンソーテン | accident reporting device |
CN114868167A (en) * | 2019-12-13 | 2022-08-05 | 大和通信株式会社 | Security system and monitoring method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5786963B2 (en) | Monitoring system | |
JP5811190B2 (en) | Monitoring system | |
JP6451840B2 (en) | Information presentation system | |
KR101470313B1 (en) | Real-time Collection System and Method of Video Data Car Blackbox | |
WO2013008623A1 (en) | Vehicle monitoring device, vehicle monitoring system, terminal device, and vehicle monitoring method | |
JP4643860B2 (en) | VISUAL SUPPORT DEVICE AND SUPPORT METHOD FOR VEHICLE | |
WO2015045578A1 (en) | Information provision system | |
WO2012137367A1 (en) | Image accumulation system | |
WO2013111494A1 (en) | Monitoring system | |
JP6260174B2 (en) | Surveillance image presentation system | |
WO2013111491A1 (en) | Monitoring system | |
WO2013111492A1 (en) | Monitoring system | |
JP2016092814A (en) | Drive recorder with 360-degree panoramic view | |
KR20140035645A (en) | Vehicle black box system and method for providing thereof | |
WO2013111479A1 (en) | Monitoring system | |
WO2013161345A1 (en) | Monitoring system and monitoring method | |
JP5796638B2 (en) | Monitoring system | |
WO2013094405A1 (en) | Monitoring system | |
WO2013125301A1 (en) | Surveillance system | |
WO2013111493A1 (en) | Monitoring system | |
JP5812105B2 (en) | Monitoring system | |
JP4093094B2 (en) | Vehicle periphery monitoring system and vehicle periphery monitoring method | |
JP4696825B2 (en) | Blind spot image display device for vehicles | |
JP2013255237A (en) | Image display device and image display method | |
KR20130028214A (en) | Black box apparatus for car |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 12866441; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 12866441; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: JP |