Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. The embodiments in the present disclosure and the features of the embodiments may be combined with each other in the absence of conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in the present disclosure are intended to be illustrative rather than limiting; those skilled in the art should understand them as meaning "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic view of an application scenario of a target determination method according to some embodiments of the present disclosure.
In the application scenario of fig. 1, the computing device 101 may first obtain a set of images 102 captured by a vehicle-mounted camera, camera parameter information 104 of the vehicle-mounted camera, and a set of corner coordinate groups 103, where the corner coordinates are coordinates in a world coordinate system. Next, based on the camera parameter information 104, coordinate conversion is performed on each corner coordinate in each corner coordinate group in the set 103 to generate converted corner coordinates, yielding a set of converted corner coordinate groups 105, where the converted corner coordinates are coordinates in an image coordinate system. Then, based on the set of converted corner coordinate groups 105, the detection frame information corresponding to each image in the image set 102 is determined, yielding a set of detection frame information groups 106. Next, each piece of detection frame information in each detection frame information group in the set 106 is corrected to generate corrected detection frame information, yielding a set of corrected detection frame information groups 107. Based on the set 107, the number 108 of traffic lights contained in each image in the image set 102 is determined. An image whose traffic-light number 108 satisfies a predetermined condition is then selected from the image set as a candidate image 109. Finally, the color of each traffic light in the candidate image 109 is identified, resulting in a color information set 110.
The computing device 101 may be hardware or software. When it is hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices, or as a single server or terminal device. When it is software, it may be installed in the hardware devices enumerated above, for example as multiple pieces of software or software modules providing distributed services, or as a single piece of software or software module. No particular limitation is made here.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to fig. 2, a flow 200 of some embodiments of a target determination method according to the present disclosure is shown. The method may be performed by the computing device 101 of fig. 1. The target determination method comprises the following steps:
Step 201, acquiring an image set captured by a vehicle-mounted camera, camera parameter information of the vehicle-mounted camera, and a corner coordinate set.
In some embodiments, the subject executing the target determination method (e.g., the computing device 101 shown in fig. 1) may obtain the set of images captured by the vehicle-mounted camera, the camera parameter information of the vehicle-mounted camera, and the set of corner coordinates through a wired or wireless connection. The camera parameter information includes, but is not limited to, at least one of: a first camera parameter, a second camera parameter, and a third camera parameter. The corner coordinates are three-dimensional coordinates in a world coordinate system. The first camera parameter represents the camera intrinsic parameter. The second camera parameter is a rotation matrix. The third camera parameter is a translation vector.
As an example, the second camera parameter may be
The third camera parameter may be
The set of corner point coordinates may be [[[12, 14, 16], [12, 18, 16]], [[29, 14, 16], [39, 18, 16]]].
Step 202, based on the camera parameter information, performing coordinate conversion on each corner coordinate in each corner coordinate group in the corner coordinate set to generate converted corner coordinates, so as to obtain a set of converted corner coordinates.
In some embodiments, the executing entity may perform coordinate conversion on each corner coordinate in each corner coordinate group in the corner coordinate set based on the camera parameter information to generate converted corner coordinates, so as to obtain a set of converted corner coordinates. The converted corner coordinates are coordinates in an image coordinate system. The image coordinate system is established with the upper left corner of the image as the origin, the line parallel to the horizontal direction of the image as the horizontal axis, and the line parallel to the vertical direction of the image as the vertical axis.
In some optional implementations of some embodiments, the executing entity may perform coordinate conversion on each corner coordinate in each corner coordinate group in the corner coordinate set based on the camera parameter information and the following formula, to generate converted corner coordinates and obtain a set of converted corner coordinates:
[X_c, Y_c, Z_c, 1]^T = [[R, t], [0^T, 1]] · [X_w, Y_w, Z_w, 1]^T
Z_c · [u, v, 1]^T = K · [X_c, Y_c, Z_c]^T

where u and v denote the abscissa and ordinate of the converted corner coordinates; K denotes the first camera parameter; R denotes the second camera parameter; t denotes the third camera parameter; 0^T denotes the transpose of the zero vector; X_w, Y_w, and Z_w denote the abscissa, ordinate, and vertical coordinate of the corner coordinates; and X_c, Y_c, and Z_c denote the corresponding coordinates of the corner point in the camera coordinate system.
As an example, the corner coordinates may be [-1, 0, 1], and the transpose 0^T of the zero vector may be [0, 0, 0]. With the second camera parameter (a rotation matrix), the third camera parameter (a translation vector), and the first camera parameter, which may be 1, the converted corner coordinates are obtained by the above formula.
The above formula is an inventive point of the embodiments of the present disclosure, and solves the second technical problem mentioned in the background art, namely that target detection cannot be performed because the corner coordinates and the target are not in the same coordinate system.
First, the corner coordinates in the world coordinate system are transformed into the camera coordinate system by the second and third camera parameters. Then, the corner coordinates in the camera coordinate system are converted into the image coordinate system by the first camera parameter. Because the coordinate dimensions differ between coordinate systems, the matrices are padded with the parameter "1" (homogeneous coordinates), so that coordinates of different dimensions can be converted between coordinate systems by matrix operations.
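The two-step conversion described above can be sketched as follows, assuming NumPy and standard pinhole-camera conventions; the function name and argument layout are illustrative and not part of the disclosure:

```python
import numpy as np

def convert_corner(corner_w, K, R, t):
    """Convert a corner from world coordinates to image coordinates (u, v).

    corner_w: [Xw, Yw, Zw], world coordinates of the corner point
    K: (3, 3) first camera parameter (intrinsic matrix)
    R: (3, 3) second camera parameter (rotation matrix)
    t: (3,)  third camera parameter (translation vector)
    """
    # Pad to homogeneous coordinates with the parameter "1" so that
    # coordinates of different dimensions can be related by matrix operations.
    corner_h = np.append(np.asarray(corner_w, dtype=float), 1.0)
    # [[R, t], [0^T, 1]] maps world coordinates into the camera coordinate system.
    extrinsic = np.vstack([np.hstack([R, np.reshape(t, (3, 1))]),
                           [0.0, 0.0, 0.0, 1.0]])
    Xc, Yc, Zc, _ = extrinsic @ corner_h
    # The intrinsic matrix maps camera coordinates onto the image plane;
    # dividing by Zc yields the pixel coordinates.
    u, v, _ = (K @ np.array([Xc, Yc, Zc])) / Zc
    return u, v
```

With K and R taken as identity matrices and t as the zero vector, the world corner [2, 4, 2] converts to (u, v) = (1, 2).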
Step 203, determining the detection frame information corresponding to each image in the image set based on the set of converted corner coordinates, to obtain a set of detection frame information.
In some embodiments, the executing entity may determine, based on the set of converted corner coordinates, the detection frame information corresponding to each image in the image set, so as to obtain a set of detection frame information. The detection frame information includes a pair of coordinates: a first coordinate and a second coordinate of the detection frame. The detection frame information is the projection, onto each image, of each converted corner coordinate group in the set of converted corner coordinates.
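One way to realize this projection-to-frame step, under the assumption (not stated explicitly in the disclosure) that the first and second coordinates are the top-left and bottom-right extremes of the projected corner group, is:

```python
import numpy as np

def frame_from_corners(converted_corners):
    """Build detection frame information (first coordinate, second coordinate)
    from the converted corner coordinates of one corner-coordinate group.

    converted_corners: iterable of (u, v) image-plane points.
    """
    pts = np.asarray(converted_corners, dtype=float)
    first = pts.min(axis=0).tolist()   # top-left extreme of the projection
    second = pts.max(axis=0).tolist()  # bottom-right extreme of the projection
    return first, second
```

For instance, projected corners [[3, 10], [2, 8]] yield the frame pair ([2, 8], [3, 10]).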
Step 204, performing correction processing on each detection frame information in each detection frame information group in the detection frame information group set to generate corrected detection frame information, so as to obtain a corrected detection frame information group set.
In some embodiments, the executing entity may correct, in various ways, each piece of detection frame information in each detection frame information group in the set to generate corrected detection frame information, resulting in a set of corrected detection frame information groups. The correction processing refers to scaling or shifting the coordinates included in the detection frame information.
In some optional implementations of some embodiments, the executing entity may correct each piece of detection frame information in each detection frame information group in the set by using the following formula to generate corrected detection frame information, so as to obtain a set of corrected detection frame information groups:
nx1 = max(x1 − lw·w, 0)
ny1 = max(y1 − th·h, 0)
nx2 = min(x2 + rw·w, iw)
ny2 = min(y2 + dh·h, ih)

where x1 and y1 denote the abscissa and ordinate of the first coordinate of the detection frame, and x2 and y2 denote the abscissa and ordinate of the second coordinate. nx1 and ny1 denote the abscissa and ordinate of the first coordinate of the detection frame included in the corrected detection frame information, and nx2 and ny2 denote the abscissa and ordinate of the second coordinate. iw and ih denote the horizontal and vertical pixel dimensions of the image captured by the vehicle-mounted camera. w denotes a first threshold, with value x2 − x1. h denotes a second threshold, with value y2 − y1. lw, rw, th, and dh denote the left-end, right-end, upper-end, and lower-end offset coefficients, respectively, each with value range [0, +∞). max() and min() take the row-wise maximum and minimum when the formula is written in matrix form.
As an example, the first coordinate of the detection frame may be [2, 8] and the second coordinate [3, 10], so the first threshold is 1 and the second threshold is 2. The left-end offset coefficient may be 2, the right-end offset coefficient 3, the upper-end offset coefficient 1, and the lower-end offset coefficient 4. The horizontal and vertical pixel dimensions of the image captured by the vehicle-mounted camera may be 1920 and 1080. The corrected detection frame information generated by the above formula is then [[0, 6], [6, 18]].
The above formula is an inventive point of the embodiments of the present disclosure, and solves the technical problem mentioned in the background art, namely that the generated detection frames may not all fully enclose their targets.
Because cameras come in various specifications, with different viewing angles and focal lengths, the captured images are distorted, and the detection frames corresponding to the detection frame information may not fully enclose the targets. The first and second thresholds indicate the width and height of the detection frame corresponding to the detection frame information. By introducing the upper-end, lower-end, left-end, and right-end offset coefficients, the detection frame is scaled and shifted, which compensates for the inaccurate framing caused by image distortion and allows the detection frame to fully enclose the target.
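A minimal sketch of the correction formula, reproducing the worked example above (the function name is illustrative):

```python
def rectify_frame(first, second, iw, ih, lw, rw, th, dh):
    """Scale and shift a detection frame by per-side offset coefficients,
    clamping the result to the image boundaries.

    first, second: [x1, y1] and [x2, y2] detection frame coordinates
    iw, ih: horizontal and vertical pixel dimensions of the image
    lw, rw, th, dh: left, right, upper, and lower offset coefficients (>= 0)
    """
    x1, y1 = first
    x2, y2 = second
    w = x2 - x1  # first threshold: frame width
    h = y2 - y1  # second threshold: frame height
    nx1 = max(x1 - lw * w, 0)   # shift left edge, clamped at 0
    ny1 = max(y1 - th * h, 0)   # shift upper edge, clamped at 0
    nx2 = min(x2 + rw * w, iw)  # shift right edge, clamped at image width
    ny2 = min(y2 + dh * h, ih)  # shift lower edge, clamped at image height
    return [nx1, ny1], [nx2, ny2]
```

For the example values, rectify_frame([2, 8], [3, 10], 1920, 1080, 2, 3, 1, 4) returns ([0, 6], [6, 18]).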
Step 205, determining the quantity value of the traffic lights contained in each image in the image set based on the set of the rectification detection frame information groups.
In some embodiments, the executing body may determine the number value of the traffic lights included in each image in the image set in various ways based on the set of rectification detection frame information. The number value of the traffic signal lamps contained in the image is equal to the number of the correction detection frames in the corresponding correction detection frame information group.
Step 206, selecting, from the image set, an image containing a number of traffic signal lights satisfying a predetermined condition as a candidate image.
In some embodiments, the executing entity may select, from the image set, an image containing a number of traffic signal lights satisfying a predetermined condition as the candidate image. The predetermined condition may be that the image contains the largest number of traffic lights among the images in the set.
Step 207, identifying the color of each traffic signal lamp in the candidate image to obtain a color information set.
In some embodiments, the executing entity may identify the color of each traffic light in the candidate image, resulting in a color information set. The color of each traffic signal lamp may be identified by comparing, within the region determined by the corrected detection frame information, the areas of the red, green, and yellow regions, and taking the color with the largest area as the color of the traffic signal lamp.
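The largest-area rule can be sketched as follows; the per-pixel thresholds for red, green, and yellow are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def light_color(region):
    """Identify a traffic light's color as the one with the largest pixel area
    inside the region determined by the corrected detection frame information.

    region: (H, W, 3) uint8 RGB crop of the image.
    """
    r, g, b = (region[..., i].astype(int) for i in range(3))
    # Crude illustrative masks for the three signal colors.
    areas = {
        "red": ((r > 150) & (g < 100) & (b < 100)).sum(),
        "green": ((g > 150) & (r < 100) & (b < 100)).sum(),
        "yellow": ((r > 150) & (g > 150) & (b < 100)).sum(),
    }
    return max(areas, key=areas.get)
```

A crop dominated by bright red pixels, for example, is classified as "red".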
In some optional implementations of some embodiments, the executing entity may recognize the color of the traffic signal lamp through a pre-trained color recognition model. Specifically, the pre-trained color recognition model may include a feature extraction layer, a feature summarization layer, and a classification layer. The feature extraction layer is used to identify the traffic signal lamps in the images and extract features. The feature summarization layer is used to summarize the extracted features. The classification layer performs classification according to the summarized features.
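The three-layer structure can be sketched numerically as below; the layer widths, random weights, and operations are purely illustrative, since the disclosure does not specify the model's internals:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative shapes: flattened 3x3 RGB patches -> 16 features -> 3 classes.
W_extract = rng.normal(size=(27, 16))
W_classify = rng.normal(size=(16, 3))

def recognize_color(patches):
    """Run the sketched color recognition model on patches from one light.

    patches: (N, 27) array of flattened 3x3 RGB patches.
    Returns class probabilities over (red, green, yellow).
    """
    features = np.maximum(patches @ W_extract, 0.0)  # feature extraction layer
    summary = features.mean(axis=0)                  # feature summarization layer
    logits = summary @ W_classify                    # classification layer
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                       # softmax over the classes
```

In practice the weights would come from pre-training rather than a random generator.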
In some optional implementations of some embodiments, the execution subject may send the color information set to a vehicle with a display function for display.
The above embodiments of the present disclosure have the following advantages. First, an image set captured by a vehicle-mounted camera, camera parameter information of the vehicle-mounted camera, and a corner coordinate set are obtained, where the corner coordinates are coordinates in a world coordinate system. Second, based on the camera parameter information, coordinate conversion is performed on each corner coordinate in each corner coordinate group in the corner coordinate set to generate converted corner coordinates, yielding a set of converted corner coordinates, where the converted corner coordinates are coordinates in an image coordinate system; converting the coordinates places the data in the same coordinate system, which facilitates processing. Based on the set of converted corner coordinates, the detection frame information corresponding to each image in the image set is determined, yielding a set of detection frame information; obtaining the detection frame information achieves a preliminary determination of the targets. In addition, each piece of detection frame information in each detection frame information group in the set is corrected to generate corrected detection frame information, yielding a set of corrected detection frame information groups; correcting the detection frame information makes the target framing more accurate. Further, based on the set of corrected detection frame information groups, the number of traffic lights contained in each image in the image set is determined. Then, an image containing a number of traffic signal lights satisfying a predetermined condition is selected from the image set as a candidate image. Finally, the color of each traffic signal lamp in the candidate image is identified, resulting in a color information set.
Correcting the detection frames improves the accuracy of target detection. This solves the problem that target measurement by manual visual inspection relies too heavily on human experience and yields inaccurate measurement results. At the same time, the programmatic measurement method improves measurement efficiency to some extent.
With further reference to FIG. 3, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a target measurement apparatus. These apparatus embodiments correspond to the method embodiments described above with reference to FIG. 2, and the apparatus may be applied to various electronic devices. As shown in fig. 3, the target measurement apparatus 300 of some embodiments includes: an acquisition unit 301 configured to acquire a set of images captured by a vehicle-mounted camera, camera parameter information of the vehicle-mounted camera, and a set of corner coordinates, where the corner coordinates are coordinates in a world coordinate system; a coordinate conversion unit 302 configured to perform, based on the camera parameter information, coordinate conversion on each corner coordinate in each corner coordinate group in the corner coordinate set to generate converted corner coordinates, so as to obtain a set of converted corner coordinates, where the converted corner coordinates are coordinates in an image coordinate system; a first determining unit 303 configured to determine, based on the set of converted corner coordinates, the detection frame information corresponding to each image in the image set, to obtain a set of detection frame information; and a correcting unit 304 configured to correct each piece of detection frame information in each detection frame information group in the set to generate corrected detection frame information, resulting in a set of corrected detection frame information groups.
A second determining unit 305 configured to determine, based on the set of corrected detection frame information groups, the number of traffic lights contained in each image in the image set; a selecting unit 306 configured to select, from the image set, an image containing a number of traffic signal lights satisfying a predetermined condition as a candidate image; and an identifying unit 307 configured to identify the color of each traffic signal lamp in the candidate image to obtain a color information set.
It will be understood that the units described in the apparatus 300 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 300 and the units included therein, and are not described herein again.
Referring now to FIG. 4, a block diagram of an electronic device 400 (e.g., the computing device 101 of FIG. 1) suitable for implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 4 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the electronic device 400. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; storage devices 408 including, for example, magnetic tape, hard disk, etc.; and a communication device 409. The communication device 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided. Each block shown in fig. 4 may represent one device or multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 409, or from the storage device 408, or from the ROM 402. The computer program, when executed by the processing apparatus 401, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the apparatus; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring an image set shot by a vehicle-mounted camera, camera parameter information of the vehicle-mounted camera and a corner point coordinate set, wherein the corner point coordinates are coordinates in a world coordinate system. And based on the camera parameter information, performing coordinate conversion on each corner coordinate in each corner coordinate set in the corner coordinate set to generate a conversion corner coordinate, so as to obtain a conversion corner coordinate set, wherein the conversion corner coordinate is a coordinate in an image coordinate system. And determining the detection frame information corresponding to each image in the image set based on the conversion corner point coordinate set to obtain a detection frame information set. And correcting each piece of detection frame information in each detection frame information group in the detection frame information group set to generate corrected detection frame information, so as to obtain a corrected detection frame information group set. And determining the quantity value of the traffic signal lamp contained in each image in the image set based on the correction detection frame information group set. And selecting the images with the quantity value of the traffic signal lamps meeting the preset condition from the image set as candidate images. And identifying the color of each traffic signal lamp in the candidate image to obtain a color information set.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a coordinate conversion unit, a first determination unit, a correction unit, a second determination unit, a selection unit, and an identification unit. The names of the units do not form a limitation on the units themselves in some cases, and for example, the acquiring unit may be further described as "a unit that acquires a set of images captured by a vehicle-mounted camera, camera parameter information of the vehicle-mounted camera, and a set of corner point coordinates".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description presents only preferred embodiments of the present disclosure and illustrates the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.