CN114286002B - Image processing circuit, method, device, electronic equipment and chip - Google Patents
- Publication number: CN114286002B (application number CN202111627097.5A)
- Authority: CN (China)
- Prior art keywords: image, video, images, image processing, area
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Abstract
The application discloses an image processing circuit, an image processing method, an image processing device, electronic equipment and a chip, and belongs to the field of electronic technology. The scheme comprises: a main control chip connected with an image processing chip; the main control chip is used for acquiring a first video and a first image; the image processing chip is used for performing fusion processing on an image of a first area in the first image and at least two frames of video images of the first video to obtain a second video.
Description
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image processing circuit, an image processing method, an image processing device, electronic equipment and a chip.
Background
Double exposure means that the content recorded by two or more exposures is superimposed in one picture, thereby giving the image a surreal, layered effect.
In the related art, multiple photographed pictures can be superimposed by image processing software to achieve a double exposure effect.
However, current image processing software can only output double exposure pictures, which greatly limits the applicable scenarios and cannot meet users' creative needs.
Disclosure of Invention
The embodiment of the application aims to provide an image processing circuit, an image processing method, an image processing device, electronic equipment and an image processing chip, which can enrich the application scene of a double exposure technology, improve the diversity of double exposure effects and the innovation and interestingness of an image display mode, thereby meeting the creation requirements of users.
In a first aspect, an embodiment of the present application provides an image processing circuit, including: the main control chip is connected with the image processing chip; the main control chip is used for acquiring a first video and a first image; the image processing chip is used for carrying out fusion processing on the image of the first area in the first image and at least two frames of video images of the first video to obtain a second video.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including an image processing circuit as described in the first aspect.
In a third aspect, an embodiment of the present application provides an image processing method, including: the main control chip acquires a first video and a first image; and the image processing chip performs fusion processing on the image of the first area in the first image and at least two frames of video images of the first video to obtain a second video.
In a fourth aspect, an embodiment of the present application provides an electronic device comprising an image processing circuit as described in the first aspect, a processor and a memory storing a program or instructions executable on the processor, which program or instructions when executed by the processor implement the steps of the method as described in the third aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps performed by the image processing chip in the method described in the third aspect, or to implement the steps performed by the main control chip in the method described in the third aspect.
In the embodiment of the application, the main control chip can acquire the first video and the first image; the image processing chip can perform fusion processing on the image of the first area in the first image and at least two frames of video images of the first video to obtain a second video. With this scheme, since the image of the first area can be fused into the video frames of the first video, a second video containing a double exposure effect can be obtained; that is, the double exposure technique is applied to a video scene. In this way, the application scenarios of the double exposure technique can be enriched and the diversity of double exposure effects improved, and the display modes of images can be enriched and the interest of image display improved, thereby meeting users' creative needs.
Drawings
FIG. 1 is a schematic structural diagram of chips in an electronic device according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an image processing method according to an embodiment of the present application;
FIG. 3(a) is a first schematic diagram of an image processing effect of the image processing method according to an embodiment of the present application;
FIG. 3(b) is a second schematic diagram of an image processing effect of the image processing method according to an embodiment of the present application;
FIG. 3(c) is a third schematic diagram of an image processing effect of the image processing method according to an embodiment of the present application;
FIG. 4(a) is a fourth schematic diagram of an image processing effect of the image processing method according to an embodiment of the present application;
FIG. 4(b) is a fifth schematic diagram of an image processing effect of the image processing method according to an embodiment of the present application;
FIG. 5 is a first schematic diagram of the hardware of an electronic device according to an embodiment of the present application;
FIG. 6 is a second schematic diagram of the hardware of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second" and the like in the description and in the claims are used for distinguishing between similar objects and are not necessarily used for describing a particular order or sequence. It is to be understood that the terms so used may be interchanged where appropriate, so that embodiments of the present application can be implemented in orders other than those illustrated or described herein. The objects distinguished by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more objects. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing circuit, the method, the device and the electronic equipment provided by the embodiment of the application are described in detail through specific embodiments and application scenes thereof with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present application provides an image processing circuit including: a main control chip 110 and an image processing chip 120. The main control chip 110 is connected with the image processing chip 120. The main control chip 110 may be used to acquire a first video and a first image. The image processing chip 120 may be configured to perform fusion processing on an image of the first region in the first image and at least two frames of video images of the first video to obtain a second video.
Based on the scheme, the image of the first area can be merged into the video frame of the first video, so that the second video containing the double exposure effect can be obtained, namely, the double exposure technology is applied to the video scene, the application scene of the double exposure technology can be enriched, the diversity of the double exposure effect is improved, the display mode of the image can be enriched, the interestingness of the image display is improved, and the creation requirement of a user is met.
Optionally, with continued reference to fig. 1, the main control chip 110 may include: a first output interface 111, a second output interface 112, and an image separation unit 113; the image processing chip 120 may include a first input interface 121, a second input interface 122, and an image synthesizing unit 123. The image separation unit 113 is connected to the first output interface 111 and the second output interface 112, the first output interface 111 is connected to the first input interface 121, the second output interface 112 is connected to the second input interface 122, and the first input interface 121 and the second input interface 122 are connected to the image synthesis unit 123.
The image separation unit 113 may be configured to determine a first area from the first image, and extract at least two sub-images from at least two frames of video images of the first video, where the sub-images are images of a second area in each frame of video image, and the second area is an area corresponding to the first area; the first output interface 111 may be used to output a first image, and the first input interface 121 may be used to receive the first image; the second output interface 112 may be configured to output at least two sub-images, and the second input interface 122 may be configured to receive at least two sub-images; the image synthesis unit 123 may be configured to perform image fusion processing on the first image and each of the at least two sub-images, so as to obtain a second video.
Optionally, the main control chip 110 may further include a preprocessing unit connected to the image separation unit. After the main control chip 110 acquires the first video and the first image and before the image separation processing is performed, the first video and the first image may be preprocessed by the preprocessing unit; the preprocessing may include noise reduction and other basic effect processing.
The above-described image separation unit may, for example, transmit the first image and the at least two sub-images to the image processing chip via the MIPI DSI protocol. The first output interface may be MIPI DSI0, the second output interface may be MIPI DSI1, the first input interface may be MIPI DSI RX0, the second input interface may be MIPI DSI RX1, and the image separation unit may be the SurfaceFlinger module of the framework.
Based on the scheme, the second video for displaying the sub-images in the first area can be obtained, namely the second video comprises the double exposure effect of the first image and the first video, and the double exposure technology can be applied to the video field, so that the application scene of the double exposure technology is enriched, the display mode of the images is enriched, and the interestingness of image display is improved.
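As a purely illustrative software sketch (the patent describes a hardware circuit, and none of the class, function or parameter names below appear in it; the first area is assumed to be an axis-aligned rectangle), the division of labor between the image separation unit and the image synthesis unit can be modeled roughly as follows, with the MIPI links between the two chips reduced to an ordinary function call:

```python
from typing import List, Tuple

import numpy as np

# Assumed representation of the first area: (top, left, height, width).
Region = Tuple[int, int, int, int]


class ImageProcessingChipModel:
    """Stands in for the image synthesis unit behind the first and second input interfaces."""

    def synthesize(self, first_image: np.ndarray,
                   sub_images: List[np.ndarray], area: Region) -> List[np.ndarray]:
        top, left, h, w = area
        second_video = []
        for sub in sub_images:
            frame = first_image.copy()
            # Fuse the sub-image into the first area of a copy of the first image.
            frame[top:top + h, left:left + w] = sub
            second_video.append(frame)
        return second_video


class MainControlChipModel:
    """Stands in for the image separation unit behind the first and second output interfaces."""

    def __init__(self, downstream: ImageProcessingChipModel):
        self.downstream = downstream

    def process(self, first_image: np.ndarray,
                first_video: List[np.ndarray], area: Region) -> List[np.ndarray]:
        top, left, h, w = area
        # Extract, from every frame of the first video, the image of the second area,
        # i.e. the area corresponding to the first area of the first image.
        sub_images = [f[top:top + h, left:left + w].copy() for f in first_video]
        # "Transmit" the first image and the sub-images to the image processing chip;
        # the MIPI DSI links are reduced to a plain function call in this sketch.
        return self.downstream.synthesize(first_image, sub_images, area)
```

For example, calling `MainControlChipModel(ImageProcessingChipModel()).process(first_image, first_video, (0, 0, 200, 200))` on NumPy image arrays would return the frames of a second video in which the corresponding region of each video frame is shown inside the first image.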
Optionally, with continued reference to fig. 1, the main control chip 110 may further include a third input interface 114, and the image processing chip 120 may further include an image frame inserting unit 124 and a third output interface 125. The second input interface 122 is connected to the image frame inserting unit 124, the image frame inserting unit 124 is respectively connected to the image synthesizing unit 123 and the third output interface 125, and the third output interface 125 is connected to the third input interface 114.
The second output interface 112 may be used to transmit the third video to the image frame inserting unit 124 through the second input interface 122; the image frame inserting unit may be used to perform frame inserting processing on the third video according to a preset frame rate to generate the first video; the third output interface 125 may be used to transmit the first video to the main control chip 110 through the third input interface 114. The third video is a video recorded through the first camera, or the third video is a pre-stored video; the frame rate of the first video is higher than the frame rate of the third video.
Illustratively, the third input interface may be MIPI CSI TX1, and the third output interface may be MIPI CSI RX1.
Based on the above scheme, since the third video can be subjected to the frame inserting process to generate the first video with higher frame rate, the display effect of the first video can be improved, thereby providing preparation for generating the second video with higher display quality.
Optionally, before performing the image fusion processing on the first image and each of the at least two sub-images to obtain the second video, the image synthesis unit 123 may be further configured to adjust the image transparency of the image of the first area from a first transparency to a second transparency, where the first transparency is smaller than the second transparency.
According to the scheme, the image transparency of the image of the first area can be adjusted from the first transparency to the second transparency, and the sub-image displayed in the first area can be clearer because the first transparency is smaller than the second transparency, so that the video display effect of the second video can be improved.
Alternatively, the above-described image synthesizing unit 123 may be specifically configured to replace pixels in the first region of the first image with pixels in each sub-image, respectively.
Based on the above scheme, since the pixels in the first region of the first image can be replaced with the pixels in each sub-image, the sub-images can be displayed in the first region of the first image, and thus the second video having the double exposure effect can be generated.
Alternatively, the image separation unit 113 may be configured to extract an image of the first region from the first image, to obtain an object image; the first output interface 111 may be used to output the object image, and the first input interface 121 may be used to receive the object image; the second output interface 112 may be configured to output the first video, and the second input interface 122 may be configured to receive the first video; the image synthesizing unit 123 may be configured to replace a pixel of a third region of each of the at least two frames of video images with a pixel of the object image; the third area is an image area determined by user input or an image area determined by feature recognition of each frame of video image.
Based on the above-described scheme, since pixel information of an image of a first region can be added to at least two frames of video images of a first video, a second video including an image pixel mark of the first region can be obtained. The second video contains the double exposure effect of the first image and the first video, so that the application scene of the double exposure technology is enriched, the display mode of the image is enriched, and the interestingness of image display is improved.
Optionally, with continued reference to fig. 1, the main control chip 110 may further include a fourth input interface 115, and the image processing chip 120 may further include a fourth output interface 126. The main control chip 110 may acquire the third video transmitted by the image sensor through the fourth input interface 115, and transmit the third video to the second output interface 112 through the image separation unit 113. The image processing chip 120 may transmit the second video to the display unit for display through the fourth output interface 126.
Illustratively, the fourth input interface may be MIPI CSI RX0, and the fourth output interface may be MIPI CSI TX0.
Optionally, the image synthesis unit 123 may be further configured to adjust the object image from a first size to a second size, where the first size is larger than the second size, and to replace the pixels of the third area of each of the at least two frames of video images with the pixels of the resized object image. Based on the above scheme, since the size of the object image can be reduced before it is used to replace the pixels of the third area, the influence of the object image on the images of other areas outside the third area in the at least two frames of video images can be reduced, thereby ensuring the video display effect of the second video.
As shown in fig. 2, an embodiment of the present application provides an image processing method, which is applied to an image processing apparatus including the image processing circuit shown in fig. 1, where the image processing apparatus may further include an image sensor and a display unit, a main control chip may be connected to the image sensor, and the image processing chip may be connected to the display unit. The method may include steps 201-202:
step 201, a main control chip acquires a first video and a first image.
If the user wants to obtain a video with a double exposure effect, the user may trigger the electronic device to enter a double exposure processing mode through an input. When the electronic device is in the double exposure processing mode, the electronic device may receive a first input from the user, where the first input is used to make the electronic device acquire the first video and the first image.
Alternatively, the first input may include a first sub-input for enabling the electronic device to acquire the first video, and a second sub-input for enabling the electronic device to acquire the first image. The first sub-input and the second sub-input may be touch input, voice input, gesture input, or the like. For example, the touch input may be a click input or a long press input of the user on the first video and the first image, or the like.
Illustratively, the first sub-input is a long press input and the second sub-input is a click input. With a first camera of the electronic device aimed at a first scene, a user may make a long press input to a video recording control, and the electronic device may receive the long press input and record a first video in response to it. With a second camera of the electronic device aimed at a second scene, the user may make a click input to a capture control, and the electronic device may receive the click input and capture a first image in response to it.
Optionally, before the main control chip acquires the first video, the main control chip may acquire a third video transmitted by the image sensor and transmit the third video to the image processing chip; then, an image frame inserting unit of the image processing chip may perform frame inserting processing on the third video according to a preset frame rate to generate the first video. The third video may be a video recorded by the first camera, or the third video may be a pre-stored video; the frame rate of the first video is higher than the frame rate of the third video.
Illustratively, the third video is a video recorded by the first camera, and the preset frame rate is 120 fps. With the first camera of the electronic device aimed at a shooting object, a user may control the shooting duration of the third video through an input; after shooting of the third video is completed, the main control chip may transmit the third video to the image processing chip. Thereafter, the image processing chip may generate the first video with a frame rate of 120 fps from the third video with a frame rate of 30 fps through interpolation processing.
Based on the above scheme, since the third video can be subjected to the frame inserting process to generate the first video with higher frame rate, the display effect of the first video can be improved, thereby providing preparation for generating the second video with higher display quality.
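The patent does not prescribe a particular frame inserting algorithm. Purely as a hedged illustration (the function name and the naive linear blend below are assumptions, not the patent's method), a 30 fps third video could be raised to a 120 fps first video roughly as follows:

```python
import numpy as np


def insert_frames(third_video, factor: int = 4):
    """Naive frame inserting: raise the frame rate by `factor` (e.g. 30 fps -> 120 fps)
    by linearly blending each pair of neighbouring frames of the third video.
    Dedicated frame-insertion hardware would normally use motion-compensated
    interpolation instead of this simple blend."""
    first_video = []
    for prev, nxt in zip(third_video[:-1], third_video[1:]):
        prev_f = prev.astype(np.float32)
        nxt_f = nxt.astype(np.float32)
        for k in range(factor):
            t = k / factor
            blended = (1.0 - t) * prev_f + t * nxt_f
            first_video.append(blended.astype(np.uint8))
    first_video.append(third_video[-1])  # keep the final frame of the third video
    return first_video
```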
Alternatively, the first image may be an image captured by a second camera; or the first image may be a pre-stored image.
In an exemplary embodiment, the third video is a video recorded by the first camera, and the first image is an image captured by the second camera. The electronic device including the image processing apparatus may capture the first image by the second camera while recording the third video by the first camera; the electronic device may then perform frame inserting processing on the third video to generate the first video, and acquire the first video and the first image after receiving the first input of the user.
Step 202, an image processing chip performs fusion processing on an image of a first area in the first image and at least two frames of video images of the first video to obtain a second video.
Optionally, the first area may be an image area containing the photographed subject in the first image, or may be any area defined by the user in the first image; it may be determined according to the actual use situation, which is not limited in the embodiments of the present application.
Optionally, the image processing chip performing fusion processing on the image of the first region in the first image and at least two frames of video images of the first video may include the following two implementations:
Implementation 1
The image separation unit of the main control chip can determine a first area from the first image; extracting at least two sub-images from at least two frames of video images of a first video, wherein the sub-images are images of a second area in each frame of video image, and the second area is an area corresponding to the first area; then, the image separation unit may transmit the first image and at least two sub-images to the image processing chip; the image synthesis unit of the image processing chip can respectively perform image fusion processing on the first image and each sub-image in at least two sub-images to obtain a second video.
Illustratively, at least two frames of video images are taken as video frames 31 and 32. As shown in fig. 3 (a), the image separation unit may determine a first region 34 from the first image 33, that is, the region in which the photographed subject is located in the first image. After determining the first region 34, the image separation unit may, according to the first region 34, extract the sub-image 33 from the video frame 31 and extract the sub-image 34 from the video frame 32, as shown in fig. 3 (b); the image separation unit may then transmit the first image 33, the sub-image 33, and the sub-image 34 to the image processing chip. As shown in fig. 3 (c), the image synthesis unit may perform image fusion processing on the first image 33 and the sub-image 33 to obtain a new video frame 35, and perform image fusion processing on the first image 33 and the sub-image 34 to obtain a new video frame 36, so that a second video comprising the new video frame 35 and the new video frame 36 can be generated.
Based on the scheme, the second video for displaying the sub-images in the first area can be obtained, namely the second video comprises the double exposure effect of the first image and the first video, and the double exposure technology can be applied to the video field, so that the application scene of the double exposure technology is enriched, the display mode of the images is enriched, and the interestingness of image display is improved.
Optionally, after the image separation unit determines the first area, the image separation unit may retain the images of the areas other than the first area in the first image, may delete the images of the other areas, or may edit the pixels of the images of the other areas. This may be determined according to the actual use situation, and the embodiment of the present application is not limited thereto.
Optionally, the image synthesis unit performing image fusion processing on the first image and each of the at least two sub-images to obtain the second video may include two embodiments. In one embodiment, the image synthesis unit performs image fusion processing on the first image and each sub-image in turn. For example, taking the at least two sub-images including sub-image 1 and sub-image 2 as an example, the image synthesis unit may first perform image fusion processing on the first image and sub-image 1, and then perform image fusion processing on the first image and sub-image 2. In another embodiment, the image synthesis unit copies the first image according to the number of sub-images included in the at least two sub-images, and performs image fusion processing on the copied first images and the sub-images in one-to-one correspondence. For example, taking the at least two sub-images including sub-image 1 and sub-image 2 as an example, the image synthesis unit may copy the first image to obtain copied image 1; the image synthesis unit may then perform image fusion processing on the first image and sub-image 1, and perform image fusion processing on copied image 1 and sub-image 2.
Optionally, before performing image fusion processing on the first image and each of the at least two sub-images to obtain the second video, the image synthesis unit may adjust the image transparency of the image of the first area from a first transparency to a second transparency, where the first transparency is smaller than the second transparency.
According to the scheme, the image transparency of the image of the first area can be adjusted from the first transparency to the second transparency, and the sub-image displayed in the first area can be clearer because the first transparency is smaller than the second transparency, so that the video display effect of the second video can be improved.
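The patent does not give a blending formula. One plausible reading, written here as a hedged sketch (the parameter names and the linear alpha blend are assumptions), is that raising the transparency of the first-area image simply lowers its weight relative to the sub-image in a per-pixel blend:

```python
import numpy as np


def fuse_with_transparency(first_image, sub_image, area, second_transparency=0.7):
    """Blend a sub-image into the first area of a copy of the first image.

    `area` is an assumed (top, left, height, width) rectangle, and
    `second_transparency` is the transparency of the first-area image after the
    adjustment (0 = fully opaque, 1 = fully transparent): a higher value lets the
    sub-image show through more clearly. Both names and the linear blend are
    illustrative assumptions, not definitions taken from the patent.
    """
    top, left, h, w = area
    frame = first_image.astype(np.float32)  # float copy of the first image
    region = frame[top:top + h, left:left + w]
    blended = second_transparency * sub_image.astype(np.float32) \
        + (1.0 - second_transparency) * region
    frame[top:top + h, left:left + w] = blended
    return np.clip(frame, 0, 255).astype(np.uint8)
```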
Optionally, the image processing chip performing the image fusion processing on the first image and each of the at least two sub-images respectively may specifically include: the image synthesis unit replaces pixels in the first area of the first image with pixels in each sub-image, respectively.
Based on the above scheme, since the pixels in the first region of the first image can be replaced with the pixels in each sub-image, the sub-images can be displayed in the first region of the first image, and thus the second video having the double exposure effect can be generated.
Implementation 2
After the first video and the first image are acquired, the image separation unit may extract the image of the first region from the first image to obtain an object image, and transmit the object image and the first video to the image processing chip. The image synthesis unit of the image processing chip may then replace the pixels of a third region of each of the at least two frames of video images of the first video with the pixels of the object image. The third region is an image region determined by user input, or an image region determined by performing feature recognition on each frame of video image.
Illustratively, at least two frames of video images are taken as video frames 41, 42. As shown in fig. 4 (a), the image separation unit may extract an image of the first region from the first image 43, obtain an object image 44, and transmit the object image 44 and the first video to the image processing chip; as shown in fig. 4 (b), the image synthesizing unit of the image processing chip may replace the pixels of the third region 45 of the video frame 41 and the video frame 42 of the first video with the pixels in the object image 44, respectively, to thereby obtain the second video including the new video frame 46 and the new video frame 47.
Based on the above-described scheme, since pixel information of an image of a first region can be added to at least two frames of video images of a first video, a second video including an image pixel mark of the first region can be obtained. The second video contains the double exposure effect of the first image and the first video, so that the application scene of the double exposure technology is enriched, the display mode of the image is enriched, and the interestingness of image display is improved.
Optionally, the image synthesis unit may also adjust the object image from a first size to a second size, the first size being larger than the second size, and replace the pixels of the third area of each of the at least two frames of video images with the pixels of the resized object image.
Based on the above scheme, since the size of the object image can be reduced and then used for replacing the pixels of the third area, the influence of the object image on the images of other areas outside the third area in at least two frames of video images can be reduced, thereby ensuring the video display effect of the second video.
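Putting implementation 2 together, the following hedged sketch (assuming both regions are axis-aligned rectangles and using OpenCV's resize for the optional size reduction; the function and parameter names are illustrative and do not come from the patent) extracts the object image, optionally shrinks it, and pastes it into the third region of every frame:

```python
import cv2
import numpy as np


def fuse_object_into_video(first_image, video_frames, first_area, third_area,
                           second_size=None):
    """Extract the object image from the first area of the first image and replace
    the third-area pixels of every frame of the first video with it.

    `first_area` and `third_area` are assumed (top, left, height, width) rectangles;
    `second_size` is an optional (height, width) that is smaller than the object
    image, used for the optional size reduction described above.
    """
    t0, l0, h0, w0 = first_area
    object_image = first_image[t0:t0 + h0, l0:l0 + w0].copy()

    if second_size is not None:
        # Adjust the object image from the first size down to the second size.
        object_image = cv2.resize(object_image, (second_size[1], second_size[0]),
                                  interpolation=cv2.INTER_AREA)

    t1, l1 = third_area[0], third_area[1]
    oh, ow = object_image.shape[:2]
    second_video = []
    for frame in video_frames:
        new_frame = frame.copy()
        # Replace the pixels of the third region with the (possibly resized) object image.
        new_frame[t1:t1 + oh, l1:l1 + ow] = object_image
        second_video.append(new_frame)
    return second_video
```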
In the embodiment of the application, the image of the first area can be merged into the video frame of the first video, so that the second video containing the double exposure effect can be obtained, namely, the double exposure technology is applied to the video scene, thus, the application scene of the double exposure technology can be enriched, the diversity of the double exposure effect is improved, the display mode of the image can be enriched, the interest of the image display is improved, and the creation requirement of a user is met.
The image processing device in the embodiment of the application can be an electronic device, or can be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), etc., and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, etc., which are not particularly limited in the embodiments of the present application.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 2, fig. 3 (a), fig. 3 (b) and fig. 3 (c), and in order to avoid repetition, a description is omitted here.
Optionally, as shown in fig. 5, an electronic device 500 is further provided in the embodiment of the present application, which includes the above-mentioned image processing circuit, a processor 501 and a memory 502, where a program or an instruction capable of running on the processor 501 is stored in the memory 502, and when the program or the instruction is executed by the processor 501, the steps of the embodiment of the above-mentioned image processing method are implemented, and the same technical effects can be achieved, so that repetition is avoided and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 600 includes, but is not limited to: radio frequency unit 601, network module 602, audio output unit 603, input unit 604, sensor 605, display unit 606, user input unit 607, interface unit 608, memory 609, processor 610, and image processing chip.
Those skilled in the art will appreciate that the electronic device 600 may further include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 610 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
In the embodiment of the present application, the main control chip may be the processor 610, or the main control chip includes the processor 610, that is, the processor 610 is integrated on the main control chip.
The main control chip is used for acquiring a first video and a first image; the image processing chip is used for carrying out fusion processing on the image of the first area in the first image and at least two frames of video images of the first video to obtain a second video.
In the embodiment of the application, the image of the first area can be merged into the video frame of the first video, so that the second video containing the double exposure effect can be obtained, namely, the double exposure technology is applied to the video scene, thus, the application scene of the double exposure technology can be enriched, the diversity of the double exposure effect is improved, the display mode of the image can be enriched, the interest of the image display is improved, and the creation requirement of a user is met.
Optionally, the main control chip comprises an image separation unit; the image processing chip comprises an image synthesizing unit; the interface unit 608 includes a first output interface, a second output interface, a first input interface, and a second input interface. The image separation unit is respectively connected with the first output interface and the second output interface, the first output interface is connected with the first input interface, the second output interface is connected with the second input interface, and the first input interface and the second input interface are respectively connected with the image synthesis unit;
The image separation unit is configured to determine a first area from the first image, and extract at least two sub-images from at least two frames of video images of the first video, where the sub-images are images of a second area in each frame of video image, and the second area is an area corresponding to the first area; the first output interface is used for outputting the first image, and the first input interface is used for receiving the first image; the second output interface is used for outputting the at least two sub-images, and the second input interface is used for receiving the at least two sub-images; the image synthesis unit is used for carrying out image fusion processing on the first image and each sub-image in the at least two sub-images respectively to obtain a second video.
In the embodiment of the application, the second video for displaying the sub-image in the first area can be obtained, namely the second video comprises the double exposure effect of the first image and the first video, and the double exposure technology can be applied to the video field, so that the application scene of the double exposure technology is enriched, the display mode of the image is enriched, and the interest of the image display is improved.
Optionally, the interface unit 608 further includes a third input interface and a third output interface; the image processing chip further comprises an image frame inserting unit; the second input interface is connected with the image frame inserting unit, the image frame inserting unit is respectively connected with the image synthesizing unit and the third output interface, and the third output interface is connected with the third input interface.
The image frame inserting unit is used for performing frame inserting processing on the third video according to a preset frame rate to generate the first video; the third output interface is used for transmitting the first video to the main control chip through the third input interface. The third video is a video recorded through the first camera, or the third video is a pre-stored video; the frame rate of the first video is higher than the frame rate of the third video.
In the embodiment of the application, the first video with higher frame rate can be generated by carrying out the frame inserting processing on the third video, so that the display effect of the first video can be improved, and preparation is provided for generating the second video with higher display quality.
Optionally, before performing image fusion processing on the first image and each of the at least two sub-images to obtain the second video, the image synthesis unit is further configured to adjust the image transparency of the image of the first area from a first transparency to a second transparency, where the first transparency is smaller than the second transparency.
In the embodiment of the application, the image transparency of the image of the first area can be adjusted from the first transparency to the second transparency, and the sub-image displayed in the first area can be clearer because the first transparency is smaller than the second transparency, so that the video display effect of the second video can be improved.
Optionally, the image synthesis unit is specifically configured to replace pixels in the first area of the first image with pixels in each sub-image, respectively.
In the embodiment of the application, the pixels in the first area of the first image can be replaced by the pixels in each sub-image, so that the sub-images can be displayed in the first area of the first image, and the second video with double exposure effect can be generated.
Optionally, the image separation unit is configured to extract an image of the first area from the first image, to obtain an object image; the first output interface is used for outputting the object image, and the first input interface is used for receiving the object image; the second output interface is used for outputting the first video, and the second input interface is used for receiving the first video; the image synthesis unit is used for replacing pixels of a third area of each frame of video image in the at least two frames of video images with pixels in the object image; the third area is an image area determined by user input or an image area determined by feature recognition of each frame of video image.
In the embodiment of the application, since the pixel information of the image of the first area can be added in at least two frames of video images of the first video, a second video containing the image pixel mark of the first area can be obtained. The second video contains the double exposure effect of the first image and the first video, so that the application scene of the double exposure technology is enriched, the display mode of the image is enriched, and the interestingness of image display is improved.
Optionally, the image synthesis unit is specifically configured to adjust the object image from a first size to a second size, where the first size is larger than the second size; and replacing pixels of a third region of each of the at least two frames of video images with pixels of the resized object image.
In the embodiment of the application, the size of the object image can be reduced and then the object image can be used for replacing the pixels of the third area, so that the influence of the object image on the images of other areas outside the third area in at least two frames of video images can be reduced, and the video display effect of the second video is ensured.
It should be appreciated that in embodiments of the present application, the input unit 604 may include a graphics processor (Graphics Processing Unit, GPU) 6041 and a microphone 6042, with the graphics processor 6041 processing image data of still pictures or video obtained by an image capturing apparatus (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 607 includes at least one of a touch panel 6071 and other input devices 6072. The touch panel 6071 is also called a touch screen. The touch panel 6071 may include two parts of a touch detection device and a touch controller. Other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.), and the like. Further, the memory 609 may include volatile memory or nonvolatile memory, or the memory 609 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 609 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 610 may include one or more processing units; optionally, the processor 610 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above image processing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, a detailed description is omitted here.
Wherein, the processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, etc.
The embodiment of the application further provides a processing chip, which includes a processor and a communication interface, wherein the communication interface is coupled to the processor, the communication interface is used for transmitting image data, and the processor is used for running programs or instructions to implement the steps performed by the image processing chip in the above image processing method. The same technical effects can be achieved; to avoid repetition, details are not repeated here.
The embodiment of the application further provides a control chip, which includes a processor and a communication interface, wherein the communication interface is coupled to the processor, the communication interface is used for transmitting image data, and the processor is used for running programs or instructions to implement the steps performed by the main control chip in the above image processing method. The same technical effects can be achieved; to avoid repetition, details are not repeated here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the related art in the form of a computer software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.
Claims (18)
1. The image processing circuit is characterized by comprising a main control chip and an image processing chip, wherein the main control chip is connected with the image processing chip;
the main control chip is used for acquiring a first video and a first image;
The image processing chip is used for carrying out fusion processing on the image of the first area in the first image and at least two frames of video images of the first video to obtain a second video;
The main control chip comprises a second output interface and a third input interface, and the image processing chip comprises a second input interface, an image frame inserting unit, an image synthesizing unit and a third output interface, wherein the second output interface is used for transmitting a third video to the image frame inserting unit through the second input interface, the image frame inserting unit is respectively connected with the image synthesizing unit and the third output interface, and the third output interface is connected with the third input interface; the image frame inserting unit is used for carrying out frame inserting processing on the third video according to a preset frame rate to generate a first video; the third output interface is used for transmitting the first video to the main control chip through the third input interface;
The third video is a video recorded through the first camera, or the third video is a pre-stored video; the frame rate of the first video is higher than the frame rate of the third video.
2. The image processing circuit of claim 1, wherein the main control chip comprises a first output interface and an image separation unit; the image processing chip comprises a first input interface;
the image separation unit is respectively connected with the first output interface and the second output interface, the first output interface is connected with the first input interface, and the first input interface is connected with the image synthesis unit;
The image separation unit is used for determining a first area from the first image, extracting at least two sub-images from at least two frames of video images of the first video, wherein the sub-images are images of a second area in each frame of video image, and the second area is an area corresponding to the first area;
The first output interface is used for outputting the first image, and the first input interface is used for receiving the first image; the second output interface is used for outputting the at least two sub-images, and the second input interface is used for receiving the at least two sub-images;
The image synthesis unit is used for carrying out image fusion processing on the first image and each sub-image in the at least two sub-images respectively to obtain a second video.
3. The image processing circuit according to claim 2, wherein the image synthesis unit is further configured to adjust an image transparency of the image of the first area from a first transparency to a second transparency before performing the image fusion processing on the first image and each of the at least two sub-images to obtain the second video, where the first transparency is smaller than the second transparency.
4. The image processing circuit according to claim 2, wherein the image synthesis unit is specifically configured to replace pixels in the first region of the first image with pixels in each sub-image, respectively.
5. The image processing circuit of claim 1, wherein the main control chip comprises a first output interface, a second output interface, and an image separation unit; the image processing chip comprises a first input interface, a second input interface and an image synthesis unit;
The image separation unit is respectively connected with the first output interface and the second output interface, the first output interface is connected with the first input interface, the second output interface is connected with the second input interface, and the first input interface and the second input interface are respectively connected with the image synthesis unit;
The image separation unit is used for extracting the image of the first area from the first image to obtain an object image;
the first output interface is used for outputting the object image, and the first input interface is used for receiving the object image; the second output interface is used for outputting the first video, and the second input interface is used for receiving the first video;
The image synthesis unit is used for replacing pixels of a third area of each frame of video image in the at least two frames of video images with pixels in the object image;
the third area is an image area determined by user input or an image area determined by feature recognition of each frame of video image.
6. The image processing circuit according to claim 5, wherein the image synthesizing unit is specifically configured to adjust the object image from a first size to a second size, the first size being larger than the second size; and replacing pixels of a third region of each of the at least two frames of video images with pixels of the resized object image.
7. An image processing apparatus comprising the image processing circuit of any one of claims 1-6.
8. The image processing device of claim 7, further comprising an image sensor, wherein the main control chip is connected to the image sensor.
9. An image processing method applied to the image processing apparatus according to claim 7 or 8, comprising:
the main control chip acquires a first video and a first image;
The image processing chip performs fusion processing on the image of the first area in the first image and at least two frames of video images of the first video to obtain a second video;
The main control chip transmits the third video to the image processing chip;
The image frame inserting unit of the image processing chip carries out frame inserting processing on the third video according to a preset frame rate to generate a first video;
The third video is a video recorded through the first camera, or the third video is a pre-stored video; the frame rate of the first video is higher than the frame rate of the third video.
10. The image processing method according to claim 9, wherein the image processing chip performs fusion processing on an image of a first region in the first image and at least two frames of video images of the first video, and further comprises, before obtaining the second video:
the image separation unit of the main control chip determines a first area from the first image;
The image separation unit of the main control chip extracts at least two sub-images from at least two frames of video images of the first video, wherein the sub-images are images of a second area in each frame of video image, and the second area is an area corresponding to the first area;
the image separation unit of the main control chip transmits the first image and the at least two sub-images to an image processing chip;
The image processing chip performs fusion processing on an image of a first area in the first image and at least two frames of video images of the first video to obtain a second video, and the method comprises the following steps:
And an image synthesis unit of the image processing chip performs image fusion processing on the first image and each sub-image in the at least two sub-images respectively to obtain a second video.
11. The image processing method according to claim 10, wherein before the image processing chip performs image fusion processing on the first image and each of the at least two sub-images respectively to obtain the second video, the method further comprises:
the image synthesis unit adjusts the image transparency of the image of the first area from a first transparency to a second transparency, the first transparency being smaller than the second transparency.
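Reading claim 11 as conventional alpha blending (an assumption; the claim only states that the transparency of the first-area image is raised before fusion), a sketch of such a fuse step might look like this, with transparency expressed on a 0-1 scale:

```python
import numpy as np

def fuse_with_transparency(first_image, sub_image, first_area, second_transparency=0.6):
    """Blend the sub-image into the first area of the first image after raising the
    transparency of the first-area pixels (higher transparency = lower blend weight)."""
    x, y, w, h = first_area
    out = first_image.copy()
    base = out[y:y + h, x:x + w].astype(np.float32)
    blended = (1.0 - second_transparency) * base + second_transparency * sub_image.astype(np.float32)
    out[y:y + h, x:x + w] = blended.astype(first_image.dtype)
    return out
```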
12. The image processing method according to claim 10, wherein the image processing chip performing image fusion processing on the first image and each of the at least two sub-images respectively comprises:
the image synthesis unit replaces pixels in the first area of the first image with pixels in each sub-image, respectively.
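The replacement variant of claim 12 then reduces to overwriting the first-area pixels; this hypothetical helper plugs into the `build_second_video` loop sketched above as the `fuse` callback.

```python
def fuse_by_replacement(first_image, sub_image, first_area):
    """Overwrite the first area of the first image with the sub-image pixels (claim 12)."""
    x, y, w, h = first_area
    out = first_image.copy()
    out[y:y + h, x:x + w] = sub_image
    return out
```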
13. The image processing method according to claim 9, wherein before the image processing chip performs fusion processing on the image of the first area in the first image and the at least two frames of video images of the first video to obtain the second video, the method further comprises:
the image separation unit of the main control chip extracts the image of the first area from the first image to obtain an object image;
the image separation unit of the main control chip transmits the object image and the first video to the image processing chip;
The image processing chip performing fusion processing on the image of the first area in the first image and the at least two frames of video images of the first video to obtain the second video comprises:
the image synthesis unit of the image processing chip replaces pixels of a third area of each of the at least two frames of video images with pixels of the object image;
the third area is an image area determined by user input, or an image area determined by feature recognition performed on each frame of video image.
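Claim 13 allows the third area to be found by feature recognition on each frame; as one illustrative possibility (the patent does not specify any particular recogniser), OpenCV's bundled Haar face detector could supply such a region, with an assumed fallback rectangle when nothing is detected.

```python
import cv2

_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_third_area(frame, fallback=(0, 0, 128, 128)):
    """Return (x, y, w, h) of the first detected face, or a fallback rectangle
    when nothing is recognised."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return tuple(faces[0]) if len(faces) else fallback
```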
14. The image processing method according to claim 13, wherein the image synthesis unit adjusts the object image from a first size to a second size, the first size being larger than the second size, and replaces pixels of the third area of each of the at least two frames of video images with pixels of the resized object image.
15. The image processing method according to any one of claims 9 to 14, wherein the first image is an image photographed by a second camera or the first image is a pre-stored image.
16. An electronic device comprising the image processing circuit of any one of claims 1-6, a processor, and a memory storing a program or instructions executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image processing method of any one of claims 9-14.
17. A processing chip, characterized in that the processing chip comprises a processor and a communication interface, the communication interface being coupled to the processor and configured to transmit image data, and the processor being configured to run a program or instructions to implement the steps performed by the image processing chip in the image processing method according to any one of claims 9-14.
18. A control chip, characterized in that the control chip comprises a processor and a communication interface, the communication interface being coupled to the processor and configured to transmit image data, and the processor being configured to run a program or instructions to implement the steps performed by the main control chip in the image processing method according to any one of claims 9-14.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111627097.5A | 2021-12-28 | 2021-12-28 | Image processing circuit, method, device, electronic equipment and chip |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114286002A CN114286002A (en) | 2022-04-05 |
CN114286002B (en) | 2024-07-19 |
Family
ID=80877033
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111627097.5A (Active) | Image processing circuit, method, device, electronic equipment and chip | 2021-12-28 | 2021-12-28 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114286002B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111107267A (en) * | 2019-12-30 | 2020-05-05 | 广州华多网络科技有限公司 | Image processing method, device, equipment and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112887583B (en) * | 2019-11-30 | 2022-07-22 | 华为技术有限公司 | Shooting method and electronic equipment |
CN112135049B (en) * | 2020-09-24 | 2022-12-06 | 维沃移动通信有限公司 | Image processing method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |