
CN109002787B - Image processing method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN109002787B
CN109002787B
Authority
CN
China
Prior art keywords
target
moving
preview image
shooting scene
frame area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810746550.6A
Other languages
Chinese (zh)
Other versions
CN109002787A (en)
Inventor
陈岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810746550.6A priority Critical patent/CN109002787B/en
Publication of CN109002787A publication Critical patent/CN109002787A/en
Application granted granted Critical
Publication of CN109002787B publication Critical patent/CN109002787B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to an image processing method and apparatus, an electronic device, and a computer-readable storage medium. A moving target in the current shooting scene is acquired from a preset number of preview images; target detection is performed on each frame of preview image to obtain a target detection result of the preview image; the target detection result corresponding to the moving target is then removed from the target detection results of the preview image, thereby obtaining the target detection result of the static target in the current shooting scene; and corresponding image processing is performed on the preview image according to the target detection result of the static target. In this way, the static target that the user actually cares about in the current shooting scene can be selectively acquired, so that the moving target is ignored and only the static target is subjected to image processing. The image processing effect is improved, and the user's requirements are met.

Description

Image processing method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, a storage medium, and an electronic device.
Background
With the popularization of mobile terminals and the rapid development of the mobile internet, mobile terminals are used more and more heavily. The photographing function has become one of the most commonly used functions of a mobile terminal, and as users rely on it more often, they also raise higher requirements for it. For example, when a user photographs a shooting scene that contains a moving object, how to capture an image the user is satisfied with has become one of the problems to be solved.
Disclosure of Invention
The embodiments of the application provide an image processing method and apparatus, a storage medium, and an electronic device, which can improve image processing efficiency when photographing a shooting scene disturbed by a moving object.
An image processing method, comprising:
acquiring a moving target in a current shooting scene from a preset number of preview images;
carrying out target detection on each frame of preview image to obtain a target detection result of the preview image;
removing a target detection result corresponding to the moving target from the target detection result of the preview image to obtain a target detection result of a static target in the current shooting scene;
and performing corresponding image processing on the preview image according to the target detection result of the static target.
An image processing apparatus, characterized in that the apparatus comprises:
the moving target acquisition module is used for acquiring a moving target in the current shooting scene from the preview images with the preset number;
the target detection module is used for carrying out target detection on each frame of preview image to obtain a target detection result of the preview image;
the removing module is used for removing the target detection result corresponding to the moving target from the target detection result of the preview image to obtain the target detection result of the static target in the current shooting scene;
and the preview image processing module is used for carrying out corresponding image processing on the preview image according to the target detection result of the static target.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method as described above.
An electronic device comprising a memory, a processor, said memory having stored thereon a computer program operable on said processor, said processor performing the steps of the image processing method as described above when executing the computer program.
With the image processing method and apparatus, the storage medium, and the electronic device, a moving target in the current shooting scene is acquired from a preset number of preview images; target detection is performed on each frame of preview image to obtain a target detection result of the preview image; the target detection result corresponding to the moving target is then removed from the target detection results of the preview image, thereby obtaining the target detection result of the static target in the current shooting scene; and corresponding image processing is performed on the preview image according to the target detection result of the static target. In general, when a user photographs a shooting scene disturbed by a moving object and cares only about the static target in that scene, only the static target needs image processing, and the moving target does not. With this method, the moving target in the current shooting scene can be obtained from the preset number of preview images, and its target detection result can then be removed from the target detection results of the preview image, leaving only the target detection result of the static target in the current shooting scene. Finally, corresponding image processing is performed on the preview image according to the target detection result of the static target. The static target that the user actually cares about can thus be selectively acquired, the moving target is ignored, and only the static target is subjected to image processing. The image processing effect is improved, and the user's requirements are met.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a diagram of the internal structure of an electronic device in one embodiment;
FIG. 2 is a flow diagram of a method of image processing in one embodiment;
FIG. 3 is a flowchart of the method for acquiring a moving object in the current shooting scene in FIG. 2;
FIG. 4 is a schematic diagram of an embodiment of a neural network;
fig. 5 is a flowchart of a method for removing a target detection result corresponding to the moving target from the target detection results of the preview image in fig. 2 to obtain a target detection result of a stationary target in the current shooting scene;
FIG. 6 is a diagram showing a configuration of an image processing apparatus according to an embodiment;
FIG. 7 is a schematic diagram of the reject module of FIG. 6;
FIG. 8 is a schematic structural diagram of a target frame region elimination module of the moving target in FIG. 7;
fig. 9 is a block diagram of a partial structure of a cellular phone related to an electronic device provided in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Fig. 1 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 1, the electronic device includes a processor, a memory, and a network interface connected by a system bus. The processor provides computing and control capability and supports the operation of the whole electronic device. The memory is used for storing data, programs, and the like, and stores at least one computer program that can be executed by the processor to implement the image processing method provided in the embodiments of the application and suitable for the electronic device. The memory may include a non-volatile storage medium such as a magnetic disk, an optical disk, or a Read-Only Memory (ROM), or a Random Access Memory (RAM). For example, in one embodiment, the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by a processor to implement the image processing method provided in the following embodiments. The internal memory provides a cached execution environment for the operating system and computer programs in the non-volatile storage medium. The network interface may be an Ethernet card, a wireless network card, or the like, and is used for communicating with an external electronic device. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
In one embodiment, as shown in fig. 2, an image processing method is provided, which is described by taking the method as an example applied to the electronic device in fig. 1, and includes:
step 220, obtaining the moving object in the current shooting scene from a preset number of preview images.
The preview image is a picture of the current shooting scene captured in real time by the imaging device of the electronic equipment, and it can be displayed on the display screen of the electronic equipment in real time. Multiple frames of preview images form a video; that is, when a user holds the electronic device and previews the current shooting scene, what the user sees is a continuous video. Therefore, the moving target in the current shooting scene can be acquired from a preset number of preview images. For example, when a subject person is photographed and a moving person passes behind the subject, the moving target, i.e., the moving person in the current shooting scene, can be acquired from the preset number of preview images.
Step 240, performing target detection on each frame of preview image to obtain a target detection result of the preview image.
Target detection is performed on the acquired preview images; specifically, target detection is performed on each frame of preview image to obtain the target detection result of that preview image. Of course, target detection may instead be performed on only a certain frame or a certain number of frames of preview images to obtain their target detection results.
At the time of object detection, object detection is performed for each object in the preview image. Because each frame of image is a frame with a fixed shooting time, which objects are static objects in the current shooting scene and which objects are moving objects in the current shooting scene cannot be distinguished in each frame of image. However, in general, a user takes a picture of a shooting scene with moving object interference, and an object of interest of the user is often only a static object in the shooting scene, so that only the static object in the shooting scene needs to be subjected to image processing, and the moving object in the shooting scene does not need to be subjected to image processing.
When target detection is performed on each frame of preview image, because each frame is captured at a fixed shooting moment, it cannot be distinguished within a single frame which targets in the current shooting scene are static and which are moving. The question therefore becomes how to remove the moving target in the current scene so as to extract the static target in the current scene.
Step 260, removing the target detection result corresponding to the moving target from the target detection results of the preview image to obtain the target detection result of the static target in the current shooting scene.
In step 220, the moving target in the current shooting scene is obtained from a preset number of preview images; specifically, it is obtained by performing moving target tracking on the preset number of preview images. The moving target can therefore be located in any given frame of preview image. Accordingly, the target detection result of the moving target is identified among the target detection results of the preview image and removed, and the remaining target detection results are those of the static target in the current shooting scene.
Step 280, performing corresponding image processing on the preview image according to the target detection result of the static target.
After the target detection result of the static target in the current shooting scene is obtained from the preview image, corresponding image processing can be performed on the preview image according to that result. A target detection result generally includes the target type and the target frame area corresponding to the target. For example, suppose the target detection result shows that the static target is a portrait and the target frame area where the portrait is located has been obtained. Image processing suited to portraits, such as face beautification, can then be applied to that target frame area. The moving target in the preview image may be left unprocessed; alternatively, it may be blurred or removed so as to make the static target more prominent and achieve the image processing effect of highlighting only the static target, which is what the user wants.
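As an illustration of the optional blurring mentioned above, the following is a minimal sketch in Python with OpenCV; the box format and the kernel size are assumptions for illustration, not values from the patent.

```python
import cv2

def blur_moving_target(image, box):
    # box: (x, y, w, h) of the moving target's frame area in the preview image
    x, y, w, h = box
    # Blur only the moving target's region so the static target stands out
    image[y:y+h, x:x+w] = cv2.GaussianBlur(image[y:y+h, x:x+w], (31, 31), 0)
    return image
```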
In the embodiment of the application, when a user photographs a shooting scene disturbed by a moving object and cares only about the static target in that scene, only the static target needs image processing, and the moving target does not. With this method, the moving target in the current shooting scene can be obtained from the preset number of preview images, and its target detection result can then be removed from the target detection results of the preview image, leaving only the target detection result of the static target in the current shooting scene. Finally, corresponding image processing is performed on the preview image according to the target detection result of the static target. The static target that the user actually cares about can thus be selectively acquired, the moving target is ignored, and only the static target is subjected to image processing. The image processing effect is improved, and the user's requirements are met.
In one embodiment, as shown in FIG. 3, step 220 includes:
step 222, obtaining a preset number of preview images, where the preset number of preview images are images obtained by shooting the current shooting scene in real time.
The preview image is a picture of the current shooting scene captured in real time by the imaging device of the electronic equipment, and it can be displayed on the display screen of the electronic equipment in real time. The preset number of preview images may be several consecutive frames, such as 5 or 10 consecutive frames of preview images, or any other reasonable number of frames. Alternatively, the preset number of preview images may be several non-consecutive preview images sampled at intervals, as long as the moving target in the current shooting scene can be obtained from them.
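One plausible way to maintain such a preset number of recent preview frames is a fixed-length buffer, as in the small Python sketch below; the value N = 5 matches the "consecutive 5 frames" example above, and the function name is illustrative.

```python
from collections import deque

PRESET_COUNT = 5
preview_buffer = deque(maxlen=PRESET_COUNT)

def on_new_preview_frame(frame):
    preview_buffer.append(frame)                 # oldest frame drops out automatically
    return len(preview_buffer) == PRESET_COUNT   # ready for moving-target tracking
```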
Step 224, tracking the moving target over the preset number of preview images to acquire the moving target in the current shooting scene.
After a preset number of preview images of the same shooting scene are acquired, the preset number of preview images may be subject to moving object tracking. Generally, target tracking is divided into two parts, namely a feature extraction algorithm and a target tracking algorithm. The extracted target features can be roughly divided into the following types:
1) Color features: the color histogram of the target area is used as the feature. Color features are rotation-invariant, are not affected by changes in the size or shape of the target object, and their distribution in the color space remains roughly stable.
2) Contour features: the algorithm is fast, and it works well even when the target is partially occluded.
3) Texture features: tracking with texture features gives a better result than tracking with contour features.
Target tracking algorithms can be roughly divided into four types: the mean shift algorithm (meanshift); target tracking based on Kalman filtering; target tracking based on particle filtering; and algorithms based on modeling the moving target.
In the embodiment of the application, the moving target in the current shooting scene can be obtained by tracking the moving target of the preset number of preview images, and specifically, the feature information of the moving target in the image, such as the position coordinate of the moving target in the current shooting scene in the moving process, the contour feature of the moving target, and the like, can be obtained. Therefore, according to the moving target tracking result, the target detection result corresponding to the moving target is removed from the target detection results of the preview image of a certain frame, and the target detection result of the static target in the current shooting scene is obtained.
In one embodiment, performing moving object tracking on a preset number of preview images to acquire a moving object in a current shooting scene includes:
and tracking the moving target of the preset number of preview images by adopting a mean shift algorithm or an optical flow method to acquire the moving target in the current shooting scene.
Specifically, the mean shift algorithm is a non-parametric method based on density gradient ascent; it finds the target position through iterative computation to realize target tracking. The algorithm requires little computation and is simple and easy to implement, which makes it well suited to real-time tracking.
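For illustration, here is a minimal sketch of mean-shift tracking using OpenCV's built-in cv2.meanShift, with the color histogram of the target area (feature type 1 above) as the tracked feature. The initial window is assumed to be known; the patent does not specify how it is first obtained.

```python
import cv2

def track_mean_shift(frames, init_window):
    # init_window: (x, y, w, h) roughly framing the moving target in frames[0]
    x, y, w, h = init_window
    hsv_roi = cv2.cvtColor(frames[0][y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    # Color histogram of the target area: the rotation-invariant feature
    roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    window, positions = init_window, []
    for frame in frames[1:]:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        # Iteratively shift the window toward the density peak (gradient ascent)
        _, window = cv2.meanShift(back_proj, window, term_crit)
        positions.append(window)
    return positions  # per-frame (x, y, w, h) of the tracked moving target
```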
The basic principle of detecting moving targets by the optical flow method is as follows: each pixel in the image is assigned a velocity vector, forming an image motion field. At a specific moment of the motion, points on the image correspond one to one with points on the three-dimensional object, and this correspondence can be obtained from the projection relation, so the image can be analyzed dynamically according to the velocity vector of each pixel. If there is no moving object in the image, the optical flow vector varies continuously over the entire image area. When a moving object is present, the target moves relative to the image background, and the velocity vector of the moving object differs from that of the neighborhood background, which reveals the moving object and its position. The advantage of the optical flow method is that optical flow carries not only the motion information of the moving object but also rich information about the three-dimensional structure of the scene, so moving objects can be detected without knowing anything about the scene.
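The sketch below illustrates this idea with OpenCV's dense Farneback optical flow, under the assumption that the camera is held roughly still so that large flow magnitudes correspond to moving regions; the magnitude threshold is an illustrative assumption.

```python
import cv2
import numpy as np

def detect_moving_mask(prev_frame, next_frame, mag_thresh=2.0):
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    # One velocity vector per pixel: the image motion field described above
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Pixels whose velocity differs markedly from the background are moving
    return (mag > mag_thresh).astype(np.uint8)
```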
In the embodiment of the application, a moving target is tracked by adopting a mean shift algorithm or an optical flow method on a preset number of preview images, and the moving target in the current shooting scene is obtained. The method can directly acquire the moving target in the current shooting scene on the premise of not carrying out target detection on the image, thereby preparing for removing the target detection result corresponding to the moving target from the target detection result of a certain frame of preview image subsequently to obtain the target detection result of the static target in the current shooting scene.
In one embodiment, step 240, performing object detection on each frame of preview image to obtain an object detection result of the preview image, includes:
and carrying out target detection on each frame of preview image to obtain a target type and a target frame area corresponding to the target in the preview image.
Specifically, a neural network model is used to perform target detection on the image. The training process of the neural network model is as follows: a training image containing a designated image category and a designated object category is input into the neural network; features are extracted through the basic network layer of the neural network and fed into the classification network layer and the target detection network layer; the classification network layer produces a first loss function reflecting the difference between the first prediction confidence and the first true confidence of the designated image category to which the background image in the training image belongs, and the target detection network layer produces a second loss function reflecting the difference between the second prediction confidence and the second true confidence of the designated object category to which the foreground target belongs. The first loss function and the second loss function are weighted and summed to obtain the target loss function, and the parameters of the neural network are adjusted according to the target loss function to train the network. The image can then be recognized according to the trained neural network model to obtain the category to which it belongs.
FIG. 4 is an architecture diagram of a neural network model in one embodiment. As shown in fig. 4, the input layer of the neural network receives a training image with an image category label, performs feature extraction through a basic network (such as a VGG network), and outputs the extracted image features to the feature layer. The feature layer performs category detection on the image to obtain a first loss function, performs target detection on the foreground target according to the image features to obtain a second loss function, and performs position detection on the foreground target to obtain a position loss function; the first loss function, the second loss function, and the position loss function are weighted and summed to obtain the target loss function. The neural network comprises a data input layer, a basic network layer, a classification network layer, a target detection network layer, and two output layers. The data input layer is used for receiving raw image data. The basic network layer preprocesses the image input by the input layer and extracts features. The preprocessing may include de-averaging, normalization, dimensionality reduction, and whitening. De-averaging means centering each dimension of the input data on 0, in order to pull the center of the sample back to the origin of the coordinate system. Normalization scales the amplitudes to the same range. Whitening normalizes the amplitude on each feature axis of the data. Feature extraction may, for example, apply the first 5 convolutional layers of VGG16 to the original image, and the extracted features are fed into the classification network layer and the target detection network layer. The classification network layer may use the depthwise convolution and pointwise convolution of a MobileNet network to process the features, which are then fed into an output layer to obtain the first prediction confidence of the designated image category to which the image belongs; the first loss function is then obtained from the difference between the first prediction confidence and the first true confidence. The target detection network layer may, for example, adopt an SSD network, cascading convolutional feature layers after the first 5 convolutional layers of VGG16; in the convolutional feature layers, a set of convolution filters is used to predict the offset parameters of the preselected default bounding boxes corresponding to the designated object category relative to the real bounding boxes, as well as the second prediction confidence corresponding to the designated object category. The region of interest is the region of a preselected default bounding box. A position loss function is constructed from the offset parameters, and the second loss function is obtained from the difference between the second prediction confidence and the second true confidence. The first loss function, the second loss function, and the position loss function are weighted and summed to obtain the target loss function, and the parameters of the neural network are adjusted with a back-propagation algorithm according to the target loss function to train the neural network.
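As a schematic of the weighted summation just described, the following PyTorch-style sketch combines the three losses; the individual loss forms and the weight values are placeholders, since the patent does not fix them.

```python
import torch

def target_loss(cls_loss: torch.Tensor,
                det_loss: torch.Tensor,
                loc_loss: torch.Tensor,
                w_cls: float = 1.0, w_det: float = 1.0, w_loc: float = 1.0):
    # First loss: image-category classification of the background
    # Second loss: designated-object-category confidence of foreground targets
    # Position loss: default-box offsets relative to ground-truth boxes
    return w_cls * cls_loss + w_det * det_loss + w_loc * loc_loss
```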
When the trained neural network is used to recognize an image to be detected, the input layer of the neural network receives the input image, and the features of the image are extracted. The features are fed into the classification network layer for image classification and recognition: a softmax classifier in the first output layer outputs the confidence of each designated image category to which the background image may belong, and the image category with the highest confidence that also exceeds the confidence threshold is selected as the image category of the background. The extracted features are also fed into the target detection network layer for foreground target detection: a softmax classifier in the second output layer outputs the confidence and corresponding position of each designated object category to which the foreground target may belong, the designated object category with the highest confidence that also exceeds the confidence threshold is selected as the object category of the foreground target in the image, and the position corresponding to that object category is output.
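The confidence-threshold selection at each output layer can be sketched as follows; the threshold value here is an illustrative assumption.

```python
import numpy as np

def pick_category(logits, threshold=0.5):
    exp = np.exp(logits - np.max(logits))
    probs = exp / exp.sum()              # softmax confidences
    best = int(np.argmax(probs))
    if probs[best] > threshold:
        return best, float(probs[best])  # category index and its confidence
    return None, float(probs[best])      # nothing clears the threshold
```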
In the embodiment of the application, target detection is performed on each frame of preview image to obtain the target type and the target frame area corresponding to each target in the preview image. Target detection covers both the static targets and the moving targets in the current shooting scene, yielding the target type and target frame area corresponding to each target in the preview image. The target type may be, for example, a portrait, a baby, a cat, a dog, or food, but may of course be another kind of object. The target frame area may be an area that encloses a target, commonly a rectangular frame area, but it may also be an area of another geometric shape.
In one embodiment, removing a target detection result corresponding to a moving target from target detection results of a preview image to obtain a target detection result of a stationary target in a current shooting scene includes:
acquiring a target frame area of the moving target in the preview image according to the moving target;
removing a target frame area of the moving target from a target frame area corresponding to the target in the preview image;
and obtaining a target frame area of a static target in the current shooting scene.
Specifically, when target detection is performed on the preview image, it covers both the static target and the moving target in the current shooting scene, yielding the target type and target frame area corresponding to each target in the preview image. By tracking the moving target over the preset number of preview images as described above, the position coordinates of the moving target during its movement, its contour features, and so on can be obtained, so the feature information of the moving target can be identified in the images. Therefore, the target frame area of the moving target is located in the preview image according to the feature information of the moving target, that target frame area is removed from the target frame areas corresponding to the targets in the preview image, and the target frame areas of the static targets in the current shooting scene are obtained.
For example, suppose a static target A (e.g., a portrait) is being photographed and a moving target B (whose feature information includes the target type puppy) appears in the current shooting scene; the focus of the shot is only the static target A, and attention on the moving target B is unwanted. Moving target tracking can then be performed on the preset number of preview images, so that the position coordinates of moving target B during its movement, its contour features, and so on are obtained, and the feature information of moving target B can be identified in the images. Target detection is then performed on a certain frame of preview image to obtain its target detection result, which contains the target types and target frame areas of two targets: the portrait and the puppy. The target frame area of moving target B is located in that frame according to its feature information. Finally, the target frame area of moving target B is removed from the target frame areas corresponding to the targets in the preview image, which yields the target frame area of the static target in the current shooting scene, i.e., only the target frame area of the portrait remains.
In the embodiment of the present application, during target detection on a single frame of preview image, it cannot be distinguished which targets in the preview image are static and which are moving. The moving target in the current shooting scene, however, can be obtained by mean shift or optical flow tracking over the preset number of preview images. Therefore, the target frame area of the moving target is removed directly from the target frame areas produced by target detection on a certain frame of preview image, leaving only the target frame areas of the static targets. This lays the foundation for subsequently performing image processing only on the target frame area where the static target (the target the user cares about) is located, with no image processing on the target frame area of the moving target, which meets the user's needs while saving resources and improving efficiency.
In one embodiment, as shown in fig. 5, the removing the target frame area of the moving target from the target frame area corresponding to the target in the preview image includes:
step 520, judging whether an intersection exists between the target frame area of the moving target and the target frame area corresponding to the target in the preview image;
and 540, if so, removing the target frame area which is intersected with the target frame area of the moving target from the target frame area corresponding to the target in the preview image to obtain the target frame area of the static target in the current shooting scene.
And step 560, if not, directly taking the target frame area corresponding to the target in the preview image as the target frame area of the static target in the current shooting scene.
Specifically, a target frame region of the moving target is obtained from the target detection result of the preview image according to the moving target, and further, a target type of the target frame region may also be obtained. According to the position coordinates of the moving target in the moving process, the contour characteristics of the moving target and the like acquired through the tracking of the moving target, the characteristic information of the moving target can be identified in the image, and the target frame area of the moving target is acquired from the target frame area obtained by performing target detection on the preview image which needs image processing at present. After the target frame area of the moving target is obtained, whether intersection exists between the target frame area of the moving target and the target frame area corresponding to the target in the preview image is judged. If the intersection exists, the moving target exists on the preview image which needs to be subjected to image processing at the moment, and the target frame area which has the intersection with the target frame area of the moving target is removed to obtain the target frame area of the static target in the current shooting scene.
If the intersection does not exist, the fact that no moving target exists on the preview image needing image processing at the moment is indicated, and the target frame area corresponding to the target in the preview image is directly used as the target frame area of the static target in the current shooting scene.
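A minimal sketch of steps 520 through 560, dropping every detected target frame that intersects the moving target's frame, might look like this; the box format and helper names are illustrative.

```python
def intersects(a, b):
    # Boxes are (x, y, w, h); two axis-aligned boxes intersect unless one
    # lies entirely to the side of, above, or below the other
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return not (ax + aw <= bx or bx + bw <= ax or
                ay + ah <= by or by + bh <= ay)

def static_target_boxes(detected_boxes, moving_box):
    if moving_box is None:            # step 560: no moving target in this frame
        return list(detected_boxes)
    # step 540: remove boxes that intersect the moving target's box
    return [box for box in detected_boxes if not intersects(box, moving_box)]
```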
In the embodiment of the application, because moving targets and static targets are difficult to distinguish within a single frame of preview image, the method of this scheme can distinguish them among the targets in one frame of image. Specifically, after the moving target in the current shooting scene is acquired by mean shift or optical flow tracking over the preset number of preview images, the target frame area of the moving target is located, according to the relevant feature information of the moving target, in the preview image that currently requires image processing. It is then judged whether the target frame area of the moving target intersects any target frame area corresponding to a target in the preview image, which determines whether a moving target is present in the preview image currently requiring image processing. If so, the target frame areas that intersect the target frame area of the moving target are removed, yielding the target frame areas of the static targets in the current shooting scene.
In one embodiment, the image processing of the preview image according to the target detection result of the still target includes:
and respectively carrying out image processing corresponding to the target type on the target frame area corresponding to the static target according to the target type corresponding to the static target.
Specifically, the above steps yield the target detection result of the static target in the current shooting scene, which includes the target type and the target frame area. Since the user only needs image processing on what they care about (the static target), after the target frame area of the static target is obtained, the target type of the static target is acquired from that target frame area. A scene may contain one or more static-target frame areas, for example a portrait and a vehicle. In this case, image processing corresponding to each target type can be performed on the target frame area corresponding to each static target. When the target type corresponding to a static target is a portrait, portrait-related image processing, such as face beautification, is performed on the target frame area of the portrait. When the static target is a vehicle, vehicle-related image processing is performed on the target frame area of the vehicle. In this way, every static target achieves a better image processing result. A sketch of such per-type processing follows.
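The sketch below dispatches on the target type; beautify_face and enhance_vehicle are hypothetical placeholders standing in for whatever portrait- or vehicle-specific processing an implementation actually uses.

```python
import cv2

def beautify_face(roi):
    # Placeholder: mild bilateral smoothing stands in for a real beautification pipeline
    return cv2.bilateralFilter(roi, 9, 75, 75)

def enhance_vehicle(roi):
    # Placeholder: simple unsharp masking stands in for vehicle-specific processing
    blur = cv2.GaussianBlur(roi, (0, 0), 3)
    return cv2.addWeighted(roi, 1.5, blur, -0.5, 0)

def process_static_targets(image, static_targets):
    # static_targets: list of (target_type, (x, y, w, h)) pairs for static targets
    for target_type, (x, y, w, h) in static_targets:
        roi = image[y:y+h, x:x+w]
        if target_type == "portrait":
            image[y:y+h, x:x+w] = beautify_face(roi)
        elif target_type == "vehicle":
            image[y:y+h, x:x+w] = enhance_vehicle(roi)
    return image
```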
In the embodiment of the application, if a user photographs a shooting scene disturbed by a moving object and cares only about the static target in the shooting scene, only the static target needs image processing, and the moving target does not. After the target frame area of the static target is obtained, the target type of the static target is acquired from that target frame area. Image processing corresponding to the target type can then be performed on the target frame area of each static target, so that every static target achieves a better image processing result.
In a specific embodiment, an image processing method is provided, which is described by taking the application of the method to the electronic device in fig. 1 as an example, and includes:
the method comprises the following steps: acquiring a preset number of preview images, wherein the preset number of preview images are images acquired by shooting a current shooting scene in real time;
step two: tracking a moving target of a preset number of preview images by adopting a mean shift algorithm or an optical flow method to obtain the moving target in the current shooting scene;
step three: performing target detection on each frame of preview image to obtain a target type and a target frame area corresponding to a target in the preview image;
step four: acquiring a target frame area of the moving target in the preview image according to the moving target;
step five: judging whether an intersection exists between a target frame area of the moving target and a target frame area corresponding to the target in the preview image;
step six: if so, removing a target frame area which is intersected with the target frame area of the moving target from the target frame area corresponding to the target in the preview image to obtain a target frame area of a static target in the current shooting scene;
step seven: and respectively carrying out image processing corresponding to the target type on the target frame area corresponding to the static target according to the target type corresponding to the static target.
Because it is difficult to distinguish moving targets from static targets within a single frame of preview image, the method of this scheme can distinguish them among the targets in one frame of image. Specifically, after the moving target in the current shooting scene is acquired by mean shift or optical flow tracking over the preset number of preview images, the target frame area of the moving target is located, according to its relevant feature information, in the preview image that currently requires image processing. It is then judged whether the target frame area of the moving target intersects any target frame area corresponding to a target in the preview image, which determines whether a moving target is present in the preview image currently requiring image processing. If so, the target frame areas that intersect the target frame area of the moving target are removed, yielding the target frame areas of the static targets in the current shooting scene. After the target frame area of the static target is obtained, the target type of the static target is acquired from it, and image processing corresponding to the target type is performed on the corresponding target frame area, so that every static target achieves a better image processing result. Because no image processing is required on the target frame area of the moving target in the preview image, resources are saved and efficiency is improved. A rough end-to-end sketch of these seven steps follows.
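This sketch strings together the helpers sketched earlier (detect_moving_mask, static_target_boxes, process_static_targets). detect_objects stands in for the neural-network detector and is hypothetical; it is assumed to return (target_type, box) pairs, and the two-value findContours return assumes OpenCV 4.

```python
import cv2

def process_preview(frames, detect_objects):
    # Steps one and two: derive the moving target's box from optical flow
    mask = detect_moving_mask(frames[0], frames[-1])
    moving_box = None
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        moving_box = cv2.boundingRect(max(contours, key=cv2.contourArea))
    # Step three: target types and frame areas in the frame to be processed
    frame = frames[-1]
    detections = detect_objects(frame)   # hypothetical: [(type, (x, y, w, h)), ...]
    # Steps four to six: discard boxes intersecting the moving target's box
    boxes = [box for _, box in detections]
    kept = set(map(tuple, static_target_boxes(boxes, moving_box)))
    static = [(t, box) for t, box in detections if tuple(box) in kept]
    # Step seven: per-type image processing on the static targets only
    return process_static_targets(frame, static)
```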
In one embodiment, as shown in fig. 6, there is provided an image processing apparatus 600 including: a moving object acquisition module 620, an object detection module 640, a culling module 660, and a preview image processing module 680. Wherein,
a moving object obtaining module 620, configured to obtain a moving object in a current shooting scene from a preset number of preview images;
the target detection module 640 is configured to perform target detection on each frame of preview image to obtain a target detection result of the preview image;
the removing module 660 is configured to remove a target detection result corresponding to the moving target from target detection results of the preview image to obtain a target detection result of a stationary target in the current shooting scene;
and the preview image processing module 680 is configured to perform corresponding image processing on the preview image according to the target detection result of the static target.
In an embodiment, the moving object obtaining module 620 is further configured to obtain a preset number of preview images, where the preset number of preview images are obtained by shooting the current shooting scene in real time; and tracking the moving target of the preview images with the preset number to acquire the moving target in the current shooting scene.
In an embodiment, the moving object obtaining module 620 is further configured to perform moving object tracking on a preset number of preview images by using a mean shift algorithm or an optical flow method, so as to obtain a moving object in the current shooting scene.
In an embodiment, the target detection module 640 is further configured to perform target detection on each frame of the preview image, so as to obtain a target type and a target frame area corresponding to a target in the preview image.
In one embodiment, as shown in FIG. 7, the culling module 660 comprises:
a target frame area acquiring module 662 of the moving target, configured to acquire a target frame area of the moving target in the preview image according to the moving target;
a target frame region removing module 664 of the moving target, which is used for removing the target frame region of the moving target from the target frame region corresponding to the target in the preview image;
and the target frame area acquisition module 666 of the static target is used for acquiring the target frame area of the static target in the current shooting scene.
In one embodiment, as shown in fig. 8, the target frame region eliminating module 664 of the moving target includes:
the judging module 664a is used for judging whether an intersection exists between the target frame area of the moving target and the target frame area corresponding to the target in the preview image;
and the intersecting target frame region removing module 664b is used for removing, if so, the target frame regions that intersect the target frame region of the moving target from the target frame regions corresponding to the targets in the preview image.
In one embodiment, the preview image processing module 680 is further configured to perform image processing corresponding to the target types on the target frame areas corresponding to the static targets according to the target types corresponding to the static targets.
The division of the modules in the image processing apparatus is only for illustration, and in other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, implements the steps of the image processing method provided by the above embodiments.
In one embodiment, an electronic device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the steps of the image processing method provided in the above embodiments are implemented.
The embodiments of the present application also provide a computer program product, which when run on a computer, causes the computer to execute the steps of the image processing method provided in the foregoing embodiments.
The embodiment of the application also provides an electronic device. As shown in fig. 9, for convenience of explanation, only the parts related to the embodiments of the present application are shown; for specific technical details that are not disclosed, please refer to the method part of the embodiments. The electronic device may be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, a wearable device, and the like. The electronic device is taken to be a mobile phone as an example:
Fig. 9 is a block diagram of a partial structure of a mobile phone related to an electronic device provided in an embodiment of the present application. Referring to fig. 9, the handset includes: a Radio Frequency (RF) circuit 910, a memory 920, an input unit 930, a display unit 940, a sensor 950, an audio circuit 960, a wireless fidelity (WiFi) module 970, a processor 980, and a power supply 990. Those skilled in the art will appreciate that the handset configuration shown in fig. 9 is not limiting and may include more or fewer components than those shown, combine some components, or arrange the components differently.
The RF circuit 910 may be used for receiving and transmitting signals during information transmission or a call; it may receive downlink information from a base station and deliver it to the processor 980 for processing, and it may also transmit uplink data to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 910 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 920 may be used to store software programs and modules, and the processor 980 may execute various functional applications and data processing of the mobile phone by operating the software programs and modules stored in the memory 920. The memory 920 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as an application program for a sound playing function, an application program for an image playing function, and the like), and the like; the data storage area may store data (such as audio data, an address book, etc.) created according to the use of the mobile phone, and the like. Further, the memory 920 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 930 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone 900. Specifically, the input unit 930 may include a touch panel 931 and other input devices 932. The touch panel 931, which may also be referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 931 (e.g., a user operating the touch panel 931 or near the touch panel 931 by using a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program. In one embodiment, the touch panel 931 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 980, and can receive and execute commands sent by the processor 980. In addition, the touch panel 931 may be implemented by various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 930 may include other input devices 932 in addition to the touch panel 931. In particular, other input devices 932 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), and the like.
The display unit 940 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The display unit 940 may include a display panel 941. In one embodiment, the Display panel 941 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. In one embodiment, the touch panel 931 may overlay the display panel 941, and when the touch panel 931 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 980 to determine the type of touch event, and then the processor 980 provides a corresponding visual output on the display panel 941 according to the type of touch event. Although in fig. 9, the touch panel 931 and the display panel 941 are two independent components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 931 and the display panel 941 may be integrated to implement the input and output functions of the mobile phone.
The cell phone 900 may also include at least one sensor 950, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor, which adjusts the brightness of the display panel 941 according to the ambient light, and a proximity sensor, which turns off the display panel 941 and/or the backlight when the mobile phone is moved to the ear. As one kind of motion sensor, an acceleration sensor can detect the magnitude of acceleration in each direction and can detect the magnitude and direction of gravity when the phone is stationary; it can be used in applications that recognize the phone's attitude (such as switching between landscape and portrait), in vibration-recognition functions (such as a pedometer or tap detection), and the like. The mobile phone may also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor.
The audio circuit 960, a speaker 961, and a microphone 962 may provide an audio interface between the user and the phone. The audio circuit 960 may convert received audio data into an electrical signal and transmit it to the speaker 961, which converts it into a sound signal for output; on the other hand, the microphone 962 converts collected sound signals into electrical signals, which the audio circuit 960 receives and converts into audio data. The audio data is then output to the processor 980 for processing and may be transmitted to another phone via the RF circuit 910, or output to the memory 920 for further processing.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 970, and provides wireless broadband Internet access for the user. Although fig. 9 shows WiFi module 970, it is to be understood that it does not belong to the essential components of cell phone 900 and may be omitted as desired.
The processor 980 is the control center of the phone. It connects the various parts of the entire phone through various interfaces and lines, and performs the phone's functions and processes its data by running or executing the software programs and/or modules stored in the memory 920 and calling the data stored in the memory 920, thereby monitoring the phone as a whole. In one embodiment, the processor 980 may include one or more processing units. In one embodiment, the processor 980 may integrate an application processor, which mainly handles the operating system, user interface, applications, and the like, and a modem processor, which mainly handles wireless communication. It is understood that the modem processor may also not be integrated into the processor 980.
The cell phone 900 also includes a power supply 990 (such as a battery) for supplying power to the various components. Preferably, the power supply is logically connected to the processor 980 through a power management system, so that charging, discharging, and power consumption can be managed through the power management system.
In one embodiment, the cell phone 900 may also include a camera, a Bluetooth module, and the like.
Any reference to memory, storage, a database, or another medium used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art could make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An image processing method, comprising:
acquiring a moving target in a current shooting scene from a preset number of preview images;
performing target detection on each frame of preview image to obtain a target detection result of the preview image, wherein the target detection result comprises a target type and a target frame area;
removing a target detection result corresponding to the moving target from the target detection results of the preview image to obtain a target detection result of a static target in the current shooting scene;
and performing, on the target frame area corresponding to the static target and according to the target type corresponding to the static target, image processing corresponding to that target type, wherein the moving target in the current shooting scene is either left unprocessed in the preview image or blurred in the preview image.
2. The method of claim 1, wherein obtaining the moving object in the current shooting scene from a preset number of preview images comprises:
acquiring a preset number of preview images, wherein the preset number of preview images are images acquired by shooting a current shooting scene in real time;
and performing moving target tracking on the preset number of preview images to acquire the moving target in the current shooting scene.
3. The method of claim 2, wherein performing moving object tracking on the preset number of preview images to obtain a moving object in the current shooting scene comprises:
and performing moving target tracking on the preset number of preview images using a mean shift algorithm or an optical flow method to acquire the moving target in the current shooting scene.
4. The method of claim 1, wherein performing object detection on each frame of preview image to obtain an object detection result of the preview image comprises:
and carrying out target detection on each frame of preview image to obtain a target type and a target frame area corresponding to a target in the preview image.
5. The method of claim 4, wherein removing the target detection result corresponding to the moving target from the target detection results of the preview image to obtain the target detection result of the stationary target in the current shooting scene comprises:
acquiring a target frame area of the moving target in the preview image according to the moving target;
removing the target frame area of the moving target from the target frame area corresponding to the target in the preview image;
and obtaining a target frame area of a static target in the current shooting scene.
6. The method of claim 5, wherein removing the target frame area of the moving target from the target frame area corresponding to the target in the preview image comprises:
judging whether an intersection exists between the target frame area of the moving target and the target frame area corresponding to the target in the preview image;
and if so, removing the target frame area which has intersection with the target frame area of the moving target from the target frame area corresponding to the target in the preview image.
7. The method of claim 4, applied to a shooting scene in which there is interference from moving objects, further comprising:
and blurring the moving target or removing the moving target according to a target detection result corresponding to the moving target.
8. An image processing apparatus, characterized in that the apparatus comprises:
the moving target acquisition module is used for acquiring a moving target in a current shooting scene from a preset number of preview images;
the target detection module is used for carrying out target detection on each frame of preview image to obtain a target detection result of the preview image, wherein the target detection result comprises a target type and a target frame area;
the removing module is used for removing the target detection result corresponding to the moving target from the target detection result of the preview image to obtain the target detection result of the static target in the current shooting scene;
and the preview image processing module is used for performing, on the target frame area corresponding to the static target and according to the target type corresponding to the static target, image processing corresponding to that target type, wherein the moving target in the current shooting scene is either left unprocessed in the preview image or blurred in the preview image.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor implements the steps of the image processing method according to any one of claims 1 to 7 when executing the computer program.
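Although the claims above define the method in prose, a compact sketch can make the data flow concrete. The following Python/OpenCV sketch is one possible reading of claims 1 to 7, with simple frame differencing standing in for the mean shift or optical flow tracking named in claim 3, a rectangle intersection test implementing the removal step of claim 6, and Gaussian blurring of the moving regions per claim 7. The detect_objects stub and the per-type enhancements are assumptions for illustration; the patent does not prescribe a specific detector or enhancement.

```python
import cv2
import numpy as np

def moving_target_boxes(frames):
    """Locate moving targets across a preset number of preview frames.

    Simple frame differencing is used here for brevity; claim 3 names
    mean shift or optical flow, which could replace this step."""
    boxes = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        mask = cv2.dilate(mask, None, iterations=2)
        contours, _ = cv2.findContours(
            mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        boxes += [cv2.boundingRect(c) for c in contours
                  if cv2.contourArea(c) > 500]   # drop noise blobs
        prev = gray
    return boxes                                  # list of (x, y, w, h)

def detect_objects(frame):
    """Hypothetical detector stub: a real model (e.g. an SSD- or
    YOLO-style network) would return (target_type, (x, y, w, h)) pairs."""
    return []

def intersects(a, b):
    """True if two target frame areas (x, y, w, h) overlap (claim 6)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def process_preview(frames):
    """Enhance static targets and blur moving ones in the latest frame."""
    preview = frames[-1].copy()
    moving = moving_target_boxes(frames)
    # Keep only detections whose target frame area meets no moving box.
    static = [(t, box) for t, box in detect_objects(preview)
              if not any(intersects(box, m) for m in moving)]
    for target_type, (x, y, w, h) in static:
        roi = preview[y:y + h, x:x + w]
        if target_type == "portrait":             # illustrative mapping
            preview[y:y + h, x:x + w] = cv2.bilateralFilter(roi, 9, 75, 75)
        elif target_type == "plant":
            hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV).astype(np.float32)
            hsv[..., 1] = np.clip(hsv[..., 1] * 1.2, 0, 255)
            preview[y:y + h, x:x + w] = cv2.cvtColor(
                hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
    for x, y, w, h in moving:                      # claim 7: blur movers
        preview[y:y + h, x:x + w] = cv2.GaussianBlur(
            preview[y:y + h, x:x + w], (31, 31), 0)
    return preview
```

Frame differencing is used only to keep the sketch short; swapping in cv2.calcOpticalFlowFarneback or a mean shift tracker would change only moving_target_boxes, since the rest of the pipeline consumes plain (x, y, w, h) boxes.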
CN201810746550.6A | Priority date 2018-07-09 | Filing date 2018-07-09 | Image processing method and device, storage medium and electronic equipment | Active | Granted as CN109002787B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title | Publication
CN201810746550.6A | 2018-07-09 | 2018-07-09 | Image processing method and device, storage medium and electronic equipment | CN109002787B (en)

Publications (2)

Publication Number | Publication Date
CN109002787A (en) | 2018-12-14
CN109002787B (en) | 2021-02-23

Family

Family ID: 64599206

Family Applications (1)

Application Number | Status | Publication
CN201810746550.6A | Active | CN109002787B (en)

Country Status (1)

Country | Publication
CN (1) | CN109002787B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
JP7086878B2 * | 2019-02-20 | 2022-06-20 | 株式会社東芝 | Learning device, learning method, program and recognition device
CN110460773B * | 2019-08-16 | 2021-05-11 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable storage medium
CN113497881B * | 2020-03-20 | 2022-11-08 | 华为技术有限公司 | Image processing method and device
CN111723767B * | 2020-06-29 | 2023-08-08 | 杭州海康威视数字技术股份有限公司 | Image processing method, device and computer storage medium
CN113011497B * | 2021-03-19 | 2023-06-20 | 城云科技(中国)有限公司 | Image comparison method and system
CN113129229A | 2021-03-29 | 2021-07-16 | 影石创新科技股份有限公司 | Image processing method, image processing device, computer equipment and storage medium
CN113129227A | 2021-03-29 | 2021-07-16 | 影石创新科技股份有限公司 | Image processing method, image processing device, computer equipment and storage medium
CN113192101B * | 2021-05-06 | 2024-03-29 | 影石创新科技股份有限公司 | Image processing method, device, computer equipment and storage medium
CN115103120B * | 2022-06-30 | 2024-07-26 | Oppo广东移动通信有限公司 | Shooting scene detection method and device, electronic equipment and storage medium

Citations (2)

Publication number | Priority date | Publication date | Assignee | Title
CN107844765A * | 2017-10-31 | 2018-03-27 | 广东欧珀移动通信有限公司 | Photographic method, device, terminal and storage medium
CN107844764A * | 2017-10-31 | 2018-03-27 | 广东欧珀移动通信有限公司 | Image processing method, device, electronic equipment and computer-readable recording medium

Family Cites Families (2)

Publication number | Priority date | Publication date | Assignee | Title
CN106952235B * | 2017-02-10 | 2019-07-26 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal
CN107959841B * | 2017-12-07 | 2020-03-27 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, storage medium, and electronic device

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant