CN111866394A - Photographing method, photographing device, terminal and computer-readable storage medium

Info

Publication number: CN111866394A
Authority: CN (China)
Prior art keywords: frame image, photographing, target object, acquiring, shooting
Legal status: Pending
Application number: CN202010785958.1A
Priority date / filing date: 2018-06-15
Other languages: Chinese (zh)
Inventor: 张光辉
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by: Guangdong Oppo Mobile Telecommunications Corp Ltd

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Studio Devices (AREA)

Abstract

The present application belongs to the field of photographing technologies, and in particular relates to a photographing method, apparatus, terminal, and computer-readable storage medium. The method includes: acquiring a preview frame image and detecting a target object contained in the preview frame image; acquiring characteristic information of the target object and calculating shooting parameters of the photographing frame image according to the characteristic information; receiving a photographing instruction that carries the shooting parameters; and acquiring a photo corresponding to the shooting parameters according to the photographing instruction. Acquiring the characteristic information of the target object and calculating the shooting parameters includes: acquiring the chromaticity of the target object, and calculating the saturation of the photographing frame image according to that chromaticity. The method effectively solves the technical problem that post-processing after the shot cannot produce a photo that meets the user's requirements, and improves the shooting quality of photos.

Description

Photographing method, photographing device, terminal and computer-readable storage medium
Technical Field
The present application belongs to the field of photographing technologies, and in particular, to a photographing method, apparatus, terminal, and computer-readable storage medium.
Background
With the continuous development of image processing technology, post-processing of images has matured, covering image denoising, resolution reconstruction, background blurring, style transfer, and the like.
However, image post-processing still has certain limitations: for underexposed or out-of-focus photos, for example, post-processing usually cannot produce a photo that meets the user's requirements.
Disclosure of Invention
The embodiments of the present application provide a photographing method, a photographing apparatus, a terminal, and a computer-readable storage medium, which can solve the technical problem that a photo meeting the user's requirements cannot be obtained by post-processing the photo.
A first aspect of an embodiment of the present application provides a photographing method, including:
acquiring a preview frame image, and detecting a target object contained in the preview frame image;
acquiring characteristic information of the target object, and calculating shooting parameters of a shooting frame image according to the characteristic information;
receiving a photographing instruction, wherein the photographing instruction carries the photographing parameters;
acquiring a picture corresponding to the shooting parameter according to the shooting instruction;
the obtaining of the feature information of the target object and the calculating of the shooting parameters of the shot frame image according to the feature information include: and acquiring the chromaticity of the target object, and calculating the saturation of the photographed frame image according to the chromaticity of the target object.
A second aspect of the embodiments of the present application provides a photographing apparatus, including:
a detection unit for acquiring a preview frame image and detecting a target object included in the preview frame image;
the calculating unit is used for acquiring the characteristic information of the target object and calculating the shooting parameters of the shooting frame image according to the characteristic information;
a receiving unit, configured to receive a photographing instruction, where the photographing instruction carries the photographing parameters;
the photographing unit is used for acquiring a photo corresponding to the photographing parameter according to the photographing instruction;
the calculating unit is specifically configured to:
and acquiring the chromaticity of the target object, and calculating the saturation of the photographed frame image according to the chromaticity of the target object.
A third aspect of the embodiments of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the above method.
In the embodiments of the present application, during the shooting process before the photo is taken, the target object contained in the preview frame image is detected in real time and its characteristic information is acquired, so that the shooting parameters of the photographing frame image can be calculated from that information. When a photographing instruction is received, the photo is taken with those shooting parameters. Because the shot is made according to the real-time characteristic information of the target object, the photo with the best shooting effect is obtained; this effectively solves the technical problem that post-processing alone cannot produce a photo meeting the user's requirements, and improves the shooting quality of photos.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered limiting of its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a schematic flowchart of a first implementation of a photographing method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a second implementation of a photographing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a first specific implementation of step 102 of a photographing method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a second specific implementation of step 102 of the photographing method provided in the embodiment of the present application;
fig. 5 is a schematic diagram of a saturation label picture provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a photographing device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the present application and are not intended to limit it. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
In addition, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
In order to explain the technical solution of the present application, the following description will be given by way of specific examples.
Before Android 5.0, manual control of camera shooting parameters was possible only by modifying the system: the camera application programming interface (API), Camera1, is unfriendly, behaving like a black box that hides advanced control functions and takes no notice of the shooting parameters of each frame the camera captures. Starting from Android 5.0, a new API, Camera2, was introduced that allows complete control of the Android device's camera, so the shooting parameters of every captured frame can be controlled, giving much greater flexibility.
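To make this concrete, the following Kotlin sketch shows per-frame manual control through the Camera2 API. It is a minimal sketch, assuming an already opened CameraDevice and a configured CameraCaptureSession; the exposure, sensitivity, and focus values are placeholders that a real implementation would compute elsewhere, for example from the preview-frame analysis described below.

```kotlin
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CameraMetadata
import android.hardware.camera2.CaptureRequest
import android.os.Handler
import android.view.Surface

// Build and submit one still-capture request whose shutter time, sensitivity,
// and focus distance are set manually instead of by the auto-exposure routine.
fun captureWithParams(
    device: CameraDevice,
    session: CameraCaptureSession,
    target: Surface,
    handler: Handler,
    exposureNs: Long,     // shutter time in nanoseconds
    iso: Int,             // sensor sensitivity
    focusDiopters: Float  // focus distance in diopters; 0f means infinity
) {
    val request = device.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE).apply {
        addTarget(target)
        // Disable the auto modes so the manual values below take effect.
        set(CaptureRequest.CONTROL_AE_MODE, CameraMetadata.CONTROL_AE_MODE_OFF)
        set(CaptureRequest.CONTROL_AF_MODE, CameraMetadata.CONTROL_AF_MODE_OFF)
        set(CaptureRequest.SENSOR_EXPOSURE_TIME, exposureNs)
        set(CaptureRequest.SENSOR_SENSITIVITY, iso)
        set(CaptureRequest.LENS_FOCUS_DISTANCE, focusDiopters)
    }.build()
    session.capture(request, null, handler)
}
```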
In the embodiments of the present application, during the shooting process before the photo is taken, the target object contained in the preview frame image is detected in real time and its characteristic information is acquired, and the shooting parameters of the photographing frame image are calculated from that information. When a photographing instruction is received, the camera's shooting parameters are set to the parameters calculated from the characteristic information, so that the shooting parameters of the frame the camera captures at the next moment can be adjusted in real time according to the characteristic information of the target object. The photo with the best shooting effect is thus obtained, effectively solving the technical problem that post-processing alone cannot produce a photo meeting the user's requirements and improving the shooting quality of photos.
Fig. 1 shows a schematic flowchart of a first implementation of a photographing method provided by an embodiment of the present application. The method is applied to a terminal, can be executed by a photographing apparatus configured on the terminal, is suitable for situations where photo quality needs to be improved, and includes steps 101 to 104.
Here the terminal includes terminal devices equipped with a photographing apparatus, such as smartphones, tablet computers, and learning machines. The terminal device may be installed with applications such as a photographing application, a browser, and WeChat.
In step 101, a preview frame image is acquired, and a target object included in the preview frame image is detected.
The preview frame image is a frame image generated from the external light signals collected by the camera while the photographing application is in the preview state. The data output each time the camera collects an external light signal is called frame data; after a user starts the photographing application on the terminal and enters preview mode, the terminal obtains the frame data collected by the camera and displays it, producing the preview frame image.
Generally, frame data is collected at 30 frames per second and is divided into preview frames and photographing frames, used for previewing and for taking the photo, respectively.
In the embodiment of the application, the preview frame image is acquired in real time in the preview state, and the target object contained in the preview frame image is detected, so that the characteristic information of the target object in the current state is acquired in real time.
The target object is the object currently being photographed: when a person is being photographed, the target object is the person; when a building is being photographed, the target object is the building. Note that one or more target objects may be present in the preview frame image, and they may be of one or more types.
Detecting the target object contained in the preview frame image involves performing target detection on the preview frame image, classifying foreground and background at the pixel level, removing the background, and retaining the one or more target objects.
In some embodiments of the present application, the target object in the preview frame image may be detected by a target detection algorithm. Common target detection algorithms include the Local Binary Pattern (LBP) algorithm, directional gradient features combined with a support vector machine model, and convolutional neural network (CNN) models. Compared with other target detection algorithms, a convolutional neural network model detects the target object more accurately and quickly, so a trained convolutional neural network model may be selected to detect the target object in the preview frame image.
Before using the trained convolutional neural network model to detect the target object in the preview frame image, the trained model must first be obtained. It is trained from sample images and the detection result corresponding to each sample image, where the detection result indicates all target objects contained in that sample image.
Optionally, the training step of the convolutional neural network model may include: acquiring sample images and the detection result corresponding to each sample image; detecting the sample images with the convolutional neural network model; and adjusting the parameters of the model according to the detection results until the adjusted model can detect all target objects in the sample images, or until its accuracy in detecting target objects in the sample images exceeds a preset value, at which point the adjusted model is taken as the trained convolutional neural network model. The parameters of the model may include the weights, biases, and regression-function coefficients of each convolutional layer, and may further include the learning rate, the number of iterations, the number of neurons per layer, and so on. Common convolutional neural network models currently include the Region-based Convolutional Neural Network (RCNN) model, the Fast-RCNN model, and the Faster-RCNN model. The Faster-RCNN model evolved from the RCNN and Fast-RCNN models; although it still cannot achieve fully real-time detection of the target object, it offers higher target detection accuracy and speed than both, so in some embodiments of the present application the Faster-RCNN model is the preferred convolutional neural network model.
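The adjust-until-accurate loop described above can be outlined as follows. This is a purely illustrative sketch, not the patent's implementation: the Detector interface, its adjustParameters method, and the IoU-above-0.5 matching rule are hypothetical stand-ins for whatever deep-learning framework actually performs detection and parameter updates.

```kotlin
data class Box(val x: Float, val y: Float, val w: Float, val h: Float)

// Intersection-over-union of two axis-aligned boxes, used to decide whether a
// predicted box matches a ground-truth box.
fun iou(a: Box, b: Box): Float {
    val ix = maxOf(0f, minOf(a.x + a.w, b.x + b.w) - maxOf(a.x, b.x))
    val iy = maxOf(0f, minOf(a.y + a.h, b.y + b.h) - maxOf(a.y, b.y))
    val inter = ix * iy
    return inter / (a.w * a.h + b.w * b.h - inter)
}

// Hypothetical stand-in for the CNN detector being trained.
interface Detector {
    fun detect(image: FloatArray): List<Box>
    fun adjustParameters(predicted: List<Box>, truth: List<Box>) // e.g. one gradient step
}

// Keep adjusting parameters until detection accuracy on the samples reaches
// the preset threshold, then treat the model as trained.
fun train(
    model: Detector,
    samples: List<Pair<FloatArray, List<Box>>>, // (sample image, ground-truth boxes)
    targetAccuracy: Float
) {
    do {
        var correct = 0
        for ((image, truth) in samples) {
            val predicted = model.detect(image)
            model.adjustParameters(predicted, truth)
            // A sample counts as correct when every ground-truth box is
            // matched by some prediction with IoU above 0.5.
            if (truth.all { t -> predicted.any { p -> iou(p, t) > 0.5f } }) correct++
        }
    } while (correct.toFloat() / samples.size < targetAccuracy)
}
```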
It should be noted that the detection methods described here are merely examples and are not meant to limit the scope of the present application; other methods capable of detecting the target object are equally applicable and are not listed one by one.
In step 102, characteristic information of the target object is acquired, and shooting parameters of the photographing frame image are calculated according to the characteristic information.
the obtaining of the feature information of the target object and the calculating of the shooting parameters of the shot frame image according to the feature information include: and acquiring the chromaticity of the target object, and calculating the saturation of the photographed frame image according to the chromaticity of the target object.
The feature information of the target object is used to determine what shooting parameters are required to be used for shooting to achieve a good shooting effect. The photographing frame image is a frame image generated by a camera according to a photographing instruction and used for generating a final photo.
The characteristic information of the target object may include its chromaticity, and the shooting parameters of the photographing frame image may include its saturation. In some optional embodiments, the characteristic information may further include the position of the target object in the preview frame image and its motion state, which are used to calculate the light metering area, focal length, and exposure parameters of the photographing frame image alongside the saturation, so as to achieve the best shooting effect. These are only examples; in some embodiments of the present application the characteristic information may include still other information, such as facial expression or height.
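One way to picture the mapping from characteristic information to shooting parameters is the small sketch below. The FeatureInfo and ShootingParams types and the shutterForSpeed helper are illustrative names of our own, not part of any camera API or of the patent text.

```kotlin
// Characteristic information extracted from the target object in the preview.
data class FeatureInfo(
    val chromaticity: Float,        // chroma of the target object
    val position: Pair<Int, Int>?,  // pixel coordinates in the preview frame
    val motionSpeed: Float?         // feature-point speed, pixels per second
)

// Shooting parameters carried by the photographing instruction.
data class ShootingParams(
    val saturation: Float,
    val meteringCenter: Pair<Int, Int>?,
    val exposureNs: Long?
)

// Placeholder lookup from subject speed to shutter time (see steps 301-303).
fun shutterForSpeed(pixelsPerSecond: Float): Long =
    if (pixelsPerSecond < 100f) 16_000_000L else 2_000_000L

fun computeParams(f: FeatureInfo, referenceChroma: Float): ShootingParams =
    ShootingParams(
        saturation = referenceChroma - f.chromaticity, // see steps 401-402
        meteringCenter = f.position,                   // metering follows position
        exposureNs = f.motionSpeed?.let { shutterForSpeed(it) }
    )
```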
In step 103, a photographing instruction is received, where the photographing instruction carries the photographing parameters.
In this embodiment of the application, the photographing instruction may be triggered in any existing manner, for example by tapping the photo-shooting control in the photographing interface or by pressing a volume key, which is not described again here.
In step 104, a photo corresponding to the shooting parameter is obtained according to the shooting instruction.
In this embodiment of the application, when the user starts the photographing application, the terminal acquires the preview frame image in real time, obtains the target object contained in it, and derives the shooting parameters of the photographing frame image from the target object's characteristic information; after a photographing instruction is received, the photo corresponding to those shooting parameters can then be obtained.
In the embodiments of the present application, during the shooting process before the photo is taken, the target object contained in the preview frame image is detected in real time and its characteristic information is acquired, and the shooting parameters of the photographing frame image are calculated from that information. When a photographing instruction is received, the camera's shooting parameters are set to the parameters calculated from the characteristic information, so that the shooting parameters of the frame the camera captures at the next moment can be adjusted in real time according to the characteristic information of the target object. The photo with the best shooting effect is thus obtained, effectively solving the technical problem that post-processing alone cannot produce a photo meeting the user's requirements and improving the shooting quality of photos.
Optionally, in some embodiments of the application, as shown in fig. 2, when the target object is a person, the method may include steps 201 to 204.
In step 201, a preview frame image is obtained, and a target face included in the preview frame image is detected.
In step 202, feature information of the target face is acquired, and shooting parameters of the photographing frame image are calculated according to it. The feature information of the target face includes the chromaticity of the target face, or the chromaticity of the target face together with position information and/or motion state information of the target face.
In step 203, a photographing instruction is received, where the photographing instruction carries the photographing parameters.
In step 204, a photo of the person corresponding to the shooting parameters is obtained according to the shooting instruction.
For example, the target face contained in the preview frame image is obtained through facial feature point recognition, and then the chromaticity of the target face in the preview frame image is acquired, possibly together with the face's position information and/or motion state information. The saturation of the photographing frame image is calculated from the chromaticity of the target face. The light metering area and focal length of the photographing frame image may additionally be calculated from the position of the target face, to avoid a blurred or dark face caused by wrongly placed focusing and metering positions; and/or the exposure parameters may be calculated from the motion state of the target face, to avoid a blurred face caused by an unsuitable shutter speed when the person is running. In this way the best face-shooting effect is achieved.
In some embodiments of the present application, when the photographic subject is a person, detecting the target object contained in the preview frame image may mean detecting a plurality of target objects contained in it, for example a human face, clothing, and arm movements.
In the above-described embodiment, step 102 (acquiring the characteristic information of the target object and calculating the shooting parameters of the photographing frame image according to the characteristic information) may include: acquiring the position information of the target object in the preview frame image, and calculating the light metering area and focal length of the photographing frame image according to the position information; and/or acquiring the motion state information of the target object, and calculating the exposure parameters of the photographing frame image according to the movement speed of the feature points in the motion state information.
The selection of the light metering area is one of the important bases for accurately choosing shutter and aperture values. A camera's metering system generally measures the brightness of light reflected by the subject, which is called reflective metering. The camera assumes that the metering area reflects 18% of the incident light, meters on that basis, and then determines the related aperture and shutter values; under the same illumination, to obtain the same exposure, a larger aperture requires a faster shutter and a smaller aperture requires a slower shutter. The 18% figure derives from the average reflectance of neutral (gray) tones in natural scenes: a mostly white scene in the viewfinder reflects more than 18% of the incident light (a completely white scene reflects about 90%), while a black scene may reflect only a few percent. A standard gray card is an 8 x 10 inch card; if the gray card is placed under the same light as the subject, so that the overall reflectance of the metering area equals the 18% standard, then shooting at the aperture and shutter values given by the camera yields an accurately exposed picture. If the overall reflectance of the metering area is greater than 18% (for example, a mostly white background) and the picture is taken at the aperture and shutter values the camera meters automatically, the result is underexposed: the white background appears gray, and a sheet of white paper would come out looking like black paper. Therefore, when photographing a scene whose reflectance exceeds 18%, the camera's exposure compensation value (EV) must be increased. Conversely, a scene with reflectance below 18%, such as a black background, tends to come out overexposed, with the black background turning gray; such scenes require the EV to be decreased.
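The compensation logic reduces to simple arithmetic: the meter's error, in stops, is the base-2 logarithm of the ratio between the scene's true reflectance and the assumed 18%. The sketch below is our own illustration of that relationship, not a formula taken from the patent.

```kotlin
import kotlin.math.log2

// Exposure compensation (in EV stops) for a reflective meter calibrated to
// 18% gray: positive for bright scenes, negative for dark ones.
fun exposureCompensationEv(sceneReflectance: Double): Double {
    require(sceneReflectance > 0.0) { "reflectance must be positive" }
    return log2(sceneReflectance / 0.18)
}

fun main() {
    println(exposureCompensationEv(0.90)) // mostly white scene: about +2.3 EV
    println(exposureCompensationEv(0.04)) // black scene: about -2.2 EV
}
```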
Current metering methods mainly include center-weighted average metering, center partial metering, spot metering, multi-spot metering, and evaluative metering. The embodiment of the present application takes center-weighted average metering as the example for selecting the light metering area.
Center-weighted average metering reflects the fact that photographers generally place the subject, that is, the target that needs accurate exposure, in the middle of the viewfinder, making that region the most important content of the shot. The sensing element responsible for metering therefore divides the overall metering value of the camera into zones: the metering data of the central portion receives most of the weight, while the metering data outside the center plays an auxiliary role with a small weight. The camera's processor then obtains the metering value for the shot by weighting and averaging the two readings. For example, the metering data of the central portion may account for 75% of the total metering proportion, while the data of the non-central portion extending gradually to the edges accounts for the remaining 25%.
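A minimal sketch of this weighting, assuming a per-pixel luminance map is already available and simplifying the "center" to the middle half of the frame (the region split is our assumption):

```kotlin
// Center-weighted average metering with the example's weights: the central
// region contributes 75% of the final metering value, the periphery 25%.
fun centerWeightedLuma(luma: Array<FloatArray>): Float {
    val h = luma.size
    val w = luma[0].size
    var centerSum = 0f; var centerCount = 0
    var edgeSum = 0f; var edgeCount = 0
    for (y in 0 until h) {
        for (x in 0 until w) {
            val inCenter = y in h / 4 until 3 * h / 4 && x in w / 4 until 3 * w / 4
            if (inCenter) { centerSum += luma[y][x]; centerCount++ }
            else { edgeSum += luma[y][x]; edgeCount++ }
        }
    }
    return 0.75f * (centerSum / centerCount) + 0.25f * (edgeSum / edgeCount)
}
```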
It can be seen that once the position of the target object is determined, the light metering area must be selected accordingly, for example by setting the position of the target object as the central portion of the metering area.
In addition, the focal length is generally selected by having the camera emit a group of infrared or other rays, determining the distance to the subject from the reflection of those rays, and then adjusting the lens group according to the measured distance to achieve automatic focusing. Therefore, the focal length for the photographing frame image also needs to be obtained after the position of the target object is determined.
Optionally, in some embodiments of the application, as shown in fig. 3, acquiring the motion state information of the target object and calculating the exposure parameters of the photographing frame image according to the movement speed of the feature points in the motion state information includes steps 301 to 303.
In step 301, obtaining preview frame images of a first preset frame number, and calculating the position change of a target object feature point in each adjacent preview frame image;
in step 302, calculating an average movement speed of the target object according to the position change and the acquisition period of the preview frame image;
in step 303, the shutter speed and the aperture parameter corresponding to the average moving speed of the target object are acquired.
For example, suppose the first preset frame number is 30, the target object is a human face, the face feature points include eye, nose, mouth, and eyebrow feature points, and the preview frame images are acquired at 30 frames per second. The terminal obtains 30 consecutive preview frame images captured by the camera, calculates the position change of the between-the-eyebrows feature point in each pair of adjacent preview frames, and accumulates those changes to obtain the average movement speed of the face; the shutter speed and aperture parameters of the photographing frame image are then obtained by looking up a correspondence list that maps object movement speeds to shutter speed and aperture parameters.
The first preset frame number may be a value the user sets for different shooting scenarios or a factory default, for example 20, 30, 40, or 50 frames; these values are merely examples and are not meant to limit the scope of the present application.
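The following sketch traces steps 301 to 303 with the example's numbers: displacement is accumulated across adjacent frames and converted to pixels per second using the acquisition rate, then exposure settings are looked up. The lookup-table thresholds and values are placeholders of our own, not values given in the patent.

```kotlin
import kotlin.math.hypot

data class Exposure(val shutterSeconds: Double, val aperture: Double)

// Accumulate the displacement of one feature point across consecutive preview
// frames and convert it to pixels per second using the acquisition rate.
fun averageSpeed(points: List<Pair<Float, Float>>, framesPerSecond: Double): Double {
    var distance = 0.0
    for (i in 1 until points.size) {
        val dx = (points[i].first - points[i - 1].first).toDouble()
        val dy = (points[i].second - points[i - 1].second).toDouble()
        distance += hypot(dx, dy)
    }
    val elapsedSeconds = (points.size - 1) / framesPerSecond
    return distance / elapsedSeconds
}

// Placeholder correspondence list from movement speed to shutter and aperture.
fun exposureFor(pixelsPerSecond: Double): Exposure = when {
    pixelsPerSecond < 50 -> Exposure(1.0 / 60, 4.0)    // nearly static subject
    pixelsPerSecond < 200 -> Exposure(1.0 / 250, 2.8)  // walking pace
    else -> Exposure(1.0 / 1000, 1.8)                  // running subject
}
```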
Optionally, in some embodiments of the present application, as shown in fig. 4, acquiring the chromaticity of the target object and calculating the saturation of the photographing frame image according to the chromaticity includes steps 401 to 402.
In step 401, a first chromaticity of the target object and a second chromaticity of a comparison picture selected by a user are obtained;
in step 402, the saturation of the photographed frame image is calculated according to the difference between the first chromaticity and the second chromaticity.
Color is commonly described by lightness and chroma; chroma is the property of a color apart from its lightness, and it reflects the hue and saturation of the color. By acquiring the first chromaticity of the target object and the second chromaticity of the comparison picture selected by the user, and calculating the difference between the two, the size of the saturation adjustment for the photographing frame image is obtained.
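As a sketch, the adjustment is just the signed chroma difference between the reference and the target; the clamping range below is our own assumption, not something the patent specifies.

```kotlin
// Saturation adjustment for the photographing frame from the chroma difference
// between the target object and the user-selected comparison picture.
// Positive values push saturation toward the reference; negative values reduce it.
fun saturationAdjustment(targetChroma: Float, referenceChroma: Float): Float =
    (referenceChroma - targetChroma).coerceIn(-1f, 1f)
```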
As shown in fig. 5, the comparison picture selected by the user may be one of the saturation label pictures 50 pre-stored on the terminal, each saturation label picture representing one saturation value of a hue.
for example, when the image belongs to a person shooting currently, the target object may be a person's clothing, and the difference between the first chromaticity and the second chromaticity is calculated by obtaining the first chromaticity of the clothing and the second chromaticity of the comparison picture selected by the user, so as to obtain the saturation of the photographed frame image.
Optionally, the comparison picture may be any photograph selected by the user, and the second chromaticity may be the chromaticity of an area the user selects within that photograph.
In the embodiments of the photographing method described in fig. 1 to fig. 5, acquiring the photo corresponding to the shooting parameters according to the photographing instruction in step 104 may include: acquiring a second preset number of photographing frame images corresponding to the shooting parameters, and synthesizing them into the photo corresponding to the shooting parameters.
For example, the pixel values at corresponding positions of the second preset number of photographing frame images may be averaged to obtain the photo corresponding to the shooting parameters; or the median of the pixel values at corresponding positions may be taken to synthesize the photo, so as to optimize the photographing effect.
The second preset frame number may likewise be set by the user or be a factory default, for example 10, 15, 20, or 30 frames; these values are merely examples and are not meant to limit the scope of the present application.
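Both synthesis strategies can be sketched in a few lines. Frames are treated as grayscale integer arrays purely for brevity (our simplification); per-channel RGB processing follows the same pattern.

```kotlin
// Fuse N photographing frames by averaging co-located pixel values.
fun fuseAverage(frames: List<Array<IntArray>>): Array<IntArray> {
    val h = frames[0].size
    val w = frames[0][0].size
    return Array(h) { y ->
        IntArray(w) { x -> frames.sumOf { it[y][x] } / frames.size }
    }
}

// Fuse N photographing frames by taking the median of co-located pixel values.
fun fuseMedian(frames: List<Array<IntArray>>): Array<IntArray> {
    val h = frames[0].size
    val w = frames[0][0].size
    return Array(h) { y ->
        IntArray(w) { x ->
            val values = frames.map { it[y][x] }.sorted()
            values[values.size / 2]  // middle value of the sorted pixels
        }
    }
}
```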
Fig. 6 shows a schematic structural diagram of a photographing apparatus 600 provided in an embodiment of the present application, which includes a detection unit 601, a calculation unit 602, a receiving unit 603, and a photographing unit 604.
A detection unit 601 configured to acquire a preview frame image and detect a target object included in the preview frame image;
a calculating unit 602, configured to obtain feature information of the target object, and calculate a shooting parameter of the shooting frame image according to the feature information;
a receiving unit 603, configured to receive a photographing instruction, where the photographing instruction carries the photographing parameters;
a photographing unit 604, configured to obtain a photo corresponding to the photographing parameter according to the photographing instruction;
the calculating unit 602 is specifically configured to:
and acquiring the chromaticity of the target object, and calculating the saturation of the photographed frame image according to the chromaticity of the target object.
In some embodiments of the present application, the detecting unit is specifically configured to acquire a preview frame image and detect a target face included in the preview frame image; correspondingly, the obtaining of the feature information of the target object and the calculating of the shooting parameters of the shot frame image according to the feature information includes: and acquiring feature information of the target face, and calculating shooting parameters of a shooting frame image according to the feature information, wherein the feature information of the target face comprises the chromaticity of the target face, or the feature information of the target face comprises the chromaticity of the target face, and position information and/or motion state information of the target face.
In some embodiments of the present application, the calculating unit is specifically configured to acquire position information of the target object in the preview frame image, and calculate a light metering area and a focal length of the photo frame image according to the position information; and/or acquiring the motion state information of the target object, and calculating the exposure parameter of the photographing frame image according to the motion speed of the characteristic point in the motion state information.
In some embodiments of the present application, the calculating unit is further specifically configured to acquire preview frame images of a first preset number of frames, and calculate a position change of a target object feature point in each adjacent preview frame image; calculating the average movement speed of the target object according to the position change and the acquisition period of the preview frame image; and acquiring the shutter speed and the aperture parameter corresponding to the average motion speed of the target object.
In some embodiments of the present application, the calculating unit is further specifically configured to obtain a first chromaticity of the target object and a second chromaticity of a user-selected comparison picture; and calculating the saturation of the photographed frame image according to the difference value of the first chroma and the second chroma.
Optionally, the detection unit is specifically configured to detect a target object included in the preview frame image by using a trained convolutional neural network model.
Optionally, the photographing unit is further specifically configured to acquire the second preset number of photographing frame images corresponding to the photographing parameters, and synthesize them into the photo corresponding to the photographing parameters.
It should be noted that, for convenience and brevity of description, the specific working process of the photographing apparatus 600 described above may refer to the corresponding process of the method described in fig. 1 to fig. 5, and is not described herein again.
As shown in fig. 7, the present application provides a terminal for implementing the above photographing method. The terminal may be a mobile terminal such as a smartphone, a tablet computer, a personal computer (PC), or a learning machine, and includes: a processor 71, a memory 72, one or more input devices 73 (only one shown in fig. 7), one or more output devices 74 (only one shown in fig. 7), and a camera 75. The processor 71, memory 72, input device 73, output device 74, and camera 75 are connected by a bus 76. The camera is used to generate the preview frame images and photographing frame images from the collected external light signals.
It should be understood that, in this embodiment of the application, the processor 71 may be a central processing unit (CPU); it may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device 73 may include a virtual keyboard, a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, etc., and the output device 74 may include a display, a speaker, etc.
Memory 72 may include both read-only memory and random-access memory and provides instructions and data to processor 71. Some or all of memory 72 may also include non-volatile random access memory. For example, the memory 72 may also store device type information.
The memory 72 stores a computer program that is executable on the processor 71, for example, a program of a photographing method. The processor 71 implements the steps of the photographing method embodiment, such as the steps 101 to 104 shown in fig. 1, when executing the computer program. Alternatively, the processor 71, when executing the computer program, implements the functions of the modules/units in the device embodiments, such as the functions of the units 601 to 604 shown in fig. 6.
The computer program may be divided into one or more modules/units, which are stored in the memory 72 and executed by the processor 71 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used for describing the execution process of the computer program in the terminal for taking pictures. For example, the computer program may be divided into a detection unit, a calculation unit, a reception unit, and a photographing unit, and each unit functions specifically as follows: a detection unit for acquiring a preview frame image and detecting a target object included in the preview frame image; the calculating unit is used for acquiring the characteristic information of the target object and calculating the shooting parameters of the shooting frame image according to the characteristic information; a receiving unit, configured to receive a photographing instruction, where the photographing instruction carries the photographing parameters; the photographing unit is used for acquiring a photo corresponding to the photographing parameter according to the photographing instruction; the calculating unit is specifically configured to: and acquiring the chromaticity of the target object, and calculating the saturation of the photographed frame image according to the chromaticity of the target object.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal are merely illustrative, and for example, the division of the above-described modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated modules/units described above are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods in the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments. The computer program includes computer program code, which may be in source-code form, object-code form, an executable file, or some intermediate form. The computer-readable medium may include any entity or device capable of carrying the computer program code: a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media exclude electrical carrier signals and telecommunications signals.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method of taking a picture, comprising:
acquiring a preview frame image, and detecting a target object contained in the preview frame image;
acquiring characteristic information of the target object, and calculating shooting parameters of a shooting frame image according to the characteristic information;
receiving a photographing instruction, wherein the photographing instruction carries the photographing parameters;
acquiring a picture corresponding to the shooting parameter according to the shooting instruction;
the acquiring of the characteristic information of the target object and the calculating of the shooting parameters of the shooting frame image according to the characteristic information comprise: and acquiring the chromaticity of the target object, and calculating the saturation of the photographed frame image according to the chromaticity of the target object.
2. The photographing method of claim 1, wherein the acquiring a preview frame image and detecting a target object included in the preview frame image comprises:
acquiring a preview frame image, and detecting a target face contained in the preview frame image;
correspondingly, the obtaining of the feature information of the target object and the calculating of the shooting parameters of the shooting frame image according to the feature information include:
and acquiring the feature information of the target face, and calculating shooting parameters of a shooting frame image according to the feature information, wherein the feature information of the target face comprises the chromaticity of the target face, or the feature information of the target face comprises the chromaticity of the target face, and the position information and/or the motion state information of the target face.
3. The photographing method according to claim 1 or 2, wherein the acquiring feature value information of the target object and calculating the photographing parameters of the photographing frame image according to the feature information comprises:
acquiring the position information of the target object in the preview frame image, and calculating a light metering area and a focal length of the photographing frame image according to the position information; and/or,
and acquiring the motion state information of the target object, and calculating the exposure parameters of the photographing frame image according to the motion speed of the characteristic points in the motion state information.
4. The photographing method of claim 3, wherein the acquiring of the motion state information of the target object and the calculating of the exposure parameter of the photographed frame image according to the motion speed of the feature point in the motion state information comprises:
acquiring preview frame images with a first preset frame number, and calculating the position change of target object feature points in each adjacent preview frame image;
calculating the average movement speed of the target object according to the position change and the acquisition period of the preview frame image;
and acquiring a shutter speed and an aperture parameter corresponding to the average movement speed of the target object.
5. The photographing method of claim 1, wherein the obtaining of the chromaticity of the target object and the calculating of the saturation of the photographing frame image according to the chromaticity of the target object comprises:
acquiring a first chromaticity of the target object and a second chromaticity of a comparison picture selected by a user;
and calculating the saturation of the photographed frame image according to the difference value of the first chromaticity and the second chromaticity.
6. The photographing method of claim 1, wherein the acquiring a preview frame image and detecting a target object included in the preview frame image comprises:
and detecting a target object contained in the preview frame image by using the trained convolutional neural network model.
7. The photographing method of claim 1, wherein the obtaining of the picture corresponding to the photographing parameter according to the photographing instruction comprises:
and acquiring a photographing frame image with a second preset frame number corresponding to the photographing parameter, and synthesizing the photographing frame image with the second preset frame number into a photo corresponding to the photographing parameter.
8. A photographing apparatus, comprising:
the device comprises a detection unit, a processing unit and a display unit, wherein the detection unit is used for acquiring a preview frame image and detecting a target object contained in the preview frame image;
the calculating unit is used for acquiring the characteristic information of the target object and calculating the shooting parameters of the shooting frame image according to the characteristic information;
the receiving unit is used for receiving a photographing instruction, and the photographing instruction carries the photographing parameters;
the photographing unit is used for acquiring a photo corresponding to the photographing parameters according to the photographing instruction;
the computing unit is specifically configured to:
and acquiring the chromaticity of the target object, and calculating the saturation of the photographed frame image according to the chromaticity of the target object.
9. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202010785958.1A (filed 2018-06-15, priority date 2018-06-15, published as CN111866394A, pending): Photographing method, photographing device, terminal and computer-readable storage medium

Priority Applications (1)

CN202010785958.1A (published as CN111866394A; priority date 2018-06-15, filing date 2018-06-15): Photographing method, photographing device, terminal and computer-readable storage medium

Applications Claiming Priority (2)

CN201810626707.1A (granted as CN108495050B; priority date 2018-06-15, filing date 2018-06-15): Photographing method, photographing device, terminal and computer-readable storage medium
CN202010785958.1A (published as CN111866394A; priority date 2018-06-15, filing date 2018-06-15): Photographing method, photographing device, terminal and computer-readable storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201810626707.1A Division CN108495050B (en) 2018-06-15 2018-06-15 Photographing method, photographing device, terminal and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN111866394A 2020-10-30

Family

ID=63343007

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201810626707.1A Active CN108495050B (en) 2018-06-15 2018-06-15 Photographing method, photographing device, terminal and computer-readable storage medium
CN202010785958.1A Pending CN111866394A (en) 2018-06-15 2018-06-15 Photographing method, photographing device, terminal and computer-readable storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201810626707.1A Active CN108495050B (en) 2018-06-15 2018-06-15 Photographing method, photographing device, terminal and computer-readable storage medium

Country Status (2)

Country Link
CN (2) CN108495050B (en)
WO (1) WO2019237992A1 (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108495050B (en) * 2018-06-15 2020-09-04 Oppo广东移动通信有限公司 Photographing method, photographing device, terminal and computer-readable storage medium
CN109729272B (en) * 2019-01-04 2022-03-08 平安科技(深圳)有限公司 Shooting control method, terminal device and computer readable storage medium
CN109919891A (en) * 2019-03-14 2019-06-21 Oppo广东移动通信有限公司 Imaging method, device, terminal and storage medium
CN110995994B (en) * 2019-12-09 2021-09-14 上海瑾盛通信科技有限公司 Image shooting method and related device
CN113596315A (en) * 2020-04-30 2021-11-02 鸿富锦精密电子(郑州)有限公司 Photographing method for dynamic scene compensation and camera device
CN111598958A (en) * 2020-05-19 2020-08-28 北京迁移科技有限公司 High-quality 3D point cloud image rapid acquisition system and method
CN111986263B (en) * 2020-06-28 2023-09-12 百度在线网络技术(北京)有限公司 Image processing method, device, electronic equipment and storage medium
CN111866384B (en) * 2020-07-16 2022-02-01 深圳传音控股股份有限公司 Shooting control method, mobile terminal and computer storage medium
CN111800740B (en) * 2020-07-31 2023-02-07 平安国际融资租赁有限公司 Data remote acquisition method and device, computer equipment and storage medium
CN114092925A (en) * 2020-08-05 2022-02-25 武汉Tcl集团工业研究院有限公司 Video subtitle detection method and device, terminal equipment and storage medium
CN112115418B (en) * 2020-08-13 2024-03-26 深圳市智物联网络有限公司 Method, device and equipment for acquiring bias estimation information
CN112115411B (en) * 2020-08-21 2024-04-16 中国电子科技集团公司第十三研究所 Position drift compensation method, terminal device and readable storage medium
CN112132227B (en) * 2020-09-30 2024-04-05 石家庄铁道大学 Bridge train load action time course extraction method and device and terminal equipment
CN112333392A (en) * 2020-11-03 2021-02-05 珠海格力电器股份有限公司 Picture processing method and device
CN114979455A (en) * 2021-02-25 2022-08-30 北京小米移动软件有限公司 Photographing method, photographing device and storage medium
CN113177440B (en) * 2021-04-09 2024-10-29 上海元罗卜智能科技有限公司 Image synchronization method, device, electronic equipment and computer storage medium
CN113221754A (en) * 2021-05-14 2021-08-06 深圳前海百递网络有限公司 Express waybill image detection method and device, computer equipment and storage medium
CN114071024A (en) * 2021-11-26 2022-02-18 北京百度网讯科技有限公司 Image shooting method, neural network training method, device, equipment and medium
CN114264835B (en) * 2021-12-22 2023-11-17 上海集成电路研发中心有限公司 Method, device and chip for measuring rotation speed of fan
CN115002352A (en) * 2022-06-23 2022-09-02 深圳市理德铭科技股份有限公司 Processing method and system for photographing selfie stick of automatic telescopic tripod
CN115134536B (en) * 2022-06-28 2024-05-03 维沃移动通信有限公司 Shooting method and device thereof
CN117412177A (en) * 2022-07-04 2024-01-16 北京小米移动软件有限公司 Shooting method, shooting device, medium and chip
CN116320716B (en) * 2023-05-25 2023-10-20 荣耀终端有限公司 Picture acquisition method, model training method and related devices

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103197491A (en) * 2013-03-28 2013-07-10 华为技术有限公司 Method capable of achieving rapid automatic focusing and image acquisition device
CN205510224U (en) * 2016-04-12 2016-08-24 上海豪成通讯科技有限公司 Digital image processing ware
CN105898143A (en) * 2016-04-27 2016-08-24 维沃移动通信有限公司 Moving object snapshotting method and mobile terminal
US20170187938A1 (en) * 2015-12-24 2017-06-29 Canon Kabushiki Kaisha Image pickup apparatus, image processing apparatus, image processing method, and non-transitory computer-readable storage medium for improving quality of captured image
CN107180415A (en) * 2017-03-30 2017-09-19 北京奇艺世纪科技有限公司 Skin landscaping treatment method and device in a kind of image
CN107566728A (en) * 2017-09-25 2018-01-09 维沃移动通信有限公司 A kind of image pickup method, mobile terminal and computer-readable recording medium
JP2018025597A (en) * 2016-08-08 2018-02-15 オリンパス株式会社 Imaging device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104469131A (en) * 2014-09-05 2015-03-25 宇龙计算机通信科技(深圳)有限公司 Method, device and terminal for displaying shooting control
CN104243832A (en) * 2014-09-30 2014-12-24 北京金山安全软件有限公司 Method and device for shooting through mobile terminal and mobile terminal
CN105100628B (en) * 2015-08-28 2018-11-06 上海与德通讯技术有限公司 A kind of image capturing method and electronic equipment
US20170163953A1 (en) * 2015-12-08 2017-06-08 Le Holdings (Beijing) Co., Ltd. Method and electronic device for processing image containing human face
JP6604864B2 (en) * 2016-02-05 2019-11-13 キヤノン株式会社 Electronic device and control method thereof
CN108495050B (en) * 2018-06-15 2020-09-04 Oppo广东移动通信有限公司 Photographing method, photographing device, terminal and computer-readable storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112800969A (en) * 2021-01-29 2021-05-14 新疆爱华盈通信息技术有限公司 Image quality adjusting method and system, AI processing method and access control system
CN112800969B (en) * 2021-01-29 2022-04-19 深圳市爱深盈通信息技术有限公司 Image quality adjusting method and system, AI processing method and access control system
CN113160357A (en) * 2021-04-07 2021-07-23 浙江工商大学 Information auditing method, system and computer readable storage medium

Also Published As

Publication number Publication date
CN108495050A (en) 2018-09-04
WO2019237992A1 (en) 2019-12-19
CN108495050B (en) 2020-09-04

Similar Documents

Publication Publication Date Title
CN108495050B (en) Photographing method, photographing device, terminal and computer-readable storage medium
CN108933899B (en) Panorama shooting method, device, terminal and computer readable storage medium
WO2020038109A1 (en) Photographing method and device, terminal, and computer-readable storage medium
JP7371081B2 (en) Night view photography methods, devices, electronic devices and storage media
CN111402135B (en) Image processing method, device, electronic equipment and computer readable storage medium
US20200267314A1 (en) Real time assessment of picture quality
CN108900782B (en) Exposure control method, exposure control device and electronic equipment
CN110072052B (en) Image processing method and device based on multi-frame image and electronic equipment
WO2020034737A1 (en) Imaging control method, apparatus, electronic device, and computer-readable storage medium
CN109068058B (en) Shooting control method and device in super night scene mode and electronic equipment
TWI416945B (en) Image processing apparatus, image processing method and computer readable-medium
CN101465972B (en) Apparatus and method for blurring image background in digital image processing device
CN110248107A (en) Image processing method and device
CN109151333B (en) Exposure control method, exposure control device and electronic equipment
CN110166707A (en) Image processing method, device, electronic equipment and storage medium
WO2023098743A1 (en) Automatic exposure method, apparatus and device, and storage medium
CN107424117B (en) Image beautifying method and device, computer readable storage medium and computer equipment
CN108093174A (en) Patterning process, device and the photographing device of photographing device
CN113905182B (en) Shooting method and equipment
CN110246101A (en) Image processing method and device
CN109005369A (en) Exposal control method, device, electronic equipment and computer readable storage medium
CN106791451B (en) Photographing method of intelligent terminal
CN109756680B (en) Image synthesis method and device, electronic equipment and readable storage medium
WO2020034739A1 (en) Control method and apparatus, electronic device, and computer readable storage medium
WO2021218536A1 (en) High-dynamic range image synthesis method and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2020-10-30)