
CN113760415A - Dial plate generation method and device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN113760415A
Authority
CN
China
Prior art keywords
picture
matched
target
features
reference picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010499509.0A
Other languages
Chinese (zh)
Inventor
陈德银
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010499509.0A
Priority to PCT/CN2021/086409 (published as WO2021244138A1)
Publication of CN113760415A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to a dial plate generation method, a dial plate generation device, electronic equipment and a computer readable storage medium. The method comprises the steps of obtaining a picture to be matched; extracting the features to be matched of the pictures to be matched; acquiring a reference picture and reference characteristics of the reference picture; respectively matching the features to be matched with the reference features of each reference picture, and determining the similarity between the pictures to be matched and each reference picture; determining a target picture from each reference picture based on the similarity between the picture to be matched and each reference picture; and acquiring a time element, and generating a dial plate based on the time element and the target picture. The dial generation method, the device, the electronic equipment and the computer readable storage medium can generate more accurate dials.

Description

Dial plate generation method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a dial generation method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of mobile technology, many traditional electronic products have also begun to add mobile functions. For example, watches that in the past could only be used to tell the time can now connect to the internet through a smartphone or a home network to display content such as incoming-call information, social media chat messages, news, and weather information.
On the display interface of electronic devices such as smart watches or smart bands, users can select a pattern they like as the dial. However, conventional dial generation methods suffer from the problem that the generated dial is inaccurate.
Disclosure of Invention
The embodiment of the application provides a dial plate generation method and device, electronic equipment and a computer readable storage medium, and the accuracy of the generated dial plate can be improved.
A dial generation method, comprising:
acquiring a picture to be matched;
extracting the features to be matched of the pictures to be matched;
acquiring a reference picture and reference characteristics of the reference picture;
matching the features to be matched with the reference features of the reference pictures respectively, and determining the similarity between the pictures to be matched and each reference picture;
determining a target picture from each reference picture based on the similarity between the picture to be matched and each reference picture;
and acquiring a time element, and generating a dial plate based on the time element and the target picture.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program, the computer program, when executed by the processor, causing the processor to perform the steps of the dial plate generation method as described above.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as described above.
According to the dial generation method, the dial generation device, the electronic equipment and the computer readable storage medium, the to-be-matched features of the to-be-matched pictures are matched with the reference features of the reference pictures, so that more accurate target pictures can be determined from the reference pictures based on the similarity between the to-be-matched pictures and the reference pictures, time elements are obtained, and more accurate dials can be generated based on the time elements and the target pictures.
A dial generation method, comprising:
acquiring a picture to be matched;
extracting the features to be matched of the pictures to be matched;
acquiring a reference picture and reference characteristics of the reference picture;
matching the features to be matched with the reference features of the reference pictures respectively, and determining the similarity between the pictures to be matched and each reference picture;
determining a target picture from each reference picture based on the similarity between the picture to be matched and each reference picture;
sending the target picture to a wearable device; the target picture is used for instructing the wearable device to acquire a time element and generate a dial plate based on the time element and the target picture.
A dial generation apparatus comprising:
the image to be matched acquisition module is used for acquiring an image to be matched;
the characteristic extraction module is used for extracting the characteristics to be matched of the pictures to be matched;
the device comprises a reference picture and reference feature acquisition module, a reference picture and reference feature acquisition module and a reference feature acquisition module, wherein the reference picture and reference feature acquisition module is used for acquiring a reference picture and reference features of the reference picture;
the matching module is used for respectively matching the features to be matched with the reference features of the reference pictures and determining the similarity between the pictures to be matched and each reference picture;
the target picture determining module is used for determining a target picture from each reference picture based on the similarity between the picture to be matched and each reference picture;
the dial plate generation module is used for sending the target picture to the wearable device; the target picture is used for instructing the wearable device to acquire a time element and generate a dial plate based on the time element and the target picture.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program, the computer program, when executed by the processor, causing the processor to perform the steps of the dial plate generation method as described above.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as described above.
According to the dial plate generation method, the dial plate generation device, the electronic equipment and the computer readable storage medium, the to-be-matched features of the to-be-matched pictures are matched with the reference features of the reference pictures, so that more accurate target pictures can be determined from the reference pictures based on the similarity between the to-be-matched pictures and the reference pictures, and the target pictures are sent to the wearable equipment to be used for the wearable equipment to generate more accurate dial plates.
A dial plate generation method is applied to wearable equipment and comprises the following steps:
acquiring a target picture sent by electronic equipment; the target picture is determined from each reference picture by the electronic equipment based on the similarity between the obtained picture to be matched and each obtained reference picture; the similarity between the picture to be matched and each reference picture is obtained by respectively matching the features to be matched of the picture to be matched with the reference features of each reference picture by the electronic equipment; the to-be-matched features of the to-be-matched picture are extracted from the to-be-matched picture by the electronic equipment;
and acquiring a time element, and generating a dial plate based on the time element and the target picture.
A dial plate generation device is applied to wearable equipment and comprises:
the target picture acquisition module is used for acquiring a target picture sent by the electronic equipment; the target picture is determined from each reference picture by the electronic equipment based on the similarity between the obtained picture to be matched and each obtained reference picture; the similarity between the picture to be matched and each reference picture is obtained by respectively matching the features to be matched of the picture to be matched with the reference features of each reference picture by the electronic equipment; the to-be-matched features of the to-be-matched picture are extracted from the to-be-matched picture by the electronic equipment;
and the dial plate generation module is used for acquiring time elements and generating a dial plate based on the time elements and the target picture.
A wearable device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the dial plate generation method as described above.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as described above.
According to the dial plate generation method, the dial plate generation device, the wearable device and the computer readable storage medium, the target picture sent by the electronic device is obtained, the target picture is determined from all reference pictures by the electronic device based on the similarity between the picture to be matched and the reference pictures, and therefore the determined target picture is more accurate; and acquiring the time element, and generating a more accurate dial plate based on the time element and the target picture.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a diagram of an application environment of a dial plate generation method in one embodiment;
FIG. 2 is a flow diagram of a method for dial generation in one embodiment;
FIG. 3 is a flow diagram that illustrates the steps of determining a target picture in one embodiment;
FIG. 4 is a flow diagram that illustrates steps in one embodiment for determining regions to match;
FIG. 5 is a flowchart of a dial plate generation method in another embodiment;
FIG. 6 is a block diagram showing the structure of a dial plate producing apparatus according to an embodiment;
fig. 7 is a block diagram showing the structure of a dial plate producing apparatus in another embodiment;
fig. 8 is a block diagram showing the structure of a dial plate producing apparatus in another embodiment;
fig. 9 is a schematic diagram of an internal structure of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Fig. 1 is a schematic diagram of an application environment of the dial plate generation method in one embodiment. As shown in fig. 1, the application environment includes a wearable device 102 and an electronic device 104, and the wearable device 102 and the electronic device 104 communicate over a network. The electronic device 104 acquires a picture to be matched; extracting the features to be matched of the pictures to be matched; acquiring a reference picture and reference characteristics of the reference picture; respectively matching the features to be matched with the reference features of each reference picture, and determining the similarity between the pictures to be matched and each reference picture; determining a target picture from each reference picture based on the similarity between the picture to be matched and each reference picture; the target picture is sent to wearable device 102 via a network. After the wearable device 102 receives the target picture, the time element is obtained, and a dial is generated based on the time element and the target picture.
Fig. 2 is a flow chart of a dial generation method in one embodiment. As shown in fig. 2, the dial plate generation method includes steps 202 to 212.
Step 202, obtaining a picture to be matched.
The picture to be matched refers to a picture used for matching in order to generate a dial plate. The picture to be matched may be an RGB (Red, Green, Blue) picture, a grayscale picture, or the like. An RGB picture can be captured by a color camera; a grayscale picture can be captured by a black-and-white camera. The picture to be matched may be stored locally on the electronic device, stored on another device, downloaded from a network, or shot by the electronic device in real time, which is not limited here.
Specifically, an ISP (Image Signal Processing) processor or a central Processing unit of the electronic device may obtain a picture to be matched from a local device or other devices, or obtain the picture to be matched by shooting through a camera.
And step 204, extracting the features to be matched of the pictures to be matched.
The feature to be matched refers to the feature of the picture to be matched. The feature to be matched may include at least one of a local feature and a global feature of the picture to be matched. Local features such as texture features, contour features, and the like of the picture to be matched; global features such as color features, contrast features, etc. of the picture to be matched.
Optionally, the features to be matched of the picture to be matched can be represented by vectors.
The electronic equipment inputs the picture to be matched into the feature extraction model, and the feature to be matched of the picture to be matched is extracted through the trained feature extraction model. Wherein, the feature extraction model is trained by adopting deep learning and metric learning. The deep learning is performed by using a Convolutional Neural Network (CNN). Metric Learning (Metric Learning) is a method of spatial mapping, which can learn a feature (Embedding) space in which all data is converted into a feature vector, and the distance between feature vectors of similar samples is small and the distance between feature vectors of dissimilar samples is large, thereby distinguishing data.
The convolutional neural network in the feature extraction model is formed by combining a plurality of convolutional layers, a shallow convolutional layer can extract features of local details such as textures and outlines in a picture to be matched, a high convolutional layer can extract globally abstract features such as colors and contrast, and finally the picture to be matched is embedded (embedded) into a high-dimensional vector (generally 128-dimensional, 256-dimensional, 512-dimensional and the like) by the whole convolutional neural network and the high-dimensional vector is output. The high-dimensional vector is the feature to be matched of the picture to be matched.
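As an illustration only, the interface of the feature extraction step can be sketched in Python. The `extract_features` stub below is hypothetical and stands in for the trained convolutional feature extraction model; it only shows the expected contract described above: a picture goes in, a unit-length high-dimensional vector (here 128-dimensional) comes out.

```python
import math
import random

EMBED_DIM = 128  # the description mentions 128-, 256-, or 512-dimensional embeddings

def extract_features(picture, dim=EMBED_DIM):
    """Toy stand-in for the trained CNN feature extractor: maps a picture
    (a 2D list of grayscale pixel values) to an L2-normalised embedding
    vector. The real model would be a trained convolutional network; this
    stub only illustrates the interface and the unit-length property."""
    # Deterministic pseudo-embedding so the same picture always maps
    # to the same vector (a real CNN is also deterministic at inference).
    random.seed(sum(sum(row) for row in picture))
    v = [random.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

picture = [[10, 20], [30, 40]]
feat = extract_features(picture)
print(len(feat))                           # 128
print(round(sum(x * x for x in feat), 6))  # 1.0 (unit length)
```

The unit-length output is what makes the cosine-distance matching in the later steps meaningful.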
Furthermore, the electronic device can also perform denoising, wrinkle removal and other processing on the picture to be matched, and then perform feature extraction on the processed picture to be matched, so that more accurate features to be matched can be extracted.
In step 206, a reference picture and reference characteristics of the reference picture are obtained.
The reference picture refers to a picture matched against the picture to be matched. The reference feature refers to a feature of a reference picture. Likewise, the reference feature may also include at least one of a local feature and a global feature of the reference picture. Local features include, for example, texture features and contour features of the reference picture; global features include, for example, color features and contrast features of the reference picture. Optionally, the reference features of the reference picture may be represented by a vector.
In one embodiment, the electronic device may extract reference features from the reference picture in advance. In another embodiment, the electronic device may also extract the reference features from the reference picture after the reference picture is acquired.
And the electronic equipment inputs the reference picture into the feature extraction model, and extracts the reference feature of the reference picture through the trained feature extraction model. Wherein, the feature extraction model is trained by adopting deep learning and metric learning. The deep learning is performed by using a Convolutional Neural Network (CNN). Metric Learning (Metric Learning) is a method of spatial mapping, which can learn a feature (Embedding) space in which all data is converted into a feature vector, and the distance between feature vectors of similar samples is small and the distance between feature vectors of dissimilar samples is large, thereby distinguishing data.
The convolutional neural network in the feature extraction model is formed by combining a plurality of convolutional layers, a shallow convolutional layer can extract features of local details such as textures and outlines in a reference picture, a high convolutional layer can extract globally abstract features such as colors and contrasts, and finally the reference picture is embedded (embedded) into a high-dimensional vector (generally 128-dimensional, 256-dimensional, 512-dimensional and the like) by the whole convolutional neural network and the high-dimensional vector is output. The high-dimensional vector is the reference feature of the reference picture.
Furthermore, the electronic device can also perform denoising, wrinkle removal and other processing on the reference picture, and then perform feature extraction on the processed reference picture, so that more accurate reference features can be extracted.
And step 208, respectively matching the features to be matched with the reference features of the reference pictures, and determining the similarity between the pictures to be matched and each reference picture.
It is to be understood that similar pictures have similar representations of features. The higher the similarity between the picture to be matched and the reference picture is, the closer the features to be matched of the picture to be matched and the reference features of the reference picture are.
Specifically, the electronic device calculates a cosine distance between the feature to be matched and the reference feature, and takes the cosine distance as the similarity between the picture to be matched and the reference picture. The cosine distance, also called cosine similarity, is a measure for measuring the difference between two individuals by using the cosine value of the included angle between two vectors in the vector space.
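The cosine-similarity computation described here can be sketched in a few lines of pure Python (a production system would typically use a vectorised library instead):

```python
import math

def cosine_similarity(a, b):
    """Similarity of two feature vectors: the cosine of the angle
    between them in the embedding space. Higher values mean the
    underlying pictures are more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Identical directions give 1.0; orthogonal vectors give 0.0.
print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```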
Step 210, determining a target picture from each reference picture based on the similarity between the picture to be matched and each reference picture.
Optionally, the number of the determined target pictures may be one, or may be at least two.
In one embodiment, the electronic device may determine the reference picture with the highest similarity as the target picture. In another embodiment, the electronic device may also determine the first two reference pictures with the highest similarity as the target pictures. In other embodiments, the electronic device may further determine reference pictures with other similarities as the target picture.
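The selection logic of these embodiments (top-1 or top-2 by similarity) can be sketched as follows; `select_targets` is a hypothetical helper name, not from the patent:

```python
def select_targets(similarities, k=1):
    """Rank reference pictures by their similarity to the picture to be
    matched and return the top k as the target picture(s)."""
    ranked = sorted(similarities, key=similarities.get, reverse=True)
    return ranked[:k]

similarities = {"A": 0.72, "B": 0.91, "C": 0.55}
print(select_targets(similarities, k=1))  # ['B']
print(select_targets(similarities, k=2))  # ['B', 'A']
```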
Further, obtaining the weight factor of each reference picture, and determining a target picture from each reference picture based on the similarity between the picture to be matched and each reference picture and the weight factor of each reference picture.
For example, suppose the similarity between the picture to be matched and reference picture A is 60%, the similarity between the picture to be matched and reference picture B is 85%, the weight factor of reference picture A is 1.5, and the weight factor of reference picture B is 1.0. Multiplying the similarity of reference picture A by its weight factor 1.5 gives 90%, and multiplying the similarity of reference picture B by its weight factor 1.0 gives 85%; the target picture is then determined based on the values obtained for reference picture A and reference picture B. The electronic device may select reference picture A, which has the higher value, as the target picture, or may select reference picture B, which has the lower value, as the target picture.
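A minimal sketch of the weighted selection, using the hypothetical helper name `select_weighted_target` and the numbers from the worked example (60% x 1.5 = 90% for A, 85% x 1.0 = 85% for B):

```python
def select_weighted_target(similarities, weights):
    """Scale each reference picture's similarity by its weight factor
    and pick the highest-scoring picture (one possible choice; the text
    notes the lower-scoring picture may also be chosen)."""
    scores = {name: sim * weights.get(name, 1.0)
              for name, sim in similarities.items()}
    return max(scores, key=scores.get), scores

target, scores = select_weighted_target({"A": 0.60, "B": 0.85},
                                        {"A": 1.5, "B": 1.0})
print(target)  # A
```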
And step 212, acquiring a time element, and generating a dial plate based on the time element and the target picture.
The time element refers to an element including time information. Time elements may include time scales and hour, minute, and second indications. The style of the time element is not limited; it may be, for example, a cartoon style, a landscape style, or an article style. The time information included in the time element may be either running or static. For example, a time element may be a running clock, or a picture that includes a clock in which the clock is static.
Specifically, the electronic device may perform superposition processing with the target picture as a background picture and the time element as a foreground, so as to generate a dial.
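The superposition step can be illustrated with a toy compositing routine over 2D pixel grids. This is a sketch only; a real implementation would operate on image buffers and could use alpha blending rather than fully opaque foreground pixels.

```python
def generate_dial(target_picture, time_element):
    """Superimpose the time element (foreground) on the target picture
    (background). Both are 2D pixel grids of the same size; None in the
    time element marks a transparent pixel that lets the background
    show through."""
    dial = [row[:] for row in target_picture]  # copy so the background is untouched
    for y, row in enumerate(time_element):
        for x, pixel in enumerate(row):
            if pixel is not None:
                dial[y][x] = pixel
    return dial

background = [[1, 1], [1, 1]]
clock = [[None, 9], [None, None]]  # a single opaque foreground pixel
print(generate_dial(background, clock))  # [[1, 9], [1, 1]]
```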
According to the dial plate generation method, the to-be-matched features of the to-be-matched pictures are matched with the reference features of the reference pictures, so that more accurate target pictures can be determined from the reference pictures based on the similarity between the to-be-matched pictures and the reference pictures, time elements are obtained, and more accurate dial plates can be generated based on the time elements and the target pictures.
Furthermore, different target pictures can be determined from the reference pictures through different pictures to be matched, so that various different dials are generated, and the richness of the dials is improved. For example, the picture to be matched is a landscape, a building, a car and the like shot by the electronic equipment, so that target pictures such as a beautiful landscape, a world name building, a famous car and the like are determined from the reference picture, and various dials are generated.
In one embodiment, the method for determining the target picture can also be applied to schemes of picture recommendation, shopping recommendation and the like.
In one embodiment, the method further comprises: determining the category of the picture to be matched based on the feature to be matched of the picture to be matched, determining a reference picture matched with the category of the picture to be matched, and taking the reference picture matched with the category of the picture to be matched as an intermediate picture; respectively matching the features to be matched with the reference features of each reference picture, and determining the similarity between the pictures to be matched and each reference picture, wherein the method comprises the following steps: respectively matching the features to be matched with the reference features of each intermediate picture, and determining the similarity between the picture to be matched and each intermediate picture; determining a target picture from each reference picture based on the similarity between the picture to be matched and each reference picture, wherein the determining comprises the following steps: and determining a target picture from each intermediate picture based on the similarity between the picture to be matched and each intermediate picture.
It can be understood that the scene in the picture to be matched, the object included in the picture to be matched, and other information can be identified based on the feature to be matched of the picture to be matched, so that the category of the picture to be matched can be judged.
The electronic device may classify the reference pictures in advance, and then use the reference pictures with the same category as the picture to be matched as the intermediate pictures.
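The pre-classification filter can be sketched as a simple list comprehension; the `category` field and the `intermediate_pictures` helper name are illustrative assumptions, not from the patent:

```python
def intermediate_pictures(reference_pictures, category):
    """Keep only reference pictures whose pre-assigned category matches
    the category of the picture to be matched; only these intermediate
    pictures then need feature matching."""
    return [p for p in reference_pictures if p["category"] == category]

refs = [{"name": "r1", "category": "landscape"},
        {"name": "r2", "category": "car"},
        {"name": "r3", "category": "landscape"}]
print([p["name"] for p in intermediate_pictures(refs, "landscape")])  # ['r1', 'r3']
```

Shrinking the candidate set this way is what yields the efficiency gain described in this embodiment.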
In this embodiment, the intermediate picture is screened from the reference pictures, and the features to be matched are matched with the reference features of the intermediate picture, so that matching of the features to be matched with the reference features of all the reference pictures is avoided, and the feature matching efficiency can be improved.
In one embodiment, as shown in fig. 3, after the picture to be matched is obtained, the method further includes:
step 302, determining a region to be matched from the picture to be matched, and obtaining a sub-picture according to the region to be matched.
The region to be matched refers to a region selected from the pictures to be matched. The shape of the region to be matched is not limited, and may be a circle, a rectangle, a triangle, an irregular figure, or the like.
The sub-picture refers to a picture generated from a region to be matched. In one embodiment, the electronic device may treat the region to be matched as a sub-picture. In another embodiment, the electronic device may obtain a sub-picture from the region to be matched. For example, the region to be matched is an irregular shape, and the largest rectangular region may be determined from the region to be matched as a sub-picture. The specific implementation of obtaining the sub-picture according to the region to be matched is not limited, and can be set according to the user requirement.
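For the rectangular case, obtaining a sub-picture from a region to be matched reduces to a crop. The helper below is a hypothetical sketch over a 2D pixel grid; for an irregular region, the largest inscribed rectangle would be cropped instead, as described above.

```python
def sub_picture(picture, top, left, height, width):
    """Cut a rectangular region to be matched out of the picture,
    yielding the sub-picture."""
    return [row[left:left + width] for row in picture[top:top + height]]

picture = [[1, 2, 3],
           [4, 5, 6],
           [7, 8, 9]]
print(sub_picture(picture, 0, 1, 2, 2))  # [[2, 3], [5, 6]]
```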
Extracting the features to be matched of the pictures to be matched, comprising the following steps:
step 304, extracting the sub-features of the sub-picture.
Sub-features refer to features of a sub-picture. The sub-features may include at least one of local features and global features of the sub-picture. Local features such as texture features, contour features, etc. of the sub-picture; global features such as color features, contrast features, etc. of the sub-picture.
Optionally, the sub-features of the sub-picture may be represented by vectors.
And the electronic equipment inputs the sub-picture into the feature extraction model, and extracts the sub-features of the sub-picture through the trained feature extraction model. Wherein, the feature extraction model is trained by adopting deep learning and metric learning. The deep learning is performed by using a Convolutional Neural Network (CNN). Metric Learning (Metric Learning) is a method of spatial mapping, which can learn a feature (Embedding) space in which all data is converted into a feature vector, and the distance between feature vectors of similar samples is small and the distance between feature vectors of dissimilar samples is large, thereby distinguishing data.
The convolutional neural network in the feature extraction model is formed by combining a plurality of convolutional layers, a shallow convolutional layer can extract features of local details such as textures and outlines in a sub-picture, a high convolutional layer can extract globally abstract features such as colors and contrasts, and finally the sub-picture is embedded (embedded) into a high-dimensional vector (generally 128-dimensional, 256-dimensional, 512-dimensional and the like) by the whole convolutional neural network and the high-dimensional vector is output. The high-dimensional vector is the sub-feature of the sub-picture.
Furthermore, the electronic device can also perform denoising, de-wrinkling, and other processing on the sub-picture, and then perform feature extraction on the processed sub-picture, so that more accurate sub-features can be extracted.
Respectively matching the features to be matched with the reference features of each reference picture, and determining the similarity between the pictures to be matched and each reference picture, wherein the method comprises the following steps:
and step 306, matching the sub-features with the reference features of the reference pictures respectively, and determining the similarity between the sub-pictures and each reference picture.
The higher the similarity between the sub-picture and the reference picture, the closer the sub-feature representing the sub-picture is to the reference feature of the reference picture.
Specifically, the electronic device calculates the cosine distance between the sub-feature and the reference feature, and takes the cosine distance as the similarity between the sub-picture and the reference picture. The cosine distance, also called cosine similarity, is a measure that evaluates the difference between two individuals using the cosine of the angle between their two vectors in a vector space.
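A minimal sketch of this similarity computation (the function name `cosine_similarity` is illustrative; note that for unit-normalized embeddings it reduces to a dot product):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors: 1.0 means the
    vectors point in the same direction; values near 0 mean the
    features are dissimilar."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_same = cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # parallel vectors
sim_orth = cosine_similarity([1.0, 0.0], [0.0, 1.0])            # orthogonal vectors
```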
Determining a target picture from each reference picture based on the similarity between the picture to be matched and each reference picture, wherein the determining comprises the following steps:
and 308, determining a target picture from the reference pictures based on the similarity between the sub-picture and each reference picture.
In this embodiment, the region to be matched is determined from the picture to be matched, the sub-picture is obtained from that region, and the sub-features of the sub-picture are matched against the reference features of the reference pictures. This avoids extracting and matching features for every region of the picture to be matched, which saves resources of the electronic device, improves the efficiency of feature matching, and allows the target picture to be determined more quickly.
In one embodiment, extracting the target feature of the target region of the picture to be matched includes: acquiring a target scale; adjusting the size of the sub-picture to a target scale; normalizing the pixel value of each pixel point in the sub-picture of the target scale; and performing feature extraction on the sub-picture after the normalization processing to obtain the target feature of the sub-picture.
It can be understood that the sub-picture is obtained from the region to be matched, which is in turn determined from the picture to be matched, so its size may differ from that of the reference picture; the size of the sub-picture is therefore adjusted to the target scale. The target scale can be set according to user needs. When the target scale is larger than the original scale of the sub-picture, the sub-picture is enlarged; when the target scale is smaller than the original scale of the sub-picture, the sub-picture is reduced.
For example, if the target scale is 224 × 224 pixels, the sub-picture is resized to 224 × 224 pixels.
Normalization refers to mapping data into the range 0-1 so that subsequent processing can be performed more conveniently and quickly. Specifically, the pixel value of each pixel point in the sub-picture of the target scale is obtained, and each pixel value is mapped from the range 0-255 into the range 0-1.
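A minimal numpy sketch of this resize-and-normalize step; the nearest-neighbour resampling used here is an assumption, since the patent does not specify the interpolation method:

```python
import numpy as np

def preprocess(picture, target=224):
    """Resize to target x target with nearest-neighbour sampling
    (a simple stand-in for a proper resize), then map pixel values
    from the 0-255 range into the 0-1 range."""
    h, w = picture.shape[:2]
    rows = np.arange(target) * h // target   # source row for each output row
    cols = np.arange(target) * w // target   # source column for each output column
    resized = picture[rows][:, cols]
    return resized.astype(np.float32) / 255.0

sub = (np.arange(300 * 400 * 3) % 256).reshape(300, 400, 3).astype(np.uint8)
out = preprocess(sub)  # 224 x 224 x 3, values in [0, 1]
```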
In this embodiment, the size of the sub-picture is adjusted to a target scale; the pixel values of all the pixel points in the target scale sub-picture are normalized, so that the sub-picture after normalization can be conveniently processed subsequently.
In one embodiment, before obtaining the reference picture and the reference feature of the reference picture, the method further includes: acquiring a reference picture; adjusting the size of the reference picture to a target scale; normalizing the pixel value of each pixel point in the reference picture of the target scale; and performing feature extraction on the normalized reference picture to obtain the reference features of the reference picture.
It can be understood that, by adjusting the sizes of the reference picture and the sub-picture to the target scale, the reference picture and the sub-picture can perform feature matching under the same condition, and the similarity between the sub-picture and the reference picture can be obtained more accurately, so that the target picture can be determined more accurately from the reference picture. Moreover, the pixel values of all the pixel points in the reference picture are normalized, so that the reference picture can be conveniently processed subsequently.
In one embodiment, as shown in fig. 4, determining a region to be matched from a picture to be matched includes:
step 402, generating a central weight graph corresponding to the picture to be matched, wherein the weight value represented by the central weight graph is gradually reduced from the center to the edge.
The central weight map records the weight value of each pixel point in the picture to be matched. The weight values recorded in the central weight map gradually decrease from the center toward the four sides; that is, the central weight is the largest and the weights decrease gradually toward the four edges. In other words, the central weight map represents weight values that decrease gradually from the center pixel of the picture to be matched to its edge pixels.
The ISP processor or central processor may generate a corresponding central weight map according to the size of the picture to be matched. The weight value represented by the central weight map gradually decreases from the center to the four sides. The central weight map may be generated using a gaussian function, or using a first order equation, or a second order equation. The gaussian function may be a two-dimensional gaussian function.
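A sketch of generating such a central weight map with a two-dimensional Gaussian function (the `sigma_ratio` parameter is an assumption controlling how quickly the weights fall off toward the edges):

```python
import numpy as np

def central_weight_map(height, width, sigma_ratio=0.5):
    """Two-dimensional Gaussian weight map: largest at the picture
    centre, decaying gradually toward the four edges."""
    ys = np.arange(height) - (height - 1) / 2.0
    xs = np.arange(width) - (width - 1) / 2.0
    sigma_y, sigma_x = height * sigma_ratio, width * sigma_ratio
    gy = np.exp(-(ys ** 2) / (2 * sigma_y ** 2))
    gx = np.exp(-(xs ** 2) / (2 * sigma_x ** 2))
    # Separable 2-D Gaussian: weight[i, j] = gy[i] * gx[j]
    return np.outer(gy, gx)

wmap = central_weight_map(9, 9)  # maximum weight at the centre pixel (4, 4)
```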
Step 404, inputting the picture to be matched and the central weight map into a main body detection model to obtain a main body region confidence map, wherein the main body detection model is obtained by training in advance according to the picture to be matched, the central weight map and a corresponding marked main body mask map of the same scene.
The subject detection model is obtained by acquiring a large amount of training data in advance and inputting the training data into a subject detection model containing initial network weights for training. Each set of training data comprises a picture to be matched, a central weight map, and a marked subject mask map corresponding to the same scene. The picture to be matched and the central weight map are used as the input of the subject detection model being trained, and the marked subject mask map is used as the expected output, i.e. the ground truth, of the model. The subject mask map is an image filter template used to identify the subject in a picture; it can mask out the other parts of the picture so as to screen out the subject. The subject detection model may be trained to recognize and detect various subjects, such as people, flowers, cats, dogs, backgrounds, etc.
Specifically, the ISP processor or the central processing unit may input the picture to be matched and the central weight map into the subject detection model and perform detection to obtain a subject region confidence map. The subject region confidence map records, for each pixel point, the probability that it belongs to each recognizable subject; for example, a certain pixel point may belong to a person with probability 0.8, to a flower with probability 0.1, and to the background with probability 0.1.
And step 406, determining a target main body in the picture to be matched according to the main body region confidence map, and taking the region where the target main body is located as the region to be matched.
Specifically, the ISP processor or the central processing unit may select, according to the main body region confidence map, the main body with the highest confidence as the main body in the picture to be matched. If there is one main body, that main body is taken as the target main body; if multiple main bodies exist, one or more of them can be selected as target main bodies as desired.
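Assuming the confidence map stores per-pixel probabilities over the recognizable subject classes, as in the example of step 404 above, selecting the target main body might be sketched as follows (the function and its averaging rule are illustrative, not the patent's exact selection logic):

```python
import numpy as np

def pick_target_subject(confidence_map, class_names):
    """confidence_map: (H, W, C) per-pixel probabilities over C subject
    classes.  Picks the class with the highest mean confidence as the
    target subject and returns its name plus a boolean mask of the
    region where that class wins per pixel (the region to be matched)."""
    mean_conf = confidence_map.mean(axis=(0, 1))       # average confidence per class
    target = int(mean_conf.argmax())
    region = confidence_map.argmax(axis=2) == target   # pixels won by the target class
    return class_names[target], region

conf = np.zeros((2, 2, 3))
conf[..., 0] = [[0.8, 0.7], [0.1, 0.1]]   # "person"
conf[..., 1] = [[0.1, 0.2], [0.8, 0.1]]   # "flower"
conf[..., 2] = [[0.1, 0.1], [0.1, 0.8]]   # "background"
name, region = pick_target_subject(conf, ["person", "flower", "background"])
```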
In this embodiment, after the central weight map corresponding to the picture to be matched is generated, the picture to be matched and the central weight map are input into the corresponding main body detection model for detection, yielding a main body region confidence map from which the target main body in the picture to be matched can be determined. Using the central weight map makes an object at the center of the image easier to detect, and the main body detection model, trained with pictures to be matched, central weight maps, main body mask maps, and the like, can identify the target main body in the picture to be matched more accurately. The region where the target main body is located is taken as the region to be matched, so the region to be matched is determined more accurately.
In one embodiment, the method further comprises: dividing each reference picture into at least two reference categories; determining a target picture from each reference picture based on the similarity between the picture to be matched and each reference picture, wherein the determining comprises the following steps: determining the category of the picture to be matched based on the similarity between the picture to be matched and each reference picture; and taking the reference category matched with the category of the picture to be matched as a target category, and determining the target picture from all the reference pictures included in the target category.
The electronic equipment acquires the labels of all the reference pictures and divides the reference pictures with the same label into the same reference category. For example, if reference picture a is labeled "building", reference picture B is labeled "flower", reference picture C is labeled "flower", reference picture D is labeled "building", and reference picture E is labeled "building", then reference picture a, reference picture D, and reference picture E are divided into the same reference category "building", and reference picture B and reference picture C are divided into the same reference category "flower".
It can be understood that the higher the similarity between the picture to be matched and the reference picture is, the closer the feature to be matched representing the picture to be matched and the reference feature of the reference picture are, and the closer the category of the picture to be matched and the reference picture are also represented.
In an embodiment, the electronic device may use a reference category corresponding to a reference picture with the highest similarity as the category of the picture to be matched. In another embodiment, the electronic device may also acquire a preset number of reference pictures with the highest similarity, and use the reference category with the largest number in the preset number of reference pictures as the category of the picture to be matched. In other embodiments, the electronic device may determine the category of the picture to be matched in other manners, which is not limited to this.
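A sketch of the top-k majority-vote variant described above (the function name and tie-breaking behaviour are assumptions):

```python
from collections import Counter

def vote_category(similarities, labels, k=5):
    """similarities: one similarity score per reference picture;
    labels: the reference category of each reference picture.
    Takes the k most similar reference pictures and returns the
    category that occurs most often among them."""
    ranked = sorted(zip(similarities, labels), key=lambda p: p[0], reverse=True)
    top_labels = [label for _, label in ranked[:k]]
    return Counter(top_labels).most_common(1)[0][0]

# Two of the three most similar reference pictures are "flower",
# so the picture to be matched is classified as "flower".
category = vote_category(
    [0.91, 0.88, 0.85, 0.40, 0.83],
    ["flower", "flower", "building", "building", "flower"],
    k=3,
)
```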
The target category refers to a reference category that matches the category of the picture to be matched. The number of target pictures may be one or at least two.
In this embodiment, the category of the picture to be matched is determined, and then the reference category matched with the category of the picture to be matched is used as the target category, and the target picture is determined from the reference pictures included in the target category, so that the target picture is prevented from being determined from all the reference pictures, and the efficiency of determining the target picture can be improved, and the accuracy of determining the target picture can also be improved.
In one embodiment, as shown in fig. 5, the electronic device obtains a reference picture 502; step 504 is executed to classify the reference picture 502, and the reference picture 502 is divided into at least two reference categories. The electronic device executes step 506 to perform de-noising and de-wrinkling on the classified reference pictures to obtain a picture library 508. The electronic device executes step 510 to perform depth learning and metric learning on the reference pictures in the picture library 508, so as to obtain the reference features of each reference picture in the picture library, thereby generating a picture feature library 512.
It should be noted that the execution processes of 502 to 512 may be performed in advance, or may be performed in the dial generation process, but is not limited thereto.
The electronic device obtains 514 a picture to be matched; executing step 516, denoising and wrinkle removing are carried out on the picture to be matched 514; and step 518 is executed again, and features of the image to be matched after denoising and wrinkle removing are extracted to obtain the features to be matched. The electronic device executes step 520, and performs feature matching on the features to be matched and the reference features of the reference pictures to obtain the similarity between the pictures to be matched and each reference picture; then, based on the similarity between the picture to be matched and each reference picture, a target picture 522 is determined from each reference picture; a time element is obtained and a dial 524 is generated based on the time element and the target picture.
Further, after the similarity between the picture to be matched and each reference picture is obtained, the electronic device may determine the category of the picture to be matched based on the similarity between the picture to be matched and each reference picture, take the reference category matched with the category of the picture to be matched as a target category, and determine the target picture 522 from each reference picture included in the target category, which may improve the efficiency of determining the target picture.
In one embodiment, obtaining a time element and generating a watch face based on the time element and a target picture comprises: acquiring time elements, respectively generating candidate dials based on the time elements and the target pictures determined in the target category, and displaying the candidate dials in a display interface; and receiving a selection instruction of the candidate dial plate, and displaying the candidate dial plate selected by the selection instruction in a display interface to generate the dial plate.
When the number of determined target pictures is one, a candidate dial is generated based on the time element and the target picture, and the electronic device can directly use this candidate dial as the dial. When the number of determined target pictures is at least two, at least two candidate dials are generated based on the time element and the target pictures and displayed on the display interface; when a selection instruction for a candidate dial is received, the candidate dial selected by the instruction is displayed on the display interface, thereby generating the dial.
In this embodiment, each candidate dial is generated based on the time elements and the target pictures determined in the target category, and one of the candidate dials can be selected and displayed on the display interface, so that the dial is generated, and the richness of the generated dial is improved.
In one embodiment, obtaining a time element and generating a watch face based on the time element and a target picture comprises: acquiring the category of a target picture; acquiring a corresponding target style based on the category of the target picture; and acquiring time elements of the target style, and generating a dial plate based on the target picture and the time elements of the target style.
In the electronic device, at least one style corresponding to each category may be stored in advance. When the electronic equipment acquires the category of the target picture, the category of the target picture is matched with each stored category, and therefore the target style corresponding to the category of the target picture is acquired. Target styles such as cartoon styles, landscape styles, architectural styles, and the like.
For example, if the category of the target picture is "building", various styles such as "cantonese tower" style, "world window" style, "yellow crane building" style, etc., with the category of "building" are acquired from the memory of the electronic device.
In this embodiment, the time element of the target pattern corresponding to the category of the target picture is obtained, the time element is more matched with the target picture, the degree of engagement is higher, and a more accurate dial can be generated based on the target picture and the time element of the target pattern.
In another embodiment, there is provided a dial plate generation method including: acquiring a picture to be matched; extracting the features to be matched of the pictures to be matched; acquiring a reference picture and reference characteristics of the reference picture; respectively matching the features to be matched with the reference features of each reference picture, and determining the similarity between the pictures to be matched and each reference picture; determining a target picture from each reference picture based on the similarity between the picture to be matched and each reference picture; sending the target picture to the wearable device; the target picture is used for indicating the wearable device to acquire the time element, and the dial plate is generated based on the time element and the target picture.
Wearable devices such as smartwatches, smartbands, and the like.
It can be understood that time-consuming and high-workload tasks such as feature extraction, feature matching and the like are executed in the electronic device, the finally determined target picture is sent to the wearable device, the wearable device only needs to acquire a time element, the dial plate is generated based on the time element and the target picture, the operating pressure of the wearable device is reduced, and therefore other functions of the wearable device can be better achieved.
In another embodiment, a dial plate generation method is provided, which is applied to a wearable device, and includes: acquiring a target picture sent by electronic equipment; the target picture is determined from each reference picture by the electronic equipment based on the similarity between the obtained picture to be matched and each obtained reference picture; the similarity between the picture to be matched and each reference picture is obtained by respectively matching the features to be matched of the picture to be matched with the reference features of each reference picture by the electronic equipment; the to-be-matched features of the to-be-matched picture are extracted from the to-be-matched picture by the electronic equipment; and acquiring a time element, and generating a dial plate based on the time element and the target picture.
The process of generating the target picture needs to execute time-consuming and high-workload tasks such as feature extraction, feature matching and the like, and the tasks are executed in the electronic equipment; and wearable equipment receives the target picture that electronic equipment sent, reacquires time element, can be based on time element and target picture generation dial plate, has alleviateed wearable equipment's operating pressure to can realize wearable equipment's other functions better.
It should be understood that, although the steps in the flowcharts of fig. 2 to 4 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2 to 4 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments; these sub-steps or stages are likewise not necessarily performed sequentially, but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 6 is a block diagram showing the structure of the dial plate generation device according to the embodiment. As shown in fig. 6, there is provided a dial producing apparatus 600 including: a to-be-matched picture obtaining module 602, a feature extracting module 604, a reference picture and reference feature obtaining module 606, a matching module 608, a target picture determining module 610 and a dial plate generating module 612, wherein:
a to-be-matched picture obtaining module 602, configured to obtain a to-be-matched picture.
The feature extraction module 604 is configured to extract features to be matched of the picture to be matched.
A reference picture and reference feature obtaining module 606, configured to obtain a reference picture and a reference feature of the reference picture.
The matching module 608 is configured to match the features to be matched with the reference features of each reference picture, and determine a similarity between the picture to be matched and each reference picture.
And a target picture determining module 610, configured to determine a target picture from the reference pictures based on the similarity between the picture to be matched and each reference picture.
And a dial generating module 612, configured to obtain the time element, and generate a dial based on the time element and the target picture.
According to the dial plate generation device, the to-be-matched features of the to-be-matched pictures are matched with the reference features of the reference pictures, so that more accurate target pictures can be determined from the reference pictures based on the similarity between the to-be-matched pictures and the reference pictures, time elements are obtained, and more accurate dial plates can be generated based on the time elements and the target pictures.
In an embodiment, the dial plate generating apparatus 600 further includes an intermediate picture determining module, configured to determine a category of a picture to be matched based on a feature to be matched of the picture to be matched, determine a reference picture matched with the category of the picture to be matched, and use the reference picture matched with the category of the picture to be matched as an intermediate picture; the matching module 608 is further configured to match the features to be matched with the reference features of each intermediate picture, and determine a similarity between the picture to be matched and each intermediate picture; the target picture determining module 610 is further configured to determine a target picture from each intermediate picture based on a similarity between the picture to be matched and each intermediate picture.
In an embodiment, the dial plate generating apparatus 600 further includes a sub-picture obtaining module, configured to determine a region to be matched from the picture to be matched, and obtain a sub-picture according to the region to be matched; the feature extraction module 604 is further configured to extract sub-features of the sub-picture; the matching module 608 is further configured to match the sub-features with the reference features of each reference picture, respectively, and determine a similarity between the sub-picture and each reference picture; the target picture determining module 610 is further configured to determine a target picture from the reference pictures based on the similarity between the sub-picture and each of the reference pictures.
In one embodiment, the feature extraction module 604 is further configured to obtain a target dimension; adjusting the size of the sub-picture to a target scale; normalizing the pixel value of each pixel point in the sub-picture of the target scale; and performing feature extraction on the sub-picture after the normalization processing to obtain the target feature of the sub-picture.
In one embodiment, the feature extraction module 604 is further configured to obtain a reference picture; adjusting the size of the reference picture to a target scale; normalizing the pixel value of each pixel point in the reference picture of the target scale; and performing feature extraction on the normalized reference picture to obtain the reference features of the reference picture.
In one embodiment, the sub-picture obtaining module is further configured to generate a central weight map corresponding to the picture to be matched, where a weight value represented by the central weight map is gradually decreased from the center to the edge; inputting the picture to be matched and the central weight graph into a main body detection model to obtain a main body region confidence graph, wherein the main body detection model is obtained by training in advance according to the picture to be matched, the central weight graph and a corresponding marked main body mask graph of the same scene; and determining a target main body in the picture to be matched according to the main body region confidence map, and taking the region where the target main body is located as the region to be matched.
In one embodiment, the dial plate generating apparatus further includes a classifying module, configured to classify each reference picture into at least two reference categories; the target picture determining module 610 is further configured to determine a category of a picture to be matched based on a similarity between the picture to be matched and each reference picture; and taking the reference category matched with the category of the picture to be matched as a target category, and determining the target picture from all the reference pictures included in the target category.
In one embodiment, the dial plate generation module 612 is further configured to obtain a time element, generate each candidate dial plate based on the time element and each target picture determined in the target category, and display each candidate dial plate in a display interface; and receiving a selection instruction of the candidate dial plate, and displaying the candidate dial plate selected by the selection instruction in a display interface to generate the dial plate.
In one embodiment, the dial plate generating module 612 is further configured to obtain a category of the target picture; acquiring a corresponding target style based on the category of the target picture; and acquiring time elements of the target style, and generating a dial plate based on the target picture and the time elements of the target style.
Fig. 7 is a block diagram showing the structure of a dial plate generation device according to another embodiment. As shown in fig. 7, there is provided a dial plate generation apparatus 700 including: a to-be-matched picture obtaining module 702, a feature extracting module 704, a reference picture and reference feature obtaining module 706, a matching module 708, a target picture determining module 710 and a dial plate generating module 712, wherein:
a to-be-matched picture obtaining module 702, configured to obtain a to-be-matched picture.
And the feature extraction module 704 is configured to extract features to be matched of the picture to be matched.
A reference picture and reference feature obtaining module 706, configured to obtain a reference picture and a reference feature of the reference picture.
The matching module 708 is configured to match the features to be matched with the reference features of the reference pictures, and determine similarity between the pictures to be matched and each reference picture.
And a target picture determining module 710, configured to determine a target picture from the reference pictures based on the similarity between the picture to be matched and each reference picture.
The dial plate generation module 712 is used for sending the target picture to the wearable device; the target picture is used for indicating the wearable device to acquire the time element, and the dial plate is generated based on the time element and the target picture.
According to the dial plate generation device, the to-be-matched features of the to-be-matched pictures are matched with the reference features of the reference pictures, so that more accurate target pictures can be determined from the reference pictures based on the similarity between the to-be-matched pictures and the reference pictures, and the target pictures are sent to wearable equipment for the wearable equipment to generate more accurate dial plates.
Fig. 8 is a block diagram showing the structure of a dial plate generation device according to another embodiment. As shown in fig. 8, there is provided a dial producing apparatus 800 including: a target picture obtaining module 802 and a dial plate generating module 804, wherein:
a target picture obtaining module 802, configured to obtain a target picture sent by an electronic device; the target picture is determined from each reference picture by the electronic equipment based on the similarity between the obtained picture to be matched and each obtained reference picture; the similarity between the picture to be matched and each reference picture is obtained by respectively matching the features to be matched of the picture to be matched with the reference features of each reference picture by the electronic equipment; the to-be-matched features of the to-be-matched picture are extracted from the to-be-matched picture by the electronic equipment.
And the dial plate generation module 804 is used for acquiring the time elements and generating a dial plate based on the time elements and the target picture.
The dial plate generation device acquires a target picture sent by the electronic equipment, wherein the target picture is determined from each reference picture by the electronic equipment based on the similarity between the picture to be matched and the reference picture, so that the determined target picture is more accurate; and acquiring the time element, and generating a more accurate dial plate based on the time element and the target picture.
The division of the modules in the dial generation device is merely for illustration, and in other embodiments, the dial generation device may be divided into different modules as needed to complete all or part of the functions of the dial generation device.
For specific limitations of the dial generation apparatus, reference may be made to the above limitations of the dial generation method, which are not described herein again. The respective modules in the dial plate generation apparatus described above may be implemented in whole or in part by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
Fig. 9 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in Fig. 9, the electronic device includes a processor and a memory connected by a system bus. The processor provides computing and control capabilities to support the operation of the entire electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the dial plate generation method provided in the following embodiments. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium. The electronic device may be any terminal device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a point-of-sale (POS) terminal, a vehicle-mounted computer, or a wearable device.
Each module in the dial plate generation apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on an electronic device. The program modules constituting the computer program may be stored in the memory of the electronic device. When the computer program is executed by a processor, the steps of the method described in the embodiments of the present application are performed.
Each module in the dial plate generation apparatus provided in the embodiments of the present application may likewise be implemented in the form of a computer program that runs on a wearable device. The program modules constituting the computer program may be stored in the memory of the wearable device. When the computer program is executed by a processor, the steps of the method described in the embodiments of the present application are performed.
Embodiments of the present application also provide a computer-readable storage medium: one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the dial plate generation method.
Also provided is a computer program product containing instructions which, when run on a computer, cause the computer to perform the dial plate generation method.
Any reference to memory, storage, a database, or another medium used herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (16)

1. A dial plate generation method, comprising:
acquiring a picture to be matched;
extracting features to be matched of the picture to be matched;
acquiring reference pictures and reference features of the reference pictures;
matching the features to be matched against the reference features of each reference picture, and determining a similarity between the picture to be matched and each reference picture;
determining a target picture from the reference pictures based on the similarity between the picture to be matched and each reference picture; and
acquiring a time element, and generating a dial plate based on the time element and the target picture.
2. The method according to claim 1, further comprising, after the acquiring the picture to be matched:
determining a region to be matched from the picture to be matched, and obtaining a sub-picture from the region to be matched;
wherein the extracting the features to be matched of the picture to be matched comprises:
extracting sub-features of the sub-picture;
the matching the features to be matched against the reference features of each reference picture and determining the similarity between the picture to be matched and each reference picture comprises:
matching the sub-features against the reference features of each reference picture, and determining a similarity between the sub-picture and each reference picture; and
the determining the target picture from the reference pictures based on the similarity between the picture to be matched and each reference picture comprises:
determining the target picture from the reference pictures based on the similarity between the sub-picture and each reference picture.
3. The method according to claim 2, wherein the extracting the sub-features of the sub-picture comprises:
acquiring a target scale;
adjusting a size of the sub-picture to the target scale;
normalizing pixel values of pixel points in the sub-picture at the target scale; and
performing feature extraction on the normalized sub-picture to obtain the sub-features of the sub-picture.
4. The method of claim 3, further comprising, before the acquiring the reference pictures and the reference features of the reference pictures:
acquiring a reference picture;
adjusting a size of the reference picture to the target scale;
normalizing pixel values of pixel points in the reference picture at the target scale; and
performing feature extraction on the normalized reference picture to obtain the reference features of the reference picture.
5. The method according to claim 2, wherein the determining the region to be matched from the picture to be matched comprises:
generating a central weight map corresponding to the picture to be matched, wherein weight values represented by the central weight map decrease gradually from the center to the edges;
inputting the picture to be matched and the central weight map into a subject detection model to obtain a subject region confidence map, wherein the subject detection model is obtained by training in advance on pictures to be matched, central weight maps, and corresponding labeled subject mask maps of the same scenes; and
determining a target subject in the picture to be matched according to the subject region confidence map, and taking a region where the target subject is located as the region to be matched.
6. The method of claim 1, further comprising:
dividing the reference pictures into at least two reference categories;
wherein the determining the target picture from the reference pictures based on the similarity between the picture to be matched and each reference picture comprises:
determining a category of the picture to be matched based on the similarity between the picture to be matched and each reference picture; and
taking the reference category matching the category of the picture to be matched as a target category, and determining the target picture from the reference pictures included in the target category.
7. The method of claim 6, wherein the acquiring the time element and generating the dial plate based on the time element and the target picture comprises:
acquiring the time element, generating candidate dial plates based on the time element and target pictures determined in the target category, and displaying the candidate dial plates in a display interface; and
receiving a selection instruction for a candidate dial plate, and displaying the candidate dial plate selected by the selection instruction in the display interface to generate the dial plate.
8. The method of claim 1, wherein the acquiring the time element and generating the dial plate based on the time element and the target picture comprises:
acquiring a category of the target picture;
acquiring a corresponding target style based on the category of the target picture; and
acquiring a time element of the target style, and generating the dial plate based on the target picture and the time element of the target style.
9. A dial plate generation method, comprising:
acquiring a picture to be matched;
extracting features to be matched of the picture to be matched;
acquiring reference pictures and reference features of the reference pictures;
matching the features to be matched against the reference features of each reference picture, and determining a similarity between the picture to be matched and each reference picture;
determining a target picture from the reference pictures based on the similarity between the picture to be matched and each reference picture; and
sending the target picture to a wearable device, wherein the target picture is used for instructing the wearable device to acquire a time element and generate a dial plate based on the time element and the target picture.
10. A dial plate generation method, applied to a wearable device, comprising:
acquiring a target picture sent by an electronic device, wherein the target picture is determined by the electronic device from reference pictures based on a similarity between an acquired picture to be matched and each acquired reference picture; the similarity between the picture to be matched and each reference picture is obtained by the electronic device by matching features to be matched of the picture to be matched against the reference features of each reference picture; and the features to be matched are extracted from the picture to be matched by the electronic device; and
acquiring a time element, and generating a dial plate based on the time element and the target picture.
11. A dial plate generation apparatus, comprising:
a picture-to-be-matched acquisition module, configured to acquire a picture to be matched;
a feature extraction module, configured to extract features to be matched of the picture to be matched;
a reference picture and reference feature acquisition module, configured to acquire reference pictures and reference features of the reference pictures;
a matching module, configured to match the features to be matched against the reference features of each reference picture, and determine a similarity between the picture to be matched and each reference picture;
a target picture determination module, configured to determine a target picture from the reference pictures based on the similarity between the picture to be matched and each reference picture; and
a dial plate generation module, configured to acquire a time element and generate a dial plate based on the time element and the target picture.
12. A dial plate generation apparatus, comprising:
a picture-to-be-matched acquisition module, configured to acquire a picture to be matched;
a feature extraction module, configured to extract features to be matched of the picture to be matched;
a reference picture and reference feature acquisition module, configured to acquire reference pictures and reference features of the reference pictures;
a matching module, configured to match the features to be matched against the reference features of each reference picture, and determine a similarity between the picture to be matched and each reference picture;
a target picture determination module, configured to determine a target picture from the reference pictures based on the similarity between the picture to be matched and each reference picture; and
a dial plate generation module, configured to send the target picture to a wearable device, wherein the target picture is used for instructing the wearable device to acquire a time element and generate a dial plate based on the time element and the target picture.
13. A dial plate generation apparatus, applied to a wearable device, comprising:
a target picture acquisition module, configured to acquire a target picture sent by an electronic device, wherein the target picture is determined by the electronic device from reference pictures based on a similarity between an acquired picture to be matched and each acquired reference picture; the similarity between the picture to be matched and each reference picture is obtained by the electronic device by matching features to be matched of the picture to be matched against the reference features of each reference picture; and the features to be matched are extracted from the picture to be matched by the electronic device; and
a dial plate generation module, configured to acquire a time element and generate a dial plate based on the time element and the target picture.
14. An electronic device, comprising a memory and a processor, the memory storing a computer program, wherein the computer program, when executed by the processor, causes the processor to perform the steps of the dial plate generation method according to any one of claims 1 to 9.
15. A wearable device, comprising a memory and a processor, the memory storing a computer program, wherein the computer program, when executed by the processor, causes the processor to perform the steps of the dial plate generation method according to claim 10.
16. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 10.
CN202010499509.0A 2020-06-04 2020-06-04 Dial plate generation method and device, electronic equipment and computer readable storage medium Pending CN113760415A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010499509.0A CN113760415A (en) 2020-06-04 2020-06-04 Dial plate generation method and device, electronic equipment and computer readable storage medium
PCT/CN2021/086409 WO2021244138A1 (en) 2020-06-04 2021-04-12 Dial generation method and apparatus, electronic device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010499509.0A CN113760415A (en) 2020-06-04 2020-06-04 Dial plate generation method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113760415A true CN113760415A (en) 2021-12-07

Family

ID=78783573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010499509.0A Pending CN113760415A (en) 2020-06-04 2020-06-04 Dial plate generation method and device, electronic equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN113760415A (en)
WO (1) WO2021244138A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117911792B (en) * 2024-03-15 2024-06-04 垣矽技术(青岛)有限公司 Pin detecting system for voltage reference source chip production

Citations (7)

Publication number Priority date Publication date Assignee Title
CN101639858A (en) * 2009-08-21 2010-02-03 深圳创维数字技术股份有限公司 Image search method based on target area matching
CN105678778A (en) * 2016-01-13 2016-06-15 北京大学深圳研究生院 Image matching method and device
CN106354735A (en) * 2015-07-22 2017-01-25 杭州海康威视数字技术股份有限公司 Image target searching method and device
CN106682698A (en) * 2016-12-29 2017-05-17 成都数联铭品科技有限公司 OCR identification method based on template matching
CN108874889A (en) * 2018-05-15 2018-11-23 中国科学院自动化研究所 Objective body search method, system and device based on objective body image
CN109189544A (en) * 2018-10-17 2019-01-11 三星电子(中国)研发中心 Method and apparatus for generating dial plate
CN110276767A (en) * 2019-06-28 2019-09-24 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN102855245A (en) * 2011-06-28 2013-01-02 北京百度网讯科技有限公司 Image similarity determining method and image similarity determining equipment
CN105469376B (en) * 2014-08-12 2019-10-25 腾讯科技(深圳)有限公司 The method and apparatus for determining picture similarity
CN105045818B (en) * 2015-06-26 2017-07-18 腾讯科技(深圳)有限公司 A kind of recommendation methods, devices and systems of picture
US10379721B1 (en) * 2016-11-28 2019-08-13 A9.Com, Inc. Interactive interfaces for generating annotation information
CN109189970A (en) * 2018-09-20 2019-01-11 北京京东尚科信息技术有限公司 Picture similarity comparison method and device
CN109726664B (en) * 2018-12-24 2021-07-09 出门问问信息科技有限公司 Intelligent dial recommendation method, system, equipment and storage medium
CN110569380B (en) * 2019-09-16 2021-06-04 腾讯科技(深圳)有限公司 Image tag obtaining method and device, storage medium and server


Also Published As

Publication number Publication date
WO2021244138A1 (en) 2021-12-09

Similar Documents

Publication Publication Date Title
CN111080628B (en) Image tampering detection method, apparatus, computer device and storage medium
CN109492643B (en) Certificate identification method and device based on OCR, computer equipment and storage medium
JP7490141B2 (en) IMAGE DETECTION METHOD, MODEL TRAINING METHOD, IMAGE DETECTION APPARATUS, TRAINING APPARATUS, DEVICE, AND PROGRAM
CN106778928B (en) Image processing method and device
US20180260664A1 (en) Deep-learning network architecture for object detection
CN112818975B (en) Text detection model training method and device, text detection method and device
CN110428399B (en) Method, apparatus, device and storage medium for detecting image
US20190362144A1 (en) Eyeball movement analysis method and device, and storage medium
CN112651333B (en) Silence living body detection method, silence living body detection device, terminal equipment and storage medium
CN112101386B (en) Text detection method, device, computer equipment and storage medium
CN112101195B (en) Crowd density estimation method, crowd density estimation device, computer equipment and storage medium
CN111428671A (en) Face structured information identification method, system, device and storage medium
CN111666813B (en) Subcutaneous sweat gland extraction method of three-dimensional convolutional neural network based on non-local information
CN115115552B (en) Image correction model training method, image correction device and computer equipment
CN109741380B (en) Textile picture fast matching method and device
CN109784379B (en) Updating method and device of textile picture feature library
CN110751004A (en) Two-dimensional code detection method, device, equipment and storage medium
CN108647605B (en) Human eye gaze point extraction method combining global color and local structural features
CN113760415A (en) Dial plate generation method and device, electronic equipment and computer readable storage medium
CN109657083A (en) The method for building up and device in textile picture feature library
CN111047632A (en) Method and device for processing picture color of nail image
CN113724237B (en) Tooth trace identification method, device, computer equipment and storage medium
CN116798041A (en) Image recognition method and device and electronic equipment
CN111178202B (en) Target detection method, device, computer equipment and storage medium
CN111428553B (en) Face pigment spot recognition method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination