
CN113703577B - Drawing method, drawing device, computer equipment and storage medium - Google Patents

Drawing method, drawing device, computer equipment and storage medium

Info

Publication number
CN113703577B
CN113703577B (granted from application CN202110996280A)
Authority
CN
China
Prior art keywords
target
hand detection
information
detected
brush
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110996280.6A
Other languages
Chinese (zh)
Other versions
CN113703577A (en)
Inventor
孔祥晖 (Kong Xianghui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202110996280.6A priority Critical patent/CN113703577B/en
Publication of CN113703577A publication Critical patent/CN113703577A/en
Priority to PCT/CN2022/087946 priority patent/WO2023024536A1/en
Application granted granted Critical
Publication of CN113703577B publication Critical patent/CN113703577B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/80: Creating or modifying a manually drawn or painted image using a manual input device, e.g. mouse, light pen, direction keys on keyboard

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a drawing method, apparatus, computer device, and storage medium. The method includes: acquiring an image to be detected of a target area; detecting the image to be detected and determining hand detection information in it, where the hand detection information includes position information of a hand detection frame; when the hand detection information satisfies a trigger condition, determining a starting position of a first virtual tool on the display device based on the position information of the hand detection frame; and controlling the first virtual tool to draw, taking the starting position as the drawing starting point, according to changes in the position information of the hand detection frame detected within a target period.

Description

Drawing method, drawing device, computer equipment and storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a drawing method, a drawing device, computer equipment and a storage medium.
Background
In the related art, drawing is generally performed directly on a touch screen, for example by touching the screen with a finger or a stylus. However, this implementation is difficult to apply to larger touch screens and often degrades the drawing effect.
Disclosure of Invention
The embodiment of the disclosure at least provides a drawing method, a drawing device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a drawing method, including:
acquiring an image to be detected of a target area;
Detecting the image to be detected, and determining hand detection information in the image to be detected; the hand detection information comprises position information of a hand detection frame;
Determining a starting position of a first virtual tool in the display device based on the position information of the hand detection frame under the condition that the hand detection information meets a trigger condition;
and controlling the first virtual tool to draw by taking the initial position as a drawing starting point according to the change of the position information of the hand detection frame detected in the target period.
In this method, after the image to be detected of the target area is acquired, the hand detection information in the image can be determined, and the first virtual tool can be controlled to draw when a change in the position information of the hand detection frame is detected. The user therefore does not need to touch the display device directly through a medium such as a finger or a stylus during drawing, which enriches the available drawing modes. In particular, when the display device is large, i.e., when the display screen used to present the picture is large, the user can observe the overall drawing effect while drawing remotely. This reduces the broken or discontinuous strokes that arise when the user must complete longer lines, optimizes the interaction process, and thus improves the drawing effect.
In a possible implementation manner, the controlling the first virtual tool to draw with the starting position as a drawing starting point includes:
determining a target tool type corresponding to first gesture information according to the first gesture information indicated in the hand detection information;
controlling the first virtual tool under the target tool type to draw with the starting position as the drawing starting point, where the drawing result conforms to the attribute corresponding to the target tool type.
Different gesture information corresponds to different tool types, and the user can switch tool types by changing the gesture. This enriches the interaction between the user and the device during drawing and improves the user experience.
In a possible embodiment, the method further comprises:
and starting the drawing function of the display device when it is detected, based on the image to be detected, that the user is making a first target gesture and the duration of the first target gesture exceeds a first preset duration.
In this way the user can start the drawing function directly, without touching the screen. This simplifies the drawing process, avoids the loss of drawing efficiency that occurs when the user cannot find the drawing-tool icon, and makes drawing more engaging.
In a possible implementation manner, the hand detection information meeting the trigger condition includes at least one of the following:
The second gesture information indicated in the hand detection information accords with a preset trigger gesture type;
And the duration that the position of the hand detection frame indicated in the hand detection information is in the target area exceeds the set duration.
In a possible implementation manner, in a case that the first virtual tool is a virtual brush, the target tool type is a target brush type;
after determining the target tool type corresponding to the first gesture information, the method further includes:
And determining a target virtual brush for drawing from a plurality of preset virtual brushes matched with the target brush type, and displaying the target virtual brush at the starting position.
By displaying the target virtual brush, the user can clearly and intuitively observe the current drawing process and make further drawing adjustments.
In a possible implementation manner, the determining, from a plurality of preset virtual brushes matched with the target brush type, the target virtual brush for drawing includes:
And determining the target virtual brush matched with the user attribute information from a plurality of preset virtual brushes matched with the target brush type according to the user attribute information of the user corresponding to the hand detection frame.
Since the displayed virtual brush matches the user attribute information, the brush can be displayed in a personalized manner, improving the user's experience during drawing.
In one possible implementation, a menu area of the display device includes a plurality of virtual tool identifiers;
The method further includes:
in response to detecting that the user makes a second target gesture, presenting a movement identifier at the starting position of the first virtual tool;
displaying a second virtual tool at the starting position of the first virtual tool when the duration for which the movement identifier stays at the display position corresponding to a second virtual tool identifier, among the plurality of tool identifiers, exceeds a second preset duration;
in response to a target processing operation, processing the drawn portion based on the second virtual tool.
In this way, the user can switch virtual tools without touching the screen, which increases the interaction between the user and the device during drawing and improves the user experience.
In a possible implementation manner, the controlling the first virtual tool to draw with the starting position as a drawing starting point according to the change of the position information of the hand detection frame detected in the target period includes:
Determining corrected position information according to the detected change of the position information of the hand detection frame; and controlling the first virtual tool to draw by taking the initial position as a drawing starting point according to the corrected position information.
In this way, the poor drawing effect caused by the user's hand shaking can be avoided, improving the drawing effect.
In a possible implementation manner, the determining, based on the position information of the hand detection frame, the starting position of the first virtual tool in the display device includes:
And determining the starting position of a first virtual tool in the display device based on the position information of the hand detection frame and the proportional relation between the image to be detected and the display interface of the display device.
In a second aspect, an embodiment of the present disclosure further provides a drawing apparatus, including:
the acquisition module is used for acquiring an image to be detected of the target area;
The first determining module is used for detecting the image to be detected and determining hand detection information in the image to be detected; the hand detection information comprises position information of a hand detection frame;
A second determining module, configured to determine, based on position information of the hand detection frame, a start position of a first virtual tool in a display device if the hand detection information satisfies a trigger condition;
And the drawing module is used for controlling the first virtual tool to draw by taking the initial position as a drawing starting point according to the change of the position information of the hand detection frame detected in the target period.
In one possible implementation manner, the drawing module is configured to, when controlling the first virtual tool to draw with the starting position as a drawing starting point:
determining a target tool type corresponding to first gesture information according to the first gesture information indicated in the hand detection information;
controlling the first virtual tool under the target tool type to draw with the starting position as the drawing starting point, where the drawing result conforms to the attribute corresponding to the target tool type.
In a possible embodiment, the apparatus further comprises a control module for:
starting the drawing function of the display device when it is detected, based on the image to be detected, that the user is making a first target gesture and the duration of the first target gesture exceeds a first preset duration.
In a possible implementation manner, the hand detection information meeting the trigger condition includes at least one of the following:
The second gesture information indicated in the hand detection information accords with a preset trigger gesture type;
And the duration that the position of the hand detection frame indicated in the hand detection information is in the target area exceeds the set duration.
In a possible implementation manner, in a case that the first virtual tool is a virtual brush, the target tool type is a target brush type;
After determining the target tool type corresponding to the first gesture information, the second determining module is further configured to:
And determining a target virtual brush for drawing from a plurality of preset virtual brushes matched with the target brush type, and displaying the target virtual brush at the starting position.
In a possible implementation manner, the second determining module is configured to, when determining, from a plurality of preset virtual brushes that match the target brush type, a target virtual brush that performs drawing:
And determining the target virtual brush matched with the user attribute information from a plurality of preset virtual brushes matched with the target brush type according to the user attribute information of the user corresponding to the hand detection frame.
In one possible implementation, a menu area of the display device includes a plurality of virtual tool identifiers;
The drawing module is further configured to:
in response to detecting that the user makes a second target gesture, presenting a movement identifier at the starting position of the first virtual tool;
displaying a second virtual tool at the starting position of the first virtual tool when the duration for which the movement identifier stays at the display position corresponding to a second virtual tool identifier, among the plurality of tool identifiers, exceeds a second preset duration;
in response to a target processing operation, processing the drawn portion based on the second virtual tool.
In one possible implementation manner, the drawing module is configured to, when controlling the first virtual tool to draw with the starting position as a drawing starting point according to the change of the position information of the hand detection frame detected in the target period:
Determining corrected position information according to the detected change of the position information of the hand detection frame; and controlling the first virtual tool to draw by taking the initial position as a drawing starting point according to the corrected position information.
In a possible implementation manner, the first determining module is configured to, when determining a starting position of the first virtual tool in the display device based on the position information of the hand detection frame:
And determining the starting position of a first virtual tool in the display device based on the position information of the hand detection frame and the proportional relation between the image to be detected and the display interface of the display device.
In a third aspect, embodiments of the present disclosure further provide a computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect, or any of the possible implementations of the first aspect.
In a fourth aspect, the presently disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the first aspect, or any of the possible implementations of the first aspect.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required for the embodiments are briefly described below. These drawings are incorporated in and constitute a part of the specification; they show embodiments consistent with the present disclosure and, together with the description, serve to illustrate its technical solutions. It should be understood that the following drawings show only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other related drawings from them without inventive effort.
FIG. 1 illustrates a flow chart of a drawing method provided by an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating position information of key points of a body and limbs and position information of a hand detection frame of a user in a drawing method according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of a drawing device according to an embodiment of the disclosure;
Fig. 4 shows a schematic structural diagram of a computer device according to an embodiment of the disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
It has been found through research that, in the related art, drawing is performed by the user directly contacting the display device through a finger or a stylus, which is difficult to support on larger screens and affects the drawing effect. This finding is the result of the inventor's practice and careful study; therefore, the discovery process of the above problems, and the solutions proposed below in the present disclosure, should all be regarded as the inventor's contribution to the present disclosure.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
To facilitate understanding of the present embodiment, a drawing method disclosed in an embodiment of the present disclosure is first described in detail. The execution subject of the drawing method provided in the embodiments of the present disclosure is generally a computer device with display capability, such as a smart TV, a smartphone, or a tablet computer. It should be noted that drawing in this disclosure includes, but is not limited to, editing operations on a display interface realized through interaction between the user and the computer device, such as drawing and writing.
The display device described herein may refer to the above-mentioned computer device, or may be a display apparatus, such as a display, connected to the above-mentioned computer device, and the specific calculation process is performed by the computer device, and the display process is performed by the display device.
Referring to fig. 1, a flowchart of a drawing method according to an embodiment of the disclosure is shown, where the method includes steps 101 to 104, where:
Step 101: acquire an image to be detected of a target area.
Step 102: detect the image to be detected and determine hand detection information in the image to be detected; the hand detection information includes position information of a hand detection frame.
Step 103: when the hand detection information satisfies a trigger condition, determine a starting position of a first virtual tool on the display device based on the position information of the hand detection frame.
Step 104: control the first virtual tool to draw, taking the starting position as the drawing starting point, according to changes in the position information of the hand detection frame detected within a target period.
In this method, after the image to be detected of the target area is acquired, the hand detection information in the image can be determined, and the first virtual tool can be controlled to draw when a change in the position information of the hand detection frame is detected. The user therefore does not need to touch the display device directly through a medium such as a finger or a stylus during drawing, which enriches the available drawing modes. In particular, when the display device is large, i.e., when the display screen used to present the picture is large, the user can observe the overall drawing effect while drawing remotely. This reduces the broken or discontinuous strokes that arise when the user must complete longer lines, optimizes the interaction process, and thus improves the drawing effect.
The following is a detailed description of the above steps.
For steps 101 and 102:
Here, the target area may be any area from which the display interface of the display device can be viewed. For example, to ensure both the user's control over the display device and the display effect when content is presented on the screen, the area facing the display device may be set as the target area. In practice, an image capturing apparatus may be deployed on or near the display device; it captures scene images of the target area in real time, these scene images include the image to be detected, and the image to be detected of the target area can be acquired from the image capturing apparatus via data transmission. It should be noted that the deployment position of the image capturing apparatus may be determined according to the position of the target area, such that the shooting area of the deployed apparatus contains at least the target area.
The image to be detected may be any frame of image corresponding to the target area; for example, it may be the image corresponding to the target area at the current moment or at a historical moment. After the image to be detected is obtained, it can be detected and the hand detection information of the user in it determined.
The hand detection information may include position information of a hand detection frame, and the hand detection frame may refer to a minimum detection frame including a hand of the user in the image to be detected.
In a specific implementation, a target neural network for detecting key points may be trained until it meets a preset condition, for example until its loss value is smaller than a set loss threshold. The trained target neural network then detects the image to be detected and determines the position information of the user's hand detection frame in the image.
The target neural network may identify the image to be detected, determine the position information of the user's body limb key points contained in it, and determine the position information of the user's hand detection frame based on the body limb key point position information and the image to be detected. The number and positions of the body limb key points can be set as needed; for example, the number of limb key points may be 14 or 17. The position information of the hand detection frame includes the coordinates of the four vertices of the detection frame and the coordinates of its center point.
Fig. 2 is a schematic diagram of the position information of a user's body limb key points and hand detection frames. The body limb key points may include the head vertex 5, head center point 4, neck joint point 3, left shoulder joint point 9, right shoulder joint point 6, left elbow joint point 10, right elbow joint point 7, left wrist joint point 11, right wrist joint point 8, body center point 12, crotch joint point 1, crotch joint point 2, and crotch center point 0. The hand detection frames may include the four vertices 13, 15, 16, 17 of the left hand detection frame and the center point 14 of the left hand frame, and the four vertices 18, 20, 21, 22 of the right hand detection frame and the center point 19 of the right hand frame.
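As a concrete illustration of the pipeline above, the following is a minimal sketch of deriving a hand detection frame (four vertices plus a center point, matching Fig. 2) from detected body limb key points. The pose-model interface and the box-sizing heuristic are assumptions for illustration, not the patent's concrete implementation.

```python
# Sketch of estimating a hand detection frame from body limb key points.
# The extrapolation-past-the-wrist heuristic is an illustrative assumption.
from dataclasses import dataclass

import numpy as np


@dataclass
class HandBox:
    vertices: np.ndarray  # (4, 2) corner coordinates in image space
    center: np.ndarray    # (2,) center point of the hand detection frame


def hand_box_from_keypoints(keypoints: np.ndarray, wrist_idx: int,
                            elbow_idx: int, scale: float = 0.6) -> HandBox:
    """Estimate a square hand box by extrapolating beyond the wrist joint."""
    wrist, elbow = keypoints[wrist_idx], keypoints[elbow_idx]
    forearm = wrist - elbow
    center = wrist + scale * forearm             # hand lies beyond the wrist
    half = scale * np.linalg.norm(forearm) / 2   # box size tracks forearm length
    offsets = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]]) * half
    return HandBox(vertices=center + offsets, center=center)
```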
For step 103:
In a possible implementation, before detecting whether the hand detection information meets the trigger condition, it may first be detected whether the display device is showing the drawing interface, that is, whether the interface presented by the display device on its display screen belongs to the drawing interface.
When detecting whether the display device is showing the drawing interface, the working state of each process currently started on the display device can be examined, where different processes correspond to different applications. When the process corresponding to the drawing application is detected to be in the display state, it is determined that the display device is currently showing the drawing interface; when that process is detected to be in the non-display state, it can be determined that the drawing application has been started but switched to the background.
When it is detected that the display device is not showing the drawing interface, the drawing function can first be started, that is, the drawing application is launched, or a drawing application running in the background is switched to the foreground for display.
In a possible implementation manner, when the user is detected to be in the first target gesture based on the image to be detected, and the duration of the first target gesture exceeds a first preset duration, a drawing function of the display device is started.
Here, the first target gesture may be a target action, such as waving a hand or making a specific scissor-hand gesture. Alternatively, the first target gesture may be that the position on the display device corresponding to the center point of the hand detection frame falls within a first target area, where the first target area is an area from which the drawing function can be started or invoked, for example the area where the identifier (e.g., icon) of the drawing application is located; the user then starts or invokes the drawing function by performing the corresponding gesture or operation in the first target area.
When detecting whether the user makes the target action, the image to be detected may be input into an action recognition network, and whether the user makes the target action is obtained from that network. The action recognition network may be trained on sample images carrying action labels, and its input may be a plurality of consecutive images. An action label indicates the action type contained in a sample image, for example a scissor hand or a fist.
By this method, the user can start the drawing function directly without touching the screen, which simplifies the drawing process, avoids the loss of drawing efficiency caused by the user being unable to find the drawing-tool icon, and makes drawing more engaging.
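The "hold a gesture to start drawing" check can be sketched as follows. This is a hedged illustration: the recognizer interface, frame source, and 2-second threshold are assumptions, since the patent leaves the first preset duration and the gesture set open.

```python
# Sketch of starting the drawing function once a first target gesture has
# been held beyond the first preset duration. Gesture labels and the
# duration value are illustrative assumptions.
import time
from typing import Callable

FIRST_PRESET_DURATION = 2.0  # seconds; the patent leaves the value open


class DrawTrigger:
    def __init__(self, recognize: Callable[[object], str]):
        self.recognize = recognize       # returns a gesture label per frame
        self.gesture_started_at = None

    def update(self, frame, target_gesture: str = "wave") -> bool:
        """Return True once the target gesture has been held long enough."""
        if self.recognize(frame) != target_gesture:
            self.gesture_started_at = None   # gesture broken; reset the timer
            return False
        if self.gesture_started_at is None:
            self.gesture_started_at = time.monotonic()
        return time.monotonic() - self.gesture_started_at >= FIRST_PRESET_DURATION
```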
When the display device is not showing the drawing interface, the position information of a corresponding movement identifier, for example a mouse-style cursor, may be determined according to the position information of the hand detection frame, and the hand detection frame is represented by the movement identifier. When the display device is showing the drawing interface, the hand detection frame may be represented by the first virtual tool. The specific method for determining the position information of the first virtual tool based on the position information of the hand detection frame is described below.
The hand detection information meeting the trigger condition means that it meets the trigger condition for starting drawing. In a possible implementation, the hand detection information meets the trigger condition if at least one of the following holds:
The second gesture information indicated in the hand detection information accords with a preset trigger gesture type;
And the duration that the position of the hand detection frame indicated in the hand detection information is in the target area exceeds the set duration.
Here, the trigger gesture type may be a gesture indicating the start of drawing, for example two fists or an "OK" sign. When determining the second gesture information indicated in the image to be detected, a gesture recognition network may be used for recognition; its training process is similar to that of the action recognition network and is not repeated here.
The target area may be an area within a preset range of the position where the hand detection frame is first detected; the position of the hand detection frame may refer to the position obtained after mapping its position information onto the display screen/display interface of the display device, or to the position indicated by the position information itself.
In a specific application scenario, the hand detection information meeting the trigger condition may mean that the user makes a preset trigger gesture, or that the user's hand remains still (or moves only within a small range) for longer than the set duration, as the sketch below illustrates.
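A minimal sketch of these two trigger conditions follows; the gesture labels, dwell radius, and frame-count threshold are illustrative assumptions.

```python
# Sketch of the trigger check: either the recognized gesture matches a
# preset trigger type, or the hand box stays within a small area beyond a
# set duration. All thresholds below are assumptions.
import numpy as np

TRIGGER_GESTURES = {"ok", "two_fists"}  # assumed preset trigger gesture types
SET_DURATION_FRAMES = 30                # dwell threshold, in frames
DWELL_RADIUS = 20.0                     # pixels; "hand movement range is small"


def trigger_satisfied(gesture: str, center_history: list[np.ndarray]) -> bool:
    if gesture in TRIGGER_GESTURES:
        return True
    if len(center_history) < SET_DURATION_FRAMES:
        return False
    recent = np.stack(center_history[-SET_DURATION_FRAMES:])
    # Maximum deviation of recent hand-box centers from their mean position.
    spread = np.linalg.norm(recent - recent.mean(axis=0), axis=1).max()
    return spread <= DWELL_RADIUS
```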
For step 104:
In one possible embodiment, the virtual tool refers to a virtual tool used for drawing and may include, for example, a brush, a coloring pen, an eraser, and the like. The first virtual tool may be a default virtual tool or a virtual tool determined based on historical drawing operations.
For example, if the first detection of the hand detection information satisfies the trigger condition, a default virtual brush may be determined as the first virtual tool, and if the nth detection of the hand detection information satisfies the trigger condition, a virtual tool used at the end of the nth execution of the drawing operation may be determined as the first virtual tool, where N is a positive integer greater than 1.
For example, if the nth drawing operation is finished, the virtual tool is an eraser, and when the n+1st hand detection information is detected to meet the trigger condition, the corresponding first virtual tool is also an eraser.
In one possible implementation manner, when the first virtual tool is controlled to draw by taking the starting position as a drawing starting point, a target tool type corresponding to the first gesture information can be determined according to the first gesture information indicated in the hand detection information; then controlling a first virtual tool under the target tool type to draw by taking the initial position as a drawing starting point; and the drawing result after drawing accords with the attribute corresponding to the target tool type.
Here, the attribute corresponding to the target tool type may include, for example, color, thickness, size, process type, and the like. Different gesture information corresponds to different tool types, and a user can switch the tool types by controlling the change of the gesture information, so that the interaction process of the user and the equipment in the drawing process can be enriched, and the user experience is improved.
It should be noted that the image to be detected of the target area may be acquired in real time, and the hand detection information in it may likewise be detected in real time. Unlike the second gesture information described above, the first gesture information refers to gesture information detected from the image to be detected after the hand detection information has been determined to satisfy the trigger condition.
The target tool types corresponding to the first gesture information may be different tools: for example, the target tool type corresponding to gesture information A is an eraser, while that corresponding to gesture information B is a brush. Alternatively, they may be different types of the first virtual tool: if the first virtual tool is a brush, the target tool type corresponding to gesture information A may be a thick brush and that corresponding to gesture information B a thin brush, as in the sketch below.
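A minimal sketch of such a gesture-to-tool-type mapping, with attributes (color, thickness, size) attached to each type; the particular gestures and attribute values are assumptions.

```python
# Sketch of mapping first-gesture labels to target tool types and their
# drawing attributes. The gestures, tools, and values are illustrative.
TOOL_BY_GESTURE = {
    "index_up": {"tool": "brush", "thickness": 2, "color": "black"},    # thin brush
    "two_fingers": {"tool": "brush", "thickness": 8, "color": "black"},  # thick brush
    "open_palm": {"tool": "eraser", "size": 24},
}

DEFAULT_TOOL = {"tool": "brush", "thickness": 2, "color": "black"}


def target_tool_type(first_gesture: str) -> dict:
    # Fall back to the default/previous tool when the gesture is unmapped.
    return TOOL_BY_GESTURE.get(first_gesture, DEFAULT_TOOL)
```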
In a specific implementation, when the first virtual tool under the target tool type is controlled to draw with the starting position as the drawing start point, the movement track of the hand detection frame may be determined based on its current position information and the historical position information corresponding to the temporally adjacent, earlier image to be detected; drawing is then performed based on this movement track and the starting position.
After one drawing step is completed, the position information of the first virtual tool may be re-determined based on the changed position information of the hand detection frame, and the re-determined position is taken as the new starting position for the subsequent drawing step.
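The per-frame drawing step described above can be sketched as follows, assuming a canvas object with a simple line primitive (the canvas API is hypothetical).

```python
# Sketch of one drawing step: compute the hand-box displacement between
# consecutive frames, extend the stroke from the current starting point,
# then adopt the new end point as the start of the next segment.
import numpy as np


def draw_step(canvas, start: np.ndarray, prev_center: np.ndarray,
              center: np.ndarray) -> np.ndarray:
    delta = center - prev_center           # movement track of the hand box
    end = start + delta
    canvas.line(tuple(start), tuple(end))  # assumed canvas primitive
    return end                             # becomes the next drawing start point
```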
In one possible implementation manner, in the case that the first virtual tool is a virtual brush, the target tool type may be a target brush type, and after determining the target tool type corresponding to the first gesture information, a target virtual brush for drawing may be determined from a plurality of preset virtual brushes matched with the target brush type, and the target virtual brush may be displayed at the starting position.
Here, when determining the target virtual brush for drawing from the plurality of preset virtual brushes matched with the target brush type, the user attribute information of the user may be taken into account, and the target virtual brush matched with that attribute information is determined from the preset virtual brushes matched with the target brush type.
The user attribute information may include age, gender, occupation, and the like. If the user attribute information indicates a 30-year-old male, the matched target virtual brush may be a virtual pen; if it indicates a 5-year-old girl, the matched target virtual brush may be a virtual cartoon pencil.
In this way, the displayed virtual brush matches the user attribute information, so the virtual tool is displayed in a personalized manner and the user's experience during drawing is improved; a sketch of such a selection rule follows.
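A hedged sketch of attribute-based brush selection, echoing the age/gender examples above; the rule set and brush fields are illustrative assumptions, not part of the patent.

```python
# Sketch of picking a target virtual brush from the presets matched to the
# target brush type using user attribute information. Rules are illustrative.
def select_target_brush(presets: list[dict], age: int, gender: str) -> dict:
    if age <= 12:
        wanted = "cartoon_pencil"   # e.g., a young child
    elif gender == "male":
        wanted = "pen"              # e.g., the 30-year-old-male example
    else:
        wanted = "default"
    for brush in presets:
        if brush.get("style") == wanted:
            return brush
    return presets[0]  # fall back to the first preset of this brush type
```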
When the target virtual brush is displayed at the starting position, the user's drawing habit, which may be preset, can be taken into account; for example, the brush may be displayed tilted at a preset inclination angle, visually reproducing the way a user holds a drawing tool when editing on a plane such as paper.
During drawing, the user's hand may shake. To prevent this from affecting the drawing effect, anti-shake processing may be performed before the first virtual tool is controlled to draw.
Specifically, the correction position information can be determined according to the detected change of the position information of the hand detection frame; and controlling the first virtual tool to draw by taking the initial position as a drawing starting point according to the corrected position information.
Here, the corrected position information is the information obtained by applying correction processing, for example smoothing, after mapping the position information of the hand detection frame onto the display device.
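The patent names the correction step (e.g., smoothing) but not a specific filter; an exponential moving average is one minimal choice and is shown below purely as an illustration.

```python
# Illustrative anti-shake correction via an exponential moving average.
# The filter choice and alpha value are assumptions, not the patent's method.
import numpy as np


class PositionSmoother:
    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha   # lower alpha = stronger smoothing
        self.state = None

    def correct(self, mapped_position: np.ndarray) -> np.ndarray:
        """Return corrected position info for the mapped hand-box position."""
        if self.state is None:
            self.state = mapped_position.astype(float)
        else:
            self.state = self.alpha * mapped_position + (1 - self.alpha) * self.state
        return self.state
```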
In one possible implementation, the menu area of the display device may include a plurality of virtual tool identifiers, such as the names or symbols of a plurality of virtual tools.
In one possible implementation, a user may switch virtual tools and perform drawing operations based on the switched virtual tools.
For example, a movement identifier may be presented at the starting position of the first virtual tool in response to detecting that the user makes a second target gesture. Then, when the duration for which the movement identifier stays at the display position corresponding to a second virtual tool identifier, among the plurality of tool identifiers, exceeds a second preset duration, the second virtual tool is displayed at the starting position of the first virtual tool. Finally, in response to a target processing operation, the drawn portion is processed based on the second virtual tool.
Here, the second target gesture may be a gesture indicating that drawing should stop. For example, if the user's palm faces the display device, drawing can follow the palm's movement (i.e., the movement of the hand detection frame); if the user's hand faces away from the display device, a movement identifier, for example a mouse-style cursor, may be presented at the starting position of the first virtual tool.
It should be noted that the starting position of the first virtual tool may change with the position information of the hand detection frame. While the user is detected to be making the second target gesture, the starting position may be updated in real time according to the position information of the hand detection frame; equivalently, the movement identifier moves with the hand detection frame, and the display position of the movement identifier after the movement is also the starting position.
Processing the drawn portion based on the second virtual tool in response to the target processing operation may mean executing, on the drawn portion, the processing function corresponding to the second virtual tool; for example, if the second virtual tool is an eraser, part of the drawn content may be removed. Responding to the target processing operation may mean responding to movement of the user's hand detection frame and determining the processing position corresponding to the second virtual tool.
In this way, the user can switch virtual tools without touching the screen, which increases the interaction between the user and the device during drawing and improves the user experience.
In one possible implementation, when position information of multiple hand detection frames is detected, a target hand detection frame may be selected at random and drawing performed based on its position information; alternatively, the starting positions of two first virtual tools may be determined from two hand detection frames respectively, and the two first virtual tools controlled to draw based on the respective changes in the position information of the two frames.
In summary, the above embodiments can be described as follows: when the hand detection information meets the trigger condition, the first virtual tool is controlled to draw based on the changes in the position information of the hand detection frame; when the user makes the second target gesture, drawing stops and a movement identifier is displayed, whose position changes with the position information of the hand detection frame; when the hand detection information is again detected to meet the trigger condition, or the user stops making the second target gesture, the display position of the first virtual tool can be re-determined based on the position information of the hand detection frame and displayed.
The hand detection information meeting the trigger condition can be understood as the start of a drawing step, and the user making the second target gesture as the end of that step.
A specific method for determining the starting position of the first virtual tool on the display device based on the position information of the hand detection frame, i.e., how the conversion between image coordinates in the image to be detected and coordinates on the display device is achieved, is described below.
In one possible implementation, the starting position of the first virtual tool in the display device may be determined based on the position information of the hand detection frame and a proportional relationship between the image to be detected and a display interface of the display device.
In a specific implementation, the target position information of the center point of the user's hand detection frame on the display interface can be determined from the proportional relationship between the image to be detected and the display interface of the display device together with the position information of the hand detection frame; this target position is then taken as the starting position of the first virtual tool, as the sketch below shows.
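A minimal sketch of this proportional mapping; the example frame and display resolutions are assumptions.

```python
# Sketch of mapping the hand-box center from image coordinates to display
# coordinates using the proportional relationship between the image to be
# detected and the display interface.
def to_display(center_xy, image_size, display_size):
    (x, y), (iw, ih), (dw, dh) = center_xy, image_size, display_size
    return (x * dw / iw, y * dh / ih)  # starting position of the first virtual tool


# e.g. a 640x360 camera frame mapped onto a 1920x1080 display:
# to_display((320, 180), (640, 360), (1920, 1080)) -> (960.0, 540.0)
```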
In an alternative embodiment, before determining the starting position of the first virtual tool based on the position information of the hand detection frame, the method further comprises: detecting the image to be detected and determining target joint point position information of the user included in the image to be detected;
Determining the starting position of the first virtual tool based on the position information of the hand detection frame then comprises: determining the starting position based on the position information of the hand detection frame, the target joint point position information, and a reference ratio corresponding to the user, where the reference ratio is used to amplify a first distance between the position of the hand detection frame and the target joint point position.
Wherein the reference ratio may be determined according to the following steps:
Step one: compute the distance between the hand detection frame and the target joint point to obtain the arm length of the user in the image to be detected.
Step two: compute the distances between the target joint point and each vertex of the image to be detected to obtain a second distance, where the second distance is the maximum among those distances.
Step three: determine the ratio of the arm length to the second distance as the reference ratio.
In step one, since the distance between the target joint point and the hand detection frame can represent the longest distance the arm stretches during a person's motion, the distance between the center point of the hand detection frame and the target joint point may be determined first to obtain the arm length of the user in the image to be detected.
For example, referring to fig. 2, a first straight-line distance between the right shoulder joint point 6 (the target joint point) and the right elbow joint point 7, a second straight-line distance between the right elbow joint point 7 and the right wrist joint point 8, and a third straight-line distance between the right wrist joint point 8 and the center point 19 of the right hand frame (the hand detection frame) may be calculated, and the sum of the three determined as the arm length of the user. Alternatively, the corresponding distances on the left side may be calculated: between the left shoulder joint point 9 (the target joint point) and the left elbow joint point 10, between the left elbow joint point 10 and the left wrist joint point 11, and between the left wrist joint point 11 and the center point 14 of the left hand frame (the hand detection frame), with their sum determined as the arm length of the user.
In step two, after calculating the straight-line distances between the target joint point and the four vertices of the image to be detected, the second distance can be determined from the four resulting distances, that is, the maximum of the four is selected as the second distance.
Alternatively, with the center pixel of the image to be detected taken as the origin in advance, the image is divided evenly into four areas: a first area at the upper left, a second area at the upper right, a third area at the lower left, and a fourth area at the lower right. The area containing the target joint point can then be determined from the target joint point position information; the target vertex farthest from the target joint point is determined from the area it lies in, and the straight-line distance between the target joint point and the target vertex is calculated to obtain the second distance. For example, if the target joint point lies in the third area, the upper-right vertex is determined as the target vertex; if it lies in the fourth area, the upper-left vertex is determined as the target vertex.
In step three, the ratio of the arm length c to the second distance d can be determined as the reference ratio, that is, the reference ratio is c/d, where c is the farthest straight-line distance, namely the arm length calculated in step one.
If the first distance is a, then when it is amplified based on the reference ratio, the amplified target distance is a/(c/d) = (a/c)×d. Since c is the farthest straight-line distance, a/c is necessarily no greater than 1, so (a/c)×d is necessarily no greater than d; the amplified target distance therefore cannot exceed the second distance.
In this way, by determining the user's arm length and the second distance in the image to be detected and taking their ratio as the reference ratio, the situation where the amplified target distance exceeds the second distance, and the determined intermediate position information thus falls outside the range of the image to be detected, is avoided when the first distance is amplified based on the reference ratio. A sketch of the computation follows.
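A minimal sketch of the three reference-ratio steps, assuming 2D key-point coordinates in image space.

```python
# Sketch of the reference-ratio computation: arm length c as the summed
# shoulder-elbow, elbow-wrist, and wrist-hand segment lengths; second
# distance d as the largest joint-to-image-vertex distance; ratio = c / d.
import numpy as np


def reference_ratio(shoulder, elbow, wrist, hand_center, image_w, image_h):
    pts = [np.asarray(p, float) for p in (shoulder, elbow, wrist, hand_center)]
    arm_len = sum(np.linalg.norm(b - a) for a, b in zip(pts, pts[1:]))  # c
    vertices = [(0, 0), (image_w, 0), (0, image_h), (image_w, image_h)]
    second = max(np.linalg.norm(pts[0] - np.asarray(v, float)) for v in vertices)  # d
    return arm_len / second  # c / d, used to amplify the first distance
```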
In an alternative embodiment, determining the starting position of the first virtual tool based on the position information of the hand detection frame, the target joint point position information, and the reference ratio corresponding to the user includes:
Step one: determine intermediate position information of the first virtual tool in the image coordinate system corresponding to the image to be detected, based on the position information of the hand detection frame, the target joint point position information, and the reference ratio corresponding to the user.
Step two: determine the target display position of the movement identifier on the display device based on the intermediate position information.
The specific implementation of step one is as follows:
1. Obtain a first distance between the hand detection frame and the target joint point based on the position information of the hand detection frame and the target joint point position information.
2. Amplify the first distance based on the reference ratio to obtain a target distance.
3. Determine the intermediate position information of the movement identifier in the image coordinate system corresponding to the image to be detected, based on the target distance and the position information of the hand detection frame.
Here, the first distance between the hand detection frame and the target joint point can be calculated from the position information of the hand detection frame and the target joint point position information: if the center point of the hand detection frame is (x1, y1) and the target joint point is (x2, y2), the first distance is C1 = √((x1 − x2)² + (y1 − y2)²).
The first distance C1 can then be amplified based on the reference ratio c/d to determine the target distance D1: C1/D1 = c/d, i.e., the target distance D1 = C1 × d/c. Finally, the position information of the center point of the hand detection frame after the distance is amplified can be determined from the target distance and the center-point coordinates indicated by the position information of the hand detection frame, and this amplified position is taken as the intermediate position information of the movement identifier in the image coordinate system corresponding to the image to be detected.
In step two, for example, the intermediate position information of the movement identifier in the image coordinate system corresponding to the image to be detected is converted, based on the proportional relationship between the display interface of the display device and the image to be detected, into the coordinate system corresponding to the display interface, and the target display position of the movement identifier on the display device is determined. Both steps are sketched below.
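An end-to-end sketch of the two steps, amplifying the joint-to-hand vector by d/c (i.e., dividing by the reference ratio c/d) and then mapping proportionally onto the display; coordinate conventions are assumptions.

```python
# Sketch of determining the starting position: amplify the joint-to-hand
# vector to get the intermediate position in image coordinates, then map
# it onto the display interface proportionally.
import numpy as np


def start_position(hand_center, joint, ratio, image_size, display_size):
    hand, joint = np.asarray(hand_center, float), np.asarray(joint, float)
    # C1 is the first distance; D1 = C1 / (c/d) = C1 * d/c is the amplified
    # target distance, realized by scaling the joint-to-hand vector by 1/ratio.
    intermediate = joint + (hand - joint) / ratio
    (iw, ih), (dw, dh) = image_size, display_size
    return (intermediate[0] * dw / iw, intermediate[1] * dh / ih)
```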
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Based on the same inventive concept, the embodiments of the present disclosure further provide a drawing device corresponding to the drawing method, and since the principle of solving the problem by the device in the embodiments of the present disclosure is similar to that of the drawing method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and the repetition is omitted.
Referring to fig. 3, a schematic architecture diagram of a drawing device according to an embodiment of the disclosure is shown, where the device includes: an acquisition module 301, a first determination module 302, a second determination module 303, a drawing module 304, and a control module 305; wherein,
An acquiring module 301, configured to acquire an image to be detected of a target area;
The first determining module 302 is configured to detect the image to be detected, and determine hand detection information in the image to be detected; the hand detection information comprises position information of a hand detection frame;
a second determining module 303, configured to determine, based on the position information of the hand detection frame, a start position of a first virtual tool in the display device if the hand detection information meets a trigger condition;
And a drawing module 304, configured to control the first virtual tool to draw with the starting position as a drawing starting point according to the change of the position information of the hand detection frame detected in the target period.
In a possible implementation manner, the drawing module 304 is configured to, when controlling the first virtual tool to draw with the starting position as a drawing starting point:
determining a target tool type corresponding to first gesture information according to the first gesture information indicated in the hand detection information;
controlling the first virtual tool under the target tool type to draw with the starting position as the drawing starting point, where the drawing result conforms to the attribute corresponding to the target tool type.
In a possible embodiment, the apparatus further comprises a control module 305 for:
starting the drawing function of the display device when it is detected, based on the image to be detected, that the user is making a first target gesture and the duration of the first target gesture exceeds a first preset duration.
In a possible implementation manner, the hand detection information meeting the trigger condition includes at least one of the following:
The second gesture information indicated in the hand detection information accords with a preset trigger gesture type;
And the duration that the position of the hand detection frame indicated in the hand detection information is in the target area exceeds the set duration.
In a possible implementation manner, in a case that the first virtual tool is a virtual brush, the target tool type is a target brush type;
After determining the target tool type corresponding to the first gesture information, the second determining module 303 is further configured to:
And determining a target virtual brush for drawing from a plurality of preset virtual brushes matched with the target brush type, and displaying the target virtual brush at the starting position.
In a possible implementation manner, the second determining module 303 is configured, when determining a target virtual brush for drawing from a plurality of preset virtual brushes matched with the target brush type, to:
And determining the target virtual brush matched with the user attribute information from a plurality of preset virtual brushes matched with the target brush type according to the user attribute information of the user corresponding to the hand detection frame.
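One plausible realization of this attribute-based selection, with entirely invented preset brushes and user-attribute fields:

    # Hypothetical presets: each brush type maps to candidate brushes tagged
    # with the user attribute they suit.
    PRESET_BRUSHES = {
        "ink": [
            {"name": "fine_ink", "age_group": "adult"},
            {"name": "crayon_ink", "age_group": "child"},
        ],
    }

    def select_target_brush(target_brush_type, user_attributes):
        candidates = PRESET_BRUSHES.get(target_brush_type, [])
        for brush in candidates:
            if brush["age_group"] == user_attributes.get("age_group"):
                return brush
        # Fall back to the first preset when no attribute match is found.
        return candidates[0] if candidates else None

For instance, select_target_brush("ink", {"age_group": "child"}) picks the crayon-styled brush.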
In a possible implementation, a menu area of the display device includes a plurality of virtual tool identifiers.
The drawing module 304 is further configured to:
display a movement identifier at the starting position of the first virtual tool in response to detecting that the user makes a second target gesture;
display a second virtual tool at the starting position of the first virtual tool in a case that the movement identifier is detected to stay, for longer than a second preset duration, at the display position corresponding to a second virtual tool identifier among the plurality of virtual tool identifiers;
process the drawn portion based on the second virtual tool in response to a target processing operation.
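A sketch of the dwell-based menu selection, with invented menu geometry and an assumed second preset duration:

    import time

    SECOND_PRESET_DURATION = 1.5  # assumed second preset duration, in seconds

    class MenuHoverSelector:
        """Returns a tool identifier once the movement identifier has stayed
        on that tool's display position for longer than the preset duration."""

        def __init__(self, tool_rects):
            self.tool_rects = tool_rects  # tool id -> (x0, y0, x1, y1)
            self._hovered = None
            self._since = None

        def update(self, pos, now=None):
            now = time.time() if now is None else now
            hit = None
            for tool_id, (x0, y0, x1, y1) in self.tool_rects.items():
                if x0 <= pos[0] <= x1 and y0 <= pos[1] <= y1:
                    hit = tool_id
                    break
            if hit != self._hovered:              # entered a new entry (or left one)
                self._hovered, self._since = hit, now
                return None
            if hit is not None and (now - self._since) > SECOND_PRESET_DURATION:
                return hit                        # second virtual tool selected
            return None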
In a possible implementation, when controlling the first virtual tool to draw with the starting position as the drawing starting point according to the change of the position information of the hand detection frame detected in the target period, the drawing module 304 is configured to:
determine corrected position information according to the detected change of the position information of the hand detection frame, and control the first virtual tool to draw with the starting position as the drawing starting point according to the corrected position information.
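The disclosure does not fix a particular correction; one common choice is exponential smoothing of the detection-frame positions, which damps detector jitter before the stroke is rendered (the smoothing factor below is an assumption):

    def smooth_positions(raw_positions, alpha=0.5):
        """Exponentially smooths a sequence of (x, y) hand-frame positions;
        smaller alpha follows the raw detections less eagerly."""
        corrected, prev = [], None
        for x, y in raw_positions:
            if prev is None:
                prev = (float(x), float(y))
            else:
                prev = (alpha * x + (1 - alpha) * prev[0],
                        alpha * y + (1 - alpha) * prev[1])
            corrected.append(prev)
        return corrected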
In a possible implementation, when determining the starting position of the first virtual tool in the display device based on the position information of the hand detection frame, the second determining module 303 is configured to:
determine the starting position of the first virtual tool in the display device based on the position information of the hand detection frame and the proportional relation between the image to be detected and the display interface of the display device.
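Under the simplest reading of that proportional relation, namely a direct linear scaling between image and display interface (an assumption; the disclosure does not state more), the mapping is:

    def to_display_coords(box_center, image_size, display_size):
        # box_center: (x, y) of the hand detection frame in image coordinates
        # image_size: (width, height) of the image to be detected
        # display_size: (width, height) of the display interface
        (cx, cy), (iw, ih), (dw, dh) = box_center, image_size, display_size
        return (cx * dw / iw, cy * dh / ih)

For example, to_display_coords((320, 240), (640, 480), (1920, 1080)) gives (960.0, 540.0): the centre of the image maps to the centre of the display.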
The processing flow of each module in the device, and the interaction flow between the modules, may refer to the related descriptions in the above method embodiments and are not detailed here.
Based on the same technical concept, the embodiments of the present disclosure further provide a computer device. Referring to fig. 4, a schematic structural diagram of a computer device 400 according to an embodiment of the disclosure includes a processor 401, a memory 402, and a bus 403. The memory 402 is configured to store execution instructions and includes an internal memory 4021 and an external memory 4022. The internal memory 4021 temporarily stores operation data of the processor 401 and data exchanged with the external memory 4022, such as a hard disk; the processor 401 exchanges data with the external memory 4022 through the internal memory 4021. When the computer device 400 runs, the processor 401 and the memory 402 communicate through the bus 403, causing the processor 401 to execute the following instructions:
acquiring an image to be detected of a target area;
detecting the image to be detected, and determining hand detection information in the image to be detected, the hand detection information comprising position information of a hand detection frame;
determining a starting position of a first virtual tool in the display device based on the position information of the hand detection frame in a case that the hand detection information meets a trigger condition; and
controlling the first virtual tool to draw with the starting position as a drawing starting point according to the change of the position information of the hand detection frame detected in a target period.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the drawing method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
Embodiments of the present disclosure further provide a computer program product carrying program code, the instructions included in the program code being usable to perform the steps of the drawing method described in the above method embodiments; reference may be made to those embodiments, which are not repeated here.
The above computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, it is embodied as a software product, such as a software development kit (SDK).
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and device described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The device embodiments described above are merely illustrative: for example, the division of the units is merely a logical function division, and other divisions are possible in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Further, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present disclosure, intended to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited to them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that, within the technical scope disclosed herein, the technical solutions described in the foregoing embodiments may still be modified, changes may readily be conceived, or equivalent substitutions may be made for some of their technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure and shall all be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A method of drawing, comprising:
acquiring an image to be detected of a target area;
detecting the image to be detected, and determining hand detection information in the image to be detected, the hand detection information comprising position information of a hand detection frame;
determining a target tool type corresponding to first gesture information indicated in the hand detection information, wherein the target tool type is a target brush type;
determining a target virtual brush for drawing from a plurality of preset virtual brushes matching the target brush type, wherein the target virtual brush is a virtual brush matching attribute information of a user;
determining a starting position of the target virtual brush in a display device based on the position information of the hand detection frame in a case that the hand detection information meets a trigger condition; and
controlling the target virtual brush to draw with the starting position as a drawing starting point according to a change of the position information of the hand detection frame detected in a target period.
2. The method of claim 1, wherein the controlling the target virtual brush to draw with the starting position as the drawing starting point comprises:
controlling the target virtual brush to draw with the starting position as the drawing starting point, wherein the drawing result conforms to the attribute corresponding to the target tool type.
3. The method according to claim 1, wherein the method further comprises:
enabling the drawing function of the display device in a case that the user is detected, based on the image to be detected, to be making a first target gesture and the duration of the first target gesture exceeds a first preset duration.
4. The method according to any one of claims 1 to 3, wherein the hand detection information meeting the trigger condition includes at least one of the following:
second gesture information indicated in the hand detection information conforming to a preset trigger gesture type;
the duration for which the position of the hand detection frame indicated in the hand detection information stays within the target area exceeding a set duration.
5. The method according to claim 2, wherein the method further comprises:
displaying the target virtual brush at the starting position.
6. The method of claim 1, wherein the determining the target virtual brush for drawing from the plurality of preset virtual brushes matching the target brush type comprises:
determining, according to user attribute information of the user corresponding to the hand detection frame, the target virtual brush matching the user attribute information from the plurality of preset virtual brushes matching the target brush type.
7. The method of claim 1, wherein a menu area of the display device includes a plurality of virtual tool identifiers, and the method further comprises:
displaying a movement identifier at the starting position of the target virtual brush in response to detecting that the user makes a second target gesture;
displaying a second virtual tool at the starting position of the target virtual brush in a case that the movement identifier is detected to stay, for longer than a second preset duration, at a display position corresponding to a second virtual tool identifier among the plurality of virtual tool identifiers;
processing the drawn portion based on the second virtual tool in response to a target processing operation.
8. The method according to claim 1, wherein the controlling the target virtual brush to draw with the starting position as a drawing starting point according to the change of the position information of the hand detection frame detected in the target period comprises:
determining corrected position information according to the detected change of the position information of the hand detection frame; and controlling, according to the corrected position information, the target virtual brush to draw with the starting position as the drawing starting point.
9. The method of claim 1, wherein determining a starting position of the target virtual brush in a display device based on the position information of the hand detection frame comprises:
determining the starting position of the target virtual brush in the display device based on the position information of the hand detection frame and the proportional relation between the image to be detected and the display interface of the display device.
10. A drawing device, characterized by comprising:
an acquisition module, configured to acquire an image to be detected of a target area;
a first determining module, configured to detect the image to be detected and determine hand detection information in the image to be detected, the hand detection information comprising position information of a hand detection frame;
a drawing module, configured to determine a target tool type corresponding to first gesture information indicated in the hand detection information, wherein the target tool type is a target brush type;
a second determining module, configured to determine a target virtual brush for drawing from a plurality of preset virtual brushes matching the target brush type, wherein the target virtual brush is a virtual brush matching attribute information of a user;
the second determining module being further configured to determine, based on the position information of the hand detection frame, a starting position of the target virtual brush in a display device in a case that the hand detection information meets a trigger condition; and
the drawing module being further configured to control the target virtual brush to draw with the starting position as a drawing starting point according to a change of the position information of the hand detection frame detected in a target period.
11. A computer device, comprising: a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device runs, and the machine-readable instructions, when executed by the processor, performing the steps of the drawing method according to any one of claims 1 to 9.
12. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the drawing method according to any one of claims 1 to 9.

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110996280.6A CN113703577B (en) 2021-08-27 2021-08-27 Drawing method, drawing device, computer equipment and storage medium
PCT/CN2022/087946 WO2023024536A1 (en) 2021-08-27 2022-04-20 Drawing method and apparatus, and computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110996280.6A CN113703577B (en) 2021-08-27 2021-08-27 Drawing method, drawing device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113703577A CN113703577A (en) 2021-11-26
CN113703577B (en) 2024-07-16

Family

ID=78656074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110996280.6A Active CN113703577B (en) 2021-08-27 2021-08-27 Drawing method, drawing device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113703577B (en)
WO (1) WO2023024536A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113703577B (en) * 2021-08-27 2024-07-16 Beijing Sensetime Technology Development Co Ltd Drawing method, drawing device, computer equipment and storage medium


Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9971490B2 (en) * 2014-02-26 2018-05-15 Microsoft Technology Licensing, Llc Device control
US9727161B2 (en) * 2014-06-12 2017-08-08 Microsoft Technology Licensing, Llc Sensor correlation for pen and touch-sensitive computing device interaction
CN108268181A (en) * 2017-01-04 2018-07-10 奥克斯空调股份有限公司 A kind of control method and device of non-contact gesture identification
US11023055B2 (en) * 2018-06-01 2021-06-01 Apple Inc. Devices, methods, and graphical user interfaces for an electronic device interacting with a stylus
CN108921101A (en) * 2018-07-04 2018-11-30 百度在线网络技术(北京)有限公司 Processing method, equipment and readable storage medium storing program for executing based on gesture identification control instruction
US11188145B2 (en) * 2019-09-13 2021-11-30 DTEN, Inc. Gesture control systems
CN110750160B (en) * 2019-10-24 2023-08-18 京东方科技集团股份有限公司 Gesture-based drawing screen drawing method and device, drawing screen and storage medium
CN112262393A (en) * 2019-12-23 2021-01-22 商汤国际私人有限公司 Gesture recognition method and device, electronic equipment and storage medium
CN112506340B (en) * 2020-11-30 2023-07-25 北京市商汤科技开发有限公司 Equipment control method, device, electronic equipment and storage medium
CN112925414A (en) * 2021-02-07 2021-06-08 深圳创维-Rgb电子有限公司 Display screen gesture drawing method and device and computer readable storage medium
CN112987933A (en) * 2021-03-25 2021-06-18 北京市商汤科技开发有限公司 Device control method, device, electronic device and storage medium
CN113703577B (en) * 2021-08-27 2024-07-16 北京市商汤科技开发有限公司 Drawing method, drawing device, computer equipment and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932053A (en) * 2018-05-21 2018-12-04 腾讯科技(深圳)有限公司 Drawing practice, device, storage medium and computer equipment based on gesture

Also Published As

Publication number Publication date
WO2023024536A1 (en) 2023-03-02
CN113703577A (en) 2021-11-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40061847
Country of ref document: HK
GR01 Patent grant