CN105045499A - Method and device for judging object touch - Google Patents
Method and device for judging object touch
- Publication number
- CN105045499A CN105045499A CN201510325483.7A CN201510325483A CN105045499A CN 105045499 A CN105045499 A CN 105045499A CN 201510325483 A CN201510325483 A CN 201510325483A CN 105045499 A CN105045499 A CN 105045499A
- Authority
- CN
- China
- Prior art keywords
- touched
- point
- coordinate
- layer
- touch points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
The embodiment of the invention discloses a method and a device for judging object touch. The method comprises the steps of: acquiring the position coordinates of a touch point triggered by a user on a screen; acquiring the coordinates of the touched point in the touched object according to the position coordinates of the touch point and the prestored coordinates of the touched object; creating a preset layer, wherein the layer is white and transparent, and the point at the preset position in the layer comprises the touch point and the touched point in the touched object; and, if the transparency of the point at the preset position in the layer is not 0, determining that the touch point is on the touched object. Whether the position of the current touch point lies within the range of the animation is thereby detected accurately, with blank areas regarded as invalid, achieving the purpose of improving visual precision.
Description
Technical field
The embodiments of the present invention relate to the technical field of mobile terminals, and in particular to a method and device for judging whether an object is touched.
Background art
At present, when a user triggers a displayed animation on a touch screen, the usual approach is to first obtain the smallest rectangle enclosing the currently displayed animation, and then judge whether the touch point triggered by the user falls within this rectangle; if so, the user is determined to have triggered the animation. However, if the animation's shape is irregular, this easily leads to inaccurate judgments of whether the animation was triggered, so the mobile terminal may register a touch even though, visually, the animation was not touched.
Summary of the invention
The object of the embodiments of the present invention is to propose a method and device for judging whether an object is touched, intended to solve the problem of how to accurately judge whether an object is touched.
To achieve this object, the embodiments of the present invention adopt the following technical solutions:
A method for judging whether an object is touched, the method comprising:
obtaining the position coordinates of a touch point triggered by a user on a screen;
obtaining the coordinates of the touched point in the touched object according to the position coordinates of the touch point and the prestored coordinates of the touched object;
creating a preset layer, the layer being white and transparent, wherein the point at the preset position in the layer comprises the touch point and the touched point in the touched object;
if the transparency of the point at the preset position in the layer is not 0, determining that the touch point is on the touched object.
Preferably, after obtaining the coordinates of the touched point in the touched object according to the position coordinates of the touch point and the prestored coordinates of the touched object, the method further comprises:
obtaining a first vector from the touch point to the touched point in the touched object;
judging whether the touched object has been scaled;
if so, obtaining a second vector, the second vector being the first vector after scaling.
Preferably, creating the preset layer, the layer being white and transparent, wherein the point at the preset position in the layer comprises the touch point and the touched point in the touched object, comprises:
creating the preset layer, the layer being white and transparent;
obtaining the coordinates of the preset point in the layer;
setting the coordinates of the touch point to the coordinates of the preset point in the layer, and setting the coordinates of the touched point in the touched object to the coordinates obtained by subtracting the second vector from the preset coordinates.
Preferably, the method further comprises:
if the transparency of the point at the preset position in the layer is 0, determining that the touch point is not on the touched object.
Preferably, the method further comprises:
after determining whether the touch point is on the touched object, deleting the layer and reverting the coordinates of the touched point in the touched object to the coordinates of the corresponding point in the original touched object.
A device for judging whether an object is touched, the device comprising:
a first acquisition module, configured to obtain the position coordinates of a touch point triggered by a user on a screen;
a second acquisition module, configured to obtain the coordinates of the touched point in the touched object according to the position coordinates of the touch point and the prestored coordinates of the touched object;
a creation module, configured to create a preset layer, the layer being white and transparent, wherein the point at the preset position in the layer comprises the touch point and the touched point in the touched object;
a first determination module, configured to determine that the touch point is on the touched object if the transparency of the point at the preset position in the layer is not 0.
Preferably, the device further comprises:
a third acquisition module, configured to obtain a first vector from the touch point to the touched point in the touched object;
a judgment module, configured to judge whether the touched object has been scaled;
a fourth acquisition module, configured to obtain, if so, a second vector, the second vector being the first vector after scaling.
Preferably, the creation module comprises:
a creation unit, configured to create the preset layer, the layer being white and transparent;
an acquisition unit, configured to obtain the coordinates of the preset point in the layer;
a setting unit, configured to set the coordinates of the touch point to the coordinates of the preset point in the layer, and to set the coordinates of the touched point in the touched object to the coordinates obtained by subtracting the second vector from the preset coordinates.
Preferably, the device further comprises:
a second determination module, configured to determine that the touch point is not on the touched object if the transparency of the point at the preset position in the layer is 0.
Preferably, the device further comprises:
a deletion module, configured to, after determining whether the touch point is on the touched object, delete the layer and revert the coordinates of the touched point in the touched object to the coordinates of the corresponding point in the original touched object.
In the embodiments of the present invention, the position coordinates of a touch point triggered by the user on a screen are acquired; the coordinates of the touched point in the touched object are obtained according to the position coordinates of the touch point and the prestored coordinates of the touched object; a preset layer is created, the layer being white and transparent, wherein the point at the preset position in the layer comprises the touch point and the touched point in the touched object; and if the transparency of the point at the preset position in the layer is not 0, it is determined that the touch point is on the touched object. Whether the position of the current touch point lies within the range of the animation can thus be detected accurately, with blank areas regarded as invalid, thereby achieving the purpose of improving visual precision.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the first embodiment of the method for judging whether an object is touched according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of the second embodiment of the method for judging whether an object is touched according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of the third embodiment of the method for judging whether an object is touched according to an embodiment of the present invention;
Fig. 4 is a schematic flowchart of the fourth embodiment of the method for judging whether an object is touched according to an embodiment of the present invention;
Fig. 5 is a functional block diagram of the device for judging whether an object is touched according to an embodiment of the present invention;
Fig. 6 is a functional block diagram of the creation module 503 according to an embodiment of the present invention;
Fig. 7 is a functional block diagram of the device for judging whether an object is touched according to an embodiment of the present invention;
Fig. 8 is a functional block diagram of the device for judging whether an object is touched according to an embodiment of the present invention;
Fig. 9 is a functional block diagram of the device for judging whether an object is touched according to an embodiment of the present invention.
Detailed description of the embodiments
The embodiments of the present invention are described in further detail below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the embodiments of the present invention, not to limit them. It should also be noted that, for convenience of description, the drawings show only the parts relevant to the embodiments of the present invention rather than the entire structure.
Embodiment one
Referring to Fig. 1, Fig. 1 is a schematic flowchart of the first embodiment of the method for judging whether an object is touched according to an embodiment of the present invention.
In embodiment one, the method for judging whether an object is touched comprises:
Step 101: obtain the position coordinates of the touch point triggered by the user on the screen.
Specifically, obtain the animation object currently to be detected; the coordinates of the animation object are the set of the coordinates of each point on the object. Obtain the coordinates P1(x, y) of the currently detected touch point.
Step 102: obtain the coordinates of the touched point in the touched object according to the position coordinates of the touch point and the prestored coordinates of the touched object.
Specifically, according to the coordinates of the touch point P1 and the coordinates of the animation object, obtain the coordinates of the touched animation node N on the animation object, and compute the vector V(x, y) from the touch point P1 to the lower-left corner of node N.
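To make the vector bookkeeping concrete, the following is a minimal TypeScript sketch of step 102; the Point type and the function name are illustrative assumptions rather than part of the patent, and screen coordinates are assumed for both P1 and the prestored corner of N.

```typescript
// Illustrative sketch of step 102 (assumed types and names).
type Point = { x: number; y: number };

// V = (lower-left corner of N) - P1: the vector from the touch point
// to the lower-left corner of the touched animation node.
function vectorToNodeCorner(p1: Point, nodeCorner: Point): Point {
  return { x: nodeCorner.x - p1.x, y: nodeCorner.y - p1.y };
}
```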
Step 103: create a preset layer, the layer being white and transparent, wherein the point at the preset position in the layer comprises the touch point and the touched point in the touched object.
Preferably, creating the preset layer, the layer being white and transparent, wherein the point at the preset position in the layer comprises the touch point and the touched point in the touched object, comprises:
creating the preset layer, the layer being white and transparent;
obtaining the coordinates of the preset point in the layer;
setting the coordinates of the touch point to the coordinates of the preset point in the layer, and setting the coordinates of the touched point in the touched object to the coordinates obtained by subtracting the second vector from the preset coordinates.
Specifically, allocate a canvas C and a block of memory that the current system can use for display, with a size of 1 pixel. Obtain the coordinates P2 of the current node N, translate P2 by the vector -V', and then render node N on canvas C; at this point the touch point P1, together with the point on N that P1 touched, moves to the lower-left corner of canvas C, i.e., its origin.
Step 104: if the transparency of the point at the preset position in the layer is not 0, determine that the touch point is on the touched object.
Specifically, the point at the preset position in the layer can be the origin. Obtain the RGBA value of the origin of canvas C; each visible point has a value composed of RGBA components, in which the A value represents transparency. If the transparency of the touched point at the origin is not 0, it is determined that the object has been touched; if it is 0, it is determined that the object has not been touched.
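As a concrete illustration of steps 103 and 104, the following sketch uses the HTML5 Canvas API as a stand-in for canvas C; the drawNode callback and the coordinate convention (the node drawn with its corner at the context origin) are assumptions, since the patent targets animation nodes on a mobile terminal rather than a browser.

```typescript
// A hedged sketch of the alpha-test hit check (steps 103-104).
function isTouchOnNode(
  touch: { x: number; y: number },      // touch point P1 in screen coordinates
  nodeCorner: { x: number; y: number }, // corner P2 of node N in screen coordinates
  drawNode: (ctx: CanvasRenderingContext2D) => void, // renders N with its corner at (0, 0)
): boolean {
  // Step 103: a 1-pixel, fully transparent off-screen layer (canvas C).
  const layer = document.createElement("canvas");
  layer.width = 1;
  layer.height = 1;
  const ctx = layer.getContext("2d")!;

  // Translate so the touched point of N (touch - nodeCorner in node-local
  // coordinates) lands on the layer's single pixel at the origin.
  ctx.translate(nodeCorner.x - touch.x, nodeCorner.y - touch.y);
  drawNode(ctx);

  // Step 104: read the RGBA of the origin pixel; index 3 is the A (alpha)
  // component. Non-zero alpha means the touch point lies on the node.
  const alpha = ctx.getImageData(0, 0, 1, 1).data[3];
  return alpha !== 0;
}
```

Because the layer starts fully transparent, any pixel the node does not cover keeps alpha 0, so blank areas inside the node's bounding rectangle are correctly treated as misses.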
Embodiment two
Referring to Fig. 2, Fig. 2 is a schematic flowchart of the second embodiment of the method for judging whether an object is touched according to an embodiment of the present invention.
On the basis of embodiment one, after obtaining the coordinates of the touched point in the touched object according to the position coordinates of the touch point and the prestored coordinates of the touched object, the method further comprises:
Step 105: obtain the first vector from the touch point to the touched point in the touched object.
Step 106: judge whether the touched object has been scaled.
Step 107: if so, obtain the second vector, the second vector being the first vector after scaling.
Specifically, obtain the horizontal and vertical scale values S(x, y) of node N and convert the vector V into the scaled value V'(x, y), where V'.x = V.x * S.x and V'.y = V.y * S.y.
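In code, this componentwise scaling could look like the following sketch; the names V, S and V' follow the patent, while the inline types are illustrative assumptions.

```typescript
// A sketch of steps 105-107: scale the first vector V componentwise by the
// node's horizontal/vertical scale values S, yielding the second vector V'.
function scaleVector(
  v: { x: number; y: number }, // first vector V
  s: { x: number; y: number }, // scale values S of node N
): { x: number; y: number } {
  return { x: v.x * s.x, y: v.y * s.y }; // V'.x = V.x * S.x, V'.y = V.y * S.y
}
```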
Embodiment three
Referring to Fig. 3, Fig. 3 is a schematic flowchart of the third embodiment of the method for judging whether an object is touched according to an embodiment of the present invention.
On the basis of embodiment one or embodiment two (described here on the basis of embodiment two), the method for judging whether an object is touched further comprises:
Step 108: if the transparency of the point at the preset position in the layer is 0, determine that the touch point is not on the touched object.
Embodiment four
Referring to Fig. 4, Fig. 4 is a schematic flowchart of the fourth embodiment of the method for judging whether an object is touched according to an embodiment of the present invention.
On the basis of embodiment one or embodiment two (described here on the basis of embodiment two), the method for judging whether an object is touched further comprises:
Step 109: after determining whether the touch point is on the touched object, delete the layer, and revert the coordinates of the touched point in the touched object to the coordinates of the corresponding point in the original touched object.
Specifically, after determining whether the touch point is on the touched object, restore the coordinates of N to P2 and release canvas C. A corresponding operation is then performed according to whether the node was touched.
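A minimal sketch of this cleanup step follows; the mutable node shape, the helper name, and the use of an HTMLCanvasElement for canvas C are assumptions carried over from the earlier sketches.

```typescript
// A sketch of step 109: restore node N to its original coordinates P2 and
// discard the 1-pixel layer once the hit check has finished.
function finishHitCheck(
  node: { position: { x: number; y: number } },
  p2: { x: number; y: number }, // original coordinates of N
  layer: HTMLCanvasElement,     // the preset layer (canvas C)
): void {
  node.position = { x: p2.x, y: p2.y }; // revert the touched point's coordinates
  layer.remove();                       // delete the layer, releasing canvas C
}
```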
Embodiment five
Referring to Fig. 5, Fig. 5 is a functional block diagram of the device for judging whether an object is touched according to an embodiment of the present invention.
In embodiment five, the device for judging whether an object is touched comprises:
a first acquisition module 501, configured to obtain the position coordinates of a touch point triggered by the user on a screen.
Specifically, obtain the animation object currently to be detected; the coordinates of the animation object are the set of the coordinates of each point on the object. Obtain the coordinates P1(x, y) of the currently detected touch point.
a second acquisition module 502, configured to obtain the coordinates of the touched point in the touched object according to the position coordinates of the touch point and the prestored coordinates of the touched object.
Specifically, according to the coordinates of the touch point P1 and the coordinates of the animation object, obtain the coordinates of the touched animation node N on the animation object, and compute the vector V(x, y) from the touch point P1 to the lower-left corner of node N.
a creation module 503, configured to create a preset layer, the layer being white and transparent, wherein the point at the preset position in the layer comprises the touch point and the touched point in the touched object.
Preferably, Fig. 6 is a functional block diagram of the creation module 503 according to an embodiment of the present invention.
The creation module 503 comprises:
a creation unit 601, configured to create the preset layer, the layer being white and transparent;
an acquisition unit 602, configured to obtain the coordinates of the preset point in the layer;
a setting unit 603, configured to set the coordinates of the touch point to the coordinates of the preset point in the layer, and to set the coordinates of the touched point in the touched object to the coordinates obtained by subtracting the second vector from the preset coordinates.
Specifically, allocate a canvas C and a block of memory that the current system can use for display, with a size of 1 pixel. Obtain the coordinates P2 of the current node N, translate P2 by the vector -V', and then render node N on canvas C; at this point the touch point P1, together with the point on N that P1 touched, moves to the lower-left corner of canvas C, i.e., its origin.
a first determination module 504, configured to determine that the touch point is on the touched object if the transparency of the point at the preset position in the layer is not 0.
Specifically, the point at the preset position in the layer can be the origin. Obtain the RGBA value of the origin of canvas C; each visible point has a value composed of RGBA components, in which the A value represents transparency. If the transparency of the touched point at the origin is not 0, it is determined that the object has been touched; if it is 0, it is determined that the object has not been touched.
Embodiment six
Referring to Fig. 7, Fig. 7 is a functional block diagram of the device for judging whether an object is touched according to an embodiment of the present invention.
On the basis of embodiment five, the device further comprises:
a third acquisition module 505, configured to obtain the first vector from the touch point to the touched point in the touched object;
a judgment module 506, configured to judge whether the touched object has been scaled;
a fourth acquisition module 507, configured to obtain, if so, the second vector, the second vector being the first vector after scaling.
Specifically, obtain the horizontal and vertical scale values S(x, y) of node N and convert the vector V into the scaled value V'(x, y), where V'.x = V.x * S.x and V'.y = V.y * S.y.
Embodiment seven
Referring to Fig. 8, Fig. 8 is a functional block diagram of the device for judging whether an object is touched according to an embodiment of the present invention.
On the basis of embodiment five or embodiment six (described here on the basis of embodiment six), the device further comprises:
a second determination module 508, configured to determine that the touch point is not on the touched object if the transparency of the point at the preset position in the layer is 0.
Embodiment eight
Referring to Fig. 9, Fig. 9 is a functional block diagram of the device for judging whether an object is touched according to an embodiment of the present invention.
On the basis of embodiment five or embodiment six (described here on the basis of embodiment six), the device further comprises:
a deletion module 509, configured to, after determining whether the touch point is on the touched object, delete the layer and revert the coordinates of the touched point in the touched object to the coordinates of the corresponding point in the original touched object.
Specifically, after determining whether the touch point is on the touched object, restore the coordinates of N to P2 and release canvas C. A corresponding operation is then performed according to whether the node was touched.
The technical principles of the embodiments of the present invention have been described above in conjunction with specific embodiments. These descriptions serve only to explain the principles of the embodiments of the present invention and shall not be construed in any way as limiting their scope of protection. Based on the explanations herein, those skilled in the art can conceive of other embodiments of the present invention without creative effort, and all such embodiments fall within the scope of protection of the embodiments of the present invention.
Claims (10)
1. A method for judging whether an object is touched, characterized in that the method comprises:
obtaining the position coordinates of a touch point triggered by a user on a screen;
obtaining the coordinates of the touched point in the touched object according to the position coordinates of the touch point and the prestored coordinates of the touched object;
creating a preset layer, the layer being white and transparent, wherein the point at the preset position in the layer comprises the touch point and the touched point in the touched object;
if the transparency of the point at the preset position in the layer is not 0, determining that the touch point is on the touched object.
2. The method according to claim 1, characterized in that, after obtaining the coordinates of the touched point in the touched object according to the position coordinates of the touch point and the prestored coordinates of the touched object, the method further comprises:
obtaining a first vector from the touch point to the touched point in the touched object;
judging whether the touched object has been scaled;
if so, obtaining a second vector, the second vector being the first vector after scaling.
3. The method according to claim 1, characterized in that creating the preset layer, the layer being white and transparent, wherein the point at the preset position in the layer comprises the touch point and the touched point in the touched object, comprises:
creating the preset layer, the layer being white and transparent;
obtaining the coordinates of the preset point in the layer;
setting the coordinates of the touch point to the coordinates of the preset point in the layer, and setting the coordinates of the touched point in the touched object to the coordinates obtained by subtracting the second vector from the preset coordinates.
4. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
if the transparency of the point at the preset position in the layer is 0, determining that the touch point is not on the touched object.
5. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
after determining whether the touch point is on the touched object, deleting the layer and reverting the coordinates of the touched point in the touched object to the coordinates of the corresponding point in the original touched object.
6. A device for judging whether an object is touched, characterized in that the device comprises:
a first acquisition module, configured to obtain the position coordinates of a touch point triggered by a user on a screen;
a second acquisition module, configured to obtain the coordinates of the touched point in the touched object according to the position coordinates of the touch point and the prestored coordinates of the touched object;
a creation module, configured to create a preset layer, the layer being white and transparent, wherein the point at the preset position in the layer comprises the touch point and the touched point in the touched object;
a first determination module, configured to determine that the touch point is on the touched object if the transparency of the point at the preset position in the layer is not 0.
7. The device according to claim 6, characterized in that the device further comprises:
a third acquisition module, configured to obtain a first vector from the touch point to the touched point in the touched object;
a judgment module, configured to judge whether the touched object has been scaled;
a fourth acquisition module, configured to obtain, if so, a second vector, the second vector being the first vector after scaling.
8. The device according to claim 6, characterized in that the creation module comprises:
a creation unit, configured to create the preset layer, the layer being white and transparent;
an acquisition unit, configured to obtain the coordinates of the preset point in the layer;
a setting unit, configured to set the coordinates of the touch point to the coordinates of the preset point in the layer, and to set the coordinates of the touched point in the touched object to the coordinates obtained by subtracting the second vector from the preset coordinates.
9. The device according to any one of claims 6 to 8, characterized in that the device further comprises:
a second determination module, configured to determine that the touch point is not on the touched object if the transparency of the point at the preset position in the layer is 0.
10. The device according to any one of claims 6 to 8, characterized in that the device further comprises:
a deletion module, configured to, after determining whether the touch point is on the touched object, delete the layer and revert the coordinates of the touched point in the touched object to the coordinates of the corresponding point in the original touched object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510325483.7A CN105045499A (en) | 2015-06-12 | 2015-06-12 | Method and device for judging object touch |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510325483.7A CN105045499A (en) | 2015-06-12 | 2015-06-12 | Method and device for judging object touch |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105045499A (en) | 2015-11-11
Family
ID=54452074
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510325483.7A Pending CN105045499A (en) | 2015-06-12 | 2015-06-12 | Method and device for judging object touch |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105045499A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010071630A1 (en) * | 2008-12-15 | 2010-06-24 | Hewlett-Packard Development Company, L.P. | Gesture based edit mode |
CN103164158A (en) * | 2013-01-10 | 2013-06-19 | 深圳市欧若马可科技有限公司 | Method, system and device of creating and teaching painting on touch screen |
CN103399640A (en) * | 2013-08-16 | 2013-11-20 | 贝壳网际(北京)安全技术有限公司 | Method and device for controlling according to user gesture and client |
CN103455331A (en) * | 2013-08-27 | 2013-12-18 | 小米科技有限责任公司 | Icon display method and device |
CN103809914A (en) * | 2014-03-07 | 2014-05-21 | 商泰软件(上海)有限公司 | Man-machine interaction method, device and mobile terminal |
- 2015-06-12 CN CN201510325483.7A patent/CN105045499A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9335836B2 (en) | Method and electronic apparatus for realizing virtual handwriting input | |
CN104391603B (en) | Calibration method, calibration device and calibration system for a touch screen | |
CN104317494B (en) | Method and system for moving a cursor | |
CN102033660B (en) | Touch-control system and method for touch detection | |
TWI506529B (en) | Display control system of sliding operation and method thereof | |
US10120501B2 (en) | Touch implementation method and device and electronic device | |
CN103365492A (en) | Multi-point touch identification method for infrared touch screen | |
CN105807965A (en) | False trigger prevention method and apparatus | |
CN111829531A (en) | Two-dimensional map construction method and device, robot positioning system and storage medium | |
CN108200416A (en) | Coordinate mapping method, device and the projection device of projected image in projection device | |
EP2767897B1 (en) | Method for generating writing data and an electronic device thereof | |
CN104484072A (en) | Detection plate, detection component and detection method for detecting ripples of touch screen | |
CN104656903A (en) | Processing method for display image and electronic equipment | |
KR101628081B1 (en) | System and method for sensing multiple touch points based image sensor | |
CN103093475A (en) | Image processing method and electronic device | |
CN103885644A (en) | Method of improving infrared touch screen touch precision and system thereof | |
CN104881220A (en) | User instruction acquiring method and device | |
CN101751165A (en) | Touch-control system and method thereof for obtaining location of referent | |
CN102270071A (en) | Multi-point touch identification method and device | |
CN105045499A (en) | Method and device for judging object touch | |
CN104457709A (en) | Distance detection method and electronic equipment | |
CN102866808B (en) | Method and system for self-correcting of specially-shaped touch screen | |
CN106095158A (en) | Method and device for calculating a cursor displacement vector, and system for controlling cursor movement | |
CN106488160A (en) | Projection display method, device and electronic equipment | |
CN103761012B (en) | A fast algorithm suitable for large-scale infrared touch panels |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20151111