CN113157359B - Interaction method, interaction device, electronic equipment and computer readable storage medium - Google Patents
- Publication number
- CN113157359B (application CN202110179793.8A)
- Authority
- CN
- China
- Prior art keywords
- interaction
- event
- preset
- layer
- action
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
- G06F9/453—Help systems
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
Abstract
Embodiments of the present disclosure disclose an interaction method, an interaction device, an electronic device, and a computer-readable storage medium. The interaction method comprises: displaying first content in a display area of the first content; displaying interaction prompt information in the display area of the first content when a preset trigger condition is met, wherein the interaction prompt information comprises preset interaction action information indicating a preset interaction action, and the display area of the first content comprises an interaction area for detecting the preset interaction action; and, in response to detecting the preset interaction action in the interaction area, triggering a first interaction response corresponding to the preset interaction action. By adding an interaction layer containing an interaction area for detecting the preset interaction action, the method solves the technical problems of poor interactivity and the inability to realize diversified interaction effects.
Description
Technical Field
The present disclosure relates to the field of interaction, and in particular, to an interaction method, an interaction device, an electronic device, and a computer readable storage medium.
Background
With the continuous development of computer technology, digital information has advanced and mobile terminal devices have been updated at an accelerating pace. Mobile terminal devices such as tablet computers, mobile phones, and e-readers are now widely popularized. Current mobile terminals usually have a media playing function, and as smart terminals grow more capable, users can play and control media on the terminal as needed.
The content playing page in current terminal devices supports only global playback control, such as pause, play, close, and sliding-switch operations (switching the played content up and down, or switching the playing page left and right). The user cannot interact with the content inside the playing page; interactivity is therefore poor, and diversified interaction effects cannot be realized.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, an embodiment of the present disclosure provides an interaction method, including:
Displaying the first content in a display area of the first content;
Under the condition that a preset triggering condition is met, displaying interaction prompt information in a display area of the first content; the interaction prompt information comprises preset interaction action information, wherein the preset interaction action information is used for indicating preset interaction actions, and the display area of the first content comprises an interaction area used for detecting the preset interaction actions;
and responding to the detection of the preset interaction action in the interaction area, and triggering a first interaction response corresponding to the preset interaction action.
In a second aspect, embodiments of the present disclosure provide an interaction device, including:
a first display module for displaying first content in a display area of the first content;
The second display module is configured to display interaction prompt information in the display area of the first content when a preset trigger condition is met; the interaction prompt information comprises preset interaction action information, the preset interaction action information is used for indicating a preset interaction action, and the display area of the first content comprises an interaction area used for detecting the preset interaction action;
and the triggering module is used for responding to the detection of the preset interaction action in the interaction area and triggering a first interaction response corresponding to the preset interaction action.
In a third aspect, an embodiment of the present disclosure provides an electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the interaction methods of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a non-transitory computer readable storage medium, wherein the non-transitory computer readable storage medium stores computer instructions for causing a computer to perform any of the interaction methods of the first aspect.
The foregoing is only an overview of the technical solutions of the present disclosure. So that the above and other objects, features, and advantages of the present disclosure can be understood more clearly, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic flowchart of an interaction method provided by an embodiment of the present disclosure;
Fig. 2 is a schematic diagram of a display area in a terminal device according to an embodiment of the present disclosure;
Fig. 3 is a further schematic flowchart of an interaction method provided by an embodiment of the present disclosure;
Fig. 4 is a further schematic flowchart of an interaction method provided by an embodiment of the present disclosure;
Fig. 5 is a further schematic flowchart of an interaction method provided by an embodiment of the present disclosure;
Fig. 6 is a further schematic flowchart of an interaction method provided by an embodiment of the present disclosure;
Fig. 7 is a further schematic flowchart of an interaction method provided by an embodiment of the present disclosure;
Fig. 8 is a schematic view of an application scenario of an embodiment of the present disclosure;
Fig. 9 is a schematic structural diagram of an embodiment of an interaction device provided by an embodiment of the present disclosure;
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that the modifiers "one" and "a plurality" mentioned in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Fig. 1 is a flowchart of an embodiment of an interaction method according to an embodiment of the present disclosure. The interaction method of this embodiment may be performed by an interaction device, which may be implemented as software or as a combination of software and hardware, and which may be integrated in a device of an interaction system, such as an interaction server or an interaction terminal device. As shown in Fig. 1, the method comprises the following steps:
Step S101, displaying the first content in a display area of the first content.
The display area of the first content is the area in the terminal device used for displaying the first content. For example, if the first content is a video, the display area of the first content is the playing page of the video.
The first content may include any media content, such as video, pictures, text, or music; the specific form and content of the first content are not limited by this disclosure. It should be noted that the first content is a single whole: no control exists within it that applies separate control logic to a local area.
Returning to Fig. 1, the interaction method further includes step S102: displaying interaction prompt information in the display area of the first content when a preset trigger condition is met. The interaction prompt information comprises preset interaction action information, the preset interaction action information is used for indicating a preset interaction action, and the display area of the first content comprises an interaction area used for detecting the preset interaction action.
Optionally, the service logic layer corresponding to the display area of the first content includes a display layer and an interaction layer, where a control logic of the interaction layer has a higher priority than the display layer; the display layer is used for displaying the first content, and the interaction layer is used for realizing interaction of the interaction area.
Optionally, the interaction area is a local area of the display area of the first content. That is, the interaction area lies within the display area of the first content and is smaller than that display area.
Fig. 2 is a schematic diagram of a display area in a terminal device. As shown in Fig. 2, 201 is the display layer of the display area of the first content, and 202 is the interaction layer of the display area of the first content. 203, 204, and 205 are view structures of a page displayed in the terminal device, each view being used to display different content: illustratively, view 203 renders the background of the entire page, view 204 renders a column button in the page, and view 205 displays the first content, such as a video or a picture. The display area of the first content may include the whole area within view 205, or only a portion of it. It is to be understood that the sizes and positional relationships of the views and logic layers in Fig. 2 are merely examples and may be set according to specific situations in implementation; for example, the display layer 201 may have the same size as view 205, and the display layer 201 and the interaction layer 202 may completely overlap, which is not described again here. Illustratively, the display layer 201 is a rich-media play layer, such as a video player.
The preset trigger condition comprises one or more of: a first object appearing in the first content; the display time of the first content reaching a time threshold; or a preset trigger event being detected in the display area of the first content. Illustratively, the first object is a material that can appear in a preset video, such as a basketball or shoes; when the material appears in the video, the preset trigger condition is met. The display time of the first content includes the playing time of the first content, such as the playing time of a video; when the playing time reaches a preset time threshold, the preset trigger condition is met. The preset trigger event includes a click event by the user at a preset position in the display layer; when the click event is detected, the preset trigger condition is met. It can be understood that the preset trigger condition may also be a combination of the above trigger conditions, which is not described again here.
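The three trigger conditions can be checked independently or in combination. A minimal Python sketch of such a check follows; the state snapshot, all names, and the threshold values are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class DisplayState:
    """Hypothetical snapshot of the first content's display state."""
    visible_objects: set = field(default_factory=set)    # objects currently shown
    play_time: float = 0.0                               # seconds of playback so far
    click_positions: list = field(default_factory=list)  # recent click points

def trigger_condition_met(state, target_object="basketball",
                          time_threshold=10.0, trigger_zone=None):
    """Return True if any of the three preset trigger conditions holds."""
    if target_object in state.visible_objects:   # condition 1: first object appears
        return True
    if state.play_time >= time_threshold:        # condition 2: display time threshold
        return True
    if trigger_zone is not None:                 # condition 3: click at preset position
        x0, y0, x1, y1 = trigger_zone
        for (x, y) in state.click_positions:
            if x0 <= x <= x1 and y0 <= y <= y1:
                return True
    return False
```

A combined condition (e.g., object present *and* time reached) would simply replace the early returns with a conjunction.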
In this embodiment, when the preset trigger condition is satisfied, interaction prompt information is displayed in the display area of the first content. The interaction prompt information can be rendered, generated, and displayed by the display layer, and is used to prompt the user watching the first content to make a preset interaction action so as to trigger an interaction response, thereby providing more interaction effects for the user.
The interaction prompt information comprises preset interaction action information, which is used for indicating a preset interaction action. For example, the preset interaction action information may be a text description of the preset action, so that the user can perform the preset action according to the text description; or it may be a demonstration animation of the preset interaction action, so that the user can imitate the action in the demonstration animation after watching it.
Further, the interaction prompt information also includes interaction area indication information, which is used to indicate the area range of the interaction area. Illustratively, the interaction area indication information includes boundary information of the interaction area; the boundary of the interaction area may be represented by a line frame, such as that of the interaction area 206 shown in Fig. 2. The demonstration animation of the preset interaction action may also be displayed in the area corresponding to the interaction area, such as the interaction area 206 in Fig. 2. Taking the preset interaction action to be a left-to-right slide over a preset distance, the interaction prompt information includes the boundary of the interaction area, and an animation of an arrow sliding from left to right is played within that boundary to prompt the user to make a left-to-right finger-sliding action in the interaction area.
The interaction area is an area in the interaction layer. As shown in Fig. 2, in order to provide more interaction effects and enable the user to interact with the material in the first content, the interaction layer is provided to achieve interaction effects beyond control of the first content. The interaction area is a preset area or a randomly set area. It can be understood that the display layer and the interaction layer are both service logic layers and are imperceptible to the user of the terminal device; that is, when the terminal device displays the interaction prompt information, the user simply sees the first content displayed on the screen with the interaction prompt information appearing in its display area.
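The two-layer arrangement of Fig. 2 — a display layer plus a higher-priority interaction layer — can be modeled as layers consulted in priority order. A minimal Python sketch, where the class name, priority values, and coordinates are illustrative assumptions:

```python
class Layer:
    def __init__(self, name, priority, region):
        self.name = name
        self.priority = priority  # higher value wins dispatch
        self.region = region      # (left, top, right, bottom)

    def contains(self, x, y):
        l, t, r, b = self.region
        return l <= x <= r and t <= y <= b

def first_responder(layers, x, y):
    """Offer a point to layers from highest to lowest priority.

    The interaction layer outranks the display layer, so a touch inside
    the interaction area is claimed by the interaction layer even though
    the display layer also covers that point.
    """
    for layer in sorted(layers, key=lambda l: -l.priority):
        if layer.contains(x, y):
            return layer.name
    return None
```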
Returning to fig. 1, the interaction method further includes step S103, in response to detecting the preset interaction action in the interaction area, triggering a first interaction response corresponding to the preset interaction action.
When the user views the interaction prompt information and makes the preset interaction action in the interaction area accordingly, the interaction response corresponding to the preset interaction action is triggered. Illustratively, the interaction response is popping up a floating window and displaying second content in it; the second content is, for example, information about an object in the first content, such as information about a basketball or shoes. Alternatively, the interaction response may be a preset celebration effect, and so on. It can be understood that the interaction response may be any response that increases the interaction effect between the user and the first content; any interaction response may be applied to the present disclosure and is not described again here.
Optionally, as shown in fig. 3, the step S103 further includes:
Step S301, in response to detecting an initial event in a display area of the first content, judging whether a trigger position of the initial event is in the interaction area;
step S302, in the case that the trigger position is in the interaction area, prohibiting layers other than the interaction layer from intercepting subsequent events, and consuming the initial event;
step S303, triggering a first interaction response corresponding to the preset interaction action when the subsequent events constitute the preset interaction action.
In the embodiment of the present disclosure, the user chooses whether to make the preset interaction action in the interaction area according to the interaction prompt information. Whether or not the user's action falls within the interaction area, an initial action must first be made in the display area of the terminal device: on a mobile terminal with a touch screen, the user first touches the screen; on a terminal device operated with a mouse, the user first presses the left mouse button, and so on. The following description takes a mobile terminal with a touch screen as an example.
When a mobile terminal with a touch screen is used, the user makes a finger-touch action. After the system of the terminal device detects the touch signal, it generates and dispatches an initial event, which is passed sequentially inward from the outer-layer view 203 until it reaches the inner-layer view 205. View 205 comprises the interaction layer and the display layer, and the interaction layer has the higher priority, so whether the trigger position of the initial event is in the interaction area 206 is judged first. If the trigger position is in the interaction area, the user is taken to intend the preset action; at this point, layers other than the interaction layer are prohibited from intercepting events subsequent to the initial event, and the interaction layer consumes the initial event, indicating that subsequent events are to be handed to the interaction layer for processing. The subsequent events are likewise events generated by the user's touches on the touch screen. The layers other than the interaction layer include the display layer and any other view layer capable of intercepting and consuming events; the present disclosure does not specifically limit them.
The interaction layer continues to receive subsequent events, and if the subsequent events can form the preset action, the interaction response corresponding to the preset interaction action is triggered. In this way, more diverse interaction modes can be provided and more interactive content can be displayed, without affecting global playback control of the first content or sliding switching of the content displayed on the page.
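The capture-and-consume dispatch of steps S301–S303 can be sketched as follows. This is an illustrative Python sketch: the tuple-based event shape, the class names, and the sample left-to-right-swipe action with a 50-pixel threshold are assumptions, not part of the patent:

```python
def is_left_to_right_swipe(trace, min_dx):
    """Sample preset action: a mostly horizontal, rightward slide."""
    if len(trace) < 2:
        return False
    dx = trace[-1][0] - trace[0][0]
    dy = abs(trace[-1][1] - trace[0][1])
    return dx >= min_dx and dy <= dx

class Dispatcher:
    """Routes touch events to the interaction layer or to other layers."""

    def __init__(self, region, on_gesture):
        self.region = region          # (left, top, right, bottom) of the interaction area
        self.on_gesture = on_gesture  # callback for the first interaction response
        self.capturing = False        # True while the interaction layer owns the gesture
        self.trace = []

    def _inside(self, x, y):
        l, t, r, b = self.region
        return l <= x <= r and t <= y <= b

    def dispatch(self, event):
        kind, x, y = event
        if kind == "down":                        # step S301: judge trigger position
            self.capturing = self._inside(x, y)
            self.trace = [(x, y)] if self.capturing else []
            # step S302: consume the initial event and lock out other layers
            return "interaction" if self.capturing else "other"
        if not self.capturing:
            return "other"                        # other layers may intercept freely
        if kind == "move":
            self.trace.append((x, y))
            return "interaction"
        if kind == "up":                          # step S303: does the collected
            self.capturing = False                # sequence form the preset action?
            if is_left_to_right_swipe(self.trace, min_dx=50):
                self.on_gesture()
            return "interaction"
```

A "down" outside the region leaves `capturing` unset, so every later event of that gesture falls through to the other layers, matching the global-control path described above.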
Optionally, as shown in fig. 4, the step S303 further includes:
Step S401, in response to detecting an intermediate event, judging whether the intermediate event meets an intermediate condition;
Step S402, in the case that the intermediate event meets the intermediate condition, prohibiting layers other than the interaction layer from intercepting the intermediate event, and consuming the intermediate event;
step S403, in response to detecting an end event, determining whether the detected intermediate event sequence constitutes the preset interaction action;
step S404, triggering a first interaction response corresponding to the preset interaction action when the intermediate event sequence forms the preset interaction action.
Wherein an intermediate event is an event detected after the start event and before any end event is detected.
In step S401, the intermediate condition includes conditions that are necessary but not sufficient for constituting the preset action, such as the movement angle and direction when the intermediate event is a movement event. For example, if a movement event already indicates that the movement direction is from top to bottom, it can be judged from the movement direction alone that the detected movement is not the preset interaction action. It can be appreciated that the intermediate condition differs according to the type of intermediate event or the type of preset action, which is not described again here.
If the intermediate event satisfies the intermediate condition, then as described in step S402, subsequent events continue to be detected, layers other than the interaction layer are prohibited from intercepting the intermediate event, and the interaction layer consumes it, so that subsequent intermediate events can still be handled by the interaction layer.
In step S403, when the user finishes the action, an end event is triggered. For example, in a mobile terminal with a touch screen, the user lifting a finger from the interaction area completes the action; the system of the terminal device then generates an end event from the lift action, and upon detecting the end event, the interaction layer determines whether the detected intermediate event sequence constitutes the preset interaction action.
In step S404, when the intermediate event sequence can form a preset interaction action, an interaction response corresponding to the preset interaction action is triggered.
In the above steps, multiple judgment conditions are used to judge the preset action, and the interaction response is triggered only when the user's action matches the preset action. Misjudgment can thus be further reduced through the multiple conditions, making the judgment of the user's intention more accurate.
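The early rejection of steps S401–S402 — dropping a gesture as soon as an intermediate event violates a necessary condition — can be sketched as a small session object. The concrete intermediate condition here is an assumption for illustration: "no predominantly downward movement", matching a left-to-right slide as the preset action:

```python
class GestureSession:
    """Collects intermediate (move) events and rejects a gesture early."""

    def __init__(self):
        self.points = []
        self.rejected = False

    def on_move(self, x, y):
        """Step S401/S402: check the intermediate condition per event."""
        if self.rejected:
            return False
        if self.points:
            px, py = self.points[-1]
            if (y - py) > abs(x - px):   # moving mostly downward: can no
                self.rejected = True     # longer be the preset action
                return False
        self.points.append((x, y))
        return True                      # interaction layer keeps consuming

    def on_up(self, min_dx=50):
        """Step S403/S404: does the collected sequence form the action?"""
        if self.rejected or len(self.points) < 2:
            return False
        return (self.points[-1][0] - self.points[0][0]) >= min_dx
```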
Optionally, in steps S401 to S404, the intermediate event includes a movement event and/or a click event. Illustratively, a movement event is generated by the sliding of the user's finger on the screen, and a click event is generated by a tap of the user's finger on the screen. It can be understood that the means of generating the movement event and/or click event may differ according to the type of terminal and the operation tool used: for example, the user may generate events with a non-body part such as a stylus, or with a human-machine interaction device such as a mouse, which is not described again here.
Further, the determining whether the detected intermediate event sequence constitutes the preset interaction action includes:
judging whether the track formed by the movement events, and/or the number, positions, or durations of the click events, meet preset conditions.
The preset interaction action may be composed of movement events and/or click events. If the preset interaction action consists of movement alone (for example, sliding from left to right over a certain distance), it can be described by a sequence of movement events; the movement sequence represents a movement track, including a movement direction and a movement distance, so it can be judged whether the track formed by the movement event sequence matches the preset interaction action. If the preset interaction action consists of click events alone, the click events may include the number of clicks, the click positions, the contact duration, and so on; for example, the preset interaction may be a click held for a period of time, multiple clicks, or simultaneous clicks at several preset positions. In that case, whether the action formed by the click events matches the preset action can be judged by checking whether the hold duration, the number of clicks, or the click positions are the same as those of the preset action. The preset action may also mix movement and click events, for example comprising both a movement and a click; whether the combined sequence of movement and click events matches the interaction action can then be judged from that combination.
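The click-event side of this judgment — comparing click count, positions, and contact durations against the preset action — can be sketched as follows. The function name, the `(x, y, duration)` event shape, and the tolerance values are illustrative assumptions:

```python
def matches_click_action(clicks, expected_count=2, hold_threshold=0.0,
                         target=None, tolerance=20.0):
    """Judge a click-event sequence against a preset click action.

    clicks: list of (x, y, duration) tuples. A match requires the preset
    number of clicks, each held at least hold_threshold seconds, and each
    landing within `tolerance` pixels of the preset position (if given).
    """
    if len(clicks) != expected_count:           # number of clicks
        return False
    for (x, y, duration) in clicks:
        if duration < hold_threshold:           # contact time
            return False
        if target is not None:                  # click position
            tx, ty = target
            if abs(x - tx) > tolerance or abs(y - ty) > tolerance:
                return False
    return True
```

A mixed movement-plus-click action would combine a check like this with a track check over the movement events.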
Through the steps S101 to S103 and their further implementation steps, the above embodiment implements interaction logic outside the control of the first content by adding the interaction area, so that multiple interaction effects are added without affecting the global control of the first content, solving the technical problem of poor interactivity caused by the inability to interact with the played content and the inability to realize diversified interaction effects.
Further, in order not to affect the normal control of the first content, the interaction method further includes:
in step S104, in response to not detecting the preset interaction in the interaction area, the interaction layer ignores the detected interaction.
Optionally, the step S104 includes:
step S501, in response to detecting a start event in the display area of the first content, determining whether a trigger position of the start event is in the interaction area;
step S502, in the case that the trigger position is not in the interaction area, the interaction layer ignores the initial event and hands the initial event and subsequent events over to layers other than the interaction layer for processing.
When the system of the terminal device detects an initial event and the trigger position of the initial event is not in the interaction area, the user has not made the preset interaction action, so the interaction layer ignores the initial event; at this time, the layers outside the interaction layer may be allowed to intercept the initial event and subsequent events in order to implement other control functions. When the interaction layer ignores the initial event, the initial event is distributed from the inner layer to the outer layer, for example to the layer of the view 204, which implements the control logic for the first content; the initial event is then intercepted and consumed by the view 204. The view 204 subsequently receives the ending event following the initial event and determines that the user has performed a clicking action, so that, for example, a pause function of the first content can be implemented.
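The hit-test and hand-over described above can be modeled as follows. This is a minimal sketch under assumed names (a rectangular interaction area, an `OuterLayer` standing in for the view 204); it is not the patent's actual class structure.

```python
# Minimal dispatch sketch, assuming a rectangular interaction area.
# Class and method names are illustrative, not the patent's API.
class InteractionLayer:
    def __init__(self, area):
        self.area = area  # (left, top, right, bottom)

    def wants(self, event):
        """Return True if the start event falls inside the interaction area."""
        x, y = event
        left, top, right, bottom = self.area
        return left <= x <= right and top <= y <= bottom

class OuterLayer:
    """Stands in for the view 204: consumes events the interaction layer ignores."""
    def __init__(self):
        self.consumed = []

    def handle(self, event):
        self.consumed.append(event)

def dispatch_start_event(event, interaction_layer, outer_layer):
    if interaction_layer.wants(event):
        return "interaction_layer"
    # Trigger position outside the interaction area: the interaction layer
    # ignores the event, and the outer layer intercepts and consumes it.
    outer_layer.handle(event)
    return "outer_layer"
```

A start event at (200, 50) with an interaction area of (0, 0, 100, 100) would thus be consumed by the outer layer, which can then implement, for example, the pause function.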
In the above step, the trigger position of the initiation event may also be in the interaction area, in which case the step S104 further includes:
Step S601, in the case that the trigger position is in the interaction area, prohibiting layers other than the interaction layer from intercepting subsequent events and consuming the initial event;
step S602, in response to detecting an intermediate event, judging whether the intermediate event meets an intermediate condition;
in step S603, in the case that the intermediate event does not meet the intermediate condition, the interaction layer ignores the intermediate event.
Step S601 is the same as step S302, and step S602 is the same as step S401; what is identical to the above steps is not repeated here. In fact, the intermediate condition may be any feature of the preset interaction action: if an intermediate event does not meet that feature, the intermediate event sequence cannot form the preset interaction action regardless of subsequent events. At this point it can be determined in advance that the user has not made the preset action, so the interaction layer may ignore the intermediate event, and subsequent events may be intercepted, consumed, and processed by layers other than the interaction layer. In this way, whether the user has made the preset interaction action can be judged early, without waiting for an ending event, which saves processing time. In addition, normal control of the first content is not affected: as in the above example, if the movement events already indicate that the movement direction is vertical rather than left-to-right, the detected action cannot be the preset interaction action; the subsequent movement events may then be handed over to layers other than the interaction layer, which may determine the action to be a vertical slide and accordingly perform the control function of sliding to switch the displayed content.
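The early-rejection logic of steps S602 and S603 can be sketched as below. The per-step condition (each move must go rightward with limited vertical drift) is an assumed concretization of "any feature of the preset interaction action"; the names are illustrative.

```python
# Early-rejection sketch: check each intermediate movement event against a
# feature of the preset action (here: movement must be left-to-right).
# The rejection rule and thresholds are illustrative assumptions.
def meets_intermediate_condition(prev, curr, max_step_drift=30):
    """A left-to-right swipe requires each step to move rightward
    without excessive vertical drift."""
    dx = curr[0] - prev[0]
    dy = abs(curr[1] - prev[1])
    return dx >= 0 and dy <= max_step_drift

def filter_gesture(events):
    """Return 'rejected' as soon as any step violates the condition,
    so later events can be handed to other layers early; otherwise the
    gesture stays 'pending' until the ending event is judged."""
    for prev, curr in zip(events, events[1:]):
        if not meets_intermediate_condition(prev, curr):
            return "rejected"
    return "pending"
```

Once `filter_gesture` returns `"rejected"`, the interaction layer can stop tracking and let the remaining events be intercepted and consumed by the other layers, as the text describes.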
After step S502 or step S603, since the interaction performed by the user has not yet ended, once the interaction layer ignores the initial event the subsequent events may be handed over to layers other than the interaction layer to implement other control functions. Therefore, further, the method further includes:
Setting a layer except for the interaction layer to intercept the subsequent event of the initial event;
the layers other than the interaction layer trigger a second interaction response of the first content according to an event subsequent to the initiation event.
Wherein the second interactive response of the first content includes the control response to the first content, such as play, pause, switch, etc., which are not described herein.
The implementation of the above steps is already described in the above step S502 and step S603, and will not be described here again.
Further, there may be a case in which every intermediate event in the interaction area satisfies the intermediate condition; in the above example, the user may slide from left to right until the finger leaves the touch screen of the terminal device, yet the intermediate event sequence still does not constitute the preset interaction action, for example because the sliding distance is insufficient. Therefore, further, the step S104 further includes:
Step S701, in a case that the intermediate event satisfies an intermediate condition, prohibiting layers other than the interaction layer from intercepting subsequent events and consuming the intermediate event;
Step S702, in response to detecting the ending event, judging whether the detected intermediate event constitutes the preset interaction action;
In step S703, in the case that the intermediate event does not constitute the preset interaction, the interaction layer ignores the intermediate event.
Step S701 is the same as step S402 described above, and step S702 is the same as step S403 described above. In case the intermediate event does not constitute the preset interaction, as in the above example, the interaction layer ignores the intermediate event.
In one embodiment, in step S703, after the interaction layer ignores the intermediate events, the ending event does not need to be handed over to layers other than the interaction layer, because the user has already completed a complete action of start, intermediate, and end, and all events have been consumed by the interaction layer. The user has performed an invalid action in the interaction area, so no interaction response is triggered; and since no events are intercepted by layers other than the interaction layer, no other control functions are triggered either.
In another embodiment, in order not to waste the interaction made by the user, or to handle the case in which the user intended global control of the first display content but mistakenly acted in the interaction area, in the case that the trigger position is within the interaction area, the method further includes:
caching all detected events;
And triggering interaction processing of layers except the interaction layer according to all the cached events when the interaction layer ignores the intermediate events.
While a preset interaction action is being detected in the interaction area, all detected events are cached; when it turns out that the user has not made the preset action, the interaction processing of layers other than the interaction layer is triggered according to all the cached events.
Optionally, when the interaction layer ignores the intermediate events, the cached events may not yet constitute a complete action, i.e. no ending event has been detected. At this time, all the cached events are distributed to layers other than the interaction layer, and those layers determine, according to their own judgment logic, whether the combination of the cached events and subsequently detected events conforms to their control logic.
Optionally, when the interaction layer ignores the intermediate events, the cached events may already constitute a complete action, i.e. an ending event has been detected. At this time, all cached events may be distributed to layers other than the interaction layer, and those layers determine, in the manner described in the previous alternative embodiment, whether the cached event sequence conforms to their control logic according to their own judgment logic. Alternatively, the interaction action corresponding to the cached event sequence may first be identified, and the identified interaction action is then distributed to the layers other than the interaction layer, which judge according to their own logic whether it conforms to their control logic.
The above embodiment, by caching the events detected by the interaction layer, enables layers other than the interaction layer to use the detected events to decide whether to make the second interaction response of the first content when the interaction layer ignores the intermediate events. It can be appreciated that when the intermediate events do constitute the preset interaction action, the cached events may be released to save system resources.
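The cache-and-replay behavior can be sketched as follows. The class name, the callable outer handler, and the string return values are all illustrative assumptions standing in for the layer dispatch mechanism.

```python
# Event-caching sketch: the interaction layer records every event; on
# rejection the cache is replayed to an outer layer, on success it is
# released. All names here are illustrative assumptions.
class CachingInteractionLayer:
    def __init__(self, outer_handler):
        self.cache = []
        self.outer_handler = outer_handler  # callable receiving one event

    def on_event(self, event):
        self.cache.append(event)

    def finish(self, gesture_recognized):
        if gesture_recognized:
            self.cache.clear()  # release cached events to save resources
            return "first_interaction_response"
        # Replay cached events to layers outside the interaction layer,
        # which apply their own judgment logic to them.
        for event in self.cache:
            self.outer_handler(event)
        self.cache.clear()
        return "forwarded_to_outer_layers"
```

In the rejection branch, the outer layers receive the full event sequence and can still recognize a global control gesture from it, so the user's input is not wasted.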
Fig. 8 is a schematic diagram of an application scenario of an embodiment of the present disclosure. As shown in fig. 8, first content, represented by oblique lines and being a video, is displayed in a display area 801 of the first content on the mobile phone. When a preset trigger condition is met, for example a preset object appearing in the video, interaction prompt information 802 is displayed in the display area 801 of the first content; the interaction prompt information includes a boundary 802a of the interaction area, displayed as the boundary shown at 802a in the figure, and indication information 802b of the preset interaction action, shown as an arrow whose direction and length indicate the sliding action. When the preset interaction action is detected in the interaction area, detailed information of the preset object is displayed in the form of a pop-up floating window, as shown at 803 in fig. 8. In this way, in addition to the control interaction of the video itself, interactions and interaction responses of any form can be added through the interaction area, thereby improving interactivity and enriching the interaction effects.
The above embodiment discloses an interaction method, which includes: displaying the first content in a display area of the first content; under the condition that a preset triggering condition is met, displaying interaction prompt information in a display area of the first content; the interaction prompt information comprises preset interaction action information, the preset interaction action information is used for indicating preset interaction actions, and the display area of the first content comprises an interaction area used for detecting the preset interaction actions; and responding to the detection of the preset interaction action in the interaction area, and triggering a first interaction response corresponding to the preset interaction action. According to the method, the technical problems that interactivity is poor and diversified interaction effects cannot be achieved are solved by adding the interaction layer of the interaction area for detecting the preset interaction action.
Although the steps in the foregoing method embodiments are described in the above order, it should be clear to those skilled in the art that the steps in the embodiments of the present disclosure are not necessarily performed in that order; they may also be performed in other orders, such as reversed, parallel, or interleaved, and those skilled in the art may add other steps on the basis of the above steps. These obvious modifications or equivalent alternatives are also included in the protection scope of the present disclosure and are not repeated here.
Fig. 9 is a schematic structural diagram of an embodiment of an interaction device according to an embodiment of the disclosure, as shown in fig. 9, the device 900 includes: a first display module 901, a second display module 902, and a trigger module 903. Wherein,
A first display module 901 for displaying first content in a display area of the first content;
A second display module 902, configured to display interaction prompt information in a display area of the first content when a preset trigger condition is met; the interaction prompt information comprises preset interaction action information, wherein the preset interaction action information is used for indicating preset interaction actions, and the display area of the first content comprises an interaction area used for detecting the preset interaction actions;
The triggering module 903 is configured to trigger a first interaction response corresponding to the preset interaction action in response to detecting the preset interaction action in the interaction area.
Further, the business logic layer corresponding to the display area of the first content comprises a display layer and an interaction layer, wherein the control logic of the interaction layer has a higher priority than the display layer; the display layer is used for displaying the first content, and the interaction layer is used for realizing interaction of the interaction area.
Further, the interaction device 900 further includes:
And the ignoring module is used for responding to the fact that the preset interaction action is not detected in the interaction area, and the interaction layer ignores the detected interaction action.
Further, the interaction prompt information further includes: the interactive area indication information is used for indicating the area range of the interactive area.
Further, the triggering module 903 is further configured to:
in response to detecting a start event in a display area of the first content, judging whether a trigger position of the start event is in the interaction area;
In the case that the starting position is in the interaction area, prohibiting layers except the interaction layer from intercepting subsequent events and consuming the starting event;
And triggering a first interaction response corresponding to the preset interaction action under the condition that the follow-up event forms the preset interaction action.
Further, the triggering module 903 is further configured to:
In response to detecting an intermediate event, determining whether the intermediate event satisfies an intermediate condition;
In the case that the intermediate event meets an intermediate condition, prohibiting layers other than the interaction layer from intercepting the intermediate event and consuming the intermediate event;
in response to detecting the end event, judging whether the detected intermediate event sequence constitutes the preset interaction action;
And triggering a first interaction response corresponding to the preset interaction action under the condition that the intermediate event sequence forms the preset interaction action.
Further, the intermediate events include a movement event and/or a click event; the triggering module 903 is further configured to: and judging whether the track formed by the moving events and/or the number and/or the position and/or the duration of the clicking events meet preset conditions or not.
Further, the neglecting module is further configured to:
in response to detecting a start event in a display area of the first content, judging whether a trigger position of the start event is in the interaction area;
And under the condition that the trigger position is not in the interaction area, the interaction layer ignores the initial event and gives the initial event and subsequent events to the layers except the interaction layer for processing.
Further, the neglecting module is further configured to:
in the case that the trigger position is within the interaction area, prohibiting layers other than the interaction layer from intercepting subsequent events, and consuming the initiation event;
In response to detecting an intermediate event, determining whether the intermediate event satisfies an intermediate condition;
In the case that the intermediate event meets an intermediate condition, prohibiting layers other than the interaction layer from intercepting subsequent events and consuming the intermediate event;
In response to detecting the end event, judging whether the detected intermediate event constitutes the preset interaction action;
under the condition that the intermediate event does not form the preset interaction action, the interaction layer ignores the intermediate event; or alternatively
In the event that the intermediate event does not satisfy the intermediate condition, the interaction layer ignores the intermediate event.
Further, the interaction device 900 further includes:
the caching module is used for caching all the detected events;
And triggering interaction processing of layers except the interaction layer according to all the cached events when the interaction layer ignores the intermediate events.
Further, the triggering module 903 is further configured to:
Setting a layer except for the interaction layer to intercept the subsequent event of the initial event;
the layers other than the interaction layer trigger a second interaction response of the first content according to an event subsequent to the initiation event.
Further, the preset triggering condition includes:
One or more of the first object appearing in the first content, the display time of the first content reaching a time threshold, or a preset trigger event being detected in a display area of the first content.
The apparatus of fig. 9 may perform the method of the embodiment of fig. 1-7, and reference is made to the relevant description of the embodiment of fig. 1-7 for parts of this embodiment that are not described in detail. The implementation process and the technical effect of this technical solution are described in the embodiments shown in fig. 1 to 7, and are not described herein.
Referring now to fig. 10, a schematic diagram of an electronic device (e.g., a terminal device or server) 1000 suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 10 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 10, the electronic device 1000 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 1001 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage means 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the electronic apparatus 1000 are also stored. The processing device 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
In general, the following devices may be connected to the I/O interface 1005: input devices 1006 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 1007 including, for example, a Liquid Crystal Display (LCD), speaker, vibrator, etc.; storage 1008 including, for example, magnetic tape, hard disk, etc.; and communication means 1009. The communication means 1009 may allow the electronic device 1000 to communicate wirelessly or by wire with other devices to exchange data. While fig. 10 shows an electronic device 1000 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 1009, or installed from the storage device 1008, or installed from the ROM 1002. The above-described functions defined in the method of the embodiment of the present disclosure are performed when the computer program is executed by the processing device 1001.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some embodiments, the clients, servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol ), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: the interaction method described in any of the embodiments above is performed.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: at least one processor; and
A memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the interaction methods described previously.
According to one or more embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium, characterized in that the non-transitory computer-readable storage medium stores computer instructions for causing a computer to perform any one of the interaction methods described above.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure involved herein is not limited to technical solutions formed by the specific combinations of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by mutually substituting the above features with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Claims (15)
1. An interaction method, comprising:
Displaying first content in a display area of the first content, wherein a business logic layer corresponding to the display area of the first content comprises an interaction layer, and the interaction layer is used for realizing interaction of an interaction area;
Under the condition that a preset triggering condition is met, displaying interaction prompt information in a display area of the first content; the interaction prompt information comprises preset interaction action information, wherein the preset interaction action information is used for indicating preset interaction actions, and the display area of the first content comprises an interaction area used for detecting the preset interaction actions;
In response to detecting the preset interaction action in the interaction area, triggering a first interaction response corresponding to the preset interaction action;
The method further comprises the steps of:
Caching all events detected when the preset interaction action is detected in the interaction area;
Acquiring a cached event;
Responding to the fact that the user does not execute the preset action, and performing neglect processing;
The neglecting process includes:
determining that the cached event does not form a complete action, distributing the cached event to a target layer, and judging whether the combination of the cached event and a subsequently detected event accords with control logic of the target layer by the target layer according to judgment logic of the target layer; or alternatively
Determining that the cached event forms a complete action, distributing the cached event to a target layer, and judging whether the sequence of the cached event accords with the control logic of the target layer by the target layer according to the judgment logic of the target layer; or identifying the interaction action corresponding to the event sequence through the cached event sequence, distributing the interaction action to the target layer, and judging whether the identified interaction action accords with the control logic of the target layer by the target layer according to the judgment logic of the target layer;
the target layer comprises a display layer or a view layer which can intercept and consume events except the interaction layer in a business logic layer corresponding to the display area of the first content.
2. The interaction method of claim 1, wherein:
the business logic layer corresponding to the display area of the first content comprises a display layer and an interaction layer, wherein the control logic of the interaction layer has a higher priority than that of the display layer, and the display layer is used for displaying the first content.
3. The interaction method of claim 1, wherein the interaction area is a partial area of the display area of the first content.
4. The interaction method of claim 1, wherein the method further comprises:
in response to not detecting the preset interaction action in the interaction area, the interaction layer ignoring the detected interaction action.
5. The interaction method of claim 1, wherein the interaction prompt information further comprises: interaction area indication information used for indicating the area range of the interaction area.
6. The interaction method of claim 1, wherein the triggering a first interaction response corresponding to the preset interaction action in response to detecting the preset interaction action in the interaction area comprises:
in response to detecting a start event in the display area of the first content, determining whether a trigger position of the start event is within the interaction area;
in the case that the trigger position of the start event is within the interaction area, prohibiting layers other than the interaction layer from intercepting subsequent events, and consuming the start event;
and in the case that the subsequent events constitute the preset interaction action, triggering the first interaction response corresponding to the preset interaction action.
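The start-event handling of claim 6 can be illustrated with a short sketch. The function name, the rectangle representation of the interaction area, and the `may_intercept` flag are assumptions made for illustration only:

```python
def handle_start_event(x, y, interaction_area, other_layers):
    """On a start event: if its trigger position lies inside the interaction
    area, bar the other layers from intercepting the follow-up events and
    consume the start event; otherwise leave handling to the other layers."""
    left, top, right, bottom = interaction_area
    inside = left <= x <= right and top <= y <= bottom
    if inside:
        for layer in other_layers:
            layer["may_intercept"] = False  # only the interaction layer sees what follows
    return inside  # True means the interaction layer consumed the start event

layers = [{"name": "display", "may_intercept": True}]
print(handle_start_event(50, 50, (0, 0, 100, 100), layers))  # prints True
print(layers[0]["may_intercept"])                            # prints False
```

The position check on the very first event is what lets the interaction layer claim the whole gesture before any other layer can act on it.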
7. The interaction method of claim 6, wherein the triggering a first interaction response corresponding to the preset interaction action in the case that the subsequent events constitute the preset interaction action comprises:
in response to detecting an intermediate event, determining whether the intermediate event satisfies an intermediate condition;
in the case that the intermediate event satisfies the intermediate condition, prohibiting layers other than the interaction layer from intercepting the intermediate event, and consuming the intermediate event;
in response to detecting an end event, determining whether the detected intermediate event sequence constitutes the preset interaction action;
and in the case that the intermediate event sequence constitutes the preset interaction action, triggering the first interaction response corresponding to the preset interaction action.
8. The interaction method of claim 7, wherein the intermediate event comprises a movement event and/or a click event;
the determining whether the detected intermediate event sequence constitutes the preset interaction action comprises:
determining whether the track formed by the movement events, and/or the number, position, or duration of the click events, satisfies a preset condition.
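The judgment in claim 8 can be sketched as a predicate over the collected events. The thresholds (80 px minimum track length, 2 s maximum duration) and all names here are hypothetical defaults chosen for illustration; the patent leaves the preset conditions unspecified:

```python
def constitutes_preset_action(moves, clicks, duration_s,
                              min_track_px=80.0, required_clicks=None,
                              max_duration_s=2.0):
    """moves: (x, y) samples of movement events; clicks: (x, y) tap positions.
    Returns True when the track length, click count and duration all satisfy
    the (hypothetical) preset conditions."""
    track = sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(moves, moves[1:])
    )
    if track < min_track_px:
        return False                       # swipe too short
    if required_clicks is not None and len(clicks) != required_clicks:
        return False                       # wrong number of taps
    return duration_s <= max_duration_s    # gesture must finish in time

# A 100 px downward swipe completed in one second qualifies:
print(constitutes_preset_action([(0, 0), (0, 50), (0, 100)], [], 1.0))  # prints True
```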
9. The interaction method of claim 4, wherein the interaction layer ignoring the detected interaction action in response to not detecting the preset interaction action in the interaction area comprises:
in response to detecting a start event in the display area of the first content, determining whether a trigger position of the start event is within the interaction area;
and in the case that the trigger position is not within the interaction area, the interaction layer ignoring the start event and handing the start event and subsequent events over to layers other than the interaction layer for processing.
10. The interaction method of claim 9, wherein the method further comprises:
in the case that the trigger position is within the interaction area, prohibiting layers other than the interaction layer from intercepting subsequent events, and consuming the start event;
in response to detecting an intermediate event, determining whether the intermediate event satisfies an intermediate condition;
in the case that the intermediate event satisfies the intermediate condition, prohibiting layers other than the interaction layer from intercepting subsequent events, and consuming the intermediate event;
in response to detecting an end event, determining whether the detected intermediate events constitute the preset interaction action;
in the case that the intermediate events do not constitute the preset interaction action, the interaction layer ignoring the intermediate events;
or
in the case that the intermediate event does not satisfy the intermediate condition, the interaction layer ignoring the intermediate event.
11. The interaction method of claim 9, wherein the handing the start event and subsequent events over to layers other than the interaction layer for processing comprises:
setting a layer other than the interaction layer to intercept the events subsequent to the start event;
the layer other than the interaction layer triggering a second interaction response of the first content according to the events subsequent to the start event.
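The hand-off of claim 11 can be sketched as follows. The `OtherLayer` class, the `intercepting` flag, and the scrolling example are illustrative assumptions; the claim only requires that some layer other than the interaction layer intercept the subsequent events and trigger a second interaction response:

```python
class OtherLayer:
    """A layer other than the interaction layer. Once set to intercept, it
    consumes the events that follow the start event and triggers a second
    interaction response (e.g. scrolling the first content)."""
    def __init__(self):
        self.intercepting = False
        self.second_response_triggered = False

    def on_subsequent_event(self, event):
        if self.intercepting:
            self.second_response_triggered = True

def hand_off(start_event, subsequent_events, other_layer):
    # Set a layer other than the interaction layer to intercept
    # the events subsequent to the start event.
    other_layer.intercepting = True
    for event in subsequent_events:
        other_layer.on_subsequent_event(event)

layer = OtherLayer()
hand_off({"kind": "down"}, [{"kind": "move"}, {"kind": "up"}], layer)
print(layer.second_response_triggered)  # prints True
```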
12. The interaction method of claim 1, wherein the preset trigger condition comprises one or more of:
a first object appearing in the first content, a display time of the first content reaching a time threshold, or a preset trigger event being detected in the display area of the first content.
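Claim 12's "one or more of" condition is a simple disjunction, sketched below. The function name and the 5 s default threshold are illustrative assumptions:

```python
def trigger_condition_met(first_object_present, display_time_s,
                          trigger_event_detected, time_threshold_s=5.0):
    # Any one of the three conditions suffices to show the
    # interaction prompt information.
    return (first_object_present
            or display_time_s >= time_threshold_s
            or trigger_event_detected)

print(trigger_condition_met(False, 6.0, False))  # prints True (time threshold reached)
```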
13. An interactive apparatus, comprising:
the first display module is used for displaying first content in a display area of the first content, wherein a business logic layer corresponding to the display area of the first content comprises an interaction layer used for realizing interaction in the interaction area;
the second display module is used for displaying interaction prompt information in the display area of the first content under the condition that a preset trigger condition is met; the interaction prompt information comprises preset interaction action information, the preset interaction action information is used for indicating a preset interaction action, and the display area of the first content comprises an interaction area used for detecting the preset interaction action;
the triggering module is used for triggering, in response to detecting the preset interaction action in the interaction area, a first interaction response corresponding to the preset interaction action;
the apparatus being further configured for:
caching all events detected while the preset interaction action is being detected in the interaction area;
acquiring the cached events;
in response to determining that the user has not performed the preset interaction action, performing ignore processing;
the ignore processing comprises:
determining that the cached events do not constitute a complete action, dispatching the cached events to a target layer, and judging, by the target layer according to its own judgment logic, whether the combination of the cached events and subsequently detected events conforms to the control logic of the target layer; or
determining that the cached events constitute a complete action, dispatching the cached events to the target layer, and judging, by the target layer according to its own judgment logic, whether the sequence of the cached events conforms to the control logic of the target layer; or identifying, from the cached event sequence, the interaction action corresponding to the event sequence, dispatching the interaction action to the target layer, and judging, by the target layer according to its own judgment logic, whether the identified interaction action conforms to the control logic of the target layer;
wherein the target layer comprises a display layer, or a view layer other than the interaction layer that is capable of intercepting and consuming events, in a business logic layer corresponding to the display area of the first content.
14. An electronic device, comprising:
a memory for storing computer readable instructions; and
a processor for executing the computer readable instructions, such that the processor, when running, implements the method according to any one of claims 1-12.
15. A non-transitory computer readable storage medium storing computer readable instructions which, when executed by a computer, cause the computer to perform the method of any of claims 1-12.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110179793.8A CN113157359B (en) | 2021-02-07 | 2021-02-07 | Interaction method, interaction device, electronic equipment and computer readable storage medium |
PCT/CN2022/074671 WO2022166822A1 (en) | 2021-02-07 | 2022-01-28 | Interaction method and apparatus, electronic device, and computer-readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110179793.8A CN113157359B (en) | 2021-02-07 | 2021-02-07 | Interaction method, interaction device, electronic equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113157359A CN113157359A (en) | 2021-07-23 |
CN113157359B true CN113157359B (en) | 2024-04-30 |
Family
ID=76882962
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110179793.8A Active CN113157359B (en) | 2021-02-07 | 2021-02-07 | Interaction method, interaction device, electronic equipment and computer readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113157359B (en) |
WO (1) | WO2022166822A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113157359B (en) * | 2021-02-07 | 2024-04-30 | 北京字节跳动网络技术有限公司 | Interaction method, interaction device, electronic equipment and computer readable storage medium |
CN114154958A (en) * | 2021-12-03 | 2022-03-08 | 北京字跳网络技术有限公司 | Information processing method, device, electronic equipment and storage medium |
CN115904148A (en) * | 2022-11-14 | 2023-04-04 | 京东方科技集团股份有限公司 | Touch event processing method and device, storage medium and electronic equipment |
CN117453111B (en) * | 2023-12-25 | 2024-03-15 | 合肥联宝信息技术有限公司 | Touch response method and device, electronic equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105786368A (en) * | 2014-12-22 | 2016-07-20 | 厦门幻世网络科技有限公司 | Information instruction input method and device based on interactive screen |
CN110989888A (en) * | 2019-12-13 | 2020-04-10 | 广州华多网络科技有限公司 | Touch event distribution method and device |
CN111669639A (en) * | 2020-06-15 | 2020-09-15 | 北京字节跳动网络技术有限公司 | Display method and device of movable entrance, electronic equipment and storage medium |
CN112261459A (en) * | 2020-10-23 | 2021-01-22 | 北京字节跳动网络技术有限公司 | Video processing method and device, electronic equipment and storage medium |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9595040B2 (en) * | 2009-10-09 | 2017-03-14 | Viacom International Inc. | Integration of an advertising unit containing interactive residual areas and digital media content |
CN103488534A (en) * | 2013-09-23 | 2014-01-01 | 浪潮集团山东通用软件有限公司 | Method for enabling business logic layer to feed back control information to presentation layer |
US10343065B2 (en) * | 2016-06-27 | 2019-07-09 | DISH Technologies L.L.C. | Media consumer data exchange |
US20190079591A1 (en) * | 2017-09-14 | 2019-03-14 | Grabango Co. | System and method for human gesture processing from video input |
US10831513B2 (en) * | 2017-11-06 | 2020-11-10 | International Business Machines Corporation | Control transparency of a top layer provided by an additional transparent layer on top of the top layer based on relevance |
CN109032737A (en) * | 2018-07-18 | 2018-12-18 | 上海哔哩哔哩科技有限公司 | Pop-up message display system, method, storage medium and intelligent terminal |
CN113157359B (en) * | 2021-02-07 | 2024-04-30 | 北京字节跳动网络技术有限公司 | Interaction method, interaction device, electronic equipment and computer readable storage medium |
2021
- 2021-02-07 CN CN202110179793.8A patent/CN113157359B/en active Active
2022
- 2022-01-28 WO PCT/CN2022/074671 patent/WO2022166822A1/en active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105786368A (en) * | 2014-12-22 | 2016-07-20 | 厦门幻世网络科技有限公司 | Information instruction input method and device based on interactive screen |
CN110989888A (en) * | 2019-12-13 | 2020-04-10 | 广州华多网络科技有限公司 | Touch event distribution method and device |
CN111669639A (en) * | 2020-06-15 | 2020-09-15 | 北京字节跳动网络技术有限公司 | Display method and device of movable entrance, electronic equipment and storage medium |
CN112261459A (en) * | 2020-10-23 | 2021-01-22 | 北京字节跳动网络技术有限公司 | Video processing method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113157359A (en) | 2021-07-23 |
WO2022166822A1 (en) | 2022-08-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113157359B (en) | Interaction method, interaction device, electronic equipment and computer readable storage medium | |
CN110764671B (en) | Information display method and device, electronic equipment and computer readable medium | |
US11740772B2 (en) | Method and apparatus for controlling hotspot recommendation pop-up window, and medium and electronic device | |
JP7538329B2 (en) | Horizontal screen interaction method, device, electronic device, and storage medium | |
US20220308741A1 (en) | Method and apparatus for displaying video, electronic device and medium | |
CN114003326B (en) | Message processing method, device, equipment and storage medium | |
US20240319865A1 (en) | Page display method and apparatus, electronic device, storage medium and program product | |
CN113553507B (en) | Interest tag-based processing method, device, equipment and storage medium | |
CN110865734B (en) | Target object display method and device, electronic equipment and computer readable medium | |
CN113094135B (en) | Page display control method, device, equipment and storage medium | |
US20240147014A1 (en) | Control display method and apparatus, device and medium | |
EP4130956A1 (en) | Multimedia playback method and device | |
WO2023125161A1 (en) | Control method for livestreaming room, apparatus, electronic device, medium, and program product | |
EP4459450A1 (en) | Information flow display method and apparatus, and device, storage medium and program | |
WO2023174139A1 (en) | Work display method and apparatus, electronic device, storage medium, and program product | |
KR20140016454A (en) | Method and apparatus for controlling drag for moving object of mobile terminal comprising touch screen | |
EP4224300A1 (en) | Screen capture method and apparatus, and electronic device | |
CN109542296A (en) | A kind of switching method of title, device, electronic equipment and readable medium | |
US20230276079A1 (en) | Live streaming room page jump method and apparatus, live streaming room page return method and apparatus, and electronic device | |
CN116248945A (en) | Video interaction method and device, storage medium and electronic equipment | |
CN115314747B (en) | Method and device for controlling media content, electronic equipment and storage medium | |
CN116320585A (en) | Video playing method and device, storage medium and electronic equipment | |
CN112637409B (en) | Content output method and device and electronic equipment | |
CN112182347B (en) | Method and device for detecting punishment state, electronic equipment and storage medium | |
CN118312068A (en) | Surface layer view processing method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||