CN116980524A - Video display method and electronic equipment - Google Patents
- Publication number
- CN116980524A (application number CN202310895920.3A)
- Authority
- CN
- China
- Prior art keywords
- schedule
- video
- user
- information
- handled
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/42017—Customized ring-back tones
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
- G06Q10/109—Time management, e.g. calendars, reminders, meetings or time accounting
- G06Q10/1091—Recording time for administrative or management purposes
Abstract
The disclosure provides a video display method and an electronic device. The method includes: fusing a schedule simulation picture into a virtual scene based on a to-be-handled schedule message to obtain a video; and, when an event satisfying the execution condition of the to-be-handled schedule is detected, controlling the terminal to display the video. The exemplary embodiments of the disclosure can therefore integrate the to-be-handled schedule message into a virtual scene and present it to the user as a video, so that the user acquires the content of the message more vividly; and because the video is displayed on the terminal only when an event satisfying the execution condition is detected, the user is effectively prompted to complete the to-be-handled schedule.
Description
Technical Field
The disclosure relates to the technical field of communication, and in particular to a video display method and an electronic device.
Background
At present, intelligent terminals play a very important role in modern life. With the acceleration of the pace of life, more and more people adopt terminals for scheduling.
Most current schedule reminders are executed by a program on the terminal: the user must set each schedule in the program manually, after which the terminal prompts the user at a user-set time point close to the schedule's start time, reminding the user to complete the schedule when it is due.
Disclosure of Invention
According to an aspect of the present disclosure, there is provided a video display method, including:
fusing a schedule simulation picture into a virtual scene based on a to-be-handled schedule message to obtain a video;
and, when an event satisfying the execution condition of the to-be-handled schedule is detected, controlling the terminal to display the video.
According to another aspect of the present disclosure, there is provided an electronic device including:
a processor; and a memory storing a program;
wherein the program comprises instructions which, when executed by the processor, cause the processor to perform the method according to the exemplary embodiments of the present disclosure.
According to the technical solutions provided in the exemplary embodiments of the present disclosure, a schedule simulation picture can be fused into a virtual scene based on a to-be-handled schedule message to obtain a video. Because the schedule simulation picture displays the content of the to-be-handled schedule message, fusing it into the virtual scene makes the resulting video richer, and a video containing the schedule simulation picture reflects the intended content of the message more clearly, so the user grasps the message's content more readily while watching. Furthermore, the terminal is controlled to display the video only when an event satisfying the execution condition of the to-be-handled schedule is detected, so the user is prompted before the schedule takes effect. The exemplary embodiments of the disclosure can thus integrate the to-be-handled schedule message into a virtual scene and present it to the user as a video, allowing the user to acquire the message's content more vividly and prompting the user, at the right time, to complete the to-be-handled schedule.
Drawings
Further details, features and advantages of the present disclosure are disclosed in the following description of exemplary embodiments, with reference to the following drawings, wherein:
FIG. 1 illustrates a schematic diagram of an example system in which various methods described herein may be implemented in accordance with an example embodiment of the present disclosure;
FIG. 2 illustrates a basic flowchart of a video presentation method according to an exemplary embodiment of the present disclosure;
FIG. 3 illustrates a schematic diagram of a personal daily movement trajectory in accordance with an exemplary embodiment of the present disclosure;
FIG. 4 illustrates another schematic diagram of a personal daily movement trajectory according to an exemplary embodiment of the present disclosure;
FIG. 5 illustrates a schematic view of schedule behavior information of virtual characters corresponding to a football event according to an exemplary embodiment of the present disclosure;
FIG. 6 illustrates a flowchart of a control terminal presenting video in accordance with an exemplary embodiment of the present disclosure;
FIG. 7 shows a block schematic diagram of a video presentation device of an exemplary embodiment of the present disclosure;
FIG. 8 shows a schematic block diagram of a chip of an exemplary embodiment of the present disclosure;
fig. 9 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below. It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of the functions performed by these devices, modules, or units.
It should be noted that references to "a" and "a plurality" in this disclosure are intended to be illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Intelligent terminals play a very important role in modern life, and with the accelerating pace of life, calendar functions on communication terminals are used ever more widely. Existing communication assistants require the user to add schedules inside a program on the mobile terminal: all schedule information must be set manually, the operation flow is cumbersome and time-consuming, and once set, the schedule information is displayed only on the user's own terminal, where other users cannot view it. In addition, existing video content generation relies mainly on manual editing and processing: it must pass through cumbersome editing, production and modification steps, is slow to produce, needs repeated revision, and cannot automatically select material for a specific scene to complete the video content generation.
In view of the above problems, an exemplary embodiment of the present disclosure provides a video display method. The method recreates the user's real scene as a virtual shared space using digital twin technology; collects the user's to-be-handled information and generates corresponding marks in the virtual shared-space scene, triggered by means such as session-information collection and real-time monitoring of the mobile terminal's traffic; analyzes and integrates the to-be-handled information into the virtual shared-space scene; and, combined with virtual reality technology, generates a personal to-do video and displays it to the user.
The video presentation method of the exemplary embodiments of the present disclosure may be applied to various schedule-prompt scenarios. Fig. 1 illustrates a schematic diagram of an example system in which the various methods described herein may be implemented. As shown in fig. 1, the example system 100 of an exemplary embodiment of the present disclosure includes: a video server 110 and a user terminal 120.
In practical applications, the video server 110 may include one or more processors (the processor may include, but is not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory for storing data, and in some possible cases, the video server 110 may further include a transmission device for communication functions. It will be appreciated by those of ordinary skill in the art that the structure shown in fig. 1 is merely illustrative and is not intended to limit the structure of the video server 110 described above. For example, video server 110 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory may be used to store computer programs, for example software programs of application software and their corresponding modules, and the video server 110 implements the above method by running the computer programs stored in the memory, thereby performing various functional applications and data processing. The memory may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory remotely located with respect to the processor, which may be connected to the video server 110 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In practical application, the user needs to enable the real-time positioning permission and the dialogue and text monitoring permission of the user terminal 120. When the traffic of the user terminal changes, dialogue-text monitoring is triggered to acquire a to-be-handled schedule message. The video server 110 generates a mark in the virtual shared-space scene at the address corresponding to the to-be-handled schedule, creates a virtual character image matching the to-be-handled schedule message, merges the virtual character image and the to-be-handled event in the message into the virtual shared-space scene, generates a video associated with the to-be-handled schedule message, and controls the user terminal 120 to display the video at the designated time.
The video presentation method according to the exemplary embodiments of the present disclosure may be performed by a server or by a chip applied in the server. Fig. 2 shows a basic flowchart of the video presentation method of an exemplary embodiment of the present disclosure. As shown in fig. 2, the video presentation method may include:
step 201: and fusing schedule simulation pictures in the virtual scene based on the schedule message to be handled to obtain a video. The user can create a virtual character associated with the user in the virtual scene, so that the schedule simulation screen can be a simulation screen of the virtual character for completing schedule information to be handled in the virtual scene, for example, when the schedule information to be handled is swimming, the schedule simulation screen can be a screen of the virtual character in a swimming state.
By way of example, the living environment of the real scene may be digitally twinned, based on Internet technology, into a 1:1 replica. Because different users have different life tracks, the video server and the user terminal reach an agreement, and the user terminal is entrusted to collect the user's activity-track information in real time. It should be understood that for the video server to acquire the user's activity track, the real-time positioning permission of the user terminal must be enabled; meanwhile, a satellite scene graph can be used to generate an image of the real street view corresponding to the user's activity track, thereby producing a personal daily movement track map fused with the real street view.
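A rough sketch of how a personal daily movement track might be assembled from the positioning samples the terminal reports in real time (illustrative only; `build_daily_track` and the sample format are assumptions, not the patent's implementation):

```python
def build_daily_track(position_samples):
    """Collapse real-time positioning samples (time, place) into an ordered
    daily movement track, dropping consecutive duplicate places."""
    track = []
    for t, place in sorted(position_samples):
        if not track or track[-1][1] != place:
            track.append((t, place))
    return track

samples = [
    ("08:00", "home"), ("08:30", "home"),
    ("09:00", "company"), ("18:30", "natatorium"),
]
track = build_daily_track(samples)
```

Each entry of the resulting track could then be matched against the satellite street view to render the fused track map described above.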
In practical application, the user terminal can also monitor the user's dialogue messages in real time. If a dialogue message is detected to contain schedule key information, the to-be-handled schedule information is acquired based on that key information; the personal daily movement track map is updated accordingly, and the schedule simulation picture is fused into the virtual scene based on the to-be-handled schedule information to generate the associated video. The exemplary embodiments of the present disclosure can thus recreate the real scene as a virtual scene, acquire the user's life track, and update the user's personal daily movement track map in real time based on the schedule key information contained in the user's session messages. This reduces the user's editing operations on the to-be-handled schedule message, saves the user's time, and, by merging the message into the virtual scene and displaying it as a video, lets the user acquire the message's content more vividly.
Step 202: when an event satisfying the execution condition of the to-be-handled schedule is detected, control the terminal to display the video.
For example, the schedule execution condition may be: if the time difference between the current time and the time key segment corresponding to the schedule information is less than or equal to a preset time difference, control the terminal to display the video. In this way, the user is prompted to complete the to-be-handled schedule only when the execution condition is met, preventing the user from forgetting the schedule.
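The execution-condition check just described can be sketched as follows (a hedged illustration; the function name `should_display` and the 30-minute preset difference are assumptions, since the disclosure leaves the preset value open):

```python
from datetime import datetime, timedelta

def should_display(now, schedule_start, preset_diff=timedelta(minutes=30)):
    """Display the video only when the schedule is upcoming and the gap
    between now and its time key segment is within the preset difference."""
    gap = schedule_start - now
    return timedelta(0) <= gap <= preset_diff

# 8:30 against a 9:00 schedule: the gap is 30 minutes, inside the preset window
show = should_display(datetime(2023, 7, 22, 8, 30), datetime(2023, 7, 22, 9, 0))
```

With these assumed values, a check an hour before the schedule would not trigger display, and neither would a check after the schedule's start time.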
Fig. 3 illustrates a schematic diagram of a personal daily movement trajectory according to an exemplary embodiment of the present disclosure. As shown in fig. 3, the personal daily movement track diagram 300 includes a home 301 with a sleep-rest video 3011, a Chongqing noodle restaurant 302 with a meal video 3021, a natatorium 303 with a swimming video 3031, and a company 304 with a work video 3041. Specifically, user A's daily movement track may run as follows: at 8:00 the user leaves home 301 for the company 304; at 8:30 the user terminal displays the work video 3041 to prompt that work starts at 9:00; the user arrives at the company at 9:00 and works there, eating lunch at the company; at 18:00 the user terminal displays the swimming video 3031 to prompt the user to go to the natatorium 303 at 18:30; the user arrives at the natatorium 303 at 18:30 and swims for one hour; at 19:00 the user terminal displays the meal video 3021 to prompt the user to go to the Chongqing noodle restaurant 302 at 20:00; after the meal the user drives home; and at 23:30 the user terminal displays the sleep video 3011 to prompt the user to rest. It should be understood that the user terminal may also play a video of the user's driving picture in real time while the user is driving, and the video background may be consistent with, and change synchronously with, the street view the user passes while driving.
In an alternative manner, when the server detects that the data-traffic throughput rate of the terminal meets the acquisition condition, it detects user dialogue messages; if a dialogue message is detected to contain schedule key information, the to-be-handled schedule information is acquired based on that key information.
In practical application, the mobile terminal can collect schedule key information with the user's authorization. The key information is generated mainly in three modes: telephone communication in the communication network domain, information interaction based on instant-messaging software, and the user's face-to-face communication in daily life. The first two modes affect the traffic delivery of the communication network domain and the real-time data-traffic throughput rate, so a traffic-change monitoring model can be constructed based on the real-time throughput rate: when the model detects a gradient change in the traffic, the user terminal is triggered to acquire the to-be-handled schedule information. The gradient change calculation formula is as follows:
y = 2x^2 + 1
where x is the real-time traffic detection value and y is the real-time data-traffic throughput rate.
In the actual monitoring process, the derivative of the gradient change formula can be taken. When the derivative value is 0, the rate of change of the data-traffic throughput rate at that point is 0, and the user's mobile terminal can be considered to be in a sleep state; when the derivative value is not 0, the throughput rate is changing, and when the change in throughput rate at that point is greater than a preset change value, the server controls the user terminal to collect the to-be-handled schedule information. It should be appreciated that in the first mode the preset change value may be set to the minimum data-traffic throughput rate generated by telephone communication in the communication network domain; in the second mode it may be set to the minimum throughput rate generated by information interaction over instant-messaging software; it may also be set according to actual conditions.
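The trigger logic above can be illustrated directly from the stated formula y = 2x^2 + 1, whose derivative is dy/dx = 4x (the function names and threshold handling below are illustrative assumptions, not the patent's implementation):

```python
def throughput(x):
    """Stated mapping from the real-time traffic detection value x
    to the data-traffic throughput rate: y = 2x^2 + 1."""
    return 2 * x ** 2 + 1

def throughput_derivative(x):
    """dy/dx = 4x; a zero derivative is read as a terminal in sleep state."""
    return 4 * x

def should_collect(x_prev, x_now, preset_change):
    """Trigger collection of to-be-handled schedule information when the
    throughput-rate change between two detections exceeds the preset value."""
    if throughput_derivative(x_now) == 0:
        return False  # sleep state: no activity to collect
    return abs(throughput(x_now) - throughput(x_prev)) > preset_change
```

For instance, a detection value moving from 1 to 3 changes y from 3 to 19, a jump of 16; with a preset change value of 5, collection would be triggered.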
For the third mode, the user's face-to-face communication in daily life, the user terminal needs to enable a real-time dialogue monitoring function, acquire the user's dialogue information in real time, and obtain the to-be-handled schedule information from it.
In an optional manner, for the second mode, since instant-messaging software also supports text interaction, the text information generated by the user terminal can be monitored in real time to acquire the to-be-handled schedule information. To acquire the user's to-be-handled schedule information accurately, the user terminal can be monitored 24 hours a day with the user's permission, so that its dialogue and text information is monitored and the to-be-handled schedule information collected in real time; this ensures the accuracy of acquisition and avoids omissions.
In an alternative manner, the to-be-handled schedule information may be acquired based on schedule key information, where the schedule key information includes at least one of an event key segment, participant information, a place key segment and a time key segment, and the event key segment, participant information, place key segment and time key segment within the same set of schedule key information are associated with one another.
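The grouping of the four associated key segments might be represented as below (a sketch only; a real system would use speech recognition and NLP rather than this toy regex, and all names and the dialogue format are assumptions):

```python
from dataclasses import dataclass
import re

@dataclass
class ScheduleKeyInfo:
    event: str          # event key segment
    participants: list  # participant information
    place: str          # place key segment
    time: str           # time key segment

def extract_key_info(dialogue):
    """Pull one set of associated key segments out of a dialogue message;
    the rigid 'X, Y will E at P on T' pattern is purely illustrative."""
    m = re.search(
        r"(?P<who>\w+(?:, \w+)*) will (?P<event>[\w ]+) at (?P<place>[\w ]+) on (?P<time>[\w :]+)",
        dialogue,
    )
    if not m:
        return None
    return ScheduleKeyInfo(
        event=m.group("event").strip(),
        participants=[p.strip() for p in m.group("who").split(",")],
        place=m.group("place").strip(),
        time=m.group("time").strip(),
    )

info = extract_key_info("UserA, UserB will play football at the stadium on Saturday 9:00")
```

Keeping the four segments in one record preserves the association the disclosure requires between event, participants, place and time.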
In an optional manner, fusing the schedule place, the schedule attribute and the schedule participant information into the virtual scene based on the to-be-handled schedule information to obtain the video includes: generating a schedule simulation picture based on the schedule attribute and the schedule participant information; and determining a virtual position based on the schedule place, then fusing the virtual environment information corresponding to that position into the schedule simulation picture to obtain the video, the schedule place matching the virtual position in the virtual scene. The schedule attribute may include the schedule event, the schedule's start time and the place where the schedule occurs in the real scene; the schedule participant information identifies the participants of the schedule.
In practical application, because the virtual scene is a 1:1 replica of the real street view, the virtual environment information corresponding to the virtual position can be fused into the schedule simulation picture, which improves the authenticity of the video and also indicates the location of the user's schedule place.
In practical application, the schedule simulation picture includes virtual-character behavior information corresponding to the schedule participant information, and its display position in the virtual scene is the virtual position matched with the schedule place. Here, the virtual-character behavior information is the state information of the virtual character completing the schedule event; for example, when the schedule event is swimming, the behavior information is the state of the virtual character swimming, so that the swimming state can be fused into the schedule simulation picture, making the picture more vivid and reminding the user to complete the swimming event.
In practical application, the server can generate the user's virtual character in the virtual scene based on the user's personal image; the user can edit the initial virtual-character image to obtain the final virtual character, and the virtual character's position in the virtual scene can stay consistent with the user's position in the real scene. After the server determines the to-be-handled schedule information, it generates the schedule simulation picture from the information's content and displays it, for example in a floating manner, at the virtual position in the virtual scene matched with the schedule place.
In an alternative manner, if the schedule participant information includes a plurality of participants, controlling the terminal to display the video includes: controlling a plurality of terminals to display the video, the terminals corresponding one-to-one with the participants. The exemplary embodiments of the present disclosure can thus prompt all participants to complete the to-be-handled schedule simultaneously, sparing the user from notifying participants one by one and ensuring each participant is notified in time at reduced time cost.
In the embodiment of the application, the video obtained from the to-be-handled schedule message can be a conventional video or a video ringback tone, and can be sent to the terminal for display through a conventional video distribution channel or through a video ringback-tone distribution channel.
When the schedule participant information includes a plurality of participants, the server can directly send the generated video to the plurality of terminals for display. Alternatively, the server may send the generated video to the terminal of one of the participants, who then selects one or more of the other participants to notify, or decides to notify all of them. When that terminal receives an instruction to notify other participants, it feeds the instruction back to the server, and the server distributes the video to the other participants to be notified.
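The two distribution paths described above (direct fan-out to every participant, or fan-out only to the subset one recipient chooses) can be sketched as follows (illustrative names and data shapes; the disclosure does not specify a protocol):

```python
def distribute_video(video, participants, terminals, notify_all=True, chosen=None):
    """Server-side fan-out: push the video to every participant's terminal,
    or only to the participants the first recipient chose to notify."""
    targets = participants if notify_all else (chosen or [])
    return {user: (terminals[user], video) for user in targets if user in terminals}

# one terminal per participant, as in the one-to-one correspondence above
terminals = {"UserA": "t1", "UserB": "t2", "UserC": "t3"}
sent = distribute_video("football_video", ["UserA", "UserB", "UserC"], terminals)
```

In the second path the server would first deliver to one participant and, on receiving that participant's feedback instruction, call the same fan-out with `notify_all=False` and the chosen subset.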
Fig. 4 illustrates another schematic diagram of a personal daily movement trajectory according to an exemplary embodiment of the present disclosure. As shown in fig. 4, when user A talks with other users through instant-messaging software, the user terminal may acquire user A's dialogue information in real time and extract the schedule key information from it. Suppose the acquired event key segment is playing football, the participant information is user A and user B, the place key segment is a football stadium, and the time key segment is Saturday 9:00 a.m.; meanwhile, another set of schedule key information is obtained from the dialogue: the event key segment is lunch, the participant information is user A, user B and user C, the place key segment is a Hunan restaurant, and the time key segment is Saturday 12:00 noon. Based on this, the to-be-handled schedule information can be acquired from the schedule key information, and user A's personal daily movement track for Saturday is updated accordingly.
In fig. 4, the presentation position of the schedule simulation screen in the virtual scene may be located at the virtual position information matched with the schedule place. For example: at the user's home 401 in the virtual scene, the schedule simulation screen 4011 may be a screen of the user resting, in which case the schedule place matches the virtual position information of the user's home 401 in the virtual scene; at the football field 402 in the virtual scene, the schedule simulation screen 4021 may be a screen of the user playing football, in which case the schedule place matches the virtual position information of the football field 402 in the virtual scene; at the Hunan-cuisine restaurant 403 in the virtual scene, the schedule simulation screen 4031 may be a screen of the user eating, in which case the schedule place matches the virtual position information of the Hunan-cuisine restaurant 403 in the virtual scene. It should be understood that the schedule place may be the location information, in a real street view, at which the schedule event occurs.
In the schedule information to be handled of the first event, the schedule place is a football field; the schedule attribute may include schedule participant information comprising user A and user B, and the schedule event is playing football; the schedule start condition is Saturday at 9:00 a.m. In the schedule information to be handled of the second event, the schedule place is a Hunan-cuisine restaurant; the schedule attribute may include schedule participant information comprising user A, user B, and user C, and the schedule event is lunch; the schedule start condition is Saturday at 12:00 noon.
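As a sketch of the data involved, the two to-be-handled schedule entries above can be represented like this; the `TodoSchedule` structure and its field names are hypothetical, chosen only to mirror the key segments named in the text:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TodoSchedule:
    """One piece of to-be-handled schedule information."""
    event: str               # event key segment / schedule event
    participants: List[str]  # schedule participant information
    place: str               # location key segment / schedule place
    start: str               # time key segment / schedule start condition

# The two events extracted from the Fig. 4 dialogue
schedules = [
    TodoSchedule("playing football", ["user A", "user B"],
                 "football field", "Saturday 09:00"),
    TodoSchedule("lunch", ["user A", "user B", "user C"],
                 "Hunan-cuisine restaurant", "Saturday 12:00"),
]
```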
Fig. 5 illustrates a schematic diagram of schedule behavior information of virtual characters corresponding to a football-playing event according to an exemplary embodiment of the present disclosure. As shown in fig. 5, the schedule simulation screen includes schedule behavior information of at least one virtual character satisfying the schedule participant information. For example, in the schedule simulation screen of playing football, the content corresponding to the schedule behavior information of the virtual character of user A is shown as virtual character 501, and the content corresponding to the schedule behavior information of the virtual character of user B is shown as virtual character 502; both virtual characters are shown playing football.
In an alternative approach, the virtual scene may also be constructed from a user activity trajectory and the corresponding environment. For example, a virtual scene may be constructed from a user's movement trajectory and the real street view corresponding to that trajectory; a virtual scene built from the user's own trajectory is exclusive to that user. To construct such an exclusive virtual scene, the user may personally collect real street-view environment images and afterwards select the image content of the real street view, for example adding favorite items into the real scene, or adjusting the angle of the image acquisition device according to personal preference, so that the exclusive virtual scene is constructed according to the user's preferences.
In an optional manner, controlling the terminal to display the video may include: if the terminal is detected to be playing a target video, acquiring a video data segment to be played based on the target video; inserting the video into the video data segment to be played to obtain a video segment to be played containing the video; and controlling the terminal to display the video segment to be played containing the video.
Fig. 6 shows a flowchart of controlling a terminal to present video according to an exemplary embodiment of the present disclosure. As shown in fig. 6, the method specifically comprises the following steps:
Step 601: fuse the schedule simulation picture in the virtual scene based on the schedule message to be handled to obtain a video.
Step 602: the server detects the running content of the user terminal.
Step 603: if target video browsing based on a video application program is detected, acquire a video data segment to be played based on the target video; insert the video into the video data segment to be played to obtain a video segment to be played containing the video; and control the terminal to display the video segment to be played containing the video.
According to the method, the display of the video segment to be played is achieved by a "borrowed player" approach. When the server has fused the schedule simulation picture in the virtual scene based on the schedule information to be handled to obtain a video, and an event meeting the execution condition of the schedule to be handled is detected, the running content of the user's current terminal is identified through the user terminal. If it is found that the user is browsing a target video through a video application program, the video data segment to be played is first acquired based on the target video; the video data is then packaged into the target video watched by the user by occupying the digital-content delivery channel of the target video content, yielding the video segment to be played containing the video; and the terminal is controlled to display that segment, thereby realizing the video insertion.
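The splicing step can be sketched as a list operation on the segments that have not yet been played; `insert_notification` is an illustrative name, and a real player pipeline would operate on encoded stream data rather than Python lists:

```python
def insert_notification(pending_segments, notification, position=0):
    """Insert the schedule-notification video into the queue of video
    data segments still to be played, so the notification is shown
    during the user's current viewing session."""
    pending = list(pending_segments)
    return pending[:position] + [notification] + pending[position:]

# The notification plays before the remaining target-video segments.
queue = insert_notification(["seg_3", "seg_4"], "schedule_video")
```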
Step 604: if it is detected that the terminal is not currently browsing a target video through a video application program but is running another application program, build a video notification layer and display the video to the user based on the notification layer.
In this case, the video notification layer may be built through the user terminal, and the video is displayed to the user on the basis of that notification layer.
Step 605: the user confirms the video content based on the message confirmation control.
In an alternative manner, the server may determine the user terminal from which the schedule message to be handled was initiated, and notify the user through that terminal in video form when the schedule message to be handled takes effect. Meanwhile, a message confirmation control may be displayed in the video. After the terminal displays the video, the user needs to click the message confirmation control; when the terminal detects the click, it feeds this back to the server, and the server thereby confirms that the user is aware of the schedule to be handled contained in the video, so that the next event meeting the execution condition of a schedule to be handled can be detected.
The execution condition of the schedule to be handled may include: if the time difference between the current time and the time key segment corresponding to the schedule information to be handled is less than or equal to a preset time difference, the terminal is controlled to display the video. It should be understood that the schedule execution condition here may be set by the user or by the server.
In practical application, if the preset time difference is 30 minutes, the current time of the user terminal is 9:00, and the time key segment corresponding to the schedule information to be handled is 9:30, then the time difference between the current time and the time key segment is equal to the preset time difference, and the server controls the terminal to display the video.
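The timing check described above amounts to comparing the remaining time against the preset difference; a minimal sketch, with illustrative function and constant names:

```python
from datetime import datetime, timedelta

PRESET_DIFF = timedelta(minutes=30)  # the preset time difference

def should_display(now: datetime, schedule_time: datetime,
                   preset: timedelta = PRESET_DIFF) -> bool:
    """True when the schedule start time is no more than `preset` away,
    i.e. the to-be-handled schedule execution condition is met."""
    remaining = schedule_time - now
    return timedelta(0) <= remaining <= preset

# The example from the text: current time 9:00, schedule at 9:30.
now = datetime(2023, 7, 22, 9, 0)
start = datetime(2023, 7, 22, 9, 30)
```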
According to one or more technical solutions provided in the exemplary embodiments of the present disclosure, a schedule simulation picture can be fused in a virtual scene based on a schedule message to be handled, so as to obtain a video. Because the schedule simulation picture displays the content of the schedule message to be handled, fusing it into the virtual scene makes the picture of the resulting video richer; the video containing the schedule simulation picture also reflects more clearly the content the schedule message to be handled is meant to express, so the user can grasp that content more clearly when watching the video. Moreover, when an event meeting the execution condition of the schedule to be handled is detected, the terminal can be controlled to display the video, prompting the user by video before the schedule to be handled takes effect. Therefore, the exemplary embodiments of the present disclosure can integrate the schedule message to be handled into a virtual scene and present it to the user as a video, allowing the user to grasp its content more vividly; and because the video is displayed on the terminal only when an event meeting the execution condition is detected, the user is well prompted to complete the schedule to be handled.
The above description has been presented mainly from the point of view of the terminal and the server for the solutions provided by the embodiments of the present disclosure. It will be appreciated that the terminals and servers, in order to implement the above-described functions, include corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The embodiment of the disclosure may divide functional units of the terminal and the server according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated in one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present disclosure, the division of the modules is merely a logic function division, and other division manners may be implemented in actual practice.
In the case of dividing each functional module by corresponding each function, exemplary embodiments of the present disclosure provide a video display apparatus. Fig. 7 shows a block schematic diagram of a video presentation device according to an exemplary embodiment of the present disclosure. As shown in fig. 7, the apparatus 700 includes:
an obtaining module 701, configured to fuse a schedule simulation picture in a virtual scene based on a schedule message to be handled, and obtain a video;
and the control module 702 is used for controlling the terminal to display the video when the event meeting the schedule execution condition to be handled is detected.
As a possible implementation manner, the apparatus 700 further includes a detecting module 703, configured to detect a user session message when it is detected that the data traffic throughput rate of the terminal meets the acquisition condition; and if the user dialogue message is detected to contain schedule key information, acquiring the schedule information to be handled based on the schedule key information.
As a possible implementation manner, the data traffic throughput rate change value is greater than a preset change value.
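This acquisition condition can be sketched as a simple threshold test on successive throughput-rate measurements; the names below are illustrative:

```python
def meets_acquisition_condition(prev_rate, cur_rate, preset_change):
    """True when the change in data-traffic throughput rate exceeds the
    preset change value, triggering detection of user dialogue messages."""
    return abs(cur_rate - prev_rate) > preset_change

# e.g. throughput jumps from 10 to 25 units with a preset change of 10
triggered = meets_acquisition_condition(10.0, 25.0, 10.0)
```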
As a possible implementation manner, the schedule key information includes at least one of an event key segment, participant information, a location key segment, and a time key segment.
As a possible implementation manner, the to-be-handled schedule information includes a schedule location, a schedule attribute, and schedule participant information, and the fusing of a schedule simulation picture in a virtual scene based on the to-be-handled schedule information to obtain a video includes: generating a schedule simulation picture based on the schedule attribute and the schedule participant information; determining a virtual position based on the schedule place, and fusing virtual environment information corresponding to the virtual position into a schedule simulation picture to obtain a video; the schedule place is matched with a virtual position in the virtual scene; the schedule simulation picture comprises virtual character behavior information corresponding to the schedule participant information; and the display position of the schedule simulation picture in the virtual scene is positioned at a virtual position matched with the schedule place.
As a possible implementation manner, if the schedule participant information includes a plurality of participants, the control terminal displays the video, including: and controlling a plurality of terminals to display the video, wherein the terminals are in one-to-one correspondence with the participants.
As a possible implementation manner, the virtual scene is constructed according to the user activity track and the corresponding environment.
As a possible implementation manner, the controlling the terminal to display the video includes: if the terminal is detected to play a target video, acquiring a video data segment to be played based on the target video; inserting the video into the video data segment to be played to obtain a video segment to be played containing the video; and controlling the terminal to display the video clip to be played containing the video.
As a possible implementation manner, the schedule execution condition to be handled includes controlling the terminal to display the video if a time difference between a current time and a time key segment corresponding to schedule information to be handled is less than or equal to a preset time difference.
Fig. 8 shows a schematic block diagram of a chip of an exemplary embodiment of the present disclosure. As shown in fig. 8, the chip 800 includes one or more (including two) processors 801 and a communication interface 802. The communication interface 802 may support a server to perform the data transceiving steps of the method described above, and the processor 801 may support a server to perform the data processing steps of the method described above.
Optionally, as shown in fig. 8, the chip 800 further includes a memory 803, and the memory 803 may include a read only memory and a random access memory, and provide operation instructions and data to the processor. A portion of the memory may also include non-volatile random access memory (non-volatile random access memory, NVRAM).
In some implementations, as shown in fig. 8, the processor 801 performs the corresponding operation by invoking an operation instruction stored in the memory (the operation instruction may be stored in an operating system). The processor 801 controls the processing operations of any of the terminal devices, and may also be referred to as a central processing unit (central processing unit, CPU). Memory 803 may include read-only memory and random access memory, and provides instructions and data to the processor 801. A portion of the memory 803 may also include NVRAM. The processor 801, the communication interface 802, and the memory 803 are coupled together by a bus system, which may include a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 804 in fig. 8.
The method disclosed by the embodiments of the present disclosure can be applied to a processor or implemented by the processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The processor may be a general-purpose processor, a digital signal processor (digital signal processing, DSP), an ASIC, a field-programmable gate array (field-programmable gate array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic blocks disclosed in the embodiments of the present disclosure may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present disclosure may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules within a decoding processor. The software module may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in the memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method.
The exemplary embodiments of the present disclosure also provide an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores a computer program executable by the at least one processor for causing the electronic device to perform a method according to embodiments of the present disclosure when executed by the at least one processor.
The present disclosure also provides a non-transitory computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor of a computer, is for causing the computer to perform a method according to an embodiment of the present disclosure.
The present disclosure also provides a computer program product comprising a computer program, wherein the computer program, when executed by a processor of a computer, is for causing the computer to perform a method according to embodiments of the disclosure.
Referring to fig. 9, a block diagram of an electronic device 900, which may be a server or a client of the present disclosure and is an example of a hardware device applicable to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the electronic device 900 includes a computing unit 901 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 902 or a computer program loaded from a storage unit 908 into a random access memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the device 900 can also be stored. The computing unit 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
A number of components in the electronic device 900 are connected to the I/O interface 905, including: an input unit 906, an output unit 907, a storage unit 908, and a communication unit 909. The input unit 906 may be any type of device capable of inputting information to the electronic device 900; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device. The output unit 907 may be any type of device capable of presenting information, and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 908 may include, but is not limited to, magnetic disks and optical disks. The communication unit 909 allows the electronic device 900 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth(TM) devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
As shown in fig. 9, the computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 901 performs the respective methods and processes described above. For example, in some embodiments, the methods of the exemplary embodiments of the present disclosure may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 900 via the ROM 902 and/or the communication unit 909. In some embodiments, the computing unit 901 may be configured to perform the method by any other suitable means (e.g., by means of firmware).
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As used in this disclosure, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and pointing device (e.g., a mouse or trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions described by the embodiments of the present disclosure are performed in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, a terminal, a user equipment, or other programmable apparatus. The computer program or instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, the computer program or instructions may be transmitted from one website site, computer, server, or data center to another website site, computer, server, or data center by wired or wireless means. The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that integrates one or more available media. The usable medium may be a magnetic medium, e.g., floppy disk, hard disk, tape; optical media, such as digital video discs (digital video disc, DVD); but also semiconductor media such as solid state disks (solid state drive, SSD).
Although the present disclosure has been described in connection with specific features and embodiments thereof, it will be apparent that various modifications and combinations thereof can be made without departing from the spirit and scope of the disclosure. Accordingly, the specification and drawings are merely exemplary illustrations of the present disclosure as defined in the appended claims and are considered to cover any and all modifications, variations, combinations, or equivalents within the scope of the disclosure. It will be apparent to those skilled in the art that various modifications and variations can be made to the present disclosure without departing from the spirit or scope of the disclosure. Thus, the present disclosure is intended to include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (10)
1. A method of displaying video, the method comprising:
fusing schedule simulation pictures in the virtual scene based on the schedule message to be handled to obtain a video;
and when the event meeting the schedule execution condition to be handled is detected, the control terminal displays the video.
2. The method according to claim 1, wherein the method further comprises:
detecting a user dialogue message when detecting that the data traffic throughput rate of the terminal meets the acquisition condition;
And if the user dialogue message is detected to contain schedule key information, acquiring the schedule information to be handled based on the schedule key information.
3. The method of claim 2, wherein the data traffic throughput rate variation value is greater than a preset variation value.
4. The method of claim 2, wherein the schedule-critical information includes at least one of an event-critical section, a participant information, a location-critical section, a time-critical section.
5. The method of claim 1, wherein the schedule information to be handled includes a schedule place, a schedule attribute, and schedule participant information, and wherein fusing a schedule simulation picture in a virtual scene based on the schedule information to be handled to obtain a video includes:
generating a schedule simulation picture based on the schedule attribute and the schedule participant information;
determining a virtual position based on the schedule place, and fusing virtual environment information corresponding to the virtual position into a schedule simulation picture to obtain a video; the schedule place is matched with a virtual position in the virtual scene;
the schedule simulation picture comprises virtual character behavior information corresponding to the schedule participant information;
And the display position of the schedule simulation picture in the virtual scene is positioned at a virtual position matched with the schedule place.
6. The method of claim 1, wherein if the schedule participant information includes a plurality of participants, the control terminal displays the video, comprising:
and controlling a plurality of terminals to display the video, wherein the terminals are in one-to-one correspondence with the participants.
7. The method of claim 1, wherein the virtual scene is constructed from a user activity trajectory and a corresponding environment.
8. The method of claim 1, wherein the controlling the terminal to display the video comprises:
if the terminal is detected to play a target video, acquiring a video data segment to be played based on the target video;
inserting the video into the video data segment to be played to obtain a video segment to be played containing the video;
and controlling the terminal to display the video clip to be played containing the video.
9. The method of claim 1, wherein the schedule execution condition to be handled includes: controlling the terminal to display the video if a time difference between the current time and the time key segment corresponding to the schedule information to be handled is less than or equal to a preset time difference.
10. An electronic device, comprising:
a processor; and a memory storing a program;
wherein the program comprises instructions which, when executed by the processor, cause the processor to perform the method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310895920.3A CN116980524A (en) | 2023-07-20 | 2023-07-20 | Video display method and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116980524A true CN116980524A (en) | 2023-10-31 |
Family
ID=88470557
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310895920.3A Pending CN116980524A (en) | 2023-07-20 | 2023-07-20 | Video display method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116980524A (en) |
- 2023-07-20: CN application CN202310895920.3A filed, published as CN116980524A (en), status: active, Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP4068795A1 (en) | Method and apparatus for displaying multimedia resources, device and storage medium | |
CN108681436A (en) | Image quality parameter adjusting method, device, terminal and storage medium | |
AU2021314277B2 (en) | Interaction method and apparatus, and electronic device and computer-readable storage medium | |
CN109495427B (en) | Multimedia data display method and device, storage medium and computer equipment | |
CN108337547B (en) | Character animation realization method, device, terminal and storage medium | |
CN107734352B (en) | Information determination method, device and storage medium | |
CN107004182A (en) | The souvenir taken action from Real-Time Sharing | |
US20240267586A1 (en) | Display control method and apparatus, and device and storage medium | |
CN113094135A (en) | Page display control method, device, equipment and storage medium | |
US20170220314A1 (en) | Group-viewing assistance device, group-viewing assistance method, and viewing apparatus | |
CN111596995B (en) | Display method and device and electronic equipment | |
CN111131757B (en) | Video conference display method, device and storage medium | |
CN114885199B (en) | Real-time interaction method, device, electronic equipment, storage medium and system | |
CN115278275A (en) | Information presentation method, device, equipment, storage medium and program product | |
CN110134480B (en) | User trigger operation processing method and device, electronic equipment and storage medium | |
CN111782166B (en) | Multi-screen interaction method, device, equipment and storage medium | |
WO2021010012A1 (en) | System, method, and program for distribution of live stream video | |
CN116980524A (en) | Video display method and electronic equipment | |
CN117354447A (en) | Video playing method and device and computer equipment | |
JP2016512623A (en) | Context-sensitive application / event launch for people with various cognitive levels | |
US20160266743A1 (en) | System, method, and storage medium storing program for distributing video or audio | |
CN118051157A (en) | Media content processing method, apparatus, device, readable storage medium and product | |
JP6889323B2 (en) | Systems, methods, and programs for delivering live video | |
US10847047B2 (en) | Information processing device, information processing method, and computer program | |
CN113992930A (en) | Virtual resource conversion method and device, live broadcast system, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |