CN105204846A - Method for displaying video picture in multi-user video, device and terminal equipment - Google Patents
Method for displaying video picture in multi-user video, device and terminal equipment
- Publication number
- CN105204846A (application CN201510531744.0A)
- Authority
- CN
- China
- Prior art keywords
- video
- floating window
- chat object
- image
- chat
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
The invention relates to a method, a device, and terminal equipment for displaying a video picture in a multi-user video. The method comprises: receiving a floating window starting instruction during a multi-user video chat and, according to the instruction, displaying a video floating window on the display interface of the terminal equipment, wherein the area of the video floating window is smaller than the area of the display interface and the region of the display interface outside the video floating window is used, according to the user's operations, to display content different from that in the video floating window; displaying the image of a first chat object in the video floating window; acquiring a video switching trigger event according to a user operation; and switching the image of the first chat object displayed in the video floating window to the image of a second chat object according to the video switching trigger event. The invention allows the user to carry out the video chat and other operations at the same time, which improves the use efficiency of the terminal equipment; it also enables the display to switch among the images of multiple chat objects, which improves the user experience.
Description
Technical Field
The present disclosure relates to video processing, and in particular, to a method and an apparatus for displaying a video frame in a multi-user video, and a terminal device.
Background
In the related art, a user chats by video with one or more other users through a terminal device. The video interface displayed on the screen of the terminal device during the chat comprises a full-screen window and a small window: the image of the other party is displayed in the full-screen window, and the image of the user is displayed in the small window. If several users are video chatting at the same time, the user can switch which party's image is displayed in the full-screen window by pressing the Tab key.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a method and an apparatus for displaying a video frame in a multi-user video, and a terminal device.
According to a first aspect of the embodiments of the present disclosure, there is provided a method for displaying a video frame in a multi-person video, including:
receiving a floating window starting instruction in the process of carrying out a multi-person video chat, and displaying a video floating window on a display interface of a terminal device according to the floating window starting instruction, wherein the area of the video floating window is smaller than that of the display interface, and the area of the display interface other than the video floating window is used for displaying, according to the operation of the user, an image different from that in the video floating window;
displaying an image of a first chat object in the video floating window;
acquiring a video switching trigger event according to user operation;
switching the image of the first chat object displayed by the video floating window into the image of a second chat object according to the video switching trigger event; the chat objects of the multi-person video chat at least comprise the first chat object and the second chat object.
According to a second aspect of the embodiments of the present disclosure, there is provided a display apparatus of a video picture in a multi-person video, including:
the receiving unit is configured to receive a floating window starting instruction in the process of carrying out multi-person video chat;
the floating window starting unit is configured to display a video floating window on a display interface according to the floating window starting instruction, wherein the area of the video floating window is smaller than that of the display interface, and the area of the display interface other than the video floating window is used for displaying, according to the operation of the user, an image different from that in the video floating window;
a display unit configured to display an image of a first chat object in the video floating window;
an acquisition unit configured to acquire a video switching trigger event according to a user operation;
an image switching unit configured to switch the image of the first chat object displayed by the video floating window into an image of a second chat object according to the video switching trigger event; the chat objects of the multi-person video chat at least comprise the first chat object and the second chat object.
The technical scheme provided by the embodiments of the disclosure can have the following beneficial effects: by displaying the image of the video chat in the video floating window, the user can carry out the video chat and other operations at the same time, which improves the use efficiency of the terminal device; in addition, by switching the image displayed in the video floating window according to the user's operation, the display can be switched among the images of multiple chat objects, which greatly facilitates use and improves the user experience.
Further, the first chat object includes:
randomly selecting a chat object from the chat objects of the multi-person video chat;
or the chat object which is speaking when the floating window starting instruction is received.
The beneficial effects are as follows: when the video floating window is initially started, the user may not have explicitly specified which chat object's image should be displayed in it; in that case the picture shown in the floating window can be determined by random selection or by a voice recognition technology.
Further, the acquiring a video switching trigger event according to a user operation includes:
and acquiring the video switching trigger event according to the sliding operation of a user in the first direction or the second direction in the area where the video floating window is located.
The switching the image of the first chat object displayed by the video floating window into the image of the second chat object according to the video switching triggering event comprises:
according to the video switching trigger event obtained by the sliding operation towards the first direction, determining a chat object behind the first chat object in a prearranged sequence as the second chat object, and switching the image of the first chat object displayed by the video floating window into the image of the second chat object; or,
and according to the video switching trigger event obtained by the sliding operation in the second direction, determining a chat object which is in front of the first chat object in a prearranged sequence as the second chat object, and switching the image of the first chat object displayed by the video floating window into the image of the second chat object.
The beneficial effects are as follows: the image displayed in the floating window can be switched by the user's sliding operation in the area where the video floating window is located. The switching can use two directions, and different sliding directions trigger the floating window to display the image of the chat object after or before the first chat object, which improves the efficiency and flexibility of switching the picture of the floating window.
Further, the acquiring a video switching trigger event according to a user operation includes:
and acquiring the video switching trigger event according to the clicking operation of the user in the area where the video floating window is located.
The switching the image of the first chat object displayed by the video floating window into the image of the second chat object according to the video switching triggering event comprises:
and determining a chat object behind the first chat object in a prearranged sequence as the second chat object according to the video switching trigger event obtained by the clicking operation, and switching the image of the first chat object displayed by the video floating window into the image of the second chat object.
The switching the image of the first chat object displayed by the video floating window into the image of the second chat object according to the video switching triggering event comprises:
and, according to the video switching trigger event obtained by the clicking operation, determining, by a voice recognition technology, the chat object that is speaking when the video switching trigger event is received as the second chat object, and switching the image of the first chat object displayed by the video floating window into the image of the second chat object.
The beneficial effects are as follows: the image displayed in the floating window can be switched by the user's clicking operation in the area where the video floating window is located. The switch can either move in sequence to the chat object after the first chat object or jump to the chat object that is speaking, which improves the efficiency and flexibility of switching the picture of the floating window.
Further, after the video floating window is displayed on the display interface of the terminal device according to the floating window starting instruction, the method further includes:
and adjusting the position of the video floating window on the display interface according to the dragging operation of the user.
The beneficial effects are as follows: the position of the floating window on the display interface can be adjusted by the user's dragging, which improves the flexibility of the floating window's placement and improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart illustrating a method for displaying a video picture in a multi-person video according to an exemplary embodiment;
FIG. 2 is a schematic diagram of the display interface of the terminal device after the video floating window is started;
FIG. 3 is a schematic diagram of a user performing a sliding operation to the left or right on the floating window;
FIG. 4 is a schematic diagram of a user dragging the video floating window with a finger;
FIG. 5 is a block diagram of a display device for a video picture in a multi-person video according to an exemplary embodiment;
FIG. 6 is a block diagram of a display device for a video picture in a multi-person video according to an exemplary embodiment;
FIG. 7 is a block diagram of a terminal device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention. Rather, they are merely examples of apparatuses and methods consistent with certain aspects of the invention, as detailed in the appended claims.
FIG. 1 is a flowchart illustrating a method for displaying a video picture in a multi-person video according to an exemplary embodiment. As shown in FIG. 1, the method is used in a terminal device, for example a smart phone, a tablet device, a computer, or a personal digital assistant, and may be executed by a central processor or another component with a processing function in such a device. The method comprises the following steps:
in step 101, receiving a floating window starting instruction in a process of conducting multi-person video chat, and displaying a video floating window on a display interface of a terminal device according to the floating window starting instruction, wherein the area of the video floating window is smaller than that of the display interface, and other areas on the display interface except the video floating window are used for displaying images different from the video floating window according to user operation;
the user can carry out video chat with one or more other users through the terminal equipment, sometimes the user needs to carry out other operations through the terminal equipment in the chat process, such as sending and receiving mails, browsing webpages, playing games and the like, at the moment, in order to not interrupt the video chat, the user can only need to switch the chat mode from a full screen mode to a video floating window mode without quitting the software of the video chat, the terminal equipment can place the video floating window at any position of the display interface, the video image is displayed in the video floating window, the area of the video floating window is smaller than that of the display interface of the terminal equipment, the area of the video floating window is arranged at the top end of the display interface of the terminal equipment, no matter which other operations are carried out on the terminal equipment by the user, the video chat in the video floating window is not influenced, and the video floating window does not influence other operations on the display interface, thus, the user can simultaneously carry out video chat and other operations. In this embodiment, a user may start the video floating window by clicking a specially-set switching key, or start the video floating window by sliding a finger on a touch screen of the terminal device, where a floating window start instruction may be formed and received by the terminal device as long as the user operation from a full-screen mode to a video floating window mode is realized.
In step 102, displaying an image of a first chat object in the video floating window;
generally, a video chat involves two or more parties. After the terminal device starts the video floating window, the image of one of the chat objects (i.e., the first chat object) can be displayed in the video floating window; the first chat object may be the user who is using the terminal device or one of the remote users video chatting with that user.
In step 103, acquiring a video switching trigger event according to user operation;
when the user wants to switch the image displayed in the video floating window, the user can operate on the video floating window. The operation may be performed with a mouse in the area where the video floating window is located, or directly with a finger in that area, and the terminal device obtains a video switching trigger event from the user operation. In this embodiment, the user operation for triggering the video switch can take several forms, and the terminal device captures it according to its own characteristics; for example, a smart terminal receives operation instructions through its touch screen, while a computer receives them through a mouse or keyboard. A sketch of mapping such input to a switching trigger event is given below.
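As a hedged sketch of this step, the Kotlin code below maps raw touch input in the floating-window area to a switching trigger event: a short tap becomes a click trigger and a horizontal movement beyond a threshold becomes a slide trigger. The event names (SwitchTrigger, Tap, Swipe) and the 80-pixel threshold are assumptions for illustration only; a mouse- or keyboard-driven terminal would map its own input in the same way.

```kotlin
// Sketch (assumption): turning touch input on the floating window into a video switching trigger event.
import android.view.MotionEvent

sealed class SwitchTrigger {
    object Tap : SwitchTrigger()                              // click operation in the floating-window area
    data class Swipe(val toRight: Boolean) : SwitchTrigger()  // sliding operation in one of two directions
}

class TriggerDetector(private val onTrigger: (SwitchTrigger) -> Unit) {

    private var downX = 0f
    private val swipeThresholdPx = 80f   // illustrative threshold separating a tap from a slide

    // Feed the floating window's MotionEvents here, e.g. from View.setOnTouchListener.
    fun onTouchEvent(event: MotionEvent): Boolean {
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN -> downX = event.x
            MotionEvent.ACTION_UP -> {
                val dx = event.x - downX
                when {
                    dx > swipeThresholdPx  -> onTrigger(SwitchTrigger.Swipe(toRight = true))
                    dx < -swipeThresholdPx -> onTrigger(SwitchTrigger.Swipe(toRight = false))
                    else                   -> onTrigger(SwitchTrigger.Tap)
                }
            }
        }
        return true
    }
}
```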
In step 104, switching the image of the first chat object displayed by the video floating window into the image of a second chat object according to the video switching trigger event; the chat objects of the multi-person video chat at least comprise the first chat object and the second chat object.
After the terminal device obtains the video switching trigger event, the image of the first chat object in the video floating window is switched to the image of the second chat object; the second chat object can be any of the parties in the video chat other than the first chat object.
According to the video display method, the image of the video chat is displayed in the video floating window, so the video chat and the user's other operations can proceed at the same time, which improves the use efficiency of the terminal device; and the image displayed in the video floating window is switched according to the user's operation, so the display can be switched among the images of multiple chat objects, which greatly facilitates use and improves the user experience.
As an example, FIG. 2 shows the display interface of a terminal device after the video floating window has been started. As shown in FIG. 2, the image of one chat object of the video chat is displayed in the video floating window, and a user operation can cause the terminal device to switch which chat object's image is displayed there, so the video floating window can switch among the images of multiple chat objects. The area of the display interface outside the video floating window can display other content, for example the call interface of social software, so the user can carry out the video chat and other operations at the same time.
Further, the first chat object includes: randomly selecting a chat object from the chat objects of the multi-person video chat; or the chat object which is speaking when the floating window starting instruction is received.
When the video floating window is initially started, the user may not have explicitly specified which chat object's image should be displayed in it; in that case the terminal device can randomly select one of the two or more chat objects and display its image in the video floating window.
Alternatively, since only one chat object's image can be displayed, the chat object who is speaking is usually the natural choice; the terminal device can therefore perform voice recognition on the video data of each chat object to determine which chat object is currently speaking, and display that chat object's image in the video floating window. A sketch of this initial selection is given below.
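A minimal Kotlin sketch of this initial selection follows. The ChatObject type and its isSpeaking flag are assumptions standing in for whatever participant data and voice-activity detection the terminal actually provides.

```kotlin
// Sketch (assumption): choose the first chat object for the floating window —
// prefer whoever is speaking when the window opens, otherwise pick at random.
data class ChatObject(val id: String, val isSpeaking: Boolean)

fun pickFirstChatObject(participants: List<ChatObject>): ChatObject =
    participants.firstOrNull { it.isSpeaking }  // chat object speaking when the start instruction arrives
        ?: participants.random()                // fall back to random selection (list assumed non-empty)
```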
Further, in step 103, the terminal device may obtain the video switching trigger event from the user's sliding operation in the first direction or the second direction in the area where the video floating window is located. In that case, step 104 comprises: according to the video switching trigger event obtained from the sliding operation in the first direction, determining the chat object after the first chat object in a pre-arranged order as the second chat object, and switching the image of the first chat object displayed in the video floating window to the image of the second chat object; or, according to the video switching trigger event obtained from the sliding operation in the second direction, determining the chat object before the first chat object in the pre-arranged order as the second chat object, and switching the image of the first chat object displayed in the video floating window to the image of the second chat object.
As an example, FIG. 3 is a schematic diagram of a user performing a sliding operation on the floating window in the first direction or the second direction; the first and second directions may be, for example, left and right, or any two opposite directions, and are not specifically limited. As shown in FIG. 3, during the video chat the user can trigger a video switch by sliding a finger to the left or to the right in the area where the video floating window is located. The terminal device sorts the objects participating in the video chat in advance; when the user's finger slides to the left, the triggered switch replaces the image of the current first chat object with the image of the next chat object, and when the finger slides to the right, it replaces it with the image of the previous chat object. In this way the image in the video floating window can be switched among the images of different chat objects according to the user's needs, which greatly facilitates use and improves the user experience. A sketch of such a circular order is given below.
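The circular, pre-arranged order behind this behaviour can be modelled as in the Kotlin sketch below; the class name ParticipantRing and the mapping of left/right to next/previous are illustrative assumptions.

```kotlin
// Sketch (assumption): a pre-arranged circular order of chat objects; a slide in the first
// direction moves to the next one, a slide in the second direction to the previous one.
class ParticipantRing(private val ordered: List<String>) {
    private var index = 0

    val current: String get() = ordered[index]

    fun next(): String {                        // e.g. finger slides to the left
        index = (index + 1) % ordered.size
        return current
    }

    fun previous(): String {                    // e.g. finger slides to the right
        index = (index - 1 + ordered.size) % ordered.size
        return current
    }
}
```

For example, with ParticipantRing(listOf("A", "B", "C")) a left slide from A shows B, and a right slide from A wraps around to C.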
Further, in step 103, the terminal device may obtain the video switching trigger event from the user's clicking operation in the area where the video floating window is located. In that case, step 104 comprises: according to the video switching trigger event obtained from the clicking operation, determining the chat object after the first chat object in the pre-arranged order as the second chat object, and switching the image of the first chat object displayed in the video floating window to the image of the second chat object.
During the video chat the user can trigger a video switch by performing a single-click or double-click with a finger or a mouse in the area where the video floating window is located. The terminal device can sort the objects participating in the video chat into a circular order in advance, and with each click the image of the current first chat object is switched to the image of the next chat object. In this way the image in the video floating window can be switched among the images of different chat objects according to the user's needs, which greatly facilitates use and improves the user experience. The sketch below wires the trigger events to this switching logic.
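The Kotlin sketch below dispatches trigger events to the switching logic, reusing the hypothetical SwitchTrigger and ParticipantRing types from the sketches above; a tap cycles to the next chat object, while slides move forward or backward in the pre-arranged order.

```kotlin
// Sketch (assumption): step 104 dispatch — map a trigger event to the second chat object
// and hand its id to whatever renders the floating window (the `show` callback).
fun handleTrigger(trigger: SwitchTrigger, ring: ParticipantRing, show: (String) -> Unit) {
    val target = when (trigger) {
        is SwitchTrigger.Tap   -> ring.next()   // click: switch to the next chat object in the order
        is SwitchTrigger.Swipe -> if (trigger.toRight) ring.previous() else ring.next()
    }
    show(target)
}
```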
Further, in step 103, the terminal device may obtain the video switching trigger event from the user's clicking operation in the area where the video floating window is located. In that case, step 104 comprises: according to the video switching trigger event obtained from the clicking operation, determining, by a voice recognition technology, the chat object that is speaking when the video switching trigger event is received as the second chat object, and switching the image of the first chat object displayed in the video floating window to the image of the second chat object.
The user triggers the video switch by performing a single-click or double-click with a finger or a mouse in the area where the video floating window is located. When the terminal device receives the switching trigger, it performs voice recognition on the video data of each chat object to determine which chat object is currently speaking, and switches the image of the first chat object displayed in the video floating window to the image of that chat object. In this way, whenever the speaking chat object changes, the user can quickly switch to the corresponding image, which greatly facilitates use and improves the user experience. A sketch of this speaker selection follows.
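As a rough sketch of this variant, the Kotlin snippet below approximates "the chat object that is speaking" by the participant with the highest audio level above a silence threshold. That approximation, the ParticipantAudio type, and the 0.1 threshold are assumptions; the patent only requires that some voice recognition technology identify the current speaker.

```kotlin
// Sketch (assumption): pick the chat object that is speaking when the click trigger arrives,
// using per-participant audio levels as a stand-in for voice recognition.
data class ParticipantAudio(val id: String, val audioLevel: Float)  // 0.0 = silent .. 1.0 = loud

fun pickSpeakingObject(levels: List<ParticipantAudio>, silenceThreshold: Float = 0.1f): String? =
    levels.filter { it.audioLevel > silenceThreshold }
        .maxByOrNull { it.audioLevel }   // loudest active participant is treated as the speaker
        ?.id                             // null if nobody is speaking, so the picture is left unchanged
```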
Further, after the terminal device displays the video floating window on the display interface, the position of the video floating window on the display interface can be adjusted according to the user's dragging operation. As an example, FIG. 4 is a schematic diagram of a user dragging the video floating window with a finger. As shown in FIG. 4, if the video floating window blocks the display interface of another operation the user is performing, the user can drag it with a finger or a mouse to another position on the display interface. This flexible repositioning of the video floating window makes the user's other operations easier and improves the user experience. A drag-handling sketch is given below.
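A drag-handling sketch in the same Android-style Kotlin follows; it assumes the WindowManager and LayoutParams from the earlier FloatingVideoWindow sketch and simply moves the window by the distance the finger has travelled.

```kotlin
// Sketch (assumption): reposition the floating window by dragging it with a finger.
import android.view.MotionEvent
import android.view.View
import android.view.WindowManager

fun attachDragHandler(view: View, windowManager: WindowManager, params: WindowManager.LayoutParams) {
    var startX = 0; var startY = 0     // window position when the drag began
    var touchX = 0f; var touchY = 0f   // finger position when the drag began

    view.setOnTouchListener { v, event ->
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN -> {
                startX = params.x; startY = params.y
                touchX = event.rawX; touchY = event.rawY
            }
            MotionEvent.ACTION_MOVE -> {
                // Offset the window by how far the finger has moved since the drag began.
                params.x = startX + (event.rawX - touchX).toInt()
                params.y = startY + (event.rawY - touchY).toInt()
                windowManager.updateViewLayout(v, params)
            }
        }
        true
    }
}
```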
Fig. 5 is a block diagram of a display device for video pictures in a multi-person video according to an example embodiment. Referring to fig. 5, the apparatus includes a receiving unit 11, a floating window starting unit 12, a display unit 13, an acquiring unit 14, and an image switching unit 15.
The receiving unit 11 is configured to receive a floating window starting instruction in the process of carrying out multi-person video chat;
the floating window starting unit 12 is configured to display a video floating window on a display interface according to the floating window starting instruction, where the area of the video floating window is smaller than that of the display interface, and other areas on the display interface except the video floating window are used for displaying an image different from the video floating window according to an operation of a user;
the display unit 13 configured to display an image of a first chat object in the video floating window;
the acquiring unit 14 is configured to acquire a video switching trigger event according to a user operation;
the image switching unit 15 is configured to switch the image of the first chat object displayed by the video floating window into the image of the second chat object according to the video switching trigger event; the chat objects of the multi-person video chat at least comprise the first chat object and the second chat object.
Further, the first chat object includes: randomly selecting a chat object from the chat objects of the multi-person video chat; or the chat object which is speaking when the floating window starting instruction is received.
Further, the obtaining unit 14 is configured to obtain the video switching trigger event according to a sliding operation of a user in a first direction or a second direction in an area where the video floating window is located. The image switching unit 15 is configured to determine, according to the video switching trigger event obtained by the sliding operation in the first direction, a chat object subsequent to the first chat object in a pre-arranged order as the second chat object, and switch the image of the first chat object displayed on the video floating window to the image of the second chat object; or, according to the video switching trigger event obtained by the sliding operation in the second direction, determining a chat object before the first chat object in a prearranged sequence as the second chat object, and switching the image of the first chat object displayed by the video floating window into the image of the second chat object.
Further, the obtaining unit 14 is configured to obtain the video switching trigger event according to a click operation of a user in an area where the video floating window is located. The image switching unit 15 is configured to determine, according to the video switching trigger event obtained by the click operation, a chat object subsequent to the first chat object in a pre-arranged order as the second chat object, and switch the image of the first chat object displayed on the video floating window to the image of the second chat object; or, the image switching unit 15 is configured to determine, according to the video switching trigger event obtained by the click operation, a chat object that is speaking when the video switching trigger event is received as the second chat object by using a voice recognition technology, and switch the image of the first chat object displayed by the video floating window to the image of the second chat object.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 6 is a block diagram of a display device for video pictures in a multi-person video according to an example embodiment. Referring to fig. 6, on the basis of the apparatus shown in fig. 5, the apparatus further includes a position adjusting unit 16.
The position adjusting unit 16 is configured to adjust the position of the video floating window on the display interface according to a dragging operation of a user.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 7 is a block diagram illustrating a terminal device according to an example embodiment. For example, the terminal device 800 may be a smart phone, a computer, a tablet device, a personal digital assistant, and the like.
Referring to fig. 7, terminal device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the terminal device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation of the terminal device 800. Examples of such data include instructions for any application or method operating on terminal device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
Power components 806 provide power to the various components of terminal device 800. Power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for terminal device 800.
The multimedia component 808 comprises a screen providing an output interface between the terminal device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. When the terminal device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the terminal device 800 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
Sensor component 814 includes one or more sensors for providing various aspects of state assessment for terminal device 800. For example, sensor assembly 814 may detect the open/closed status of terminal device 800 and the relative positioning of components, such as the display and keypad of terminal device 800; it may also detect a change in the position of terminal device 800 or of a component of terminal device 800, the presence or absence of user contact with terminal device 800, the orientation or acceleration/deceleration of terminal device 800, and a change in the temperature of terminal device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. Sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge-Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
Communication component 816 is configured to facilitate wired or wireless communication between terminal device 800 and other devices. The terminal device 800 may access a wireless network based on a communication standard, such as Wireless Fidelity (WiFi), 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the terminal device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a compact disc-read only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer-readable storage medium having instructions therein which, when executed by a processor of a terminal device, enable the terminal device to perform a video display method, the method comprising: receiving a floating window starting instruction, and displaying a video floating window on a display interface of the terminal device according to the floating window starting instruction, wherein the area of the video floating window is smaller than that of the display interface, and the area of the display interface other than the video floating window is used for displaying, according to the operation of the user, an image different from that in the video floating window; displaying an image of a first chat object in the video floating window; acquiring a video switching trigger event according to a user operation; and switching the image of the first chat object displayed in the video floating window to the image of a second chat object according to the video switching trigger event.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
Claims (17)
1. A method for displaying a video frame in a multi-person video, comprising:
receiving a floating window starting instruction in the process of carrying out a multi-person video chat, and displaying a video floating window on a display interface of a terminal device according to the floating window starting instruction, wherein the area of the video floating window is smaller than that of the display interface, and the area of the display interface other than the video floating window is used for displaying, according to the operation of the user, an image different from that in the video floating window;
displaying an image of a first chat object in the video floating window;
acquiring a video switching trigger event according to user operation;
switching the image of the first chat object displayed by the video floating window into the image of a second chat object according to the video switching trigger event; the chat objects of the multi-person video chat at least comprise the first chat object and the second chat object.
2. The method of claim 1, wherein the first chat object comprises:
randomly selecting a chat object from the chat objects of the multi-person video chat;
or the chat object which is speaking when the floating window starting instruction is received.
3. The method according to claim 1 or 2, wherein the acquiring a video switching trigger event according to a user operation comprises:
and acquiring the video switching trigger event according to the sliding operation of a user in the first direction or the second direction in the area where the video floating window is located.
4. The method of claim 3, wherein switching the image of the first chat object displayed by the video floating window to the image of the second chat object according to the video switching trigger event comprises:
according to the video switching trigger event obtained by the sliding operation towards the first direction, determining a chat object behind the first chat object in a prearranged sequence as the second chat object, and switching the image of the first chat object displayed by the video floating window into the image of the second chat object; or,
and according to the video switching trigger event obtained by the sliding operation in the second direction, determining a chat object which is in front of the first chat object in a prearranged sequence as the second chat object, and switching the image of the first chat object displayed by the video floating window into the image of the second chat object.
5. The method according to claim 1 or 2, wherein the acquiring a video switching trigger event according to a user operation comprises:
and acquiring the video switching trigger event according to the clicking operation of the user in the area where the video floating window is located.
6. The method of claim 5, wherein switching the image of the first chat object displayed by the video floating window to the image of the second chat object according to the video switching trigger event comprises:
and determining a chat object behind the first chat object in a prearranged sequence as the second chat object according to the video switching trigger event obtained by the clicking operation, and switching the image of the first chat object displayed by the video floating window into the image of the second chat object.
7. The method of claim 5, wherein switching the image of the first chat object displayed by the video floating window to the image of the second chat object according to the video switching trigger event comprises:
and, according to the video switching trigger event obtained by the clicking operation, determining, by a voice recognition technology, the chat object that is speaking when the video switching trigger event is received as the second chat object, and switching the image of the first chat object displayed by the video floating window into the image of the second chat object.
8. The method according to claim 1, wherein after the video floating window is displayed on the display interface of the terminal device according to the floating window starting instruction, the method further comprises:
and adjusting the position of the video floating window on the display interface according to the dragging operation of the user.
9. A device for displaying video frames in a multi-person video, comprising:
the receiving unit is configured to receive a floating window starting instruction in the process of carrying out multi-person video chat;
the floating window starting unit is configured to display a video floating window on a display interface according to the floating window starting instruction, wherein the area of the video floating window is smaller than that of the display interface, and the area of the display interface other than the video floating window is used for displaying, according to the operation of the user, an image different from that in the video floating window;
a display unit configured to display an image of a first chat object in the video floating window;
an acquisition unit configured to acquire a video switching trigger event according to a user operation;
an image switching unit configured to switch the image of the first chat object displayed by the video floating window into an image of a second chat object according to the video switching trigger event; the chat objects of the multi-person video chat at least comprise the first chat object and the second chat object.
10. The apparatus of claim 9, wherein the first chat object comprises:
randomly selecting a chat object from the chat objects of the multi-person video chat;
or the chat object which is speaking when the floating window starting instruction is received.
11. The apparatus according to claim 9 or 10, wherein the obtaining unit is configured to obtain the video switching trigger event according to a sliding operation of a user in a first direction or a second direction in an area where the video floating window is located.
12. The apparatus according to claim 11, wherein the image switching unit is configured to determine a chat object subsequent to the first chat object in a pre-arranged order as the second chat object according to the video switching trigger event obtained by the sliding operation to the first direction, and switch the image of the first chat object displayed on the video floating window to the image of the second chat object; or, according to the video switching trigger event obtained by the sliding operation in the second direction, determining a chat object before the first chat object in a prearranged sequence as the second chat object, and switching the image of the first chat object displayed by the video floating window into the image of the second chat object.
13. The apparatus according to claim 9 or 10, wherein the obtaining unit is configured to obtain the video switching trigger event according to a click operation of a user in an area where the video floating window is located.
14. The apparatus according to claim 13, wherein the image switching unit is configured to determine a chat object subsequent to the first chat object in a pre-arranged order as the second chat object according to the video switching trigger event obtained by the click operation, and switch the image of the first chat object displayed on the video floating window to the image of the second chat object.
15. The apparatus according to claim 13, wherein the image switching unit is configured to determine, according to the video switching trigger event obtained by the click operation, a chat object that is speaking when the video switching trigger event is received as the second chat object by using a voice recognition technology, and switch the image of the first chat object displayed on the video floating window to the image of the second chat object.
16. The apparatus of claim 9, further comprising:
and the position adjusting unit is configured to adjust the position of the video floating window on the display interface according to the dragging operation of the user.
17. A terminal device, comprising: a display screen, a processor, and a memory for storing processor-executable instructions;
wherein the processor is configured to execute instructions to perform the method of any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510531744.0A CN105204846B (en) | 2015-08-26 | 2015-08-26 | Display methods, device and the terminal device of video pictures in more people's videos |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105204846A true CN105204846A (en) | 2015-12-30 |
CN105204846B CN105204846B (en) | 2019-07-09 |
Family
ID=54952552
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510531744.0A Active CN105204846B (en) | 2015-08-26 | 2015-08-26 | Display methods, device and the terminal device of video pictures in more people's videos |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105204846B (en) |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105771243A (en) * | 2016-03-14 | 2016-07-20 | 广州趣丸网络科技有限公司 | Method and system for achieving multiplayer voice interaction in mobile terminal game |
CN105797380A (en) * | 2016-03-14 | 2016-07-27 | 广州趣丸网络科技有限公司 | Method and system for achieving information interaction among group members in game of mobile terminal |
CN106168869A (en) * | 2016-06-24 | 2016-11-30 | 北京奇虎科技有限公司 | Desktop view processing method based on suspended window, device and terminal |
CN108182021A (en) * | 2018-01-30 | 2018-06-19 | 腾讯科技(深圳)有限公司 | Multimedia messages methods of exhibiting, device, storage medium and equipment |
CN109101161A (en) * | 2018-08-24 | 2018-12-28 | 北京新界教育科技有限公司 | Display methods and device |
CN109819331A (en) * | 2019-01-21 | 2019-05-28 | 维沃移动通信有限公司 | A kind of video call method, device, mobile terminal |
CN110457095A (en) * | 2018-05-07 | 2019-11-15 | 苹果公司 | Multi-player real time communication user interface |
CN112463274A (en) * | 2020-11-20 | 2021-03-09 | 北京搜狗科技发展有限公司 | Interface adjusting method and device and electronic equipment |
CN113300934A (en) * | 2020-02-24 | 2021-08-24 | 钉钉控股(开曼)有限公司 | Communication method, device, equipment and storage medium |
CN114092167A (en) * | 2020-08-10 | 2022-02-25 | 北京京东尚科信息技术有限公司 | Interface display method and device |
US11360634B1 (en) | 2021-05-15 | 2022-06-14 | Apple Inc. | Shared-content session user interfaces |
US11399155B2 (en) | 2018-05-07 | 2022-07-26 | Apple Inc. | Multi-participant live communication user interface |
US11431891B2 (en) | 2021-01-31 | 2022-08-30 | Apple Inc. | User interfaces for wide angle video conference |
US11435877B2 (en) | 2017-09-29 | 2022-09-06 | Apple Inc. | User interface for multi-user communication session |
US11513667B2 (en) | 2020-05-11 | 2022-11-29 | Apple Inc. | User interface for audio message |
WO2023087969A1 (en) * | 2021-11-22 | 2023-05-25 | 北京字节跳动网络技术有限公司 | Speaking user selecting method and apparatus, electronic device, and storage medium |
US11770600B2 (en) | 2021-09-24 | 2023-09-26 | Apple Inc. | Wide angle video conference |
US11895391B2 (en) | 2018-09-28 | 2024-02-06 | Apple Inc. | Capturing and displaying images with multiple focal planes |
US11893214B2 (en) | 2021-05-15 | 2024-02-06 | Apple Inc. | Real-time communication user interface |
US11907605B2 (en) | 2021-05-15 | 2024-02-20 | Apple Inc. | Shared-content session user interfaces |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103546611A (en) * | 2012-07-13 | 2014-01-29 | Lg电子株式会社 | Mobile terminal and controlling method thereof |
CN104519303A (en) * | 2013-09-29 | 2015-04-15 | 华为技术有限公司 | Multi-terminal conference communication processing method and device |
CN104394480A (en) * | 2014-03-10 | 2015-03-04 | 贵阳朗玛信息技术股份有限公司 | Method and device for realizing chat on mobile terminal |
CN103957447A (en) * | 2014-05-08 | 2014-07-30 | 济南四叶草信息技术有限公司 | Multi-window floating playing system |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105771243B (en) * | 2016-03-14 | 2020-06-19 | 广州趣丸网络科技有限公司 | Method and system for realizing multi-user voice interaction in mobile terminal game |
CN105797380A (en) * | 2016-03-14 | 2016-07-27 | 广州趣丸网络科技有限公司 | Method and system for achieving information interaction among group members in game of mobile terminal |
CN105771243A (en) * | 2016-03-14 | 2016-07-20 | 广州趣丸网络科技有限公司 | Method and system for achieving multiplayer voice interaction in mobile terminal game |
CN105797380B (en) * | 2016-03-14 | 2020-06-19 | 广州趣丸网络科技有限公司 | Method and system for realizing information interaction among group members in mobile terminal game |
CN106168869A (en) * | 2016-06-24 | 2016-11-30 | 北京奇虎科技有限公司 | Desktop view processing method based on suspended window, device and terminal |
CN106168869B (en) * | 2016-06-24 | 2019-06-21 | 北京奇虎科技有限公司 | Desktop view processing method, device and terminal based on suspended window |
US11435877B2 (en) | 2017-09-29 | 2022-09-06 | Apple Inc. | User interface for multi-user communication session |
CN108182021A (en) * | 2018-01-30 | 2018-06-19 | 腾讯科技(深圳)有限公司 | Multimedia messages methods of exhibiting, device, storage medium and equipment |
US11399155B2 (en) | 2018-05-07 | 2022-07-26 | Apple Inc. | Multi-participant live communication user interface |
US11849255B2 (en) | 2018-05-07 | 2023-12-19 | Apple Inc. | Multi-participant live communication user interface |
CN110457095A (en) * | 2018-05-07 | 2019-11-15 | 苹果公司 | Multi-player real time communication user interface |
CN109101161A (en) * | 2018-08-24 | 2018-12-28 | 北京新界教育科技有限公司 | Display methods and device |
CN109101161B (en) * | 2018-08-24 | 2021-06-18 | 北京新界教育科技有限公司 | Display method and device |
US11895391B2 (en) | 2018-09-28 | 2024-02-06 | Apple Inc. | Capturing and displaying images with multiple focal planes |
CN109819331B (en) * | 2019-01-21 | 2021-08-20 | 维沃移动通信有限公司 | Video call method, device and mobile terminal |
CN109819331A (en) * | 2019-01-21 | 2019-05-28 | 维沃移动通信有限公司 | A kind of video call method, device, mobile terminal |
CN113300934B (en) * | 2020-02-24 | 2023-08-22 | 钉钉控股(开曼)有限公司 | Communication method, device, equipment and storage medium |
CN113300934A (en) * | 2020-02-24 | 2021-08-24 | 钉钉控股(开曼)有限公司 | Communication method, device, equipment and storage medium |
US11513667B2 (en) | 2020-05-11 | 2022-11-29 | Apple Inc. | User interface for audio message |
CN114092167A (en) * | 2020-08-10 | 2022-02-25 | 北京京东尚科信息技术有限公司 | Interface display method and device |
CN112463274B (en) * | 2020-11-20 | 2024-02-02 | 北京搜狗智能科技有限公司 | Interface adjustment method and device and electronic equipment |
CN112463274A (en) * | 2020-11-20 | 2021-03-09 | 北京搜狗科技发展有限公司 | Interface adjusting method and device and electronic equipment |
US11431891B2 (en) | 2021-01-31 | 2022-08-30 | Apple Inc. | User interfaces for wide angle video conference |
US11467719B2 (en) | 2021-01-31 | 2022-10-11 | Apple Inc. | User interfaces for wide angle video conference |
US11671697B2 (en) | 2021-01-31 | 2023-06-06 | Apple Inc. | User interfaces for wide angle video conference |
US11360634B1 (en) | 2021-05-15 | 2022-06-14 | Apple Inc. | Shared-content session user interfaces |
US11822761B2 (en) | 2021-05-15 | 2023-11-21 | Apple Inc. | Shared-content session user interfaces |
US11449188B1 (en) | 2021-05-15 | 2022-09-20 | Apple Inc. | Shared-content session user interfaces |
US11893214B2 (en) | 2021-05-15 | 2024-02-06 | Apple Inc. | Real-time communication user interface |
US11907605B2 (en) | 2021-05-15 | 2024-02-20 | Apple Inc. | Shared-content session user interfaces |
US11928303B2 (en) | 2021-05-15 | 2024-03-12 | Apple Inc. | Shared-content session user interfaces |
US11812135B2 (en) | 2021-09-24 | 2023-11-07 | Apple Inc. | Wide angle video conference |
US11770600B2 (en) | 2021-09-24 | 2023-09-26 | Apple Inc. | Wide angle video conference |
WO2023087969A1 (en) * | 2021-11-22 | 2023-05-25 | 北京字节跳动网络技术有限公司 | Speaking user selecting method and apparatus, electronic device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN105204846B (en) | 2019-07-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105204846B (en) | Display methods, device and the terminal device of video pictures in more people's videos | |
CN107908351B (en) | Application interface display method and device and storage medium | |
CN109683714B (en) | Multimedia resource management method, device and storage medium | |
US10509540B2 (en) | Method and device for displaying a message | |
CN108108418B (en) | Picture management method, device and storage medium | |
CN107102772B (en) | Touch control method and device | |
CN109521918B (en) | Information sharing method and device, electronic equipment and storage medium | |
CN105786507B (en) | Display interface switching method and device | |
CN105511777B (en) | Session display method and device on touch display screen | |
CN109451341B (en) | Video playing method, video playing device, electronic equipment and storage medium | |
CN106020682A (en) | Multi-task management method and device | |
CN104216525B (en) | Method and device for mode control of camera application | |
CN105094539B (en) | Reference information display methods and device | |
CN105242837B (en) | Five application page acquisition methods and terminal | |
CN112463084A (en) | Split screen display method and device, terminal equipment and computer readable storage medium | |
CN109783171B (en) | Desktop plug-in switching method and device and storage medium | |
CN108881634A (en) | Terminal control method, device and computer readable storage medium | |
CN106919302B (en) | Operation control method and device of mobile terminal | |
CN112269525B (en) | Small screen window display method and device and storage medium | |
CN106447747B (en) | Image processing method and device | |
CN109739415B (en) | Session switching method and device | |
CN114356476B (en) | Content display method, content display device, electronic equipment and storage medium | |
CN106020699B (en) | A kind of input method switching method, device and terminal | |
CN106708930B (en) | Method and device for refreshing application page | |
CN106375744B (en) | Information projecting method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |