
CN103888840A - Method and device for dragging and zooming video mobile terminal in real time - Google Patents


Info

Publication number
CN103888840A
CN103888840A (application CN201410119164.6A / CN201410119164A; granted as CN103888840B)
Authority
CN
China
Prior art keywords
zooming
submodule
dragging
decoding
thread
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410119164.6A
Other languages
Chinese (zh)
Other versions
CN103888840B (en)
Inventor
田玲
周益民
罗光春
闫科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201410119164.6A
Publication of CN103888840A
Application granted
Publication of CN103888840B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Controls And Circuits For Display Device (AREA)

Abstract

The invention discloses a method and device for dragging and zooming video on a mobile terminal in real time. The device mainly comprises a decoding module, a drawing module and an interaction module. The decoding module comprises a decoding submodule, a decoding control submodule, a conversion submodule and double buffers. The drawing module comprises an extraction submodule, a filling submodule and a drawing control submodule. The interaction module comprises a monitoring submodule, a computing submodule, a request diverter, a zooming adapter and a dragging adapter. The method and device solve the problems of video playing on mobile terminals in the prior art; the implementation process is simple and efficient. A method is also provided for detecting when the video source range exceeds the display area after dragging or zooming, which effectively facilitates video playback operations on the mobile terminal.

Description

Method and device for dragging and zooming video mobile terminal in real time
Technical Field
The invention belongs to the technical field of video processing, and particularly relates to a method and a device for dragging and zooming video on a mobile terminal in real time.
Background
The development of information technology is changing with the wide popularization of the internet, and the era of the mobile internet has arrived. 3G networks are now widely used in daily life, more than 20 countries around the world have deployed 4G networks, and more and more terminal devices support them. Video-related services are likewise shifting from traditional television and PC terminals to mobile terminals, whose portability gives them high user stickiness and dependence; the internet of the future is mobile. Users have become accustomed to using portable devices to enjoy services anytime and anywhere, and continuous innovation in high-compression-ratio coding standards such as MPEG-4, H.264/AVC and HEVC has made low-bit-rate, high-quality video playback attainable, accelerating the arrival of the "dynamic exhibition" era. Traditional internet video is characterized by high resolution and a large display size, whereas a mobile terminal, precisely because it is portable, is generally small and has a display screen of limited size; users nevertheless expect to obtain both the complete picture and its detailed information. Because mobile terminal devices have characteristics that desktop devices do not, the technology of dragging and zooming video on a mobile terminal is particularly important and indispensable.
Because its display screen is small, a mobile terminal can display only a limited amount of data, so the displayed information is incomplete when current internet video resources are transferred to it directly. For real-time video playing, the same video resource is played on many kinds of devices at the same time; the characteristics of the playback ends differ, the diversity of mobile terminals in particular is large, display sizes and resolutions vary widely, and users have strong real-time requirements. Research on real-time video processing for mobile terminals therefore answers an urgent user need and has great social application value. The most common operations on a mobile terminal are dragging to select the display area and zooming to display detailed information.
Disclosure of Invention
The invention aims to solve the problems of playing video on mobile terminals in the prior art by providing a method and a device for dragging and zooming video on a mobile terminal in real time.
The technical scheme of the invention is as follows: a method for dragging and zooming video on a mobile terminal in real time comprises a decoding thread, a drawing thread and an interaction thread, which run in parallel;
the decoding thread performs the following specific steps:
s11, after the video stream reaches the mobile terminal, decoding the video stream to obtain an original data frame;
s12, collecting the scaling parameters generated by the interaction thread and controlling the scaling ratio of the video;
s13, converting the video format, and finally converting the video format into an RGB intermediate frame of an ARGB8888 standard in an RGB format;
s14, storing the converted RGB intermediate frame into a double buffer;
s15, waking up the drawing thread after frame synchronization, and simultaneously entering the decoding of the next frame of video;
the drawing thread performs the following specific steps:
s21, starting a drawing thread after receiving a wake-up signal of the decoding thread;
s22, collecting the positioning parameters generated by the interactive thread, and controlling the extraction area of the video image;
s23, extracting data of a part to be displayed from the double buffers;
s24, writing the data to be displayed extracted in step S23 into the area to be displayed;
s25, entering a dormant state after the data filling is finished until a wake-up signal of the decoding thread is received again, and entering the drawing of the next frame;
the specific steps of the interaction thread are as follows:
s31, starting an interaction thread, and monitoring an interaction event in a display window, wherein the interaction event comprises: single-finger dragging and double-finger zooming;
s32, calculating and analyzing the interaction events in the display window, and judging whether the requirements of the user change; if yes, the step S33 is entered, otherwise, the step S31 is returned to continue monitoring the interaction events in the display window;
s33, calculating a zooming component and a dragging component;
s34, performing coordinate conversion calculation according to the zoom component to obtain the zoom ratio, generating the zoom parameters, and transmitting them to the decoding thread;
and S35, performing coordinate conversion calculation according to the drag component to obtain the drag displacement, generating the positioning parameter, and transmitting it to the drawing thread.
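As an informal illustration only (not the patented Android implementation), the hand-off between the decoding thread (S11–S15) and the drawing thread (S21–S25) can be sketched in Python, with a two-slot queue standing in for the double buffer and its wake-up signalling; all names here are hypothetical:

```python
import queue
import threading

def run_pipeline(raw_frames):
    """Simulate the decode thread (S11-S15) feeding the drawing
    thread (S21-S25) through a double buffer."""
    # A 2-slot queue models the double buffer: the decoder blocks only
    # when both RGB intermediate-frame slots are occupied, and each
    # put() acts as the wake-up signal of steps S15/S21.
    buf = queue.Queue(maxsize=2)
    drawn = []

    def decode():
        for f in raw_frames:
            rgb = ("RGB", f)      # stand-in for decoding + format conversion
            buf.put(rgb)          # S14: store into a free slot; S15: wake

    def draw():
        for _ in raw_frames:
            frame = buf.get()     # S21: sleep until a frame is ready
            drawn.append(frame)   # S23/S24: extract and fill

    threads = [threading.Thread(target=decode), threading.Thread(target=draw)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return drawn
```

In this sketch, dragging would only update the extraction origin read inside the draw loop, while zooming would feed a scale parameter back into the decode loop, matching the split described in S34/S35.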
Further, the specific process of generating the scaling parameter in step S34 is as follows:

Before the zoom operation the finger touch points are $E_t$ and $F_t$, and after it they are $E_{t+1}$ and $F_{t+1}$. $S_t$ is the origin of the extracted data area; $(x_{S_t|E_t}, y_{S_t|E_t})$ are the coordinates of $E_t$ relative to $S_t$, $(x_{S_t|F_t}, y_{S_t|F_t})$ those of $F_t$, $(x_{S_t|E_{t+1}}, y_{S_t|E_{t+1}})$ those of $E_{t+1}$, and $(x_{S_t|F_{t+1}}, y_{S_t|F_{t+1}})$ those of $F_{t+1}$. $W_v$ and $H_v$ are the width and height of the extracted data area; $W_t$ and $H_t$ are the width and height of the RGB intermediate frame at time $t$, before the zoom operation. After the zoom operation $W_t$ and $H_t$ shrink in geometric proportion while $W_v$ and $H_v$ remain unchanged.

The height direction is selected, and the zoom ratio $Z_v$ of the extracted data area is calculated as

$$Z_v = \frac{\lVert E_t F_t \rVert}{\lVert E_{t+1} F_{t+1} \rVert},$$

where

$$\lVert E_t F_t \rVert = \sqrt{(x_{S_t|E_t} - x_{S_t|F_t})^2 + (y_{S_t|E_t} - y_{S_t|F_t})^2},$$

$$\lVert E_{t+1} F_{t+1} \rVert = \sqrt{(x_{S_t|E_{t+1}} - x_{S_t|F_{t+1}})^2 + (y_{S_t|E_{t+1}} - y_{S_t|F_{t+1}})^2},$$

and the scaling ratio of the RGB intermediate frame size is $Z_R = Z_v / (H_v / H_t)$.

After the zoom operation, $H_{t+1}$ and $W_{t+1}$ are:

$$H_{t+1} = H_t \times Z_R, \qquad W_{t+1} = H_{t+1} \times \left(\frac{W_t}{H_t}\right).$$
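A hedged numeric sketch of this zoom calculation in Python (the touch points and frame sizes below are invented for illustration; math.dist supplies the Euclidean distances between the finger positions):

```python
import math

def zoom_params(e_t, f_t, e_t1, f_t1, w_v, h_v, w_t, h_t):
    """Apply the zoom formulas: Z_v from the two-finger distances,
    Z_R from Z_v and the height ratio, then the new frame size."""
    # Z_v = ||E_t F_t|| / ||E_{t+1} F_{t+1}||
    z_v = math.dist(e_t, f_t) / math.dist(e_t1, f_t1)
    z_r = z_v / (h_v / h_t)        # scaling ratio of the RGB frame size
    h_t1 = h_t * z_r               # H_{t+1} = H_t * Z_R
    w_t1 = h_t1 * (w_t / h_t)      # W_{t+1} keeps the frame aspect ratio
    return z_v, z_r, w_t1, h_t1

# Fingers spread from 100 px apart to 200 px apart over a 960x540
# extraction area taken from a 1920x1080 intermediate frame:
z_v, z_r, w_t1, h_t1 = zoom_params((0, 0), (0, 100), (0, 0), (0, 200),
                                   960, 540, 1920, 1080)
```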
Further, the specific process of generating the positioning parameter in step S35 is as follows:

Let the finger touch point before the drag operation be $A_t$ and the touch point after it $A_{t+1}$; point $D$ is the origin of the RGB intermediate frame, and points $S_t$ and $S_{t+1}$ are the origins of the extracted data area before and after the drag operation. $(x_{S_t|A_t}, y_{S_t|A_t})$ denotes the coordinates of $A_t$ relative to $S_t$; $(x_{S_t|A_{t+1}}, y_{S_t|A_{t+1}})$ the coordinates of $A_{t+1}$ relative to $S_t$; $(x_{D|S_t}, y_{D|S_t})$ the coordinates of $S_t$ relative to $D$; and $(x_{D|S_{t+1}}, y_{D|S_{t+1}})$ the coordinates of $S_{t+1}$ relative to $D$. The latter coordinates are calculated as:

$$x_{D|S_{t+1}} = x_{D|S_t} + x_{S_t|A_{t+1}} - x_{S_t|A_t}, \qquad y_{D|S_{t+1}} = y_{D|S_t} + y_{S_t|A_{t+1}} - y_{S_t|A_t}.$$
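The drag update above is a pure translation of the extraction origin by the finger displacement; a minimal Python sketch (function and argument names are hypothetical):

```python
def drag_origin(origin_ds, a_t, a_t1):
    """Update (x_{D|St}, y_{D|St}) -> (x_{D|St+1}, y_{D|St+1}) from the
    touch point A_t -> A_{t+1} (both relative to S_t), per the formula."""
    x, y = origin_ds
    dx = a_t1[0] - a_t[0]   # x_{St|At+1} - x_{St|At}
    dy = a_t1[1] - a_t[1]   # y_{St|At+1} - y_{St|At}
    return (x + dx, y + dy)
```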
Further, when a positioning parameter value after the zoom or drag operation exceeds the range of the RGB intermediate frame, border-crossing detection is performed on the zoom and positioning parameters, specifically as follows. Let $W_V$ and $H_V$ be the width and height of the display area, $W_S$ and $H_S$ the width and height of the video source, and $(x_D, y_D)|_S$ the coordinates of the origin $S$ of the extracted data area relative to the origin $D$ of the RGB intermediate frame:

when $W_S < W_V$ and $W_S - W_V < H_S - H_V$, set $W_S = W_V$ and $H_S = W_S \times H_V / W_V$;

when $H_S < H_V$ and $H_S - H_V < W_S - W_V$, set $H_S = H_V$ and $W_S = H_S \times W_V / H_V$;

when $x_{D|S} < 0$ or $W_S < W_V$, set $x_{D|S} = 0$;

when $y_{D|S} < 0$ or $H_S < H_V$, set $y_{D|S} = 0$;

when $x_{D|S} > W_S - W_V$, set $x_{D|S} = W_S - W_V$;

when $y_{D|S} > H_S - H_V$, set $y_{D|S} = H_S - H_V$.
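The six correction cases can be collected into a single clamp routine. This Python sketch assumes the cases are applied in the order listed, with the first two rescaling an undersized source before the origin is clamped (an interpretation, not the patent's literal code):

```python
def correct_bounds(x, y, w_s, h_s, w_v, h_v):
    """Border-crossing correction: (x, y) = (x_D|S, y_D|S), (w_s, h_s)
    the video-source size, (w_v, h_v) the display-area size."""
    # Cases 1-2: enlarge an undersized source, preferring the worse axis.
    if w_s < w_v and (w_s - w_v) < (h_s - h_v):
        w_s = w_v
        h_s = w_s * h_v / w_v
    if h_s < h_v and (h_s - h_v) < (w_s - w_v):
        h_s = h_v
        w_s = h_s * w_v / h_v
    # Cases 3-6: clamp the extraction origin into the valid rectangle.
    if x < 0 or w_s < w_v:
        x = 0
    if y < 0 or h_s < h_v:
        y = 0
    if x > w_s - w_v:
        x = w_s - w_v
    if y > h_s - h_v:
        y = h_s - h_v
    return x, y, w_s, h_s
```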
Meanwhile, to solve the same technical problem, the invention also provides a device for dragging and zooming video on a mobile terminal in real time, which comprises a decoding module, a drawing module and an interaction module.
the decoding module includes: the decoding submodule, the decoding control submodule, the conversion submodule and the double buffer; the decoding submodule decodes the video stream file into video frame data; the decoding control sub-module is used for receiving the zooming parameters transmitted by the interactive module and controlling the conversion sub-module to carry out format conversion; the conversion submodule is used for converting the video frame data obtained by the decoding submodule into RGB intermediate frames of an ARGB8888 standard in an RGB format; the double buffers are used for storing the converted RGB intermediate frames;
the drawing module includes: an extraction submodule, a filling submodule and a drawing control submodule; the extraction submodule is used for extracting data of a part to be displayed from the double buffers; the filling submodule writes the extracted data to be displayed into an area to be displayed; the drawing control submodule is used for receiving the positioning parameters transmitted by the interaction module and controlling the extraction submodule to extract the position of the data;
the interaction module comprises: the system comprises a monitoring submodule, a calculation submodule, a request diverter, a zooming adapter and a dragging adapter; the monitoring submodule is used for monitoring user interaction requests in a display area in real time; the computing submodule is used for computing the interaction request of the user and transmitting the computing result to the request diverter; the request diverter is used for diverting the interactive request, and separating out whether the request is a zooming request or a dragging request or both the requests; the zooming adapter receives a zooming request separated by the request diverter and calculates zooming coordinates to obtain zooming parameters; the dragging adapter is used for receiving dragging requests separated by the request diverter and carrying out dragging coordinate calculation to obtain positioning parameters.
Furthermore, the interaction module further comprises a border crossing sub-module, and the border crossing sub-module is used for detecting whether the interaction request crosses the border and performing border crossing correction.
The beneficial effects of the invention are as follows: by dragging and zooming the video image in real time on the mobile terminal, the method and device solve the problems of video playing on mobile terminals in the prior art; the implementation process is simple and efficient. An out-of-range detection and correction method is also provided for detecting and correcting cases where the video source range exceeds the display area after dragging or zooming, further facilitating video playback operations on the mobile terminal.
Drawings
Fig. 1 is a flowchart illustrating a method for dragging and zooming a video mobile terminal in real time according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating scaling coordinates in a scaling operation according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a conversion of dragging coordinates in a dragging operation according to an embodiment of the present invention;
fig. 4 is a block diagram illustrating a real-time dragging and zooming apparatus of a video mobile terminal according to an embodiment of the present invention.
Detailed Description
The invention is further illustrated with reference to the figures and the specific examples.
The invention discloses a method for dragging and zooming video on a mobile terminal in real time, which comprises a decoding thread, a drawing thread and an interaction thread, running in parallel;
the decoding thread performs the following specific steps:
s11, after the video stream reaches the mobile terminal, decoding the video stream to obtain an original data frame;
s12, collecting the scaling parameters generated by the interaction thread and controlling the scaling ratio of the video;
s13, converting the video format, and finally converting the video format into an RGB intermediate frame of an ARGB8888 standard in an RGB format;
s14, storing the converted RGB intermediate frame into a double buffer;
s15, waking up the drawing thread after frame synchronization, and simultaneously entering the decoding of the next frame of video;
the drawing thread performs the following specific steps:
s21, starting a drawing thread after receiving a wake-up signal of the decoding thread;
s22, collecting the positioning parameters generated by the interactive thread, and controlling the extraction area of the video image;
s23, extracting data of a part to be displayed from the double buffers;
s24, writing the data to be displayed extracted in step S23 into the area to be displayed;
s25, entering a dormant state after the data filling is finished until a wake-up signal of the decoding thread is received again, and entering the drawing of the next frame;
the specific steps of the interaction thread are as follows:
s31, starting an interaction thread, and monitoring an interaction event in a display window, wherein the interaction event comprises: single-finger dragging and double-finger zooming;
s32, calculating and analyzing the interaction events in the display window, and judging whether the requirements of the user change; if yes, the step S33 is entered, otherwise, the step S31 is returned to continue monitoring the interaction events in the display window;
s33, calculating a zooming component and a dragging component;
s34, performing coordinate conversion calculation according to the zoom component to obtain the zoom ratio, generating the zoom parameters, and transmitting them to the decoding thread;
and S35, performing coordinate conversion calculation according to the drag component to obtain the drag displacement, generating the positioning parameter, and transmitting it to the drawing thread.
In order that those skilled in the art can understand and implement the technical solution of the present invention, the method of the present invention will be explained with reference to the flow chart shown in fig. 1:
step 1, opening a file to complete initialization: after the decoding thread is started, finding and opening a video stream file according to the position of the file, and completing corresponding parameter setting initialization;
and step 2, decoding: decoding the video stream file according to a specified decoder to obtain an original data frame;
step 3, collecting scaling parameters to control scaling ratio: collecting control scaling ratios transmitted by the interactive threads;
step 4, format conversion: converting the original data frame into an RGB format that the mobile terminal can process;
and step 5, storing the RGB intermediate frame into the double buffer: storing the format-converted RGB intermediate frame in whichever of the double buffers is idle;
and 6, awakening the drawing thread after frame synchronization: after storing the RGB intermediate frame and synchronizing the double buffers, go to step 16 to wake the drawing thread for drawing, and at the same time return to step 2 to decode the next frame;
step 7, monitoring window events: after the monitoring thread starts, events in the display window are monitored; the main window events include single tap, double tap, multi-point touch and the like, but only single-finger drag and two-finger zoom events are of concern;
step 8, calculating and analyzing the interaction behavior: calculating and analyzing the interaction behavior, and judging whether the user requirement is changed;
step 9, judging whether the user requirement has changed: according to the calculated interaction-behavior result, judge whether the tolerance is exceeded; if so, go to step 10 to further separate the user requirements; otherwise, return to step 7 and continue monitoring window events;
step 10, component separation: under the condition that the user requirement is changed, further calculating dragging and zooming components, separating the dragging and zooming components, separating corresponding vectors, and entering the steps 11 and 12;
step 11, generating a scaling parameter: separating to obtain a scaling component, carrying out coordinate conversion calculation to obtain a scaling ratio, generating a scaling parameter, transmitting the scaling parameter to a collection scaling parameter of a decoding thread to control the scaling ratio, and entering the step 3;
step 12, generating positioning parameters: separating to obtain dragging components, performing coordinate conversion calculation to obtain dragging displacement, generating dragging positioning parameters, transmitting the dragging positioning parameters to a collection positioning parameter control extraction area of the drawing thread, and entering step 13;
step 13, collecting positioning parameters to control an extraction area: collecting positioning parameters transmitted by the interactive threads, determining an extraction initial position, and performing data extraction in step 14;
step 14, extraction: extracting the data to be displayed from the double buffers; to improve efficiency, only the partial data to be displayed is extracted rather than the complete RGB intermediate frame, and after extraction is completed, go to step 15 to fill the data;
step 15, filling: and opening the display area, writing the extracted data into the display area in a matching way, finishing the drawing of one frame of image, and then entering step 16, wherein the drawing thread enters a dormant state.
Step 16, sleep/wake-up trigger: and after the data is filled, the drawing thread enters a dormant state, waits for the arrival of the next frame data to wake up the drawing thread, and enters the step 13 to draw the next frame.
First, in the decoding thread, the video must be decoded after the video stream reaches the mobile terminal. Common codecs include MPEG, H.261, H.263, WMV and the like; the embodiment of the present invention takes FFmpeg as an example. The data frames obtained by decoding are in YUV420P format and are finally converted into RGB intermediate frames of the ARGB8888 standard. Since FFmpeg cannot be used directly on the Android platform of the mobile terminal in the embodiment of the present invention, it must first be ported to Android; the specific porting process is as follows:
1. acquiring an FFmpeg source code and an NDK development kit;
2. configuring an NDK development environment;
3. adding an mk file according to the NDK project specification, and modifying the original build configuration files to conform to the NDK build specification;
4. compiling the FFmpeg by using an NDK-built command of the NDK development kit;
5. deploying the compiled .so shared library.
After the decoder has been ported, video decoding can be performed. The data frame obtained by decoding is in YUV420P format, which consists of three components: Y represents luminance (the gray value), while U and V represent chrominance, describing the color and saturation that specify the color of a pixel. Since the mobile terminal cannot directly process data in YUV420P format, it must be converted into RGB format.
Because the present invention decodes and draws in parallel, the bottleneck of the system should lie in decoding rather than drawing, and decoding time should be reduced as much as possible; the embodiment of the present invention therefore converts the YUV420P format into RGB intermediate frames of the ARGB8888 standard. Format conversion can be performed directly through the API provided by FFmpeg, and it can be completed together with the subsequent scaling operation: in the conversion stage, only the target frame format member variable in the SwsContext structure variable needs to be set to ARGB8888.
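For illustration, the standard BT.601 limited-range equations that such a YUV-to-RGB conversion implements can be written per pixel (FFmpeg's sws_scale performs this conversion, plus scaling, natively; this Python version is only a sketch):

```python
def yuv_to_argb8888(y, u, v):
    """Convert one YUV pixel (limited-range BT.601) to a packed
    ARGB8888 value with an opaque alpha channel."""
    c, d, e = y - 16, u - 128, v - 128
    clamp = lambda val: max(0, min(255, int(round(val))))
    r = clamp(1.164 * c + 1.596 * e)
    g = clamp(1.164 * c - 0.392 * d - 0.813 * e)
    b = clamp(1.164 * c + 2.017 * d)
    # Pack as A|R|G|B, one byte each, alpha fully opaque.
    return (0xFF << 24) | (r << 16) | (g << 8) | b
```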
Second, the drawing thread adopts a passively triggered message mechanism: after the decoding thread finishes decoding one frame of data, it notifies and wakes the drawing thread to draw one frame of image; after drawing is completed, the drawing thread enters a dormant state and waits to be woken next time.
When drawing an image, the data to be displayed is first extracted from the RGB intermediate frame. To simplify calculation, the extraction operation only needs the coordinates of the visible video area relative to the origin of the RGB intermediate frame. These coordinates are controlled by the interaction thread and stored in the drawing thread as a global variable: if a drag request occurs, the coordinate values are updated; otherwise they are reused as they are.
In practical application, the drawing thread needs to read data from the RGB intermediate frame while the decoding thread writes data into it; to maintain data consistency and normal program operation, the RGB intermediate frame would need synchronized, mutually exclusive access, which reduces system efficiency. The invention instead uses two RGB intermediate frames to simulate a double buffer: after decoding and conversion, the idle RGB intermediate frame is detected and used to store the newly obtained frame; a flag bit managed in the drawing thread represents the occupancy state; and a uniform outlet is provided in the decoding thread, hiding the double-buffer nature from the drawing thread and simplifying its operation.
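A minimal sketch of the flag-bit scheme (Python, with hypothetical names; synchronization of the flag itself is omitted, since avoiding heavyweight locking on the frame data is the point of the design):

```python
class DoubleRgbBuffer:
    """Two RGB intermediate frames with a flag bit recording which one
    the drawing thread currently reads; store() always writes the idle
    frame, and current() is the uniform outlet that hides the double
    buffering from the drawing thread."""
    def __init__(self):
        self.frames = [None, None]
        self.in_use = 0                  # flag bit: slot the renderer reads

    def store(self, frame):
        idle = 1 - self.in_use           # detect the idle intermediate frame
        self.frames[idle] = frame        # write the newly converted frame
        self.in_use = idle               # swap roles after frame synchronization

    def current(self):
        return self.frames[self.in_use]  # drawing thread's single entry point
```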
In the interaction thread, the mobile terminal performs gesture recognition mainly by monitoring gesture operations on the display component SurfaceView; for the embodiment of the invention, the events of interest are single-finger touch dragging and two-finger touch zooming.
During the zoom operation, because the video display should conform to human visual habits, zooming out after the maximum zoom-out has been reached blackens the image edges, spoils the visual effect, and is meaningless, so such requests can be rejected. The maximum zoom-out is defined as displaying the entire intermediate frame. The embodiment of the invention zooms in linear equal proportion: because the display screen of a mobile terminal is small, the exponential-growth zooming of the prior art is too fast for a user to control finely. With linear equal-proportion zooming, a second zoom can be performed when one zoom does not reach the desired degree; although the number of zoom operations increases, zooming becomes finer and better meets the user's needs.
Real-time dragging makes the drawing thread respond immediately: after a drag operation, the position from which data is extracted from the RGB intermediate frame is changed and the data is then filled into the display area. Real-time zooming, however, does not wake the drawing thread immediately; instead, the zoom parameter is sent to the decoding thread to control the decoding of the next frame, because the currently available RGB intermediate frame has already been generated and cannot be changed, so the video size cannot change immediately upon a zoom request. Moreover, the origin coordinates of the extracted data area correspond to the new scale ratio; if drawing happened immediately, the drawing thread would draw the old frame with the origin coordinates of the new extraction area, causing video jitter. The embodiment of the invention therefore uses the decoding thread's wake-up trigger as the drawing thread's synchronization command, keeping the drawing of the new visible area synchronized with the newly decoded frame and avoiding video jitter.
Fig. 2 is a schematic diagram illustrating the scaling coordinates in a scaling operation according to an embodiment of the present invention. The specific process of generating the scaling parameters is as follows:

The finger touch points before zooming are $E_t$ and $F_t$; after the zoom operation they are $E_{t+1}$ and $F_{t+1}$. $S_t$ is the origin of the extracted data area; $(x_{S_t|E_t}, y_{S_t|E_t})$ are the coordinates of $E_t$ relative to $S_t$, $(x_{S_t|F_t}, y_{S_t|F_t})$ those of $F_t$, $(x_{S_t|E_{t+1}}, y_{S_t|E_{t+1}})$ those of $E_{t+1}$, and $(x_{S_t|F_{t+1}}, y_{S_t|F_{t+1}})$ those of $F_{t+1}$. $W_v$ and $H_v$ are the width and height of the extracted data area; $W_t$ and $H_t$ are the width and height of the RGB intermediate frame at time $t$, before the zoom operation. After the zoom operation $W_t$ and $H_t$ shrink in geometric proportion while $W_v$ and $H_v$ remain unchanged.

Since in general $\frac{W_v}{W_t} \neq \frac{H_v}{H_t}$, the horizontal and vertical directions cannot be zoomed by the same ratio at the same time; considering that users' visual habits pay more attention to the longitudinal zooming effect, the height direction is selected and the zoom ratio $Z_v$ of the extracted data area is calculated as

$$Z_v = \frac{\lVert E_t F_t \rVert}{\lVert E_{t+1} F_{t+1} \rVert},$$

where

$$\lVert E_t F_t \rVert = \sqrt{(x_{S_t|E_t} - x_{S_t|F_t})^2 + (y_{S_t|E_t} - y_{S_t|F_t})^2},$$

$$\lVert E_{t+1} F_{t+1} \rVert = \sqrt{(x_{S_t|E_{t+1}} - x_{S_t|F_{t+1}})^2 + (y_{S_t|E_{t+1}} - y_{S_t|F_{t+1}})^2},$$

and the scaling ratio of the RGB intermediate frame size is $Z_R = Z_v / (H_v / H_t)$.

After the zoom operation, $H_{t+1}$ and $W_{t+1}$ are:

$$H_{t+1} = H_t \times Z_R, \qquad W_{t+1} = H_{t+1} \times \left(\frac{W_t}{H_t}\right).$$

The positioning parameters are not taken into account in the scaling operation above; their calculation, however, is the same as the positioning-parameter calculation described below and is not repeated in the embodiments of the present invention.
Fig. 3 is a schematic diagram illustrating conversion of a dragging coordinate in a dragging operation according to an embodiment of the present invention, where a specific process of generating a positioning parameter includes:
the finger touch point before the drag operation is set to A_t, and the finger touch point after the drag operation is A_{t+1}; point D is the starting point of the RGB intermediate frame, and points S_t and S_{t+1} are respectively the starting points of the extracted data area before and after the drag operation. Denote by (x_{S_t|A_t}, y_{S_t|A_t}) the coordinates of A_t relative to S_t, by (x_{S_t|A_{t+1}}, y_{S_t|A_{t+1}}) the coordinates of A_{t+1} relative to S_t, by (x_{D|S_t}, y_{D|S_t}) the coordinates of S_t relative to point D, and by (x_{D|S_{t+1}}, y_{D|S_{t+1}}) the coordinates of S_{t+1} relative to point D. The coordinates are calculated by:

x_{D|S_{t+1}} = x_{D|S_t} + x_{S_t|A_{t+1}} − x_{S_t|A_t}
y_{D|S_{t+1}} = y_{D|S_t} + y_{S_t|A_{t+1}} − y_{S_t|A_t}
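As a sketch (again with our own naming, not the patent's), the positioning-parameter update simply adds the finger displacement to the previous extraction origin:

```python
def drag_origin(origin_t, a_t, a_t1):
    """Update the extraction origin S relative to the frame origin D
    after a single-finger drag, per the formula above:
    x_{D|S_{t+1}} = x_{D|S_t} + x_{S_t|A_{t+1}} - x_{S_t|A_t} (same for y).
    origin_t is (x_{D|S_t}, y_{D|S_t}); a_t and a_t1 are the touch
    points A_t and A_{t+1}, both given relative to S_t."""
    return (origin_t[0] + a_t1[0] - a_t[0],
            origin_t[1] + a_t1[1] - a_t[1])
```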
However, whether a single-finger drag or a two-finger zoom is performed, after the positioning coordinates and the zoom ratio have been calculated and updated to the drawing and decoding threads, out-of-bounds detection and correction are still needed to ensure that the values of the transformation parameters fall within a reasonable range.
Correspondingly, after the scaling operation the size of the target video must: 1. not be smaller than the display area, which would leave black borders around the displayed video and does not match the user's visual habits; 2. not overflow through unbounded enlargement: the target video size has a maximum value, specified at initialization, that must not be exceeded when zooming in; 3. handle reasonably the case where the aspect ratio of the target display area does not match that of the video source.
After the drag operation, the positioning parameters must not take values outside the range of the RGB intermediate frame, otherwise errors occur when reading data; values within the RGB intermediate frame range are reasonable, which ensures that when the video is dragged, the top, bottom, left and right boundaries of the target display area never move past the corresponding boundaries of the video.
In summary, when performing a single-finger drag or a two-finger zoom, the points needing attention are: 1. the minimum form of the video must be displayed completely in the target area, with the irregular area filled with black; 2. the minimum form must not leave a black edge in both the x and y directions, which would not match the user's visual habits; 3. for convenient processing, when a black edge appears, the upper-left corner of the video is preferably aligned with the target display area; 4. when the video size is entirely larger than the target display area, no black border may be shown. To address these problems, the solution of the embodiment of the present invention is to perform out-of-bounds detection on the zoom parameter and the positioning parameter whenever their values after a zoom or drag operation exceed the RGB intermediate frame range. The specific process is as follows. Let W_V and H_V respectively denote the width and height of the display area, W_S and H_S the width and height of the video source, and (x_{D|S}, y_{D|S}) the origin coordinates of the extracted data area:
when W_S < W_V and W_S − W_V < H_S − H_V, set W_S = W_V and H_S = W_S × H_V / W_V;
when H_S < H_V and H_S − H_V < W_S − W_V, set H_S = H_V and W_S = H_S × W_V / H_V;
when x_{D|S} < 0 or W_S < W_V, set x_{D|S} = 0;
when y_{D|S} < 0 or H_S < H_V, set y_{D|S} = 0;
when x_{D|S} > W_S − W_V, set x_{D|S} = W_S − W_V;
when y_{D|S} > H_S − H_V, set y_{D|S} = H_S − H_V.
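The six correction rules can be collected into a single routine. The sketch below (our own naming; the patent gives no code) applies them in order and returns the corrected parameters:

```python
def correct_bounds(x_ds, y_ds, w_s, h_s, w_v, h_v):
    """Out-of-bounds correction of the zoom and positioning parameters,
    following the six rules above; returns the corrected
    (x_{D|S}, y_{D|S}, W_S, H_S)."""
    # Rules 1-2: the video must not shrink below the display area.
    if w_s < w_v and (w_s - w_v) < (h_s - h_v):
        w_s = w_v
        h_s = w_s * h_v / w_v
    if h_s < h_v and (h_s - h_v) < (w_s - w_v):
        h_s = h_v
        w_s = h_s * w_v / h_v
    # Rules 3-4: the origin may not be negative; when a black edge
    # would appear, the video is pinned to the top-left corner.
    if x_ds < 0 or w_s < w_v:
        x_ds = 0
    if y_ds < 0 or h_s < h_v:
        y_ds = 0
    # Rules 5-6: the extraction area may not run past the right or
    # bottom edge of the video.
    if x_ds > w_s - w_v:
        x_ds = w_s - w_v
    if y_ds > h_s - h_v:
        y_ds = h_s - h_v
    return x_ds, y_ds, w_s, h_s
```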
Corresponding to the above method, the present invention also provides a device for real-time dragging and zooming at a video mobile terminal, which specifically comprises: a decoding module, a drawing module and an interaction module,
the decoding module includes: the decoding submodule, the decoding control submodule, the conversion submodule and the double buffer; the decoding submodule decodes the video stream file into video frame data; the decoding control sub-module is used for receiving the zooming parameters transmitted by the interactive module and controlling the conversion sub-module to carry out format conversion; the conversion submodule is used for converting the video frame data obtained by the decoding submodule into RGB intermediate frames of an ARGB8888 standard in an RGB format; the double buffers are used for storing the converted RGB intermediate frames;
the drawing module includes: an extraction submodule, a filling submodule and a drawing control submodule; the extraction submodule is used for extracting data of a part to be displayed from the double buffers; the filling submodule writes the extracted data to be displayed into an area to be displayed; the drawing control submodule is used for receiving the positioning parameters transmitted by the interaction module and controlling the extraction submodule to extract the position of the data;
the interaction module comprises: a monitoring submodule, a calculation submodule, a request diverter, a zooming adapter and a dragging adapter; the monitoring submodule is used for monitoring user interaction requests in the display area in real time; the calculation submodule is used for calculating the user's interaction request and transmitting the calculation result to the request diverter; the request diverter is used for splitting the interaction request, separating out whether it is a zooming request or a dragging request or both; the zooming adapter receives the zooming request separated by the request diverter and performs zooming-coordinate calculation to obtain the zooming parameters; the dragging adapter is used for receiving the dragging request separated by the request diverter and performing dragging-coordinate calculation to obtain the positioning parameters.
The interaction module further comprises a border crossing processing submodule, and the border crossing processing submodule is used for detecting whether the interaction request crosses the border and performing border crossing correction.
The following will describe each sub-module of the apparatus in detail with reference to the block diagram of the apparatus for real-time dragging and zooming of the video mobile terminal shown in fig. 4:
Decoding submodule: decodes the video stream file into video frame data, i.e. the original frames. There are many video decoding standards, among which MPEG, H.261, H.263 and WMV are common; the present invention takes FFmpeg as an example, and the data frames obtained by decoding are in YUV420P format.
Decoding control submodule: this module is the control submodule under the decoding module; it receives the zoom-ratio control parameters transmitted by the interaction module and controls the conversion submodule to carry out format conversion.
Conversion submodule: converts the original frame in YUV420P format obtained by the decoding submodule into an RGB intermediate frame of the ARGB8888 standard. Depending on requirements, YUV420P can be converted to the RGB565 or RGB8888 standard; under FFmpeg, conversion to the RGB8888 standard is faster than to RGB565. Although the ARGB8888 RGB format increases the burden of drawing a picture, drawing one frame is much faster than decoding one frame, and since the invention processes decoding and drawing in parallel, the bottleneck of the system is decoding rather than drawing. The invention therefore needs to shorten the decoding time, and accordingly adopts the scheme of converting YUV420P format into the RGB format of the ARGB8888 standard.
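The patent delegates this conversion to FFmpeg (typically its swscale component). Purely as an illustration of what the per-pixel conversion computes, and not of FFmpeg's internals, a common BT.601 full-range approximation is:

```python
def yuv_to_argb8888(y, u, v):
    """Convert one YUV pixel (components 0-255) to a packed ARGB8888
    integer using the common BT.601 full-range approximation; the exact
    coefficients used by FFmpeg may differ."""
    d, e = u - 128, v - 128
    clamp = lambda x: max(0, min(255, int(round(x))))
    r = clamp(y + 1.402 * e)
    g = clamp(y - 0.344136 * d - 0.714136 * e)
    b = clamp(y + 1.772 * d)
    return (0xFF << 24) | (r << 16) | (g << 8) | b  # alpha fixed at 0xFF
```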
Double buffer: detects which of the two buffers is free, and stores the converted RGB intermediate frame in it.
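A minimal sketch of the double-buffer handover between the decoding and drawing threads, using Python's threading primitives as a stand-in for the platform's (all names here are illustrative, not from the patent):

```python
import threading

class DoubleBuffer:
    """Ping-pong buffer: the decoder writes the free slot, then wakes
    the drawing thread, which reads the other slot."""
    def __init__(self):
        self.slots = [None, None]
        self.write_idx = 0                 # index of the currently free slot
        self.ready = threading.Condition()
        self.frame_no = 0

    def put(self, frame):                  # called by the decoding thread
        with self.ready:
            self.slots[self.write_idx] = frame
            self.write_idx ^= 1            # swap free/ready slots
            self.frame_no += 1
            self.ready.notify()            # wake-up signal to the drawing thread

    def get(self, last_seen):              # called by the drawing thread
        with self.ready:
            while self.frame_no == last_seen:
                self.ready.wait()          # dormant until a new frame arrives
            return self.slots[self.write_idx ^ 1], self.frame_no
```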
Extraction submodule: extracts the data to be displayed from the double buffer in the decoding module; data is extracted only for the displayed portion, not for the whole RGB intermediate frame. The extracted data serves as the source data for the filling submodule.
Filling submodule: opens the target display area and writes into it, in a matched manner, the data obtained by the extraction submodule.
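The extraction step amounts to a sub-rectangle copy. The sketch below (illustrative naming; a frame is modelled as a list of pixel rows) copies only the W_v × H_v display portion, never the whole intermediate frame:

```python
def extract_display_area(frame, x_ds, y_ds, w_v, h_v):
    """Extract the W_v x H_v portion of an RGB intermediate frame
    starting at the positioning coordinates (x_{D|S}, y_{D|S});
    the rest of the frame is never copied."""
    return [row[x_ds:x_ds + w_v] for row in frame[y_ds:y_ds + h_v]]
```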
Drawing control submodule: this module is the control submodule under the drawing module; it receives the positioning parameters, generated by dragging, transmitted by the interaction module, and controls the position at which the extraction submodule extracts data.
Monitoring submodule: monitors user interaction requests in the display area in real time; a user interaction request mainly manifests as a touch on the display area.
Calculation submodule: calculates the triggered request; if a request exceeding the tolerance is detected, the request is handed to the request diverter for processing.
Request diverter: splits the interaction request, separating out whether it is a zooming request or a dragging request or both.
Zooming adapter: receives the zooming request separated by the request diverter and performs zooming-coordinate calculation to obtain the zooming parameters.
Dragging adapter: receives the dragging request separated by the request diverter and performs dragging-coordinate calculation to obtain the positioning parameters.
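As a sketch of how the request diverter might separate gestures (the tolerance value and all names are our own; the patent does not specify them):

```python
import math

def split_request(before, after, tol=4.0):
    """Classify a gesture from touch points before/after a move:
    one finger moving past the tolerance is a drag; two fingers whose
    spacing changes is a zoom; if their midpoint also moves, both."""
    if len(before) == 1:
        move = math.dist(before[0], after[0])
        return 'drag' if move > tol else None
    zoom = abs(math.dist(after[0], after[1]) - math.dist(before[0], before[1])) > tol
    mid_b = ((before[0][0] + before[1][0]) / 2, (before[0][1] + before[1][1]) / 2)
    mid_a = ((after[0][0] + after[1][0]) / 2, (after[0][1] + after[1][1]) / 2)
    drag = math.dist(mid_b, mid_a) > tol
    if zoom and drag:
        return 'both'
    return 'zoom' if zoom else ('drag' if drag else None)
```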
Border-crossing processing submodule: detects whether the interaction request crosses the border and performs border-crossing correction.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding the principles of the invention, and that the invention is not limited to the embodiments and examples specifically recited. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from its spirit, and these changes and combinations remain within the scope of the invention.

Claims (6)

1. A method for dragging and zooming a video mobile terminal in real time, characterized by specifically comprising: a decoding thread, a drawing thread and an interaction thread, wherein the decoding thread, the drawing thread and the interaction thread run in parallel;
the specific steps of the decoding thread are as follows:
S11, after the video stream reaches the mobile terminal, decoding the video stream to obtain original data frames;
S12, collecting the zooming parameters generated by the interaction thread and controlling the zoom ratio of the video;
S13, converting the video format, finally obtaining an RGB intermediate frame of the ARGB8888 standard in RGB format;
S14, storing the converted RGB intermediate frame into the double buffer;
S15, waking up the drawing thread after frame synchronization, and simultaneously entering the decoding of the next video frame;
the specific steps of the drawing thread are as follows:
S21, starting the drawing thread after receiving the wake-up signal of the decoding thread;
S22, collecting the positioning parameters generated by the interaction thread and controlling the extraction area of the video image;
S23, extracting the data of the portion to be displayed from the double buffer;
S24, writing the data to be displayed extracted in step S23 into the area to be displayed;
S25, entering a dormant state after the data filling is finished, until the wake-up signal of the decoding thread is received again, and then entering the drawing of the next frame;
the specific steps of the interaction thread are as follows:
S31, starting the interaction thread and monitoring interaction events in the display window, wherein the interaction events comprise: single-finger dragging and double-finger zooming;
S32, calculating and analyzing the interaction events in the display window, and judging whether the user's requirement has changed; if yes, proceeding to step S33, otherwise returning to step S31 to continue monitoring the interaction events in the display window;
S33, calculating the zooming component and the dragging component;
S34, performing coordinate conversion calculation according to the zooming component to obtain the zoom ratio, generating zooming parameters, and transmitting the zooming parameters to the decoding thread;
S35, performing coordinate conversion calculation according to the dragging component to obtain the dragging displacement, generating positioning parameters, and transmitting the positioning parameters to the drawing thread.
2. The method for dragging and zooming a video mobile terminal in real time according to claim 1, characterized in that the specific process of generating the zooming parameters in step S34 is as follows:
the finger touch points before the zoom operation are respectively E_t and F_t, and after the zoom operation they are respectively E_{t+1} and F_{t+1}; S_t is the starting point of the extracted data area; (x_{S_t|E_t}, y_{S_t|E_t}) is the coordinate of E_t relative to S_t, (x_{S_t|F_t}, y_{S_t|F_t}) is the coordinate of F_t relative to S_t, (x_{S_t|E_{t+1}}, y_{S_t|E_{t+1}}) is the coordinate of E_{t+1} relative to S_t, and (x_{S_t|F_{t+1}}, y_{S_t|F_{t+1}}) is the coordinate of F_{t+1} relative to S_t; W_v and H_v are respectively the width and height of the extracted data area, and W_t and H_t are respectively the width and height of the RGB intermediate frame at time t before the scaling operation; after the scaling operation W_t and H_t shrink in geometric proportion while W_v and H_v remain unchanged;
the zoom ratio Z_v in the extracted data area is calculated as

Z_v = ||E_t F_t|| / ||E_{t+1} F_{t+1}||,

where

||E_t F_t|| = sqrt( (x_{S_t|E_t} − x_{S_t|F_t})² + (y_{S_t|E_t} − y_{S_t|F_t})² ),
||E_{t+1} F_{t+1}|| = sqrt( (x_{S_t|E_{t+1}} − x_{S_t|F_{t+1}})² + (y_{S_t|E_{t+1}} − y_{S_t|F_{t+1}})² ),

and the scaling ratio of the RGB intermediate frame size is Z_R = Z_v / (H_v / H_t);
after the zoom operation, W_{t+1} and H_{t+1} are respectively: H_{t+1} = H_t × Z_R, W_{t+1} = H_{t+1} × (W_t / H_t).
3. The method for dragging and zooming a video mobile terminal in real time according to claim 1, characterized in that the specific process of generating the positioning parameters in step S35 is as follows:
the finger touch point before the drag operation is set to A_t, and the finger touch point after the drag operation is A_{t+1}; point D is the starting point of the RGB intermediate frame, and points S_t and S_{t+1} are respectively the starting points of the extracted data area before and after the drag operation; (x_{S_t|A_t}, y_{S_t|A_t}) is the coordinate of A_t relative to S_t, (x_{S_t|A_{t+1}}, y_{S_t|A_{t+1}}) is the coordinate of A_{t+1} relative to S_t, (x_{D|S_t}, y_{D|S_t}) is the coordinate of S_t relative to point D, and (x_{D|S_{t+1}}, y_{D|S_{t+1}}) is the coordinate of S_{t+1} relative to point D; the coordinates are calculated by:

x_{D|S_{t+1}} = x_{D|S_t} + x_{S_t|A_{t+1}} − x_{S_t|A_t}
y_{D|S_{t+1}} = y_{D|S_t} + y_{S_t|A_{t+1}} − y_{S_t|A_t}.
4. The method for dragging and zooming a video mobile terminal in real time according to any one of claims 1 to 3, characterized in that when the value of the positioning parameter after the zoom operation or the drag operation exceeds the RGB intermediate frame range, out-of-bounds detection is performed on the zooming parameter and the positioning parameter, specifically as follows: let W_V and H_V respectively denote the width and height of the display area, W_S and H_S the width and height of the video source, and (x_{D|S}, y_{D|S}) the coordinates of the origin S of the extracted data area relative to the origin D of the RGB intermediate frame;
when W_S < W_V and W_S − W_V < H_S − H_V, set W_S = W_V and H_S = W_S × H_V / W_V;
when H_S < H_V and H_S − H_V < W_S − W_V, set H_S = H_V and W_S = H_S × W_V / H_V;
when x_{D|S} < 0 or W_S < W_V, set x_{D|S} = 0;
when y_{D|S} < 0 or H_S < H_V, set y_{D|S} = 0;
when x_{D|S} > W_S − W_V, set x_{D|S} = W_S − W_V;
when y_{D|S} > H_S − H_V, set y_{D|S} = H_S − H_V.
5. A device for dragging and zooming a video mobile terminal in real time is characterized by specifically comprising: a decoding module, a drawing module and an interaction module,
the decoding module includes: the decoding submodule, the decoding control submodule, the conversion submodule and the double buffer; the decoding submodule decodes the video stream file into video frame data; the decoding control sub-module is used for receiving the zooming parameters transmitted by the interactive module and controlling the conversion sub-module to carry out format conversion; the conversion submodule is used for converting the video frame data obtained by the decoding submodule into RGB intermediate frames of an ARGB8888 standard in an RGB format; the double buffers are used for storing the converted RGB intermediate frames;
the drawing module includes: an extraction submodule, a filling submodule and a drawing control submodule; the extraction submodule is used for extracting data of a part to be displayed from the double buffers; the filling submodule writes the extracted data to be displayed into an area to be displayed; the drawing control submodule is used for receiving the positioning parameters transmitted by the interaction module and controlling the extraction submodule to extract the position of the data;
the interaction module comprises: the system comprises a monitoring submodule, a calculation submodule, a request diverter, a zooming adapter and a dragging adapter; the monitoring submodule is used for monitoring user interaction requests in a display area in real time; the computing submodule is used for computing the interaction request of the user and transmitting the computing result to the request diverter; the request diverter is used for diverting the interactive request, and separating out whether the request is a zooming request or a dragging request or both the requests; the zooming adapter receives a zooming request separated by the request diverter and calculates zooming coordinates to obtain zooming parameters; the dragging adapter is used for receiving dragging requests separated by the request diverter and carrying out dragging coordinate calculation to obtain positioning parameters.
6. The apparatus for real-time dragging and zooming of a video mobile terminal according to claim 5, wherein the interactive module further comprises an out-of-range processing sub-module, and the out-of-range processing sub-module is configured to detect whether the interactive request is out-of-range and perform out-of-range correction.
CN201410119164.6A 2014-03-27 2014-03-27 A kind of video mobile terminal Real Time Dragging and the method and device for scaling Expired - Fee Related CN103888840B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410119164.6A CN103888840B (en) 2014-03-27 2014-03-27 A kind of video mobile terminal Real Time Dragging and the method and device for scaling

Publications (2)

Publication Number Publication Date
CN103888840A true CN103888840A (en) 2014-06-25
CN103888840B CN103888840B (en) 2017-03-29

Family

ID=50957528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410119164.6A Expired - Fee Related CN103888840B (en) 2014-03-27 2014-03-27 A kind of video mobile terminal Real Time Dragging and the method and device for scaling

Country Status (1)

Country Link
CN (1) CN103888840B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1919210A1 (en) * 2005-08-25 2008-05-07 Sony Corporation Reproduction device, reproduction method, program, program storage medium, data structure, and recording medium fabrication method
CN101616281A (en) * 2009-06-26 2009-12-30 中兴通讯股份有限公司南京分公司 A kind of with local method and the portable terminal that amplifies of mobile TV playing picture
CN102520860A (en) * 2011-12-09 2012-06-27 中兴通讯股份有限公司 Method for controlling desktop display and mobile terminal
CN103677543A (en) * 2012-09-03 2014-03-26 中兴通讯股份有限公司 Method for adjusting screen display area of mobile terminal and mobile terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHOU Yimin et al.: "H.264/AVC Rate Control with Video Image Complexity Estimation", Computer Engineering and Applications (《计算机工程与应用》) *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104104897A (en) * 2014-06-27 2014-10-15 北京奇艺世纪科技有限公司 Mobile terminal video editing method and device
CN105491386B (en) * 2014-09-19 2019-05-28 杭州海康威视数字技术股份有限公司 A kind of format conversion method and device of video data
CN105491386A (en) * 2014-09-19 2016-04-13 杭州海康威视数字技术股份有限公司 Format conversion method and device of video data
CN105635832A (en) * 2014-11-10 2016-06-01 阿里巴巴集团控股有限公司 Video processing method and device
CN105635832B (en) * 2014-11-10 2019-05-28 阿里巴巴集团控股有限公司 A kind of method for processing video frequency and device
CN104822088A (en) * 2015-04-16 2015-08-05 腾讯科技(北京)有限公司 Video image zooming method and device
US10397649B2 (en) 2015-04-16 2019-08-27 Tencent Technology (Shenzhen) Company Limited Method of zooming video images and mobile display terminal
CN104822088B (en) * 2015-04-16 2019-03-19 腾讯科技(北京)有限公司 Video image zooming method and apparatus
CN106201318A (en) * 2016-07-13 2016-12-07 乐视控股(北京)有限公司 Adjust the method for video playback window, device and display terminal
CN108509133A (en) * 2017-02-27 2018-09-07 腾讯科技(深圳)有限公司 Searching component methods of exhibiting and device
CN108509133B (en) * 2017-02-27 2023-05-02 腾讯科技(深圳)有限公司 Search component display method and device
CN108804187A (en) * 2018-05-30 2018-11-13 北京小米移动软件有限公司 The setting method and device of interfaces windows
CN108804187B (en) * 2018-05-30 2023-11-24 北京小米移动软件有限公司 Interface window setting method and device
CN113766255A (en) * 2021-01-19 2021-12-07 北京沃东天骏信息技术有限公司 Video stream merging method and device, electronic equipment and computer medium
CN113766255B (en) * 2021-01-19 2024-07-16 北京沃东天骏信息技术有限公司 Video stream merging method, device, electronic equipment and computer medium
CN116095250A (en) * 2022-05-30 2023-05-09 荣耀终端有限公司 Method and device for video cropping
CN116095250B (en) * 2022-05-30 2023-10-31 荣耀终端有限公司 Method and device for video cropping

Also Published As

Publication number Publication date
CN103888840B (en) 2017-03-29

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170329

Termination date: 20180327