CN103888840A - Method and device for dragging and zooming video mobile terminal in real time - Google Patents
Method and device for dragging and zooming video mobile terminal in real time
- Publication number
- CN103888840A (application number CN201410119164.6A)
- Authority
- CN
- China
- Prior art keywords
- module
- zooming
- decoding
- video
- thread
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Controls And Circuits For Display Device (AREA)
Abstract
The invention discloses a method and device for real-time dragging and zooming of video on a mobile terminal. The device mainly comprises a decoding module, a drawing module and an interaction module. The decoding module comprises a decoding sub-module, a decoding control sub-module, a conversion sub-module and a double buffer; the drawing module comprises an extraction sub-module, a filling sub-module and a drawing control sub-module; the interaction module comprises a monitoring sub-module, a calculation sub-module, a request splitter, a zoom adapter and a drag adapter. The method and device of the present invention solve the problems of video playback on mobile terminals in the prior art; the implementation is relatively simple and efficient, and an out-of-bounds detection and correction method is additionally provided for the case where, after dragging or zooming, the extracted region of the video source exceeds the display area, making video playback operations on mobile terminals more effective and convenient.
Description
Technical Field
The invention belongs to the technical field of video processing, and in particular relates to the design of a method and device for real-time dragging and zooming of video on a mobile terminal.
Background Art
The development of information technology is advancing rapidly along with the widespread adoption of the Internet, and the era of the mobile Internet has arrived. 3G networks are already widely used in daily life, more than 20 countries are currently deploying 4G networks, and more and more terminal devices support 4G. Video-related services are also shifting from traditional TV and PC terminals to mobile terminals; because of their portability, mobile terminals enjoy high user stickiness and dependence, so the future of the Internet lies in the mobile Internet. Users have become accustomed to accessing Internet services anytime and anywhere with portable devices, and the continuous introduction of high-compression-ratio coding standards such as MPEG-4, H.264/AVC and HEVC has made low-bit-rate, high-quality video playback no longer difficult, which will accelerate the arrival of the era of the "dynamic exhibition". Video on the traditional Internet is characterized by high resolution and large, high-resolution displays, whereas mobile terminals, while portable, are generally small and have limited screen sizes, so users increasingly expect to obtain both the complete picture and the detailed information of a video. Mobile terminal devices have characteristics that desktop devices do not have, so techniques for dragging and zooming video on mobile terminals are particularly important and indispensable.
Because the display screen of a mobile terminal is small, the amount of data it can display is limited; when existing Internet video resources are transferred directly to a mobile terminal, the displayed information is incomplete. For real-time video playback, the same video resource is played on many kinds of devices at the same time, and the characteristics of the playback ends differ greatly, especially among mobile terminals, whose display sizes and resolutions vary widely, while users also have a strong demand for real-time performance. Research on real-time video processing on mobile terminals therefore meets urgent user needs and has great social application value. The most common operations on a mobile terminal are dragging to select the display region and zooming to show detailed information.
Summary of the Invention
The purpose of the present invention is to provide a method and device for real-time dragging and zooming of video on a mobile terminal, in order to solve the problems of playing video on mobile terminals in the prior art.
The technical solution of the present invention is a method for real-time dragging and zooming of video on a mobile terminal, comprising a decoding thread, a drawing thread and an interaction thread, which run in parallel.
The specific steps of the decoding thread are:
S11. After the video stream arrives at the mobile terminal, decode the video stream to obtain original data frames;
S12. Collect the zoom parameters generated by the interaction thread and control the zoom ratio of the video;
S13. Convert the video format, finally obtaining an RGB intermediate frame in the ARGB8888 RGB format;
S14. Store the converted RGB intermediate frame in the double buffer;
S15. After frame synchronization, wake up the drawing thread and at the same time start decoding the next video frame.
The specific steps of the drawing thread are:
S21. After receiving the wake-up signal from the decoding thread, start the drawing thread;
S22. Collect the positioning parameters generated by the interaction thread and control the extraction region of the video image;
S23. Extract the data of the portion to be displayed from the double buffer;
S24. Write the data to be displayed extracted in step S23 into the display area;
S25. After the data filling is completed, enter the sleep state until the wake-up signal from the decoding thread is received again, then start drawing the next frame.
The specific steps of the interaction thread are:
S31. Start the interaction thread and monitor interaction events within the display window, the interaction events including one-finger dragging and two-finger zooming;
S32. Calculate and analyze the interaction events occurring in the display window and judge whether the user's requirement has changed; if so, go to step S33, otherwise return to step S31 and continue to monitor interaction events in the display window;
S33. Calculate the zoom component and the drag component;
S34. Perform coordinate conversion based on the zoom component, obtain the zoom ratio, generate the zoom parameters, and pass them to the decoding thread;
S35. Perform coordinate conversion based on the drag component, obtain the drag displacement, generate the positioning parameters, and pass them to the drawing thread.
Further, the specific process of generating the zoom parameters in step S34 is as follows:
Let the finger touch points before the zoom operation be E_t and F_t, and the touch points after the zoom operation be E_{t+1} and F_{t+1}. Let S_t be the starting point of the extraction data area, with the coordinates of E_t, F_t, E_{t+1} and F_{t+1} all taken relative to S_t. Let W_v and H_v be the width and height of the extraction data area, and W_t and H_t the width and height of the RGB intermediate frame at time t before the zoom operation; after the zoom operation, W_t and H_t become smaller in equal proportion while W_v and H_v remain unchanged.
The ratio between the extraction data area and the RGB intermediate frame size is selected as the reference, and the zoom ratio Z_v within the extraction data area is calculated.
After the zoom operation, W_{t+1} and H_{t+1} are obtained by applying the zoom ratio Z_v to W_t and H_t, respectively.
Further, the specific process of generating the positioning parameters in step S35 is as follows:
Let the finger touch point before the drag operation be A_t and the touch point after the drag operation be A_{t+1}. Let D be the starting point of the RGB intermediate frame, and let S_t and S_{t+1} be the starting points for data extraction before and after the drag operation, respectively. The coordinates of A_t and A_{t+1} are taken relative to S_t, and the coordinates of S_t and S_{t+1} are taken relative to D; the coordinates of S_{t+1} relative to D are then calculated from these quantities.
Further, when the positioning parameters after a zoom or drag operation take values beyond the range of the RGB intermediate frame, out-of-bounds detection must be performed on the zoom and positioning parameters. The specific process is as follows: let W_V and H_V denote the width and height of the display area, W_S and H_S the width and height of the video source, and (x_D, y_D)|_S the coordinates of the origin S of the extracted data area relative to the origin D of the RGB intermediate frame;
When W_S < W_V and W_S - W_V < H_S - H_V, set W_S = W_V and H_S = W_S · H_V / W_V;
When H_S < H_V and H_S - H_V < W_S - W_V, set H_S = H_V and W_S = H_S · W_V / H_V;
When x_D|_S < 0 or W_S < W_V, set x_D|_S = 0;
When y_D|_S < 0 or H_S < H_V, set y_D|_S = 0;
When x_D|_S > W_S - W_V, set x_D|_S = W_S - W_V;
When y_D|_S > H_S - H_V, set y_D|_S = H_S - H_V.
Meanwhile, in order to solve the technical problem, the present invention also provides a device for real-time dragging and zooming of video on a mobile terminal, which comprises a decoding module, a drawing module and an interaction module.
The decoding module comprises a decoding sub-module, a decoding control sub-module, a conversion sub-module and a double buffer. The decoding sub-module decodes the video stream file into video frame data; the decoding control sub-module receives the zoom parameters from the interaction module and controls the conversion sub-module to perform format conversion; the conversion sub-module converts the video frame data obtained by the decoding sub-module into an RGB intermediate frame in the ARGB8888 RGB format; the double buffer stores the converted RGB intermediate frame.
The drawing module comprises an extraction sub-module, a filling sub-module and a drawing control sub-module. The extraction sub-module extracts the data of the portion to be displayed from the double buffer; the filling sub-module writes the extracted data into the display area; the drawing control sub-module receives the positioning parameters from the interaction module and controls the position from which the extraction sub-module extracts data.
The interaction module comprises a monitoring sub-module, a calculation sub-module, a request splitter, a zoom adapter and a drag adapter. The monitoring sub-module monitors user interaction requests in the display area in real time; the calculation sub-module processes the user's interaction requests and transmits the results to the request splitter; the request splitter classifies each interaction request as a zoom request, a drag request, or both; the zoom adapter receives the zoom requests separated out by the request splitter and performs zoom coordinate calculation to obtain the zoom parameters; the drag adapter receives the drag requests separated out by the request splitter and performs drag coordinate calculation to obtain the positioning parameters.
Further, the interaction module also comprises an out-of-bounds processing sub-module, which detects whether an interaction request goes out of bounds and corrects it.
The beneficial effects of the present invention are as follows: by dragging and zooming the video image on the mobile terminal in real time, the method and device of the present invention solve the problems of video playback on mobile terminals in the prior art; the implementation is relatively simple and efficient, and an out-of-bounds detection and correction method is additionally provided for the case where, after dragging or zooming, the extracted region of the video source exceeds the display area, making video playback operations on mobile terminals more effective and convenient.
Brief Description of the Drawings
Fig. 1 is a flow chart of a method for real-time dragging and zooming of video on a mobile terminal according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the conversion of zoom coordinates during a zoom operation according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the conversion of drag coordinates during a drag operation according to an embodiment of the present invention;
Fig. 4 is a structural block diagram of a device for real-time dragging and zooming of video on a mobile terminal according to an embodiment of the present invention.
Detailed Description of the Embodiments
The present invention is further described below with reference to the accompanying drawings and specific embodiments.
A method for real-time dragging and zooming of video on a mobile terminal according to the present invention comprises a decoding thread, a drawing thread and an interaction thread, which run in parallel.
The specific steps of the decoding thread are:
S11. After the video stream arrives at the mobile terminal, decode the video stream to obtain original data frames;
S12. Collect the zoom parameters generated by the interaction thread and control the zoom ratio of the video;
S13. Convert the video format, finally obtaining an RGB intermediate frame in the ARGB8888 RGB format;
S14. Store the converted RGB intermediate frame in the double buffer;
S15. After frame synchronization, wake up the drawing thread and at the same time start decoding the next video frame.
The specific steps of the drawing thread are:
S21. After receiving the wake-up signal from the decoding thread, start the drawing thread;
S22. Collect the positioning parameters generated by the interaction thread and control the extraction region of the video image;
S23. Extract the data of the portion to be displayed from the double buffer;
S24. Write the data to be displayed extracted in step S23 into the display area;
S25. After the data filling is completed, enter the sleep state until the wake-up signal from the decoding thread is received again, then start drawing the next frame.
The specific steps of the interaction thread are:
S31. Start the interaction thread and monitor interaction events within the display window, the interaction events including one-finger dragging and two-finger zooming;
S32. Calculate and analyze the interaction events occurring in the display window and judge whether the user's requirement has changed; if so, go to step S33, otherwise return to step S31 and continue to monitor interaction events in the display window;
S33. Calculate the zoom component and the drag component;
S34. Perform coordinate conversion based on the zoom component, obtain the zoom ratio, generate the zoom parameters, and pass them to the decoding thread;
S35. Perform coordinate conversion based on the drag component, obtain the drag displacement, generate the positioning parameters, and pass them to the drawing thread.
In order that those skilled in the art can understand and implement the technical solution of the present invention, the method of the present invention is described below with reference to the flow chart shown in Fig. 1:
Step 1, open the file and complete initialization: after the decoding thread starts, find and open the video stream file according to its location and complete the corresponding parameter initialization;
Step 2, decoding: decode the video stream file with the specified decoder to obtain the original data frames;
Step 3, collect zoom parameters to control the zoom ratio: collect the zoom-ratio control parameters sent by the interaction thread;
Step 4, format conversion: convert the original data frames into an RGB format that the mobile terminal can process;
Step 5, store the RGB intermediate frame in the double buffer: store the converted RGB intermediate frame in whichever of the two buffers is idle;
Step 6, wake up the drawing thread after frame synchronization: store the RGB intermediate frame and synchronize the double buffer, then go to step 16 to wake up the drawing thread for drawing, and at the same time return to step 2 to decode the next frame;
Step 7, monitor window events: after the interaction thread starts, monitor the events in the display window; the main window events include single tap, double tap, multi-touch and so on, but only one-finger drag and two-finger zoom events are of interest here;
Step 8, calculate and analyze the interaction behavior: calculate the interaction behavior and analyze it to judge whether the user's requirement has changed;
Step 9, user requirement changed: judge from the calculated interaction result whether the tolerance is exceeded; if so, go to step 10 to further separate the user requirement; otherwise return to step 7 and continue to monitor window events;
Step 10, separate the components: when the user requirement has changed, further calculate the drag and zoom components, separate them, and pass the corresponding vectors to step 11 and step 12;
Step 11, generate zoom parameters: from the separated zoom component, perform coordinate conversion to obtain the zoom ratio, generate the zoom parameters, pass them to the decoding thread's collection of zoom parameters controlling the zoom ratio, and go to step 3;
Step 12, generate positioning parameters: from the separated drag component, perform coordinate conversion to obtain the drag displacement, generate the drag positioning parameters, pass them to the drawing thread's collection of positioning parameters controlling the extraction area, and go to step 13;
Step 13, collect positioning parameters to control the extraction area: collect the positioning parameters sent by the interaction thread, determine the starting position of extraction, and go to step 14 for data extraction;
Step 14, extraction: extract the data to be displayed from the double buffer; to improve efficiency, only the portion to be displayed is extracted and the complete RGB intermediate frame does not need to be extracted; after extraction, go to step 15 to fill the data;
Step 15, filling: open the display area and write the extracted data into it with the proper adaptation; this completes the drawing of one frame of image; then go to step 16 and the drawing thread enters the sleep state;
Step 16, sleep/wake-up trigger: after filling the data, the drawing thread enters the sleep state and waits for the arrival of the next frame of data to wake it up, then goes to step 13 to draw the next frame.
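As an illustration only, the parallel structure of the decoding and drawing threads in steps 1 to 16 could be organized as in the following sketch. The sketch assumes POSIX threads; the names decode_thread, draw_thread, frame_ready, zoom_ratio, origin_x and origin_y are not taken from the embodiment but are introduced here for illustration.

```c
#include <pthread.h>
#include <stdbool.h>

/* Shared state between the threads (illustrative names). The interaction
 * thread updates zoom_ratio (step 11) and origin_x/origin_y (step 12)
 * while holding the lock.                                                */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  frame_ready = PTHREAD_COND_INITIALIZER;
static bool   new_frame  = false;   /* set by the decoder, cleared by the drawer    */
static double zoom_ratio = 1.0;     /* controls the size of the intermediate frame  */
static int    origin_x = 0, origin_y = 0;  /* extraction start position             */

static void *decode_thread(void *arg) {
    (void)arg;
    for (;;) {
        /* Steps 2-5: decode one frame, convert it using zoom_ratio and
         * store the RGB intermediate frame in the idle buffer.          */
        pthread_mutex_lock(&lock);
        new_frame = true;                   /* step 6: frame synchronization */
        pthread_cond_signal(&frame_ready);  /* wake the drawing thread       */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *draw_thread(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (!new_frame)                  /* step 16: sleep until woken    */
            pthread_cond_wait(&frame_ready, &lock);
        new_frame = false;
        pthread_mutex_unlock(&lock);
        /* Steps 13-15: extract the region starting at (origin_x, origin_y)
         * from the front buffer and fill it into the display area.        */
    }
    return NULL;
}
```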
First, in the decoding thread, the video must be decoded after the video stream reaches the mobile terminal. Common codecs include MPEG, H.261, H.263, WMV and so on; the embodiment of the present invention takes FFmpeg as an example. The decoded data frames are in YUV420P format and are finally converted into RGB intermediate frames in the ARGB8888 RGB format. Taking the Android platform as the mobile terminal of this embodiment, since FFmpeg cannot be used directly on the Android platform, it needs to be ported to Android. The specific porting process is as follows:
1. Obtain the FFmpeg source code and the NDK development kit;
2. Configure the NDK development environment;
3. Add the .mk files according to the NDK project specification, and modify the original build configuration files so that they conform to the NDK build specification;
4. Compile FFmpeg with the ndk-build command of the NDK development kit;
5. Deploy the compiled .so shared libraries.
After the decoder has been ported to the platform, video decoding can be performed. The decoded data frames are in YUV420P format, in which YUV consists of three components: "Y" represents luminance, i.e. the gray value, while "U" and "V" represent chrominance, which describes the color and saturation of the image and specifies the color of a pixel. Since the mobile terminal cannot directly process and display YUV420P data, it must be converted into RGB format.
Furthermore, since the present invention processes decoding and drawing in parallel and the bottleneck affecting system operation is decoding rather than drawing, the decoding time should be shortened as much as possible; therefore, the embodiment of the present invention converts the YUV420P format into RGB intermediate frames in the ARGB8888 RGB format. With the API provided by FFmpeg, the format conversion can be invoked directly, and the conversion can be completed together with the subsequent scaling operation; it is only necessary to set the target-frame-format member of the SwsContext structure to ARGB8888 during the conversion stage.
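As an illustrative sketch of how the conversion and the zoom-controlled rescaling can be merged into a single libswscale call, the function below uses the public FFmpeg API (sws_getContext and sws_scale). The function name convert_and_scale is an assumption, and AV_PIX_FMT_RGBA is used here merely as a 32-bit output format standing in for the ARGB8888 layout; the exact constant required by the target platform may differ.

```c
#include <libswscale/swscale.h>
#include <libavutil/frame.h>
#include <libavutil/pixfmt.h>

/* Convert a decoded YUV420P frame into a 32-bit RGB intermediate frame whose
 * target size (dst_w x dst_h) is derived from the zoom parameters collected
 * from the interaction thread. dst must already be allocated for that size. */
static int convert_and_scale(const AVFrame *src, AVFrame *dst,
                             int dst_w, int dst_h)
{
    struct SwsContext *sws = sws_getContext(
        src->width, src->height, AV_PIX_FMT_YUV420P,  /* decoder output       */
        dst_w, dst_h, AV_PIX_FMT_RGBA,                /* zoomed 32-bit output  */
        SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws)
        return -1;

    /* A single call performs both the colour-space conversion and the
     * rescaling, so no separate zoom pass is needed after decoding.          */
    sws_scale(sws, (const uint8_t * const *)src->data, src->linesize,
              0, src->height, dst->data, dst->linesize);
    sws_freeContext(sws);
    return 0;
}
```

In this sketch dst_w and dst_h correspond to the W_{t+1} and H_{t+1} computed from the zoom ratio; keeping the SwsContext cached while the ratio is unchanged would be the natural optimization.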
Secondly, the drawing thread uses a passively triggered message mechanism: after the decoding thread finishes decoding one frame of data, it notifies and wakes up the drawing thread to draw one frame of image; after drawing is completed, the drawing thread enters the sleep state and waits to be woken up again.
To draw an image, the data to be displayed must first be extracted from the RGB intermediate frame. To simplify the calculation, the extraction operation only needs the coordinates of the visible video area relative to the origin of the RGB intermediate frame; these coordinates are controlled by the interaction thread and stored as a global variable in the drawing thread. If a drag request occurs, the coordinate value is updated; otherwise the same coordinates are reused.
In practical applications, the drawing thread needs to extract data from the RGB intermediate frame while the decoding thread needs to write data into it. To keep the data consistent and the program running correctly, the RGB intermediate frame would require synchronization and mutual exclusion, which would reduce the operating efficiency of the system. The present invention instead uses two RGB intermediate frames to simulate a double buffer: after decoding and conversion, the idle RGB intermediate frame is detected and taken out to store the newly obtained frame; a flag bit managed by the drawing thread indicates the occupancy state, and the decoding thread provides a unified exit that hides the double-buffer details from the drawing thread, simplifying its operation.
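A minimal sketch of the double buffer built from two RGB intermediate frames is given below, assuming the flag is a single atomic index flipped by the decoding thread when it publishes a new frame; the structure and function names are illustrative and not taken from the embodiment.

```c
#include <stdatomic.h>

/* Two RGB intermediate frames simulating a double buffer (illustrative). */
typedef struct {
    void      *frame[2];   /* the two converted RGB intermediate frames     */
    atomic_int front;      /* index of the frame the drawing thread reads   */
} DoubleBuffer;

/* Decoding side: obtain the idle slot so the next converted frame can be
 * written without touching the frame the drawing thread is reading.       */
static void *idle_slot(DoubleBuffer *db) {
    return db->frame[1 - atomic_load(&db->front)];
}

/* Decoding side, after the new frame is complete: publish it by flipping
 * the flag. This is the unified exit that hides the double buffer from the
 * drawing thread, which always reads db->frame[atomic_load(&db->front)].  */
static void publish(DoubleBuffer *db) {
    atomic_store(&db->front, 1 - atomic_load(&db->front));
}
```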
In the interaction thread, gesture recognition on the mobile terminal mainly means listening for gesture operations on the display component SurfaceView; this embodiment mainly listens for one-finger touch-drag and two-finger touch-zoom events. Considering the operating efficiency of the system and a good interactive response, a request is responded to only after the interaction requests have accumulated to a certain degree.
Furthermore, during a zoom operation the video display must conform to human visual habits: once zooming has reached the maximum, zooming further would pad the image edges with black and spoil the visual effect; such zooming is meaningless and the request can be rejected. The maximum zoom is therefore set to displaying the entire intermediate frame. In this embodiment, zooming is performed linearly and in equal proportion, because the display of a mobile terminal is small and the exponential zooming of the prior art is too fast for the user to control precisely. With linear proportional zooming, if the desired level has not been reached after a certain amount of zooming, a second zoom can be performed; although this increases the number of zoom operations, the zooming is finer and better matches the user's needs.
Real-time dragging of the video is handled by the drawing thread immediately after the drag operation occurs: it changes the position from which data is extracted from the RGB intermediate frame and then fills the display area. Real-time zooming, however, does not wake the drawing thread immediately after the zoom occurs; instead the zoom parameters are sent to the decoding thread to control the decoding of the next frame, because the currently available RGB intermediate frame has already been generated and is immutable, so the video size cannot change immediately in response to the zoom request. Moreover, the origin coordinates of the extraction data area are defined with respect to the new zoom ratio; if drawing happened immediately, the drawing thread would use the new origin coordinates of the extraction data area to draw the old frame, causing the video to jitter. Therefore, this embodiment uses the wake-up trigger of the decoding thread as the synchronization command of the drawing thread, so that the drawing of the new visible area stays synchronized with the newly decoded frame and video jitter is avoided.
Fig. 2 is a schematic diagram of the conversion of zoom coordinates during a zoom operation according to the embodiment of the present invention; the specific process of generating the zoom parameters is as follows:
Let the finger touch points before the zoom operation be E_t and F_t, and the touch points after the zoom operation be E_{t+1} and F_{t+1}. Let S_t be the starting point of the extraction data area, with the coordinates of E_t, F_t, E_{t+1} and F_{t+1} all taken relative to S_t. Let W_v and H_v be the width and height of the extraction data area, and W_t and H_t the width and height of the RGB intermediate frame at time t before the zoom operation; after the zoom operation, W_t and H_t become smaller in equal proportion while W_v and H_v remain unchanged.
Since the aspect ratio of the extraction data area and that of the RGB intermediate frame are generally not equal, the width and the height cannot both be scaled by the zoom ratio at the same time; considering the user's visual habits, the vertical zoom effect matters more, so the vertical ratio between the extraction data area and the RGB intermediate frame is selected as the size ratio, and the zoom ratio Z_v within the extraction data area is calculated.
After the zoom operation, W_{t+1} and H_{t+1} are obtained by applying the zoom ratio Z_v to W_t and H_t, respectively.
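The expressions belonging to Fig. 2 are not reproduced in the text above, so the following LaTeX fragment is only one consistent reconstruction from the definitions given, assuming that the zoom ratio is taken from the vertical separation of the two touch points and that the intermediate-frame size is rescaled by that ratio.

```latex
% One possible reconstruction of the zoom relations (assumption, see lead-in).
% All touch-point coordinates are taken relative to S_t, as defined above.
Z_v \;=\; \frac{\left| y_{F_{t+1}} - y_{E_{t+1}} \right|}
               {\left| y_{F_t}     - y_{E_t}     \right|},
\qquad
W_{t+1} \;=\; Z_v\, W_t, \qquad H_{t+1} \;=\; Z_v\, H_t,
\qquad
\text{with } \tfrac{H_v}{H_t} \text{ taken as the extraction-area to intermediate-frame size ratio.}
```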
Fig. 3 is a schematic diagram of the conversion of drag coordinates during a drag operation according to the embodiment of the present invention; the specific process of generating the positioning parameters is as follows:
Let the finger touch point before the drag operation be A_t and the touch point after the drag operation be A_{t+1}. Let D be the starting point of the RGB intermediate frame, and let S_t and S_{t+1} be the starting points for data extraction before and after the drag operation, respectively. The coordinates of A_t and A_{t+1} are taken relative to S_t, and the coordinates of S_t and S_{t+1} are taken relative to D; the coordinates of S_{t+1} relative to D are then computed from these quantities.
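The concrete formula behind Fig. 3 is likewise not reproduced here. Under the definitions above, keeping the touched content under the finger while it moves from A_t to A_{t+1} leads to the relation below; it is offered as a plausible reconstruction rather than the original expression.

```latex
% Plausible reconstruction of the drag (positioning) relation (assumption).
\left( x_{S_{t+1}},\, y_{S_{t+1}} \right)\big|_{D}
  \;=\; \left( x_{S_t},\, y_{S_t} \right)\big|_{D}
  \;+\; \left( x_{A_t} - x_{A_{t+1}},\; y_{A_t} - y_{A_{t+1}} \right)\big|_{S_t}
```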
However, whether a one-finger drag or a two-finger zoom is performed, after the positioning coordinates and the zoom ratio have been calculated and before they are passed to the drawing and decoding threads, out-of-bounds detection and correction must still be carried out to ensure that the transformation parameters fall within a reasonable range.
Correspondingly, after a zoom operation the size of the target video must satisfy the following: 1. the target video must not be smaller than the display area, which would leave black borders around the displayed video and would not conform to the user's visual habits; 2. overflow caused by unrestricted enlargement must be avoided: the target video has a maximum attainable size, specified at initialization, which must not be exceeded when enlarging; 3. a mismatch between the aspect ratio of the target display area and that of the video source must be handled reasonably.
After a drag operation, the positioning parameters must not take values beyond the range of the RGB intermediate frame, otherwise reading the data will fail; and when the positioning parameters are within the range of the RGB intermediate frame, their values must be such that, when the video is dragged, none of the top, bottom, left or right borders of the target display area goes beyond the corresponding border of the video.
In summary, when performing a one-finger drag or a two-finger zoom, the following points must be observed: 1. the minimum form of the video must be displayable completely within the target area, with the uncovered area filled with black; 2. the minimum form must not leave black borders in both the x and y directions at the same time, as this does not conform to the user's visual habits; 3. for ease of processing, when black borders appear, the video and the target display area are first aligned at the upper-left corner; 4. when the video is larger than the target display area in both dimensions, no black borders may be shown. To address these issues, the solution of this embodiment is as follows: when the positioning parameters after a zoom or drag operation take values beyond the range of the RGB intermediate frame, out-of-bounds detection is performed on the zoom and positioning parameters. Specifically, let W_V and H_V denote the width and height of the display area, W_S and H_S the width and height of the video source, and (x_D, y_D)|_S the origin coordinates of the extracted data area;
When W_S < W_V and W_S - W_V < H_S - H_V, set W_S = W_V and H_S = W_S · H_V / W_V;
When H_S < H_V and H_S - H_V < W_S - W_V, set H_S = H_V and W_S = H_S · W_V / H_V;
When x_D|_S < 0 or W_S < W_V, set x_D|_S = 0;
When y_D|_S < 0 or H_S < H_V, set y_D|_S = 0;
When x_D|_S > W_S - W_V, set x_D|_S = W_S - W_V;
When y_D|_S > H_S - H_V, set y_D|_S = H_S - H_V.
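The six correction rules above translate directly into a small clamping routine. The sketch below is an assumed implementation: the structure ViewParams, its field names and the function correct_bounds are introduced here for illustration only, with Ws/Hs the scaled video size, Wv/Hv the display-area size and xD/yD the origin of the extraction area, as in the text.

```c
/* Out-of-bounds correction of the zoom and positioning parameters,
 * following the six rules stated above (illustrative sketch).        */
typedef struct {
    int Ws, Hs;   /* width and height of the (scaled) video source      */
    int Wv, Hv;   /* width and height of the display / extraction area  */
    int xD, yD;   /* origin of the extraction area inside the frame     */
} ViewParams;

static void correct_bounds(ViewParams *p)
{
    /* Rules 1-2: the video must not be smaller than the display area;
     * restore the size from the dimension that violates the constraint. */
    if (p->Ws < p->Wv && (p->Ws - p->Wv) < (p->Hs - p->Hv)) {
        p->Ws = p->Wv;
        p->Hs = p->Ws * p->Hv / p->Wv;
    }
    if (p->Hs < p->Hv && (p->Hs - p->Hv) < (p->Ws - p->Wv)) {
        p->Hs = p->Hv;
        p->Ws = p->Hs * p->Wv / p->Hv;
    }
    /* Rules 3-6: clamp the extraction origin so that the display window
     * never leaves the frame (upper-left alignment when black borders
     * would otherwise appear).                                           */
    if (p->xD < 0 || p->Ws < p->Wv) p->xD = 0;
    if (p->yD < 0 || p->Hs < p->Hv) p->yD = 0;
    if (p->xD > p->Ws - p->Wv) p->xD = p->Ws - p->Wv;
    if (p->yD > p->Hs - p->Hv) p->yD = p->Hs - p->Hv;
}
```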
Meanwhile, for the above method, the present invention also provides a device for real-time dragging and zooming of video on a mobile terminal, which comprises a decoding module, a drawing module and an interaction module.
The decoding module comprises a decoding sub-module, a decoding control sub-module, a conversion sub-module and a double buffer. The decoding sub-module decodes the video stream file into video frame data; the decoding control sub-module receives the zoom parameters from the interaction module and controls the conversion sub-module to perform format conversion; the conversion sub-module converts the video frame data obtained by the decoding sub-module into an RGB intermediate frame in the ARGB8888 RGB format; the double buffer stores the converted RGB intermediate frame.
The drawing module comprises an extraction sub-module, a filling sub-module and a drawing control sub-module. The extraction sub-module extracts the data of the portion to be displayed from the double buffer; the filling sub-module writes the extracted data into the display area; the drawing control sub-module receives the positioning parameters from the interaction module and controls the position from which the extraction sub-module extracts data.
The interaction module comprises a monitoring sub-module, a calculation sub-module, a request splitter, a zoom adapter and a drag adapter. The monitoring sub-module monitors user interaction requests in the display area in real time; the calculation sub-module processes the user's interaction requests and transmits the results to the request splitter; the request splitter classifies each interaction request as a zoom request, a drag request, or both; the zoom adapter receives the zoom requests separated out by the request splitter and performs zoom coordinate calculation to obtain the zoom parameters; the drag adapter receives the drag requests separated out by the request splitter and performs drag coordinate calculation to obtain the positioning parameters.
The interaction module further comprises an out-of-bounds processing sub-module, which detects whether an interaction request goes out of bounds and corrects it.
Each sub-module of the device is described in detail below with reference to the structural block diagram of the device for real-time dragging and zooming of video on a mobile terminal shown in Fig. 4:
Decoding sub-module: decodes the video stream file into video frame data, i.e. the original frames. There are many video coding standards, the common ones including MPEG, H.261, H.263 and WMV; the present invention takes FFmpeg as an example, and the decoded data frames are in YUV420P format.
Decoding control sub-module: a control sub-module under the decoding module; it receives the zoom-ratio control parameters from the interaction module and controls the conversion sub-module to perform format conversion.
Conversion sub-module: converts the YUV420P original frames obtained by the decoding sub-module into RGB intermediate frames in the ARGB8888 RGB format. Depending on requirements, YUV420P can be converted into either the RGB565 or the ARGB8888 RGB format; under FFmpeg, converting YUV420P into ARGB8888 is faster than converting into RGB565. Although ARGB8888 increases the burden of drawing, drawing one frame is far faster than decoding one frame; since the present invention processes decoding and drawing in parallel and the bottleneck affecting system operation is decoding rather than drawing, the decoding time should be shortened as much as possible, so the scheme of converting YUV420P into the ARGB8888 RGB format is adopted.
Double buffer: detects which of the two buffers is idle and stores the converted RGB intermediate frame in it.
Extraction sub-module: extracts the data to be displayed from the double buffer of the decoding module; only the data of the displayed portion is extracted, and the whole RGB intermediate frame does not need to be extracted. The extracted data serves as the source data for the filling sub-module.
Filling sub-module: opens the target display area and writes the data obtained by the extraction sub-module into it with the proper adaptation.
Drawing control sub-module: a control sub-module under the drawing module; it receives the positioning parameters generated by dragging from the interaction module and controls the position from which the extraction sub-module extracts data.
Monitoring sub-module: monitors user interaction requests in the display area in real time, mainly in the form of touches on the display area.
Calculation sub-module: calculates the triggered requests; if a request exceeding the tolerance is detected, it is handed over to the request splitter for processing.
Request splitter: splits the interaction requests, determining whether each is a zoom request, a drag request, or both.
Zoom adapter: receives the zoom requests separated out by the request splitter and performs zoom coordinate calculation to obtain the zoom parameters.
Drag adapter: receives the drag requests separated out by the request splitter and performs drag coordinate calculation to obtain the positioning parameters.
Out-of-bounds processing sub-module: detects whether an interaction request goes out of bounds and corrects it.
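As an illustration of how the calculation sub-module and the request splitter could separate one touch sample into zoom and drag components, a small sketch follows; the tolerance value, the structures and the function names are assumptions introduced here and are not part of the claimed device.

```c
#include <math.h>
#include <stdbool.h>

typedef struct { float x, y; } Point;

/* One interaction sample: the two finger positions before and after. */
typedef struct { Point e0, f0, e1, f1; } TouchSample;

#define TOLERANCE 8.0f   /* assumed accumulation threshold, in pixels */

static float dist(Point a, Point b) {
    return hypotf(a.x - b.x, a.y - b.y);
}

/* Split a sample into a zoom component (change of the finger separation)
 * and a drag component (displacement of the first finger); return false
 * when neither exceeds the tolerance, so the request is ignored.         */
static bool split_request(const TouchSample *s,
                          float *zoom_component, Point *drag_component)
{
    *zoom_component   = dist(s->e1, s->f1) - dist(s->e0, s->f0);
    drag_component->x = s->e1.x - s->e0.x;
    drag_component->y = s->e1.y - s->e0.y;

    bool zoom = fabsf(*zoom_component) > TOLERANCE;
    bool drag = hypotf(drag_component->x, drag_component->y) > TOLERANCE;
    return zoom || drag;   /* a request may carry both components */
}
```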
Those of ordinary skill in the art will appreciate that the embodiments described here are intended to help the reader understand the principles of the present invention, and it should be understood that the scope of protection of the present invention is not limited to such specific statements and embodiments. Those of ordinary skill in the art can make various other specific modifications and combinations that do not depart from the essence of the present invention based on the technical teachings disclosed herein, and such modifications and combinations still fall within the scope of protection of the present invention.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410119164.6A CN103888840B (en) | 2014-03-27 | 2014-03-27 | A kind of video mobile terminal Real Time Dragging and the method and device for scaling |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410119164.6A CN103888840B (en) | 2014-03-27 | 2014-03-27 | A kind of video mobile terminal Real Time Dragging and the method and device for scaling |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103888840A true CN103888840A (en) | 2014-06-25 |
CN103888840B CN103888840B (en) | 2017-03-29 |
Family
ID=50957528
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410119164.6A Expired - Fee Related CN103888840B (en) | 2014-03-27 | 2014-03-27 | A kind of video mobile terminal Real Time Dragging and the method and device for scaling |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103888840B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104104897A (en) * | 2014-06-27 | 2014-10-15 | 北京奇艺世纪科技有限公司 | Mobile terminal video editing method and device |
CN104822088A (en) * | 2015-04-16 | 2015-08-05 | 腾讯科技(北京)有限公司 | Video image zooming method and device |
CN105491386A (en) * | 2014-09-19 | 2016-04-13 | 杭州海康威视数字技术股份有限公司 | Format conversion method and device of video data |
CN105635832A (en) * | 2014-11-10 | 2016-06-01 | 阿里巴巴集团控股有限公司 | Video processing method and device |
CN106201318A (en) * | 2016-07-13 | 2016-12-07 | 乐视控股(北京)有限公司 | Adjust the method for video playback window, device and display terminal |
CN108509133A (en) * | 2017-02-27 | 2018-09-07 | 腾讯科技(深圳)有限公司 | Searching component methods of exhibiting and device |
CN108804187A (en) * | 2018-05-30 | 2018-11-13 | 北京小米移动软件有限公司 | The setting method and device of interfaces windows |
CN111722887A (en) * | 2020-05-30 | 2020-09-29 | 南京恒歌零一信息科技有限公司 | A method and system for processing and displaying massive scatter data |
CN113766255A (en) * | 2021-01-19 | 2021-12-07 | 北京沃东天骏信息技术有限公司 | Video stream merging method and device, electronic equipment and computer medium |
CN116095250A (en) * | 2022-05-30 | 2023-05-09 | 荣耀终端有限公司 | Method and device for video cropping |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1919210A1 (en) * | 2005-08-25 | 2008-05-07 | Sony Corporation | Reproduction device, reproduction method, program, program storage medium, data structure, and recording medium fabrication method |
CN101616281A (en) * | 2009-06-26 | 2009-12-30 | 中兴通讯股份有限公司南京分公司 | A kind of with local method and the portable terminal that amplifies of mobile TV playing picture |
CN102520860A (en) * | 2011-12-09 | 2012-06-27 | 中兴通讯股份有限公司 | Method for controlling desktop display and mobile terminal |
CN103677543A (en) * | 2012-09-03 | 2014-03-26 | 中兴通讯股份有限公司 | Method for adjusting screen display area of mobile terminal and mobile terminal |
- 2014-03-27 CN CN201410119164.6A patent/CN103888840B/en not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1919210A1 (en) * | 2005-08-25 | 2008-05-07 | Sony Corporation | Reproduction device, reproduction method, program, program storage medium, data structure, and recording medium fabrication method |
CN101616281A (en) * | 2009-06-26 | 2009-12-30 | 中兴通讯股份有限公司南京分公司 | A kind of with local method and the portable terminal that amplifies of mobile TV playing picture |
CN102520860A (en) * | 2011-12-09 | 2012-06-27 | 中兴通讯股份有限公司 | Method for controlling desktop display and mobile terminal |
CN103677543A (en) * | 2012-09-03 | 2014-03-26 | 中兴通讯股份有限公司 | Method for adjusting screen display area of mobile terminal and mobile terminal |
Non-Patent Citations (1)
Title |
---|
周益民 等: "视频图像复杂度估计的H·264/AVC码率控制", 《计算机工程与应用》 * |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104104897A (en) * | 2014-06-27 | 2014-10-15 | 北京奇艺世纪科技有限公司 | Mobile terminal video editing method and device |
CN105491386A (en) * | 2014-09-19 | 2016-04-13 | 杭州海康威视数字技术股份有限公司 | Format conversion method and device of video data |
CN105491386B (en) * | 2014-09-19 | 2019-05-28 | 杭州海康威视数字技术股份有限公司 | A kind of format conversion method and device of video data |
CN105635832B (en) * | 2014-11-10 | 2019-05-28 | 阿里巴巴集团控股有限公司 | A kind of method for processing video frequency and device |
CN105635832A (en) * | 2014-11-10 | 2016-06-01 | 阿里巴巴集团控股有限公司 | Video processing method and device |
CN104822088A (en) * | 2015-04-16 | 2015-08-05 | 腾讯科技(北京)有限公司 | Video image zooming method and device |
US10397649B2 (en) | 2015-04-16 | 2019-08-27 | Tencent Technology (Shenzhen) Company Limited | Method of zooming video images and mobile display terminal |
CN104822088B (en) * | 2015-04-16 | 2019-03-19 | 腾讯科技(北京)有限公司 | Video image zooming method and apparatus |
CN106201318A (en) * | 2016-07-13 | 2016-12-07 | 乐视控股(北京)有限公司 | Adjust the method for video playback window, device and display terminal |
CN108509133A (en) * | 2017-02-27 | 2018-09-07 | 腾讯科技(深圳)有限公司 | Searching component methods of exhibiting and device |
CN108509133B (en) * | 2017-02-27 | 2023-05-02 | 腾讯科技(深圳)有限公司 | Search component display method and apparatus |
CN108804187A (en) * | 2018-05-30 | 2018-11-13 | 北京小米移动软件有限公司 | The setting method and device of interfaces windows |
CN108804187B (en) * | 2018-05-30 | 2023-11-24 | 北京小米移动软件有限公司 | Interface window setting method and device |
CN111722887A (en) * | 2020-05-30 | 2020-09-29 | 南京恒歌零一信息科技有限公司 | A method and system for processing and displaying massive scatter data |
CN113766255A (en) * | 2021-01-19 | 2021-12-07 | 北京沃东天骏信息技术有限公司 | Video stream merging method and device, electronic equipment and computer medium |
CN113766255B (en) * | 2021-01-19 | 2024-07-16 | 北京沃东天骏信息技术有限公司 | Video stream merging method, device, electronic equipment and computer medium |
CN116095250A (en) * | 2022-05-30 | 2023-05-09 | 荣耀终端有限公司 | Method and device for video cropping |
CN116095250B (en) * | 2022-05-30 | 2023-10-31 | 荣耀终端有限公司 | Method and apparatus for video cropping |
Also Published As
Publication number | Publication date |
---|---|
CN103888840B (en) | 2017-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103888840B (en) | A kind of video mobile terminal Real Time Dragging and the method and device for scaling | |
TWI490772B (en) | Method and apparatus for adapting custom control components to a screen | |
CN102750067B (en) | Large screen interaction method based on handheld device | |
US20180035137A1 (en) | Video sharing method and device, and video playing method and device | |
WO2017101441A1 (en) | Method and system for remote synchronization of annotation moving and scaling | |
WO2015070694A1 (en) | Screen splicing system and video data stream processing method | |
CN103281575A (en) | Novel control method of intelligent television | |
US10271105B2 (en) | Method for playing video, client, and computer storage medium | |
CN104080005A (en) | Device and method for clipping dynamic pictures | |
CN109388329A (en) | Method and system for remote annotation synchronization | |
US10257436B1 (en) | Method for using deep learning for facilitating real-time view switching and video editing on computing devices | |
US20150121301A1 (en) | Information processing method and electronic device | |
EP4344229A1 (en) | Video processing method and apparatus, device, and storage medium | |
US20220159197A1 (en) | Image special effect processing method and apparatus, and electronic device and computer readable storage medium | |
CN113852757A (en) | Video processing method, device, equipment and storage medium | |
WO2015078257A1 (en) | Search information display device and method | |
CN113852756B (en) | Image acquisition method, device, equipment and storage medium | |
JP2017068683A (en) | Information processing apparatus, image transmission method, and program | |
JP5911168B2 (en) | Display device, display method, and display program | |
CN112835506A (en) | Display device and control method thereof | |
CN112860212A (en) | Volume adjusting method and display device | |
CN104125497A (en) | Webpage video recording method and device | |
US9141443B2 (en) | Method and system for integrating visual controls with legacy applications | |
CN103177373A (en) | Interactive commercial experiencing method and system | |
CN102263922A (en) | Static state image extraction apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20170329 Termination date: 20180327 |