TW536918B - Method to increase the temporal resolution of continuous image series - Google Patents
Method to increase the temporal resolution of continuous image series
- Publication number
- TW536918B TW90127166A
- Authority
- TW
- Taiwan
- Prior art keywords
- view frame
- block
- frame
- view
- value
- Prior art date
Links
Landscapes
- Television Systems (AREA)
Abstract
Description
TECHNICAL FIELD OF THE INVENTION

The present invention relates to a method for increasing the temporal resolution of a continuous image sequence, and more particularly to a method that interpolates images into a continuous image sequence to increase the number of displayed pictures, thereby increasing the temporal resolution of the sequence.

PRIOR ART
In recent years, digital video has been widely developed and applied. During encoding and transmission, digital video is usually processed with video bit-rate control techniques so that the captured image signal can be transmitted and stored efficiently. In the course of this digital image-signal processing, some frames are usually dropped, and dropping frames lowers the temporal resolution of the video at playback. When the temporal resolution is too low (generally below 15 frames per second), the viewer perceives the played-back pictures as discontinuous. At present, a temporal interpolation method based on motion estimation is used to predict and generate an image between two pictures, thereby increasing the temporal resolution of the video. However, when the inserted picture differs too greatly from the real image, a sense of discontinuity still arises.

In view of this, after continuous research and testing, the inventors provide a new frame-interpolation architecture that brings the predicted pictures closer to the real images.
SUMMARY OF THE INVENTION

The object of the present invention is to provide a method for increasing the temporal resolution of a continuous image sequence.

To achieve the above object, the present invention provides a method for increasing the temporal resolution of a continuous image sequence by inserting a predicted image into the sequence. The method comprises the steps of: selecting a first frame from the continuous image sequence and dividing it into a plurality of blocks according to a preset value; selecting a second frame from the continuous image sequence; for each block of the first frame, searching the second frame for the most similar block as its corresponding block, and computing the motion vector between the corresponding blocks of the first and second frames; reconstructing a frame by using the motion vectors obtained in the preceding step to estimate the pixel values of each corresponding block at a time point between the first and second frames, and combining the estimated block pixel values into a reconstructed frame; applying a median filtering procedure to the reconstructed frame; and applying a spatial low-pass filtering procedure to the median-filtered reconstructed frame.

To enable those skilled in the art to understand the objects, features, and effects of the present invention, the invention is described in detail below by way of the following specific embodiments together with the accompanying drawings:
DETAILED DESCRIPTION

To fully disclose the present invention, a detailed description with reference to the drawings is given below.

The first figure shows a flowchart of the method according to the present invention. The method first selects a first frame and a second frame, each of which is divided into a plurality of blocks according to a preset value; the preset block size is typically 16 × 16 pixels. Step 11 inputs the pixel values of each block in the first and second frames. Step 12 estimates the motion vectors: based on the input pixel data of the blocks in the two frames, the most similar block in the second frame is found for each block of the first frame, and the motion vector between the corresponding blocks of the first and second frames is computed by motion estimation. Step 13 is motion compensation: using the motion vectors obtained in step 12, the pixel values of each corresponding block at a time point between the first and second frames are estimated, and the estimated block pixel values are combined to produce a new frame. Step 14 is motion classification: the blocks of the reconstructed frame produced by motion compensation are classified into a first block group and a second block group. The first block group consists of the blocks of the first and second frames whose content changes because the object image in the frame moves, that is, blocks whose motion vector is not zero; the second block group consists of the corresponding blocks of the first and second frames whose content does not change, that is, blocks whose motion vector is zero. Step 15 applies a median filtering procedure to the blocks classified into the first block group. Step 16 applies a spatial low-pass filtering procedure both to the first block group after the median filtering of step 15 and to the second block group, which did not pass through step 15. Step 17 outputs the new frame, namely the reconstructed frame computed from the first and second frames.

The first frame may be the current frame, and the second frame may be the frame preceding the current frame.

The second figure shows a schematic diagram of the motion-vector computation in the method of the present invention. Frame 20 is the current frame, divided into a plurality of blocks according to a preset value. Frame 25 is the frame preceding frame 20. For each block in frame 20, the most similar block in frame 25 is found as its correspondence; by comparing the corresponding blocks of frame 20 and frame 25, the motion vector of each block in frame 20 relative to its corresponding block in frame 25 is computed.
Figures 3A and 3B show schematic diagrams of the frame-reconstruction method of the present invention. The reconstruction first produces a first interpolated frame and a second interpolated frame, then takes the average of the pixel data of the two interpolated frames and uses that average as the pixel values of the reconstructed frame.

Figure 3A shows how the first interpolated frame is produced. A block B1 in the first frame 30 is located at position (x, y), and its motion vector with respect to the second frame 35 is (dx, dy); that is, the position corresponding to block B1 in the second frame 35 is (x+dx, y+dy), shown as block B2 in Figure 3A. For the block B3 at position (x, y) in the first interpolated frame 33-A, the corresponding position in the second frame 35 is (x+dx/2, y+dy/2), shown as block C3b in Figure 3A, and the corresponding position in the first frame 30 is (x−dx/2, y−dy/2), shown as block C3f. Block B3 is reconstructed from the average of the pixel data of C3b and C3f. Repeating this for every block of the first frame 30 and the second frame 35 yields the interpolated blocks that constitute the first interpolated frame 33-A.

Figure 3B shows how the second interpolated frame is produced. Here, instead of reconstructing the block at position (x, y), the block at position (x+dx/2, y+dy/2) is considered. From the motion vector of block B1 in the first frame 30, its position in the second interpolated frame 33-B, namely the position of block B3, can be predicted. The pixel values of block B3 in the second interpolated frame 33-B are obtained as the average of the pixel data of block B1 in the first frame 30 and block B2 in the second frame 35. This method of producing the second interpolated frame is more logical and more accurate than the method of producing the first interpolated frame, but it can lead to a situation in which some pixels of the frame are covered by several overlapping interpolated pixels while other pixels have no corresponding reconstructed interpolated pixel at all, leaving blank regions in the frame. In the former case, the average of all the overlapping pixel values is taken to represent the corresponding pixel value; in the latter case, the pixel value is predicted by the method used to produce the first interpolated frame.

After the first interpolated frame 33-A and the second interpolated frame 33-B are produced, the pixel values of the corresponding blocks of the two interpolated frames are averaged, and the averaged pixel data produce a reconstructed frame.
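The two-pass reconstruction of Figures 3A and 3B can be sketched as below. This is a simplified single-channel version under stated assumptions: motion-vector halves are taken as integers, an out-of-frame sample in pass A falls back to copying the source block, overlapping projections in pass B are averaged, and pass-B holes are filled from pass A, as the text describes; the 2 × 2 block size is illustrative only.

```python
import numpy as np

def _grab(frame, y, x, block):
    """Return the block whose top-left corner is (x, y), or None if it
    would leave the frame."""
    h, w = frame.shape
    if y < 0 or x < 0 or y + block > h or x + block > w:
        return None
    return frame[y:y + block, x:x + block].astype(float)

def reconstruct_midframe(f30, f35, mvs, block=2):
    """Average of the two interpolated frames of Figs. 3A/3B.
    `mvs[(x, y)] = (dx, dy)` maps a block of frame 30 to frame 35."""
    h, w = f30.shape
    A = np.zeros((h, w))                    # first interpolated frame
    b_sum = np.zeros((h, w))                # second interpolated frame,
    b_cnt = np.zeros((h, w))                # with overlap bookkeeping
    for (x, y), (dx, dy) in mvs.items():
        hx, hy = dx // 2, dy // 2           # assume even motion vectors
        # Fig. 3A: block stays at (x, y); sample half-way back and fore
        back = _grab(f35, y + hy, x + hx, block)
        fore = _grab(f30, y - hy, x - hx, block)
        if back is not None and fore is not None:
            A[y:y + block, x:x + block] = (back + fore) / 2
        else:                               # fallback: copy the source block
            A[y:y + block, x:x + block] = f30[y:y + block, x:x + block]
        # Fig. 3B: average of B1 and B2, projected half-way along the vector
        end = _grab(f35, y + dy, x + dx, block)
        ys, xs = y + hy, x + hx
        if end is not None and 0 <= ys and 0 <= xs and ys + block <= h and xs + block <= w:
            val = (f30[y:y + block, x:x + block].astype(float) + end) / 2
            b_sum[ys:ys + block, xs:xs + block] += val
            b_cnt[ys:ys + block, xs:xs + block] += 1
    B = np.where(b_cnt > 0, b_sum / np.maximum(b_cnt, 1), A)  # holes <- A
    return (A + B) / 2
```

For a scene undergoing a uniform even translation, both passes agree and the result is exactly the half-way-shifted picture in the frame interior, which is the behaviour the two figures aim for.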
After the reconstructed frame is produced, temporally correlated information is further used to improve the image quality. Situations such as the rotation or shrinking of objects in a continuous image sequence cannot be handled effectively, so using temporally correlated information reduces the unnatural results that arise when the reconstructed frame is played within the continuous image sequence. For the reconstructed blocks whose motion vector is not zero, a median filtering procedure is applied as follows. First, the corresponding pixel values in each block of the first frame, the second frame, and the reconstructed frame are compared, and in each triple the pixel value lying in the middle is taken as the median. The pixel value of the reconstructed block is then averaged with this median, and the averaged value is output. The formula is:

NewY = (MotCompY + Median(MotCompY, PastY, FutureY)) / 2

where NewY is the result of averaging the pixel value of each block of the reconstructed frame with the corresponding median, MotCompY is the reconstructed block produced by the method described above, PastY is the pixel value of the block in the second frame, and FutureY is the pixel value of the block in the first frame.

The reconstructed frame processed by the median filtering procedure is then processed by a spatial low-pass filtering procedure, which uses several filtering formulas.
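The median-filter formula above is evaluated independently at each pixel of a moving block over the three temporally corresponding values. Vectorising it with NumPy, as below, is an implementation choice rather than something the patent prescribes:

```python
import numpy as np

def temporal_median_filter(mot_comp, past, future):
    """NewY = (MotCompY + Median(MotCompY, PastY, FutureY)) / 2,
    evaluated independently at every pixel of a moving block."""
    med = np.median(np.stack([mot_comp, past, future]), axis=0)
    return (mot_comp + med) / 2
```

When the motion-compensated value already lies between the past and future values, the median equals MotCompY and the pixel is left unchanged; an outlier is pulled half-way toward the temporal median.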
BRIEF DESCRIPTION OF THE DRAWINGS

The first figure shows a flowchart of the method according to the present invention.
The second figure shows a schematic diagram of the motion-vector computation in the method according to the present invention.
Figures 3A and 3B show schematic diagrams of the frame-reconstruction method according to the present invention.

Reference numerals:
11 input of frame pixel data
12 motion-vector estimation
13 motion compensation
14 motion classification
15 median filtering procedure
16 spatial low-pass filtering procedure
17 output of reconstructed frame pixel data
20 first frame
25 second frame
30 first frame
33-A first interpolated frame
33-B second interpolated frame
35 second frame
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW90127166A TW536918B (en) | 2001-11-01 | 2001-11-01 | Method to increase the temporal resolution of continuous image series |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW90127166A TW536918B (en) | 2001-11-01 | 2001-11-01 | Method to increase the temporal resolution of continuous image series |
Publications (1)
Publication Number | Publication Date |
---|---|
TW536918B true TW536918B (en) | 2003-06-11 |
Family
ID=29268266
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW90127166A TW536918B (en) | 2001-11-01 | 2001-11-01 | Method to increase the temporal resolution of continuous image series |
Country Status (1)
Country | Link |
---|---|
TW (1) | TW536918B (en) |
-
2001
- 2001-11-01 TW TW90127166A patent/TW536918B/en not_active IP Right Cessation
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8780957B2 (en) | 2005-01-14 | 2014-07-15 | Qualcomm Incorporated | Optimal weights for MMSE space-time equalizer of multicode CDMA system |
US9197912B2 (en) | 2005-03-10 | 2015-11-24 | Qualcomm Incorporated | Content classification for multimedia processing |
US8879856B2 (en) | 2005-09-27 | 2014-11-04 | Qualcomm Incorporated | Content driven transcoder that orchestrates multimedia transcoding using content information |
US8879857B2 (en) | 2005-09-27 | 2014-11-04 | Qualcomm Incorporated | Redundant data encoding methods and device |
US8879635B2 (en) | 2005-09-27 | 2014-11-04 | Qualcomm Incorporated | Methods and device for data alignment with time domain boundary |
US9071822B2 (en) | 2005-09-27 | 2015-06-30 | Qualcomm Incorporated | Methods and device for data alignment with time domain boundary |
US9088776B2 (en) | 2005-09-27 | 2015-07-21 | Qualcomm Incorporated | Scalability techniques based on content information |
US8654848B2 (en) | 2005-10-17 | 2014-02-18 | Qualcomm Incorporated | Method and apparatus for shot detection in video streaming |
US8948260B2 (en) | 2005-10-17 | 2015-02-03 | Qualcomm Incorporated | Adaptive GOP structure in video streaming |
US9131164B2 (en) | 2006-04-04 | 2015-09-08 | Qualcomm Incorporated | Preprocessor method and apparatus |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Tulyakov et al. | Time lens++: Event-based frame interpolation with parametric non-linear flow and multi-scale fusion | |
Massey et al. | Salient stills: Process and practice | |
CN110782490A (en) | Video depth map estimation method and device with space-time consistency | |
US5943445A (en) | Dynamic sprites for encoding video data | |
US8289411B2 (en) | Image processing apparatus, program, and method for performing preprocessing for movie reproduction of still images | |
EP1237370B1 (en) | A frame-interpolated variable-rate motion imaging system | |
JPH02141876A (en) | Method and apparatus for generating animation image | |
JPH06500225A (en) | video image processing | |
CN112270692B (en) | Monocular video structure and motion prediction self-supervision method based on super-resolution | |
JP2009194896A (en) | Image processing device and method, and imaging apparatus | |
JP2009003507A (en) | Image processing method, image processor, and image processing program | |
JP2007035038A (en) | Method for generating sequence of reduced images from sequence of source images | |
CN112488922B (en) | Super-resolution processing method based on optical flow interpolation | |
CN112750092A (en) | Training data acquisition method, image quality enhancement model and method and electronic equipment | |
CN114339030A (en) | Network live broadcast video image stabilization method based on self-adaptive separable convolution | |
TW536918B (en) | Method to increase the temporal resolution of continuous image series | |
EP0722251B1 (en) | Method for interpolating images | |
JPH07505033A (en) | Mechanical method for compensating nonlinear image transformations, e.g. zoom and pan, in video image motion compensation systems | |
US20230206955A1 (en) | Re-Timing Objects in Video Via Layered Neural Rendering | |
CN111767679A (en) | Method and device for processing time-varying vector field data | |
CN116033183A (en) | Video frame inserting method and device | |
JP3859989B2 (en) | Image matching method and image processing method and apparatus capable of using the method | |
Hu et al. | A multi-user oriented live free-viewpoint video streaming system based on view interpolation | |
Cho et al. | Depth image processing technique for representing human actors in 3DTV using single depth camera | |
JP2012004653A (en) | Image processing system and its control method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
GD4A | Issue of patent certificate for granted invention patent | ||
MM4A | Annulment or lapse of patent due to non-payment of fees |