
JPH0528269A - Hourly change extraction method for area shape - Google Patents

Hourly change extraction method for area shape

Info

Publication number
JPH0528269A
Authority
JP
Japan
Prior art keywords
amount
point
movement
thin line
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
JP3179457A
Other languages
Japanese (ja)
Inventor
Seiya Shimizu
誠也 清水
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Priority to JP3179457A priority Critical patent/JPH0528269A/en
Publication of JPH0528269A publication Critical patent/JPH0528269A/en
Withdrawn legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

PURPOSE: To provide a method for extracting the temporal change of a region shape that can roughly capture the deformation of an object in a moving image, in particular the movement of the parts constituting a shape, without being affected by minute contour changes caused by noise or the like at the time of moving-image input. CONSTITUTION: A thinning processing means 2 creates a thin-line image by shaving the shape contour of a closed region in a moving-image frame, output from a picture input device 1, inward from its periphery. A feature point extraction means 3 extracts feature points on the thin line produced by the thinning processing means 2. A feature point correspondence means 4 matches the extracted feature points between two frames. A calculation means calculates the movement of the matched feature points and of the points on the thin line. When a point with a known movement exists in the closed region, the movement of any other point is taken to be equal to that of the thin-line point reachable from it by the shortest path inside the closed region.

Description

【発明の詳細な説明】Detailed Description of the Invention

【0001】[0001]

【産業上の利用分野】BACKGROUND OF THE INVENTION 1. Field of the Invention. The present invention relates to a method for extracting the temporal change of a region shape, that is, the amount of frame-to-frame deformation of a closed region present in a moving image.

【0002】One field of application of this method is apparatus for recognizing objects in moving images. An object in an image can be recognized by comparing the shape features of the region representing the object with those of a predefined model; by additionally comparing object and model using the temporal-change features of the region shape extracted by the present method, more accurate object recognition becomes possible.

【0003】[0003]

【従来の技術】2. Description of the Related Art. Conventionally, the amount of deformation of a closed region between frames has been extracted by obtaining the deformation of the contour of the closed region. In this method, the contour line of the closed region is first extracted for each frame, and feature points of the contour are then extracted. Typical feature points are the vertices obtained when the contour is approximated by a polygonal line, points of maximum curvature, and points where the contour is not differentiable.

【0004】Feature points are then matched between two adjacent frames of the moving image, and the movement of each matched feature point between the two frames is calculated. Since the movement of neighboring points along the contour should vary smoothly, the movement of contour points lying between feature points is determined by interpolating the movements of the feature points at both ends. The deformation of the region is defined by the movement of these contour points.

【0005】[0005]

【発明が解決しようとする課題】SUMMARY OF THE INVENTION. However, because the conventional deformation-extraction method described above operates only in the vicinity of the shape contour, it reacts sensitively to shape deformation caused by noise and the like. Moreover, for object recognition in moving images, roughly capturing the movement of the parts that make up a shape matters more than the minute temporal deformation of the contour. The conventional method is therefore unsuitable for recognizing object deformation in moving images.

【0006】An object of the present invention is to provide a method for extracting the temporal change of a region shape that can roughly capture the deformation of an object in a moving image, in particular the movement of the parts constituting a shape, without being affected by minute contour changes caused by noise and the like at the time of moving-image input.

【0007】[0007]

【課題を解決するための手段】In order to solve the above problems, the present invention provides a method for extracting the temporal change of a region shape configured as follows. FIG. 1 is a flowchart of the principle of the present invention, and FIG. 2 shows a device configuration for realizing it. The present invention is described with reference to FIGS. 1 and 2.

【0008】The image input device 1 outputs a binarized moving image, frame by frame, to the thinning processing means 2. The thinning processing means 2 creates a thin-line image by shaving the shape contour of the closed region in each moving-image frame inward from its periphery.

【0009】The feature point extraction means 3 extracts feature points on the thin line produced by the thinning processing means 2. The feature point correspondence means 4 matches, between two frames, the feature points extracted by the feature point extraction means 3.

【0010】Based on the matched feature points supplied by the feature point correspondence means 4, the calculation means 5 calculates the movement of the feature points and of the points on the thin line. When a point with a known movement exists in the closed region, the movement of any other point is estimated to be equal to that of the thin-line point reachable from it by the shortest path running inside the closed region.

【0011】The calculation means 5 also calculates the movement of points inside the closed region and outputs it to the image output device 6 as the deformation of the corresponding part of the region. Further, when the correspondence of feature points between two frames is known, the calculation means 5 estimates the frame-to-frame correspondence of the points on the thin-line segment bounded by those feature points, and calculates the movement of the points on that segment from this estimate.

【0012】[0012]

【作用】According to the present invention, the operation is as follows. FIG. 3 illustrates the operation of the invention. The processing target is a moving image in which, in each frame, the closed region is binarized to 1 and the background to 0, as shown in FIG. 3(a). To extract the temporal deformation of the closed region, the thin line of the closed-region shape is used instead of the contour of the region shape.

【0013】The thinning processing means 2 strips the pixels forming the closed region away from the shape contour inward, producing a linear thin line of one-pixel width as shown in FIG. 3(b). The feature point extraction means 3 extracts feature points such as end points, branch points, and intersections of the thin line from the thinned region shape of each frame, as shown in FIG. 3(c), and the feature point correspondence means 4 matches the feature points between two adjacent frames of the moving image.

【0014】Next, the movement amount calculation means 5 performs the following. As shown in FIG. 3(d), among the points in the closed region, the movement of the feature points is calculated from the matching result. The movement of points on a thin-line segment bounded by feature points is calculated from the feature-point correspondence, as shown in FIG. 3(e). For a point that belongs to the closed region but does not lie on the thin line, the movement between the two frames is set equal to that of the thin-line point reachable from it by the path of minimum length that passes only through the interior of the closed region, as shown in FIG. 3(f). Since the movement of the interior of the closed region is calculated in this way, the deformation of an object in the moving image, in particular the movement of the parts constituting its shape, can be roughly captured without being affected by minute contour changes caused by noise and the like at the time of moving-image input.

【0015】Further, the movement of the points forming the closed region between two frames is output to the image output device as the deformation of the region in the moving image. By creating an image in which a color is assigned to each part of the region according to its deformation and displaying it as a color image, the deformation can be identified by color.

【0016】Also, when the correspondence of feature points between two frames is known, the calculation means 5 estimates the frame-to-frame correspondence of the points on the thin-line segment bounded by those feature points and calculates their movement from this estimate, which simplifies the calculation of the movement of points in the closed region.

【0017】[0017]

【実施例】EMBODIMENT. An embodiment of the present invention is described in detail with reference to the drawings. FIG. 4 is a flowchart of the embodiment, and FIG. 5 shows its device configuration. The apparatus consists of an image input unit 1 that outputs the moving image frame by frame, a region-shape temporal-change extraction unit 7 that extracts the movement of parts as the temporal change inside the closed region from the frames supplied by the image input unit 1, and an image display unit 6 that displays the movement of the parts as an image.

【0018】The image input unit 1 uses a moving-image input device 11 such as a camera to successively create digitized frames of a moving object at times t = 1, 2, .... A background removal unit 12, for example a digital filter, converts each frame into a binary image B(t) in which pixels belonging to the background have value 0 and pixels belonging to the object shape have value 1 (see FIG. 3(a)), and sends it to the region-shape temporal-change extraction unit 7.

【0019】The region-shape temporal-change extraction unit 7 consists of a thinning processing unit 2, a feature point extraction unit 3, a feature point correspondence unit 4, a movement amount calculation unit 5, and an image display unit 6. The thinning processing unit 2 applies thinning to the value-1 region of the binary image B(t) received from the background removal unit 12, creating a thin-line image S(t) in which pixels in the resulting linear region have value 1 and all other pixels have value 0 (for example, the thin-line images S(t-1) and S(t) shown in FIG. 3(b)). Thinning is the process of shaving the contour of the value-1 region of a binary image inward from its surroundings so as to extract the one-pixel-wide center line of the shape. The thin-line image S(t) is sent on to the feature point extraction unit 3.
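The patent describes thinning only as shaving the contour inward until a one-pixel-wide center line remains and does not name a particular algorithm. As one common concrete choice, a Zhang-Suen thinning sketch in Python (an assumption; pure lists of 0/1, region = 1, and a zero border around the image) could look like:

```python
def zhang_suen_thin(img):
    """Iteratively peel contour pixels of a binary image (1 = region,
    0 = background) until a roughly one-pixel-wide skeleton remains.
    Zhang-Suen thinning; img is a list of lists with a zero border."""
    h, w = len(img), len(img[0])
    grid = [row[:] for row in img]

    def neighbours(y, x):
        # P2..P9, clockwise starting from the pixel directly above
        return [grid[y-1][x], grid[y-1][x+1], grid[y][x+1], grid[y+1][x+1],
                grid[y+1][x], grid[y+1][x-1], grid[y][x-1], grid[y-1][x-1]]

    def transitions(n):
        # number of 0 -> 1 transitions around the 8-neighbourhood
        return sum((a, b) == (0, 1) for a, b in zip(n, n[1:] + n[:1]))

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if grid[y][x] != 1:
                        continue
                    n = neighbours(y, x)
                    p2, _, p4, _, p6, _, p8, _ = n
                    if not (2 <= sum(n) <= 6 and transitions(n) == 1):
                        continue
                    if step == 0:
                        ok = p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0
                    else:
                        ok = p2 * p4 * p8 == 0 and p2 * p6 * p8 == 0
                    if ok:
                        to_delete.append((y, x))
            for y, x in to_delete:   # delete simultaneously per sub-pass
                grid[y][x] = 0
                changed = True
    return grid
```

An isolated pixel fails the neighbour-count test, so a connected region is never thinned away entirely.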

【0020】The feature point extraction unit 3 determines the connectivity of the thin line at each point of the thin-line image S(t) from the point's crossing number and extracts the feature points of the closed-region shape (for example the feature points a(t-1), b(t-1), a(t), b(t) shown in FIG. 3(c)). The crossing number of a point is the number of transitions from background (pixel value 0) to region (pixel value 1) encountered while traversing its eight neighbours once around. Points of S(t) whose crossing number is 1 (end points), 3 (branch points), or 4 (intersections) are taken as feature points of the closed-region shape and given labels pt (pt = 1, ..., Pt). The two-dimensional coordinates (Xt[pt], Yt[pt]) of the feature points are extracted and sent to the feature point correspondence unit 4.
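The crossing-number rule translates almost directly into code. The sketch below (function names are illustrative) walks the eight neighbours once around and counts 0 to 1 transitions:

```python
def crossing_number(img, y, x):
    """Number of 0 -> 1 transitions met while walking once around the
    eight neighbours of (y, x): 1 = end point, 3 = branch, 4 = crossing."""
    ring = [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
            img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]
    return sum(a == 0 and b == 1 for a, b in zip(ring, ring[1:] + ring[:1]))

def extract_feature_points(skel):
    """Return (y, x) of skeleton pixels whose crossing number is 1, 3 or 4
    (skel has a zero border so every skeleton pixel has 8 neighbours)."""
    h, w = len(skel), len(skel[0])
    return [(y, x) for y in range(1, h - 1) for x in range(1, w - 1)
            if skel[y][x] == 1 and crossing_number(skel, y, x) in (1, 3, 4)]
```

For a straight 5-pixel line the two ends have crossing number 1 and interior pixels have crossing number 2, so only the ends are reported.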

【0021】As shown in FIG. 6, the feature point correspondence unit 4 consists of coordinate storage units 411 and 412 and a correspondence processing unit 42. When t = 1, the feature-point coordinates sent by the feature point extraction unit 3 are stored in the coordinate storage unit 411, and the correspondence processing unit 42 is not activated. When t ≠ 1, on arrival of the feature-point coordinates for time t, the coordinates for time (t-1) held in the coordinate storage unit 411 are moved to the coordinate storage unit 412, the coordinates for time t are stored in the coordinate storage unit 411, and the correspondence processing unit 42 is activated.

【0022】The correspondence processing unit 42 compares the feature-point coordinates at times (t-1) and t and matches, one to one, the pairs whose Euclidean distance is shortest (for example the feature points a(t-1) and a(t), and b(t-1) and b(t), shown in FIG. 3(d)). The result is stored in the movement amount calculation unit 5 as the set Γt of ordered pairs (p(t-1), pt) of corresponding labels.
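A minimal sketch of this matching step, assuming a greedy pairing by ascending Euclidean distance (the text only states that shortest-distance pairs are matched one to one, so the tie-breaking policy here is an assumption):

```python
from math import hypot

def match_feature_points(prev_pts, curr_pts):
    """Greedily pair feature points of two frames by ascending Euclidean
    distance, each point used at most once; returns index pairs
    (i_prev, j_curr), i.e. the set Γt of the text."""
    pairs = sorted(
        (hypot(px - cx, py - cy), i, j)
        for i, (px, py) in enumerate(prev_pts)
        for j, (cx, cy) in enumerate(curr_pts))
    used_prev, used_curr, gamma = set(), set(), []
    for _, i, j in pairs:
        if i not in used_prev and j not in used_curr:
            used_prev.add(i)
            used_curr.add(j)
            gamma.append((i, j))
    return sorted(gamma)
```

For points that barely move between frames this recovers the identity pairing; a bipartite assignment solver would be a stricter alternative.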

【0023】

Γt = {(p(t-1), pt) | p(t-1) is the feature point at time (t-1) corresponding one-to-one to the feature point pt at time t}   ...(1)

As shown in FIG. 7, the movement amount calculation unit 5 consists of movement amount storage units 51X and 51Y, a feature point movement amount calculation unit 52, a thin-line movement amount calculation unit 53, and a region movement amount calculation unit 54.

【0024】The movement amount storage unit 51X (51Y) is a frame memory of the same size as the processed image; at address (X,Y) it holds, as its value, the movement in pixels in the X (Y) direction between times (t-1) and t of the pixel at coordinates (X,Y) within the region of the binary image B(t). The feature point movement amount calculation unit 52 refers to the contents of the coordinate storage units 412 and 411 and, based on the set Γt, computes the difference between the X and Y coordinates of each feature point pt and its corresponding feature point p'(t-1), and registers this difference in the movement amount storage units 51X and 51Y as the movement of pt between times (t-1) and t.

【0025】

M51X(Xt[pt], Yt[pt]) = Xt[pt] − X(t-1)[p'(t-1)]
M51Y(Xt[pt], Yt[pt]) = Yt[pt] − Y(t-1)[p'(t-1)]   ...(2)

where M51X(X,Y) and M51Y(X,Y) denote the values at address (X,Y) of the movement amount storage units 51X and 51Y, respectively.

【0026】The thin-line movement amount calculation unit 53 is started once the movements of all feature points pt have been calculated. Referring to the thin-line images S(t-1) and S(t) and to the set Γt, it regards a thin-line segment whose two end points correspond as itself corresponding between the two frames, and extracts the coordinate sequence (Xs1[l1], Ys1[l1]) (l1 = 1, 2, ..., L1) of the segment at time (t-1) and the coordinate sequence (Xs2[l2], Ys2[l2]) (l2 = 1, 2, ..., L2) at time t. The points l1 = 1 and l1 = L1 at time (t-1) are feature points, corresponding respectively to the feature points l2 = 1 and l2 = L2 at time t. The l1 corresponding to an arbitrary l2 is estimated so that

l2 : L2 − l2 = l1 : L1 − l1   ...(3)

holds, and from this estimate the movement between times (t-1) and t of the point l2 on the segment is computed and registered as follows (for example c(t-1), c(t) shown in FIG. 3(e)):

M51X(Xs2[l2], Ys2[l2]) = Xs2[l2] − Xs1[int(l2 × L1 / L2)]
M51Y(Xs2[l2], Ys2[l2]) = Ys2[l2] − Ys1[int(l2 × L1 / L2)]   ...(4)

where int() is a function that rounds to the nearest integer.
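Equations (3) and (4) can be sketched as follows, with indices 1-based as in the text. Note that Python's round() rounds halves to even while the patent's int() rounds half up, so this is an approximation at exact midpoints:

```python
def segment_movements(seg_prev, seg_curr):
    """Movement of each point of a thin-line segment at time t, taking the
    proportionally matched point of the time-(t-1) segment as in Eqs. (3)
    and (4). seg_* are lists of (x, y); the ends are the feature points."""
    L1, L2 = len(seg_prev), len(seg_curr)
    moves = []
    for l2 in range(1, L2 + 1):            # 1-based index along S(t)
        l1 = int(round(l2 * L1 / L2))      # int(): round to nearest
        l1 = min(max(l1, 1), L1)           # clamp to a valid index
        x2, y2 = seg_curr[l2 - 1]
        x1, y1 = seg_prev[l1 - 1]
        moves.append((x2 - x1, y2 - y1))
    return moves
```

When the two segments have equal length the matching is the identity, so a rigid one-pixel shift of the segment yields the same movement at every point.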

【0027】After the movement of every point on the thin line has been calculated and registered, the region movement amount calculation unit 54 is started. Referring to the binary image B(t) and to the contents of the movement amount storage units 51X and 51Y, it extracts the coordinates (Xm, Ym) of each point that lies inside the region but has no registered movement. The movement of such a point is taken to be equal to that of the thin-line point (coordinates (Xsm, Ysm)) reachable from it by the shortest path running through value-1 pixels of B(t) (see FIG. 3(f)).

【0028】

M51X(Xm, Ym) = M51X(Xsm, Ysm)
M51Y(Xm, Ym) = M51Y(Xsm, Ysm)   ...(5)

In this way the movement is calculated and registered in the movement amount storage units 51X and 51Y for every value-1 pixel of the binary image B(t).
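The propagation of Eq. (5) amounts to labelling every region pixel with the movement of its nearest skeleton pixel, distance measured along paths inside the region. This can be sketched as a multi-source breadth-first search; 4-connectivity and the tie-breaking between equidistant skeleton points are assumptions, since the patent fixes neither:

```python
from collections import deque

def propagate_movements(binary, skel_moves):
    """Assign to every value-1 pixel of `binary` the movement of the
    skeleton pixel nearest along in-region paths (Eq. (5)).
    skel_moves maps (y, x) of a skeleton pixel to its (dx, dy)."""
    h, w = len(binary), len(binary[0])
    moves = dict(skel_moves)
    queue = deque(skel_moves)          # multi-source BFS from the skeleton
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and binary[ny][nx] == 1
                    and (ny, nx) not in moves):
                moves[(ny, nx)] = moves[(y, x)]   # inherit nearest movement
                queue.append((ny, nx))
    return moves
```

Because the frontier only steps through value-1 pixels, movement never leaks across the background, which is exactly the "path inside the closed region" constraint of the text.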

【0029】As described above, for a closed-region shape that deforms continuously in a moving image, deformation is extracted not from the contour but from the thin line, the center line of the shape, which is little affected by minute deformation of the contour, and the movement of the interior of the closed region is then calculated. The deformation of an object in the moving image, in particular the movement of the parts constituting its shape, can therefore be roughly captured without being affected by minute contour changes caused by noise and the like at the time of moving-image input.

【0030】The movement amount calculation unit 5 further outputs the movements stored in the movement amount storage units 51X and 51Y to the outside of the apparatus as the deformation of the closed region between times (t-1) and t. At the same time, the image display unit 6 displays the deformation as an image.

【0031】The image display unit 6 is a device that visualizes, as an image, the deformation of the closed region between times (t-1) and t stored in the movement amount storage units 51X and 51Y; it consists of an imaging unit 61 and a display 62.

【0032】For each point with pixel value 1 on the binary image B(t) (coordinates (Xc, Yc)), the imaging unit 61 reads the deformation from the movement amount storage units 51X and 51Y, creates an image whose RGB value at (Xc, Yc) is determined from that deformation (background pixels are set to 0), and sends the image to the display 62 for display. Many functions can map a movement to an RGB value; the functions of FIG. 8 are shown as one example. R(Xc,Yc), G(Xc,Yc), and B(Xc,Yc) are the RGB components at coordinates (Xc, Yc), each taking a value between 0 and 1.

【0033】As shown in FIG. 8(a), R(Xc,Yc) is

R(Xc,Yc) = {1 − 3θ/(2π)} r/D   (0 ≤ θ ≤ 2π/3)
R(Xc,Yc) = {3θ/(2π) − 2} r/D   (4π/3 ≤ θ ≤ 2π)
R(Xc,Yc) = 0   (otherwise)   ...(6-1)

【0034】As shown in FIG. 8(b), G(Xc,Yc) is

G(Xc,Yc) = {2 − 3θ/(2π)} r/D   (2π/3 ≤ θ ≤ 4π/3)
G(Xc,Yc) = {3θ/(2π)} r/D   (0 ≤ θ ≤ 2π/3)
G(Xc,Yc) = 0   (otherwise)   ...(6-2)

【0035】As shown in FIG. 8(c), B(Xc,Yc) is

B(Xc,Yc) = {3 − 3θ/(2π)} r/D   (4π/3 ≤ θ ≤ 2π)
B(Xc,Yc) = {3θ/(2π) − 1} r/D   (2π/3 ≤ θ ≤ 4π/3)
B(Xc,Yc) = 0   (otherwise)   ...(6-3)

【0036】The angle θ is

θ = 2 tan⁻¹{−ΔX/(r + ΔY)}   (unit: rad, 0 ≤ θ ≤ 2π)

and the deformation magnitude r is

r = sqrt(ΔX² + ΔY²)   (unit: pixels).

【0037】As shown in FIG. 8(d), the X-direction deformation ΔX is ΔX = M51X(Xc,Yc), and the Y-direction deformation ΔY is ΔY = M51Y(Xc,Yc).

【0038】Here D is a constant that keeps the RGB values between 0 and 1; it is set larger than the maximum deformation magnitude r expected in the moving image. When an image of the region's deformation is created with these functions, a part of the region appears brighter the more it moves relative to the image origin and is colored differently depending on the direction in which it moves, so the deformation of the region can easily be checked on the display 62.
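Putting Eqs. (6-1) to (6-3) together with the definitions of θ and r gives the following sketch (the function name is illustrative; D is the normalizing constant described above):

```python
from math import atan2, hypot, pi

def movement_to_rgb(dx, dy, D):
    """Map a movement (ΔX, ΔY) to RGB values in [0, 1] per Eqs. (6-1)
    to (6-3): the direction angle θ selects the hue and the magnitude
    r, scaled by the constant D, sets the brightness."""
    r = hypot(dx, dy)
    if r == 0:
        return (0.0, 0.0, 0.0)           # no movement: black
    theta = 2 * atan2(-dx, r + dy)       # θ = 2·tan⁻¹{−ΔX/(r+ΔY)}
    if theta < 0:
        theta += 2 * pi                  # bring θ into [0, 2π)
    s = r / D
    k = 3 * theta / (2 * pi)             # rescale θ to [0, 3)
    if k <= 1:
        return ((1 - k) * s, k * s, 0.0)         # 0 ≤ θ ≤ 2π/3
    if k <= 2:
        return (0.0, (2 - k) * s, (k - 1) * s)   # 2π/3 ≤ θ ≤ 4π/3
    return ((k - 2) * s, 0.0, (3 - k) * s)       # 4π/3 ≤ θ ≤ 2π
```

Each of the three return branches matches one row of Eqs. (6-1) to (6-3) after substituting k = 3θ/(2π), so hue varies with direction and brightness with r/D, as the text describes.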

【0039】[0039]

【発明の効果】As described above, according to the present invention, for a closed-region shape that deforms continuously in a moving image, deformation is extracted not from the contour but from the thin line, the center line of the shape, which is little affected by minute deformation of the contour. A rough extraction of the shape's deformation is therefore possible that is unaffected by minute contour changes caused by noise and the like at the time of moving-image input.

【図面の簡単な説明】[Brief description of drawings]

【図1】FIG. 1 is a flowchart showing the principle of the present invention.

【図2】FIG. 2 is a diagram of a device configuration for realizing the present invention.

【図3】FIG. 3 is a diagram for explaining the operation of the present invention.

【図4】FIG. 4 is a flowchart of an embodiment of the present invention.

【図5】FIG. 5 is a diagram of the device configuration in the embodiment of the present invention.

【図6】FIG. 6 is a detailed configuration diagram of the feature point correspondence unit.

【図7】FIG. 7 is a detailed configuration diagram of the movement amount calculation unit.

【図8】FIG. 8 is an explanatory diagram of the RGB values used to render the movement inside a region as a color image.

【符号の説明】[Explanation of symbols]

1: image input unit
2: thinning processing unit
3: feature point extraction unit
4: feature point correspondence unit
5: movement amount calculation unit
6: image display unit
7: region-shape temporal-change extraction unit
11: moving image input device
12: background removal unit
42: correspondence processing unit
51X, 51Y: movement amount storage units
52: feature point movement amount calculation unit
53: thin-line movement amount calculation unit
54: region movement amount calculation unit
61: imaging unit
62: display
411, 412: coordinate storage units

Claims (3)

【特許請求の範囲】[Claims]

【請求項1】 A method for extracting the temporal change of a region shape in a region-shape temporal-change extraction device (7) that uses images in moving-image frames output from an image input device (1) to extract the temporal change of the shape contour of a closed region between two adjacent frames, the device comprising: thinning processing means (2) for creating a thin-line image by shaving the shape contour of the closed region in a moving-image frame inward from its periphery; feature point extraction means (3) for extracting feature points on the thin line produced by the thinning processing means (2); feature point correspondence means (4) for matching, between two frames, the feature points extracted by the feature point extraction means (3); and calculation means (5) for calculating the movement of the feature points and of the points on the thin line from the matched feature points; wherein, when a point with a known movement exists in the closed region, the calculation means (5) estimates the movement of any other point to be equal to that of the thin-line point reachable from it by the shortest path running inside the closed region.
2. The region-shape temporal-change extraction method according to claim 1, wherein the calculation means (5) calculates the amount of movement of a point inside the closed region and outputs this amount of movement to an image output device (6) as the amount of deformation of the corresponding part of the region.
3. The region-shape temporal-change extraction method according to claim 2, wherein, when the correspondence of feature points between two frames is known, the calculation means (5) estimates the frame-to-frame correspondence of points on the thin-line section bounded by the feature points, and calculates the amount of movement of the points on the thin-line section based on this estimation result.
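One simple way to realize the estimation in claim 3 — matching the interior points of a thin-line section once its endpoint feature points are matched — is to pair points at the same normalized position along the section in each frame. The following sketch assumes the sections are given as ordered point lists; the function name and the nearest-index interpolation are illustrative assumptions, not the patent's disclosed method.

```python
def segment_point_moves(seg_a, seg_b):
    """Estimate per-point movement along a thin-line section.

    seg_a, seg_b -- ordered lists of (x, y) points of the same thin-line
    section in frame t and frame t+1; the endpoints are feature points
    whose correspondence is already known.
    Each point of seg_a is matched to the point of seg_b at the same
    normalized position along the section, and the displacements returned.
    """
    n, m = len(seg_a), len(seg_b)
    moves = []
    for i, (x, y) in enumerate(seg_a):
        # Index of the point at the same normalized arc position in frame t+1
        j = round(i * (m - 1) / (n - 1)) if n > 1 else 0
        bx, by = seg_b[j]
        moves.append((bx - x, by - y))
    return moves
```

A section that shifts rigidly by one pixel thus yields the same displacement vector for every point, while sections that stretch or shrink distribute the correspondence proportionally along their length.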
JP3179457A 1991-07-19 1991-07-19 Hourly change extraction method for area shape Withdrawn JPH0528269A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP3179457A JPH0528269A (en) 1991-07-19 1991-07-19 Hourly change extraction method for area shape

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP3179457A JPH0528269A (en) 1991-07-19 1991-07-19 Hourly change extraction method for area shape

Publications (1)

Publication Number Publication Date
JPH0528269A true JPH0528269A (en) 1993-02-05

Family

ID=16066192

Family Applications (1)

Application Number Title Priority Date Filing Date
JP3179457A Withdrawn JPH0528269A (en) 1991-07-19 1991-07-19 Hourly change extraction method for area shape

Country Status (1)

Country Link
JP (1) JPH0528269A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10728439B2 (en) 2015-12-16 2020-07-28 Nikon Corporation Image-capturing apparatus and motion detection method


Similar Documents

Publication Publication Date Title
US6016148A (en) Automated mapping of facial images to animation wireframes topologies
EP0654749B1 (en) An image processing method and apparatus
US20150253864A1 (en) Image Processor Comprising Gesture Recognition System with Finger Detection and Tracking Functionality
US9600898B2 (en) Method and apparatus for separating foreground image, and computer-readable recording medium
US20010008561A1 (en) Real-time object tracking system
JPH10214346A6 (en) Hand gesture recognition system and method
JPH10214346A (en) Hand gesture recognizing system and its method
JPH0935061A (en) Image processing method
US11159717B2 (en) Systems and methods for real time screen display coordinate and shape detection
WO2013074153A1 (en) Generating three dimensional models from range sensor data
WO2018098862A1 (en) Gesture recognition method and device for virtual reality apparatus, and virtual reality apparatus
JPH0719832A (en) Extracting method for corresponding points of pulirity of images
JPH09153137A (en) Method for recognizing picture
Karabasi et al. A model for real-time recognition and textual representation of malaysian sign language through image processing
JP2700440B2 (en) Article identification system
Elloumi et al. Improving a vision indoor localization system by a saliency-guided detection
JPH05205052A (en) Automatic tracking device
EP0735509A1 (en) Image processing for facial feature extraction
JPH0528269A (en) Hourly change extraction method for area shape
JP2004030408A (en) Three-dimensional image display apparatus and display method
JP2014178909A (en) Commerce system
Fischer et al. Face detection using 3-d time-of-flight and colour cameras
TW202004619A (en) Self-checkout system, method thereof and device therefor
JPH1055446A (en) Object recognizing device
JPH0981737A (en) Three-dimensional object model generating method

Legal Events

Date Code Title Description
A300 Withdrawal of application because of no request for examination

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 19981008