
WO2011058626A1 - Image processing device and slide show display - Google Patents

Image processing device and slide show display

Info

Publication number
WO2011058626A1
Authority
WO
WIPO (PCT)
Prior art keywords
contour
image
unit
pixel
pixels
Prior art date
Application number
PCT/JP2009/069211
Other languages
French (fr)
Japanese (ja)
Inventor
俊輔 高山
晃司 山本
恒 青木
Original Assignee
株式会社 東芝 (Toshiba Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社 東芝 (Toshiba Corporation)
Priority to PCT/JP2009/069211
Publication of WO2011058626A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/60 - Analysis of geometric attributes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/12 - Edge-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20048 - Transform domain processing
    • G06T 2207/20061 - Hough transform

Definitions

  • the present invention relates to an image processing apparatus and a slide show display apparatus using the apparatus.
  • As a technique for detecting an artificial structure such as a building from an image, there is a method that uses line segments in the image (see, for example, Patent Document 1). In this method, an artificial structure is detected based on conditions such as whether there are sets of parallel line segments or intersecting line segments.
  • Patent Document 1 is premised on application to an automatic rendezvous of a spacecraft. For example, when an image of a space station and a planet is input, a region where a space station as an artificial structure exists is extracted.
  • The present invention has been made in view of the above, and aims to provide an image processing apparatus and a slide show display apparatus that can determine an artificial structure without being influenced by surrounding objects, even in a complex image including many objects.
  • An image processing apparatus includes: an acquisition unit that acquires an image including a plurality of pixels; a division unit that divides the image into a plurality of regions based on differences in pixel value between each pixel and its adjacent pixels; an extraction unit that extracts a contour for each region; a selection unit that calculates a polygon rate and an unevenness rate for each contour and selects, from the plurality of regions, regions whose polygon rate is equal to or higher than a first threshold or whose unevenness rate is equal to or lower than a second threshold; and a determination unit that determines an image including a region selected by the selection unit to be an image of an artificial structure.
  • A slide show display device includes the above-described image processing apparatus and a display unit that displays the image of the artificial structure among a plurality of images.
  • According to the image processing device and the slide show display device of the present invention, it is possible to determine an artificial structure without being influenced by surrounding objects even in a complex image including many objects.
  • FIG. 1 is a block diagram of an image processing apparatus according to an embodiment.
  • A block diagram of the calculation unit. A flowchart illustrating an example of the operation of the image processing apparatus according to the embodiment.
  • A diagram showing an example of a slide show created from photographs in which faces appear.
  • A diagram showing an example of a slide show in which photographs of artificial structures are inserted between photographs of people.
  • A diagram illustrating examples of devices that can include a slide show display apparatus with the image processing apparatus according to the embodiment.
  • Hereinafter, an image processing apparatus and a slide show display apparatus will be described in detail with reference to the drawings. In the following embodiments, parts with the same reference numerals perform the same operations, and repeated description is omitted.
  • an object having an artificial and regular structure such as a building, a signboard, or a vehicle is referred to as an artificial structure
  • an object including the artificial structure in an image is referred to as an artificial structure image.
  • the building includes a residence, a building, a castle, a temple, a torii, and the like.
  • Signboards include signs, monuments, framed plaques, and the like.
  • Vehicles include trains, ships, airplanes, cars and the like.
  • Polygon refers to all polygons including triangles and quadrangles.
  • a score indicating the degree to which an artificial structure is included in the image is referred to as a structure score.
  • the image processing apparatus includes an acquisition unit 101, a division unit 102, an extraction unit 103, a selection unit 104, a calculation unit 105, and a determination unit 106.
  • the acquisition unit 101 acquires one or more image frames 151.
  • the dividing unit 102 divides the image frame 151 acquired by the acquiring unit 101 into a plurality of regions based on the pixel values. Details of the dividing unit 102 will be described later with reference to FIG.
  • the extraction unit 103 extracts the outline of each region divided by the division unit 102. Details of the extraction unit 103 will be described later with reference to FIGS.
  • the selection unit 104 selects only contours having a polygon ratio equal to or higher than the first threshold value or an unevenness rate equal to or lower than the second threshold value from the contours extracted by the extraction unit 103.
  • The polygon rate is an index indicating how close the shape of a contour is to a polygon, for example, how close the contour is to a quadrangle. Details of the selection unit 104 will be described later with reference to FIGS.
  • Based on the shape of each contour selected by the selection unit 104, the calculation unit 105 calculates a structural score indicating the likelihood of an artificial structure. Details of the calculation unit 105 will be described later with reference to FIG.
  • The determination unit 106 determines whether the image contains many artificial structures based on the magnitude of the structural score calculated by the calculation unit 105, and outputs a determination result 152.
  • the acquisition unit 101 acquires an image frame 151 including a plurality of pixels. When there are a plurality of images, they are read into the memory one by one.
  • the memory is installed in the acquisition unit 101.
  • When the input is a moving image, the acquisition unit 101 acquires one image from every several frames converted into images.
  • The acquisition unit 101 converts the input image to a specific resolution specified in advance. This conversion uses an image scaling technique such as bicubic interpolation. If the original image is vertically long and the converted image is horizontally long, the original image is rotated by 90° and then scaled.
  • the acquisition unit 101 outputs the converted image frame to the division unit 102.
  • the acquisition unit 101 may be configured to determine whether the input image has been photographed with an inclination, and to correct the inclination of the image when the input image has been inclined.
  • Line segments in the image are detected and classified into a plurality of directions (for example, a total of 8 directions inclined by 22.5 ° from the horizontal direction), and the total length of the line segments is calculated for each direction. If the direction with the largest total line segment length is the horizontal direction or the vertical direction, it is not inclined, and if it is any other direction, it is determined that the direction is inclined.
  • Detecting a line segment is performed by acquiring an edge image by an edge detection method such as the Canny algorithm, and performing a Hough transform on the edge image. If it is tilted, the detected tilt direction is set as the horizontal direction, and the center portion of the image is cut out and rotated as shown in FIG.
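The 8-direction tilt test described above can be sketched as follows. This is a minimal pure-Python illustration that assumes the line segments have already been detected (for example with Canny edge detection followed by a Hough transform); the function name `detect_tilt` and the half-bin rounding are our own choices, not from the patent.

```python
import math

def detect_tilt(segments, n_bins=8):
    """Classify line segments into n_bins direction bins (22.5 deg apart
    for 8 bins), sum segment lengths per bin, and return the dominant
    direction in degrees. The image counts as tilted unless the dominant
    direction is horizontal (0 deg) or vertical (90 deg)."""
    totals = [0.0] * n_bins
    bin_width = 180.0 / n_bins
    for (x1, y1, x2, y2) in segments:
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
        length = math.hypot(x2 - x1, y2 - y1)
        # round to the nearest bin center (half-bin offset before floor division)
        bin_idx = int((angle + bin_width / 2) // bin_width) % n_bins
        totals[bin_idx] += length
    dominant = max(range(n_bins), key=lambda i: totals[i]) * bin_width
    is_tilted = dominant not in (0.0, 90.0)
    return dominant, is_tilted
```

If the image is tilted, the detected direction would then be taken as the new horizontal, and the image center cropped and rotated as the text describes.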
  • the dividing unit 102 divides the image area from the difference between the pixel value of the pixel of the converted image frame and the pixel adjacent to the pixel.
  • The pixel value difference may be, for example, the Euclidean distance between RGB values, HSV values, or luminance values.
  • H represents hue
  • S represents saturation
  • V represents luminance.
  • a numerical difference binarized using the luminance as a threshold value may be used.
  • There are various methods for dividing an image into a plurality of regions, such as the k-means method, the region growing method, and the split-and-merge method; any of them may be used.
  • Here, the split-and-merge method will be described as an example.
  • the target area is divided into two equal parts.
  • the division process is repeated for each partial region until all the regions are uniform.
  • the density value (luminance value, RGB value, HSV value, etc.) of each pixel in the region is quantized, and the number of pixels of each density value is counted. If the ratio of the maximum number of pixels among the density values to the entire area is high, it is determined as a uniform area, and if it is low, it is determined as non-uniform and divided.
  • the dividing unit 102 assigns a different label to each divided area, and creates an area image (area division result) with the area label to which each pixel belongs as the pixel value of the pixel.
  • This region division result is, for example, as shown on the left in FIG.
  • the dividing unit 102 outputs the area division result to the extracting unit 103.
  • An example of the region division result is shown in FIG. Different areas are displayed in different patterns. Each sky, building roof, or wall in the image is divided into separate areas.
  • FIG. 5 is a flowchart illustrating an example of the operation of the extraction unit 103.
  • the extraction unit 103 extracts the outline of each region in the image divided by the division unit 102.
  • the extraction unit 103 performs the processing from step S502 on all the regions divided by the division unit 102 (step S501).
  • One pixel to which the label of the region currently being processed is assigned is searched from the region image (step S502).
  • the region image is scanned sequentially from the upper left pixel, and the first found pixel is stored as the first contour pixel (step S502).
  • contour pixels are obtained along the boundary between the area being processed and another area.
  • A pixel of the region that is adjacent to a pixel having another region label is set as the next contour pixel (step S503).
  • The next contour pixel is determined based on the positional relationship between the immediately preceding contour pixel and the current contour pixel. For example, if the immediately preceding contour pixel is at the upper right of the current contour pixel, the neighboring pixels are scanned counterclockwise, starting from the pixel above the current contour pixel, until a pixel having the region label currently being processed is found. The first pixel found is defined as the next contour pixel. If the current contour pixel is the first contour pixel, there is no preceding contour pixel, so the pixel adjacent to the left of the first contour pixel is treated as the preceding contour pixel.
  • Whether the next contour pixel has the same coordinates as the first contour pixel is then checked (step S504). If it differs, the next contour pixel becomes the current contour pixel and the process returns to step S503; if it is the same, the process proceeds to step S505. The contour of the region is output and the process returns to step S501 (step S505). When all the regions have been processed, the operation ends, and the extracted contour of each region becomes the input to the selection unit 104. An example of the contour extraction result is shown in FIG. The number on each pixel in FIG. 3 is a label indicating the region to which it belongs.
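The boundary-following loop of steps S502 to S505 resembles Moore-neighbor (radial sweep) contour tracing. The sketch below is one common variant of that algorithm, scanning the 8-neighborhood clockwise from the backtrack direction rather than counterclockwise as in the text; the function name and details are our own.

```python
def trace_contour(grid, label):
    """Trace the outer contour of the pixels equal to `label` in a label
    image (list of lists), starting from the first pixel in raster-scan
    order. Returns the ordered list of contour pixel coordinates."""
    H, W = len(grid), len(grid[0])
    start = next((r, c) for r in range(H) for c in range(W) if grid[r][c] == label)
    # 8-neighborhood offsets in clockwise order (screen coords, row grows down),
    # starting from west; the pixel "left of" the start acts as the previous pixel
    offs = [(0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1)]
    contour = [start]
    cur, prev_dir = start, 0  # prev_dir points back at the previous pixel (west)
    while True:
        for i in range(1, 9):  # scan clockwise, starting just after the backtrack
            d = (prev_dir + i) % 8
            r, c = cur[0] + offs[d][0], cur[1] + offs[d][1]
            if 0 <= r < H and 0 <= c < W and grid[r][c] == label:
                prev_dir = (d + 4) % 8  # new backtrack: where we came from
                cur = (r, c)
                break
        else:
            break  # isolated single pixel: no neighbor in the region
        if cur == start:
            break  # closed the loop (step S504)
        contour.append(cur)
    return contour
```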
  • FIG. 6 is a block diagram illustrating a detailed configuration of the selection unit 104.
  • the selection unit 104 includes a first selection unit 601 and a second selection unit 602.
  • The first selection unit 601 calculates the sign-inverted average of the curvature scores of each contour and selects contours for which this value is equal to or greater than a threshold. As a result, only contours consisting of straight lines or gentle curves, which are characteristic of the shapes of artificial structures, are selected.
  • the second selection unit 602 calculates the concavo-convex ratio of each contour, and selects a contour having a concavo-convex ratio of a threshold value or less. Thereby, a distorted contour can be removed.
  • the block diagram of FIG. 6 is merely an example, and the selection units 601 and 602 included in the selection unit 104 are not necessarily all required, but include only a part of the processing, or the order is It may be replaced.
  • The first selection unit 601 receives the contour information 651 extracted by the extraction unit 103 and selects only contours whose sign-inverted average curvature score is equal to or greater than a threshold.
  • The value obtained by inverting the sign of the average curvature corresponds to an example of the polygon rate.
  • For each contour pixel, a curvature score indicating the magnitude of the local bend is calculated from the relationship with the two adjacent contour pixels. The curvature score becomes smaller the closer the contour is to a straight line.
  • The curvature score takes its maximum value of 1 in the sharpest case ((5) in FIG. 7).
  • an average value of curvature scores of all the pixels in the contour is calculated, and a contour whose value obtained by inverting the average curvature score is equal to or greater than a threshold value is selected as the contour of the artificial structure.
  • This threshold is determined experimentally, but here it is 0.7.
  • the eight adjacent pixels of the target pixel 801 are (1) to (8) in order from the upper left.
  • The curvature score of the pixel of interest 801 is set to -1 when the pair of contour pixels immediately before and after it is one of (1)(8), (2)(7), (3)(6), or (4)(5).
  • The curvature score is set to -0.5 when the pair is one of (1)(7), (1)(5), (2)(6), (2)(8), (3)(4), (3)(7), (4)(8), or (5)(6).
  • The curvature score is set to 0 when the pair is one of (1)(6), (1)(3), (2)(4), (2)(5), (3)(8), (4)(7), (5)(7), or (6)(8).
  • The curvature score is set to 0.5 when the pair is one of (1)(2), (2)(3), (3)(5), (5)(8), (8)(7), (7)(6), (6)(4), or (4)(1).
  • The curvature score is set to 1 when the pair is one of (1)(1), (2)(2), (3)(3), (4)(4), (5)(5), (6)(6), (7)(7), or (8)(8).
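The tabulated pairs above map to scores by the turn angle between the incoming step (previous to current pixel) and the outgoing step (current to next pixel): a 0° turn (straight through) scores -1, 45° scores -0.5, 90° scores 0, 135° scores 0.5, and 180° (a reversal) scores 1. A compact sketch of that equivalence, with hypothetical function names and the 0.7 selection threshold from the text:

```python
import math

def curvature_score(prev_px, cur_px, next_px):
    """Score the local bend at cur_px: turn angle / 90 - 1, so that a
    straight continuation gives -1 and a full reversal gives 1."""
    v1 = (cur_px[0] - prev_px[0], cur_px[1] - prev_px[1])  # incoming step
    v2 = (next_px[0] - cur_px[0], next_px[1] - cur_px[1])  # outgoing step
    a1 = math.degrees(math.atan2(v1[1], v1[0]))
    a2 = math.degrees(math.atan2(v2[1], v2[0]))
    turn = abs((a2 - a1 + 180) % 360 - 180)  # absolute turn angle, 0..180
    return turn / 90.0 - 1.0

def select_contour(contour, thresh=0.7):
    """Closed contour: keep it when the sign-inverted mean curvature
    score is at or above the threshold (0.7 in the text)."""
    n = len(contour)
    scores = [curvature_score(contour[i - 1], contour[i], contour[(i + 1) % n])
              for i in range(n)]
    return -(sum(scores) / n) >= thresh
```

For a mostly straight rectangular contour the mean score is close to -1, so the inverted mean is close to 1 and the contour is selected.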
  • the above example is an example of a contour curvature score calculation method, and another method may be used.
  • As another example, the shape of the contour may be approximated as a function, and the curvature (or radius of curvature) calculated from the second derivative at each point of the contour.
  • the selection unit 104 may remove short contours that are equal to or less than a threshold before performing the processing of the first selection unit 601.
  • the threshold value of the contour length to be deleted is experimentally determined according to the resolution of the image.
  • The contour length is the number of pixels belonging to the contour.
  • the first selection unit 601 may calculate a value obtained by inverting the average curvature score from the portion excluding the pixels at the screen edge in the contour. Since the image is generally rectangular, the outline of the region including the screen edge is linear. The shape of the contour of the true object can be evaluated by removing the pixels at the screen edge. As another method, the selection unit 104 may not select an outline including pixels at a certain ratio or more at the edge of the screen.
  • the second selection unit 602 in FIG. 6 will be described.
  • An example of a method for calculating the unevenness ratio indicating the number of unevenness of the contour will be described with reference to FIG.
  • the second selection unit 602 selects only the contour with less unevenness among the contours in the image selected by the first selection unit 601.
  • the smallest rectangle circumscribing each contour is obtained (dotted line in FIG. 9), and “the length of the contour / the length of the contour of the circumscribed rectangle” is calculated as the unevenness ratio.
  • An example of a method for calculating a circumscribed rectangle will be shown.
  • Among the pixels of the contour, the pixel having the smallest x coordinate is found, and its x coordinate is set to x1.
  • Similarly, the maximum x coordinate is set to x2, the minimum y coordinate to y1, and the maximum y coordinate to y2; the rectangle having the four vertices (x1, y1), (x1, y2), (x2, y1), and (x2, y2) is the circumscribed rectangle.
  • the unevenness ratio increases as the unevenness increases.
  • The length of the contour and the length of the contour of the circumscribed rectangle are obtained from the number of pixels belonging to the contour. If two adjacent pixels are diagonal neighbors (in the 8-neighborhood), the step is counted as length 2; otherwise it is counted as length 1.
  • The unevenness ratio is 1 for the left figure, and greater than 1 for the middle and right figures because their contours are distorted.
  • A contour having an unevenness ratio equal to or lower than a threshold is selected as the contour of an artificial structure. This threshold is obtained experimentally; here it is 1/0.75. By keeping only contours whose ratio is 1/0.75 or less, the distorted contours of natural objects are removed and the contours of artificial structures remain.
  • the above-described example is an example of a method for calculating the contour unevenness rate, and another method may be used.
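As one concrete reading of the above, here is a sketch of the unevenness ratio: contour length (diagonal steps counted as 2, as in the text) divided by the circumscribed-rectangle perimeter, with the 1/0.75 threshold. Function names are illustrative.

```python
def contour_length(contour):
    """Length of a closed contour of (row, col) pixels; a diagonal step
    counts as 2 and an axis-aligned step as 1."""
    n = len(contour)
    total = 0
    for i in range(n):
        (r1, c1), (r2, c2) = contour[i], contour[(i + 1) % n]
        total += 2 if (r1 != r2 and c1 != c2) else 1
    return total

def unevenness_ratio(contour):
    """Contour length over the perimeter of the circumscribed rectangle;
    1 for a clean axis-aligned rectangle, larger when distorted."""
    rs = [p[0] for p in contour]
    cs = [p[1] for p in contour]
    rect_perimeter = 2 * ((max(rs) - min(rs)) + (max(cs) - min(cs)))
    return contour_length(contour) / rect_perimeter

def is_artifact_contour(contour, thresh=1 / 0.75):
    """Selected as an artificial-structure contour when the ratio is at
    or below the experimentally chosen threshold (1/0.75 in the text)."""
    return unevenness_ratio(contour) <= thresh
```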
  • The selection unit 104 may instead have the following configuration. A modification of the selection unit 104, denoted 104b, will be described with reference to FIGS.
  • FIG. 10 is a block diagram illustrating a detailed configuration of the selection unit 104b.
  • the selection unit 104b includes a direction classification unit 1001, a corner point calculation unit 1002, an inter-corner curvature calculation unit 1003, and an inter-corner deletion unit 1004.
  • the selection unit 104b calculates the direction of a line segment composed of contour pixels, and calculates a point where the direction changes as a corner point. Subsequently, a value obtained by reversing the average value of curvature between corner points adjacent to each other along the direction of the contour line segment is calculated. The part where this value is less than or equal to the threshold value is deleted from the contour, and the remaining part (the part where the value obtained by inverting the curvature is greater than the threshold value) is used as the input to the calculation unit 105.
  • The direction classification unit 1001 classifies the directions between adjacent pixels in the contour into four directions: - (horizontal), | (vertical), and the two diagonals / and \.
  • the corner point calculation unit 1002 sequentially scans adjacent pixels on the contour as shown in FIG. 11, and calculates a point where the average direction changes as a corner point. However, a large number of corner points are generated in the portion where the average direction changes frequently. Therefore, if the ratio occupied by the maximum direction of several pixels before and after the change in the average direction is low (the ratio is equal to or less than the threshold), the corner point is not set.
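The corner-point detection just described can be sketched as follows. The window of two steps on each side and the 0.6 dominance threshold are illustrative assumptions (the text only says "several pixels" and "a threshold"), and the step classification into the four directions follows the direction classification unit above.

```python
from collections import Counter

def step_dirs(contour):
    """Direction of each step between consecutive pixels of a closed
    contour, classified into '-', '|', '/', '\\' (row grows downward)."""
    out, n = [], len(contour)
    for i in range(n):
        (r1, c1), (r2, c2) = contour[i], contour[(i + 1) % n]
        dr, dc = r2 - r1, c2 - c1
        if dr == 0:
            out.append("-")
        elif dc == 0:
            out.append("|")
        else:
            out.append("/" if dr * dc < 0 else "\\")
    return out

def corner_points(contour, window=2, ratio_thresh=0.6):
    """Indices where the locally dominant (average) direction changes.
    A pixel gets no average direction when the dominant direction in its
    window is not clear enough, so noisy stretches yield no corners."""
    dirs = step_dirs(contour)
    n = len(dirs)
    avg = []
    for i in range(n):
        nb = [dirs[(i + k) % n] for k in range(-window, window + 1)]
        d, cnt = Counter(nb).most_common(1)[0]
        avg.append(d if cnt / len(nb) >= ratio_thresh else None)
    return [i for i in range(n)
            if avg[i] is not None and avg[i - 1] is not None and avg[i] != avg[i - 1]]
```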
  • the inter-corner curvature calculation unit 1003 calculates the average curvature of the contour between corners for each pair of corner points adjacent along the direction of the contour.
  • the average curvature score described above with reference to FIG. 7 is used as the average curvature.
  • The inter-corner deletion unit 1004 deletes the portions of the contour where the sign-inverted average curvature between corners is less than or equal to a threshold.
  • This threshold is determined experimentally, but here it is 0.7.
  • the distorted portion of the outline or the bent portion is deleted, and only the straight or gentle curved portion remains.
  • the contour after the processing is partly interrupted, and this contour is input to the calculation unit 105.
  • FIG. 12 is a block diagram illustrating a detailed configuration of the calculation unit 105.
  • the calculation unit 105 includes a direction classification unit 1201, a perpendicularity score calculation unit 1202, and a structural score calculation unit 1203.
  • the direction classification unit 1201 classifies the direction between adjacent pixels of the contour, and counts the number of pixels in each direction for each contour.
  • The perpendicularity score calculation unit 1202 calculates, as the perpendicularity score of each contour, the ratio of the number of pixels in the direction perpendicular to the dominant direction to the number of pixels in the dominant direction.
  • the structural score calculation unit 1203 calculates the contour score of each contour from the contour length and the perpendicularity score, and outputs the sum of the contour scores of all the contours as the structural score of the entire image.
  • Since the contour score is calculated for each contour, the structural score can be calculated appropriately without being influenced by surrounding objects, even in a complex image including many objects.
  • The direction classification unit 1201 classifies the directions between adjacent pixels of the contour into the four directions -, |, /, and \.
  • Alternatively, the number of pixels in each direction may be obtained by counting, for each pixel, the directions of the several pixels before and after it on the contour, and taking the direction with the largest count as the average direction of that pixel.
  • The direction with the maximum number of pixels is obtained from the four directions -, |, /, and \; let N_max be its pixel count.
  • The number of pixels N_v in the direction perpendicular to that direction is then obtained, and N_v / N_max is output as the perpendicularity score for that contour.
  • When the direction with the maximum number of pixels is |, the perpendicular direction is -; when the direction with the maximum number of pixels is /, the perpendicular direction is \.
  • an artificial structure such as a building contains many square parts, there are many line segments in two directions perpendicular to each other. Therefore, by introducing a perpendicularity score, a score reflecting the characteristics of such an artificial structure can be calculated.
  • The perpendicularity score may also be calculated after correcting the number of pixels: if the direction with the maximum number of pixels among the four directions is / or \, the counts are corrected to account for the greater length of a diagonal step.
  • the structural score calculation unit 1203 calculates a contour score indicating the likelihood of an artificial structure of the shape for each contour.
  • Contour score = contour length × perpendicularity score.
  • the contour score is calculated for all the contours selected by the selection unit 104, and the sum of the contour scores is output as the structural score of the image.
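The direction classification, the perpendicularity score N_v / N_max, and the structural score can be sketched as follows. The diagonal-count correction mentioned above is omitted, and all names are illustrative.

```python
from collections import Counter

def step_direction(p, q):
    """Classify the step between two 8-adjacent pixels into one of four
    directions: - (horizontal), | (vertical), / and \\ (diagonals)."""
    dr, dc = q[0] - p[0], q[1] - p[1]
    if dr == 0:
        return "-"
    if dc == 0:
        return "|"
    return "/" if dr * dc < 0 else "\\"  # row grows downward

def perpendicularity_score(contour):
    """N_v / N_max: pixels in the direction perpendicular to the dominant
    direction, over pixels in the dominant direction."""
    n = len(contour)
    counts = Counter(step_direction(contour[i], contour[(i + 1) % n])
                     for i in range(n))
    dominant, n_max = counts.most_common(1)[0]
    perpendicular = {"-": "|", "|": "-", "/": "\\", "\\": "/"}[dominant]
    return counts.get(perpendicular, 0) / n_max

def structural_score(contours):
    """Sum over selected contours of contour length x perpendicularity
    score; the image-level score thresholded by the determination unit."""
    return sum(len(c) * perpendicularity_score(c) for c in contours)
```

A rectangular contour has equal horizontal and vertical pixel counts, so its perpendicularity score is 1, reflecting the many mutually perpendicular line segments of buildings that the text describes.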
  • the number of pixels excluding the pixels at the edge of the screen in the contour may be used as the contour length of the contour.
  • the determination unit 106 determines that an image in which the structural score output by the calculation unit 105 is equal to or greater than a threshold is an artificial structure image. Since the optimum threshold varies depending on the size of the image, it is obtained experimentally.
  • the configuration of the image processing apparatus may be a form that excludes the calculation unit 105 of FIG.
  • the determination unit 106 determines that an image in which one or more contours are selected by the selection unit 104 is an artificial structure image.
  • the image processing apparatus performs processing from step S1302 onward until processing of all images is completed (step S1301).
  • the dividing unit 102 divides the image into a plurality of areas, and assigns and outputs a separate label for each area (step S1302).
  • processing from step S1304 is performed until processing of all divided areas is completed (step S1303).
  • the extraction unit 103 extracts the contour of the region by obtaining the boundary point between the region and the region outside the region (step S1304).
  • the selection unit 104 determines whether or not the average curvature score and the unevenness ratio of the extracted contour satisfy a preset threshold value (step S1305).
  • If the condition is satisfied, the process proceeds to step S1306; if not, processing of the contour ends and the process returns to step S1303.
  • the calculating unit 105 calculates a contour score from the contour length and the perpendicularity score (step S1306). The contour score calculated by the calculation unit 105 is added to the structural score of the entire image, the contour processing is terminated, and the process proceeds to step S1303 (step S1307).
  • When all areas have been processed in step S1303, the process proceeds to step S1308. If the structural score of the image is equal to or greater than a preset threshold, the determination unit 106 determines that the image is an artificial structure image; otherwise, the process returns to step S1301 (step S1308). When all the images have been processed, the process ends.
  • As described above, according to the present embodiment, the image is divided into regions and a score is calculated for the contour of each divided region. Therefore, it is possible to provide an image processing apparatus that can determine an artificial structure without being influenced by surrounding objects, even in a complex image including many objects.
  • Since the contour score is calculated for each region in the image, it can also be used to grasp where, and at what size, a building appears in the image.
  • FIG. 14 and FIG. 15 show an example of the image group selected for the slide show. In the slide show, five photos are displayed for several seconds in order from the left.
  • FIG. 15 shows an artificial structure photograph inserted between human photographs by a slide show display apparatus using the image processing apparatus of the present embodiment.
  • the photographs of the artificial structure are the second (house) and the fourth (signboard) from the left.
  • the images to be inserted into the slide show are appropriately selected from images determined as artificial structures.
  • the slide show display device includes a display unit that displays an image of an artifact in a plurality of other images.
  • the configuration of the image processing apparatus may not include the determination unit 106 in the processing unit of FIG.
  • When the determination unit 106 is absent, the several images with the highest structural scores output from the calculation unit 105 are selected as the images to be inserted into the slide show.
  • it can be used to classify images taken and stored into categories of artificial structures and other categories, and to identify urban and mountainous areas using aerial photographs.
  • the slide show display device including the image processing device of the present embodiment can be applied to a personal computer, a digital photo frame, a camera, a television, a mobile phone and the like as shown in FIG.
  • the present invention is not limited to the above-described embodiment as it is, and can be embodied by modifying constituent elements without departing from the scope of the invention in the implementation stage.
  • various inventions can be formed by appropriately combining a plurality of components disclosed in the embodiment. For example, some components may be deleted from all the components shown in the embodiment.
  • constituent elements over different embodiments may be appropriately combined.
  • the image processing apparatus and slide show display apparatus can be applied to a personal computer, a digital photo frame, a camera, a television, a mobile phone, and the like.
  • DESCRIPTION OF SYMBOLS 101 ... acquisition unit, 102 ... division unit, 103 ... extraction unit, 104, 104b ... selection unit, 105 ... calculation unit, 106 ... determination unit, 151 ... image frame, 152 ... determination result, 601 ... first selection unit, 602 ... second selection unit, 651, 652 ... contour information, 801 ... pixel of interest, 1001 ... direction classification unit, 1002 ... corner point calculation unit, 1003 ... inter-corner curvature calculation unit, 1004 ... inter-corner deletion unit, 1201 ... direction classification unit, 1202 ... perpendicularity score calculation unit, 1203 ... structural score calculation unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

An image processing device is provided with an acquiring unit (101) for acquiring an image composed of a plurality of pixels, a dividing unit (102) for dividing the image into a plurality of regions on the basis of the difference between the pixel values of each pixel and an adjacent pixel, an extracting unit (103) for extracting the contour of each region, a selecting unit (104) for calculating the polygonality ratio and convexity/concavity ratio of each contour and selecting from among the regions a region whose polygonality ratio is equal to or greater than a first threshold value or whose convexity/concavity ratio is equal to or smaller than a second threshold value, and a determining unit (106) for determining the whole image including the region selected by the selecting unit to be an image of an artificial structure.

Description

Image processing apparatus and slide show display apparatus
The present invention relates to an image processing apparatus and a slide show display apparatus using the apparatus.
As a technique for detecting an artificial structure such as a building in an image, there is a method that uses line segments in the image (see, for example, Patent Document 1). In this method, an artificial structure is detected based on conditions such as whether sets of parallel or intersecting line segments exist.
Japanese Patent No. 3480369
Patent Document 1 presupposes application to automatic rendezvous of spacecraft; for example, when an image containing a space station and a planet is input, the region containing the space station, an artificial structure, is extracted.
However, in a complex image containing many objects, simply extracting artificial structures from the parallelism or orthogonality of nearby line segments has the problem that line-segment relationships with the contours of other nearby objects are also taken into account. In addition, irregular straight lines and curves are not detected.
The present invention has been made in view of the above, and aims to provide an image processing apparatus and a slide show display apparatus that can identify artificial structures, without being influenced by surrounding objects, even in complex images containing many objects.
An image processing apparatus according to the present invention includes: an acquisition unit that acquires an image composed of a plurality of pixels; a division unit that divides the image into a plurality of regions based on differences in pixel value between each pixel and its adjacent pixels; an extraction unit that extracts a contour for each region; a selection unit that calculates a polygonality ratio and an unevenness ratio for each contour and selects, from the plurality of regions, regions whose polygonality ratio is equal to or greater than a first threshold or whose unevenness ratio is equal to or less than a second threshold; and a determination unit that determines that an image containing a region selected by the selection unit is an image of an artificial structure.
A slide show display apparatus according to the present invention includes the above image processing apparatus and a display unit that displays a plurality of images including the image of the artificial structure.
According to the image processing apparatus and slide show display apparatus of the present invention, artificial structures can be identified without being influenced by surrounding objects, even in complex images containing many objects.
  • FIG. 1 is a block diagram of the image processing apparatus of the embodiment.
  • FIG. 2 is a diagram for explaining correction of the tilt of an image that was photographed at a tilt.
  • FIG. 3 is a diagram showing an example of a contour extraction result by the extraction unit.
  • FIG. 4 is a diagram showing an example of a region division result by the division unit.
  • FIG. 5 is a flowchart showing an example of the operation of the extraction unit.
  • FIG. 6 is a block diagram of the selection unit.
  • FIG. 7 is a diagram for explaining how the first selection unit calculates the average curvature.
  • FIG. 8 is a diagram for explaining how the first selection unit assigns curvature scores.
  • FIG. 9 is a diagram for explaining how the second selection unit calculates the unevenness ratio.
  • FIG. 10 is a block diagram of a modification of the selection unit.
  • FIG. 11 is a diagram showing an example of a corner point calculation result by the corner point calculation unit.
  • FIG. 12 is a block diagram of the calculation unit.
  • FIG. 13 is a flowchart showing an example of the operation of the image processing apparatus of the embodiment.
  • FIG. 14 is a diagram showing an example of a slide show created from photographs in which faces appear.
  • FIG. 15 is a diagram showing an example of a slide show in which photographs of artificial structures are inserted between photographs of people.
  • FIG. 16 is a diagram showing an example of devices that can include a slide show display apparatus containing the image processing apparatus of the embodiment.
Hereinafter, an image processing apparatus and a slide show display apparatus according to embodiments of the present invention will be described in detail with reference to the drawings. Note that, in the following embodiments, the same numbered portions are assumed to perform the same operation, and repeated description is omitted.
Hereinafter, an object having an artificial and regular structure such as a building, a signboard, or a vehicle is referred to as an artificial structure, and an object including the artificial structure in an image is referred to as an artificial structure image. Here, the building includes a residence, a building, a castle, a temple, a torii, and the like. Signs include signs, monuments, foreheads, etc. Vehicles include trains, ships, airplanes, cars and the like. Polygon refers to all polygons including triangles and quadrangles. In addition, a score indicating the degree to which an artificial structure is included in the image is referred to as a structure score.
Next, the image processing apparatus according to the present embodiment will be described with reference to FIG. 1.
The image processing apparatus according to the present embodiment includes an acquisition unit 101, a division unit 102, an extraction unit 103, a selection unit 104, a calculation unit 105, and a determination unit 106. The acquisition unit 101 acquires one or more image frames 151. The division unit 102 divides the image frame 151 acquired by the acquisition unit 101 into a plurality of regions based on pixel values; its details are described later with reference to FIG. 4. The extraction unit 103 extracts the contour of each region produced by the division unit 102; its details are described later with reference to FIGS. 3 and 5. The selection unit 104 selects, from the contours extracted by the extraction unit 103, only those whose polygonality ratio is at least a first threshold or whose unevenness ratio is at most a second threshold. The polygonality ratio is an index of how close a contour is to some polygon, for example how close it is to a quadrangle. Details of the selection unit 104 are described later with reference to FIGS. 6 to 11. The calculation unit 105 calculates, from the shape of each contour selected by the selection unit 104, a structural score indicating how likely the image is to contain artificial structures; its details are described later with reference to FIG. 12. The determination unit 106 determines, based on the magnitude of the structural score calculated by the calculation unit 105, whether the image contains many artificial structures, and obtains a determination result 152.
Each part will be described in detail below.
First, the acquisition unit 101 will be described.
The acquisition unit 101 acquires an image frame 151 composed of a plurality of pixels. When there are a plurality of images, they are read into memory one at a time; the memory is located, for example, inside the acquisition unit 101. When the input to the acquisition unit 101 is a video, the acquisition unit 101 converts one frame out of every several frames into an image and acquires it. Next, the acquisition unit 101 converts the input image to a predesignated resolution, using an image scaling method such as bicubic interpolation. If the original image is portrait-oriented and the converted image is landscape-oriented, the original image is rotated by 90° before scaling. The acquisition unit 101 outputs the converted image frame to the division unit 102.
The acquisition unit 101 may also be configured to determine whether the input image was photographed at a tilt and, if so, to correct the tilt. Line segments in the image are detected and classified into a plurality of directions (for example, a total of 8 directions at 22.5° intervals from the horizontal), and the total length of the line segments is computed for each direction. If the direction with the largest total length is horizontal or vertical, the image is judged not to be tilted; otherwise, it is judged to be tilted in that direction.
Line segments are detected by obtaining an edge image with an edge detection method such as the Canny algorithm and applying a Hough transform to the edge image. If the image is tilted, the detected tilt direction is taken as the horizontal direction, and the central portion of the image is cut out and rotated for correction, as shown in FIG. 2.
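As an illustration of the direction-binning step above, the following sketch classifies already-detected line segments into 8 direction classes of 22.5° each and applies the horizontal-or-vertical rule. The segment format (x0, y0, x1, y1) and the nearest-bin rounding are assumptions for illustration, not details from the patent; the actual edge detection and Hough transform are omitted.

```python
import math

def dominant_direction(segments, bins=8):
    """Bin line segments into `bins` direction classes (22.5 deg wide for
    bins=8), summing segment length per class; return the index and angle
    (degrees from horizontal) of the class with the largest total length."""
    width = 180.0 / bins
    totals = [0.0] * bins
    for (x0, y0, x1, y1) in segments:
        angle = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 180.0
        idx = int((angle + width / 2) // width) % bins  # round to nearest bin
        totals[idx] += math.hypot(x1 - x0, y1 - y0)
    best = max(range(bins), key=lambda i: totals[i])
    return best, best * width

def is_tilted(segments, bins=8):
    """The patent's rule: not tilted when the dominant direction is
    horizontal (0 deg) or vertical (90 deg), tilted otherwise."""
    _, angle = dominant_direction(segments, bins)
    return angle not in (0.0, 90.0)
```

For a tilted image, the correction step would then rotate the image so that the dominant direction becomes horizontal.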
Next, the dividing unit 102 in FIG. 1 will be described with reference to FIGS. 3 and 4.
The division unit 102 divides the converted image frame into regions based on the difference between the pixel value of each pixel and those of its adjacent pixels. The pixel-value difference may be, for example, the Euclidean distance between RGB values, HSV values, or luminance values; H denotes hue, S saturation, and V luminance. A difference of values binarized with a luminance threshold may also be used.
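As a minimal illustration of such a pixel-value difference, the following sketch computes the Euclidean distance between two pixel value tuples; treating pixels as plain (R, G, B) tuples is an assumption for illustration.

```python
import math

def pixel_distance(p, q):
    """Euclidean distance between two pixel value tuples, e.g. (R, G, B) or
    (H, S, V), used as the dissimilarity between adjacent pixels."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
```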
There are various methods for dividing an image into a plurality of regions, such as the k-means method, region growing, and split-and-merge; any of them may be used. In this embodiment, the split-and-merge method is described as an example.
First, the entire image is regarded as a single region. If a region is judged not to be uniform, it is split into halves vertically and horizontally. This splitting is repeated for each subregion until all regions are uniform. Various splitting criteria exist; a representative one uses a density histogram: the density values (luminance, RGB, HSV, etc.) of the pixels in the region are quantized, and the number of pixels for each density value is counted. If the most frequent density value accounts for a high proportion of the region, the region is judged uniform; otherwise, it is judged non-uniform and split.
Next, each subregion is merged with a vertically or horizontally adjacent subregion if the merged region satisfies the criterion; the same density-histogram criterion as for splitting is used. Merging is repeated until no more regions can be merged, yielding the final region division result.
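The split phase and the density-histogram uniformity test described above can be sketched as follows. The quantization into 8 levels and the 0.8 uniformity ratio are illustrative choices, not values given in the patent, and the merge phase is omitted.

```python
from collections import Counter

def is_uniform(img, x0, y0, x1, y1, levels=8, ratio=0.8):
    """Density-histogram test on the region [x0, x1) x [y0, y1) of an 8-bit
    grayscale image: quantise pixel values into `levels` bins and call the
    region uniform when the largest bin holds at least `ratio` of its pixels."""
    hist = Counter()
    for y in range(y0, y1):
        for x in range(x0, x1):
            hist[img[y][x] * levels // 256] += 1
    return hist.most_common(1)[0][1] >= ratio * (x1 - x0) * (y1 - y0)

def split(img, x0, y0, x1, y1, out):
    """Split phase: quarter every non-uniform region (halving it vertically
    and horizontally) until all leaf regions are uniform; leaves go to `out`."""
    if (x1 - x0) <= 1 or (y1 - y0) <= 1 or is_uniform(img, x0, y0, x1, y1):
        out.append((x0, y0, x1, y1))
        return
    mx, my = (x0 + x1) // 2, (y0 + y1) // 2
    for (ax, ay, bx, by) in ((x0, y0, mx, my), (mx, y0, x1, my),
                             (x0, my, mx, y1), (mx, my, x1, y1)):
        split(img, ax, ay, bx, by, out)
```

The merge phase would then join adjacent leaves whose union still passes `is_uniform`, repeated until no merge is possible.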
The division unit 102 assigns a distinct label to each divided region and creates a region image (region division result) in which the value of each pixel is the label of the region to which it belongs; this region image looks, for example, like the left side of FIG. 3. The division unit 102 outputs the region division result to the extraction unit 103. An example of a region division result is shown in FIG. 4, where different regions are drawn with different patterns; the sky, building roofs, and walls in the image are divided into separate regions.
Next, the extraction unit 103 in FIG. 1 will be described with reference to FIG. FIG. 5 is a flowchart illustrating an example of the operation of the extraction unit 103.
The extraction unit 103 extracts the contour of each region in the image divided by the division unit 102, performing the processing from step S502 onward for every region (step S501). The region image is scanned from the upper-left pixel to find a pixel carrying the label of the region currently being processed; the first pixel found is stored as the first contour pixel (step S502).
Next, contour pixels are traced along the boundary between the region being processed and the other regions. Among the pixels adjacent to the current contour pixel (4- or 8-neighborhood) that carry the label of the region being processed, a pixel that is itself adjacent to a pixel with a different region label becomes the next contour pixel (step S503). If several pixels satisfy this condition, the next contour pixel is chosen based on the relation between the previous and the current contour pixel. For example, if the previous contour pixel is at the upper right of the current one, the neighbors of the current pixel are scanned counterclockwise, starting from the pixel above it, for a pixel with the current region label, and the first such pixel becomes the next contour pixel. If the current contour pixel is the first contour pixel, there is no previous contour pixel, so the pixel to its left is treated as the previous contour pixel.
It is then checked whether the next contour pixel has the same coordinates as the first contour pixel; if not, the next contour pixel becomes the current one and the process returns to step S503; if so, the process proceeds to step S505 (step S504). The contour of the region is output and the process returns to step S501 (step S505). When all regions have been processed, the operation ends, and the extracted contours of the regions are passed to the selection unit 104. An example of a contour extraction result is shown in FIG. 3; the number on each pixel in FIG. 3 is the label of the region to which it belongs.
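The contour pixels visited by steps S502 to S505 are exactly the pixels of the region that touch a different label. The following simplified sketch collects them without the ordered tracing; treating the image border as a region boundary is an assumption made here for illustration.

```python
def boundary_pixels(labels, target):
    """Return the pixels of region `target` that are 4-adjacent to a pixel
    with a different label (or to the image border), i.e. the contour pixels;
    unlike steps S502-S505, they are collected here in scan order, not traced."""
    h, w = len(labels), len(labels[0])
    out = []
    for y in range(h):
        for x in range(w):
            if labels[y][x] != target:
                continue
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if not (0 <= nx < w and 0 <= ny < h) or labels[ny][nx] != target:
                    out.append((x, y))
                    break
    return out
```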
Next, the selection unit 104 in FIG. 1 will be described with reference to FIG. FIG. 6 is a block diagram illustrating a detailed configuration of the selection unit 104.
The selection unit 104 includes a first selection unit 601 and a second selection unit 602.
The first selection unit 601 computes, for each contour, the sign-inverted average curvature and selects contours for which this value is at least a threshold. This keeps only contours made of straight lines and gentle curves, which are characteristic of artificial structures.
The second selection unit 602 computes the unevenness ratio of each contour and selects contours whose unevenness ratio is at most a threshold. This removes distorted contours.
Note that the block diagram of FIG. 6 shows only one example; the selection units 601 and 602 contained in the selection unit 104 are not both strictly required, so a configuration with only one of them, or with their order swapped, is also possible.
Next, the first selection unit 601 in FIG. 6 will be described. An example of a method for calculating the average value of curvature will be described with reference to FIG.
The first selection unit 601 receives the contour information 651 extracted by the extraction unit 103 and selects only the contours for which the sign-inverted average curvature is at least a threshold. The sign-inverted average curvature is one example of a polygonality ratio. For each pixel in the contour, a curvature score indicating how sharply the contour bends is computed from the relation with the two neighboring contour pixels; the straighter the contour, the smaller the score. If the target pixel and its two neighbors lie on a straight line, the score is -1 ((1) in FIG. 7); if the two neighbors form a near-straight diagonal, -0.5 ((2) in FIG. 7); if they form a right angle, 0 ((3) in FIG. 7); if they form a near-fold diagonal, 0.5 ((4) in FIG. 7); and if the contour folds back at the target pixel so that the two neighbors coincide, 1 ((5) in FIG. 7). Finally, the average of the curvature scores of all pixels in the contour is computed, and contours whose sign-inverted average is at least a threshold are selected as contours of artificial structures. The threshold is determined experimentally; here it is 0.7.
This selects only contours consisting of straight lines or gentle curves. In the example of FIG. 7, the clean rectangle on the left has a sign-inverted average curvature score of 0.88, the middle figure with a gentle curve 0.73, and the irregular figure on the right 0.48, so thresholding selects only the left and middle figures.
The method for assigning the score is described in detail below. As shown in FIG. 8, the eight pixels adjacent to the target pixel 801 are numbered (1) to (8) in order from the upper left.
The curvature score of the target pixel 801 is −1 when the pair of contour pixels before and after it is one of (1)(8), (2)(7), (3)(6), or (4)(5).
The curvature score is −0.5 when the pair is one of (1)(7), (1)(5), (2)(6), (2)(8), (3)(4), (3)(7), (4)(8), or (5)(6).
The curvature score is 0 when the pair is one of (1)(6), (1)(3), (2)(4), (2)(5), (3)(8), (4)(7), (5)(7), or (6)(8).
The curvature score is 0.5 when the pair is one of (1)(2), (2)(3), (3)(5), (5)(8), (8)(7), (7)(6), (6)(4), or (4)(1).
The curvature score is 1 when the pair is one of (1)(1), (2)(2), (3)(3), (4)(4), (5)(5), (6)(6), (7)(7), or (8)(8).
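The five score values above correspond to turn angles of 0°, 45°, 90°, 135°, and 180° between the incoming and outgoing contour steps, so one hedged way to sketch the computation is score = turn/90 − 1, averaged over the contour and sign-inverted. The (x, y) tuple representation of the closed contour is an assumption for illustration.

```python
import math

def curvature_score(prev, cur, nxt):
    """Score in {-1, -0.5, 0, 0.5, 1} from the turn angle between the step
    prev->cur and the step cur->nxt: 0 deg (straight) -> -1, 45 -> -0.5,
    90 (right angle) -> 0, 135 -> 0.5, 180 (fold-back) -> 1, as in FIG. 7."""
    a_in = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
    a_out = math.atan2(nxt[1] - cur[1], nxt[0] - cur[0])
    turn = abs(math.degrees(a_out - a_in)) % 360.0
    if turn > 180.0:
        turn = 360.0 - turn
    return turn / 90.0 - 1.0

def polygonality(contour):
    """Sign-inverted mean curvature score over a closed contour; contours
    scoring at least the threshold (0.7 here) would be kept as candidates."""
    n = len(contour)
    total = sum(curvature_score(contour[i - 1], contour[i], contour[(i + 1) % n])
                for i in range(n))
    return -total / n
```

For a 4×4 axis-aligned square outline, 8 of its 12 contour pixels are straight (score −1) and 4 are right-angle corners (score 0), giving a polygonality of 8/12 ≈ 0.67.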
The above is only one example of how to compute the curvature score of a contour; other methods may be used. For example, the contour shape may be approximated by a function, and the curvature radius and curvature computed from the second derivative at each point of the contour.
Another example of a polygonality ratio is "the area enclosed by the contour / the area of the rectangle circumscribing the contour".
Before running the first selection unit 601, the selection unit 104 may remove short contours whose length is at most a threshold. The contour-length threshold for deletion is determined experimentally according to the image resolution; here, the contour length is the number of pixels belonging to the contour.
The first selection unit 601 may also compute the sign-inverted average curvature score from the contour excluding the pixels on the image border. Since images are generally rectangular, the contour of a region touching the image border is straight; excluding the border pixels allows the true shape of the object contour to be evaluated. Alternatively, contours containing more than a certain proportion of border pixels may simply be excluded from selection by the selection unit 104.
Next, the second selection unit 602 in FIG. 6 will be described. An example of a method for calculating the unevenness ratio, which indicates how uneven the contour is, will be described with reference to FIG. 9.
The second selection unit 602 selects, from the contours selected by the first selection unit 601, only those with little unevenness. The smallest rectangle circumscribing each contour is obtained (the dotted line in FIG. 9), and "contour length / circumscribed-rectangle contour length" is computed as the unevenness ratio. One way to compute the circumscribed rectangle is as follows: among the contour pixels, find the pixel with the smallest x coordinate in the image and call that coordinate x1; similarly, let x2 be the largest x coordinate, y1 the smallest y coordinate, and y2 the largest y coordinate; the rectangle with the four vertices (x1, y1), (x1, y2), (x2, y1), and (x2, y2) is the circumscribed rectangle. With this definition, the more uneven the contour, the larger the unevenness ratio.
The contour length and the circumscribed-rectangle contour length are obtained from the number of pixels belonging to each: a step between diagonally adjacent pixels (8-neighborhood) is counted as length 2, and any other step as length 1. In FIG. 9, the left figure has an unevenness ratio of 1, while the middle and right figures have ratios greater than 1 because their contours are distorted. Contours whose unevenness ratio is at most a threshold are selected as contours of artificial structures. The threshold is determined experimentally; here it is 1/0.75. With this threshold, the shapes of natural objects are removed while the contours of artificial structures remain.
The above is only one example of how to compute the unevenness ratio; other methods may be used. For example, using the perimeter l of the contour and the area S of the region it encloses, the unevenness ratio may be computed as l²/S. In this case as well, the more uneven the contour, the larger the ratio.
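The circumscribed-rectangle version of the unevenness ratio, with a diagonal step counted as length 2 and any other step as length 1, can be sketched as follows; the (x, y) tuple representation of the closed contour is an assumption for illustration.

```python
def unevenness_ratio(contour):
    """'Contour length / circumscribed-rectangle contour length' with a
    diagonal step between 8-neighbours counted as length 2 and any other
    step as length 1; an axis-aligned rectangle therefore scores exactly 1."""
    n = len(contour)
    length = 0
    for i in range(n):
        x0, y0 = contour[i]
        x1, y1 = contour[(i + 1) % n]  # the contour is closed
        length += 2 if (x0 != x1 and y0 != y1) else 1
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    rect = 2 * ((max(xs) - min(xs)) + (max(ys) - min(ys)))
    return length / rect
```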
(Modification of the selection unit 104)
The selection unit 104 may also be configured as described below. A modification of the selection unit 104, denoted 104b, is described with reference to FIGS. 10 and 11. FIG. 10 is a block diagram illustrating the detailed configuration of the selection unit 104b.
The selection unit 104b includes a direction classification unit 1001, a corner point calculation unit 1002, an inter-corner curvature calculation unit 1003, and an inter-corner deletion unit 1004.
The selection unit 104b computes the direction of the line segments formed by the contour pixels and identifies points where the direction changes as corner points. It then computes the sign-inverted average curvature between corner points adjacent along the contour. Portions where this value is at most a threshold are deleted from the contour, and the remaining portions (where the sign-inverted curvature exceeds the threshold) are passed to the calculation unit 105.
 The direction classification unit 1001 classifies the direction between each pair of adjacent pixels in the contour into four directions: ― (horizontal), | (vertical), / (right diagonal), and \ (left diagonal). Then, for each pixel, it counts the pixels of each direction among the several contour pixels before and after it, and takes the direction with the largest count as the average direction of that pixel.
 The corner point calculation unit 1002 scans the adjacent pixels along the contour in order, as shown in FIG. 11, and identifies the points where the average direction changes as corner points. In portions where the average direction changes frequently, however, this would produce an excessive number of corner points. Therefore, a point where the average direction changes is not treated as a corner point when the dominant direction accounts for only a low proportion of the several preceding and following pixels (i.e., the proportion is equal to or less than a threshold).
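The direction classification of unit 1001 and the corner detection of unit 1002 can be sketched as below. This is a hedged sketch, not the patent's implementation: the window size K, the dominance ratio R, and the screen-coordinate convention for the / and \ diagonals are illustrative assumptions.

```python
# Illustrative sketch of direction classification (unit 1001) and corner
# point detection (unit 1002). The contour is an ordered, closed list of
# (x, y) pixels; K and R are assumed example values, not from the patent.
from collections import Counter

def step_direction(p, q):
    dx, dy = q[0] - p[0], q[1] - p[1]
    if dy == 0:
        return '-'                       # horizontal
    if dx == 0:
        return '|'                       # vertical
    return '/' if dx * dy < 0 else '\\'  # diagonals (screen coordinates, y down)

def average_directions(contour, K=3):
    n = len(contour)
    steps = [step_direction(contour[i], contour[(i + 1) % n]) for i in range(n)]
    avg = []
    for i in range(n):
        # majority direction over the K steps before and after pixel i
        window = [steps[(i + j) % n] for j in range(-K, K + 1)]
        direction, count = Counter(window).most_common(1)[0]
        avg.append((direction, count / len(window)))
    return avg

def corner_points(contour, K=3, R=0.6):
    avg = average_directions(contour, K)
    corners = []
    for i in range(len(contour)):
        prev_dir = avg[i - 1][0]
        cur_dir, dominance = avg[i]
        # a corner is where the average direction changes, unless the
        # dominant direction is weak there (frequent direction changes)
        if cur_dir != prev_dir and dominance > R:
            corners.append(i)
    return corners
```

Running this on a small square outline with a one-pixel window recovers the four corners, which is the behavior the text describes for FIG. 11.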
 The inter-corner curvature calculation unit 1003 calculates, for each pair of corner points adjacent along the contour, the average curvature of the contour between them. The average curvature score described above with reference to FIG. 7 is used as the average curvature.
 The inter-corner deletion unit 1004 deletes the portions of the contour where the sign-inverted average curvature between corners is equal to or less than a threshold. The threshold is determined experimentally; here it is set to 0.7. As a result, distorted or sharply bent portions of the contour are removed, and only straight or gently curved portions remain. The processed contour, now partly broken, is passed to the calculation unit 105.
 Next, the calculation unit 105 of FIG. 1 is described with reference to FIG. 12. FIG. 12 is a block diagram showing the detailed configuration of the calculation unit 105.
 The calculation unit 105 includes a direction classification unit 1201, a perpendicularity score calculation unit 1202, and a structural score calculation unit 1203.
 The direction classification unit 1201 classifies the direction between each pair of adjacent pixels of a contour and counts the number of pixels in each direction per contour. The perpendicularity score calculation unit 1202 calculates, as the perpendicularity score, the ratio between the pixel counts of two mutually perpendicular directions of the contour. The structural score calculation unit 1203 calculates a contour score for each contour from its length and perpendicularity score, and outputs the sum of the contour scores over all contours as the structural score of the whole image.
 Because a contour score is calculated per contour, the structural quality can be evaluated appropriately even in a complex image containing many objects, without influence from surrounding objects.
 Next, the operation of the direction classification unit 1201 in FIG. 12 is described. The direction classification unit 1201 classifies the direction between each pair of adjacent pixels in the contour into the four directions ―, |, /, and \, and counts the number of pixels in each direction. The counts may also be based on the average direction described above, that is, for each pixel, the most frequent direction among the several contour pixels before and after it.
 Next, the operation of the perpendicularity score calculation unit 1202 in FIG. 12 is described.
 First, the direction with the largest pixel count among the four directions ―, |, /, and \ is found, and its pixel count is denoted Nmax. Next, the pixel count Nv in the direction perpendicular to that direction is found, and Nv/Nmax is output as the perpendicularity score of the contour. When the direction with the largest pixel count is |, the perpendicular direction is ―; when it is /, the perpendicular direction is \.
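The Nv/Nmax computation just described can be written as a few lines. This is a minimal sketch under the assumption that the per-contour direction counts have already been tallied into a dictionary; the function name is illustrative.

```python
# Illustrative sketch of the perpendicularity score: the ratio Nv / Nmax
# between the pixel count in the direction perpendicular to the dominant
# direction and the count in the dominant direction itself.

PERPENDICULAR = {'-': '|', '|': '-', '/': '\\', '\\': '/'}

def perpendicularity_score(direction_counts):
    # direction_counts: dict mapping '-', '|', '/', '\\' to pixel counts
    dominant = max(direction_counts, key=direction_counts.get)
    n_max = direction_counts[dominant]
    n_v = direction_counts.get(PERPENDICULAR[dominant], 0)
    return n_v / n_max if n_max else 0.0
```

A contour with 40 horizontal and 30 vertical pixels, for example, scores 30/40 = 0.75.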
 An artificial structure such as a building contains many rectangular parts, so it produces many line segments in two mutually perpendicular directions. Introducing the perpendicularity score therefore yields a score that reflects this characteristic of artificial structures.
 When the display aspect ratio (the aspect ratio when shown on screen) differs from the pixel aspect ratio (the aspect ratio of the pixels making up the image), the ratio of pixel counts differs from the ratio of actual lengths, so the pixel count in each direction is corrected before the perpendicularity score is calculated. When the dominant direction among the four is ― or |, the display aspect ratio is m:n, and the pixel aspect ratio is p:q, the pixel count in the ― direction is multiplied by (m×q)/(n×p) before the score is calculated. When the dominant direction is / or \, the original pixel counts are used as they are, because the ratio of the lengths in the / and \ directions does not change with the aspect ratio.
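The aspect-ratio correction above can be sketched as follows. A hedged sketch: the (m×q)/(n×p) factor is taken from the text, while the function name and the dictionary representation of the counts are illustrative assumptions.

```python
# Illustrative sketch of the pixel-count correction for the case where the
# display aspect ratio (m:n) differs from the pixel aspect ratio (p:q).
# Only the horizontal ('-') count is rescaled, and only when the dominant
# direction is horizontal or vertical, as described in the text.

def correct_counts(direction_counts, m, n, p, q):
    counts = dict(direction_counts)
    dominant = max(counts, key=counts.get)
    if dominant in ('-', '|'):
        counts['-'] = counts['-'] * (m * q) / (n * p)
    return counts
```

With a 16:9 display and 4:3 pixels, for example, the horizontal count is multiplied by (16×3)/(9×4) = 4/3; when display and pixel ratios agree, the factor is 1 and the counts are unchanged.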
 Next, the operation of the structural score calculation unit 1203 in FIG. 12 is described.
 The structural score calculation unit 1203 first calculates, for each contour, a contour score indicating how strongly its shape resembles an artificial structure: contour score = contour length × perpendicularity score. Contour scores are calculated for all contours selected by the selection unit 104, and their sum is output as the structural score of the image.
 When calculating the structural score, the number of contour pixels excluding those at the screen edge may be used as the contour length.
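The per-contour and whole-image scoring above reduces to a multiply and a sum. A minimal sketch; the helper names are placeholders standing in for values produced by the earlier stages.

```python
# Illustrative sketch of unit 1203: contour score = length x
# perpendicularity score, and the image's structural score is the sum of
# the contour scores over all contours that survived the selection stage.

def contour_score(length, perpendicularity):
    return length * perpendicularity

def structural_score(selected_contours):
    # selected_contours: iterable of (length, perpendicularity_score) pairs
    return sum(contour_score(l, v) for l, v in selected_contours)
```

Two selected contours of lengths 100 and 50 with perpendicularity scores 0.8 and 0.5, for example, give a structural score of 80 + 25 = 105.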
 Next, the operation of the determination unit 106 in FIG. 1 is described.
 The determination unit 106 determines that an image whose structural score, as output by the calculation unit 105, is equal to or greater than a threshold is an artificial-structure image. The optimum threshold depends on the image size and is determined experimentally.
 Note that the image processing apparatus may also be configured without the calculation unit 105 of FIG. 1. In that case, the determination unit 106 determines that an image in which the selection unit 104 has selected one or more contours is an artificial-structure image.
 Finally, the operation flow of the image processing apparatus of FIG. 1 is described with reference to FIG. 13.
 The image processing apparatus repeats the processing from step S1302 onward until all images have been processed (step S1301). The dividing unit 102 divides the image into a plurality of regions and outputs them, assigning a separate label to each region (step S1302). The processing from step S1304 onward is then repeated until all divided regions have been processed (step S1303). The extraction unit 103 extracts the contour of a region by tracing the boundary points between that region and the regions outside it (step S1304). Next, the selection unit 104 determines whether the average curvature score and the unevenness ratio of the extracted contour satisfy preset thresholds (step S1305). If they do, the process proceeds to step S1306; otherwise, processing of that contour ends and the process returns to step S1303. The calculation unit 105 calculates the contour score from the contour length and the perpendicularity score (step S1306). The calculated contour score is added to the structural score of the whole image, processing of that contour ends, and the process returns to step S1303 (step S1307).
 When all regions have been processed in step S1303, the process proceeds to step S1308. The determination unit 106 judges the image to be an artificial-structure image if its structural score is equal to or greater than a preset threshold, and a non-artificial-structure image otherwise, and the process returns to step S1301 (step S1308). When all images have been processed, the procedure ends.
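The per-image portion of the flow of FIG. 13 (steps S1303 to S1308) can be condensed as below. A hedged sketch: to keep it self-contained, the segmentation and contour-extraction stages (S1302 and S1304) are assumed to have already produced a feature tuple per region contour; the function name and tuple layout are illustrative.

```python
# Illustrative condensation of steps S1303-S1308 of FIG. 13 for one image.
# contour_features: one (avg_curvature_score, unevenness_ratio,
# contour_length, perpendicularity) tuple per extracted region contour,
# assumed to come from the segmentation and extraction stages.

def classify_image(contour_features, curvature_th, unevenness_th, structure_th):
    score = 0.0
    for curv, unev, length, perp in contour_features:        # S1303
        # S1305: keep only contours passing both selection thresholds
        if curv >= curvature_th and unev <= unevenness_th:
            score += length * perp                           # S1306-S1307
    # S1308: threshold on the whole-image structural score
    label = 'artificial' if score >= structure_th else 'non-artificial'
    return label, score
```

With the example thresholds used earlier in the text (curvature 0.7, unevenness 1/0.75), a contour failing either test simply contributes nothing to the image score.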
 According to the image processing apparatus of this embodiment, the image is segmented into regions and a score is calculated for the contour of each segmented region. This makes it possible to provide an image processing apparatus that can identify structured objects without influence from surrounding objects, even in a complex image containing many objects. In addition, because a structural score is calculated per region, the result can also be used to determine where in the image a building appears and at what size.
 The artificial-structure images obtained in this way can be used for the automatic creation of slide shows. By automatically selecting a few dozen images from a large collection of personal photos and videos and playing them as a slide show, memories of trips and events can be enjoyed in a short time. FIGS. 14 and 15 show examples of image groups selected for a slide show; in the slide show, the five photos are displayed for a few seconds each, in order from the left.
 A conventional approach to selecting slide-show images is to pick photos showing close-up faces or many faces, as in FIG. 14. With this approach, photos of the same person often appear in succession, giving the slide show as a whole a monotonous impression.
 In contrast, FIG. 15 shows photos of artificial structures inserted between photos of people by a slide show display apparatus that uses the image processing apparatus of this embodiment. The artificial-structure photos are the second (a house) and fourth (a signboard) from the left. This adds variety to the slide show and also makes it possible to tell where the photos were taken. The images inserted into the slide show are selected as appropriate from the images judged to be artificial structures. The slide show display apparatus includes a display unit that displays the artifact images among the other images.
 When used for slide shows, the image processing apparatus may omit the determination unit 106 of FIG. 1. Without the determination unit 106, the several images with the highest structural scores output by the calculation unit 105 are selected as the images to insert into the slide show.
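The top-scoring selection just described is a simple ranking. A minimal sketch; the pairing of image identifiers with scores and the function name are illustrative assumptions.

```python
# Illustrative sketch of the slide-show variant without the determination
# unit: rank images by structural score and keep the top few for insertion.

def pick_for_slideshow(scored_images, top_n=2):
    # scored_images: iterable of (image_id, structural_score) pairs
    ranked = sorted(scored_images, key=lambda pair: pair[1], reverse=True)
    return [image for image, _score in ranked[:top_n]]
```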
 Other applications include classifying accumulated images into an artificial-structure category and other categories, and distinguishing urban areas from mountainous areas in aerial photographs.
 The slide show display apparatus including the image processing apparatus of this embodiment can also be applied to personal computers, digital photo frames, cameras, televisions, mobile phones, and the like, as shown in FIG. 16.
 Note that the present invention is not limited to the above embodiment as it stands; at the implementation stage, the constituent elements may be modified without departing from the gist of the invention. Various inventions can also be formed by appropriately combining the plurality of constituent elements disclosed in the above embodiment. For example, some constituent elements may be deleted from all those shown in the embodiment, and constituent elements across different embodiments may be combined as appropriate.
 The image processing apparatus and slide show display apparatus of this embodiment can be applied to personal computers, digital photo frames, cameras, televisions, mobile phones, and the like.
DESCRIPTION OF SYMBOLS: 101 ... acquisition unit, 102 ... dividing unit, 103 ... extraction unit, 104, 104b ... selection unit, 105 ... calculation unit, 106 ... determination unit, 151 ... image frame, 152 ... determination result, 601 ... first selection unit, 602 ... second selection unit, 651, 652 ... contour information, 801 ... pixel of interest, 1001 ... direction classification unit, 1002 ... corner point calculation unit, 1003 ... inter-corner curvature calculation unit, 1004 ... inter-corner deletion unit, 1201 ... direction classification unit, 1202 ... perpendicularity score calculation unit, 1203 ... structural score calculation unit.

Claims (6)

  1.  An image processing apparatus comprising:
      an acquisition unit for acquiring an image composed of a plurality of pixels;
      a dividing unit that divides the image into a plurality of regions based on differences in pixel value between each pixel and the pixels adjacent to it;
      an extraction unit for extracting a contour of each region;
      a selection unit that calculates a polygon ratio and an unevenness ratio of the contour and selects, from the plurality of regions, regions whose polygon ratio is equal to or higher than a first threshold or whose unevenness ratio is equal to or lower than a second threshold; and
      a determination unit that determines the entire image including a region selected by the selection unit to be an image of an artifact.
  2.  The image processing apparatus according to claim 1, wherein the selection unit further comprises a calculation unit that calculates a structural score for each contour from the polygon ratio and the unevenness ratio, and
      the determination unit determines that an image with a high structural score for the entire image is an image of an artifact.
  3.  The image processing apparatus according to claim 2, wherein the selection unit selects a region corresponding to a contour for which the sign-inverted average of the curvature of the contour is equal to or greater than a threshold.
  4.  The image processing apparatus according to claim 2, wherein the selection unit selects a region corresponding to a contour for which the ratio of the length of the contour to the length of the smallest rectangle enclosing the region is equal to or less than a threshold.
  5.  The image processing apparatus according to claim 2, wherein the selection unit further comprises:
      a corner point calculation unit that calculates the direction of the line segments formed by the pixels of the contour and calculates the points where the direction changes as corner points; and
      a curvature calculation unit that calculates the average curvature between corner points adjacent along the direction of the contour line segments,
      and wherein only the portions for which the sign-inverted curvature is greater than a threshold are used in calculating the structural score.
  6.  A slide show display apparatus comprising the image processing apparatus according to claim 1 and a display unit that displays the image of the artifact among a plurality of images different from that image.
PCT/JP2009/069211 2009-11-11 2009-11-11 Image processing device and slide show display WO2011058626A1 (en)

Publications (1)

WO2011058626A1 (en) 2011-05-19


Cited By (1)

JP2014186550A (Fujitsu Ltd, priority 2013-03-22, published 2014-10-02): Image processor, image processing method and image processing program

Citations (2)

JPH0229881A (Toshiba Corp, priority 1988-07-20, published 1990-01-31): Image discriminating device
JP2008004123A (Fujifilm Corp, priority 1995-09-13, published 2008-01-10): Specific shape region extraction device and method, specific region extraction device and method, and copy condition decision device and method



Legal Events

121 (EP): the EPO has been informed by WIPO that EP was designated in this application (ref document number 09851260, country EP, kind code A1)
NENP: non-entry into the national phase (ref country code DE)
122 (EP): PCT application non-entry in European phase (ref document number 09851260, country EP, kind code A1)
NENP: non-entry into the national phase (ref country code JP)