
WO2024101874A1 - System and method for supporting service for converting real-time data into 3d object in 3d virtual reality space and selectively fusing same - Google Patents


Info

Publication number
WO2024101874A1
WO2024101874A1 (application PCT/KR2023/017834)
Authority
WO
WIPO (PCT)
Prior art keywords
space
real
digital twin
data
virtual reality
Prior art date
Application number
PCT/KR2023/017834
Other languages
French (fr)
Korean (ko)
Inventor
박승하
최은규
홍충기
Original Assignee
주식회사 안지온
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020230149408A external-priority patent/KR20240067011A/en
Application filed by 주식회사 안지온 filed Critical 주식회사 안지온
Publication of WO2024101874A1 publication Critical patent/WO2024101874A1/en

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to a system and method that supports services for converting real-time data into 3D objects in a 3D virtual reality space and selectively converging them.
  • AR augmented reality
  • IoT Internet of Things
  • because IoT sensors are often not yet installed at the time a city digital twin is produced, unnecessary time is spent on development. Accordingly, there is a need to research IoT event generation systems that serve as virtual signal generation, data transmission, and storage in place of hardware IoT sensors.
  • in the digital twin development stage for initial smart city service design, a simulation method using virtual IoT is required to minimize the waste of time and material cost that building an actual IoT system may incur, depending on the degree of IoT utilization.
  • the purpose of the present invention is to provide a system and method, able to solve the conventional problems above, that supports a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them.
  • a system that supports a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them according to an embodiment of the present invention, provided to solve the above problem, includes: a pre-processing unit that receives video data from a CCTV camera and pre-processes it;
  • an object extraction unit that extracts a real-life object in the video data and the location coordinates of the real-life object;
  • a digital twin space creation unit that generates a digital twin space reflecting the geographic information of the point where the CCTV camera is located; a 3D object conversion unit that converts the real-life object into a 3D object;
  • a coordinate matching unit that matches the real (absolute) coordinates in the video data with the spatial coordinates of the digital twin space; and a data registration unit that projects the 3D object by synchronizing its positional changes in real time within the digital twin space.
  • in one embodiment, the system further includes an object characteristic value granting unit that assigns to the 3D object object characteristic values including at least one of the type, shape, size, color, movement coordinates, and departure coordinates of the real-life object.
  • the device further includes a tracking unit that tracks the movement line of a 3D object moving within the digital twin space, and the 3D object is a moving object.
  • a method of supporting a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them according to an embodiment of the present invention, provided to solve the above problem, includes the steps of: receiving video data from a CCTV camera and pre-processing it; extracting a real-life object and the location coordinates of the real-life object from the video data; creating a digital twin space reflecting the geographic information of the point where the CCTV camera is located; converting the real-life object into a 3D object; matching the absolute coordinates of real space within the video data with the spatial coordinates of the digital twin space; and projecting the 3D object by synchronizing its positional changes in real time within the digital twin space.
  • in one embodiment, the method is characterized by further including the step of assigning, in an object characteristic value granting unit, object characteristic values including at least one of the type, shape, size, color, movement coordinates, and departure coordinates of the real-life object to the 3D object.
  • the method further includes tracking the movement line of the 3D object moving within the digital twin space in a tracking unit.
  • by changing the basic state of CCTV video to an anonymized state to address privacy infringement concerns, and selectively releasing anonymity in exceptional situations such as criminal vehicles, missing persons, wanted criminals, traffic accidents, and emergencies, the invention provides the advantage of generalizing three-dimensional tracking technology combined with 3D models.
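The anonymize-by-default policy described above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the class, field names, and the exception list are illustrative assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical list of situations in which de-anonymization is authorized,
# mirroring the examples given in the text.
AUTHORIZED_EXCEPTIONS = {"stolen_vehicle", "missing_person",
                         "wanted_criminal", "traffic_accident", "emergency"}

@dataclass
class TrackedObject:
    object_id: str
    identity: dict                      # e.g. licence plate, face crop reference
    exceptions: set = field(default_factory=set)

    def view(self):
        """Return the data exposed to the digital twin viewer.

        By default only the anonymous object id is shown; identity
        details are released only under an authorized exception."""
        if self.exceptions & AUTHORIZED_EXCEPTIONS:
            return {"id": self.object_id, **self.identity}
        return {"id": self.object_id}   # anonymized default

car = TrackedObject("obj-17", {"plate": "12GA3456"})
default_view = car.view()               # anonymized
car.exceptions.add("stolen_vehicle")
released_view = car.view()              # selectively de-anonymized
```

The design point is that anonymity is the default state and de-anonymization is an explicit, auditable action, matching the selective-release behavior described in the text.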
  • Figure 1 is a network configuration diagram of a system supporting a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them, according to an embodiment of the present invention.
  • Figure 2 is a hierarchical diagram structuring GIS data and BIM data.
  • Figure 3 is an example of synchronization between a live object in a CCTV image and a 3D object in a CCTV digital twin image.
  • Figure 4 is a flowchart illustrating a method of supporting a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them, according to an embodiment of the present invention.
  • the service for selectively converging and converting real-time data into 3D objects in a 3D virtual reality space is a service that provides information about the real world when implementing a virtual reality (digital twin) space running on a smart device.
  • a virtual reality digital twin
  • geographic points are used to match the azimuth of the real world and the azimuth of the virtual world.
  • the service presented in the present invention for converting real-time data into 3D objects in the 3D virtual reality space and selectively fusing them can be applied to smart cities, which aim to solve urban problems while improving quality of life.
  • cases of smart city introduction are increasing around the world in various fields such as energy, transportation, and healthcare.
  • digital twin technology is in the spotlight for the efficient operation of smart cities.
  • GIS Geographic Information System
  • BIM Building Information Modeling
  • the key point of the present invention, which will be described later, is to commercialize three-dimensional tracking technology combined with 3D models by changing the basic state of CCTV video to an anonymized state to address privacy infringement concerns, and selectively removing anonymity in exceptional situations such as stolen vehicles, missing persons, wanted criminals, traffic accidents, and emergencies.
  • Figure 1 is a network configuration diagram of a system supporting a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them, according to an embodiment of the present invention.
  • Figure 2 is a hierarchical diagram structuring GIS data and BIM data.
  • Figure 3 is an example of synchronization between a live object in a CCTV image and a 3D object in a CCTV digital twin image.
  • a system 100 that supports a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them according to an embodiment of the present invention includes an image pre-processing unit 110, an object extraction unit 120, a digital twin space creation unit 130, a 3D object conversion unit 140, a spatial coordinate matching unit 150, and an image data matching unit 160.
  • the image pre-processing unit 110 may be configured to receive CCTV image data captured by a CCTV camera 10, which is an intelligent camera, and pre-process it into a perspective view with a specific viewing angle.
  • the image preprocessing unit 110 can preprocess CCTV image data to support functions beyond simple streaming of image data, and the present invention can create a 6 degrees of freedom (6DoF) virtual environment using multiple 360° image data.
  • the object extraction unit 120 may be configured to extract a live object in CCTV image data and the location coordinates (absolute coordinates) of the live object.
  • the object extraction unit 120 extracts pixel coordinates for the location of the object in the image data using an instance segmentation algorithm.
  • Image depth can be estimated using the camera pose and ground information in the virtual space of a digital twin built based on street view and a 3D model.
  • the initial position which is the 3D position of the object, can be estimated using the pixel coordinates of the object and the depth value of the corresponding pixel.
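The initial-position step above (pixel coordinates of the object plus the depth of the corresponding pixel) amounts to a standard pinhole back-projection. A minimal sketch follows; the intrinsic matrix and the identity camera pose are illustrative assumptions, not values from the patent.

```python
import numpy as np

def backproject_to_3d(pixel_uv, depth, K, cam_pose):
    """Back-project a pixel with an estimated depth into 3D world
    coordinates using the pinhole camera model.

    pixel_uv : (u, v) pixel coordinates of the object (e.g. the
               bottom-center of its segmentation mask).
    depth    : estimated depth along the camera z-axis, in meters.
    K        : 3x3 camera intrinsic matrix.
    cam_pose : 4x4 camera-to-world transform, assumed known from the
               calibrated CCTV camera registered in the digital twin.
    """
    u, v = pixel_uv
    # Ray direction in camera coordinates, scaled by the depth value.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    p_cam = ray_cam * depth
    # Transform the camera-frame point into world (digital twin) coordinates.
    p_world = cam_pose @ np.append(p_cam, 1.0)
    return p_world[:3]

# Hypothetical intrinsics (focal length 1000 px, principal point at the
# center of a 1920x1080 frame) and an identity camera pose.
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
cam_pose = np.eye(4)
p = backproject_to_3d((960, 540), 12.5, K, cam_pose)
```

A pixel at the principal point with depth 12.5 m lands 12.5 m straight ahead of the camera, which is a quick sanity check on the geometry.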
  • the digital twin space creation unit 130 creates a digital twin space reflecting the geographical information of the point where the CCTV camera 10 is located.
  • the digital twin space generator 130 creates a virtual space based on three-dimensional information about a specific space by replicating the corresponding space using a digital twin technique.
  • three-dimensional information about a specific space may be any one of a blueprint for the space, 3D scanned data, a floor plan, or data generated by actually measuring the specific space.
  • the digital twin space generator 130 can classify GIS (Geographic Information System) data and BIM (Building Information Modeling) data. GIS data provided at the national level contains various information organized by LOD (Level of Detail): all GIS data contains macroscopic shape information together with attribute information such as building site address, use, name, height, number of floors, and area.
  • GIS Geographic Information System
  • BIM Building Information Modeling
  • the digital twin space creation unit 130 uses the digital twin technique to create a virtual space with a view angle taken from the point where the CCTV camera 10 is located.
  • the digital twin space generator 130 places a reconstructed 3D model in the digital twin at the initial position, calculates its difference from the object in the image data, and minimizes that difference.
  • PSO (Particle Swarm Optimization) can be performed to minimize the error.
  • the optimization process iterates, changing the position and rotation of the 3D object (Position/Rotation Refinement) using the PSO algorithm, so that the 6DoF pose, including the position and angle of the object in the image data, can be estimated (final position/rotation) and implemented in the digital twin space.
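A minimal, generic PSO loop of the kind referred to above might look like the sketch below. The swarm parameters and the toy cost function (standing in for the rendered-model-versus-image difference) are illustrative assumptions; a real pose refiner would evaluate the reprojection error of the 3D model against the segmented object.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_refine(cost, x0, bounds, n_particles=30, iters=60):
    """Minimal particle swarm optimization over a pose vector.

    Each particle remembers its personal best and is pulled toward
    the swarm-wide best, iteratively refining position/rotation."""
    dim = len(x0)
    span = bounds[1] - bounds[0]
    pos = x0 + rng.uniform(-span, span, (n_particles, dim)) * 0.5
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest

# Toy cost: squared distance of an [x, y, yaw] pose from a known target,
# standing in for the image-vs-rendered-model difference.
target = np.array([3.0, -1.0, 0.5])
cost = lambda p: float(np.sum((p - target) ** 2))
refined = pso_refine(cost, x0=np.zeros(3), bounds=(-5.0, 5.0))
```

In the described system the cost would be computed per frame, and the refined vector would be the final position/rotation written into the digital twin space.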
  • the digital twin referred to in the present invention is used in various systems and industries such as manufacturing, medicine, transportation, aerospace, and construction, and may be formed based on standardized data: IoT data, 3D objects, and a 3D geographic map.
  • digital twin technology maintains a digital twin model with the identity and consistency of an actual physical model, or of an actual sensor-information model, that can be dynamically analyzed in real time; it is known as a technology that can predict future phenomena of, and control, the actual physical model or actual sensor-information model.
  • the digital twin space creation unit 130 can provide digital representations of architectural and environmental objects using building information modeling (BIM) and geographic information systems (GIS).
  • BIM building information modeling
  • GIS geographic information systems
  • BIM deals with building information and focuses on the microscopic representation of the building itself.
  • GIS deals with spatial information and provides a macroscopic representation of the external environment of a building.
  • Indoor spatial information can be expressed as BIM data
  • outdoor spatial information can be expressed as GIS data.
  • BIM data, like GIS data, has shape information and attribute information, and contains detailed and diverse information for each building element from a microscopic perspective.
  • Each IFC file of a building is a text-based file, and in order to visualize it, numerous string processing operations are performed at the parsing stage, requiring a long loading time.
  • accordingly, the digital twin space creation unit 130 pre-classifies the attribute information into categories such as building, storey, space, and element; the pre-classified data is stored in a database and can be selectively extracted when necessary.
  • IFC Industry Foundation Classes
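The pre-classification into building/storey/space/element buckets could be sketched as below. The IFC type-to-category table and the record layout are hypothetical, and a real system would persist the buckets to database tables rather than an in-memory dict.

```python
from collections import defaultdict

# Hypothetical mapping from parsed IFC entity types to the categories
# named in the text; unknown types fall back to "element".
CATEGORY_BY_IFC_TYPE = {
    "IfcBuilding": "building",
    "IfcBuildingStorey": "storey",
    "IfcSpace": "space",
    "IfcWall": "element",
    "IfcDoor": "element",
}

def preclassify(ifc_records):
    """Group parsed IFC records into building/storey/space/element
    buckets so the viewer can selectively load what it needs instead
    of re-parsing the whole text-based IFC file each time."""
    buckets = defaultdict(list)
    for rec in ifc_records:
        category = CATEGORY_BY_IFC_TYPE.get(rec["type"], "element")
        buckets[category].append(rec)
    return buckets

records = [{"type": "IfcBuilding", "name": "HQ"},
           {"type": "IfcBuildingStorey", "name": "Level 1"},
           {"type": "IfcWall", "name": "W-101"}]
buckets = preclassify(records)
```

The point of the design is to pay the expensive string parsing once, then serve later visualization queries from the classified store.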
  • GIS stores, extracts, manages, and analyzes information by linking topographic information, which spatially expresses location, with non-geometric attribute information, which explains and supplements its form and function, using graphics and database-management functions.
  • geographic data occupying a spatial location and attribute data related to it are integrated and processed.
  • BIM data used in this specification may refer to all information about structures used when implementing BIM, such as roads, bridges, and buildings, and GIS data may collectively refer to information including not only typical geographic features such as rivers and parks but also street lights, crosswalks, parking facilities, IoT (Internet of Things) facilities for measuring air pollution, CCTV, and the like.
  • IOT internet of things
  • GIS and BIM information contain various overlapping information such as building name, longitude and latitude, and address. This information can be used to link GIS and BIM information, and can be selectively extracted and used as needed.
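Linking GIS and BIM records on a shared attribute, as described above, amounts to a simple join. The choice of the address as the key and all field names are assumptions for illustration; the building name or coordinates could serve equally.

```python
def link_gis_bim(gis_records, bim_records):
    """Join GIS and BIM records on an overlapping attribute.

    Here the building address is used as the shared key; each linked
    record keeps its GIS attributes and gains a reference to the
    matching BIM record."""
    bim_by_addr = {b["address"]: b for b in bim_records}
    linked = []
    for g in gis_records:
        b = bim_by_addr.get(g["address"])
        if b is not None:
            linked.append({**g, "bim": b})
    return linked

# Hypothetical records sharing the address key.
gis = [{"address": "1 Example-ro", "height_m": 45, "storeys": 12}]
bim = [{"address": "1 Example-ro", "elements": 3124}]
linked = link_gis_bim(gis, bim)
```

Once linked, either side's attributes can be selectively extracted as needed, which is the behavior the text describes.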
  • the 3D object conversion unit 140 is a component that converts a real-life object into a 3D object, and converts the real-life object into a 3D object of a similar shape to prevent privacy infringement on the real-life object.
  • the spatial coordinate matching unit 150 may be configured to match real (absolute) coordinates in the image data with spatial coordinates of the digital twin space.
  • the image data registration unit 160 may be configured to project the 3D object by synchronizing the positional change of the 3D object in real time within the digital twin space.
  • the system 100, which supports a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them according to an embodiment of the present invention, may further include an object characteristic value granting unit 170 that assigns to the 3D object an object characteristic value including at least one of the type, shape, size, color, movement coordinates, and starting coordinates of the real-life object.
  • the system 100 that supports a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them according to an embodiment of the present invention may further include a tracking unit 180 that tracks the movement line of a 3D object moving within the digital twin space.
  • the tracking unit 180 may be configured to track a 3D object including object characteristic values including at least one of the type, shape, size, color, movement coordinate, and departure coordinate of the real-life object in a time-series manner.
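A time-series tracker of the kind just described might be sketched as follows; the class and field names are hypothetical.

```python
from collections import defaultdict

class Tracker:
    """Sketch of the tracking unit: appends time-stamped positions and
    characteristic values per object so its movement line in the
    digital twin can be replayed in time-series order."""
    def __init__(self):
        self.tracks = defaultdict(list)

    def update(self, object_id, t, position, attrs=None):
        # attrs carries object characteristic values (type, shape,
        # size, color, ...) observed at this timestamp.
        self.tracks[object_id].append(
            {"t": t, "pos": position, "attrs": attrs or {}})

    def movement_line(self, object_id):
        # Sort by timestamp so out-of-order updates still yield a
        # correct time-series movement line.
        return [p["pos"] for p in
                sorted(self.tracks[object_id], key=lambda p: p["t"])]

tracker = Tracker()
tracker.update("car-1", 2, (5.0, 0.0, 1.0))
tracker.update("car-1", 1, (0.0, 0.0, 1.0), attrs={"type": "vehicle"})
line = tracker.movement_line("car-1")
```

Because observations are keyed by the anonymous object id, this structure is compatible with the anonymize-by-default policy described elsewhere in the document.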
  • Figure 4 is a flowchart illustrating a method of supporting a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them, according to an embodiment of the present invention.
  • in the method (S700) of supporting a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them, the image pre-processing unit 110 first receives image data from the CCTV camera 10 and pre-processes the CCTV image (S710).
  • the object extraction unit 120 extracts the real-life object and the location coordinates of the real-life object from the preprocessed image data (S720).
  • the object extraction unit 120 is a component that extracts the location coordinates (absolute coordinates) of the real-life object and the real-life object in the CCTV image data, and uses an instance segmentation algorithm to extract the pixel coordinates of the location of the object in the image data.
  • image depth can be estimated using the camera pose and ground information in the virtual space of a digital twin built based on street view and a 3D model.
  • the initial position which is the 3D position of the object, can be estimated using the pixel coordinates of the object and the depth value of the corresponding pixel.
  • the digital twin space creation unit 130 creates a digital twin space reflecting the geographical information of the point where the CCTV camera 10 is located (S730).
  • the digital twin space generator 130 can classify GIS (Geographic Information System) data and BIM (Building Information Modeling) data. GIS data provided at the national level contains various information organized by LOD (Level of Detail): all GIS data contains macroscopic shape information together with attribute information such as building site address, use, name, height, number of floors, and area.
  • the digital twin space creation unit 130 uses the digital twin technique to create a virtual space with a view angle taken from the point where the CCTV camera 10 is located.
  • the digital twin space generator 130 places a reconstructed 3D model in the digital twin at the initial position, calculates its difference from the object in the image data, and minimizes that difference.
  • PSO (Particle Swarm Optimization) can be performed to minimize the error.
  • the optimization process iterates, changing the position and rotation of the 3D object (Position/Rotation Refinement) using the PSO algorithm, so that the 6DoF pose, including the position and angle of the object in the image data, can be estimated (final position/rotation) and implemented in the digital twin space.
  • the digital twin referred to in the present invention is used in various systems and industries such as manufacturing, medicine, transportation, aerospace, and construction, and may be formed based on standardized data: IoT data, 3D objects, and a 3D geographic map.
  • the 3D object conversion unit 140 converts the real-life object into a 3D object (S740), and the coordinate matching unit 150 matches the real (absolute) coordinates in the image data with the spatial coordinates of the digital twin space (S750).
  • the coordinate matching unit 150 sets a specific, invariant geographic point in real space as the absolute coordinates and, based on these absolute coordinates, matches the spatial coordinates of real space with the spatial coordinates of the digital twin space, i.e., the virtual space.
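Anchoring both coordinate systems at one invariant geographic point reduces the matching to a translation (plus an optional scale). The sketch below assumes, for simplicity, that the two frames share their axis orientation; a full implementation would also estimate a rotation. The anchor values are hypothetical.

```python
import numpy as np

def make_world_to_twin(anchor_world, anchor_twin, scale=1.0):
    """Build a mapping from real-world absolute coordinates to digital
    twin coordinates, anchored at one invariant geographic point.

    Any point is expressed as an offset from the real-world anchor and
    re-expressed relative to the same anchor inside the twin."""
    anchor_world = np.asarray(anchor_world, dtype=float)
    anchor_twin = np.asarray(anchor_twin, dtype=float)

    def transform(p_world):
        offset = np.asarray(p_world, dtype=float) - anchor_world
        return anchor_twin + scale * offset

    return transform

# Hypothetical anchor: a fixed landmark at world (500, 200, 0) mapped
# to the twin's origin.
to_twin = make_world_to_twin(anchor_world=(500.0, 200.0, 0.0),
                             anchor_twin=(0.0, 0.0, 0.0))
twin_pt = to_twin((510.0, 205.0, 2.0))
```

With the mapping fixed once, every per-frame object position from the CCTV pipeline can be synchronized into the twin in real time (step S760).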
  • the image data registration unit 160 projects or registers the 3D object by real-time synchronizing the change in position of the 3D object within the digital twin space (S760).
  • a method (S700) of supporting a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them may further include assigning, in an object characteristic value granting unit, object characteristic values including at least one of the type, shape, size, color, movement coordinates, and departure coordinates of the real-life object to the 3D object.
  • a method (S700) of supporting a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them according to an embodiment of the present invention may further include tracking, in the tracking unit 180, the movement line of the 3D object moving within the digital twin space.
  • by changing the basic state of CCTV video to an anonymized state to address privacy infringement concerns, and selectively de-anonymizing it in exceptional situations such as criminal vehicles, missing persons, wanted criminals, traffic accidents, and emergencies, the method provides the advantage of generalizing three-dimensional tracking technology combined with 3D models.
  • devices and components described in the embodiments may be implemented using, for example, a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), or a microprocessor.
  • ALU arithmetic logic unit
  • FPGA field programmable gate array
  • PLU programmable logic unit
  • the processing device may execute an operating system (OS) and one or more software applications running on the operating system. Additionally, a processing device may access, store, manipulate, process, and generate data in response to the execution of software.
  • OS operating system
  • although a single processing device may be described as being used, those skilled in the art will understand that a processing device may include multiple processing elements and/or multiple types of processing elements.
  • a processing device may include a plurality of processors or one processor and one controller. Additionally, other processing configurations, such as parallel processors, are possible.
  • software may include a computer program, code, instructions, or a combination of one or more of these, and may configure a processing device to operate as desired or may command the processing device independently or collectively.
  • software and/or data may be permanently or temporarily embodied in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or in a transmitted signal wave, in order to be interpreted by a processing device or to provide instructions or data to a processing device.
  • Software may be distributed over networked computer systems and stored or executed in a distributed manner.
  • Software and data may be stored on one or more computer-readable recording media.
  • the method according to the embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium.
  • the computer-readable medium may include program instructions, data files, data structures, etc., singly or in combination.
  • Program instructions recorded on the medium may be specially designed and configured for the embodiment or may be known and available to those skilled in the art of computer software.
  • examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape, optical media such as CD-ROMs and DVDs, and magneto-optical media such as floptical disks.
  • program instructions include machine language code, such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter, etc.
  • the hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method for supporting a service for converting real-time data into a 3D object in a 3D virtual reality space and selectively fusing same according to an embodiment of the present invention comprises the steps of: receiving image data from a CCTV camera and preprocessing same; extracting the position of a real object in the image data; generating a digital twin space reflecting geographic information of a point where the CCTV camera is located; converting the real object into a 3D object; matching real (absolute) coordinates in the image data with spatial coordinates in the digital twin space; and synchronizing and reflecting the position change of the 3D object in the digital twin space in real time.

Description

3D 가상현실 공간에 실시간 데이터를 3D 오브젝트로 변환하여 선택적으로 융합처리하기 위한 서비스를 지원하는 시스템 및 방법A system and method for supporting services for selectively fusion processing by converting real-time data into 3D objects in 3D virtual reality space
본 발명은 3D 가상현실 공간에 실시간 데이터를 3D 오브젝트로 변환하여 선택적으로 융합처리하기 위한 서비스를 지원하는 시스템 및 방법에 관한 것이다.The present invention relates to a system and method that supports services for converting real-time data into 3D objects in a 3D virtual reality space and selectively converging them.
기존의 증강현실(AR) 기술은 엔터테인먼트를 중심으로 한 연구에 주력되어 왔다. 도시 공간 문제를 연구하고 스마트 도시를 건설하기 위해 도시에서 생성되는 다양한 대규모 데이터를 수집 및 분석하는 노력이 이어지고 있다. Existing augmented reality (AR) technology has been focused on research centered on entertainment. Efforts are continuing to collect and analyze a variety of large-scale data generated in cities to study urban spatial issues and build smart cities.
이러한 시도의 한 예로는 정보를 수집하고 광범위한 도시에 사물인터넷(IoT)을 설정하여 다양한 도시현상을 유용한 자료로 변환하는 것이다.One example of such an attempt is to collect information and set up the Internet of Things (IoT) in a wide range of cities to transform various urban phenomena into useful data.
이와 같이 수집된 정보를 시각화하는 방법으로, 다양한 도시 서비스를 위해 증강현실을 이용한 데이터 시각화가 제공될 것이다.As a way to visualize the information collected in this way, data visualization using augmented reality will be provided for various city services.
한편, 스마트 서비스를 보급하려는 노력에도 불구하고, 도시 관리자를 위한 증강현실 시각화 도구의 개발은 여전히 미흡하다. 스마트시티 연구자가 도시서비스 설계자가 기존 IoT 데이터를 활용하기 어려운 것은 데이터를 실시간으로 수집 및 송신할 수 있는 시스템이 갖춰져 있지 않거나, IoT 시스템이 갖춰져 있어도 다른 기관 운영진으로부터 IoT 데이터 접근 허가를 받기 어렵기 때문이다.Meanwhile, despite efforts to disseminate smart services, the development of augmented reality visualization tools for city managers is still insufficient. The reason it is difficult for smart city researchers and city service designers to utilize existing IoT data is because they do not have a system in place to collect and transmit data in real time, or even if an IoT system is in place, it is difficult to obtain permission to access IoT data from the management of other organizations. am.
또한, 스마트시티의 수요증가로 도시 디지털 트윈을 제작할 시점에 IoT 센서들이 구비되어 있지 않아 개발에 불필요한 시간 소요가 발생된다. 이에 따라서, 하드웨어 IoT 센서들을 대신하여 가상의 신호 발생, 데이터 전송 및 저장 역할을 할 IoT 이벤트 생성 시스템 연구 필요하다. In addition, due to the increasing demand for smart cities, IoT sensors are not available at the time of producing city digital twins, resulting in unnecessary time spent on development. Accordingly, there is a need to research IoT event generation systems that will serve as virtual signal generation, data transmission, and storage instead of hardware IoT sensors.
특히, 초기 스마트시티 서비스 디자인을 위한 디지털 트윈 개발 단계에서 실제 IoT 시스템을 구축할 때 IoT의 활용도에 따라 발생할 수 있는 시간, 물질적 비용의 낭비를 최소화하기 위해 가상 IoT를 사용하여 시뮬레이션 하는 방법이 요구된다.In particular, a simulation method using virtual IoT is required to minimize the waste of time and material costs that may arise depending on the utilization of IoT when building an actual IoT system in the digital twin development stage for initial smart city service design. .
[선행기술문헌][Prior art literature]
[비특허문헌][Non-patent literature]
Nguyen D., Meixner G.: Comparison User Engagement of Gamified and Non-gamified Augmented Reality Assembly Training Advances in Agile and User-Centred Software Engineering. pp. 142-152. Springer International Publishing (2020)Nguyen D., Meixner G.: Comparison User Engagement of Gamified and Non-gamified Augmented Reality Assembly Training Advances in Agile and User-Centred Software Engineering. pp. 142-152. Springer International Publishing (2020)
본 발명이 해결하고자 하는 과제는 종래의 문제점을 해결할 수 있는 3D 가상현실 공간에 실시간 데이터를 3D 오브젝트로 변환하여 선택적으로 융합처리하기 위한 서비스를 지원하는 시스템 및 방법을 제공하는 데 그 목적이 있다.The purpose of the present invention is to provide a system and method that supports services for selectively fusion processing by converting real-time data into 3D objects in a 3D virtual reality space that can solve conventional problems.
상기 과제를 해결하기 위한 본 발명의 일 실시예에 따른 3D 가상현실 공간에 실시간 데이터를 3D 오브젝트로 변환하여 선택적으로 융합처리하기 위한 서비스를 지원하는 시스템은 CCTV 카메라에서 영상 데이터를 수신하여 전처리하는 전처리부; 상기 영상 데이터 내 실사 객체 및 상기 실사 객체의 위치좌표를 추출하는 객체 추출부; 상기 CCTV 카메라가 위치한 지점의 지리정보 반영된 디지털 트윈(Digital Twin) 공간을 생성하는 디지털 트윈공간 생성부; 상기 실사 객체를 3D 오브젝트로 변환하는 3D 오브젝트 변환부; 상기 영상 데이터 내의 현실(절대)좌표와 상기 디지털 트윈(Digital Twin) 공간의 공간좌표를 매칭하는 좌표 매칭부; 및 상기 디지털 트윈(Digital Twin) 공간 내에 상기 3D 오브젝트의 위치변화를 실시간 동기화하여 상기 3D 오브젝트를 투영하는 데이터 정합부를 포함한다. A system that supports a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively converging them according to an embodiment of the present invention to solve the above problem is a pre-processing system that receives video data from CCTV cameras and pre-processes them. wealth; an object extraction unit that extracts a live-action object in the image data and the location coordinates of the live-action object; A digital twin space creation unit that generates a digital twin space reflecting geographical information of the point where the CCTV camera is located; a 3D object conversion unit that converts the photorealistic object into a 3D object; A coordinate matching unit that matches real (absolute) coordinates in the image data with spatial coordinates of the digital twin space; and a data registration unit that projects the 3D object by synchronizing the positional change of the 3D object in real time within the digital twin space.
In one embodiment, the system further includes an object characteristic value assignment unit that assigns object characteristic values, including at least one of the type, shape, size, color, movement coordinates, and starting coordinates of the real-world object, to the 3D object.

In one embodiment, the system further includes a tracking unit that tracks the movement path of a 3D object moving within the digital twin space, the 3D object being a moving object.
To solve the above problem, a method of supporting a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them, according to an embodiment of the present invention, includes: receiving video data from a CCTV camera and preprocessing it; extracting a real-world object and its position coordinates from the video data; creating a digital twin space reflecting the geographic information of the location where the CCTV camera is installed; converting the real-world object into a 3D object; matching the absolute coordinates of the real space in the video data with the spatial coordinates of the digital twin space; and projecting the 3D object into the digital twin space while synchronizing its position changes in real time.
In one embodiment, the method further includes assigning, in an object characteristic value assignment unit, object characteristic values including at least one of the type, shape, size, color, movement coordinates, and starting coordinates of the real-world object to the 3D object.

In one embodiment, the method further includes tracking, in a tracking unit, the movement path of a 3D object moving within the digital twin space.
According to the system and method for supporting a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them according to an embodiment of the present invention, the default state of video is changed to an anonymized state to address CCTV privacy concerns, and anonymity is selectively lifted for exceptional situations such as criminal vehicles, missing persons, wanted criminals, traffic accidents, and emergencies, thereby enabling the widespread adoption of three-dimensional tracking technology combined with 3D models.
Figure 1 is a network configuration diagram of a system supporting a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them according to an embodiment of the present invention.

Figure 2 is a hierarchy diagram structuring GIS data and BIM data.

Figure 3 is an example of synchronizing a real-world object in a CCTV image with its 3D object in the CCTV digital twin image.

Figure 4 is a flowchart describing a method of supporting a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them according to an embodiment of the present invention.
The advantages and features of the present invention, and the methods of achieving them, will become clear with reference to the embodiments described in detail below together with the accompanying drawings. The present invention is not, however, limited to the embodiments disclosed below and may be implemented in various different forms; these embodiments are provided only so that the disclosure of the present invention is complete and the scope of the invention is fully conveyed to those of ordinary skill in the art to which the present invention pertains, and the present invention is defined only by the scope of the claims.

The terminology used herein is for describing the embodiments and is not intended to limit the present invention. In this specification, singular forms include plural forms unless the context clearly indicates otherwise. As used herein, "comprises" and/or "comprising" do not exclude the presence or addition of one or more other components, steps, operations, and/or elements beyond those mentioned.

Unless otherwise defined, all terms used in this specification (including technical and scientific terms) have the meanings commonly understood by those of ordinary skill in the art to which the present invention pertains. In addition, terms defined in commonly used dictionaries are not to be interpreted ideally or excessively unless clearly and specifically defined.

Preferred embodiments of the present invention will now be described in more detail. The same reference numerals are used for the same components in the drawings, and duplicate descriptions of the same components are omitted.
Hereinafter, a system and method for supporting a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them according to an embodiment of the present invention will be described in more detail with reference to the accompanying drawings.
First, when implementing a virtual reality (digital twin) space running on a smart device, the service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them according to an embodiment of the present invention matches the azimuth of the real world with the azimuth of the virtual world by using geographic points (markers) as a means of aligning the orientation of the real world with that of the virtual world.

After matching the azimuths of the real space and the virtual space, the service converts objects moving in real time in the real space into 3D objects and projects them into the virtual space, that is, it superimposes the 3D objects onto each video frame of the real space; in this way the service tracks and matches real-time movement between the virtual and real worlds while resolving the privacy issues of personal information through selective fusion.
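The marker-based azimuth alignment described above can be sketched as follows. This is a minimal two-dimensional illustration assuming a single geographic marker whose bearing is known in both the real and virtual worlds; the function names are hypothetical and are not part of the claimed system.

```python
import math

def azimuth_offset(real_azimuth_deg, virtual_azimuth_deg):
    """Signed rotation (degrees, in (-180, 180]) that aligns the real-world
    bearing of a geographic marker with its bearing in the virtual space."""
    return (virtual_azimuth_deg - real_azimuth_deg + 180.0) % 360.0 - 180.0

def rotate_xy(x, y, angle_deg):
    """Rotate a 2-D ground-plane coordinate by the alignment offset."""
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# Marker seen at bearing 350 degrees in the real world, 10 degrees in the twin.
offset = azimuth_offset(350.0, 10.0)
assert offset == 20.0
```

Once the offset is known, every real-space ground coordinate can be rotated by it before being placed in the twin, so both spaces share one orientation.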
For reference, the service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them presented in the present invention can be applied to smart cities. A smart city is an urban management framework that solves urban problems while improving quality of life, and smart city deployments are increasing worldwide in various fields such as energy, transportation, and healthcare. Along with this continued interest in smart cities, corresponding technological advances are required to solve various urban problems, and digital twin technology is drawing attention for the efficient operation of smart cities based on the vast amounts of data being collected.

Building a city-level digital twin for general-purpose smart city services requires a GIS (Geographic Information System) containing geographic information from a macroscopic perspective and BIM (Building Information Modeling) containing detailed information on buildings and urban facilities from a microscopic perspective; accordingly, much effort is being devoted to integrating them into an urban management system.
Therefore, the key point of the present invention, described below, is to use the proposed service to change the default state of video to an anonymized state in response to CCTV privacy concerns, and to selectively lift anonymity for exceptional situations such as criminal vehicles, missing persons, wanted criminals, traffic accidents, and emergencies, thereby making three-dimensional tracking technology combined with 3D models widely usable.
Figure 1 is a network configuration diagram of a system supporting a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them according to an embodiment of the present invention, Figure 2 is a hierarchy diagram structuring GIS data and BIM data, and Figure 3 is an example of synchronizing a real-world object in a CCTV image with its 3D object in the CCTV digital twin image.
Referring to Figures 1 to 3, the system 100 supporting a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them according to an embodiment of the present invention includes an image preprocessing unit 110, an object extraction unit 120, a digital twin space creation unit 130, a 3D object conversion unit 140, a spatial coordinate matching unit 150, and an image data registration unit 160.
The image preprocessing unit 110 receives the CCTV video data captured by the CCTV camera 10, an intelligent camera, and preprocesses it into a perspective view with a specific field of view.

The image preprocessing unit 110 can also preprocess the CCTV video data to support functions beyond simple streaming video; the present invention can build a six-degrees-of-freedom (6DoF) virtual environment using multiple 360° videos.

The object extraction unit 120 extracts real-world objects and their position coordinates (absolute coordinates) from the CCTV video data.
The object extraction unit 120 extracts the pixel coordinates of an object's location in the video data using an instance segmentation algorithm, and can perform depth estimation using the camera pose and ground information in the virtual space of a digital twin built on street views and 3D models. The initial position, i.e., the three-dimensional position of the object, can then be estimated from the object's pixel coordinates and the depth value of the corresponding pixel.
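The step of recovering an initial 3D position from a segmented pixel and its estimated depth can be illustrated with a pinhole camera model. This is a simplified sketch assuming known intrinsics (fx, fy, cx, cy); the actual system additionally uses the camera pose and ground information of the digital twin.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Recover a camera-space 3-D point from a pixel (u, v) and its
    estimated depth, using pinhole intrinsics (fx, fy, cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point at 5 m depth lies on the optical axis.
assert backproject(320, 240, 5.0, 800.0, 800.0, 320.0, 240.0) == (0.0, 0.0, 5.0)
```

The resulting camera-space point would then be transformed by the camera pose to obtain the object's initial position in twin coordinates.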
The digital twin space creation unit 130 creates a digital twin space reflecting the geographic information of the location where the CCTV camera 10 is installed.

Here, the digital twin space creation unit 130 replicates a specific space as a virtual space using digital twin techniques, based on three-dimensional information about that space.

For example, the three-dimensional information about a specific space may be any one of blueprints of the space, 3D scan data, floor plans, or data generated by surveying the space.
The digital twin space creation unit 130 can classify GIS (Geographic Information System) data and BIM (Building Information Modeling) data. GIS data provided at the national level contains various information according to its level of detail (LOD); all GIS data carries macroscopic shape information along with various attribute information such as building site address, use, name, height, number of floors, and area.

The digital twin space creation unit 130 uses digital twin techniques to create a virtual space with the view angle captured at the location of the CCTV camera 10.
In addition, based on the initial position, the digital twin space creation unit 130 uses the 3D model reconstruction placed in the digital twin to calculate the difference from the object in the video data; through optimization that minimizes this difference, it estimates geographic data including the object's position and angle and implements the estimated geographic data in the digital twin space.

It can also use a Particle Swarm Optimization (PSO) algorithm to calculate the difference between the object outline of the estimate generated by projecting the 3D object of the 3D model at the initial position and the object outline in the video data, performing optimization that minimizes the error.

Furthermore, by iterating position/rotation refinement of the 3D object with the PSO algorithm, the optimization process estimates the 6DoF pose (final position/rotation), including the position and angle of the object in the video data, and implements it in the digital twin space.
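The PSO-based position/rotation refinement can be sketched as below, with a toy quadratic cost standing in for the outline mismatch between the projected 3D object and the detected object. The swarm parameters and function names are illustrative assumptions, not the patented implementation.

```python
import random

def pso_minimize(cost, init, iters=100, particles=20, seed=0):
    """Minimal particle swarm: refines a pose vector (e.g. x, y, yaw)
    so that cost(pose), standing in for the outline mismatch between
    the projected 3D object and the detected object, is minimized."""
    rng = random.Random(seed)
    dim = len(init)
    pos = [[init[d] + rng.uniform(-1.0, 1.0) for d in range(dim)]
           for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]           # each particle's best pose so far
    gbest = min(pbest, key=cost)[:]       # swarm-wide best pose
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

# Toy stand-in for the outline error, minimized at pose (2.0, -1.0, 0.5).
outline_error = lambda p: ((p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
                           + (p[2] - 0.5) ** 2)
refined = pso_minimize(outline_error, [0.0, 0.0, 0.0])
assert outline_error(refined) < 0.05
```

In the described system the cost would instead render the 3D model at the candidate pose and compare its silhouette against the segmented outline in the CCTV frame.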
The digital twin referred to in the present invention is used in various systems and industries such as manufacturing, healthcare, transportation, aerospace, and construction, and may be built on IoT data, 3D objects, and a 3D geographic map using standardized data.

For reference, digital twin technology is known as a technology that uses a digital twin model, which maintains identity and consistency with a real physical model or real sensor information model capable of real-time dynamic analysis, to predict future phenomena of that real model and thereby control it.
The digital twin space creation unit 130 can provide digital representations of architectural and environmental objects using Building Information Modeling (BIM) and Geographic Information Systems (GIS).

For example, BIM deals with building information and focuses on a microscopic representation of the building itself, while GIS deals with spatial information and provides a macroscopic representation of the building's external environment.

Spatial information for 3D modeling can be broadly divided into indoor and outdoor spatial information; in general, indoor spatial information is expressed as BIM data and outdoor spatial information as GIS data.

Specifically, BIM data, like GIS data, has shape and attribute information, and contains detailed, diverse information for each building element from a microscopic perspective. Each building's IFC file is text-based, and visualizing it requires numerous string-processing operations during parsing, resulting in long loading times.
Therefore, for a city-scale digital twin, the digital twin space creation unit 130 pre-classifies the second attribute information by scope, such as building, storey, space, and element; the pre-classified data is stored in a database and can be selectively extracted when needed.

For reference, IFC (Industry Foundation Classes) is a standard format used in the architecture and construction industry to provide interoperability between different software applications.
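The scope-based pre-classification above (building, storey, space, element) can be sketched as below so that city-scale queries pull only what they need instead of re-parsing whole IFC files. The record fields are illustrative and do not follow the real IFC schema.

```python
def classify_bim(records):
    """Group parsed BIM attribute records by scope for selective extraction."""
    scopes = {"building": [], "storey": [], "space": [], "element": []}
    for rec in records:
        scopes[rec["scope"]].append(rec)
    return scopes

# Illustrative records, not real IFC entities.
records = [
    {"scope": "building", "name": "Tower A", "storeys": 20},
    {"scope": "storey", "name": "3F", "height_m": 3.2},
    {"scope": "element", "name": "Column C-12", "material": "concrete"},
]
db = classify_bim(records)
assert len(db["building"]) == 1 and db["space"] == []
```

In practice the classified groups would be written to database tables, and a viewer would load only the scope it is rendering.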
GIS stores, extracts, manages, and analyzes information by linking topographic information that spatially expresses location with non-graphic attribute information that describes and supplements its form and function, together with graphics and database management functions.

In this case, geographic data occupying spatial locations and the attribute data related to it are processed in an integrated manner.

The BIM data used in this specification may refer to all information about structures used when implementing BIM, such as roads, bridges, and buildings; GIS data likewise collectively refers to information including, beyond typical geographic features such as rivers and parks, street lights, crosswalks, parking facilities, IoT (Internet of Things) facilities for measuring air pollution, CCTV, and the like.
Referring to Figure 2, the hierarchy structuring GIS and BIM data organizes the data from macroscopic and microscopic aspects and may differ somewhat from the CityGML and IFC standard hierarchies. GIS and BIM information contains various overlapping items such as building name, longitude and latitude, and address; this overlapping information can be used to link GIS and BIM, and data can be selectively extracted and used as needed.

In addition, a digital twin holds various other data, such as real-time data collected from sensors and legacy data for each service, and tables can be added to the database depending on the use case.
The 3D object conversion unit 140 converts a real-world object into a 3D object; to prevent privacy infringement with respect to the real-world object, it converts the object into a 3D object of similar shape.
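The anonymizing conversion can be sketched as a mapping from a detected object class to a generic 3D asset, with identifying data attached only when anonymity is selectively lifted. The asset names and record fields are hypothetical placeholders.

```python
# Each detected real-world object maps to a generic 3D asset of its class,
# so the twin shows "a sedan" rather than the actual, identifiable vehicle.
GENERIC_ASSETS = {"person": "generic_person.glb", "car": "generic_sedan.glb"}

def to_3d_object(detection, allow_identifying=False):
    obj = {
        "asset": GENERIC_ASSETS.get(detection["class"], "generic_box.glb"),
        "position": detection["position"],
    }
    if allow_identifying:  # e.g. a flagged criminal vehicle or missing person
        obj["source_crop"] = detection.get("crop_id")
    return obj

det = {"class": "car", "position": (12.0, 3.5, 0.0), "crop_id": "frame17_obj2"}
assert to_3d_object(det)["asset"] == "generic_sedan.glb"
assert "source_crop" not in to_3d_object(det)
```

Setting `allow_identifying=True` only for the exceptional cases mirrors the selective de-anonymization described in this specification.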
Next, the spatial coordinate matching unit 150 matches the real-world (absolute) coordinates in the video data with the spatial coordinates of the digital twin space.

Next, the image data registration unit 160 projects the 3D object into the digital twin space while synchronizing its position changes in real time.
In addition, the system 100 supporting a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them according to an embodiment of the present invention may further include an object characteristic value assignment unit 170 that assigns object characteristic values, including at least one of the type, shape, size, color, movement coordinates, and starting coordinates of the real-world object, to the 3D object.

The system 100 may also further include a tracking unit 180 that tracks the movement path of a 3D object moving within the digital twin space.
The tracking unit 180 tracks, in a time-series manner, a 3D object carrying object characteristic values including at least one of the type, shape, size, color, movement coordinates, and starting coordinates of the real-world object.
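The time-series tracking of a 3D object and its characteristic values might be sketched as below; the class and field names are illustrative assumptions, not the actual implementation of the tracking unit 180.

```python
# Time-series track of a moving 3D object inside the twin: each sample
# keeps the synchronized twin-space position, while the characteristic
# values carried over from the real-world object stay attached to the track.
class TrackedObject:
    def __init__(self, object_id, traits):
        self.object_id = object_id
        self.traits = traits          # type, shape, size, color, ...
        self.trajectory = []          # [(t, (x, y, z)), ...] in time order

    def update(self, t, position):
        self.trajectory.append((t, position))

    def path(self):
        return [p for _, p in self.trajectory]

track = TrackedObject("obj-7", {"type": "car", "color": "unknown"})
track.update(0.0, (0.0, 0.0, 0.0))
track.update(1.0, (1.5, 0.0, 0.0))
assert track.path() == [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
```

Replaying `trajectory` yields the movement path of the object through the digital twin space.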
Figure 4 is a flowchart describing a method of supporting a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them according to an embodiment of the present invention.
Referring to Figure 4, in the method (S700) of supporting a service for converting real-time data into 3D objects in a 3D virtual reality space and selectively fusing them according to an embodiment of the present invention, the image preprocessing unit 110 first receives video data from the CCTV camera 10 and preprocesses the CCTV video (S710).

The object extraction unit 120 then extracts a real-world object and its position coordinates from the preprocessed video data (S720).
The object extraction unit 120 extracts real-world objects and their position coordinates (absolute coordinates) from the CCTV video data. It extracts the pixel coordinates of an object's location in the video data using an instance segmentation algorithm, and can perform depth estimation using the camera pose and ground information in the virtual space of a digital twin built on street views and 3D models. The initial position, i.e., the three-dimensional position of the object, can then be estimated from the object's pixel coordinates and the depth value of the corresponding pixel.
When the S720 process is completed, the digital twin space creation unit 130 creates a digital twin space reflecting the geographic information of the location where the CCTV camera 10 is installed (S730). The digital twin space creation unit 130 can classify GIS (Geographic Information System) data and BIM (Building Information Modeling) data; GIS data provided at the national level contains various information according to its level of detail (LOD), and all GIS data carries macroscopic shape information along with various attribute information such as building site address, use, name, height, number of floors, and area.
The digital twin space creation unit 130 uses digital twin techniques to create a virtual space with the view angle captured at the location of the CCTV camera 10.

In addition, based on the initial position, the digital twin space creation unit 130 uses the 3D model reconstruction placed in the digital twin to calculate the difference from the object in the video data; through optimization that minimizes this difference, it estimates geographic data including the object's position and angle and implements the estimated geographic data in the digital twin space.

It can also use a Particle Swarm Optimization (PSO) algorithm to calculate the difference between the object outline of the estimate generated by projecting the 3D object of the 3D model at the initial position and the object outline in the video data, performing optimization that minimizes the error.

Furthermore, by iterating position/rotation refinement of the 3D object with the PSO algorithm, the optimization process estimates the 6DoF pose (final position/rotation), including the position and angle of the object in the video data, and implements it in the digital twin space.

The digital twin referred to in the present invention is used in various systems and industries such as manufacturing, healthcare, transportation, aerospace, and construction, and may be built on IoT data, 3D objects, and a 3D geographic map using standardized data.
Next, when the S730 process is completed, the 3D object conversion unit 140 converts the real-world object into a 3D object (S740), and the coordinate matching unit 150 matches the real-world (absolute) coordinates in the video data with the spatial coordinates of the digital twin space (S750).
Here, the coordinate matching unit 150 sets a specific or positionally invariant geographic point in the real space as the absolute coordinate and, based on that absolute coordinate, matches the spatial coordinates of the real space with those of the digital twin space, that is, the virtual space.
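Anchoring both spaces to one invariant geographic point reduces the matching to a simple offset (plus optional scale) transform. A minimal sketch, assuming a planar east/north approximation with no rotation between the two spaces; the anchor values and the one-metre-per-unit scale are hypothetical:

```python
def make_mapper(anchor_real, anchor_twin, metres_per_unit=1.0):
    """Build a real-space -> twin-space mapping anchored at one fixed
    geographic point whose coordinates are known in both spaces."""
    ax, ay = anchor_real
    tx, ty = anchor_twin
    def to_twin(real_pt):
        rx, ry = real_pt
        # Offset from the anchor in real space, rescaled into twin units.
        return (tx + (rx - ax) / metres_per_unit,
                ty + (ry - ay) / metres_per_unit)
    return to_twin

# Hypothetical anchor: a landmark at real-space (100.0, 200.0) placed at the
# twin origin; one twin unit equals one metre.
to_twin = make_mapper(anchor_real=(100.0, 200.0), anchor_twin=(0.0, 0.0))
to_twin((105.0, 203.0))  # -> (5.0, 3.0)
```

A rotation term and a geodetic-to-planar projection would be added when the twin's axes are not aligned with the real-space east/north axes.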
Then, the image data registration unit 160 projects or registers the 3D object by synchronizing, in real time, changes in the position of the 3D object within the digital twin space (S760).
Meanwhile, the method (S700) for supporting a service that converts real-time data into 3D objects in a 3D virtual reality space and selectively fuses them, according to an embodiment of the present invention, may further include assigning, by an object characteristic value assignment unit, object characteristic values including at least one of the type, shape, size, color, movement coordinates, and starting coordinates of the photorealistic object to the 3D object.
In addition, the method (S700) for supporting a service that converts real-time data into 3D objects in a 3D virtual reality space and selectively fuses them, according to an embodiment of the present invention, may further include tracking, by the tracking unit 180, the movement path of a 3D object moving within the digital twin space.
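The tracking step can be sketched as a per-object store of time-stamped twin-space positions from which the movement path is read back in frame order. The class and method names below are hypothetical, not the patent's:

```python
from collections import defaultdict

class TrackStore:
    """Per-object store of (frame, position) samples in twin-space
    coordinates; path() returns the positions in frame order."""
    def __init__(self):
        self._samples = defaultdict(list)

    def update(self, obj_id, frame, position):
        self._samples[obj_id].append((frame, position))

    def path(self, obj_id):
        # Sort by frame index so out-of-order arrivals still yield the
        # chronological movement path.
        return [pos for _, pos in sorted(self._samples[obj_id])]

tracker = TrackStore()
tracker.update("vehicle-7", 2, (5.0, 3.0, 0.0))  # frames may arrive out of order
tracker.update("vehicle-7", 1, (4.0, 3.0, 0.0))
```

Each `update` call would be driven by the real-time synchronization of step S760, so the stored path mirrors the object's motion in the digital twin space.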
Accordingly, the system and method for supporting a service that converts real-time data into 3D objects in a 3D virtual reality space and selectively fuses them, according to an embodiment of the present invention, address the CCTV privacy issue by changing the default state of the video to an anonymized state and selectively lifting anonymity only in exceptional situations, such as criminal vehicles, missing persons, wanted suspects, traffic accidents, and emergencies, thereby offering the advantage of making three-dimensional tracking technology combined with 3D models broadly deployable.
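The default-anonymized rendering with selective de-anonymization can be sketched as a simple gate: identifying attributes are masked unless the object appears on an authorized exception list. Watchlist management, operator authorization, and audit logging are omitted, and all names are hypothetical:

```python
def render_label(obj_id, label, exceptions):
    """Return the identifying label only when obj_id is on the authorized
    exception list; otherwise return a masked string of equal length."""
    return label if obj_id in exceptions else "*" * len(label)

watchlist = {"vehicle-7"}                        # e.g. a reported criminal vehicle
render_label("vehicle-7", "ABC1234", watchlist)  # -> "ABC1234"
render_label("vehicle-9", "XYZ9876", watchlist)  # -> "*******"
```

The same gate would apply to any identifying attribute attached to a 3D object (face crop, plate text, owner metadata), keeping anonymization the default and de-anonymization the audited exception.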
The systems or devices described above may be implemented with hardware components, software components, and/or a combination of hardware and software components. For example, the devices and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field-programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. A processing device may run an operating system (OS) and one or more software applications executing on that operating system. The processing device may also access, store, manipulate, process, and generate data in response to the execution of software. For convenience of understanding, the processing device is sometimes described as a single unit, but one of ordinary skill in the art will appreciate that it may include a plurality of processing elements and/or a plurality of types of processing elements.
For example, a processing device may include a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.
Software may include a computer program, code, instructions, or a combination of one or more of these, and may configure a processing device to operate as desired or may instruct the processing device independently or collectively. Software and/or data may be embodied, permanently or temporarily, in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or in a transmitted signal wave, so as to be interpreted by the processing device or to provide instructions or data to it. The software may be distributed over networked computer systems and stored or executed in a distributed manner. Software and data may be stored on one or more computer-readable recording media.
The method according to the embodiments may be implemented in the form of program instructions executable by various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and constructed for the embodiments, or may be known and available to those skilled in the art of computer software. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include not only machine code, such as that produced by a compiler, but also high-level language code executable by a computer using an interpreter. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
Although the embodiments have been described with reference to limited examples and drawings, various modifications and variations are possible to those of ordinary skill in the art from the above description. For example, appropriate results may be achieved even if the described techniques are performed in an order different from the described method, and/or components of the described systems, structures, devices, circuits, and the like are coupled or combined in a form different from the described method, or are replaced or substituted by other components or equivalents.
*Description of Reference Numerals*
100: System for supporting a service that converts real-time data into 3D objects in a 3D virtual reality space and selectively fuses them
110: Image preprocessing unit
120: Object extraction unit
130: 3D object conversion unit
140: Digital twin space creation unit
150: Coordinate matching unit
160: Image data registration unit
170: Object characteristic value assignment unit
180: Tracking unit

Claims (6)

  1. A preprocessing unit that receives and preprocesses image data from a CCTV camera;
    an object extraction unit that extracts a photorealistic object and the positional coordinates of the photorealistic object from the image data;
    a digital twin space creation unit that creates a digital twin space reflecting the geographic information of the location where the CCTV camera is installed;
    a 3D object conversion unit that converts the photorealistic object into a 3D object;
    a coordinate matching unit that matches the real (absolute) coordinates in the image data with the spatial coordinates of the digital twin space; and
    a data registration unit that projects the 3D object by synchronizing, in real time, changes in the position of the 3D object within the digital twin space,
    constituting a system for supporting a service that converts real-time data into 3D objects in a 3D virtual reality space and selectively fuses them.
  2. The system of claim 1, further comprising:
    an object characteristic value assignment unit that assigns, to the 3D object, object characteristic values including at least one of the type, shape, size, color, movement coordinates, and starting coordinates of the photorealistic object,
    the system supporting a service that converts real-time data into 3D objects in a 3D virtual reality space and selectively fuses them.
  3. The system of claim 2, further comprising:
    a tracking unit that tracks the movement path of a 3D object moving within the digital twin space,
    wherein the 3D object is a moving object.
  4. A method comprising: receiving and preprocessing image data from a CCTV camera;
    extracting a photorealistic object and the positional coordinates of the photorealistic object from the image data;
    creating a digital twin space reflecting the geographic information of the location where the CCTV camera is installed;
    converting the photorealistic object into a 3D object;
    matching the absolute coordinates of the real space in the image data with the spatial coordinates of the digital twin space; and
    projecting the 3D object by synchronizing, in real time, changes in the position of the 3D object within the digital twin space,
    the method supporting a service that converts real-time data into 3D objects in a 3D virtual reality space and selectively fuses them.
  5. The method of claim 4, further comprising:
    assigning, by an object characteristic value assignment unit, object characteristic values including at least one of the type, shape, size, color, movement coordinates, and starting coordinates of the photorealistic object to the 3D object,
    the method supporting a service that converts real-time data into 3D objects in a 3D virtual reality space and selectively fuses them.
  6. The method of claim 5, further comprising:
    tracking, by a tracking unit, the movement path of a 3D object moving within the digital twin space,
    the method supporting a service that converts real-time data into 3D objects in a 3D virtual reality space and selectively fuses them.
PCT/KR2023/017834 2022-11-08 2023-11-08 System and method for supporting service for converting real-time data into 3d object in 3d virtual reality space and selectively fusing same WO2024101874A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2022-0147779 2022-11-08
KR20220147779 2022-11-08
KR10-2023-0149408 2023-11-01
KR1020230149408A KR20240067011A (en) 2022-11-08 2023-11-01 System and method that support services for selectively converting real-time data into 3D objects in a 3D virtual reality space

Publications (1)

Publication Number Publication Date
WO2024101874A1 true WO2024101874A1 (en) 2024-05-16

Family

ID=91033242

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/017834 WO2024101874A1 (en) 2022-11-08 2023-11-08 System and method for supporting service for converting real-time data into 3d object in 3d virtual reality space and selectively fusing same

Country Status (1)

Country Link
WO (1) WO2024101874A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118488315A (en) * 2024-07-15 2024-08-13 航天极创物联网研究院(南京)有限公司 Lens correction method and system for panoramic video fusion

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112015275A (en) * 2020-08-29 2020-12-01 南京翱翔智能制造科技有限公司 Digital twin AR interaction method and system
KR20210074569A (en) * 2019-12-12 2021-06-22 포항공과대학교 산학협력단 Apparatus and method for tracking multiple objects
KR20220082567A (en) * 2020-12-10 2022-06-17 주식회사 코너스 Real time space intelligence delivery system and method based on situational awareness that matches object recognition from cctv video to 3d space coordinates
KR20220087688A (en) * 2020-12-18 2022-06-27 한국과학기술원 Geo-spacial data estimation method of moving object based on 360 camera and digital twin and the system thereof
KR20220125539A (en) * 2021-03-05 2022-09-14 주식회사 맘모식스 Method for providing mutual interaction service according to location linkage between objects in virtual space and real space

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23889132

Country of ref document: EP

Kind code of ref document: A1