
CN116188891A - Image generation method and system based on three-dimensional point cloud - Google Patents

Image generation method and system based on three-dimensional point cloud

Info

Publication number
CN116188891A
CN116188891A
Authority
CN
China
Prior art keywords
point cloud
cloud data
loss
target
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211629805.3A
Other languages
Chinese (zh)
Other versions
CN116188891B (en)
Inventor
杨星
胡以华
梁振宇
胡睿晗
朱东涛
穆华
王阳阳
高皓琪
许颢砾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202211629805.3A priority Critical patent/CN116188891B/en
Publication of CN116188891A publication Critical patent/CN116188891A/en
Application granted granted Critical
Publication of CN116188891B publication Critical patent/CN116188891B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an image generation method and system based on a three-dimensional point cloud, and relates to the technical field of three-dimensional point cloud data processing. According to the method, a point cloud classification model is trained with sample point cloud data, so that the trained point cloud classification model can identify a first target from original point cloud data representing the first target; partial data points of the original point cloud data are perturbed to obtain first countermeasure point cloud data, and the classification loss of the trained point cloud classification model identifying the first countermeasure point cloud data, which represents the first target, as a second target is calculated; a first loss representing the point cloud generalization distance and a second loss representing the grid surface generalization distance between the original point cloud data and the first countermeasure point cloud data are acquired, and a collaborative optimization loss is determined by combining them with the classification loss; the minimum value of the collaborative optimization loss is determined by optimizing the collaborative optimization loss, and second countermeasure point cloud data are generated by using the minimum value of the collaborative optimization loss and the original point cloud data, so as to resist point cloud attacks from attackers.

Description

Image generation method and system based on three-dimensional point cloud
Technical Field
The invention belongs to the technical field of three-dimensional point cloud data processing, and particularly relates to an image generation method and system based on three-dimensional point cloud.
Background
At present, deep learning models have been studied in depth, so that two-dimensional image recognition applications such as face recognition and target detection have reached unprecedented levels. However, because a deep learning model is composed of multiple layers of weight and bias parameters, it remains susceptible to carefully crafted perturbations of its input. Compared with two-dimensional image recognition models, three-dimensional point cloud recognition, with its 360-degree omnidirectional coverage, provides a more robust artificial intelligence perception tool and is widely applied in fields such as radar recognition and automatic driving.
The vulnerability of three-dimensional point cloud recognition models has been studied far less than that of two-dimensional image recognition models. For three-dimensional point cloud data, the present method learns in a collaborative optimization manner during the adversarial optimization process, and finally realizes a fixed-label (targeted) attack with little change to the point cloud positions.
In existing methods, a fixed-label attack against a three-dimensional point cloud model is realized through adversarial optimization iterations: a local point cloud to be attacked and a local point cloud to be matched with it are usually obtained, the adversarial sensitivity of each point in the local point cloud to be attacked is calculated, and the local point cloud to be attacked is perturbed according to these sensitivities.
The present implementation instead uses the interconversion mechanism between the point cloud and the Mesh (a grid surface data structure for three-dimensional visual representation, consisting of nodes, edges and faces): the optimal generation of the countermeasure sample is realized by directly calculating the distances between the generated countermeasure point cloud and Mesh and the original point cloud and Mesh, without incurring the cost of calculating matching degrees and sensitivities.
Disclosure of Invention
To address the above technical problems in the prior art, the invention provides an image generation scheme based on a three-dimensional point cloud.
A first aspect of the invention discloses an image generation method based on a three-dimensional point cloud. The method comprises the following steps: step S1, training a point cloud classification model by using sample point cloud data, wherein the trained point cloud classification model identifies a first target from original point cloud data representing the first target; step S2, perturbing partial data points of the original point cloud data to obtain first countermeasure point cloud data, and calculating a classification loss of the trained point cloud classification model identifying the first countermeasure point cloud data, which represents the first target, as a second target; step S3, acquiring a first loss representing the point cloud generalization distance and a second loss representing the grid surface generalization distance between the original point cloud data and the first countermeasure point cloud data, and determining a collaborative optimization loss by combining them with the classification loss; step S4, determining the minimum value of the collaborative optimization loss by optimizing the collaborative optimization loss, and generating second countermeasure point cloud data by utilizing the minimum value of the collaborative optimization loss and the original point cloud data, so as to resist a point cloud attack from an attacker; wherein the minimum value of the collaborative optimization loss indicates that the original point cloud data is minimally perturbed to obtain the second countermeasure point cloud data, and the trained point cloud classification model identifies the second countermeasure point cloud data, which represents the first target, as the second target.
According to the method of the first aspect, in the step S1, each set of sample point cloud data represents one target, the point cloud classification model is trained by using a plurality of sets of sample point cloud data, the classification accuracy of the trained point cloud classification model on each target is not lower than a first threshold, and the first target is identified from the original point cloud data representing the first target.
According to the method of the first aspect, in the step S2, a ratio between the number of the partial data points and the number of all data points in the original point cloud data does not exceed a second threshold, and coordinate values of the partial data points are modified within a given range, so that the obtained first countermeasure point cloud data still characterizes the first target.
According to the method of the first aspect, in the step S2, the first countermeasure point cloud data is input to the trained point cloud classification model, and the classification loss for identifying the first countermeasure point cloud data as the second target is calculated based on the recognition result of the trained point cloud classification model for the first countermeasure point cloud data.
According to the method of the first aspect, in the step S3, the first loss is a Hausdorff distance between the original point cloud data and the first countermeasure point cloud data, which characterizes the point cloud generalization distance, and the second loss is a geometric distance between the original point cloud data and the first countermeasure point cloud data, which characterizes the grid surface generalization distance, the geometric distance comprising a weighted point feature geometric distance, a dihedral-angle-based edge feature geometric distance and a surface feature geometric distance of the grid surface.
According to the method of the first aspect, in the step S3, weights are assigned to the first loss and the second loss, respectively, to obtain a joint loss, and weights are further assigned to the joint loss and the classification loss, respectively, to determine the collaborative optimization loss.
According to the method of the first aspect, in the step S4, when the point cloud attack from the attacker is countered, the minimum disturbance is added to partial data points of the original point cloud data by using the minimum value of the collaborative optimization loss, and the obtained second countermeasure point cloud data representing the first target is identified as the second target by the trained point cloud classification model, thereby realizing countermeasure defense against the attacker.
A second aspect of the invention discloses an image generation system based on a three-dimensional point cloud. The system comprises: a first processing unit configured to: train a point cloud classification model by using sample point cloud data, wherein the trained point cloud classification model identifies a first target from original point cloud data representing the first target; a second processing unit configured to: perturb partial data points of the original point cloud data to acquire first countermeasure point cloud data, and calculate a classification loss of the trained point cloud classification model identifying the first countermeasure point cloud data, which represents the first target, as a second target; a third processing unit configured to: acquire a first loss representing the point cloud generalization distance and a second loss representing the grid surface generalization distance between the original point cloud data and the first countermeasure point cloud data, and determine a collaborative optimization loss by combining them with the classification loss; a fourth processing unit configured to: determine the minimum value of the collaborative optimization loss by optimizing the collaborative optimization loss, and generate second countermeasure point cloud data by utilizing the minimum value of the collaborative optimization loss and the original point cloud data, so as to resist a point cloud attack from an attacker; wherein the minimum value of the collaborative optimization loss indicates that the original point cloud data is minimally perturbed to obtain the second countermeasure point cloud data, and the trained point cloud classification model identifies the second countermeasure point cloud data, which represents the first target, as the second target.
According to the system of the second aspect, each set of sample point cloud data characterizes one target, the point cloud classification model is trained by using a plurality of sets of sample point cloud data, the classification accuracy of the trained point cloud classification model on each target is not lower than a first threshold, and the first target is identified from the original point cloud data characterizing the first target.
According to the system of the second aspect, the ratio between the number of the partial data points and the number of all data points in the original point cloud data does not exceed a second threshold, and the coordinate values of the partial data points are changed within a given range, so that the obtained first countermeasure point cloud data still represents the first target.
According to the system of the second aspect, the first countermeasure point cloud data is input to the trained point cloud classification model, and a classification loss for identifying the first countermeasure point cloud data as the second target is calculated based on a recognition result of the first countermeasure point cloud data by the trained point cloud classification model.
According to the system of the second aspect, the first loss is a Hausdorff distance between the original point cloud data and the first countermeasure point cloud data that characterizes the point cloud generalization distance, and the second loss is a geometric distance between the original point cloud data and the first countermeasure point cloud data that characterizes the grid surface generalization distance, the geometric distance comprising a weighted point feature geometric distance, a dihedral-angle-based edge feature geometric distance, and a face feature geometric distance of the grid surface.
The system according to the second aspect assigns weights to the first and second losses, respectively, to obtain joint losses, and further assigns weights to the joint losses and the classification losses, respectively, to determine the collaborative optimization losses.
According to the system of the second aspect, when the point cloud attack from the attacker is resisted, the minimum disturbance is added to partial data points of the original point cloud data by using the minimum value of the collaborative optimization loss, and the obtained second countermeasure point cloud data representing the first target is identified as the second target by the trained point cloud classification model, so that the countermeasure against the attacker is realized.
A third aspect of the invention discloses an electronic device. The electronic device comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps in the image generation method based on the three-dimensional point cloud according to the first aspect of the disclosure when executing the computer program.
A fourth aspect of the invention discloses a computer-readable storage medium. The computer readable storage medium stores a computer program which, when executed by a processor, implements the steps in a three-dimensional point cloud-based image generation method according to the first aspect of the present disclosure.
In summary, the technical scheme provided by the invention constructs a point cloud classification model; designs a Hausdorff distance loss characterizing the point cloud generalization distance and a geometric distance loss characterizing the Mesh generalization distance; merges the point cloud and Mesh generalization distances, as regularized distance functions, into the optimization loss function guiding the three-dimensional point cloud attack; constructs a collaborative optimization loss function from these regularized distance functions together with the classification loss function; obtains the point cloud and Mesh countermeasure generation data in the point cloud attack process according to the minimum value of the collaborative optimization loss function; and realizes a countermeasure attack process for a fixed target point cloud through multiple rounds of optimization loop iterations over the countermeasure generation data.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings which are required in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are some embodiments of the invention and that other drawings may be obtained from these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an image generating method based on a three-dimensional point cloud according to an embodiment of the invention;
fig. 2 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention discloses an image generation method based on a three-dimensional point cloud. Fig. 1 is a schematic flow chart of an image generation method based on a three-dimensional point cloud according to an embodiment of the invention; as shown in fig. 1, the method comprises: step S1, training a point cloud classification model by using sample point cloud data, wherein the trained point cloud classification model identifies a first target from original point cloud data representing the first target; step S2, perturbing partial data points of the original point cloud data to obtain first countermeasure point cloud data, and calculating a classification loss of the trained point cloud classification model identifying the first countermeasure point cloud data, which represents the first target, as a second target; step S3, acquiring a first loss representing the point cloud generalization distance and a second loss representing the grid surface generalization distance between the original point cloud data and the first countermeasure point cloud data, and determining a collaborative optimization loss by combining them with the classification loss; step S4, determining the minimum value of the collaborative optimization loss by optimizing the collaborative optimization loss, and generating second countermeasure point cloud data by utilizing the minimum value of the collaborative optimization loss and the original point cloud data, so as to resist a point cloud attack from an attacker; wherein the minimum value of the collaborative optimization loss indicates that the original point cloud data is minimally perturbed to obtain the second countermeasure point cloud data, and the trained point cloud classification model identifies the second countermeasure point cloud data, which represents the first target, as the second target.
In some embodiments, in the step S1, each set of the sample point cloud data characterizes one target, the point cloud classification model is trained using a plurality of sets of sample point cloud data, the classification accuracy of the trained point cloud classification model for each target is not lower than a first threshold, and the first target is identified from the original point cloud data characterizing the first target.
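By way of illustration only, the following is a minimal sketch of step S1 under assumed choices: a simplified PointNet-style classifier, a generic data loader of sample point clouds, and an illustrative first threshold of 0.9. It is not the patented implementation; model, data pipeline and threshold value are assumptions.

```python
# Illustrative sketch only: train a simple point cloud classifier on sample point
# cloud data and check per-target accuracy against a first threshold (assumed 0.9).
import torch
import torch.nn as nn

class SimplePointNet(nn.Module):
    """Shared per-point MLP + global max pooling + linear classification head."""
    def __init__(self, num_classes):
        super().__init__()
        self.mlp = nn.Sequential(nn.Conv1d(3, 64, 1), nn.ReLU(),
                                 nn.Conv1d(64, 128, 1), nn.ReLU())
        self.head = nn.Linear(128, num_classes)

    def forward(self, pts):                      # pts: (B, N, 3)
        x = self.mlp(pts.transpose(1, 2))        # per-point features (B, 128, N)
        return self.head(x.max(dim=2).values)    # global feature -> class logits

def train_classifier(model, loader, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for pts, labels in loader:               # each batch: sample point clouds + target labels
            opt.zero_grad()
            ce(model(pts), labels).backward()
            opt.step()
    return model

def per_target_accuracy_ok(model, loader, num_classes, first_threshold=0.9):
    """True if classification accuracy for every target reaches the first threshold."""
    correct = torch.zeros(num_classes)
    total = torch.zeros(num_classes)
    with torch.no_grad():
        for pts, labels in loader:
            pred = model(pts).argmax(dim=1)
            for c in range(num_classes):
                mask = labels == c
                correct[c] += (pred[mask] == c).sum()
                total[c] += mask.sum()
    return bool(((correct / total.clamp(min=1)) >= first_threshold).all())
```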
In some embodiments, in the step S2, a ratio between the number of the partial data points and the number of all data points in the original point cloud data does not exceed a second threshold, and coordinate values of the partial data points are modified within a given range, so that the obtained first countermeasure point cloud data still characterizes the first target.
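A minimal sketch of this perturbation constraint, assuming an illustrative second threshold of 0.1 (at most 10% of the points are moved) and a coordinate range of ±0.05; both values and the random choice of the perturbed subset are assumptions for illustration.

```python
# Illustrative sketch only: perturb at most a `second_threshold` fraction of the
# points, with each coordinate change kept inside a given range `eps`.
import torch

def perturb_subset(points, second_threshold=0.1, eps=0.05):
    """points: (N, 3) original cloud; returns (first countermeasure cloud, moved indices)."""
    n = points.shape[0]
    k = max(1, int(n * second_threshold))          # number of data points allowed to move
    idx = torch.randperm(n)[:k]                    # subset of points to perturb
    delta = torch.empty(k, 3).uniform_(-eps, eps)  # coordinate changes within the given range
    adv = points.clone()
    adv[idx] = adv[idx] + delta
    return adv, idx
```

The returned `idx` is reused by the optimization sketch given after step S4 below.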
In some embodiments, in the step S2, the first countermeasure point cloud data is input to the trained point cloud classification model, and the classification loss for identifying the first countermeasure point cloud data as the second target is calculated based on the recognition result of the trained point cloud classification model for the first countermeasure point cloud data.
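A minimal sketch of this classification loss, written here as a targeted cross-entropy on the trained classifier's logits; the exact loss form used by the patent is not given here, so the cross-entropy choice is an assumption.

```python
# Illustrative sketch only: the classification loss of the trained classifier
# identifying the first countermeasure cloud as the second (attack) target.
import torch
import torch.nn.functional as F

def targeted_classification_loss(model, adv_points, second_target):
    """adv_points: (N, 3); second_target: integer label the attack aims for."""
    logits = model(adv_points.unsqueeze(0))                  # (1, num_classes)
    target = torch.tensor([second_target], dtype=torch.long)
    return F.cross_entropy(logits, target)                   # small when classified as the target
```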
In some embodiments, in the step S3, the first loss is a Hausdorff distance between the original point cloud data and the first countermeasure point cloud data, which characterizes the point cloud generalization distance, and the second loss is a geometric distance between the original point cloud data and the first countermeasure point cloud data, which characterizes the grid surface generalization distance, the geometric distance comprising a weighted point feature geometric distance, a dihedral-angle-based edge feature geometric distance and a face feature geometric distance of the grid surface.
Specifically, the implementation of the first loss (Hausdorff distance) is as follows. Let $P$ denote the original point cloud and $P^{adv}$ the first countermeasure point cloud, where $x$ is a data point of the original point cloud data and $y$ is the coordinate of a data point of the first countermeasure point cloud data. For each point of the first countermeasure point cloud, the minimum Euclidean distance to the original point cloud is computed, and the Hausdorff loss $L_H$ is taken as the maximum of these minimum distances:

$L_H(P^{adv}, P) = \max_{y \in P^{adv}} \min_{x \in P} \lVert y - x \rVert_2$.
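The first loss can be sketched directly from the formula above; whether the distance is squared is not specified here, so the plain Euclidean norm is an assumption.

```python
# Sketch of the first loss: one-sided Hausdorff distance from the countermeasure
# cloud to the original cloud.
import torch

def hausdorff_loss(p_adv, p):
    """p_adv: (M, 3) countermeasure cloud, p: (N, 3) original cloud."""
    d = torch.cdist(p_adv, p)            # pairwise Euclidean distances (M, N)
    return d.min(dim=1).values.max()     # nearest original point per adversarial point, then max
```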
In particular, the implementation of the second loss (geometric distance) is as follows. Let $M$ denote the original Mesh generated from the original point cloud $P$ through Poisson reconstruction, and let $M^{adv}$ denote the Mesh of the first countermeasure point cloud $P^{adv}$.

Denoting the point features of $M$ and $M^{adv}$ as $q$ and $q^{adv}$ respectively, the point-feature geometric distance (Laplacian mesh optimization distance over point features) between $M$ and $M^{adv}$ is recorded as $L_{point} = d(q, q^{adv})$.

Denoting the dihedral angles of the edge features of $M$ and $M^{adv}$ as $\theta$ and $\theta^{adv}$ respectively, the dihedral-angle-based edge-feature geometric distance between $M$ and $M^{adv}$ is recorded as $L_{edge} = d(\theta, \theta^{adv})$.

Denoting the normal vectors of the face features of $M$ and $M^{adv}$ as $n$ and $n^{adv}$ respectively, the face-feature geometric distance between $M$ and $M^{adv}$ is recorded as $L_{face} = d(n, n^{adv})$.

Here $d(\cdot,\cdot)$ denotes the distance between the corresponding features of the two Meshes. The geometric distance loss function $L_{geo}$ of $M$ and $M^{adv}$ is then obtained by weighting $L_{point}$, $L_{edge}$ and $L_{face}$:

$L_{geo} = \lambda_1 L_{point} + \lambda_2 L_{edge} + \lambda_3 L_{face}$,

where $\lambda_1$, $\lambda_2$ and $\lambda_3$ are the corresponding weights.
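A minimal sketch of the second loss under the assumption that each feature distance $d(\cdot,\cdot)$ is a squared-L2 distance between matching feature tensors; extracting point features, dihedral angles and face normals from the Poisson-reconstructed Meshes is assumed to be done elsewhere, and the weights are illustrative.

```python
# Sketch of the second loss: weighted point-feature, edge-feature (dihedral angle)
# and face-feature (normal) distances between the original and countermeasure Meshes.
import torch

def geometric_distance(q, q_adv, theta, theta_adv, n, n_adv,
                       w_point=1.0, w_edge=1.0, w_face=1.0):
    """q: point features, theta: edge dihedral angles, n: face normals (original vs adv)."""
    l_point = torch.sum((q_adv - q) ** 2)         # point-feature geometric distance
    l_edge = torch.sum((theta_adv - theta) ** 2)  # dihedral-angle edge-feature distance
    l_face = torch.sum((n_adv - n) ** 2)          # face-feature (normal) distance
    return w_point * l_point + w_edge * l_edge + w_face * l_face
```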
in some embodiments, in the step S3, weights are assigned to the first loss and the second loss, respectively, to obtain a joint loss, and weights are further assigned to the joint loss and the classification loss, respectively, to determine the collaborative optimization loss.
Specifically, the joint loss $L_{joint}$ is constructed by weighting the Hausdorff loss $L_H$ and the geometric distance loss $L_{geo}$:

$L_{joint} = \alpha L_H + \beta L_{geo}$,

where $\alpha$ and $\beta$ are the weights assigned to the first loss and the second loss, respectively.
specifically, a first countermeasure point cloud is constructed
Figure 757017DEST_PATH_IMAGE030
Classification loss function L on point cloud classification model cls The collaborative optimization loss is as follows:
Figure DEST_PATH_IMAGE031
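A minimal sketch of assembling the joint loss and the collaborative optimization loss from the three components; the weight values are illustrative assumptions.

```python
# Sketch: joint (regularized distance) loss and collaborative optimization loss.
def collaborative_loss(l_cls, l_hausdorff, l_geo,
                       w_h=1.0, w_g=1.0, w_cls=1.0, w_joint=0.5):
    l_joint = w_h * l_hausdorff + w_g * l_geo    # joint loss from the two distance terms
    return w_cls * l_cls + w_joint * l_joint     # collaborative optimization loss
```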
in some embodiments, in the step S4, when the point cloud attack from the attacker is countered, the minimum disturbance is added to a part of data points of the original point cloud data by using the minimum value of the collaborative optimization loss, and the obtained second countering point cloud data representing the first target is identified as the second target by the trained point cloud classification model, so that countering defense against the attacker is realized.
Specifically, the generation of the countermeasure sample with a minimum amount of perturbed points is achieved through multiple rounds of optimization iterations over the countermeasure generation data, i.e. by solving

$P^{adv*} = \arg\min_{P^{adv}} L_{total}(P^{adv}, P)$.
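A minimal end-to-end sketch of step S4 under assumed choices: an Adam optimizer over the perturbation of the selected subset (see the perturbation sketch above), a fixed number of iterations, and only the Hausdorff regularizer kept for brevity; the Mesh-based geometric terms would enter the loss in the same way. This is an illustration of the optimization loop, not the patented implementation.

```python
# Sketch: iteratively minimize a simplified collaborative loss over the perturbation
# so that the countermeasure cloud is classified as the second target with small,
# range-limited point movement.
import torch
import torch.nn.functional as F

def generate_countermeasure_cloud(model, points, second_target, idx,
                                  steps=200, lr=0.01, eps=0.05, w_joint=0.5):
    """points: (N, 3) original cloud; idx: indices of the points allowed to move."""
    delta = torch.zeros(len(idx), 3, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([second_target], dtype=torch.long)
    for _ in range(steps):
        adv = points.index_put((idx,), points[idx] + delta)    # perturb only the chosen subset
        logits = model(adv.unsqueeze(0))
        l_cls = F.cross_entropy(logits, target)                 # classification (attack) loss
        l_h = torch.cdist(adv, points).min(dim=1).values.max()  # Hausdorff regularizer
        loss = l_cls + w_joint * l_h                            # simplified collaborative loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                             # keep each change within range
    with torch.no_grad():
        return points.index_put((idx,), points[idx] + delta)    # second countermeasure cloud
```

In practice the `geometric_distance` and `collaborative_loss` sketches above would replace the simplified loss inside the loop.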
the invention discloses an image generation system based on three-dimensional point cloud. The system comprises: a first processing unit configured to: training a point cloud classification model by using sample point cloud data, wherein the trained point cloud classification model identifies a first target from original point cloud data representing the first target; a second processing unit configured to: perturbation is performed on partial data points of the original point cloud data to acquire first contrast point cloud data, and classification losses of the first contrast point cloud data representing the first target as a second target are recognized by the trained point cloud classification model; a third processing unit configured to: acquiring first loss representing the generalization distance of the point cloud and second loss representing the generalization distance of the grid surface between the original point cloud data and the first pair of anti-point cloud data, and determining collaborative optimization loss by combining the classification loss; a fourth processing unit configured to: determining the minimum value of the collaborative optimization loss by optimizing the collaborative optimization loss, and generating second countering point cloud data by utilizing the minimum value of the collaborative optimization loss and the original point cloud data so as to resist point cloud attack from an attacker; wherein a minimum representation of the collaborative optimization penalty minimally perturbs the raw point cloud data to obtain the second countermeasure point cloud data, and the trained point cloud classification model identifies second countermeasure point cloud data that characterizes the first target as the second target.
According to the system of the second aspect, each set of sample point cloud data characterizes one target, the point cloud classification model is trained by using a plurality of sets of sample point cloud data, the classification accuracy of the trained point cloud classification model on each target is not lower than a first threshold, and the first target is identified from the original point cloud data characterizing the first target.
According to the system of the second aspect, the ratio between the number of the partial data points and the number of all data points in the original point cloud data does not exceed a second threshold, and the coordinate values of the partial data points are changed within a given range, so that the obtained first countermeasure point cloud data still represents the first target.
According to the system of the second aspect, the first countermeasure point cloud data is input to the trained point cloud classification model, and a classification loss for identifying the first countermeasure point cloud data as the second target is calculated based on a recognition result of the first countermeasure point cloud data by the trained point cloud classification model.
According to the system of the second aspect, the first loss is a Hausdorff distance between the original point cloud data and the first countermeasure point cloud data that characterizes the point cloud generalization distance, and the second loss is a geometric distance between the original point cloud data and the first countermeasure point cloud data that characterizes the grid surface generalization distance, the geometric distance comprising a weighted point feature geometric distance, a dihedral-angle-based edge feature geometric distance, and a face feature geometric distance of the grid surface.
The system according to the second aspect assigns weights to the first and second losses, respectively, to obtain joint losses, and further assigns weights to the joint losses and the classification losses, respectively, to determine the collaborative optimization losses.
According to the system of the second aspect, when the point cloud attack from the attacker is resisted, the minimum disturbance is added to partial data points of the original point cloud data by using the minimum value of the collaborative optimization loss, and the obtained second countermeasure point cloud data representing the first target is identified as the second target by the trained point cloud classification model, so that the countermeasure against the attacker is realized.
A third aspect of the invention discloses an electronic device. The electronic device comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps in the image generation method based on the three-dimensional point cloud according to the first aspect of the disclosure when executing the computer program.
Fig. 2 is a block diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 2, the electronic device includes a processor, a memory, a communication interface, a display screen, and an input device connected through a system bus. Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the electronic device is used for conducting wired or wireless communication with an external terminal, and the wireless communication can be achieved through WIFI, an operator network, near Field Communication (NFC) or other technologies. The display screen of the electronic equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the electronic equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the structure shown in fig. 2 is merely a structural diagram of a portion related to the technical solution of the present disclosure, and does not constitute a limitation of the electronic device to which the present application is applied, and that a specific electronic device may include more or less components than those shown in the drawings, or may combine some components, or have different component arrangements.
A fourth aspect of the invention discloses a computer-readable storage medium. The computer readable storage medium stores a computer program which, when executed by a processor, implements the steps in a three-dimensional point cloud-based image generation method according to the first aspect of the present disclosure.
In summary, the technical scheme provided by the invention constructs a point cloud classification model; designs a Hausdorff distance loss characterizing the point cloud generalization distance and a geometric distance loss characterizing the Mesh generalization distance; merges the point cloud and Mesh generalization distances, as regularized distance functions, into the optimization loss function guiding the three-dimensional point cloud attack; constructs a collaborative optimization loss function from these regularized distance functions together with the classification loss function; obtains the point cloud and Mesh countermeasure generation data in the point cloud attack process according to the minimum value of the collaborative optimization loss function; and realizes a countermeasure attack process for a fixed target point cloud through multiple rounds of optimization loop iterations over the countermeasure generation data.
Note that the technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be regarded as the scope of the description. The above examples merely represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the invention. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the claims appended hereto.

Claims (10)

1. An image generation method based on a three-dimensional point cloud, which is characterized by comprising the following steps:
step S1, training a point cloud classification model by using sample point cloud data, wherein the trained point cloud classification model identifies a first target from original point cloud data representing the first target;
step S2, perturbing partial data points of the original point cloud data to obtain first countermeasure point cloud data, and calculating a classification loss of the trained point cloud classification model identifying the first countermeasure point cloud data, which represents the first target, as a second target;
step S3, acquiring a first loss representing the point cloud generalization distance and a second loss representing the grid surface generalization distance between the original point cloud data and the first countermeasure point cloud data, and determining a collaborative optimization loss by combining them with the classification loss;
step S4, determining the minimum value of the collaborative optimization loss by optimizing the collaborative optimization loss, and generating second countermeasure point cloud data by utilizing the minimum value of the collaborative optimization loss and the original point cloud data, so as to resist a point cloud attack from an attacker;
wherein the minimum value of the collaborative optimization loss indicates that the original point cloud data is minimally perturbed to obtain the second countermeasure point cloud data, and the trained point cloud classification model identifies the second countermeasure point cloud data, which represents the first target, as the second target.
2. The method according to claim 1, wherein in the step S1, each set of sample point cloud data represents a target, the point cloud classification model is trained using a plurality of sets of sample point cloud data, the classification accuracy of each target by the trained point cloud classification model is not lower than a first threshold, and the first target is identified from the original point cloud data representing the first target.
3. The method according to claim 2, wherein in the step S2, a ratio between the number of the partial data points and the number of all data points in the original point cloud data does not exceed a second threshold, and coordinate values of the partial data points are modified within a given range, so that the obtained first countermeasure point cloud data still characterizes the first target.
4. A three-dimensional point cloud based image generation method according to claim 3, wherein in the step S2, the first countermeasure point cloud data is input to the trained point cloud classification model, and classification loss for identifying the first countermeasure point cloud data as the second target is calculated based on the identification result of the first countermeasure point cloud data by the trained point cloud classification model.
5. The three-dimensional point cloud based image generation method according to claim 4, wherein in the step S3, the first loss is a Hausdorff distance characterizing the point cloud generalization distance between the original point cloud data and the first countermeasure point cloud data, and the second loss is a geometric distance characterizing the grid surface generalization distance between the original point cloud data and the first countermeasure point cloud data, the geometric distance including a weighted point feature geometric distance, a dihedral-angle-based edge feature geometric distance, and a surface feature geometric distance of the grid surface.
6. The three-dimensional point cloud based image generation method according to claim 5, wherein in the step S3, weights are assigned to the first loss and the second loss, respectively, to obtain joint losses, and further weights are assigned to the joint losses and the classification losses, respectively, to determine the collaborative optimization loss.
7. The three-dimensional point cloud based image generation method according to claim 6, wherein in the step S4, when the point cloud attack from the attacker is countered, the minimum disturbance is added to a part of data points of the original point cloud data by using the minimum value of the collaborative optimization loss, and the obtained second countermeasure point cloud data representing the first target is identified as the second target by the trained point cloud classification model, so that countermeasure against the attacker is realized.
8. An image generation system based on a three-dimensional point cloud, the system comprising:
a first processing unit configured to: training a point cloud classification model by using sample point cloud data, wherein the trained point cloud classification model identifies a first target from original point cloud data representing the first target;
a second processing unit configured to: perturb partial data points of the original point cloud data to acquire first countermeasure point cloud data, and calculate a classification loss of the trained point cloud classification model identifying the first countermeasure point cloud data, which represents the first target, as a second target;
a third processing unit configured to: acquire a first loss representing the point cloud generalization distance and a second loss representing the grid surface generalization distance between the original point cloud data and the first countermeasure point cloud data, and determine a collaborative optimization loss by combining them with the classification loss;
a fourth processing unit configured to: determine the minimum value of the collaborative optimization loss by optimizing the collaborative optimization loss, and generate second countermeasure point cloud data by utilizing the minimum value of the collaborative optimization loss and the original point cloud data, so as to resist a point cloud attack from an attacker;
wherein the minimum value of the collaborative optimization loss indicates that the original point cloud data is minimally perturbed to obtain the second countermeasure point cloud data, and the trained point cloud classification model identifies the second countermeasure point cloud data, which represents the first target, as the second target.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, the processor implementing the steps in a three-dimensional point cloud based image generation method according to any of claims 1-7 when the computer program is executed.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of a three-dimensional point cloud based image generation method according to any of claims 1-7.
CN202211629805.3A 2022-12-19 2022-12-19 Image generation method and system based on three-dimensional point cloud Active CN116188891B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211629805.3A CN116188891B (en) 2022-12-19 2022-12-19 Image generation method and system based on three-dimensional point cloud

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211629805.3A CN116188891B (en) 2022-12-19 2022-12-19 Image generation method and system based on three-dimensional point cloud

Publications (2)

Publication Number Publication Date
CN116188891A true CN116188891A (en) 2023-05-30
CN116188891B CN116188891B (en) 2024-09-24

Family

ID=86437474

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211629805.3A Active CN116188891B (en) 2022-12-19 2022-12-19 Image generation method and system based on three-dimensional point cloud

Country Status (1)

Country Link
CN (1) CN116188891B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210042929A1 (en) * 2019-01-22 2021-02-11 Institute Of Automation, Chinese Academy Of Sciences Three-dimensional object detection method and system based on weighted channel features of a point cloud
CN111914946A (en) * 2020-08-19 2020-11-10 中国科学院自动化研究所 Countermeasure sample generation method, system and device for outlier removal method
WO2022193335A1 (en) * 2021-03-15 2022-09-22 深圳大学 Point cloud data processing method and apparatus, and computer device and storage medium
CN113838211A (en) * 2021-09-15 2021-12-24 广州大学 3D point cloud classification attack defense method, device, equipment and storage medium
CN114550260A (en) * 2022-02-24 2022-05-27 西安交通大学 Three-dimensional face point cloud identification method based on countermeasure data enhancement
CN114973235A (en) * 2022-05-06 2022-08-30 华中科技大学 Method for generating countermeasure point cloud based on disturbance added in geometric feature field

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yang Qiuxiang; Yang Xiaoqing; Du Jian: "Point cloud slice simplification algorithm based on Hausdorff distance", Computer Engineering and Design, no. 08, 16 August 2016 (2016-08-16) *

Also Published As

Publication number Publication date
CN116188891B (en) 2024-09-24

Similar Documents

Publication Publication Date Title
Zhang et al. Dense attention fluid network for salient object detection in optical remote sensing images
Shi et al. Improved Iterative Closest Point (ICP) 3D point cloud registration algorithm based on point cloud filtering and adaptive fireworks for coarse registration
Donati et al. Deep orientation-aware functional maps: Tackling symmetry issues in shape matching
CN113780123B (en) Method, system, computer device and storage medium for generating countermeasure sample
CN113255816B (en) Directional attack countermeasure patch generation method and device
CN113496247A (en) Estimating an implicit likelihood of generating a countermeasure network
US20240320976A1 (en) Methods, systems, devices, media and products for video processing
CN116665282B (en) Face recognition model training method, face recognition method and device
Tang et al. Robust local-coordinate non-negative matrix factorization with adaptive graph for robust clustering
CN111597352B (en) Network space knowledge graph reasoning method and device combining ontology concepts and instances
Jia et al. Multiperspective progressive structure adaptation for JPEG steganography detection across domains
Li et al. Improving adversarial robustness of 3D point cloud classification models
CN115330579B (en) Model watermark construction method, device, equipment and storage medium
Shi et al. Deformable Convolution-Guided Multiscale Feature Learning and Fusion for UAV Object Detection
CN116188891B (en) Image generation method and system based on three-dimensional point cloud
CN109871249A (en) A kind of remote desktop operation method, apparatus, readable storage medium storing program for executing and terminal device
Xu et al. Head pose estimation using improved label distribution learning with fewer annotations
CN111915676B (en) Image generation method, device, computer equipment and storage medium
Yan et al. Multiscale feature aggregation network for salient object detection in optical remote sensing images
CN113610904B (en) 3D local point cloud countermeasure sample generation method, system, computer and medium
CN115828269A (en) Method, device, equipment and storage medium for constructing source code vulnerability detection model
Wang et al. Graph-based saliency detection using a learning joint affinity matrix
CN115424267A (en) Rotating target detection method and device based on Gaussian distribution
Lu et al. Feature Matching via Topology-Aware Graph Interaction Model
Hao et al. SuperGlue-based accurate feature matching via outlier filtering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant