CN115633179A - Compression method for real-time volume video streaming transmission
- Publication number: CN115633179A
- Application number: CN202211243542.2A
- Authority: CN (China)
- Prior art keywords: point cloud, frame, color information, point, slice
- Prior art date: 2022-10-12
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
        - H04N19/10—using adaptive coding
          - H04N19/169—characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
            - H04N19/184—the unit being bits, e.g. of the compressed video stream
        - H04N19/42—characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
        - H04N19/90—using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
          - H04N19/96—Tree coding, e.g. quad-tree coding
Abstract
The invention discloses a compression method for real-time volumetric video streaming. Cluster segmentation and two-stage point cloud registration are used to perform motion compensation at the level of individual slices, further dividing each point cloud frame group into point cloud slice groups. A common point cloud slice is constructed for each slice group, one set of color information is generated per slice, and the color information of predicted frames is converted into color residuals. Then, according to the user view information reported by the client, invisible point cloud slices are culled, and the geometry and color information of the remaining slices are encoded with an octree and the RAHT algorithm. Finally, the decoder decodes the mutually independent slice code streams in parallel using CPU multithreading, achieving high-speed decoding that meets real-time requirements. By creating common point cloud slices, the method effectively exploits the temporal similarity between volumetric video point cloud frames and improves the compression ratio by removing inter-frame temporal redundancy.
Description
Technical Field
The invention relates to the field of computer video coding, and in particular to a compression method for real-time volumetric video streaming.
Background
In virtual reality and augmented reality (VR/AR) applications, volumetric video composed of point cloud frames has broad prospects. Volumetric video provides an immersive experience because it allows the user to view the scene with six degrees of freedom. Each frame of a volumetric video is a 3D point cloud, and each point in the cloud typically contains three-dimensional coordinates <x, y, z>, an RGB color, and other optional attributes. To achieve sufficient realism, volumetric video typically requires a frame rate of at least 30 FPS and millions of points per frame, so transmitting it requires bandwidth on the order of Gbps, far beyond the capacity of current networks.
To transmit volumetric video, it must be encoded and compressed. Current mainstream schemes fall into two categories, two-dimensional projection compression and three-dimensional spatial tree-structure compression, and both have shortcomings. The two-dimensional scheme projects the point cloud into 2D images, encodes and decodes them with existing 2D codecs, and finally reconstructs the images back into a 3D point cloud. Although this achieves a high compression rate, the projection and reconstruction steps are complex and slow, so the scheme cannot meet real-time transmission requirements; moreover, points outside the view frustum or occluded from the user cannot be culled in advance as the user's viewpoint changes, which wastes resources. The three-dimensional scheme compresses the point cloud with spatial structures such as octrees; its compression rate is low, and the resulting video stream cannot fit within the bandwidth of the current commercial Internet. Therefore, no existing scheme simultaneously keeps the coded volumetric video bit rate within commercial Internet bandwidth under real-time streaming and supports view-frustum culling adapted to the user's viewpoint.
Disclosure of Invention
In order to overcome the above problems in the prior art, the present invention proposes a compression method for real-time volumetric video streaming.
The technical scheme adopted by the invention for solving the technical problems is as follows: a compression method for real-time volumetric video streaming, comprising the steps of:
step 1, clustering segmentation and motion compensation stage: cluster and segment the reference frame, and perform motion compensation via point cloud registration with the predicted frames to obtain point cloud slice groups;
step 2, common point cloud slice generation stage: generate a common point cloud slice for each point cloud slice group formed in step 1, and perform color interpolation and color redundancy removal;
step 3, compression coding stage: cull the invisible point cloud slices from the common point cloud slices generated in step 2, and compress the remaining data to form a volumetric video code stream;
step 4, parallel decoding stage: the client decodes the volumetric video code stream generated in step 3, and the decoding process is accelerated with CPU multithreading.
In the above compression method for real-time volumetric video streaming, step 1 specifically includes:
step 1.1, group the given volumetric video into fixed-length groups, each called a GOP, where each GOP consists of one reference frame and several predicted frames;
step 1.2, place clustering centers according to the point cloud density, and divide the reference frame into a plurality of point cloud slices whose points lie close together;
step 1.3, perform point cloud registration between each predicted frame and the reference frame, using the iterative closest point (ICP) algorithm. First, globally register the predicted frame against the reference frame to obtain an optimal transformation matrix, denoted M_g; then, locally register each point cloud slice divided from the reference frame in the previous step against the transformed predicted frame to obtain a locally optimal transformation matrix M_p. Record the transformed point cloud slices and the corresponding transformation matrix M_p·M'_g;
step 1.4, for each point of the predicted frame, compute the point cloud slice group to which it belongs; the slice group index l is computed as:

l = argmin_l min_{p' ∈ S_l^n} ||p − p'||

wherein S_l^n is the l-th reference-frame point cloud slice after the transformation for the n-th predicted frame, p is the current predicted-frame point being assigned, and p' is a point belonging to S_l^n.
In the above compression method for real-time volumetric video streaming, step 2 specifically includes:
step 2.1, for the point cloud slice groups generated in step 1, calculate the mean square error MSE(P_n) between each predicted-frame point cloud slice and the reference-frame point cloud slice, as follows:

MSE(P, Q) = (1/|P|) Σ_{p ∈ P} min_{q ∈ Q} ||p − q||²

wherein P_0 is the reference-frame point cloud slice, P_n is the predicted-frame point cloud slice, P and Q are two arbitrary input point clouds, p is a point belonging to point cloud P, and q is a point belonging to point cloud Q; MSE(P_n) denotes this error evaluated between P_n and P_0;
step 2.2, a common octree is constructed on the point cloud slice group, and common point cloud slices are generated on nodes of the octree by using k-means clustering;
step 2.3, the common point cloud slice performs color interpolation with reference to the slices in the point cloud slice group, generating one set of color information per slice; the color information is interpolated as:

cc_n = (1/w) Σ_j w_j · c_j^n

wherein cc_n is the interpolated color of the current point, c_j^n is the color information of the j-th neighboring point in the n-th slice, w_j is the reciprocal of the distance to that neighbor, and w is the sum of all weights;
step 2.4, for the color information of the common point cloud slice in different frames, retain the original data for the reference frame, and for each predicted frame retain only the difference between its color information and that of the reference frame, i.e., the color residual.
In the above compression method for real-time volumetric video streaming, step 3 specifically includes:
step 3.1, according to the user view information acquired by the client, cull all common point cloud slices that are invisible to the user using view-frustum culling and occlusion culling;
step 3.2, build an octree for the common point cloud slices visible to the user, discard information below the layer whose resolution is 1, represent each octree node with a single occupancy byte, traverse the octree in level order, and encode the byte stream with an entropy coding algorithm;
step 3.3, encode the color information of the common point cloud slices with the RAHT (Region-Adaptive Hierarchical Transform) algorithm; for predicted frames, the residual between the reference-frame and predicted-frame color information is encoded.
step 3.4, independently encode any slice whose mean square error MSE in step 2.1 exceeds a given threshold: build a separate octree on the slice, compress the geometry with an entropy coding algorithm, and compress the color information with the RAHT algorithm.
In the foregoing compression method for real-time volumetric video streaming, the step 4 specifically includes:
step 4.1, after receiving the video code stream, the client decoder first decodes the common point cloud slices: it entropy-decodes the octree byte stream, recovers the spatial structure of the octree, and recovers the geometric coordinates of the points from the leaf nodes;
step 4.2, the client decoder decodes the color information of each point cloud slice with the inverse RAHT and restores the color residuals of predicted frames to full color information;
step 4.3, the point cloud slices independently encoded in step 3.4 are decoded through steps 4.1 and 4.2 in the same manner as the predicted-frame point cloud slices.
The invention has the beneficial effects that:
(1) The invention constructs a volumetric video streaming compression framework suitable for real-time transmission;
(2) The invention uses common point cloud slices to remove temporal redundancy between different point cloud frames, substantially improving the compression ratio of the three-dimensional compression method;
(3) By eliminating dependencies between common point cloud slices, the invention meets the requirements of CPU multithreading, which increases decoding speed and satisfies real-time transmission requirements.
Drawings
The invention is further illustrated with reference to the following figures and examples.
FIG. 1 is a flow chart of an encoder of the present invention;
FIG. 2 is a flow chart of a decoder according to the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention provides a compression method for real-time volumetric video streaming. As shown in FIG. 1, it uses cluster segmentation and motion compensation to divide point cloud frames into point cloud slice groups, extracts the common features of each slice group by constructing a common point cloud slice for compression coding, fully exploits the temporal similarity and redundancy between point cloud frames, and thereby improves the compression ratio. As shown in FIG. 2, the method removes dependencies between point cloud slices so that CPU multithreading can be applied, fully utilizing computing resources to achieve real-time decoding.
The specific implementation is as follows:
(1) Clustering segmentation and motion compensation stage:
(1.1) Clustering segmentation stage: the volumetric video is divided into several point cloud frame groups; the first frame of each group is the reference frame and the remaining frames are predicted frames. Clustering centers are determined on the reference frame according to point density, and cluster segmentation generates the point cloud slices.
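For illustration, a minimal Python sketch of this clustering segmentation stage is given below, assuming the reference frame is available as an N×3 NumPy array; the slice count `n_slices` and the use of scikit-learn's k-means (as a stand-in for the density-driven placement of cluster centers described above) are assumptions, not part of the patent.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_reference_frame(ref_xyz: np.ndarray, n_slices: int = 64):
    """Cluster the reference-frame points into point cloud slices.

    ref_xyz  : (N, 3) array of reference-frame point coordinates.
    n_slices : number of slices (clusters) to produce -- an assumed parameter.
    Returns a list of index arrays, one per slice.
    """
    # k-means over coordinates stands in for density-based center placement.
    labels = KMeans(n_clusters=n_slices, n_init=10, random_state=0).fit_predict(ref_xyz)
    return [np.where(labels == l)[0] for l in range(n_slices)]
```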
(1.2) Motion compensation stage: two-stage motion compensation is performed for each predicted frame relative to the reference frame using the ICP algorithm. First, the predicted frame is globally registered against the reference frame to obtain an optimal transformation matrix, denoted M_g. Then, each point cloud slice divided from the reference frame in the previous step is locally registered against the transformed predicted frame to obtain a locally optimal transformation matrix M_p. The transformed point cloud slices and the corresponding transformation matrix M_p·M'_g are recorded.
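A sketch of the two-stage ICP registration under stated assumptions: Open3D's point-to-point ICP is used as the registration backend (the patent specifies only ICP), the correspondence threshold and identity initialization are illustrative, and the global transformation recorded alongside each local matrix is taken here to be M_g applied to the predicted frame.

```python
import numpy as np
import open3d as o3d

def _as_pcd(xyz: np.ndarray) -> o3d.geometry.PointCloud:
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)
    return pcd

def two_stage_icp(pred_xyz, ref_xyz, ref_slices, dist_thresh=2.0):
    """Global ICP of the predicted frame onto the reference frame (M_g),
    then local ICP of every reference slice onto the transformed predicted frame (M_p)."""
    est = o3d.pipelines.registration.TransformationEstimationPointToPoint()
    pred, ref = _as_pcd(pred_xyz), _as_pcd(ref_xyz)

    # Stage 1: global registration, predicted frame -> reference frame.
    M_g = o3d.pipelines.registration.registration_icp(
        pred, ref, dist_thresh, np.eye(4), est).transformation
    pred_aligned = _as_pcd(pred_xyz)
    pred_aligned.transform(M_g)  # predicted frame expressed in reference coordinates

    # Stage 2: local registration of each reference slice onto the aligned frame.
    transformed_slices, transforms = [], []
    for idx in ref_slices:  # idx: point indices of one reference-frame slice
        slice_pcd = _as_pcd(ref_xyz[idx])
        M_p = o3d.pipelines.registration.registration_icp(
            slice_pcd, pred_aligned, dist_thresh, np.eye(4), est).transformation
        slice_pcd.transform(M_p)
        transformed_slices.append(np.asarray(slice_pcd.points))
        transforms.append((M_p, M_g))  # record M_p together with the global transform
    return transformed_slices, transforms
```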
(1.3) Point cloud slice group generation stage: each point of the predicted frame is assigned to a point cloud slice group according to its nearest distance to the transformed point cloud slices obtained in the previous step. For each point of the predicted frame, the slice group index l is computed as:

l = argmin_l min_{p' ∈ S_l^n} ||p − p'||

wherein S_l^n is the l-th reference-frame point cloud slice after the transformation for the n-th predicted frame, p is the current predicted-frame point being assigned, and p' is a point belonging to S_l^n. In this way the predicted frame is also divided into point cloud slices, which together with the reference-frame point cloud slices form a point cloud slice group.
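The nearest-slice assignment above can be sketched with one KD-tree per transformed reference slice; SciPy's cKDTree is an implementation choice assumed here, not mandated by the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def assign_slice_groups(pred_xyz, transformed_slices):
    """For every predicted-frame point, return the index l of the transformed
    reference slice containing its nearest neighbour (the formula above)."""
    trees = [cKDTree(s) for s in transformed_slices]                      # one KD-tree per slice
    dists = np.stack([t.query(pred_xyz, k=1)[0] for t in trees], axis=1)  # (N, L) nearest distances
    return np.argmin(dists, axis=1)                                       # slice-group index per point
```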
(2) Common point cloud slice generation stage:
(2.1) Coding mode decision stage: the mean square error between point cloud slices is computed as:

MSE(P, Q) = (1/|P|) Σ_{p ∈ P} min_{q ∈ Q} ||p − q||²

wherein P_0 is the reference-frame point cloud slice, P_n is the predicted-frame point cloud slice, P and Q are two arbitrary input point clouds, p is a point belonging to point cloud P, and q is a point belonging to point cloud Q; MSE(P_n) is this error evaluated between P_n and P_0. Any predicted-frame point cloud slice whose mean square error exceeds a given threshold is encoded independently.
(2.2) a common point cloud slice generation stage: and constructing a common octree on the point cloud slice group, and generating a common point cloud slice on octree nodes by using K-means clustering.
(2.3) Color interpolation stage: the common point cloud slice interpolates colors with reference to the color information of the slices in the point cloud slice group; the interpolation weight is inversely proportional to the distance between points, and one set of color information is generated per slice. The color information is interpolated as:

cc_n = (1/w) Σ_j w_j · c_j^n

wherein cc_n is the interpolated color of the current point, c_j^n is the color information of the j-th neighboring point in the n-th slice, w_j is the reciprocal of the distance to that neighbor, and w is the sum of all weights.
(2.4) Color compensation stage: for the color information of the common point cloud slice in different frames, the predicted frames keep only the residual between their color information and that of the reference frame, while the reference frame keeps the original information.
(3) Compression coding stage:
(3.1) Invisible point cloud slice culling stage: according to the user view information acquired by the client, all common point cloud slices invisible to the user are removed by view-frustum culling and occlusion culling.
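A minimal sketch of view-frustum culling of common point cloud slices: each slice is reduced to an axis-aligned bounding box tested against the frustum planes. The plane representation (inward-pointing normals in world space) is an assumption, and occlusion culling is omitted here.

```python
import numpy as np

def slice_visible(slice_xyz: np.ndarray, frustum_planes: np.ndarray) -> bool:
    """frustum_planes: (6, 4) plane coefficients (a, b, c, d) with inward-pointing
    normals, i.e. a point x is inside when a*x + b*y + c*z + d >= 0.
    A slice is culled only if its bounding box lies entirely outside one plane."""
    mn, mx = slice_xyz.min(axis=0), slice_xyz.max(axis=0)
    # the 8 corners of the slice's axis-aligned bounding box
    corners = np.array([[x, y, z] for x in (mn[0], mx[0])
                                  for y in (mn[1], mx[1])
                                  for z in (mn[2], mx[2])])
    for a, b, c, d in frustum_planes:
        if np.all(corners @ np.array([a, b, c]) + d < 0):
            return False          # completely outside this plane -> cull the slice
    return True

def cull_slices(slices, frustum_planes):
    """Keep only the common point cloud slices visible inside the user's frustum."""
    return [s for s in slices if slice_visible(s, frustum_planes)]
```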
(3.2) Common point cloud slice geometry compression stage: an octree is built for the common point cloud slices visible to the user, information below the layer whose resolution is 1 is discarded, each node is represented by one eight-bit occupancy byte, and the octree is traversed in level order and compressed with an entropy coding algorithm.
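A sketch of the level-order occupancy-byte serialization of one slice octree (geometry only). The quantization to integer voxel coordinates and the chosen tree depth are assumptions, and the resulting byte stream would still be passed to an entropy coder, which is not shown.

```python
import numpy as np

def octree_occupancy_bytes(xyz: np.ndarray, depth: int = 9) -> bytes:
    """Serialize one point cloud slice as one occupancy byte per octree node,
    traversed in level (breadth-first) order; leaves at `depth` have resolution 1."""
    span = (xyz.max(axis=0) - xyz.min(axis=0)).max() + 1e-6
    grid = 1 << depth
    voxels = np.unique(((xyz - xyz.min(axis=0)) / span * grid).astype(np.int64), axis=0)
    voxels = np.clip(voxels, 0, grid - 1)

    stream = bytearray()
    level = [voxels]                                    # point sets of the current level's nodes
    for d in range(depth):
        shift = depth - d - 1
        next_level = []
        for pts in level:
            octant = ((pts[:, 0] >> shift) & 1) * 4 + \
                     ((pts[:, 1] >> shift) & 1) * 2 + \
                     ((pts[:, 2] >> shift) & 1)
            byte = 0
            for o in range(8):                          # one bit per occupied child octant
                child = pts[octant == o]
                if len(child):
                    byte |= 1 << o
                    next_level.append(child)
            stream.append(byte)                         # single occupancy byte per node
        level = next_level
    return bytes(stream)                                # fed to an entropy coder afterwards
```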
(3.3) Common point cloud slice color compression stage: the color information of the common point cloud slices, together with the residual between the reference-frame and predicted-frame color information, is compressed with the RAHT algorithm.
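A sketch of the core RAHT butterfly (Region-Adaptive Hierarchical Transform): two occupied sibling nodes with attribute vectors a1, a2 and point weights w1, w2 are merged into a low-pass (DC) coefficient, which propagates up the octree, and a high-pass (AC) coefficient, which is quantized and entropy coded. Only this elementary step is shown; the full per-axis bottom-up traversal of the octree and the inverse transform are omitted.

```python
import numpy as np

def raht_butterfly(a1: np.ndarray, a2: np.ndarray, w1: float, w2: float):
    """One RAHT merge of two occupied sibling nodes.

    Returns (dc, ac, w): the low-pass coefficient carried to the parent node
    (with combined weight w = w1 + w2) and the high-pass coefficient to be coded.
    """
    s1, s2 = np.sqrt(w1), np.sqrt(w2)
    norm = np.sqrt(w1 + w2)
    dc = (s1 * a1 + s2 * a2) / norm    # weighted low-pass (DC) component
    ac = (-s2 * a1 + s1 * a2) / norm   # orthogonal high-pass (AC) component
    return dc, ac, w1 + w2
```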
(3.4) Independent point cloud slice compression stage: the point cloud slices screened in step (2.1) for excessive mean square error are encoded independently; a separate octree is built on each such slice, the geometry is compressed with an entropy coding algorithm, and the color information is compressed with the RAHT algorithm.
(4) Parallel decoding stage:
(4.1) Geometry decoding stage: using CPU multithreading, each thread is responsible for decoding one common or independent point cloud slice; the octree byte stream is first entropy decoded, the octree structure is then restored, and the geometric coordinates of the points are reconstructed from the leaf nodes.
(4.2) Color decoding stage: using CPU multithreading, each thread is responsible for decoding one group of color information; the color information of the reference frame and of independently coded slices is decoded by the inverse RAHT, while for predicted frames the color residuals are decoded by the inverse RAHT and then added back to restore the original color information.
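A small sketch of restoring predicted-frame colors on the decoder side, assuming the inverse RAHT has already produced the reference-frame colors and the decoded residuals for a slice, and that colors are stored as 8-bit RGB.

```python
import numpy as np

def restore_predicted_colors(ref_rgb: np.ndarray, residual: np.ndarray) -> np.ndarray:
    """Add the decoded color residual of a predicted frame back onto the
    reference-frame colors of the same common point cloud slice."""
    restored = ref_rgb.astype(np.int16) + residual.astype(np.int16)
    return np.clip(restored, 0, 255).astype(np.uint8)
```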
(4.3) The point cloud slices independently encoded in step (3.4) are decoded through steps (4.1) and (4.2) in the same manner as predicted-frame point cloud slices.
(4.4) Because the point cloud slices have no dependency on one another, the above steps can be parallelized with CPU multithreading, each thread decoding one point cloud slice. The decoded point cloud slices are then passed to the renderer for rendering; a dispatch sketch follows below.
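A sketch of the parallel per-slice decoding dispatch: because the slice code streams are mutually independent, they can be handed to a worker pool. `decode_slice` is a hypothetical per-slice decoder combining steps (4.1) and (4.2). A process pool is used here because CPython threads do not parallelize CPU-bound Python code; a native decoder driven by real threads, as the patent describes, would behave analogously.

```python
from concurrent.futures import ProcessPoolExecutor

def decode_slice(slice_bitstream: bytes):
    """Hypothetical per-slice decoder: entropy-decode the octree, rebuild the
    geometry, inverse-RAHT the colors, and restore predicted-frame residuals."""
    raise NotImplementedError  # stands in for steps (4.1)-(4.2)

def decode_frame(slice_bitstreams, workers: int = 8):
    """Decode all mutually independent slice code streams of one frame in parallel."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(decode_slice, slice_bitstreams))
```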
The above algorithm achieves efficient encoding and decoding of volumetric video through cluster segmentation, motion compensation, and common point cloud slices. The general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the invention; thus, the invention is not intended to be limited to the examples shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (5)
1. A compression method for real-time volumetric video streaming, characterized by: the method comprises the following steps:
step 1, clustering segmentation and motion compensation stage: cluster and segment the reference frame, and perform motion compensation via point cloud registration with the predicted frames to obtain point cloud slice groups;
step 2, common point cloud slice generation stage: generate a common point cloud slice for each point cloud slice group formed in step 1, and perform color interpolation and color redundancy removal;
step 3, compression coding stage: cull the invisible point cloud slices from the common point cloud slices generated in step 2, and compress the remaining data to form a volumetric video code stream;
step 4, parallel decoding stage: the client decodes the volumetric video code stream generated in step 3, and the decoding process is accelerated with CPU multithreading.
2. The compression method for real-time volumetric video streaming according to claim 1, wherein the step 1 specifically comprises:
step 1.1, group the given volumetric video into fixed-length groups, each called a GOP, where each GOP consists of one reference frame and several predicted frames;
step 1.2, place clustering centers according to the point cloud density, and divide the reference frame into a plurality of point cloud slices whose points lie close together;
step 1.3, perform point cloud registration between each predicted frame and the reference frame using the iterative closest point (ICP) algorithm: first, globally register the predicted frame against the reference frame to obtain an optimal transformation matrix, denoted M_g; then, locally register each point cloud slice divided from the reference frame in the previous step against the transformed predicted frame to obtain a locally optimal transformation matrix M_p; record the transformed point cloud slices and the corresponding transformation matrix M_p·M'_g;
step 1.4, for each point of the predicted frame, compute the point cloud slice group to which it belongs; the slice group index l is computed as:

l = argmin_l min_{p' ∈ S_l^n} ||p − p'||

wherein S_l^n is the l-th reference-frame point cloud slice after the transformation for the n-th predicted frame, p is the current predicted-frame point being assigned, and p' is a point belonging to S_l^n.
3. A compression method for real-time volumetric video streaming according to claim 1, wherein said step 2 specifically comprises:
step 2.1, for the point cloud slice groups generated in step 1, calculate the mean square error MSE(P_n) between each predicted-frame point cloud slice and the reference-frame point cloud slice, as follows:

MSE(P, Q) = (1/|P|) Σ_{p ∈ P} min_{q ∈ Q} ||p − q||²

wherein P_0 is the reference-frame point cloud slice, P_n is the predicted-frame point cloud slice, P and Q are two arbitrary input point clouds, p is a point belonging to point cloud P, and q is a point belonging to point cloud Q; MSE(P_n) denotes this error evaluated between P_n and P_0;
step 2.2, a common octree is constructed on the point cloud slice group, and common point cloud slices are generated at the octree nodes using k-means clustering;
step 2.3, the common point cloud slice performs color interpolation with reference to the slices in the point cloud slice group, generating one set of color information per slice; the color information is interpolated as:

cc_n = (1/w) Σ_j w_j · c_j^n

wherein cc_n is the interpolated color of the current point, c_j^n is the color information of the j-th neighboring point in the n-th slice, w_j is the reciprocal of the distance to that neighbor, and w is the sum of all weights;
step 2.4, for the color information of the common point cloud slice in different frames, the reference frame retains the original data, and each predicted frame retains only the difference between the reference-frame color information and its own color information, i.e., the color residual.
4. The compression method for real-time volumetric video streaming according to claim 1, wherein said step 3 specifically comprises:
step 3.1, according to the user view information acquired by the client, cull all common point cloud slices invisible to the user using view-frustum culling and occlusion culling;
step 3.2, build an octree for the common point cloud slices visible to the user, discard information below the layer whose resolution is 1, represent each octree node with a single occupancy byte, traverse the octree in level order, and encode the byte stream with an entropy coding algorithm;
step 3.3, encode the color information of the common point cloud slices with the RAHT (Region-Adaptive Hierarchical Transform) algorithm, encoding for predicted frames the residual between the reference-frame and predicted-frame color information;
step 3.4, independently encode any slice whose mean square error MSE in step 2.1 exceeds a given threshold: build a separate octree on the slice, compress the geometry with an entropy coding algorithm, and compress the color information with the RAHT algorithm.
5. A compression method for real-time volumetric video streaming according to claim 1, characterized in that said step 4 specifically comprises:
step 4.1, after receiving the video code stream, the client decoder first decodes the common point cloud slices: it entropy-decodes the octree byte stream, recovers the spatial structure of the octree, and recovers the geometric coordinates of the points from the leaf nodes;
step 4.2, the client decoder decodes the color information of each point cloud slice with the inverse RAHT and restores the color residuals of predicted frames to full color information;
step 4.3, the point cloud slices independently encoded in step 3.4 are decoded through steps 4.1 and 4.2 in the same manner as the predicted-frame point cloud slices.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211243542.2A (CN115633179A) | 2022-10-12 | 2022-10-12 | Compression method for real-time volume video streaming transmission |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211243542.2A (CN115633179A) | 2022-10-12 | 2022-10-12 | Compression method for real-time volume video streaming transmission |

Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN115633179A (en) | 2023-01-20 |
Family

ID=84904399

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211243542.2A (CN115633179A, Pending) | Compression method for real-time volume video streaming transmission | 2022-10-12 | 2022-10-12 |

Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN115633179A (en) |
Cited By (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117979097A | 2024-03-27 | 2024-05-03 | Shenzhen University | Volume video streaming scheduling method, device, terminal and medium |
| CN117979097B | 2024-03-27 | 2024-07-23 | Shenzhen University | Volume video streaming scheduling method, device, terminal and medium |
Legal Events

| Date | Code | Title |
|---|---|---|
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |