
CN117751387A - Face mesh connectivity coding - Google Patents

Face mesh connectivity coding Download PDF

Info

Publication number
CN117751387A
CN117751387A
Authority
CN
China
Prior art keywords
encoding
information
vertices
connectivity
vertex
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202380013125.5A
Other languages
Chinese (zh)
Inventor
D. Graziosi
A. Zaghetto
A. Tabatabai
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Group Corp
Sony Optical Archive Inc
Original Assignee
Sony Group Corp
Optical Archive Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US17/987,833 external-priority patent/US20230306642A1/en
Application filed by Sony Group Corp, Optical Archive Inc filed Critical Sony Group Corp
Priority claimed from PCT/IB2023/052103 external-priority patent/WO2023180840A1/en
Publication of CN117751387A publication Critical patent/CN117751387A/en
Pending legal-status Critical Current

Links

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Connectivity information and mapping information of mesh surface patches can be encoded after projection into 2D. Regarding connectivity information, the projection operation does not change the connections between vertices, so the same list of connected vertices can be carried in the atlas data. Similarly, the mapping information does not change after projection and can be carried in the atlas data. Two methods for encoding connectivity and mapping information are disclosed. For connectivity information, a video-based method uses adjacent color coding. For mapping coordinates, the method uses the projected vertex positions. Connectivity and mapping can also be handled by an external mesh encoder. Temporal compression can be performed using the newly proposed mapping information.

Description

Face mesh connectivity coding
Cross Reference to Related Applications
The present application claims priority under 35 U.S.C. §119(e) to U.S. provisional patent application Ser. No. 63/269,905, entitled "PATCH MESH CONNECTIVITY CODING," filed March 25, 2022, which is hereby incorporated by reference in its entirety for all purposes.
Technical Field
The present invention relates to three-dimensional graphics. More particularly, the present invention relates to the encoding of three-dimensional graphics.
Background
Recently, new methods of compressing volumetric content, such as point clouds, based on projection from 3D to 2D are being standardized. This approach, known as V3C (visual volumetric video-based coding), maps the 3D volumetric data into several 2D patches, then further arranges the patches into an atlas image, which is subsequently encoded with a video encoder. The atlas images correspond to the geometry of the points, the respective texture, and an occupancy map that indicates which of the positions are to be considered for the point cloud reconstruction.
In 2017, MPEG issued a call for proposals (CfP) for compression of point clouds. After evaluating several proposals, MPEG is currently considering two different technologies for point cloud compression: 3D native coding technology (based on octrees and similar coding methods), and 3D-to-2D projection followed by conventional video coding. In the case of dynamic 3D scenes, MPEG is using test model software (TMC2) based on patch surface modeling, projection of patches from 3D to 2D images, and coding of the 2D images with a video encoder such as HEVC. This method has proven to be more efficient than native 3D coding and is able to achieve competitive bitrates with acceptable quality.
Due to the success of projection-based methods (also known as the video-based method, or V-PCC) to encode 3D point clouds, the standard is expected to include further 3D data, such as 3D meshes, in future versions. However, the current version of the standard is only suitable for the transmission of an unconnected set of points, so there is no mechanism to send the connectivity of the points, which is required in 3D mesh compression.
Methods have been proposed to extend the functionality of V-PCC to meshes as well. One possible way is to encode the vertices using V-PCC and then encode the connectivity using a mesh compression approach such as TFAN or Edgebreaker. A limitation of this approach is that the original mesh has to be dense, so that the point cloud generated from the vertices is not sparse and can be efficiently encoded after projection. Moreover, the order of the vertices affects the coding of connectivity, and different methods of reorganizing the mesh connectivity have been proposed. An alternative way to encode a sparse mesh is to use the RAW patch data to encode the vertex positions in 3D. Since RAW patches encode (x, y, z) directly, in this method all the vertices are encoded as RAW data, while the connectivity is encoded by a similar mesh compression method, as mentioned before. Notice that in the RAW patch, vertices may be sent in any preferred order, so the order generated from connectivity encoding can be used. This method can encode sparse point clouds; however, RAW patches are not efficient for encoding 3D data, and further data such as the attributes of the triangle faces may be missing from this approach.
Disclosure of Invention
Connectivity information and mapping information of mesh surface patches can be encoded after projection into 2D. Regarding connectivity information, the projection operation does not change the connections between vertices, so the same list of connected vertices can be carried in the atlas data. Similarly, the mapping information does not change after projection and can be carried in the atlas data. Two methods for encoding connectivity and mapping information are disclosed. For connectivity information, a video-based method uses adjacent color coding. For mapping coordinates, the method uses the projected vertex positions. Connectivity and mapping can also be handled by an external mesh encoder. Temporal compression can be performed using the newly proposed mapping information.
In one aspect, a method of encoding connectivity information and mapping information comprises encoding vertex mapping information, including delta information for geometric correction, and encoding patch connectivity information, including implementing mesh simplification by temporally fixing the positions of vertices. The method further comprises sending a flag indicating whether the vertex mapping information is implicitly sent or explicitly sent. Encoding the vertex mapping information and encoding the patch connectivity information are performed by an external encoder. Encoding the patch connectivity information includes using color coding in the occupancy map. Using color coding in the occupancy map is limited to a maximum of 4 colors. Encoding the vertex mapping information further includes using rate-distortion (RD) face transmission. Implementing mesh simplification includes sending only boundary vertices and not internal vertices. The internal vertices are determined based on a previous set of internal vertices from a previous frame.
In another aspect, an apparatus comprises a non-transitory memory for storing an application, the application for: encoding vertex mapping information, including delta information for geometric correction, and encoding patch connectivity information, including implementing mesh simplification by temporally fixing the positions of vertices; and a processor coupled to the memory, the processor configured for processing the application. The application is further configured for sending a flag indicating whether the vertex mapping information is implicitly sent or explicitly sent. Encoding the vertex mapping information and encoding the patch connectivity information are performed by an external encoder. Encoding the patch connectivity information includes using color coding in the occupancy map. Using color coding in the occupancy map is limited to a maximum of 4 colors. Encoding the vertex mapping information further includes using rate-distortion (RD) face transmission. Implementing mesh simplification includes sending only boundary vertices and not internal vertices. The internal vertices are determined based on a previous set of internal vertices from a previous frame.
In another aspect, a system comprises one or more cameras for acquiring three-dimensional content; an encoder for encoding the three-dimensional content by: encoding vertex mapping information, including delta information for geometric correction, and encoding patch connectivity information, including implementing mesh simplification by temporally fixing the positions of vertices; and a decoder for decoding the encoded three-dimensional content, the decoding comprising: adjusting a mesh using the delta information and determining internal vertices of the patch connectivity information based on previous internal vertices from a previous frame. The encoder is further configured for sending a flag indicating whether the vertex mapping information is implicitly sent or explicitly sent. Encoding the vertex mapping information and encoding the patch connectivity information are performed by an external encoder. Encoding the patch connectivity information includes using color coding in the occupancy map. Using color coding in the occupancy map is limited to a maximum of 4 colors. Encoding the vertex mapping information further includes using rate-distortion (RD) face transmission. Implementing mesh simplification includes sending only boundary vertices and not internal vertices.
Drawings
FIG. 1 illustrates a diagram of binary encoding in accordance with some embodiments.
Fig. 2 illustrates a diagram of color coding using the occupancy map, in accordance with some embodiments.
Fig. 3 illustrates a diagram of RD face transmission, in accordance with some embodiments.
Fig. 4 illustrates a diagram of temporal stability of mesh connectivity, in accordance with some embodiments.
Fig. 5 illustrates a flow chart of a method of face mesh connectivity encoding in accordance with some embodiments.
Fig. 6 illustrates a block diagram of an exemplary computing device configured to implement the method of face mesh connectivity encoding, in accordance with some embodiments.
Detailed Description
Connectivity information and mapping information of mesh surface patches can be encoded after projection into 2D. Regarding connectivity information, the projection operation does not change the connections between vertices, so the same list of connected vertices can be carried in the atlas data. Similarly, the mapping information does not change after projection and can be carried in the atlas data. Two methods for encoding connectivity and mapping information are disclosed. For connectivity information, a video-based method uses adjacent color coding. For mapping coordinates, the method uses the projected vertex positions. Connectivity and mapping can also be handled by an external mesh encoder. Temporal compression can be performed using the newly proposed mapping information.
The connectivity information indicates which pixels are connected. For a triangle, there are multiple sets of information. One set is the location of the triangle on the texture map. For each triangle on the texture map, there are two further sets of information: 1) how the vertices are connected in 3D (e.g., in a connectivity list) and 2) the vertex mapping information.
There are three ways to encode the vertex mapping information: implicit, explicit, and binary. In the implicit case, when a vertex is projected onto a 2D surface, the projection itself serves as the mapping; for example, the position the vertex hits when projected onto the projection surface gives its UV coordinates. In the explicit case, different coordinates are sent for the texture even though projection is performed. In the binary case, the explicit information is encoded with an external encoder (e.g., Draco or AFX).
The updated syntax for an explicit implementation is as follows: a flag is sent to indicate whether the vertex mapping information is implicitly sent or explicitly sent. If the information is explicitly sent, the values are scaled by the bit depth.
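As an illustrative sketch of such signaling (the 1-bit flag, the 10-bit default depth, the function shape, and the tuple-based bit writer below are assumptions for illustration, not syntax from any standard), an encoder might emit the flag and, in the explicit case, bit-depth-scaled UV values:

```python
def write_vertex_mapping(bits, uvs, explicit, bit_depth=10):
    """Append the mapping-mode flag and, if explicit, scaled UV values.

    bits      -- list collecting (value, nbits) tuples, a stand-in bit writer
    uvs       -- list of (u, v) pairs normalized to [0, 1]
    explicit  -- True to send coordinates explicitly, False for implicit
    bit_depth -- bit depth used to scale the normalized coordinates
    """
    bits.append((1 if explicit else 0, 1))  # 1-bit mapping-mode flag
    if explicit:
        scale = (1 << bit_depth) - 1        # e.g. 1023 for 10-bit values
        for u, v in uvs:
            bits.append((round(u * scale), bit_depth))
            bits.append((round(v * scale), bit_depth))
    return bits
```

In the implicit case only the flag is written, and a decoder would derive the UV coordinates from the projected vertex positions instead.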
If binary encoding is implemented, an external mesh encoder is able to be used to encode the patch mesh information. U and V are added to the PLY file, and the vertex mapping information is encoded as part of the PLY data. In some embodiments, delta information for the z coordinate is added. The delta information is able to be used for geometric correction.
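The idea of carrying U and V (and optionally a z-correction delta) as extra per-vertex properties can be sketched as an ASCII PLY writer; the property names u, v, and dz are illustrative assumptions, not names mandated by the PLY format or any standard:

```python
def ply_with_uv(vertices, faces, with_delta=False):
    """Build an ASCII PLY string whose vertex element carries u, v
    (and optionally a z-correction delta) as extra per-vertex properties.

    vertices -- tuples of (x, y, z, u, v) or (x, y, z, u, v, dz)
    faces    -- tuples of vertex indices
    """
    props = ["property float x", "property float y", "property float z",
             "property float u", "property float v"]
    if with_delta:
        props.append("property float dz")  # geometric-correction delta
    header = "\n".join(
        ["ply", "format ascii 1.0",
         f"element vertex {len(vertices)}"] + props +
        [f"element face {len(faces)}",
         "property list uchar int vertex_indices", "end_header"])
    body = [" ".join(str(c) for c in v) for v in vertices]
    body += [f"{len(f)} " + " ".join(str(i) for i in f) for f in faces]
    return header + "\n" + "\n".join(body) + "\n"
```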
There are many ways of encoding the patch connectivity information. In an explicit implementation, which pixels are connected is indicated in the syntax, so a list of pixel connections is sent in the patch. In a binary implementation, an external encoder is able to be utilized. In another implementation, mesh simplification is able to be performed by temporally fixing the positions of the vertices. In a color coding implementation, color coding of the occupancy map is used, where the triangles are able to be mapped using only four colors. In yet another implementation, rate-distortion (RD) face transmission is utilized.
FIG. 1 illustrates a diagram of binary encoding in accordance with some embodiments. The geometry image is used to generate map 100. There is still some information that is able to be modified by the video encoding. By explicitly sending delta information to the external encoder, a binary image is able to be generated while correcting errors from the video transmission. UV coordinates are also sent, which aids the video compression. In some embodiments, the patch mesh information is encoded using an external mesh encoder.
Fig. 2 illustrates a diagram of color coding using the occupancy map, in accordance with some embodiments. The mapping of the triangle indices is able to use only 4 colors. Since the figure is rendered in grayscale, it may be difficult to distinguish some of the colors, edges and vertices, but only 4 colors are used, and a triangle of one color does not share an edge with a triangle of the same color. The connectivity information is added to the occupancy map using only the luminance channel (4 colors -> (0,0,0), (64,0,0), (128,0,0), (255,0,0)). Further details regarding color coding and mesh compression are able to be found in U.S. patent application Ser. No. 17/322,662, entitled "VIDEO BASED MESH COMPRESSION," filed May 17, 2021, U.S. provisional patent application Ser. No. 63/088,705, entitled "VIDEO BASED MESH COMPRESSION," filed 10/2020, and U.S. provisional patent application Ser. No. 63/087,958, entitled "VIDEO BASED MESH COMPRESSION," filed 6/2020, which are hereby incorporated by reference in their entireties for all purposes. The 3D mesh or 2D patch mesh connectivity is able to be encoded using the occupancy map and exploiting temporal correlation by applying video-based mesh compression. In addition, by using the color map 200, vertices are able to be detected, where the intersections of the triangles are able to be detected based on the colors.
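A minimal sketch of this coloring (assuming triangle patches, shared-edge adjacency, and the four luminance values listed above; a production encoder would choose a more scalable strategy than plain backtracking) assigns each triangle one of the four luma values so that no two edge-adjacent triangles share a value:

```python
from itertools import combinations

LUMA = (0, 64, 128, 255)  # the four occupancy-map luminance values

def color_triangles(faces):
    """Return a luma value per triangle so edge-adjacent triangles differ.

    faces -- list of 3-tuples of vertex indices; adjacency = shared edge.
    Uses simple backtracking over the four allowed values; returns None
    if no assignment is found.
    """
    # Build adjacency: triangles sharing an (unordered) edge are neighbors.
    edge_to_faces = {}
    for fi, f in enumerate(faces):
        for e in combinations(sorted(f), 2):
            edge_to_faces.setdefault(e, []).append(fi)
    adj = {fi: set() for fi in range(len(faces))}
    for fs in edge_to_faces.values():
        for a, b in combinations(fs, 2):
            adj[a].add(b)
            adj[b].add(a)

    colors = {}

    def assign(fi):
        if fi == len(faces):
            return True
        for c in LUMA:  # try each luma value against colored neighbors
            if all(colors.get(n) != c for n in adj[fi]):
                colors[fi] = c
                if assign(fi + 1):
                    return True
                del colors[fi]
        return False

    return [colors[i] for i in range(len(faces))] if assign(0) else None
```

The resulting per-triangle values would then be written into the luminance channel of the occupancy map over each triangle's area.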
Fig. 3 illustrates a diagram of RD face transmission, in accordance with some embodiments. At 300, a mesh is received/acquired. At 302, the mesh connectivity points are encoded by the encoder. At the decoder side, the mesh is reconstructed at 304, but the positions of the points may be slightly different from the original mesh. Correction information (e.g., delta information) is sent to the decoder so that, at 306, the decoder is able to adjust the mesh to bring it closer to the original mesh.
The vertices of the input mesh are V-PCC encoded and locally decoded. The encoder generates a mesh from the decoded point cloud. The encoder compares the generated face/connectivity information with the original information. The encoder signals the non-matching faces, subject to a rate/distortion tradeoff. The decoder decodes the mesh vertices using V-PCC and generates a mesh from the decoded vertices. The decoder uses the signaled non-matching faces to modify the mesh. In some embodiments, instead of 3D, a similar method is also able to be applied to encode the UV coordinates and their connectivity using 2D triangulation.
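The encoder-side decision described above can be sketched as follows, with a deliberately simplified placeholder cost model (the real rate/distortion tradeoff is encoder-specific and not specified here): only the faces the decoder failed to regenerate are candidates, and they are signaled only when the distortion removed outweighs the rate spent:

```python
def faces_to_signal(original, regenerated, bits_per_face,
                    distortion_per_face, lam=1.0):
    """Return the set of non-matching faces worth signaling.

    original / regenerated -- iterables of faces (tuples of vertex indices)
    bits_per_face          -- rate cost of signaling one face correction
    distortion_per_face    -- distortion removed by one face correction
    lam                    -- Lagrange multiplier trading rate vs distortion
    """
    # Compare faces up to vertex order by sorting each index tuple.
    orig = {tuple(sorted(f)) for f in original}
    regen = {tuple(sorted(f)) for f in regenerated}
    mismatched = orig - regen  # faces the decoder failed to regenerate
    # Signal corrections only if distortion saved outweighs rate spent.
    if distortion_per_face > lam * bits_per_face:
        return mismatched
    return set()
```

A real encoder would evaluate the tradeoff per face and with measured costs rather than a single global threshold; this sketch only shows the shape of the decision.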
Fig. 4 illustrates a diagram of temporal stability of mesh connectivity, in accordance with some embodiments. When a patch is sent, only the boundary vertices are sent; the internal vertices are not sent. The decoder determines the internal vertices (e.g., based on a previous frame or a subsequent frame). For example, the internal vertices from patch 400 in Frame 1 are used as the internal vertices of patch 402 in Frame 2. In some embodiments, a first set of internal vertices (e.g., for Frame 0 or Frame 1) is sent, so that the internal vertices of a previous frame are available for future frames. The decoder is able to regenerate the triangles from the boundary vertices and the internal vertices. The same internal points are able to be used even if the patch rotates from one frame to the next. The internal triangles may be slightly different due to the rotation, but this is acceptable. By not sending the internal vertices, fewer bits are sent.
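A decoder-side sketch of this reuse (the per-patch cache and the function shape are illustrative assumptions, not part of any specified syntax): internal vertices received for an early frame are cached per patch and merged with the boundary vertices received in later frames, after which triangulation would run on the combined set:

```python
def rebuild_patch_vertices(patch_id, boundary, cache, internal=None):
    """Return the full vertex list for a patch, reusing cached internals.

    patch_id -- stable identifier matching the patch across frames
    boundary -- boundary vertices sent for the current frame
    cache    -- dict mapping patch_id -> internal vertices of a past frame
    internal -- internal vertices, sent only for the first frame(s)
    """
    if internal is not None:
        cache[patch_id] = list(internal)  # remember for future frames
    # Later frames send no internals; reusing the cached set saves bits
    # even if the patch rotated between frames.
    return list(boundary) + cache.get(patch_id, [])
```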
Fig. 5 illustrates a flowchart of a method of face mesh connectivity encoding in accordance with some embodiments. In step 500, vertex mapping information is encoded. Delta information for geometric correction is able to be included in the vertex mapping information. A flag indicating whether the vertex mapping information is implicitly sent or explicitly sent is able to be sent. In some embodiments, encoding the vertex mapping information uses RD face transmission. In step 502, patch connectivity information is encoded. Encoding the patch connectivity information includes implementing mesh simplification by temporally fixing the positions of vertices. In some embodiments, mesh simplification includes sending only boundary vertices and not internal vertices. The patch connectivity information is able to include color coding in the occupancy map. The color coding is limited to a maximum of 4 colors. In some embodiments, encoding the vertex mapping information and encoding the patch connectivity information are performed by an external encoder. In some embodiments, the order of the steps is modified. In some embodiments, fewer or additional steps are implemented. For example, the decoder is able to use the delta information to adjust the mesh. In another example, the internal vertices of the patch connectivity are able to be determined from previous internal vertices from a previous frame.
Fig. 6 illustrates a block diagram of an exemplary computing device configured to implement the method of face mesh connectivity encoding, in accordance with some embodiments. Computing device 600 is able to be used to acquire, store, compute, process, communicate and/or display information such as images and videos including 3D content. The computing device 600 is able to implement any of the encoding/decoding aspects. In general, a hardware structure suitable for implementing the computing device 600 includes a network interface 602, a memory 604, a processor 606, I/O device(s) 608, a bus 610 and a storage device 612. The choice of processor is not critical, as long as a suitable processor with sufficient speed is chosen. The memory 604 is able to be any conventional computer memory known in the art. The storage device 612 is able to include a hard drive, CDROM, CDRW, DVD, DVDRW, High Definition disc/drive, ultra-HD drive, flash memory card or any other storage device. The computing device 600 is able to include one or more network interfaces 602. An example of a network interface includes a network card connected to an Ethernet or other type of LAN. The I/O device(s) 608 are able to include one or more of the following: keyboard, mouse, monitor, screen, printer, modem, touchscreen, button interface and other devices. Face mesh connectivity encoding application(s) 630 used to implement the face mesh connectivity encoding implementation are able to be stored in the storage device 612 and memory 604 and processed as applications are typically processed. More or fewer components than shown in Fig. 6 are able to be included in the computing device 600. In some embodiments, face mesh connectivity encoding hardware 620 is included. Although the computing device 600 in Fig. 6 includes applications 630 and hardware 620 for the face mesh connectivity encoding implementation, the face mesh connectivity encoding method is able to be implemented on a computing device in hardware, firmware, software or any combination thereof. For example, in some embodiments, the face mesh connectivity encoding applications 630 are programmed in a memory and executed using a processor. In another example, in some embodiments, the face mesh connectivity encoding hardware 620 is programmed hardware logic, including logic gates specifically designed to implement the face mesh connectivity encoding method.
In some embodiments, the face mesh connectivity encoding application(s) 630 include several applications and/or modules. In some embodiments, modules include one or more sub-modules as well. In some embodiments, fewer or additional modules are able to be included.
Examples of suitable computing devices include personal computers, laptop computers, computer workstations, servers, mainframe computers, handheld computers, personal digital assistants, cellular/mobile phones, smart appliances, gaming machines, digital cameras, digital video cameras, camera phones, smart phones, portable music players, tablet computers, mobile devices, video players, video disc recorders/players (e.g., DVD recorders/players, high-definition disc recorders/players, ultra-high-definition disc recorders/players), televisions, home entertainment systems, augmented reality devices, virtual reality devices, smart jewelry (e.g., smart watches), vehicles (e.g., autopilots), or any other suitable computing device.
To utilize the face mesh connectivity encoding method, a device acquires or receives 3D content (e.g., point cloud content). The face mesh connectivity encoding method is able to be implemented with user assistance or automatically without user involvement.
In operation, the face mesh connectivity encoding method enables more efficient and more accurate 3D content encoding compared to previous implementations.
Some embodiments of face mesh connectivity encoding
1. A method of encoding connectivity information and mapping information, comprising:
encoding vertex mapping information including delta information for geometric correction; and
encoding patch connectivity information, including implementing mesh simplification by temporally fixing the positions of vertices.
2. The method according to clause 1, further comprising transmitting a flag indicating whether the vertex mapping information is implicitly transmitted or explicitly transmitted.
3. The method according to clause 1, wherein encoding the vertex mapping information and encoding the patch connectivity information are performed by an external encoder.
4. The method according to clause 1, wherein encoding the patch connectivity information comprises using color coding in the occupancy map.
5. The method according to clause 4, wherein using color coding in the occupancy map is limited to a maximum of 4 colors.
6. The method according to clause 1, wherein encoding the vertex mapping information further comprises using rate-distortion (RD) face transmission.
7. The method according to clause 1, wherein implementing mesh simplification includes sending only boundary vertices and not internal vertices.
8. The method according to clause 7, wherein the internal vertices are determined based on a previous set of internal vertices from a previous frame.
9. An apparatus, comprising:
a non-transitory memory for storing an application for:
encoding vertex mapping information including delta information for geometric correction; and
encoding patch connectivity information, including implementing mesh simplification by temporally fixing the positions of vertices; and
a processor coupled to the memory, the processor configured to process the application.
10. The apparatus according to clause 9, wherein the application is further configured to send a flag indicating whether the vertex mapping information is implicitly sent or explicitly sent.
11. The apparatus according to clause 9, wherein encoding the vertex mapping information and encoding the patch connectivity information are performed by an external encoder.
12. The apparatus according to clause 9, wherein encoding the patch connectivity information comprises using color coding in the occupancy map.
13. The apparatus according to clause 12, wherein using color coding in the occupancy map is limited to a maximum of 4 colors.
14. The apparatus according to clause 9, wherein encoding the vertex mapping information further comprises using rate-distortion (RD) face transmission.
15. The apparatus according to clause 9, wherein implementing mesh simplification includes sending only boundary vertices and not internal vertices.
16. The apparatus according to clause 15, wherein the internal vertices are determined based on a previous set of internal vertices from a previous frame.
17. A system, comprising:
one or more cameras for acquiring three-dimensional content;
an encoder for encoding the three-dimensional content by:
encoding vertex mapping information including delta information for geometric correction; and
encoding patch connectivity information, including implementing mesh simplification by temporally fixing the positions of vertices; and
a decoder for decoding the encoded three-dimensional content, the decoding comprising:
adjusting a mesh using the delta information; and
determining internal vertices of the patch connectivity information based on previous internal vertices from a previous frame.
18. The system according to clause 17, wherein the encoder is further configured to send a flag indicating whether the vertex mapping information is implicitly sent or explicitly sent.
19. The system according to clause 17, wherein encoding the vertex mapping information and encoding the patch connectivity information are performed by an external encoder.
20. The system according to clause 17, wherein encoding the patch connectivity information comprises using color coding in the occupancy map.
21. The system according to clause 20, wherein using color coding in the occupancy map is limited to a maximum of 4 colors.
22. The system according to clause 17, wherein encoding the vertex mapping information further comprises using rate-distortion (RD) face transmission.
23. The system according to clause 17, wherein implementing mesh simplification includes sending only boundary vertices and not internal vertices.
The present invention has been described in terms of specific embodiments incorporating details to facilitate the understanding of the principles of construction and operation of the invention. Such references herein to specific embodiments and details thereof are not intended to limit the scope of the claims appended hereto. It will be apparent to those skilled in the art that other various modifications may be made in the embodiments chosen for illustration without departing from the spirit and scope of the invention as defined in the claims.

Claims (23)

1. A method of encoding connectivity information and mapping information, comprising:
encoding vertex mapping information including delta information for geometric correction; and
encoding patch connectivity information, including implementing mesh simplification by temporally fixing the positions of vertices.
2. The method of claim 1, further comprising transmitting a flag indicating whether vertex mapping information is implicitly transmitted or explicitly transmitted.
3. The method of claim 1, wherein encoding the vertex mapping information and encoding the patch connectivity information are performed by an external encoder.
4. The method of claim 1, wherein encoding the patch connectivity information comprises using color coding in an occupancy map.
5. The method of claim 4, wherein using color coding in the occupancy map is limited to a maximum of 4 colors.
6. The method of claim 1, wherein encoding the vertex mapping information further comprises using rate-distortion (RD) face transmission.
7. The method of claim 1, wherein implementing mesh simplification includes sending only boundary vertices and not internal vertices.
8. The method of claim 7, wherein the internal vertices are determined based on a previous set of internal vertices from a previous frame.
9. An apparatus, comprising:
a non-transitory memory for storing an application for:
encoding vertex mapping information including delta information for geometric correction; and
encoding patch connectivity information, including implementing mesh simplification by temporally fixing the positions of vertices; and
a processor coupled to the memory, the processor configured to process the application.
10. The apparatus of claim 9, wherein the application is further configured to send a flag indicating whether vertex mapping information is implicitly sent or explicitly sent.
11. The apparatus of claim 9, wherein encoding the vertex mapping information and encoding the patch connectivity information are performed by an external encoder.
12. The apparatus of claim 9, wherein encoding the patch connectivity information comprises using color coding in an occupancy map.
13. The apparatus of claim 12, wherein using color coding in the occupancy map is limited to a maximum of 4 colors.
14. The apparatus of claim 9, wherein encoding the vertex mapping information further comprises using rate-distortion (RD) face transmission.
15. The apparatus of claim 9, wherein implementing mesh simplification includes sending only boundary vertices and not internal vertices.
16. The apparatus of claim 15, wherein the internal vertices are determined based on a previous set of internal vertices from a previous frame.
17. A system, comprising:
one or more cameras for acquiring three-dimensional content;
an encoder for encoding the three-dimensional content by:
encoding vertex mapping information including delta information for geometric correction; and
encoding patch connectivity information, including implementing mesh simplification by temporally fixing the positions of vertices; and
a decoder for decoding the encoded three-dimensional content, the decoding comprising:
adjusting a mesh using the delta information; and
determining internal vertices of the patch connectivity information based on previous internal vertices from a previous frame.
18. The system of claim 17, wherein the encoder is further configured to send a flag indicating whether vertex mapping information is implicitly sent or explicitly sent.
19. The system of claim 17, wherein encoding the vertex mapping information and encoding the patch connectivity information are performed by an external encoder.
20. The system of claim 17, wherein encoding the patch connectivity information comprises using color coding in an occupancy map.
21. The system of claim 20, wherein using color coding in the occupancy map is limited to a maximum of 4 colors.
22. The system of claim 17, wherein encoding the vertex mapping information further comprises using rate-distortion (RD) face transmission.
23. The system of claim 17, wherein implementing mesh simplification includes sending only boundary vertices and not internal vertices.
CN202380013125.5A 2022-03-25 2023-03-06 Face mesh connectivity coding Pending CN117751387A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/269,905 2022-03-25
US17/987,833 2022-11-15
US17/987,833 US20230306642A1 (en) 2022-03-25 2022-11-15 Patch mesh connectivity coding
PCT/IB2023/052103 WO2023180840A1 (en) 2022-03-25 2023-03-06 Patch mesh connectivity coding

Publications (1)

Publication Number Publication Date
CN117751387A 2024-03-22

Family

ID=90257763

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202380013125.5A Pending CN117751387A (en) 2022-03-25 2023-03-06 Face mesh connectivity coding

Country Status (1)

Country Link
CN (1) CN117751387A (en)

Similar Documents

Publication Publication Date Title
JP7303992B2 (en) Mesh compression via point cloud representation
CN112204618B (en) Method, apparatus and system for mapping 3D point cloud data into 2D surfaces
CN113557745B (en) Point cloud geometry filling
CN113302940B (en) Point cloud encoding using homography
US20220321912A1 (en) Point cloud data transmission device, point cloud data transmission method, point cloud data reception device, and point cloud data reception method
US20200195967A1 (en) Point cloud auxiliary information coding
CN117751387A (en) Face mesh connectivity coding
US20230306642A1 (en) Patch mesh connectivity coding
WO2023180840A1 (en) Patch mesh connectivity coding
US20230306641A1 (en) Mesh geometry coding
US20230306683A1 (en) Mesh patch sub-division
US12183045B2 (en) Mesh patch simplification
US20240233189A1 (en) V3c syntax extension for mesh compression using sub-patches
CN118302794A (en) Grid geometry coding
US20240404200A1 (en) V3c syntax new basemesh patch data unit
US20240127489A1 (en) Efficient mapping coordinate creation and transmission
US20230306644A1 (en) Mesh patch syntax
US20240153147A1 (en) V3c syntax extension for mesh compression
WO2023180839A1 (en) Mesh geometry coding
US20240177355A1 (en) Sub-mesh zippering
US20230306687A1 (en) Mesh zippering
WO2023180841A1 (en) Mesh patch sub-division
CN117897732A (en) Lattice face syntax
WO2024150046A1 (en) V3c syntax extension for mesh compression using sub-patches
EP4479940A1 (en) Mesh zippering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination