
CN116310037A - Model appearance updating method and device and computing equipment - Google Patents

Model appearance updating method and device and computing equipment Download PDF

Info

Publication number
CN116310037A
CN116310037A (application CN202310081090.0A)
Authority
CN
China
Prior art keywords
model
coordinate
texture map
state
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310081090.0A
Other languages
Chinese (zh)
Inventor
冯浩霖
郑宇航
罗月花
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XFusion Digital Technologies Co Ltd
Original Assignee
XFusion Digital Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by XFusion Digital Technologies Co Ltd filed Critical XFusion Digital Technologies Co Ltd
Priority to CN202310081090.0A priority Critical patent/CN116310037A/en
Publication of CN116310037A publication Critical patent/CN116310037A/en
Priority to PCT/CN2023/117340 priority patent/WO2024156180A1/en
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/04: Texture mapping
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00: Computer-aided design [CAD]
    • G06F 30/20: Design optimisation, verification or simulation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a model appearance updating method, a model appearance updating apparatus, and a computing device, relates to the field of computer technologies, and simplifies the operation of updating the appearance of a three-dimensional model, thereby improving the efficiency of model appearance updates. The method includes the following steps: displaying a first model, where the first model is a three-dimensional model with a first texture appearance generated by rendering according to a first texture map; in response to receiving a trigger operation on a first position on the first model, determining a first coordinate of a target position, where the target position is the position of the first position mapped onto the surface of the first model, and the first coordinate of the target position is the three-dimensional coordinate of the target position in the three-dimensional space in which the first model is located; determining a second coordinate of the target position based on the first coordinate, where the second coordinate is the two-dimensional coordinate of the first coordinate mapped onto the two-dimensional space in which the first texture map is located; and updating the appearance of the first model based on the second coordinate of the target position and a second texture map.

Description

Model appearance updating method and device and computing equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for updating model appearance, and a computing device.
Background
With the continuous development of computer simulation technology, in order to intuitively display physical devices, including servers, storage devices, switches, cabinets, and the like, on a computer, a three-dimensional model simulating a physical device can be created and displayed on the computer.
Currently, after a three-dimensional model simulating a physical device has been created, if the appearance of a certain part of the three-dimensional model needs to be adjusted, the three-dimensional model must be re-modeled; the re-modeling process includes modifying the model skeleton, adding model maps, adding code, and the like. For example, to add the color block of an indicator light to the simulated indicator-light region of an already created three-dimensional model of a hard disk displayed in a web page, the three-dimensional model of the hard disk must be re-modeled in modeling software and then re-released. As a result, the steps for updating the appearance of the model are cumbersome.
In the related art, re-modeling the three-dimensional model involves many modifications, so the implementation period of a model appearance update is long and the update efficiency is correspondingly low.
Disclosure of Invention
The embodiments of the present application provide a model appearance updating method, apparatus, and computing device, which simplify the operation of updating the appearance of a three-dimensional model and thereby improve the efficiency of model appearance updates.
In a first aspect, the present application provides a method for updating a model appearance, the method including: displaying a first model, wherein the first model is a three-dimensional model with a first texture appearance generated according to first texture map rendering; in response to receiving a triggering operation on a first position, determining a first coordinate of a target position, wherein the first position is a corresponding position of the first model displayed on a display interface, the target position is a position of the first position mapped on the surface of the first model, and the first coordinate of the target position is a three-dimensional coordinate of the target position in a three-dimensional space in which the first model is located; determining a second coordinate of the target position based on the first coordinate of the target position, wherein the second coordinate of the target position is a two-dimensional coordinate of the first coordinate of the target position mapped on a two-dimensional space where the first texture map is located; the appearance of the first model is updated based on the second coordinates of the target location and the second texture map.
It will be appreciated that, by performing a trigger operation on a corresponding first position on the displayed first model, the first coordinate of the target position (the first position mapped onto the three-dimensional model surface) can be determined; from the first coordinate, the second coordinate of the target position mapped into the two-dimensional space of the first texture map can be determined; and the appearance of the first model can be updated according to the second coordinate. That is, by acquiring the position at which the trigger operation is performed on the first model, the area of the first model on which the second texture map needs to be displayed can be determined. The three-dimensional problem of changing the appearance of the three-dimensional model is thus turned into the two-dimensional problem of modifying the mapped texture map. By reducing the dimension of the coordinates of the target position, three-dimensional re-modeling can be omitted on the basis of the displayed three-dimensional model: the appearance of the model is updated by directly displaying the texture map on the model at the target position. This simplifies the operation of updating the appearance of a three-dimensional model and improves the efficiency of model appearance updates.
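The dimension reduction described above, from the first coordinate (3D) to the second coordinate (2D), can be sketched with plain JavaScript. The helper names below are illustrative, not from the application: given a 3D point on one triangle of the model surface, its UV coordinate is interpolated from the triangle's vertex UVs via barycentric weights (a three.js Raycaster hit exposes the same result directly as `intersection.uv`).

```javascript
// Map a 3D point on a triangle of the model surface to 2D texture (UV)
// coordinates via barycentric weights. Illustrative sketch only.
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

// p: 3D point on the triangle (the "first coordinate" of the target position)
// v0..v2: triangle vertices; uv0..uv2: their texture coordinates
function pointToUV(p, v0, v1, v2, uv0, uv1, uv2) {
  const e0 = sub(v1, v0), e1 = sub(v2, v0), ep = sub(p, v0);
  const d00 = dot(e0, e0), d01 = dot(e0, e1), d11 = dot(e1, e1);
  const d20 = dot(ep, e0), d21 = dot(ep, e1);
  const denom = d00 * d11 - d01 * d01;
  const v = (d11 * d20 - d01 * d21) / denom;  // weight of v1
  const w = (d00 * d21 - d01 * d20) / denom;  // weight of v2
  const u = 1 - v - w;                        // weight of v0
  return [ // the "second coordinate": 2D position on the texture map
    u * uv0[0] + v * uv1[0] + w * uv2[0],
    u * uv0[1] + v * uv1[1] + w * uv2[1],
  ];
}
```

For example, the midpoint of the edge between the second and third vertices of a unit triangle maps to the midpoint of the corresponding UV edge.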
In one possible implementation, updating the appearance of the first model based on the second coordinates of the target location and the second texture map includes: determining a target area based on the second coordinates of the target position; the second texture map is superimposed on the target area of the first texture map to update the appearance of the first model.
It will be appreciated that updating the appearance of the model may be achieved by determining the region on the first model where the second texture map needs to be superimposed, and superimposing the second texture map on the first texture map of that region.
In one possible implementation, the target region is a region of pixels on the first texture map that contains the second coordinate of the target position and conforms to the size and shape of a preset two-dimensional graphic.
It can be understood that, if a preset two-dimensional graphic is provided, then after the second coordinate of the target position is obtained, the region that contains the pixel at the second coordinate and conforms to the size and shape of the preset two-dimensional graphic can be determined as the target region. The second texture map can then be superimposed on the first texture map according to the size and shape of the preset two-dimensional graphic, which improves the accuracy of the model appearance update.
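One way to compute such a target region, sketched here under the assumption that the preset graphic is a w×h-pixel rectangle centered on the clicked pixel (the parameterization and function name are illustrative):

```javascript
// Given the pixel of the second coordinate and a preset rectangular
// graphic of w×h pixels, compute the target area on the first texture
// map, clamped so the area stays inside the map's bounds.
function targetArea(px, py, w, h, texW, texH) {
  let x = Math.round(px - w / 2), y = Math.round(py - h / 2);
  x = Math.min(Math.max(x, 0), texW - w);  // keep the area inside the map
  y = Math.min(Math.max(y, 0), texH - h);
  return { x, y, w, h };  // region where the second texture map is drawn
}
```

Clicks near the edge of the texture are clamped so the preset graphic never overhangs the map.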
In one possible implementation, the target region is a region on the first texture map whose color is the same as that of the pixel at the second coordinate of the target position, or whose color error relative to that pixel is within a specified interval.
It can be understood that determining the target region as the region whose color is the same as, or whose color error is within a specified interval of, the color of the pixel at the target position on the first model allows the region of the first texture map on which the second texture map needs to be superimposed to be located accurately, improving the accuracy of the model appearance update.
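Such a color-similarity region can be grown by a standard flood fill, sketched below under simplifying assumptions: pixels are single-channel values in a flat array, and "color error within a specified interval" is a per-channel tolerance (the function name and representation are illustrative).

```javascript
// Grow the target region outward from the clicked pixel, taking in
// 4-connected neighbors whose color error stays within tolerance `tol`.
function colorRegion(pixels, width, height, sx, sy, tol) {
  const seed = pixels[sy * width + sx];
  const close = (c) => Math.abs(c - seed) <= tol;
  const seen = new Set([sy * width + sx]);
  const stack = [[sx, sy]];
  while (stack.length) {
    const [x, y] = stack.pop();
    for (const [nx, ny] of [[x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]]) {
      const i = ny * width + nx;
      if (nx >= 0 && nx < width && ny >= 0 && ny < height &&
          !seen.has(i) && close(pixels[i])) {
        seen.add(i);
        stack.push([nx, ny]);
      }
    }
  }
  return seen;  // indices of pixels belonging to the target region
}
```

On a texture where a uniformly colored indicator-light area sits next to differently colored pixels, the fill stops at the color boundary, yielding exactly the area to overlay.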
In one possible implementation, the method further includes: in response to receiving a trigger operation on a second position, determining a third coordinate of a third position, wherein the second position is a position, other than the first position, at which the first model is displayed on the display interface, the third position is the position of the second position mapped onto the surface of the first model, and the third coordinate of the third position is the three-dimensional coordinate of the third position in the three-dimensional space in which the first model is located; determining a fourth coordinate of the third position based on the third coordinate, wherein the fourth coordinate is the two-dimensional coordinate of the third position mapped onto the two-dimensional space in which the first texture map is located; superimposing the second texture map at the position of the first texture map corresponding to the fourth coordinate; and updating the appearance of the first model according to the second texture map.
It can be understood that, by performing trigger operations on second positions other than the first position on the surface of the first model, the third coordinate of the third position corresponding to each second position can be obtained, and the second texture map can be superimposed at the corresponding position on the first texture map based on that coordinate. After trigger operations are performed on the second positions in batches, the second texture map is superimposed at each corresponding position of the first texture map, and the appearance of the first model is updated according to the superimposed second texture maps. This simplifies the operation of updating the appearance at multiple positions of the first model, realizes batch model appearance updates, and improves update efficiency.
In one possible implementation, updating the appearance of the first model based on the second coordinates of the target location and the second texture map includes: superposing the second texture map at a position corresponding to the second coordinate of the first texture map corresponding to the target position to generate a third texture map; and updating the appearance of the first model according to the third texture map.
It will be appreciated that the first texture at the corresponding position on the first texture map is replaced with the second texture of the second texture map based on the second coordinate of the target position, a third texture map is generated, and the appearance of the first model is updated according to the third texture map, so that an appearance update of the first model can be achieved.
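Generating the third texture map is, at bottom, a pixel-compositing step. The sketch below (illustrative representation: textures as flat arrays of pixel values, `null` marking fully transparent overlay pixels) superimposes the second texture map onto a copy of the first at the target position; in a real three.js scene this would typically be done on a 2D canvas backing a `CanvasTexture`.

```javascript
// Superimpose the second texture map (overlay) onto the first (base)
// at (destX, destY) to produce the third texture map.
function composite(base, baseW, overlay, ovW, ovH, destX, destY) {
  const out = base.slice();  // third texture map starts as a copy of the first
  for (let y = 0; y < ovH; y++) {
    for (let x = 0; x < ovW; x++) {
      const p = overlay[y * ovW + x];
      if (p !== null) out[(destY + y) * baseW + (destX + x)] = p;  // replace first texture
    }
  }
  return out;
}
```

The base texture is never mutated, so the original appearance can be restored by simply re-rendering with the first texture map.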
In one possible implementation, the method further includes: determining a first state of the second texture map, the first state being used to indicate that the second texture map is displayed or hidden; and in response to receiving the preview trigger operation, displaying the appearance of the updated first model in the first state.
It will be appreciated that after determining that the second texture map is displayed or hidden, if a preview trigger operation is received for the first model, the second texture map may be displayed or hidden, thereby enabling an update to the appearance of the first model.
In one possible implementation, determining the first state of the second texture map includes:
and determining the first state of the second texture map according to the state switching frequency, wherein the state switching frequency is the frequency of switching the display state and the hidden state of the second texture map.
It can be appreciated that the second texture map is switched between the displayed state and the hidden state according to the state switching frequency, so that the updated first model can be displayed dynamically.
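A stateless way to derive the first state from the switching frequency is to divide elapsed time into half-periods, as in this sketch (the function name and state labels are illustrative):

```javascript
// Derive the first state (displayed or hidden) of the second texture map
// at time tMs (milliseconds) from a state-switching frequency given in
// switches per second, so the indicator blinks without per-frame state.
function mapState(tMs, switchesPerSecond) {
  const phase = Math.floor((tMs / 1000) * switchesPerSecond);
  return phase % 2 === 0 ? 'displayed' : 'hidden';
}
```

Calling this from the render loop with the current timestamp toggles the map twice per second at `switchesPerSecond = 2`.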
In one possible implementation, the first model is a model for simulating the first device and the second texture map is an indicator light map for prompting an operational state of the first device.
It can be understood that in one case, the first model may be a model for simulating the first device, and the second texture map may be an indicator light map for prompting the running state of the first device, so that the addition of the indicator light map to the model for simulating the first device is realized, and the simulation effect of the model is improved.
In one possible implementation, the method further includes: acquiring the running state of the first device; determining the first state of the indicator light map at the current moment based on the running state of the first device, wherein the first state is used to indicate that the indicator light map is displayed or hidden; and updating the appearance of the first model according to the first state of the indicator light map, so as to simulate the indicator light indicating the running state of the first device.
It can be understood that whether the indicator light map is displayed or hidden at the current moment is determined by acquiring the running state of the first device, so that the model simulating the first device can dynamically display the indicator light map according to the actual running state of the first device. The simulated indicator light can thus indicate the running state of the first device, which improves the simulation effect of the model.
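A minimal sketch of mapping a device's running state to indicator-light map states, assuming two indicator maps as in the SAS/SATA hard-disk example later in the document (the state names `'fault'` and `'normal'` are assumptions, not from the application):

```javascript
// Derive the first state of the fault and activity indicator-light maps
// from the acquired running state of the first device.
function indicatorStates(runningState) {
  return {
    fault: runningState === 'fault' ? 'displayed' : 'hidden',      // fault light
    activity: runningState === 'normal' ? 'displayed' : 'hidden',  // activity light
  };
}
```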
In one possible implementation, the method further includes: generating a web page package, where the web page package includes the model skeleton of the first model and the texture map of the first model after the appearance update, and is used for downloading by other devices.
It can be understood that a web page package can be generated from the updated first model for downloading by other devices, which improves the utilization of the first model with the updated appearance.
In a second aspect, the present application provides a model appearance updating apparatus for performing any one of the model appearance updating methods provided in the first aspect above.
In a possible implementation manner, the present application may divide the functional module of the model appearance updating device according to the method provided in the first aspect. For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated in one processing module. For example, the present application may divide the model appearance updating device into a display module, a processing module, an updating module, and the like according to functions. The description of possible technical solutions and beneficial effects executed by each of the above-divided functional modules may refer to the technical solutions provided by the first aspect or corresponding possible implementation manners thereof, which are not described herein again.
In a third aspect, embodiments of the present application provide a computing device comprising a processor and a memory, the processor coupled to the memory; the memory is used to store computer instructions that are loaded and executed by the processor to cause the computing device to implement the model appearance updating method as described in the above aspects.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored therein at least one computer program instruction that is loaded and executed by a processor to implement the model appearance updating method as described in the above aspects.
In a fifth aspect, embodiments of the present application provide a computer program product comprising computer instructions stored in a computer-readable storage medium. The computer instructions are read from a computer-readable storage medium by a processor of a computing device, which executes the computer instructions, causing the computing device to perform the model appearance updating method provided in the various alternative implementations of the first aspect described above.
For detailed descriptions of the second to fifth aspects and their various implementations, reference may be made to the detailed description of the first aspect and its implementations; likewise, the beneficial effects of the second to fifth aspects and their implementations can be analyzed with reference to those of the first aspect and its implementations, and are not repeated here.
These and other aspects of the present application will be more readily apparent from the following description.
Drawings
FIG. 1 is a schematic diagram of a computing device shown in accordance with an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a method of model appearance updating, according to an example embodiment;
FIG. 3 is a schematic illustration of a digital twin system production platform interface involved in the embodiment of FIG. 2;
FIG. 4 is a schematic illustration of determining the target position on the first model involved in the embodiment of FIG. 2;
FIG. 5 is a schematic perspective projection view involved in the embodiment of FIG. 2;
FIG. 6 is a schematic diagram of a texture map UV mapping process involved in the embodiment of FIG. 2;
FIG. 7 is a schematic illustration of a preset two-dimensional graphical selection involved in the embodiment of FIG. 2;
fig. 8 is a schematic structural diagram of a model appearance updating device according to an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
References herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, that A and B exist together, or that B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
Also, in the description of the present application, unless otherwise indicated, "a plurality" means two or more. "At least one of" and similar expressions mean any combination of the listed items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or plural.
In addition, to clearly describe the technical solutions of the embodiments of the present application, the words "first", "second", and the like are used to distinguish between identical or similar items having substantially the same function and effect. Those skilled in the art will appreciate that these words do not limit the number of items or the order of execution, and do not imply that the items are necessarily different. Meanwhile, in the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, illustration, or description. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs; rather, such words are intended to present related concepts in a concrete fashion that is readily understood.
First, an application scenario of the embodiment of the present application is described in an exemplary manner.
Currently, most digital twin systems for information technology (IT) devices display three-dimensional models of IT devices such as servers, storage devices, switches, or cabinets. The displayed three-dimensional models usually do not include details (such as indicator lights), so they allow users to view only the general appearance of the IT devices online and cannot improve the efficiency of IT device operation and maintenance management.
A digital twin can be a simulation process that integrates multiple disciplines, physical quantities, scales, and probabilities by fully utilizing data such as physical models, sensor updates, and operation history, completing a mapping in virtual space that reflects the full life cycle of the corresponding physical device. Digital twinning is a beyond-reality concept that can be seen as a digital mapping system of one or more important, mutually dependent equipment systems. A digital twin may be a digital "clone" created on the basis of a device or system, and may include a virtual representation of real-world physical objects, processes, relationships, behaviors, and the like. The digital twin skeleton may be the geometry of each digital twin in the digital twin system. In geometry, a finite body surrounded by several geometric surfaces (planes or curved surfaces) is called a geometric body; the surfaces surrounding it are called its interfaces or faces; the intersection lines of different interfaces are called its edge lines; and the intersection points of different edge lines are called its vertices. A geometric body can also be regarded as a finite spatial region divided out of space by several geometric surfaces. A material can represent the surface properties of the rendered geometry, including the color used and the degree of brightness; a material can reference one or more textures, which can be used to wrap an image onto the surface of the geometry. A texture generally represents an image that is loaded from a file, generated on a canvas, or rendered from another scene. In a digital twin system, because relatively high realism is required, a texture may be composed of one or more image files, and a texture in a digital twin system may be referred to as a digital twin texture map.
In the related art, the digital twin presented in a digital twin system, i.e., the three-dimensional model of an IT device, includes only the fixed display appearance of the IT device, that is, a model appearance of the simulated IT device that is not affected by the operational state of the device.
That is, an indicator light or other indicating component for indicating the operational status of the IT device is not included on the three-dimensional model of the IT device displayed in the digital twinning system.
Indicator lights are an important reference for determining fault conditions on IT devices such as servers, storage devices, and switches. Take a serial attached small computer system interface (SAS)/serial advanced technology attachment (SATA) hard disk as an example. A SAS/SATA hard disk has two kinds of indicator lights: a fault indicator light, mainly used to prompt that a fault has occurred, and an activity indicator light, mainly used to show that the hard disk is working normally. To make the digital twin system more realistic and meet its design requirements, the display of indicator lights needs to be added to the digital twin system. In the related art, adding the display of an indicator light requires the user to manually modify the three-dimensional model of the IT device again and re-model the position of the indicator light.
In view of this, the following embodiments of the present application provide a model appearance updating method. On the basis of displaying the three-dimensional model of an IT device in a digital twin system according to the related technology, when a user performs a trigger operation on a certain position of the three-dimensional model, a texture map can be added according to the position at which the trigger operation is received, thereby avoiding re-modeling the three-dimensional model of the IT device.
Next, an exemplary description is given of a system architecture of an embodiment of the present application.
FIG. 1 illustrates a schematic diagram of a computing device provided by an embodiment of the present application. In terms of hardware, the computing device may include a central processing unit (CPU) 101, a graphics processing unit (GPU) 102, an external display device 103, a memory 104, and the like. In terms of software, the computing device 100 may run a digital twin system and display three-dimensional models in it. That is, the computing device 100 may run the three.js engine and support the Web graphics library (WebGL) and Web three-dimensional (Web3D) technologies.
Among other things, WebGL is a technology for rendering interactive 2D and 3D graphics in any compatible web browser without plug-ins. WebGL is fully integrated into the browser's web page standards, and GPU-accelerated image processing and effects can be used as part of the web page canvas. WebGL elements can be added to other hypertext markup language (HTML) elements and mixed with other parts of the page or the page background. A WebGL program consists of control code written in JavaScript and shader code written in the OpenGL shading language (GLSL), which is executed on the GPU of the computing device. Web3D refers to displaying three-dimensional graphics via a web browser. three.js is a cross-browser Web3D engine that uses JavaScript function libraries and application programming interfaces (APIs) to create and display animated three-dimensional graphics in the web browser; it allows GPU-accelerated 3D animation elements to be created in web pages using JavaScript.
It should be noted that, the execution of a certain step (e.g., S101 to S105 below) by the computing device 100 described in the following embodiments may be understood as: the CPU101 executes this step.
The memory 104 may store logic code corresponding to the execution of certain steps by the computing device 100 described in the embodiments below.
In addition, the display device 103 may have an interface display function, may display a digital twin system production platform interface, and the digital twin system production platform interface may be used to display a three-dimensional model in the digital twin system, and perform an appearance update operation on the three-dimensional model.
It should be noted that, the system architecture and the application scenario described in the embodiments of the present application are for more clearly describing the technical solution of the embodiments of the present application, and do not constitute a limitation on the technical solution provided in the embodiments of the present application, and those skilled in the art can know that, with the evolution of the system architecture and the appearance of a new service scenario, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
For ease of understanding, the model appearance updating method provided in the present application is described below by way of example with reference to the accompanying drawings, and is applicable to the computing device shown in fig. 1.
Fig. 2 is a flow chart illustrating a method for updating a model appearance according to an exemplary embodiment of the present application. The model appearance updating method comprises the following steps:
S101, displaying the first model.
The first model is a three-dimensional model generated by rendering according to a first texture map, where the first texture map is a pattern mapped for display on the first model and may include at least one of a motif, a pattern, or a color.
In one possible implementation, the computing device may display the first model via an external display device.
The computing device displays a digital twin system manufacturing platform interface through the display device, and displays a first model on the digital twin system manufacturing platform interface.
Illustratively, a user may log in to the digital twin system production platform and open a pre-created first model, which is then displayed on the platform interface. For example, fig. 3 is a schematic diagram of a digital twin system production platform interface according to an embodiment of the present application. As shown in fig. 3, a first model 201 may be displayed in the digital twin system production platform 200, and the first model 201 may be a three-dimensional model simulating a server.
S102, determining first coordinates of a target position in response to receiving a triggering operation on a first position, wherein the first position is a corresponding position of a first model displayed on a display interface, and the target position is a position of the first position mapped on the surface of the first model.
The first position is a corresponding position of the first model displayed on the display interface, the target position is a position of the first position mapped on the surface of the first model, and the first coordinate of the target position is a three-dimensional coordinate of the target position in a three-dimensional space where the first model is located. The target location may be any location on the first model, and the first location may be determined by a user selecting and triggering a certain location point on the first model displayed on the display interface.
For example, if the first model is displayed through an external display device of the computing device, the user may select and trigger a position point on the first model by clicking with a mouse on the display interface; if the first model is displayed on a computing device with a touch screen, the user may tap the touch screen to select and trigger a position point of the first model displayed on the display interface.
Illustratively, fig. 4 is a schematic diagram of determining a target position on the first model according to an embodiment of the present application, and as shown in fig. 4, when a user performs a triggering operation on a displayed position point 202 on the first model, it may be determined that the target position on the first model is the position point 202.
Because each position point on the first model can be clicked and triggered, the user can select and trigger any position on the first model; the location of the target position on the first model is therefore not limited.
In one possible implementation, when the computing device receives a triggering operation on the first model, it may first determine the two-dimensional coordinates, on the display interface, of the triggered position point at the first position. Because a spatial rectangular coordinate system is preset in the three-dimensional space in which the first model is located, each point on the first model has corresponding three-dimensional coordinates. Besides the first model, a camera is also present in this three-dimensional space, and the position coordinates of the camera are three-dimensional coordinates determined in the spatial rectangular coordinate system; the computing device acquires these camera position coordinates. Then, according to the two-dimensional coordinates of the position point of the first position on the display interface, the three-dimensional coordinates of the first position in the three-dimensional space in which the three-dimensional model is located (the first spatial rectangular coordinate system) may be determined. The computing device may then determine a ray from the camera coordinates and the three-dimensional coordinates of the first position: the ray is obtained by connecting the position point indicated by the camera coordinates with the position point corresponding to the first position. The intersection point of this ray with the first model is determined to be the target position, and the first coordinate of the target position is determined by acquiring the skeleton information of that position point in the three-dimensional space. It is understood that the target position is the position of the first position mapped on the first model surface.
Wherein the image of the first model captured at the camera position is the image displayed on the display interface, and the skeleton information of a position point in the three-dimensional space may indicate the geometry in which the position point is located, including information of a point, a line, or a plane.
Fig. 5 is a schematic perspective view of an embodiment of the present application. As shown in fig. 5, the image on the near clipping plane 302 is the image obtained by capturing the three-dimensional model in space with the camera 301. According to the projection mapping relationship between the near clipping plane 302 and the far clipping plane 303, the position point 304 of the first position on the display interface is projected onto the three-dimensional model in three-dimensional space, and may correspond to the position point 305 on the three-dimensional model.
Specifically, determining the three-dimensional coordinates of the first position in the three-dimensional space in which the three-dimensional model is located (the first spatial rectangular coordinate system) according to the two-dimensional coordinates of the first position may include the following. Based on the planar rectangular coordinate system of the two-dimensional plane in which the near clipping plane (display interface) 302 lies, a second spatial rectangular coordinate system for the display interface 302 is first established. For example, the x-axis and the y-axis of the planar rectangular coordinate system are taken as the x-axis and the y-axis of the second spatial rectangular coordinate system, and a straight line passing through the origin of the planar rectangular coordinate system and perpendicular to the x-axis and the y-axis is taken as the z-axis. Then the three-dimensional coordinates of the camera position of the three-dimensional model in the second spatial rectangular coordinate system are determined. Finally, a first coordinate correspondence between the first spatial rectangular coordinate system and the second spatial rectangular coordinate system is determined according to the three-dimensional coordinates of the camera position in the first spatial rectangular coordinate system and its three-dimensional coordinates in the second spatial rectangular coordinate system. Based on this coordinate correspondence, the three-dimensional coordinates of the first position in the first spatial rectangular coordinate system are determined from its three-dimensional coordinates in the second spatial rectangular coordinate system. The first spatial rectangular coordinate system is the spatial rectangular coordinate system in which the three-dimensional model (the first model) is located.
For example, suppose the two-dimensional coordinates of the first position are (x, y), the three-dimensional coordinates of the first position in the second spatial rectangular coordinate system are (x, y, z), and the three-dimensional coordinates of the first position in the first spatial rectangular coordinate system are (x1, y1, z1). The coordinate correspondence between the first and second spatial rectangular coordinate systems is as follows:
x1=x*2+1;
y1=y*2+1;
z1=z;
If the three-dimensional coordinates of the first position in the second spatial rectangular coordinate system are (1, 1, 1), the three-dimensional coordinates of the first position in the first spatial rectangular coordinate system can be determined to be (3, 3, 1) according to the coordinate correspondence between the first and second spatial rectangular coordinate systems.
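The illustrative correspondence above can be sketched as a small helper (the linear mapping x1 = x*2+1 is taken directly from the example and is not a general projection formula):

```javascript
// Illustrative coordinate correspondence from the example above: maps a
// point in the second (display) spatial rectangular coordinate system to
// the first (model) spatial rectangular coordinate system.
function secondToFirst([x, y, z]) {
  return [x * 2 + 1, y * 2 + 1, z];
}

// Per the example, the point (1, 1, 1) maps to (3, 3, 1).
```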
Therefore, the three-dimensional coordinates of the first position in the first spatial rectangular coordinate system can be determined according to the coordinate correspondence between the first and second spatial rectangular coordinate systems. The first ray can then be determined from this three-dimensional coordinate point and the coordinate point of the camera position in the first spatial rectangular coordinate system, and the coordinates of the intersection point (the target position) of the first ray with the outer surface of the three-dimensional model can be obtained through the three.js API.
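The intersection step can be sketched in an engine-agnostic way as follows. A real implementation would use the three.js `Raycaster`; here, as a simplifying assumption, the model's outer surface is stood in for by an axis-aligned bounding box, and all names are illustrative:

```javascript
// Intersect a ray (origin + t * dir, t >= 0) with an axis-aligned box
// standing in for the model's outer surface, using the slab method.
// Returns the nearest intersection point (the target position) or null.
function rayBoxIntersect(origin, dir, boxMin, boxMax) {
  let tNear = -Infinity, tFar = Infinity;
  for (let i = 0; i < 3; i++) {
    if (dir[i] === 0) {
      // Ray is parallel to this pair of slabs: must already lie between them.
      if (origin[i] < boxMin[i] || origin[i] > boxMax[i]) return null;
      continue;
    }
    let t1 = (boxMin[i] - origin[i]) / dir[i];
    let t2 = (boxMax[i] - origin[i]) / dir[i];
    if (t1 > t2) [t1, t2] = [t2, t1];
    tNear = Math.max(tNear, t1);
    tFar = Math.min(tFar, t2);
    if (tNear > tFar || tFar < 0) return null; // no hit in front of the camera
  }
  const t = tNear >= 0 ? tNear : tFar; // nearest hit at or beyond the origin
  return origin.map((o, i) => o + t * dir[i]);
}
```

With the camera at (0, 0, 5) looking down the negative z-axis at a unit cube, the nearest intersection (and hence the target position) is (0, 0, 1); a ray pointing away from the model yields no intersection.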
In one possible implementation, if the first ray intersects the first model at two intersection points, the computing device may take the intersection point closest to the mouse click point as the target position.
For example, first, the computing device may acquire the camera coordinates of the camera in the three-dimensional space in which the three-dimensional model is located, and acquire the two-dimensional coordinates, in the two-dimensional space of the display interface, of the position point of the first position clicked by the user. When the user clicks the screen, the web page acquires the screen point coordinates (Sx, Sy) through a mouse click event, and converts the point coordinates (Sx, Sy) through normalization into the two-dimensional coordinates (x, y) of the first position in the two-dimensional space of the display interface.
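One common normalization for this step maps the pixel-based click point into the [-1, 1] range (this is the convention used by three.js-style pickers; the exact formula is an assumption, as the text does not spell it out):

```javascript
// Normalize a mouse-click point (sx, sy), given in pixels with the origin
// at the top-left of a viewport of size width x height, into normalized
// device coordinates (x, y) in [-1, 1] with the y-axis pointing up.
function normalizeScreenPoint(sx, sy, width, height) {
  return {
    x: (sx / width) * 2 - 1,
    y: -(sy / height) * 2 + 1,
  };
}
```

The center of an 800x600 viewport normalizes to (0, 0), and the top-left corner to (-1, 1).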
S103, determining second coordinates of the target position based on the first coordinates of the target position.
The second coordinate of the target position is a two-dimensional coordinate of the first coordinate of the target position mapped on a two-dimensional space where the first texture map is located.
In one possible implementation, the computing device determines, by the acquired first coordinates of the target location, a location point on the first texture map corresponding to the first coordinates when the first texture map is overlaid on the model skeleton of the first model, and determines, as the second coordinates, two-dimensional coordinates of the location point on the first texture map.
The manner of determining the second coordinate of the target position according to the first coordinate of the target position may be a texture map UV mapping method. The first texture map may be regarded as a two-dimensional image tiled on a desktop, the left-right direction of the two-dimensional image is taken as a U-axis, the up-down direction of the two-dimensional image is taken as a V-axis, a plane formed by the U-axis and the V-axis may be taken as a texture space coordinate system, and the first texture map is superimposed on a model skeleton of the first model to achieve the purpose of adding texture to the appearance of the first model. Thus, the first coordinate of the target position may be converted into the second coordinate of the target position by means of UV mapping, thereby determining the position point on the first texture map indicated by the second coordinate.
Illustratively, fig. 6 is a schematic diagram of a texture map UV mapping process according to an embodiment of the present application, as shown in fig. 6, an exemplary texture map 51 may be regarded as a two-dimensional image tiled on a desktop, and the purpose of adding texture to the exemplary model 52 may be achieved by overlaying the exemplary texture map on the model skeleton of the exemplary model 52.
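The UV lookup described above can be sketched as follows (UV conventions vary by engine; the v-axis flip here follows the common image convention with the pixel origin at the top-left, and is an assumption):

```javascript
// Convert a UV coordinate (u, v in [0, 1], v pointing up) into a pixel
// position on a texture image of size width x height, clamping to the
// valid pixel range at the texture edges.
function uvToPixel(u, v, width, height) {
  const px = Math.min(Math.floor(u * width), width - 1);
  const py = Math.min(Math.floor((1 - v) * height), height - 1);
  return { px, py };
}
```

For example, on a 100x100 texture, the UV coordinate (0.5, 0.5) lands on pixel (50, 50), which would serve as the second coordinate of the target position on the first texture map.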
S104, determining a target area based on the second coordinates of the target position.
In this embodiment of the present application, after determining the second coordinate of the target position, the computing device may determine, according to the second coordinate, an area on the first model that needs to be superimposed on the second texture map, and use the area as the target area.
Wherein the target region is used to indicate a region on the first model that overlays the second texture map. The second texture map may be a pre-configured texture map, for example, a map of the same color.
In one possible implementation, the target region may be a region on the first model that includes pixels of the second coordinates of the target location and conforms to the shape and size of the preset two-dimensional graph.
That is, before determining the target area, the computing device may acquire a preset two-dimensional graphic, and determine as the target area the area that includes the pixel point of the second coordinate of the target position and conforms to the shape and size of the preset two-dimensional graphic.
The preset two-dimensional graphic may be a default graphic or a graphic selected by the user. It may be a regular shape, such as a square, circle, or ellipse; or an irregular shape, such as a custom graphic whose shape and size the user selects or draws.
Fig. 7 is a schematic diagram illustrating selection of a preset two-dimensional graphic. As shown in fig. 7, after a user logs in to the digital twin system production platform, a selection pop-up 401 for selecting the preset two-dimensional graphic may be displayed. In the selection pop-up 401, the user may select, through a mouse trigger operation, the shape of the map to be added to the first model; for example, if a square is selected, a square map will be added when the map is subsequently applied.
For example, if the first model is a model for simulating the first device and the second texture map is an indicator light map for prompting the operation state of the first device, the target area may be determined as an area on the first model that includes the target position, where the indicator light needs to be added, and that conforms to the shape of the preset two-dimensional graphic.
In one possible implementation, the target area may also be an area on the first model that contains target pixel points, where the color of a target pixel point is the first color or a color within a specified interval of the first color, the first color being the color of the pixel at the second coordinate of the target position on the first texture map.
That is, prior to determining the target area, the computing device may acquire the first color of the pixel point at the second coordinate of the target position on the first texture map; determine, within a specified threshold distance around the second coordinate on the first texture map, each target pixel point whose color is the same as the first color or whose color error falls within a specified interval; and determine the area on the first model that includes each target pixel point as the target area.
The computing device may search outward from the second coordinate of the target position for pixels whose color is the same as, or similar to, that of the pixel at the second coordinate on the first texture map, until pixels of a different color are encountered; those pixels form the boundary of the target area.
In addition, if the target area is also required to conform to the preset two-dimensional graphic, the preset two-dimensional graphic may be taken as the diffusion direction; finally, an area whose pixel color is the same as that of the pixel at the target position and which conforms to the preset two-dimensional graphic may be determined as the target area.
That is, a small preset graphic (for example, a square) may first be formed according to the second coordinate of the target position on the first texture map and the preset two-dimensional graphic. The computing device may then determine whether the color of the pixels on the outer layer of the preset graphic is the same as the color of the pixels within it, and if so, expand outward layer by layer until a boundary of differently colored pixels is reached.
For example, suppose the first model is a model for simulating the first device and the second texture map is an indicator light map for prompting the operation state of the first device. Because the color of the pixels at the position where an indicator light needs to be added on the first texture map differs from that of other regions, the region where the indicator light needs to be added can be determined as the target area by diffusing outward from the second coordinate of the target position and looking for pixels of the same color on the first texture map; the region of the first texture map covered by the target area is the region reserved by the developer for adding the indicator light map. As shown in fig. 3, the black square area on the first model 201 may be an area reserved on the first texture map during modeling. If the target position falls within the reserved area, the black-colored area is found by diffusing outward from the position corresponding to the second coordinate of the target position, until pixel points of a different color are encountered as the boundary; it may then be determined that the target area is the black reserved area including the target position.
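The outward-diffusion step above is, in essence, a flood fill from the pixel at the second coordinate. A minimal sketch follows (the single-channel color representation, 4-connectivity, and tolerance test are simplifying assumptions):

```javascript
// Grow the target region outward from a seed pixel, collecting every
// 4-connected pixel whose color matches the seed color within a given
// tolerance; differently colored pixels act as the region boundary.
// `image` is a 2D array of scalar color values (rows of columns).
function growTargetRegion(image, seedX, seedY, tolerance = 0) {
  const h = image.length, w = image[0].length;
  const seedColor = image[seedY][seedX];
  const visited = new Set();
  const region = [];
  const stack = [[seedX, seedY]];
  while (stack.length) {
    const [x, y] = stack.pop();
    const key = `${x},${y}`;
    if (x < 0 || y < 0 || x >= w || y >= h || visited.has(key)) continue;
    visited.add(key);
    if (Math.abs(image[y][x] - seedColor) > tolerance) continue; // boundary
    region.push([x, y]);
    stack.push([x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]);
  }
  return region;
}
```

Seeding inside a uniformly colored reserved patch returns exactly the pixels of that patch, which would then be taken as the target area.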
S105, updating the appearance of the first model according to the target area and the second texture mapping.
In the embodiment of the application, after the target area is determined, the second texture map may be displayed in an overlaid manner on the target area, so as to display the appearance of the updated first model.
The shape, position, and size of the second texture map superimposed on the first model may be fine-tuned; that is, the user may adjust them through operations such as dragging and scaling.
For example, if the first model is a model for simulating the first device, the second texture map is an indicator light map for prompting the operation state of the first device, and the color of the second texture map may be preset, or may be reset after the user fine-tunes the superimposed second texture map.
That is, after the computing device automatically determines the shape and size of the added indicator light map, the user may check whether the generated indicator light map meets expectations; if not, the user may modify its shape, size, and even position through a trigger operation.
Alternatively, in another possible implementation, the appearance of the first model may be updated by generating a third texture map from the first texture map, the position of the second coordinate, and the second texture map. The third texture map is generated by superimposing the second texture map, with the shape and size determined at the target position, onto the first texture map at the position of the second coordinate, and stitching the result; the appearance of the first model is then updated according to the third texture map. Specifically, S103 to S105 may be replaced by the following steps S106 to S107:
S106, the computing device superimposes the second texture map of the target area at a position corresponding to the second coordinate of the first texture map corresponding to the target position, and generates a third texture map.
S107, the computing device updates the appearance of the first model according to the third texture map of the target area.
In one possible case, after the second texture map is superimposed on the target area corresponding to the target position, the shape and content of the superimposed second texture map are already determined. The second texture map, with its shape and content already adjusted, can therefore be further superimposed at objects corresponding to other positions, so that maps can be added in batches and the efficiency of adding maps to the first model is improved. Specifically, this includes the following steps:
s11, determining a third coordinate of a third position in response to receiving the triggering operation of the second position.
The second position is the corresponding position of the first model, except the first position, displayed on the display interface, the third position is the position of the second position mapped on the surface of the first model, and the third coordinate of the third position is the three-dimensional coordinate of the third position in the three-dimensional space where the first model is located.
In one possible implementation, when the computing device receives a trigger operation for a second location on the first model other than the first location, the three-dimensional coordinates of the second location mapped in the first spatial rectangular coordinate system in which the first model is located may be determined according to the two-dimensional coordinates of the second location in the display interface.
The specific determination manner of the third coordinate of the third position is the same as that of the first coordinate of the target position, as shown in S102, which is not described herein.
S12, determining fourth coordinates of the third position based on the third coordinates of the third position.
The fourth coordinate of the third position is a two-dimensional coordinate of the third position mapped on the two-dimensional space where the first texture map is located.
The specific manner of determining the fourth coordinate of the third position by the computing device through the third coordinate of the third position is the same as the manner of determining the second coordinate of the target position by the first coordinate of the target position, as shown in S103, which is not described herein.
And S13, superposing the second texture map at a position corresponding to a fourth coordinate of the first texture map corresponding to the third position.
Wherein, since the target area has been determined, the shape and content (color, size, etc.) of the second texture map superimposed on the first model have also been determined; the computing device may therefore superimpose the second texture map of the determined shape and content at the position corresponding to the fourth coordinate of the first texture map corresponding to the third position.
And S14, updating the appearance of the first model according to the second texture map and the superposition position of the second texture map.
Wherein, after the computing device superimposes the second texture map of the target area at the position of the first texture map indicated by the fourth coordinate of the third position, the display interface can show that a map identical to that at the target position has been added at other positions of the first model. At this time, the user may likewise fine-tune the texture maps at these other positions, so that the appearance of the updated first model meets expectations.
For example, as shown in fig. 3, a batch-application control may be displayed on the digital twin system production platform 200. After adding a texture map at the target position, the user may perform a trigger operation on the batch-application control. Upon receiving this trigger operation, the computing device may start the function of acquiring other positions; that is, when the user performs a trigger operation at positions on the first model other than the target position, the same texture map as that at the target position may be added at each such position.
When the user completes the online updating process of the first model, that is, after adding the texture map to the first model, the appearance of the updated first model may be displayed online. The specific display process may be as follows:
S21, determining a first state of a second texture map of the target area.
Wherein the first state may be used to indicate that the second texture map of the target region is displayed or hidden.
That is, after overlaying the second texture map on the target area, or when the computing device receives a preview trigger operation, the respective first state of each of the at least one second texture map added on the first model at that time is determined. In one possible implementation, the computing device may determine the first state of a second texture map according to a state switching frequency, which is the frequency at which the display state and the hidden state of that second texture map are switched.
The state switching frequency may be a preset parameter, or may be a parameter set by the user for each of the at least one second texture map before the preview is performed.
That is, if the state switching frequency is preset to once per second, the computing device may display the default state after receiving the preview trigger operation. If the default state is the hidden state, each second texture map added on the first model is first hidden, and the state of each second texture map is then switched once per second according to the preset state switching frequency, achieving the effect of blinking once per second.
Alternatively, if the user sets the parameter for each of the at least one second texture map before previewing, the user may identify each second texture map added on the first model by a number and set each state switching frequency according to the number. After the user finishes setting and the computing device receives the preview trigger operation, each second texture map may be displayed or hidden according to the state switching frequency corresponding to its number.
For example, if the state switching frequency of the second texture map numbered 1 is set to once per second and that of the second texture map numbered 2 is set to once every two seconds, the states of the second texture maps are switched according to these preset frequencies, so that the map numbered 1 blinks once per second and the map numbered 2 blinks once every two seconds.
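The display/hidden state at a given moment can be derived directly from the switching frequency. A sketch follows (the square-wave formulation and the default initial hidden state are assumptions consistent with the description above):

```javascript
// Given a switching frequency in toggles per second and a time in seconds
// since the preview started, return whether the texture map is currently
// displayed. The state flips every 1/frequency seconds, producing the
// blinking effect described above.
function isDisplayed(frequency, timeSeconds, initiallyDisplayed = false) {
  const toggles = Math.floor(timeSeconds * frequency);
  return toggles % 2 === 0 ? initiallyDisplayed : !initiallyDisplayed;
}
```

At a frequency of once per second and an initial hidden state, the map is hidden during the first second and displayed during the next; a map with half the frequency flips half as often.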
In another possible implementation, if the first model is a model for simulating the first device and the second texture map is an indicator light map for prompting the operation state of the first device, the blinking state of the indicator light map may represent the operation state of the first device. To enable the first model to simulate the operation state of the first device through the blinking state of the added indicator light map, the computing device may acquire the operation state of the first device and determine the first state of the indicator light map at the current moment based on that operation state. The appearance of the updated first model is then displayed according to the content of the indicator light map of the target area and the first state of the indicator light map, so as to simulate the operation state of the first device indicated by the indicator light.
For example, suppose the first model simulates a SAS/SATA hard disk and the second texture map is an indicator light map. A SAS/SATA hard disk has two indicator lights: a fault indicator light, mainly used to prompt the occurrence of faults, and an active indicator light, mainly used to show that the hard disk is working normally. When the active indicator light is steady green and the fault indicator light is off, the hard disk is working normally; when the active indicator light blinks green and the fault indicator light is off, the hard disk is reading or writing data; when the active indicator light is steady green or blinks green and the fault indicator light blinks yellow, the hard disk is being located or RAID reconstruction is in progress; when the active indicator light is steady green or off and the fault indicator light is steady yellow, the hard disk is faulty; when both the active indicator light and the fault indicator light are off, the hard disk is not in place or is faulty. The operation state of the real device (the SAS/SATA hard disk) can thus be characterized by controlling the display or hidden state of the indicator light maps. The first state may be used to indicate whether the indicator light map of the target area is displayed or hidden.
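The mapping between the hard-disk operation states above and the behavior of the two indicator-light maps can be sketched as a lookup table (the state names and the "steady"/"blinking"/"off" labels are illustrative conventions, not an API of the platform):

```javascript
// Map a simulated SAS/SATA hard-disk operation state to the display
// behavior of its two indicator-light maps, following the state
// descriptions above. Unknown states fall back to both lights off.
const DISK_LIGHT_STATES = {
  working:        { active: 'steady-green',   fault: 'off' },
  readingWriting: { active: 'blinking-green', fault: 'off' },
  locating:       { active: 'steady-green',   fault: 'blinking-yellow' },
  faulty:         { active: 'off',            fault: 'steady-yellow' },
  absent:         { active: 'off',            fault: 'off' },
};

function diskLights(state) {
  return DISK_LIGHT_STATES[state] ?? DISK_LIGHT_STATES.absent;
}
```

The twin could query this table with the operation state reported by the background, then drive the display/hidden state of each indicator light map accordingly.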
Wherein, in the case that the computing device automatically generates the indicator light map and the user determines that its shape, size, and position need not be modified, the user may assign the indicator light map a number (such as Fault-1) and write a custom script for the state switching frequency of that indicator light map, where the custom script includes a function for determining the state switching frequency according to the operation state of the first device.
S22, displaying the updated first model according to the second texture map and the first state of the target area.
Wherein the computing device may dynamically display the updated first model in accordance with the second texture map of the target region and the determined first state at the current time.
In one possible case, the computing device may generate a web page package; the webpage package can comprise a model skeleton of the first model, a first texture map and a second texture map of a target area overlapped on the first texture map, and the webpage package can be used for downloading by other devices.
For example, if another device downloads the above web page package, it may load the package, read the three-dimensional information of the first model provided in it, and load all model skeletons and texture maps of the digital twin system in the package through the three.js 3D engine. When all model skeletons and texture maps have been loaded, the indicator light texture maps generated earlier by the platform may be in the hidden state; in the web page seen by the current user, the indicator lights are therefore unlit. After loading is complete, the indicator light maps are switched between the display state and the hidden state according to the previously set frequency or frequency function, so that the indicator lights show a blinking effect; alternatively, the indicator light maps are continuously displayed and hidden according to the specified frequency, or according to a frequency that changes with the operation data of the actual device sent back by the background. At this time, the user can observe that the indicator light of the simulated device in the web page blinks according to the real state.
The foregoing description of the embodiments of the present application has been presented primarily from a method perspective. It will be appreciated that the model appearance updating apparatus, in order to implement the above-described functions, includes at least one of a hardware structure and a software module for performing the respective functions. Those skilled in the art will readily appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as hardware or as combinations of hardware and computer software. Whether a function is implemented as hardware or as computer-software-driven hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the present application may divide the functional units of the model appearance updating device according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated in one processing unit. The integrated units may be implemented in hardware or in software functional units. It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice.
Fig. 8 is a schematic structural diagram of a model appearance updating apparatus 400 according to an exemplary embodiment of the present application. The model appearance updating apparatus 400 is applied to a computing device, or may itself be a computing device. The model appearance updating apparatus 400 includes:
the display module 410 is configured to display a first model, where the first model is a three-dimensional model with a first texture appearance generated according to a first texture map rendering.
A processing module 420, configured to determine, in response to receiving a trigger operation on a first position, a first coordinate of a target position, where the first position is a corresponding position of the first model displayed on a display interface, the target position is a position of the first position mapped on the surface of the first model, and the first coordinate of the target position is a three-dimensional coordinate of the target position in the three-dimensional space in which the first model is located; and determine a second coordinate of the target position based on the first coordinate of the target position, where the second coordinate of the target position is a two-dimensional coordinate of the first coordinate of the target position mapped on the two-dimensional space in which the first texture map is located.
An updating module 430, configured to update the appearance of the first model based on the second coordinate of the target position and the second texture map.
For example, in connection with fig. 2, the display module 410 may be used to perform S101 as shown in fig. 2, the processing module 420 may be used to perform S102 to S104 as shown in fig. 2, and the update module 430 may be used to perform S105 as shown in fig. 2.
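The 3D-to-2D mapping of S102 to S104 can be sketched without any engine: when a ray cast from the clicked screen position hits a triangle of the model surface, the hit point's barycentric weights inside that triangle interpolate the triangle's per-vertex UV coordinates. This is, in essence, what three.js's Raycaster does when it fills in the `uv` field of an intersection; the function names below are illustrative:

```javascript
// Barycentric weights of point p inside triangle (a, b, c).
// Points are [x, y, z] arrays; p is assumed to lie on the triangle's
// plane (the raycast hit point — the "first coordinate").
function barycentric(p, a, b, c) {
  const sub = (u, v) => [u[0] - v[0], u[1] - v[1], u[2] - v[2]];
  const dot = (u, v) => u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
  const v0 = sub(b, a), v1 = sub(c, a), v2 = sub(p, a);
  const d00 = dot(v0, v0), d01 = dot(v0, v1), d11 = dot(v1, v1);
  const d20 = dot(v2, v0), d21 = dot(v2, v1);
  const denom = d00 * d11 - d01 * d01;
  const v = (d11 * d20 - d01 * d21) / denom;
  const w = (d00 * d21 - d01 * d20) / denom;
  return [1 - v - w, v, w];
}

// Interpolate the triangle's per-vertex UVs with those weights to get
// the 2D texture coordinate (the "second coordinate") of the hit point.
function hitPointToUV(p, verts, uvs) {
  const [u, v, w] = barycentric(p, verts[0], verts[1], verts[2]);
  return [
    u * uvs[0][0] + v * uvs[1][0] + w * uvs[2][0],
    u * uvs[0][1] + v * uvs[1][1] + w * uvs[2][1],
  ];
}
```

For example, a hit at the centre-left of a unit right triangle whose UVs follow its vertices maps to the same fractional position in texture space.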
In one possible implementation, the processing module 420 is further configured to,
determining a target area based on the second coordinates of the target position;
the second texture map is superimposed on the target area of the first texture map to update the appearance of the first model.
In one possible implementation, the target region is a region on the first texture map that includes the pixel at the second coordinate of the target position and conforms to the size and shape of a preset two-dimensional graphic.
In one possible implementation, the target region is a region on the first texture map whose pixels have the same color as the pixel at the second coordinate of the target position, or whose color error relative to that pixel is within a specified interval.
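The colour-based variant of the target region can be sketched as a flood fill: starting from the pixel addressed by the second coordinate, collect the connected pixels whose colour error stays within a tolerance. The image representation and the tolerance parameter below are illustrative, not taken from the patent:

```javascript
// Flood-fill the region of pixels whose colour differs from the seed
// pixel (the one at the target position's second coordinate) by at most
// `tol` per channel. `img` is { w, h, px } with px a row-major array of
// [r, g, b] triples — a stand-in for canvas ImageData.
function colorRegion(img, x0, y0, tol) {
  const at = (x, y) => img.px[y * img.w + x];
  const seed = at(x0, y0);
  const close = (c) => c.every((v, i) => Math.abs(v - seed[i]) <= tol);
  const seen = new Set();
  const stack = [[x0, y0]];
  const region = [];
  while (stack.length) {
    const [x, y] = stack.pop();
    const key = x + ',' + y;
    if (x < 0 || y < 0 || x >= img.w || y >= img.h || seen.has(key)) continue;
    seen.add(key);
    if (!close(at(x, y))) continue; // colour error outside the interval
    region.push([x, y]);
    stack.push([x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]);
  }
  return region;
}
```

With `tol = 0` this reduces to the same-colour case of the implementation above; a positive tolerance implements the "color error within a specified interval" case.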
In one possible implementation, the processing module 420 is further configured to determine, in response to receiving a trigger operation on a second position, a third coordinate of a third position, where the second position is a corresponding position, other than the first position, of the first model displayed on the display interface, the third position is a position of the second position mapped on the surface of the first model, and the third coordinate of the third position is a three-dimensional coordinate of the third position in the three-dimensional space in which the first model is located; determine a fourth coordinate of the third position based on the third coordinate of the third position, where the fourth coordinate of the third position is a two-dimensional coordinate of the third position mapped on the two-dimensional space in which the first texture map is located; and superimpose the second texture map at the position corresponding to the fourth coordinate on the first texture map. The updating module 430 is further configured to update the appearance of the first model according to the second texture map and the superposition position of the second texture map.
In one possible implementation, the updating module 430 is further configured to superimpose the second texture map at the position on the first texture map corresponding to the second coordinate of the target position, to generate a third texture map; and to update the appearance of the first model according to the third texture map.
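The generation of the third texture map can be sketched as a pixel-level composite — in a real webpage a canvas `drawImage` call onto a copy of the base texture would play this role. The texture representation below is an assumption for illustration:

```javascript
// Superimpose `patch` (the second texture map) onto a copy of `base`
// (the first texture map), centred at (cx, cy) — the pixel addressed by
// the target position's second coordinate. Textures are { w, h, px }
// with px a flat row-major array of pixel values.
function compositeAt(base, patch, cx, cy) {
  const out = { w: base.w, h: base.h, px: base.px.slice() };
  const x0 = cx - Math.floor(patch.w / 2);
  const y0 = cy - Math.floor(patch.h / 2);
  for (let y = 0; y < patch.h; y++) {
    for (let x = 0; x < patch.w; x++) {
      const tx = x0 + x, ty = y0 + y;
      if (tx >= 0 && ty >= 0 && tx < out.w && ty < out.h) {
        out.px[ty * out.w + tx] = patch.px[y * patch.w + x];
      }
    }
  }
  return out; // the "third texture map"; `base` is left untouched
}
```

Re-rendering the model with the returned texture then updates its appearance in one step, rather than drawing the two maps separately.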
In a possible implementation, the processing module 420 is further configured to determine a first state of the second texture map, where the first state is used to indicate that the second texture map is displayed or hidden; the updating module 430 is further configured to display, in response to the received preview trigger operation, an updated appearance of the first model according to the first state.
In a possible implementation, the processing module 420 is further configured to determine the first state of the second texture map according to a state switching frequency, where the state switching frequency is a frequency of switching between a display state and a hidden state of the second texture map.
In one possible implementation, the first model is a model for simulating a first device, and the second texture map is an indicator light map for prompting an operation state of the first device.
In one possible implementation,
the processing module 420 is further configured to obtain an operation state of the first device;
the updating module 430 is further configured to determine, based on the operation state of the first device, a first state of the indicator light map at the current time, where the first state is used to indicate that the indicator light map is displayed or hidden; and to display the updated appearance of the first model according to the content of the indicator light map and its first state, so as to simulate indicating the operation state of the first device through an indicator light.
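As an illustration of how the first state of the indicator light map might be derived from the device's operation state at a given moment — the run-state names and blink policy below are invented for the example, not taken from the patent:

```javascript
// Map a device's reported run state plus the current time to the
// indicator light map's first state: true = display the map (lamp lit),
// false = hide it (lamp unlit). State names are illustrative.
function indicatorVisible(runState, tMs, blinkHz = 2) {
  switch (runState) {
    case 'running':
      return true;  // steady on
    case 'off':
      return false; // steady off
    case 'fault':
      // Alternate display/hidden each half-period of the blink cycle.
      return Math.floor(tMs / (1000 / (2 * blinkHz))) % 2 === 0;
    default:
      return false; // unknown states shown as unlit
  }
}
```

Called on every render tick with the latest backend-reported state, this yields a lamp that is lit, dark, or blinking in step with the real device.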
In one possible implementation,
the processing module 420 is further configured to generate a webpage package, where the webpage package includes a model framework of the first model and the texture map of the first model after the appearance update, and is used for download by other devices.
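One plausible shape for such a webpage package is a manifest that lists the model framework and every texture map, with indicator lamp maps marked as initially hidden so the lamps start unlit on the downloading device, as described earlier. All file names and field names here are placeholders, not from the patent:

```javascript
// Assemble a manifest for the downloadable webpage package: the model
// framework (skeleton) file plus every texture map. Indicator-light
// maps are flagged as initially hidden so that, on another device,
// the lamps appear unlit until loading completes and blinking starts.
function buildPackageManifest(skeletonUrl, textureMaps) {
  return {
    skeleton: skeletonUrl,
    textures: textureMaps.map((t) => ({
      url: t.url,
      role: t.role, // e.g. 'base' | 'indicator' (illustrative roles)
      initiallyVisible: t.role !== 'indicator',
    })),
  };
}
```

A downloading device would read this manifest, fetch the listed files, and hand them to a loader such as three.js's before starting the display/hide cycle.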
For specific descriptions of the above optional implementations, reference may be made to the foregoing method embodiments; details are not repeated here. Likewise, for explanations of any of the model appearance updating apparatuses provided above and descriptions of their beneficial effects, reference may be made to the corresponding method embodiments.
As an example, in connection with fig. 1, the functions implemented by some or all of the display module 410, the processing module 420, and the updating module 430 in the model appearance updating apparatus may be implemented by the computing device 100 in fig. 1: the functions of the display module 410 may be implemented by the external display device 103 of the computing device 100, and the functions of the processing module 420 and the updating module 430 may be implemented cooperatively by the central processor 101, the graphics processor 102, and the memory 104 of the computing device 100.
In an exemplary embodiment, a computer-readable storage medium is also provided for storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement all or part of the steps of the model appearance updating method described above. For example, the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computing device reads the computer instructions from the computer readable storage medium and executes the computer instructions to cause the computing device to perform all or part of the steps of the method shown in any of the embodiments of fig. 2 described above.
In some embodiments, the methods illustrated in the embodiments of the present application may be implemented as computer program instructions encoded on a computer-readable storage medium in a machine-readable format or encoded on other non-transitory media or articles of manufacture.
Those skilled in the art can clearly understand from the foregoing description that, for convenience and brevity, only the division into the above functional modules is illustrated as an example. In practical applications, the above functions may be assigned to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing descriptions are merely specific embodiments of the present application and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its protection scope.

Claims (22)

1. A method for updating the appearance of a model, the method comprising:
Displaying a first model, wherein the first model is a three-dimensional model with a first texture appearance, and the three-dimensional model is generated according to first texture map rendering;
in response to receiving a triggering operation on a first position, determining a first coordinate of a target position, wherein the first position is a corresponding position of the first model displayed on a display interface, the target position is a position of the first position mapped on the surface of the first model, and the first coordinate of the target position is a three-dimensional coordinate of the target position in a three-dimensional space where the first model is located;
determining a second coordinate of the target position based on the first coordinate of the target position, wherein the second coordinate of the target position is a two-dimensional coordinate of the first coordinate of the target position mapped on a two-dimensional space where the first texture map is located;
the appearance of the first model is updated based on the second coordinates of the target location and a second texture map.
2. The method of claim 1, wherein updating the appearance of the first model based on the second coordinates of the target location and a second texture map comprises:
determining a target area based on the second coordinates of the target position;
The second texture map is superimposed on the target area of the first texture map to update the appearance of the first model.
3. The method of claim 2, wherein the target region is a region on the first texture map that includes pixels of the second coordinates of the target location and conforms to a size and shape of a preset two-dimensional graphic.
4. The method according to claim 2 or 3, wherein the target area is a region on the first texture map whose pixels have the same color as the pixel at the second coordinate of the target position, or whose color error relative to the pixel at the second coordinate of the target position is within a specified interval.
5. The method according to any one of claims 1 to 4, further comprising:
in response to receiving a trigger operation on a second position, determining a third coordinate of a third position, wherein the second position is a corresponding position, except the first position, of the first model displayed on a display interface, the third position is a position, mapped on the surface of the first model, of the second position, and the third coordinate of the third position is a three-dimensional coordinate of the third position in a three-dimensional space in which the first model is located;
Determining a fourth coordinate of the third position based on the third coordinate of the third position, wherein the fourth coordinate of the third position is a two-dimensional coordinate of the third position mapped on a two-dimensional space where the first texture map is located;
superposing the second texture map at a position corresponding to the fourth coordinate of the first texture map;
and updating the appearance of the first model according to the second texture map and the superposition position of the second texture map.
6. The method according to any one of claims 1 to 4, further comprising:
determining a first state of the second texture map, the first state being used to indicate that the second texture map is displayed or hidden;
and responding to the received preview triggering operation, and displaying the updated appearance of the first model according to the first state.
7. The method of claim 6, wherein determining the first state of the second texture map comprises:
and determining the first state of the second texture map according to a state switching frequency, wherein the state switching frequency is the frequency of switching between the display state and the hidden state of the second texture map.
8. The method of any of claims 1 to 5, wherein the first model is a model for simulating a first device and the second texture map is an indicator light map for prompting an operational state of the first device.
9. The method of claim 8, wherein the method further comprises:
acquiring the operation state of the first device;
determining a first state of the indicator light map at the current moment based on the operation state of the first device; the first state is used for indicating that the indicator light map is displayed or hidden;
and displaying the updated appearance of the first model according to the first state of the indicator light map, so as to simulate indicating the operation state of the first device through an indicator light.
10. The method according to any one of claims 1 to 9, further comprising:
generating a webpage package; the webpage package comprises a model framework of the first model and a texture map of the first model after appearance updating, and is used for being downloaded by other equipment.
11. A model appearance updating device, characterized in that the device comprises:
The display module is used for displaying a first model, wherein the first model is a three-dimensional model with a first texture appearance, and the three-dimensional model is generated according to first texture map rendering;
the processing module is used for responding to the received triggering operation of a first position, determining a first coordinate of a target position, wherein the first position is a corresponding position of the first model displayed on a display interface, the target position is a position of the first position mapped on the surface of the first model, and the first coordinate of the target position is a three-dimensional coordinate of the target position in a three-dimensional space where the first model is located; determining a second coordinate of the target position based on the first coordinate of the target position, wherein the second coordinate of the target position is a two-dimensional coordinate of the first coordinate of the target position mapped on a two-dimensional space where the first texture map is located;
and the updating module is used for updating the appearance of the first model based on the second coordinate of the target position and the second texture map.
12. The apparatus of claim 11, wherein the processing module is further configured to,
determining a target area based on the second coordinates of the target position;
The second texture map is superimposed on the target area of the first texture map to update the appearance of the first model.
13. The apparatus of claim 12, wherein the target region is a region on the first texture map that includes pixels of a second coordinate of the target location and conforms to a size and shape of a preset two-dimensional graphic.
14. The apparatus of claim 12 or 13, wherein the target region is a region on the first texture map whose pixels have the same color as the pixel at the second coordinate of the target position, or whose color error relative to the pixel at the second coordinate of the target position is within a specified interval.
15. The device according to any one of claims 11 to 14, wherein,
the processing module is further configured to determine, in response to receiving a trigger operation for a second position, a third coordinate of a third position, where the second position is another corresponding position, other than the first position, of the first model displayed on the display interface, the third position is a position where the second position is mapped on the surface of the first model, and the third coordinate of the third position is a three-dimensional coordinate of the third position in a three-dimensional space where the first model is located; determining a fourth coordinate of the third position based on the third coordinate of the third position, wherein the fourth coordinate of the third position is a two-dimensional coordinate of the third position mapped on a two-dimensional space where the first texture map is located; superposing the second texture map at a position corresponding to the fourth coordinate of the first texture map;
And the updating module is further used for updating the appearance of the first model according to the second texture mapping and the superposition position of the second texture mapping.
16. The apparatus according to any one of claims 12 to 15, wherein the updating module is further configured to superimpose a second texture map on a position corresponding to a second coordinate of the first texture map corresponding to the target position, and generate a third texture map; and updating the appearance of the first model according to the third texture map.
17. The apparatus of any of claims 11 to 16, wherein the processing module is further configured to determine a first state of the second texture map, the first state being configured to indicate that the second texture map is displayed or hidden; and the updating module is also used for responding to the received preview triggering operation and displaying the updated appearance of the first model according to the first state.
18. The apparatus of claim 17, wherein the processing module is further configured to determine the first state of the second texture map according to a state switching frequency, the state switching frequency being a frequency at which a display state and a hidden state of the second texture map are switched.
19. The apparatus of any of claims 11 to 18, wherein the first model is a model for simulating a first device and the second texture map is an indicator light map for prompting an operational state of the first device.
20. The apparatus of claim 19, wherein the processing module is further configured to obtain an operational status of the first device;
the updating module is further configured to determine a first state of the indicator light map at a current time based on an operation state of the first device; the first state is used for indicating the indication lamp map to be displayed or hidden; and displaying the updated appearance of the first model according to the first state of the indicator light map so as to simulate the operation state of the first equipment indicated by the indicator light.
21. The apparatus of any one of claims 11 to 20, wherein the processing module is further configured to generate a web page package; the webpage package comprises a model framework of the first model and a texture map of the first model after appearance updating, and is used for being downloaded by other equipment.
22. A computing device, the computing device comprising a processor and a memory; the processor is coupled with the memory; the memory is for storing computer instructions that are loaded and executed by the processor to cause a computing device to implement the model appearance updating method of any of claims 1 to 10.
CN202310081090.0A 2023-01-29 2023-01-29 Model appearance updating method and device and computing equipment Pending CN116310037A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202310081090.0A CN116310037A (en) 2023-01-29 2023-01-29 Model appearance updating method and device and computing equipment
PCT/CN2023/117340 WO2024156180A1 (en) 2023-01-29 2023-09-06 Model appearance updating method and apparatus, and computing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310081090.0A CN116310037A (en) 2023-01-29 2023-01-29 Model appearance updating method and device and computing equipment

Publications (1)

Publication Number Publication Date
CN116310037A true CN116310037A (en) 2023-06-23

Family

ID=86787911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310081090.0A Pending CN116310037A (en) 2023-01-29 2023-01-29 Model appearance updating method and device and computing equipment

Country Status (2)

Country Link
CN (1) CN116310037A (en)
WO (1) WO2024156180A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024156180A1 (en) * 2023-01-29 2024-08-02 超聚变数字技术有限公司 Model appearance updating method and apparatus, and computing device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012037157A2 (en) * 2010-09-13 2012-03-22 Alt Software (Us) Llc System and method for displaying data having spatial coordinates
US9396585B2 (en) * 2013-12-31 2016-07-19 Nvidia Corporation Generating indirection maps for texture space effects
CN106570822B (en) * 2016-10-25 2020-10-16 宇龙计算机通信科技(深圳)有限公司 Face mapping method and device
CN111489428B (en) * 2020-04-20 2023-06-30 北京字节跳动网络技术有限公司 Image generation method, device, electronic equipment and computer readable storage medium
CN116310037A (en) * 2023-01-29 2023-06-23 超聚变数字技术有限公司 Model appearance updating method and device and computing equipment


Also Published As

Publication number Publication date
WO2024156180A1 (en) 2024-08-02

Similar Documents

Publication Publication Date Title
JP5437485B2 (en) Display a visual representation of performance metrics for rendered graphics elements
US8587593B2 (en) Performance analysis during visual creation of graphics images
US20130063460A1 (en) Visual shader designer
RU2427918C2 (en) Metaphor of 2d editing for 3d graphics
JP2004038926A (en) Texture map editing
TW201737207A (en) Method and system of graphics processing enhancement by tracking object and/or primitive identifiers, graphics processing unit and non-transitory computer readable medium
RU2680355C1 (en) Method and system of removing invisible surfaces of a three-dimensional scene
KR101431311B1 (en) Performance analysis during visual creation of graphics images
CN108765520A (en) Rendering intent and device, storage medium, the electronic device of text message
CN116310037A (en) Model appearance updating method and device and computing equipment
CN111429587A (en) Display method, terminal and storage medium of three-dimensional design model
JP5242788B2 (en) Partition-based performance analysis for graphics imaging
CN117215592B (en) Rendering program generation method, device, electronic equipment and storage medium
AU2023274149B2 (en) Method for 3D visualization of sensor data
US9741156B2 (en) Material trouble shooter
CN116863067A (en) Model generation method and computing device
CN118079373A (en) Model rendering method and device, storage medium and electronic device
CN117893702A (en) Polygonal visual field analysis method, device and storage medium for Cesium three-dimensional scene
CN113947655A (en) Animation rendering method and device and electronic equipment
CN117786951A (en) Page display method and computing device of digital twin system
CN114995705A (en) Method, device and equipment for constructing three-dimensional display information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination