CN109032348B - Intelligent manufacturing method and equipment based on augmented reality - Google Patents
- Publication number
- CN109032348B (application CN201810743981.7A)
- Authority
- CN
- China
- Prior art keywords
- coordinate system
- information
- user equipment
- target object
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/17—Mechanical parametric or variational design
- G06F2113/00—Details relating to the application field
- G06F2113/20—Packaging, e.g. boxes or containers
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Pure & Applied Mathematics (AREA)
- Mathematical Optimization (AREA)
- Computer Hardware Design (AREA)
- Evolutionary Computation (AREA)
- Mathematical Analysis (AREA)
- Computational Mathematics (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
Abstract
The application aims to provide an augmented reality-based intelligent manufacturing method, wherein the method comprises: acquiring first position information of a target object in an operation area in a first coordinate system corresponding to the operation area; determining second position information of the target object in a second coordinate system according to a coordinate mapping relationship between the second coordinate system, which corresponds to the first user equipment, and the first coordinate system; acquiring operation guidance information corresponding to the target object; and displaying the operation guidance information corresponding to the target object in a superimposed manner according to the second position information. The method and the device present to a user wearing augmented reality equipment auxiliary information about workpiece assembly in the current operation area, such as the installation position and installation order of a workpiece, guiding the worker through assembly of the workpiece; this effectively reduces assembly and maintenance costs, lowers the probability of errors during installation, improves production efficiency, and enables rapid response and precise intervention in the production chain.
Description
Technical Field
The application relates to the field of communication, in particular to an intelligent manufacturing technology based on augmented reality.
Background
In industrial production, although many production lines are now automated, some steps still require manual assembly. The assembly of industrial equipment is generally complex, involves many parts, and places high technical demands on workers: workers must invest substantial time and effort in professional training, and the success of assembly often hinges on the workers' memory. At present, assembly is sometimes aided by paper manuals, which imposes a heavy workload on workers and results in high training costs, low assembly efficiency, and a high error rate.
Disclosure of Invention
An object of the present application is to provide an augmented reality based intelligent manufacturing method and apparatus.
According to an aspect of the application, there is provided an augmented reality based smart manufacturing method at a first user equipment, the method comprising:
acquiring first position information of a target object in an operation area in a first coordinate system corresponding to the operation area;
determining second position information of the target object in a second coordinate system according to a coordinate mapping relation between the second coordinate system corresponding to the first user equipment and the first coordinate system;
acquiring operation guidance information corresponding to the target object;
and displaying the operation guidance information corresponding to the target object in a superposed manner according to the second position information.
According to another aspect of the present application, there is provided an augmented reality based smart manufacturing method at a network device, the method comprising:
sending next-step operation guidance information corresponding to the operation guidance information to the corresponding first user equipment.
According to yet another aspect of the present application, there is provided an augmented reality based smart manufacturing method at a second user equipment, the method comprising:
acquiring a next operation request which is submitted by a user through the second user equipment and corresponds to the operation guidance information in the corresponding first user equipment, wherein the first user equipment and the second user equipment serve the same intelligent manufacturing task;
sending the next operation request to corresponding network equipment;
receiving next operation guiding information which is sent by the network equipment based on the next operation request and corresponds to the operation guiding information;
and sending the next operation guide information to the first user equipment corresponding to the operation guide information.
According to an aspect of the present application, there is provided an augmented reality-based smart manufacturing method, wherein the method includes:
the method comprises the steps that first user equipment obtains first position information of a target object in an operation area in a first coordinate system corresponding to the operation area, second position information of the target object in a second coordinate system is determined according to a coordinate mapping relation between the second coordinate system corresponding to the first user equipment and the first coordinate system, operation guide information corresponding to the target object is obtained, and the operation guide information corresponding to the target object is displayed in a superposed mode on the target object according to the second position information;
the network equipment sends next step operation guide information corresponding to the operation guide information to the first user equipment;
and the first user equipment receives the next operation guidance information and displays the next operation guidance information on the target object in an overlapping manner.
According to another aspect of the present application, there is provided an augmented reality-based smart manufacturing method, wherein the method includes:
the method comprises the steps that first user equipment obtains first position information of a target object in an operation area in a first coordinate system corresponding to the operation area, second position information of the target object in a second coordinate system is determined according to a coordinate mapping relation between the second coordinate system corresponding to the first user equipment and the first coordinate system, operation guide information corresponding to the target object is obtained, and the operation guide information corresponding to the target object is displayed in a superposed mode on the target object according to the second position information;
the method comprises the steps that a second user device obtains a next operation request which is submitted by a user through the second user device and corresponds to operation guide information in a first user device, wherein the first user device and the second user device serve the same intelligent manufacturing task;
the second user equipment sends the next operation request to corresponding network equipment;
and the network equipment receives the next operation request, determines next operation guidance information corresponding to the operation guidance information, and sends the next operation guidance information to the second user equipment;
the second user equipment receives the next operation guidance information and sends it to the first user equipment;
and the first user equipment receives the next operation guidance information and displays it superimposed on the target object.
According to an aspect of the application, there is provided an augmented reality based smart manufactured first user equipment, wherein the equipment comprises:
the first position acquisition module is used for acquiring first position information of a target object in an operation area in a first coordinate system corresponding to the operation area;
a second position obtaining module, configured to determine second position information of the target object in a second coordinate system according to a coordinate mapping relationship between the second coordinate system and the first coordinate system, where the second coordinate system corresponds to the first user equipment;
the guidance information acquisition module is used for acquiring operation guidance information corresponding to the target object;
and the guidance information superposition module is used for superposing and displaying the operation guidance information corresponding to the target object on the target object according to the second position information.
According to another aspect of the present application, there is provided an augmented reality-based smart manufacturing network device, wherein the device comprises:
a next-step guidance sending module, configured to send next-step operation guidance information corresponding to the operation guidance information to the corresponding first user equipment.
According to yet another aspect of the present application, there is provided a second user device for augmented reality-based smart manufacturing, wherein the device comprises:
a next step request acquisition module, configured to acquire a next step operation request corresponding to job guidance information in a corresponding first user equipment, where the first user equipment and the second user equipment serve a same intelligent manufacturing task, and the next step operation request is submitted by a user through the second user equipment;
a next step request sending module, configured to send the next step operation request to a corresponding network device;
a next-step guidance receiving module, configured to receive the next-step operation guidance information corresponding to the operation guidance information, sent by the network device based on the next-step operation request;
and a next-step guidance sending module, configured to send the next-step operation guidance information to the first user equipment corresponding to the operation guidance information.
According to an aspect of the present application, there is provided an augmented reality-based smart manufacturing system, wherein the system comprises a first user device as described above and a network device as described above.
According to another aspect of the present application, there is provided an augmented reality based smart manufacturing system, wherein the system comprises a first user device as described above, a network device as described above, and a second user device as described above.
According to an aspect of the present application, there is provided an augmented reality-based smart manufacturing apparatus, wherein the apparatus includes:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform:
acquiring first position information of a target object in an operation area in a first coordinate system corresponding to the operation area;
determining second position information of the target object in a second coordinate system according to a coordinate mapping relation between the second coordinate system corresponding to the first user equipment and the first coordinate system;
acquiring operation guidance information corresponding to the target object;
and displaying the operation guidance information corresponding to the target object in a superposed manner according to the second position information.
According to another aspect of the present application, there is provided an augmented reality-based smart manufacturing apparatus, wherein the apparatus includes:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform:
sending next-step operation guidance information corresponding to the operation guidance information to the corresponding first user equipment.
According to yet another aspect of the present application, there is provided an augmented reality-based smart manufacturing apparatus, wherein the apparatus comprises:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform:
acquiring a next operation request which is submitted by a user through the second user equipment and corresponds to the operation guidance information in the corresponding first user equipment, wherein the first user equipment and the second user equipment serve the same intelligent manufacturing task;
sending the next operation request to corresponding network equipment;
receiving next operation guiding information which is sent by the network equipment based on the next operation request and corresponds to the operation guiding information;
and sending the next operation guide information to the first user equipment corresponding to the operation guide information.
According to an aspect of the application, there is provided a computer-readable medium comprising instructions that, when executed, cause a system to:
acquiring first position information of a target object in an operation area in a first coordinate system corresponding to the operation area;
determining second position information of the target object in a second coordinate system according to a coordinate mapping relation between the second coordinate system corresponding to the first user equipment and the first coordinate system;
acquiring operation guidance information corresponding to the target object;
and displaying the operation guidance information corresponding to the target object in a superposed manner according to the second position information.
According to another aspect of the application, there is provided a computer-readable medium comprising instructions that, when executed, cause a system to:
sending next-step operation guidance information corresponding to the operation guidance information to the corresponding first user equipment.
According to yet another aspect of the application, there is provided a computer-readable medium comprising instructions that, when executed, cause a system to:
acquiring a next operation request which is submitted by a user through the second user equipment and corresponds to the operation guidance information in the corresponding first user equipment, wherein the first user equipment and the second user equipment serve the same intelligent manufacturing task;
sending the next operation request to corresponding network equipment;
receiving next operation guiding information which is sent by the network equipment based on the next operation request and corresponds to the operation guiding information;
and sending the next operation guide information to the first user equipment corresponding to the operation guide information.
Compared with the prior art, the present application presents to a user wearing augmented reality equipment auxiliary information about workpiece assembly in the current operation area, such as the installation position and installation order of the workpiece, guiding the worker through assembly of the workpiece; this effectively reduces assembly and maintenance costs, lowers the probability of errors during installation, improves production efficiency, and enables rapid response and precise intervention in the production chain.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a system topology diagram of an augmented reality based intelligent manufacturing method according to one embodiment of the present application;
FIG. 2 illustrates a method flow diagram of a method for augmented reality based smart manufacturing at a first user device in accordance with one embodiment of the present application;
FIG. 3 shows a basic imaging model or basic pinhole model of a camera;
FIG. 4 illustrates the transformation of a three-dimensional point between the world, camera, and screen coordinate systems;
FIG. 5 illustrates a method flow diagram of a method for augmented reality based smart manufacturing at a network device in accordance with another embodiment of the present application;
FIG. 6 illustrates a method flow diagram of a method for augmented reality based smart manufacturing at a second user device in accordance with yet another embodiment of the present application;
FIG. 7 illustrates a method flow diagram of a method of augmented reality based smart manufacturing in accordance with an aspect of the subject application;
FIG. 8 illustrates a method flow diagram of a method of augmented reality based smart manufacturing in accordance with yet another aspect of the subject application;
FIG. 9 illustrates a structural diagram of a first user equipment for augmented reality based smart manufacturing in accordance with one embodiment of the present application;
FIG. 10 illustrates a structural diagram of a network device for augmented reality based intelligent manufacturing in accordance with another embodiment of the present application;
FIG. 11 illustrates a structural diagram of an augmented reality based intelligent manufacturing device in accordance with an aspect of the subject application;
FIG. 12 illustrates a system example diagram of an augmented reality based intelligent manufacturing method in accordance with an aspect of the subject application;
FIG. 13 illustrates a system example diagram of an augmented reality based intelligent manufacturing method in accordance with yet another aspect of the subject application;
FIG. 14 illustrates an exemplary system that can be used to implement the various embodiments described in this application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The device referred to in this application includes, but is not limited to, a user device, a network device, or a device formed by integrating a user device and a network device through a network. The user device includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (e.g., through a touch panel), such as a smartphone or a tablet computer; the mobile electronic product may run any operating system, such as Android or iOS. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, a kind of distributed computing in which a virtual supercomputer consists of a collection of loosely coupled computers. The network includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a VPN, a wireless ad hoc network, and the like. Preferably, the device may also be a program running on the user device, on the network device, or on a device formed by integrating the user device and the network device, the user device and a touch terminal, or the network device and a touch terminal through a network.
Of course, those skilled in the art will appreciate that the foregoing is by way of example only, and that other existing or future devices, which may be suitable for use in the present application, are also encompassed within the scope of the present application and are hereby incorporated by reference.
In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Fig. 1 illustrates a typical scenario of the present application, in which a user holds a first user device (e.g., an augmented reality device, etc.), and superimposes and displays job guidance information at a corresponding position of a target object in an operation area through the first user device, so as to assist the user in completing assembly of the target object, etc.; the scheme can be independently completed by the first user equipment, can also be completed by the cooperation of the first user equipment and network equipment (such as an MES production execution system), and can also be completed by the cooperation of the first user equipment, second user equipment (such as human-computer interaction equipment) and the network equipment; in addition, the first user equipment can also send the current operation guidance information to third user equipment (such as PC equipment) for the third user equipment to be superposed and displayed at the corresponding position of the target object. The augmented reality device includes, but is not limited to, augmented reality glasses, augmented reality helmets, and other augmented reality devices, and the augmented reality glasses are taken as an example to describe the following embodiments, and those skilled in the art should understand that the embodiments are also applicable to other existing or future augmented reality devices such as augmented reality helmets; network devices include, but are not limited to, cloud servers, MES (manufacturing execution system), and the following embodiments are described by taking the MES as an example, and it should be understood by those skilled in the art that the embodiments are also applicable to other existing or future network devices such as cloud servers; the third user device includes, but is not limited to, an augmented reality device, a tablet computer, a mobile device, a PC device, etc. for displaying the current operation guidance information in an overlaid manner, and the following embodiments are described herein by taking the PC device as an example, and it should be understood by those skilled in the art that the embodiments are also applicable to other existing or future third user devices such as an augmented reality device, a tablet computer, a mobile device, etc.
Fig. 2 illustrates an augmented reality based smart manufacturing method at a first user equipment terminal according to an aspect of the present application, wherein the method includes step S11, step S12, step S13 and step S14. In step S11, the first user equipment acquires first position information of a target object in an operation area in a first coordinate system corresponding to the operation area; in step S12, the first user equipment determines second position information of the target object in a second coordinate system according to a coordinate mapping relationship between the second coordinate system corresponding to the first user equipment and the first coordinate system; in step S13, the first user equipment obtains job guidance information corresponding to the target object; in step S14, the first user equipment displays the operation guidance information corresponding to the target object in an overlapping manner according to the second position information.
Specifically, in step S11, the first user equipment acquires first position information of the target object in the operation area in the first coordinate system corresponding to the operation area. The operation area includes, but is not limited to, a work table, an operation area on an assembly line, a planar area available for operation, and the like. For example, a two-dimensional identification map lying flat on the table is placed in the operation area; the first user device scans the identification map, establishes a world coordinate system with the center of the identification map as the origin, and sets the scale of the world coordinate system from the identification map. The first user device takes this world coordinate system as the first coordinate system and computes the first position information (e.g., coordinate position information) of the target object in the first coordinate system through a computer vision algorithm.
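As a concrete illustration of this step, the sketch below shows how a marker-based pose could be recovered with OpenCV. The patent only speaks of a generic two-dimensional identification map and a computer vision algorithm, so the ArUco dictionary, the function names, and the marker size parameter are assumptions made for illustration; locating the target object relative to the resulting world frame would be a separate recognition step.

```python
# Minimal sketch (not the patent's actual algorithm): recover the pose of the
# 2D identification map with OpenCV, establishing a world frame at its center.
import cv2
import numpy as np

def estimate_marker_pose(image, camera_matrix, dist_coeffs, marker_size_m=0.1):
    """Detect the identification map and return its pose in the camera frame."""
    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(image)
    if ids is None:
        return None  # identification map not visible
    # World coordinate system: origin at the marker center, Z axis out of the marker.
    half = marker_size_m / 2.0
    object_points = np.array([[-half,  half, 0], [ half,  half, 0],
                              [ half, -half, 0], [-half, -half, 0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_points, corners[0].reshape(4, 2),
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```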
Of course, those skilled in the art will appreciate that the above-described operating regions are merely exemplary, and that other operating regions, which may exist or become available in the future, are also encompassed within the scope of the present application and are hereby incorporated by reference.
In step S12, the first user device determines second position information of the target object in a second coordinate system according to a coordinate mapping relationship between the second coordinate system, which corresponds to the first user device, and the first coordinate system. For example, assume the target object lies in a first coordinate system (taken to be a world coordinate system), the glasses' physical camera has a three-dimensional camera coordinate system, the human eye and the glasses screen together form a virtual camera coordinate system, and the glasses screen has a second coordinate system (e.g., a two-dimensional coordinate system). The coordinates of the target object in the world coordinate system are known; they are first converted into the three-dimensional camera coordinate system of the glasses' physical camera, the conversion matrix being obtained through a recognition and tracking algorithm. Then, using the known extrinsic calibration parameters, the coordinates are converted into the virtual camera coordinate system formed by the human eye and the glasses screen, and finally the coordinates in the second coordinate system on the glasses screen are obtained from the known intrinsic parameters of the virtual camera. In this way, the final second position information of the target object in the second coordinate system can be calculated.
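A minimal sketch of this coordinate chain, under the assumption that the world-to-camera and camera-to-virtual-camera transforms are rigid 4x4 homogeneous matrices; T_cam_world, T_eye_cam, and K_virtual are illustrative names for the tracking result, the extrinsic calibration, and the virtual camera intrinsics, respectively:

```python
# Sketch of mapping a world-frame point to glasses-screen coordinates.
import numpy as np

def world_to_screen(p_world, T_cam_world, T_eye_cam, K_virtual):
    """p_world: 3-vector in the first (world) coordinate system."""
    p = np.append(p_world, 1.0)   # homogeneous world point
    p_cam = T_cam_world @ p       # world -> physical camera frame (tracking)
    p_eye = T_eye_cam @ p_cam     # camera -> virtual eye/screen camera (extrinsics)
    uvw = K_virtual @ p_eye[:3]   # perspective projection (intrinsics)
    return uvw[:2] / uvw[2]       # 2D position in the second coordinate system
```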
In step S13, the first user equipment obtains the job guidance information corresponding to the target object. The job guidance information includes information such as the assembly position and assembly order of the target object. For example, the first user equipment identifies the target object and matches it in a local or cloud database to obtain the corresponding job guidance information; as another example, the first user equipment reads the job guidance information corresponding to the target object in response to a user operation; as yet another example, the first user equipment captures an image of the target object, sends the image to the network device, and receives the job guidance information for the target object that the network device identifies based on the image. Steps S11 and S13 need not be performed in any particular order.
In step S14, the first user equipment displays the job guidance information corresponding to the target object superimposed on the target object according to the second position information. For example, the first user equipment superimposes the virtual overlay information in the job guidance information (such as information about parts to be assembled into the target object) at the corresponding position, according to the position information of the target object in the second coordinate system and the positional relationship between the virtual overlay information and the target object.
For example, the first user wears augmented reality glasses, which capture image information of the current operation area through a camera. The augmented reality glasses establish a world coordinate system from the corresponding two-dimensional identification image in the operation area and calculate the position of the target object in that world coordinate system. Assume the glasses' physical camera has a three-dimensional camera coordinate system, the human eye and the glasses screen form a virtual camera coordinate system, and the glasses screen has a two-dimensional coordinate system. The coordinates of a target object (e.g., parts such as an electric control box, its outer box, and a circuit board) in the world coordinate system are known; the coordinates are first converted into the three-dimensional camera coordinate system of the glasses' physical camera, the conversion matrix being obtained through a recognition and tracking algorithm. Then, using the known extrinsic calibration parameters, the coordinates are converted into the virtual camera coordinate system formed by the human eye and the glasses screen, and finally the coordinates on the glasses screen are obtained from the known intrinsic parameters of the virtual camera. In this way, the second position information of the target object on the glasses screen can be calculated. The augmented reality glasses query a database for job guidance information related to the object according to information about the target object, or read the job guidance information of the target object based on a selection operation by the first user; the job guidance information includes, but is not limited to, the installation position and installation order of the circuit board and other parts in the outer box of the electric control box. The augmented reality glasses superimpose the job guidance information in real time at the real-time position of the target object on the glasses screen, for example, rendering a virtual circuit board at the suggested installation position of the circuit board in the outer box of the electric control box.
Of course, those skilled in the art will appreciate that the above-described exemplary embodiments are merely examples, and that other exemplary embodiments, which are currently or later become known, may be devised and are intended to be included within the scope of the present invention and are hereby incorporated by reference.
In some embodiments, the method further comprises step S15 (not shown). In step S15, the first user device sends the job guidance information to a corresponding third user device, so that the third user device displays the job guidance information superimposed on an image of the target object. For example, the third user device comprises a camera through which it captures image information of the operation area. The first user device sends the job guidance information to the third user device, which superimposes the received job guidance information in real time on the currently presented image of the target object.
For example, the third user holds a PC equipped with a fixed camera, through which the PC captures image information of the electric control box's outer box in the operation area. After the augmented reality glasses obtain the job guidance information about the outer box of the electric control box, they send it to the PC. The PC receives the job guidance information and, through a computer vision algorithm, superimposes guidance such as a virtual circuit board at the corresponding position in the currently presented image of the outer box. Here, assume the outer box in the operation area lies in a world coordinate system, the PC's camera has a three-dimensional camera coordinate system, and the PC's screen has a two-dimensional image coordinate system; the coordinates of the outer box in the world coordinate system are known. First, the coordinates of the outer box in the world coordinate system are transformed, according to the mapping between the world and camera coordinate systems, into position information in the camera coordinate system; then, according to the transformation between the camera and image coordinate systems, the position of the outer box on the screen is determined. Based on the position of the outer box on the screen, the PC superimposes job guidance information such as the installation position and installation order of parts like the circuit board at the corresponding position in the electric control box, where the installation order may be distinguished by different colors or by different depths of the same color.
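As a sketch of that last point, the snippet below overlays numbered installation-order hints on the PC image, distinguishing steps by color depth as suggested above. It assumes the screen positions have already been obtained through the projection just described; all names are illustrative, not from the patent.

```python
# Sketch: draw installation-order markers, deeper shade = later step.
import cv2

def draw_install_steps(frame, screen_points):
    """screen_points: (u, v) pixel positions of parts, in installation order."""
    n = len(screen_points)
    for i, (u, v) in enumerate(screen_points):
        shade = int(255 * (i + 1) / n)  # vary color depth with step index
        cv2.circle(frame, (int(u), int(v)), 12, (0, shade, 255 - shade), -1)
        cv2.putText(frame, str(i + 1), (int(u) + 16, int(v)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 2)
    return frame
```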
In some embodiments, the method further includes step S16 (not shown). In step S16, the first user equipment captures image information of the target object in the operation area through a camera of the first user equipment; in step S11, the first user equipment obtains the first position information of the target object in the first coordinate system corresponding to the operation area from the image information. In some embodiments, in step S13, the first user equipment obtains the job guidance information corresponding to the target object according to the image information. For example, the first user equipment captures image information of the target object in the operation area through its camera, establishes the first coordinate system from the image information, and calculates the first position information of the target object in the first coordinate system; the first user equipment also performs image recognition on the target object from the image information, matches it against models in a pre-stored database through a computer vision algorithm, and takes the job guidance information corresponding to the matched model as the job guidance information for the target object.
For example, the augmented reality glasses capture, through their camera, image information of the outer box of the electric control box on the current console, the image also covering the plane of the console. A two-dimensional identification picture placed by the first user lies on that plane; from the captured image and the identification picture in it, the augmented reality glasses establish a three-dimensional world coordinate system with the center of the identification picture as the origin, and calculate the position of the outer box in the world coordinate system from its distance to the center of the identification picture. The augmented reality glasses then match the image information of the outer box against a database, determining the outer-box model that matches the image information and the job guidance information corresponding to that model.
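The patent does not name the matching algorithm; as one hedged possibility, the database lookup could be done with ORB feature matching, as sketched below (the thresholds and function names are assumptions):

```python
# Sketch: match the captured box image against stored model images with ORB.
import cv2

def match_model(query_img, model_imgs, min_good=30):
    """Return the index of the best-matching model image, or None."""
    orb = cv2.ORB_create()
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, q_desc = orb.detectAndCompute(query_img, None)
    best, best_count = None, 0
    for i, model in enumerate(model_imgs):
        _, m_desc = orb.detectAndCompute(model, None)
        if q_desc is None or m_desc is None:
            continue
        good = [m for m in bf.match(q_desc, m_desc) if m.distance < 40]
        if len(good) > best_count:
            best, best_count = i, len(good)
    return best if best_count >= min_good else None
```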
In some embodiments, the method further comprises step S17 (not shown) and step S18 (not shown). In step S17, the first user equipment acquires next-step job guidance information corresponding to the job guidance information; in step S18, the first user equipment displays the next-step job guidance information superimposed on the target object. For example, after the first user completes the job described by the current job guidance information, the first user equipment obtains the corresponding next-step job guidance information, reading it in response to user operations such as selecting or clicking the next step. The first user equipment superimposes the next-step job guidance information on the target object at the target object's real-time position on the device screen.
For example, the first user is assembling the electric control box on the console and has installed the circuit board at the position indicated by the virtual circuit board in the job guidance information. The first user clicks to proceed to the next step, and the augmented reality glasses accordingly read from the database the next-step job guidance information corresponding to the target object, for example, fixing the corresponding screw at the corresponding position of the circuit board.
Of course, those skilled in the art will appreciate that the above-described next-step job guidance information is by way of example only, and that other next-step job guidance information, now known or later developed, that may be applicable to the present application is also encompassed by the present application and is incorporated herein by reference.
In some embodiments, in step S17, the first user device receives the next-step job guidance information corresponding to the job guidance information, sent by the corresponding network device. In some embodiments, the next-step job guidance information is sent by the network device based on a received next-step operation request. For example, after the user completes the job described by the current job guidance information, the first user equipment sends a next-step operation request to the network equipment once it determines, based on a user operation or by recognizing the current target object, that the current job is complete. The network equipment receives the next-step operation request, determines the corresponding next-step job guidance information in the database based on the request, and sends it to the first user equipment.
For example, the first user is assembling the electric control box on the console and has installed the circuit board at the position indicated by the virtual circuit board in the job guidance information. The first user clicks to proceed to the next step, and the augmented reality glasses accordingly send a next-step operation request to the MES, where the request includes information about the current job guidance (such as the current job guidance information or image information captured upon its completion). The MES receives the next-step operation request and determines in the database the next-step job guidance information corresponding to the target object, for example determining, from image information of the circuit board installed in the outer box of the electric control box, that the next step is to fix the corresponding screws at the corresponding positions of the circuit board. The MES then sends the next-step job guidance information to the augmented reality glasses.
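The patent leaves the transport and message format between the glasses and the MES unspecified; purely as an illustration, the next-step exchange might look like the JSON-over-HTTP sketch below, where the endpoint path and field names are invented for the example:

```python
# Hypothetical next-step request to the MES (format assumed, not from patent).
import json
import urllib.request

def request_next_step(mes_url, task_id, current_step_id, image_b64=None):
    payload = {"task_id": task_id,
               "current_step": current_step_id,  # info about current guidance
               "evidence_image": image_b64}      # optional completion photo
    req = urllib.request.Request(
        mes_url + "/next-step", data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # next-step job guidance information
```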
Of course, those skilled in the art will appreciate that the above-described next step operation request is by way of example only, and that other existing or future requests may be applicable to the present application and are intended to be encompassed within the scope of the present application and are hereby incorporated by reference.
In some embodiments, in step S17, the first user device receives the next-step job guidance information corresponding to the job guidance information from the corresponding second user device, the next-step job guidance information having been sent by the network device to the second user device. For example, the second user equipment is a human-computer interaction device that establishes a communication connection with the first user equipment and may also establish a communication connection with the network device; the second user equipment can present the job guidance information for the target object that the first user is currently working on, and the second user may, based on that information, choose to enter the next operation on the second user equipment. For another example, the second user equipment receives the next-step guidance information and sends it to the first user equipment, the next-step guidance information being returned by the network equipment in response to a next-step operation request sent by the second user equipment.
For example, the second user holds a human-computer interaction device (e.g., a tablet computer), which establishes communication connections with the augmented reality glasses and the network device; the first user and the second user work on the same production execution task. The human-computer interaction device and the augmented reality glasses present the same job guidance information: the human-computer interaction device presents it on its screen, while the augmented reality glasses display it superimposed on their screen. The first user is assembling the electric control box on the console and has installed the circuit board at the position indicated by the virtual circuit board in the job guidance information. The first user clicks to proceed to the next step, and the augmented reality glasses accordingly send a next-step operation request to the human-computer interaction device, the request including information about the current job guidance (such as the current job guidance information or image information captured upon its completion); the human-computer interaction device forwards the request to the MES. The MES determines in the database the next-step job guidance information corresponding to the target object, for example determining, from image information of the circuit board installed in the outer box of the electric control box, that the next step is to fix the corresponding screws at the corresponding positions of the circuit board. The MES sends the next-step job guidance information to the human-computer interaction device, which forwards it to the augmented reality glasses.
For another example, the first user is assembling the electric control box on the console and has installed the circuit board at the position indicated by the virtual circuit board in the job guidance information. The second user selects the next operation on the screen of the human-computer interaction device, which accordingly sends a next-step operation request to the network device, the request including information about the current job guidance (such as the current job guidance information or image information captured upon its completion). The MES receives the next-step operation request and determines in the database the next-step job guidance information corresponding to the target object, for example determining, from image information of the circuit board installed in the outer box of the electric control box, that the next step is to fix the corresponding screws at the corresponding positions of the circuit board. The MES sends the next-step job guidance information to the human-computer interaction device, which forwards it to the augmented reality glasses.
In some embodiments, in step S17, the first user device obtains a next-step operation request regarding the job guidance information submitted by a user through the first user device, sends the request to the network device, and receives the next-step job guidance information corresponding to the job guidance information, sent by the network device based on the request. For example, after completing the job described by the current job guidance information, the user chooses on the first user equipment to proceed to the next job, and the first user equipment sends a next-step operation request to the network equipment based on that operation. The network equipment receives the request, determines the corresponding next-step job guidance information in the database, and sends it to the first user equipment.
For example, the first user is assembling the electric control box on the console and has installed the circuit board at the position indicated by the virtual circuit board in the job guidance information. The first user clicks to proceed to the next step, and the augmented reality glasses accordingly send a next-step operation request to the MES, where the request includes information about the current job guidance (such as the current job guidance information or image information captured upon its completion). The MES receives the next-step operation request and determines in the database the next-step job guidance information corresponding to the target object, for example determining, from image information of the circuit board installed in the outer box of the electric control box, that the next step is to fix the corresponding screws at the corresponding positions of the circuit board. The MES then sends the next-step job guidance information to the augmented reality glasses.
In some embodiments, the method includes step S19 (not shown). In step S19, the first user equipment captures the operated target object through a camera in the first user equipment and sends the captured image information to the corresponding network equipment; in step S17, the first user device receives the next-step job guidance information corresponding to the job guidance information, sent by the network device after it determines from the image information that the current job guidance has been completed. For example, the first user device captures, in real time, image information of the target object as the current user works and sends it to the network device. The network device receives the image information and matches it, through a computer vision algorithm, against a preset image of the completed job; if the image of the target object in the captured image information matches the preset image, the network device determines that the job on the current target object is complete, determines the next-step job guidance information corresponding to the current job guidance information, and sends it to the first user equipment.
For example, the first user is assembling the electric control box on the console and has installed the circuit board at the position indicated by the virtual circuit board in the job guidance information. The augmented reality glasses capture image information of the corresponding position where the circuit board has been installed in the outer box of the electric control box and send it to the MES. The MES receives the image information and matches it against the completed-state model preset for the task in the database; on matching the preset model for the completed job guidance, the MES determines that the current job guidance has been completed and retrieves the corresponding next-step job guidance information, for example, fixing the corresponding screws at the corresponding positions of the circuit board. The MES then sends the next-step job guidance information to the augmented reality device.
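One hedged way the MES side might implement this completed-state check is normalized template correlation against the stored reference image, as sketched below; the particular method and threshold are assumptions, since the patent only requires "matching" by a computer vision algorithm:

```python
# Sketch: decide whether a step is complete by comparing the uploaded photo
# with a stored "step completed" reference image (grayscale inputs assumed).
import cv2

def step_completed(photo_gray, reference_gray, threshold=0.8):
    ref = cv2.resize(reference_gray, (photo_gray.shape[1], photo_gray.shape[0]))
    score = cv2.matchTemplate(photo_gray, ref, cv2.TM_CCOEFF_NORMED).max()
    return score >= threshold
```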
In some embodiments, the method includes step S20 (not shown). In step S20, the first user equipment determines the current position information of the operated target object in the second coordinate system; in step S18, the first user equipment displays the next-step job guidance information superimposed on the target object according to the current position information. For example, the first user equipment determines in real time, from the image information captured after the operation, the current position of the target object in the second coordinate system, and superimposes the next-step job guidance information on the target object at that position.
For example, the augmented reality glasses determine the position of the outer box of the electronic control box in the world coordinate system in real time, and obtain the position of the outer box on the glasses screen according to the coordinate conversion relationship. The augmented reality glasses receive the next-step operation guidance information of fixing screws at the corresponding positions of the circuit board installed in the outer box, and superimpose, at the corresponding positions of the circuit board within the outer box as shown on the current screen, the information about the screws to be fixed, such as their positions, sequence and models.
In some embodiments, the method includes step S01 (not shown). In step S01, the first user equipment captures, through the camera in the first user equipment, the target object being operated on, and determines according to the image information whether the operation guidance information in the first user equipment is completed. For example, the first user equipment captures, in real time, image information of the target object that the current user is working on and determines from the image information whether the current work is completed, for example by matching it, using a computer vision algorithm, against a preset image of the completed work; if the image of the target object in the captured image information matches the preset image, the first user equipment determines that the work on the current target object is completed and reads the next-step work guidance information corresponding to the current work guidance information from the database.
For example, a first user installs the electronic control box on the console and has installed the circuit board at the corresponding position according to the presenting position of the virtual circuit board in the operation guidance information. The augmented reality glasses capture image information of the corresponding position where the circuit board has been installed in the outer box of the electronic control box and match it against the preset completed-state model of the execution task in the database; when the image matches the preset model, the glasses determine that the current operation guidance information has been completed, conclude the current operation guidance, and obtain the corresponding next-step operation guidance information, for example, fixing the corresponding screws at the corresponding positions of the circuit board.
In some embodiments, in step S12, the first user equipment determines the coordinate mapping relationship between the second coordinate system corresponding to the first user equipment and the first coordinate system according to three mappings: a first coordinate mapping relationship between the second coordinate system and the human eye coordinate system, a second coordinate mapping relationship between the human eye coordinate system and the camera coordinate system, and a third coordinate mapping relationship between the camera coordinate system and the first coordinate system. Based on the coordinate mapping relationship between the second coordinate system and the first coordinate system, it then determines the second position information of the target object in the second coordinate system.
For example, assume that the target object is described in a first coordinate system (taken to be a world coordinate system), that the physical camera of the augmented reality glasses has a three-dimensional camera coordinate system, that the human eye and the glasses screen form a virtual camera coordinate system, and that the glasses screen has a two-dimensional second coordinate system (an image coordinate system). The coordinates of the target object in the world coordinate system are known. They are first converted into the three-dimensional camera coordinate system of the glasses' physical camera, where the conversion matrix is obtained through a recognition and tracking algorithm. They are then converted, using the known extrinsic calibration parameters, into the virtual camera coordinate system formed by the human eye and the glasses screen, and finally the coordinates on the glasses screen are obtained from the known intrinsic parameters of the virtual camera. In this way the final position of the target object on the glasses screen can be calculated. After the augmented reality glasses determine the coordinate mapping relationship from the world coordinate system to the image coordinate system, the position information of the target object on the glasses screen is obtained from its position in the world coordinate system. The camera model that converts world coordinates to image coordinates is as follows:
1. Ideal model
Fig. 3 shows the basic imaging model of a camera, commonly referred to as the basic pinhole model; it is given by the central projection from three-dimensional space onto a plane.
As shown in Fig. 3, $O_c$ is the camera center, at a distance $f$ from the image plane $\pi$ of the camera, where $f$ is called the focal length of the camera. The projection (image) $m$ of a space point $X_c$ on the plane $\pi$ is the intersection with the plane $\pi$ of the ray that starts at $O_c$ and passes through $X_c$. The ray that starts at $O_c$ and is perpendicular to the image plane is called the optical axis or principal axis, and the intersection point $p$ of the principal axis with the image plane is called the principal point of the camera. To describe this projection relationship algebraically, a camera coordinate system and an image plane coordinate system need to be established. On the image plane, an image coordinate system $o\text{-}xy$ is established with the principal point $p$ as the coordinate origin and with the horizontal and vertical lines as the $x$ and $y$ axes respectively. In space, a camera coordinate system $O_c\text{-}x_c y_c z_c$ is established with the camera center $O_c$ as the coordinate origin, as shown in Fig. 3(b). The homogeneous coordinates of the space point in the camera coordinate system are written as $X_c = (x_c, y_c, z_c, 1)^T$, and its homogeneous coordinates in the image coordinate system as $m = (x, y, 1)^T$; $(X_c, m)$ is called a corresponding pair of points. By the triangle similarity principle, the space point $X_c$ and its image point $m$ satisfy the following relationship:

$$x = \frac{f\,x_c}{z_c}, \qquad y = \frac{f\,y_c}{z_c} \tag{1}$$
Converted to matrix form:

$$z_c \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_c \\ y_c \\ z_c \\ 1 \end{pmatrix} \tag{2}$$
where $X_c = (x_c, y_c, z_c, 1)^T$ and $m = (x, y, 1)^T$ are the homogeneous coordinates of the space point and the image point respectively; this is a homogeneous linear mapping from space to the image plane. Let $P = \mathrm{diag}(f, f, 1)\,(I, 0)$; then the above formula can be written in a simpler form:

$$m = P X_c \tag{3}$$

Note that (3) is a homogeneous equation, i.e. the two sides are equal up to a non-zero constant factor. The matrix $P$ is usually called the camera matrix. This is the algebraic representation of the basic imaging model.
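As a quick numerical check of (1)–(3) — with illustrative values, not values taken from this application — let $f = 0.05$ m and $X_c = (0.2,\ 0.1,\ 2,\ 1)^T$:

$$x = \frac{f\,x_c}{z_c} = \frac{0.05 \times 0.2}{2} = 0.005, \qquad y = \frac{f\,y_c}{z_c} = \frac{0.05 \times 0.1}{2} = 0.0025,$$

so $m = P X_c \sim (0.005,\ 0.0025,\ 1)^T$: the image point lies 5 mm to the right of and 2.5 mm above the principal point on the image plane.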
2. Actual model
The theoretical case (the principal point at the origin of the image coordinate system) was discussed above, but in practice:

first, the coordinate origin of the image plane may not lie at the principal point;

second, the images used for computer processing are usually digital images acquired by a CCD camera, which discretizes the points of the image plane into pixels; the scales on the two axes after this discretization are generally not equal (the pixels of the CCD camera are not square), so non-equivalent scale factors need to be introduced;

third, a typical camera also has a distortion (skew) parameter.
Under the above three conditions, the ideal central projection model can be rewritten as a five-parameter model:

$$z_c \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} f_x & s & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_c \\ y_c \\ z_c \\ 1 \end{pmatrix} \tag{4}$$
Likewise, the projection relationship of the camera can be written as:

$$m = K(I, 0)X_c = P X_c \tag{5}$$

where

$$K = \begin{pmatrix} f_x & s & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{pmatrix}$$

is the camera intrinsic parameter matrix; $f_x$ and $f_y$ are the scale factors of the CCD camera in the $u$-axis and $v$-axis directions, $(u_0, v_0)^T$ is the principal point of the CCD camera, and $s$ is the skew (distortion) factor of the CCD camera. The camera thus has five intrinsic parameters in total.
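Continuing the same illustrative point, suppose the intrinsics are $f_x = f_y = 800$ pixels, $s = 0$ and $(u_0, v_0)^T = (320, 240)^T$ (assumed values for illustration, not from this application). Then (4)–(5) give the pixel coordinates

$$u = f_x \frac{x_c}{z_c} + u_0 = 800 \times 0.1 + 320 = 400, \qquad v = f_y \frac{y_c}{z_c} + v_0 = 800 \times 0.05 + 240 = 280,$$

so the point images at pixel $(400, 280)$.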
3. General model
A three-dimensional point is generally not described in the camera coordinate system, since the camera may be moving all the time, but in the world coordinate system. The relationship between the world coordinate system and the camera coordinate system can be described by a rotation matrix and a translation vector, as shown in Fig. 4.
Let the homogeneous coordinates of a space point in the world coordinate system and in the camera coordinate system be $X = (x, y, z, 1)^T$ and $X_c = (x_c, y_c, z_c, 1)^T$ respectively; then the relationship between the two is:

$$X_c = \begin{pmatrix} R & T \\ 0^T & 1 \end{pmatrix} X \tag{6}$$

Substituting (6) into (5) gives:

$$m = K(R, T)\,X = P X \tag{7}$$

where the camera matrix is $P = K(R, T)$ and $(R, T)$ is the extrinsic parameter matrix of the camera; the coordinates of the camera center in the world coordinate system are $\tilde{C} = -R^{T}T$. Here $R = R(\alpha, \beta, \gamma)$ is a rotation matrix, where $\alpha, \beta, \gamma$ are the rotation angles about the $x$, $y$, $z$ axes of the camera coordinate system respectively, and $T = (T_x, T_y, T_z)^T$ is the translation vector, where $T_x, T_y, T_z$ are the translations along the $x$, $y$, $z$ axes of the camera coordinate system respectively. The camera extrinsics therefore consist of six parameters $(\alpha, \beta, \gamma, T_x, T_y, T_z)$.
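The full chain (4)–(7) can be exercised in a few lines of code. Below is a minimal sketch in Python with NumPy; the intrinsic and extrinsic values are illustrative assumptions, and the Euler-angle composition order is one common convention rather than anything prescribed by this application.

import numpy as np

def rotation(alpha, beta, gamma):
    """R = Rz(gamma) @ Ry(beta) @ Rx(alpha): one common Euler composition."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(X_world, K, R, T):
    """Map a 3-D world point to pixels: X_c = R X + T (eq. 6), m ~ K X_c (eq. 5)."""
    X_c = R @ X_world + T        # world -> camera coordinates
    m = K @ X_c                  # camera -> homogeneous pixel coordinates
    return m[:2] / m[2]          # dehomogenize

K = np.array([[800.0,   0.0, 320.0],   # f_x, s, u_0 (illustrative intrinsics)
              [  0.0, 800.0, 240.0],   # f_y, v_0
              [  0.0,   0.0,   1.0]])
R = rotation(0.0, 0.0, 0.0)            # camera axes aligned with world axes
T = np.array([0.0, 0.0, 2.0])          # world origin 2 m in front of the camera

print(project(np.array([0.2, 0.1, 0.0]), K, R, T))   # -> [400. 280.]

With the identity rotation and this translation, the world point $(0.2, 0.1, 0)$ lands at pixel $(400, 280)$, matching the hand calculation after (5) above.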
Fig. 5 illustrates a method for augmented reality based smart manufacturing at a network device in accordance with another aspect of the subject application, wherein the method includes step S21. In step S21, the network device transmits the next-step job guidance information corresponding to the job guidance information to the corresponding first user device.
For example, a first user wears augmented reality glasses; the glasses capture image information of the current operation area through their camera and establish a communication connection with the network equipment. Based on this communication, the network equipment determines the current operation guidance information of the first user, determines the corresponding next-step operation guidance information in the database, and sends the next-step operation guidance information to the first user equipment. The first user equipment receives the next-step operation guidance information and, according to the position of the target object on the current screen, superimposes it on the target object at that position in real time.
In some embodiments, in step S21, the network equipment receives a next operation request, sent by a corresponding second user equipment, corresponding to the job guidance information in a corresponding first user equipment, where the first user equipment and the second user equipment serve the same smart manufacturing task; it then determines the next-step job guidance information corresponding to the job guidance information and sends the next-step job guidance information to the second user equipment. For example, the second user equipment includes a human-computer interaction device that establishes a communication connection with the first user equipment, and the second user equipment may also establish a communication connection with the network equipment. The second user equipment obtains the next operation request corresponding to the current job guidance information in the first user equipment, for example by generating it based on the second user's operation on the target object, or by receiving a next operation request sent by the first user equipment; the second user equipment then sends the next operation request to the network equipment. The network equipment determines the corresponding next-step job guidance information according to the received next operation request and sends the next-step job guidance information to the second user equipment.
For example, the second user has a human-computer interaction device (e.g., a tablet computer) that establishes communication connections with the augmented reality glasses and with the network equipment, and the first user and the second user perform the same production execution task. The human-computer interaction device and the augmented reality glasses present the same operation guidance information: for example, the human-computer interaction device presents it on its screen, while the augmented reality glasses display it superimposed on their screen. The first user installs the electronic control box on the operation console and installs the circuit board at the corresponding position according to the presenting position of the virtual circuit board in the operation guidance information; the first user clicks to perform the next operation, and the augmented reality glasses send a next operation request to the human-computer interaction device according to that operation, wherein the next operation request includes relevant information of the current operation guidance information (such as the current operation guidance information itself, or the corresponding image information when the operation guidance is completed). The MES production execution system determines, in the database according to the next operation request, the next-step operation guidance information corresponding to the target object; for example, according to the image information of the circuit board installed in the outer box of the electronic control box, it determines that the next step is to fix the corresponding screws at the corresponding positions of the circuit board. The MES production execution system sends the next-step operation guidance information to the human-computer interaction device, which forwards it to the augmented reality glasses.
For another example, a first user installs the electronic control box on the console and has installed the circuit board at the corresponding position according to the presenting position of the virtual circuit board in the operation guidance information. A second user selects the next operation on the screen of the human-computer interaction device, and the human-computer interaction device sends a next operation request to the network equipment according to that operation, wherein the next operation request includes relevant information of the current operation guidance information (for example, the current operation guidance information itself, or the corresponding image information when the operation guidance is completed). The MES production execution system receives the next operation request and determines, in the database, the next-step operation guidance information corresponding to the target object; for example, according to the image information of the circuit board installed in the outer box of the electronic control box, it determines that the next-step operation guidance information is to fix the corresponding screws at the corresponding positions of the circuit board. The MES production execution system sends the next-step operation guidance information to the human-computer interaction device, which forwards it to the augmented reality glasses.
In some embodiments, in step S21, the network equipment receives a next operation request for the job guidance information sent by the first user equipment, determines the next-step job guidance information corresponding to the job guidance information, and sends the next-step job guidance information to the first user equipment. For example, after completing the current job guidance information, the user selects the next-job operation on the first user equipment, or the first user equipment determines from a captured image of the job that the current job guidance is completed; the first user equipment then sends a next operation request to the network equipment based on the user operation. The network equipment receives the next operation request, determines the corresponding next-step job guidance information in the database based on that request, and sends the next-step job guidance information to the first user equipment.
For example, a first user installs the electronic control box on the operation console and installs the circuit board at the corresponding position according to the presenting position of the virtual circuit board in the operation guidance information; the first user clicks to perform the next operation, and the augmented reality glasses send a next operation request to the MES production execution system according to that operation. For another example, the augmented reality glasses capture, through their camera, image information of the job on the electronic control box on the current operation console and determine from that image information that the current operation guidance information is completed; the augmented reality glasses then send a next operation request to the MES production execution system, wherein the next operation request includes relevant information of the current operation guidance information (e.g., the current operation guidance information itself, or the corresponding image information when the operation guidance is completed). The MES production execution system receives the next operation request and determines, in the database, the next-step operation guidance information corresponding to the target object; for example, according to the image information of the circuit board installed in the outer box of the electronic control box, it determines that the next step is to fix the corresponding screws at the corresponding positions of the circuit board. The MES production execution system sends the next-step operation guidance information to the augmented reality glasses.
In some embodiments, the method further comprises step S22 (not shown). In step S22, the network equipment receives image information about a target object sent by a corresponding first user equipment, wherein the image information is obtained by capturing, with a capturing device in the first user equipment, the target object being operated on, and determines according to the image information whether the operation guidance information in the first user equipment is completed; in step S21, if the job guidance information is completed, the network equipment sends the next-step job guidance information corresponding to the job guidance information to the first user equipment. For example, the first user equipment captures, in real time, image information of the target object that the current user is working on and sends it to the network equipment. The network equipment receives the image information and matches it, using a computer vision algorithm, against a preset image of the completed operation; if the image of the target object in the captured image information matches the preset image, the network equipment determines that the operation on the current target object is completed, determines the next-step operation guidance information corresponding to the current operation guidance information, and sends the next-step operation guidance information to the first user equipment.
For example, a first user installs the electronic control box on the operation console and installs the circuit board at the corresponding position according to the presenting position of the virtual circuit board in the operation guidance information. The augmented reality glasses capture image information of the corresponding position where the circuit board has been installed in the outer box of the electronic control box and send the image information to the MES production execution system. The MES production execution system receives the image information, matches it against the preset completed-state model of the execution task in the database, and, when the image matches the preset model, determines that the current operation guidance information has been completed and acquires the corresponding next-step operation guidance information, such as fixing the corresponding screws at the corresponding positions of the circuit board. The MES production execution system then sends the next-step operation guidance information to the augmented reality glasses.
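To make the flow of steps S22/S21 concrete, here is a hedged sketch of the network-equipment side. The request fields, the in-memory guidance table, and the file paths are illustrative assumptions (this application does not define a message schema), and job_step_completed is the matcher sketched earlier.

import cv2

# Illustrative guidance table: current step -> completed-state template and
# the next step to issue. Field names and paths are assumptions.
GUIDANCE_DB = {
    "install_circuit_board": {
        "completed_template": "templates/board_installed.png",
        "next_step": "fix_screws",
    },
}

def handle_image_report(request):
    """request: {'step': current step id, 'image': captured BGR frame}."""
    entry = GUIDANCE_DB[request["step"]]
    template = cv2.imread(entry["completed_template"])
    # Reuse the completion matcher sketched earlier (template matching).
    if job_step_completed(request["image"], template):
        # Job done: return the next-step operation guidance information.
        return {"status": "completed", "next_step": entry["next_step"]}
    # Not yet matched: the first user equipment keeps the current guidance.
    return {"status": "in_progress"}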
Fig. 6 illustrates an augmented reality based smart manufacturing method at a second user equipment according to yet another aspect of the present application, wherein the method includes step S31, step S32, step S33 and step S34. In step S31, a second user equipment obtains a next operation request, submitted by a user through the second user equipment, corresponding to job guidance information in a corresponding first user equipment, wherein the first user equipment and the second user equipment serve the same intelligent manufacturing task; in step S32, the second user equipment sends the next operation request to the corresponding network equipment; in step S33, the second user equipment receives the next-step job guidance information corresponding to the job guidance information, sent by the network equipment based on the next operation request; in step S34, the second user equipment sends the next-step job guidance information to the first user equipment corresponding to the job guidance information. For example, a first user holds a first user equipment and a second user holds a human-computer interaction device (such as a PC device); the first user equipment captures image information of the target object in the current operation area through its camera and establishes communication connections with the second user equipment and the network equipment. When the user's current operation guidance is finished, the second user equipment sends a next operation request to the network equipment based on the second user's operation; the network equipment receives the request, determines the corresponding next-step operation guidance information in the database, and sends it to the second user equipment. The second user equipment receives the next-step operation guidance information and sends it to the first user equipment. The first user equipment receives the next-step operation guidance information and, according to the position of the target object on the current screen, superimposes it on the target object at that position in real time.
For example, a first user wears augmented reality glasses and a second user holds a human-computer interaction device; the augmented reality glasses are in communication connection with the human-computer interaction device, and the human-computer interaction device is in communication connection with the MES production execution system. The human-computer interaction device and the augmented reality glasses present the same operation guidance information, such as installation guidance information for the outer box of the electronic control box on the current operation console. The first user installs the electronic control box on the operation console and installs the circuit board at the corresponding position according to the presenting position of the virtual circuit board in the operation guidance information; the second user selects the next operation on the screen of the human-computer interaction device, and the human-computer interaction device sends a next operation request to the network equipment according to that operation, the request including relevant information of the current operation guidance information (such as the current operation guidance information itself, or the corresponding image information when the operation guidance is completed). The MES production execution system receives the next operation request and determines, in the database, the next-step operation guidance information corresponding to the target object; for example, according to the image information of the circuit board installed in the outer box of the electronic control box, it determines that the next step is to fix the corresponding screws at the corresponding positions of the circuit board. The MES production execution system sends the next-step operation guidance information to the human-computer interaction device, which forwards it to the augmented reality glasses. The augmented reality glasses receive the next-step operation guidance information of fixing screws at the corresponding positions of the circuit board installed in the outer box, and superimpose, at the corresponding positions of the circuit board within the outer box as shown on the current screen, the information about the screws to be fixed, such as their positions, sequence and models.
In some embodiments, in step S31, the second user device receives a next operation request corresponding to the job guidance information in the first user device, which is sent by the first user device, wherein the first user device and the second user device serve the same intelligent manufacturing task.
For example, a first user installs the electronic control box on the console and has installed the circuit board at the corresponding position according to the presenting position of the virtual circuit board in the operation guidance information; the first user clicks to perform the next operation, and the augmented reality glasses send a next operation request to the human-computer interaction device according to that operation, wherein the next operation request includes relevant information of the current operation guidance information (such as the current operation guidance information itself, or the corresponding image information when the operation guidance is completed). The human-computer interaction device receives the next operation request and sends it to the MES production execution system, which determines the corresponding next-step operation guidance information; the human-computer interaction device then forwards the next-step operation guidance information returned by the MES production execution system to the augmented reality glasses.
FIG. 7 illustrates an augmented reality based smart manufacturing method according to an aspect of the subject application, wherein the method comprises:
the method comprises the steps that first user equipment obtains first position information of a target object in an operation area in a first coordinate system corresponding to the operation area, second position information of the target object in a second coordinate system is determined according to a coordinate mapping relation between the second coordinate system corresponding to the first user equipment and the first coordinate system, operation guide information corresponding to the target object is obtained, and the operation guide information corresponding to the target object is displayed in a superposed mode on the target object according to the second position information;
the network equipment sends next step operation guide information corresponding to the operation guide information to the first user equipment;
and the first user equipment receives the next operation guidance information and displays the next operation guidance information on the target object in an overlapping manner.
FIG. 8 illustrates an augmented reality based smart manufacturing method according to another aspect of the subject application, wherein the method comprises:
the method comprises the steps that first user equipment obtains first position information of a target object in an operation area in a first coordinate system corresponding to the operation area, second position information of the target object in a second coordinate system is determined according to a coordinate mapping relation between the second coordinate system corresponding to the first user equipment and the first coordinate system, operation guide information corresponding to the target object is obtained, and the operation guide information corresponding to the target object is displayed in a superposed mode on the target object according to the second position information;
the method comprises the steps that a second user device obtains a next operation request which is submitted by a user through the second user device and corresponds to operation guide information in a first user device, wherein the first user device and the second user device serve the same intelligent manufacturing task;
the second user equipment sends the next operation request to corresponding network equipment;
and the network equipment receives the next operation request, determines next operation guide information corresponding to the operation guide information, and sends the next operation guide information to the second user equipment.
And the second user equipment receives the next operation guiding information and sends the next operation guiding information to the first user equipment.
And the first user equipment receives the next operation guidance information and displays the next operation guidance information on the target object in an overlapping manner.
Fig. 9 illustrates a first augmented reality based smart manufactured user device according to an aspect of the present application, wherein the device includes a first location acquisition module 11, a second location acquisition module 12, a guidance information acquisition module 13, and a guidance information superposition module 14. The first position obtaining module 11 is configured to obtain first position information of a target object in an operation area in a first coordinate system corresponding to the operation area; a second position obtaining module 12, configured to determine second position information of the target object in a second coordinate system according to a coordinate mapping relationship between the second coordinate system corresponding to the first user equipment and the first coordinate system; a guidance information obtaining module 13, configured to obtain operation guidance information corresponding to the target object; and the guidance information overlapping module 14 is configured to overlap and display the operation guidance information corresponding to the target object on the target object according to the second position information.
Specifically, the first position obtaining module 11 is configured to obtain first position information of a target object in an operation area in a first coordinate system corresponding to the operation area. The operation area includes, but is not limited to, a work table used for the operation, an operation area on a production line, a planar area available for the operation, and the like. For example, a two-dimensional identification map lying flat on the desktop is placed in the operation area; the first user equipment scans the identification map, establishes a world coordinate system with the center of the identification map as the origin, and determines the scale of the world coordinate system from the identification map. The first user equipment uses this world coordinate system as the first coordinate system and calculates the first position information (e.g., coordinate position information) of the target object in the first coordinate system through a computer vision algorithm.
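One way the module could realize this in practice — a sketch assuming OpenCV, a square identification map of known physical size, and calibrated camera intrinsics K and distortion coefficients dist; the names and the marker size are illustrative, and this application does not prescribe a specific algorithm:

import cv2
import numpy as np

MARKER_SIZE = 0.10          # assumed side length of the identification map (m)
h = MARKER_SIZE / 2
# World coordinate system: origin at the marker centre, z = 0 on the desktop.
MARKER_CORNERS_WORLD = np.array(
    [[-h,  h, 0], [ h,  h, 0], [ h, -h, 0], [-h, -h, 0]], dtype=np.float32)

def camera_pose_from_marker(corners_px, K, dist):
    """corners_px: the marker's four corners detected in the image (4x2, float32)."""
    ok, rvec, tvec = cv2.solvePnP(MARKER_CORNERS_WORLD, corners_px, K, dist)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)   # world -> camera rotation matrix
    return R, tvec               # X_c = R @ X_world + tvec

def point_in_world(point_cam, R, tvec):
    """Express a camera-frame point in the marker-defined world coordinates."""
    return R.T @ (point_cam - tvec.ravel())

With the camera pose known, any target point located in the camera frame (for example from detection plus depth) can be expressed as first position information in the world coordinate system via point_in_world.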
Of course, those skilled in the art will appreciate that the above-described operating regions are merely exemplary, and that other operating regions, which may exist or become available in the future, are also encompassed within the scope of the present application and are hereby incorporated by reference.
The second position obtaining module 12 is configured to determine second position information of the target object in a second coordinate system according to a coordinate mapping relationship between the second coordinate system corresponding to the first user equipment and the first coordinate system. For example, assume that the target object is described in a first coordinate system (taken to be a world coordinate system), that the physical camera of the glasses has a three-dimensional camera coordinate system, that the eyes and the glasses screen form a virtual camera coordinate system, and that the glasses screen has a second coordinate system (e.g., a two-dimensional coordinate system). The coordinates of the target object in the world coordinate system are known. They are first converted into the three-dimensional camera coordinate system of the glasses' physical camera, where the conversion matrix is obtained through a recognition and tracking algorithm; they are then converted, using the known extrinsic calibration parameters, into the virtual camera coordinate system formed by the human eye and the glasses screen, and finally the coordinates in the second coordinate system on the glasses screen are obtained from the known intrinsic parameters of the virtual camera. In this way the final second position information of the target object in the second coordinate system can be calculated.
And the guidance information obtaining module 13 is configured to obtain operation guidance information corresponding to the target object. The operation guidance information includes information such as an assembly position and an assembly sequence of the target object. For example, the first user equipment acquires the operation guidance information corresponding to the target object by identifying the target object and matching the target object in a local or cloud database; for example, the first user equipment reads the operation guidance information corresponding to the target object according to the operation of the user and the like; for another example, the first user equipment captures an image corresponding to the target object, sends the image to the network equipment, and receives the operation guidance information corresponding to the target object, which is identified by the network equipment based on the image.
The guidance information overlaying module 14 superimposes and displays the operation guidance information corresponding to the target object on the target object according to the second position information. For example, the first user equipment superimposes the virtual overlay information contained in the operation guidance information (such as information about parts to be assembled into the target object) at the corresponding position, according to the position information of the target object in the second coordinate system and the positional relationship, given in the operation guidance information, between the virtual overlay information and the target object.
For example, the first user wears augmented reality glasses, and the glasses capture image information of the current operation area through their camera. The augmented reality glasses establish a world coordinate system according to the corresponding two-dimensional identification image in the operation area and calculate the position of the target object in that world coordinate system. Assume that the glasses' physical camera has a three-dimensional camera coordinate system, that the eyes and the glasses screen form a virtual camera coordinate system, and that the glasses screen has a two-dimensional coordinate system. The coordinates of a target object (such as the parts of an electronic control box: the outer box, the circuit board and the like) in the world coordinate system are known; they are first converted into the three-dimensional camera coordinate system of the glasses' physical camera, with the conversion matrix obtained through a recognition and tracking algorithm, then converted, using the known extrinsic calibration parameters, into the virtual camera coordinate system formed by the human eye and the glasses screen, and finally the coordinates in the second coordinate system on the glasses screen are obtained from the known intrinsic parameters of the virtual camera. In this way the second position information of the target object on the glasses screen can be calculated. The augmented reality glasses query the database for operation guidance information related to the target object according to information about it, or read the operation guidance information of the target object based on a selection operation of the first user, wherein the operation guidance information includes, but is not limited to, the installation positions and installation sequence of the circuit board and other parts in the outer box of the electronic control box. The augmented reality glasses superimpose the operation guidance information corresponding to the target object in real time at the real-time position of the target object on the glasses screen; for example, a virtual circuit board is presented at the suggested installation position of the circuit board in the outer box of the electronic control box.
Of course, those skilled in the art will appreciate that the above-described exemplary embodiments are merely examples, and that other exemplary embodiments, which are currently or later become known, may be devised and are intended to be included within the scope of the present invention and are hereby incorporated by reference.
In some embodiments, the apparatus further comprises a job guidance transmission module 15 (not shown). The job guidance transmission module 15 is configured to send the operation guidance information to a corresponding third user equipment, so that the third user equipment displays the operation guidance information superimposed on an image of the target object. For example, the third user equipment comprises a third camera, by means of which image information about the operation area is captured. The first user equipment sends the operation guidance information to the third user equipment, and the third user equipment superimposes the received operation guidance information in real time on the currently presented image of the target object.
For example, the third user holds a PC device fitted with a fixed camera, through which the PC device captures image information about the electronic control box outer box in the operation area. After the augmented reality glasses acquire the operation guidance information about the outer box of the electronic control box, they send it to the PC device. The PC device receives the operation guidance information and, through a computer vision algorithm, superimposes it (such as the virtual circuit board) at the corresponding position in the currently presented image information of the outer box. Here, assume that the outer box in the operation area is described in a world coordinate system, that the camera of the PC device has a three-dimensional camera coordinate system, that the screen of the PC device has a two-dimensional image coordinate system, and that the coordinates of the outer box in the world coordinate system are known. First, the coordinate information of the outer box in the world coordinate system is transformed, according to the mapping between the camera coordinate system and the world coordinate system, into position information in the camera coordinate system; then, according to the mapping between the camera coordinate system and the image coordinate system, the position information of the outer box on the screen is determined. According to the position of the outer box on the screen, the PC device superimposes and displays operation guidance information such as the installation positions and installation sequence of parts such as the circuit board at the corresponding positions in the electronic control box, wherein the installation sequence can be distinguished by different colors, or by different shades of the same color, and the like.
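The color-coded installation order described above could be rendered on the PC screen as in the following sketch, where the screw pixel positions are assumed to have been obtained already through the coordinate mappings just described; the radius, font and shading scheme are illustrative choices.

import cv2

def draw_install_order(frame_bgr, screw_pixels):
    """screw_pixels: list of (u, v) pixel positions in installation order."""
    n = len(screw_pixels)
    for i, (u, v) in enumerate(screw_pixels):
        # Same hue, deepening shade for later steps, as the text suggests.
        red = 255 - int(155 * i / max(n - 1, 1))
        cv2.circle(frame_bgr, (int(u), int(v)), 12, (0, 0, red), 2)
        cv2.putText(frame_bgr, str(i + 1), (int(u) + 14, int(v)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, red), 1)
    return frame_bgr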
In some embodiments, the apparatus further comprises an image capture module 16 (not shown). An image capturing module 16, configured to capture image information of the target object in the operation area through a capturing device of the first user equipment; the first position obtaining module is used for obtaining first position information of the target object in a first coordinate system corresponding to the operation area through the image information. In some embodiments, the guidance information obtaining module 13 is configured to obtain the operation guidance information corresponding to the target object according to the image information. For example, the first user equipment shoots image information of a target object in an operation area through the camera device, establishes a first coordinate system through the image information, and calculates first position information of the target object in the first coordinate system; and the first user equipment performs image recognition on the target object according to the image information, matches the image information with a model in a pre-stored database through a computer vision algorithm, and takes the operation guidance information corresponding to the matched model as the operation guidance information corresponding to the target object.
For example, the augmented reality glasses capture, through the camera, image information about the outer box of the electronic control box on the current operation console, the image information also including the plane on which the operation console lies. A two-dimensional identification picture placed by the first user lies on this plane; according to the captured image information and the two-dimensional identification picture in it, the augmented reality glasses establish a three-dimensional world coordinate system with the center of the two-dimensional identification picture as the origin, and calculate the position information of the current outer box in the world coordinate system from the distance between the outer box and the center of the identification picture. The augmented reality glasses then match the image information of the outer box in the database, and determine the outer box model matching that image information together with the operation guidance information corresponding to that model.
In some embodiments, the apparatus further comprises a next-step guidance acquisition module 17 (not shown) and a next-step guidance superposition module 18 (not shown). The next-step guidance acquisition module 17 is configured to acquire the next-step operation guidance information corresponding to the operation guidance information; the next-step guidance superposition module 18 is configured to superimpose and display the next-step operation guidance information on the target object. For example, after the first user completes the job related to the current job guidance information, the first user equipment obtains the next-step job guidance information corresponding to the job guidance information, for example by reading it when the user selects/clicks the next step. The first user equipment displays the next-step job guidance information superimposed on the target object at the target object's real-time position on the equipment screen.
For example, a first user installs the electronic control box on the console and has installed the circuit board at the corresponding position according to the presenting position of the virtual circuit board in the operation guidance information; the first user clicks to perform the next operation, and the augmented reality glasses read from the database, according to that operation, the next-step operation guidance information corresponding to the target object, such as fixing the corresponding screws at the corresponding positions of the circuit board.
Of course, those skilled in the art will appreciate that the above-described next-step operation guidance information is by way of example only, and that other next-step operation guidance information, now known or later developed, that may be applicable to the present application is also encompassed within the scope of the present invention and is incorporated herein by reference.
In some embodiments, the next-step guidance acquisition module 17 is configured to receive the next-step operation guidance information corresponding to the operation guidance information, sent by the corresponding network equipment. In some embodiments, the next-step job guidance information is sent by the network equipment based on the received next operation request. For example, after the user completes the current job guidance information, the first user equipment sends a next operation request to the network equipment when it determines, based on the user's operation or by recognizing the current target object, that the current job is completed. The network equipment receives the next operation request, determines the corresponding next-step job guidance information in the database based on that request, and sends the next-step job guidance information to the first user equipment.
For example, a first user installs the electronic control box on the operation console and has installed the circuit board at the corresponding position according to the presenting position of the virtual circuit board in the operation guidance information. The first user clicks to perform the next operation, and the augmented reality glasses send a next operation request to the MES production execution system according to that operation, wherein the next operation request includes relevant information of the current operation guidance information (such as the current operation guidance information itself, or the corresponding image information when the operation guidance is completed). The MES production execution system receives the next operation request and determines, in the database, the next-step operation guidance information corresponding to the target object; for example, according to the image information of the circuit board installed in the outer box of the electronic control box, it determines that the next-step operation guidance information is to fix the corresponding screws at the corresponding positions of the circuit board. The MES production execution system then sends the next-step operation guidance information to the augmented reality glasses.
Of course, those skilled in the art will appreciate that the above-described next step operation request is by way of example only, and that other existing or future requests may be applicable to the present application and are intended to be encompassed within the scope of the present application and are hereby incorporated by reference.
In some embodiments, the next-step guidance acquisition module 17 is configured to receive the next-step operation guidance information corresponding to the operation guidance information, sent by a second user equipment, where the next-step operation guidance information was sent to the second user equipment by the network equipment. For example, the second user equipment includes a human-computer interaction device that establishes a communication connection with the first user equipment; the second user equipment may also establish a communication connection with the network equipment, may present the job guidance information about the current target object operated by the first user, and the second user may, based on that job guidance information, select to enter the next operation on the second user equipment. For another example, the second user equipment receives the next-step guidance information intended for the first user equipment and sends it on to the first user equipment, where the next-step guidance information includes information returned by the network equipment in response to a next operation request sent by the second user equipment.
For example, the second user has a human-computer interaction device (e.g., a tablet computer) that establishes communication connections with the augmented reality glasses and with the network equipment, and the first user and the second user perform the same production execution task. The human-computer interaction device and the augmented reality glasses present the same operation guidance information: for example, the human-computer interaction device presents it on its screen, while the augmented reality glasses display it superimposed on their screen. The first user installs the electronic control box on the operation console and installs the circuit board at the corresponding position according to the presenting position of the virtual circuit board in the operation guidance information; the first user clicks to perform the next operation, and the augmented reality glasses send a next operation request to the human-computer interaction device according to that operation, wherein the next operation request includes relevant information of the current operation guidance information (such as the current operation guidance information itself, or the corresponding image information when the operation guidance is completed). The MES production execution system determines, in the database according to the next operation request, the next-step operation guidance information corresponding to the target object; for example, according to the image information of the circuit board installed in the outer box of the electronic control box, it determines that the next step is to fix the corresponding screws at the corresponding positions of the circuit board. The MES production execution system sends the next-step operation guidance information to the human-computer interaction device, which forwards it to the augmented reality glasses.
For another example, a first user installs the electronic control box on the console and has installed the circuit board at the corresponding position according to the presenting position of the virtual circuit board in the operation guidance information. A second user selects the next operation on the screen of the human-computer interaction device, and the human-computer interaction device sends a next operation request to the network equipment according to that operation, wherein the next operation request includes relevant information of the current operation guidance information (for example, the current operation guidance information itself, or the corresponding image information when the operation guidance is completed). The MES production execution system receives the next operation request and determines, in the database, the next-step operation guidance information corresponding to the target object; for example, according to the image information of the circuit board installed in the outer box of the electronic control box, it determines that the next-step operation guidance information is to fix the corresponding screws at the corresponding positions of the circuit board. The MES production execution system sends the next-step operation guidance information to the human-computer interaction device, which forwards it to the augmented reality glasses.
In some embodiments, the next-step guidance acquisition module 17 is configured to obtain a next operation request about the job guidance information, submitted by the user through the first user equipment, send the next operation request to the network equipment, and receive the next-step job guidance information corresponding to the job guidance information, sent by the network equipment based on the next operation request. For example, after the user completes the current job guidance information, the user selects the next-job operation on the first user equipment, and the first user equipment sends a next operation request to the network equipment based on the user's operation. The network equipment receives the next operation request, determines the corresponding next-step job guidance information in the database based on that request, and sends the next-step job guidance information to the first user equipment.
For example, a first user installs the electronic control box on the operation console and has installed the circuit board at the corresponding position according to the presenting position of the virtual circuit board in the operation guidance information. The first user clicks to perform the next operation, and the augmented reality glasses send a next operation request to the MES production execution system according to that operation, wherein the next operation request includes relevant information of the current operation guidance information (such as the current operation guidance information itself, or the corresponding image information when the operation guidance is completed). The MES production execution system receives the next operation request and determines, in the database, the next-step operation guidance information corresponding to the target object; for example, according to the image information of the circuit board installed in the outer box of the electronic control box, it determines that the next-step operation guidance information is to fix the corresponding screws at the corresponding positions of the circuit board. The MES production execution system then sends the next-step operation guidance information to the augmented reality glasses.
In some embodiments, the apparatus includes a shooting job module 19 (not shown). The shooting job module 19 is configured to capture the target object being operated on through the capturing device in the first user equipment and send the captured image information to the corresponding network equipment; the next-step guidance acquisition module 17 is configured to receive the next-step operation guidance information corresponding to the operation guidance information, sent by the network equipment after it determines, according to the image information, that the operation guidance information is completed. For example, the first user equipment captures, in real time, image information of the target object that the current user is working on and sends it to the network equipment. The network equipment receives the image information and matches it, using a computer vision algorithm, against a preset image of the completed operation; if the image of the target object in the captured image information matches the preset image, the network equipment determines that the operation on the current target object is completed, determines the next-step operation guidance information corresponding to the current operation guidance information, and sends the next-step operation guidance information to the first user equipment.
For example, a first user is installing the electronic control box on the operation console and has installed the circuit board at the corresponding position according to the presented position of the virtual circuit board in the operation guidance information. The augmented reality glasses shoot image information of the corresponding position where the circuit board has been installed in the outer box of the electronic control box and send the image information to the MES production execution system. The MES production execution system receives the image information and matches it against the completed-state model preset for the execution task in the database; if the preset model matches, the current operation guidance information is determined to be completed, and the corresponding next-step operation guidance information is obtained, such as fixing the corresponding screws at the corresponding positions of the circuit board. The MES production execution system then sends the next-step operation guidance information to the augmented reality device.
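The matching step referred to above can be realized with standard computer vision techniques. The following is a minimal Python sketch using normalized template matching from OpenCV; the threshold, file paths and function name are illustrative assumptions, and a production system would likely use a more robust model-based comparison.

```python
import cv2

# Illustrative threshold; a deployed system would calibrate this per task.
MATCH_THRESHOLD = 0.8

def step_completed(frame_path: str, reference_path: str) -> bool:
    """Return True if the captured frame matches the preset 'operation
    completed' reference image (normalized cross-correlation)."""
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    # The reference must be no larger than the frame for matchTemplate.
    scores = cv2.matchTemplate(frame, reference, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, _ = cv2.minMaxLoc(scores)
    return best_score >= MATCH_THRESHOLD
```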
In some embodiments, the apparatus includes a position determining module 20 (not shown). The position determining module 20 is configured to determine the current position information of the operated target object in the second coordinate system; the next step guiding and overlaying module 18 is configured to superimpose and display the next-step operation guidance information on the target object according to the current position information. For example, the first user equipment determines, in real time from the image information captured after the operation, the current position information of the target object in the second coordinate system, and superimposes the next-step operation guidance information on the target object at that position according to the position information.
For example, the augmented reality glasses determine the position of the outer box of the electronic control box in the world coordinate system in real time and obtain its position on the glasses screen according to the coordinate conversion relationship. On receiving the next-step operation guidance information about fixing screws at the corresponding positions of the circuit board installed in the outer box of the electronic control box, the glasses superimpose, at the corresponding position of the circuit board within the outer box on the current screen, the information about the screws to be fixed, such as their positions, sequence and models.
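As an illustration of this per-frame overlay, the following Python sketch recomputes the screen position of the target object from the current tracked pose and redraws the guidance there, using the world-to-image camera model derived in the following section; `track_pose` and `draw_overlay` are hypothetical stand-ins for the glasses' tracking and rendering interfaces.

```python
import numpy as np

def world_to_screen(X_world, K, R, t):
    """Project a homogeneous world point (4-vector) to screen coordinates
    via m = K(R, t)X; R is 3x3, t is a 3x1 column vector."""
    P = K @ np.hstack([R, t])    # 3x4 camera matrix
    m = P @ X_world              # homogeneous image point
    return m[:2] / m[2]

def overlay_loop(target_world_pos, guidance, K, track_pose, draw_overlay):
    """Per-frame loop: re-track the pose, re-project the target, redraw."""
    while True:
        R, t = track_pose()                       # pose from recognition/tracking
        u, v = world_to_screen(target_world_pos, K, R, t)
        draw_overlay(guidance, (u, v))            # superimpose guidance there
```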
In some embodiments, the device includes a next step reading module 01 (not shown). The next step reading module 01 is configured to shoot the target object being operated on through a shooting device in the first user equipment and to determine, from the image information, whether the operation guidance information in the first user equipment is finished. For example, the first user equipment captures, in real time, image information of the target object while the current user is working and determines from it whether the current work is completed, for instance by matching the captured image against a preset image of the completed work using a computer vision algorithm; if the target object in the captured image matches the preset image, the first user equipment determines that the work on the current target object is completed and reads, from the database, the next-step work guidance information corresponding to the current work guidance information.
For example, a first user is installing the electronic control box on the console and has installed the circuit board at the corresponding position according to the presented position of the virtual circuit board in the operation guidance information. The augmented reality glasses shoot image information of the corresponding position where the circuit board has been installed in the outer box of the electronic control box and match it against the completed-state model preset for the execution task in the database; if the preset model matches, the current operation guidance information is determined to be completed, and the glasses close out the current operation guidance and obtain the corresponding next-step operation guidance information, for example, fixing the corresponding screws at the corresponding positions of the circuit board.
In some embodiments, the second position obtaining module 12 is configured to determine, according to a first coordinate mapping relationship between a second coordinate system and a human eye coordinate system corresponding to the first user equipment, a second coordinate mapping relationship between the human eye coordinate system and a camera coordinate system, and a third coordinate mapping relationship between the camera coordinate system and the first coordinate system, a coordinate mapping relationship between the second coordinate system and the first coordinate system corresponding to the first user, and determine, based on the coordinate mapping relationship between the second coordinate system and the first coordinate system, second position information of the target object in the second coordinate system.
For example, assume the target object has its own first coordinate system (taken here as the world coordinate system), the physical camera of the augmented reality glasses has a three-dimensional camera coordinate system, the human eyes together with the glasses screen form a virtual camera coordinate system, and the glasses screen carries a two-dimensional second coordinate system (the image coordinate system). The coordinates of the target object in the world coordinate system are known. The target object is first converted into the three-dimensional camera coordinate system of the glasses' physical camera, the conversion matrix being obtained through the recognition and tracking algorithm. It is then converted, using the known extrinsic calibration parameters, into the virtual camera coordinate system formed by the human eyes and the glasses screen, and finally the coordinates on the glasses screen are obtained from the known intrinsic parameters of the virtual camera. In this way the final position of the target object on the glasses screen can be calculated. After the augmented reality glasses determine the coordinate mapping relationship from the world coordinate system to the image coordinate system, the position information of the target object on the glasses screen is obtained from its position in the world coordinate system. The specific camera model for converting world coordinates to image coordinates is as follows:
1. Ideal model
Fig. 3 shows the basic imaging model of a camera, commonly referred to as the basic pinhole model, which is given by a central projection from three-dimensional space onto a plane.
As shown in FIG. 3, O_c is the centre of the camera, at a distance f from the image plane π of the camera; f is called the focal length of the camera. The projection (or image) m of a space point X_c on the plane π is the intersection with π of the ray starting at O_c and passing through X_c. The ray starting at O_c and perpendicular to the image plane is called the optical axis or principal axis, and the intersection point p of the principal axis with the image plane is called the principal point of the camera. To describe this projection relationship algebraically, a camera coordinate system and an image plane coordinate system need to be established. On the image plane, the image coordinate system o-xy is established with the principal point p as the coordinate origin and with the horizontal and vertical lines through p as the x and y axes. In space, the camera coordinate system O_c-x_c y_c z_c is established with the camera centre O_c as the coordinate origin, as shown in Fig. 3(b). The homogeneous coordinates of the space point in the camera coordinate system are written X_c = (x_c, y_c, z_c, 1)^T, and the homogeneous coordinates of its image point in the image coordinate system are written m = (x, y, 1)^T, so that (X_c, m) is a space-point/image-point pair. By the triangle similarity principle, the space point X_c and its image point m satisfy:

x = f·x_c / z_c,  y = f·y_c / z_c

Converted to matrix form:

z_c·(x, y, 1)^T = diag(f, f, 1)·(I, 0)·X_c

where X_c = (x_c, y_c, z_c, 1)^T and m = (x, y, 1)^T are the homogeneous coordinates of the space point and the image point respectively; this is a homogeneous linear transformation from space to the image plane. Let P = diag(f, f, 1)·(I, 0); then the above formula can be written in the simpler form:

m = P·X_c (11)

Note that (11) is a homogeneous equation: its two sides are equal up to a non-zero constant factor. The matrix P is usually called the camera matrix. This is the algebraic representation of the basic imaging model.
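As a concrete check of equation (11), the following Python sketch builds P = diag(f, f, 1)(I, 0) and projects a camera-frame point; the numeric values are illustrative only.

```python
import numpy as np

f = 0.05  # focal length in metres (illustrative value)

# P = diag(f, f, 1)(I, 0): the basic pinhole camera matrix of equation (11).
P = np.diag([f, f, 1.0]) @ np.hstack([np.eye(3), np.zeros((3, 1))])

X_c = np.array([0.2, 0.1, 2.0, 1.0])   # homogeneous point in camera coordinates
m = P @ X_c                             # homogeneous image point, m = P·X_c
x, y = m[:2] / m[2]                     # x = f·x_c/z_c, y = f·y_c/z_c
print(x, y)                             # 0.005 0.0025
```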
2. Actual model
The discussion above treated the ideal case, in which the principal point is the origin of the image coordinate system. In practice, three complications arise:
first, the origin of the image plane coordinate system may not coincide with the principal point;
second, the images used for computer processing are typically digital images acquired by a CCD camera, which digitally discretizes the points of the image plane; the resulting pixels need not be square, i.e. the scales on the two image axes need not be equal, so non-equal scale factors must be introduced;
third, typical cameras exhibit distortion.
Under the above three conditions, the ideal central projection model can be rewritten as a five-parameter model with the intrinsic matrix:

K = [ f_x  s    u_0 ]
    [ 0    f_y  v_0 ]   (12)
    [ 0    0    1   ]

Likewise, the projection relationship of the camera can then be written as:

m = K·(I, 0)·X_c = P·X_c (13)

where K is the camera intrinsic parameter matrix, f_x and f_y are called the scale factors of the CCD camera in the u-axis and v-axis directions, (u_0, v_0)^T is called the principal point of the CCD camera, and s is called the distortion (or skew) factor of the CCD camera. The camera thus has five intrinsic parameters in total.
3. General model
In general, a three-dimensional point is described in the world coordinate system rather than in the camera coordinate system, since the camera may be moving at any time. The relationship between the world coordinate system and the camera coordinate system can be described by a rotation matrix and a translation vector, as shown in fig. 4.
Let the homogeneous coordinates of a space point in the world coordinate system and in the camera coordinate system be X = (x, y, z, 1)^T and X_c = (x_c, y_c, z_c, 1)^T respectively; then the relationship between the two is:

X_c = [ R    t ] · X   (14)
      [ 0^T  1 ]

Substituting (14) into (13) gives:

m = K·(R, t)·X = P·X (15)

where the camera matrix is P = K·(R, t) and (R, t) is the extrinsic parameter matrix of the camera; C̃ = −R^T·t gives the coordinates of the camera centre in the world coordinate system, so the camera matrix can also be written P = K·R·(I, −C̃). R = R(α, β, γ) is a rotation matrix, where α, β and γ are the rotation angles around the x, y and z axes of the camera coordinate system, and t = (T_x, T_y, T_z)^T is the translation vector, where T_x, T_y and T_z are the translations along the x, y and z axes of the camera coordinate system. The camera extrinsic parameters therefore consist of six parameters (α, β, γ, T_x, T_y, T_z).
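Putting the general model of equation (15) together, the following sketch composes K with an extrinsic pair (R, t) and projects a world point to pixel coordinates; the rotation convention R = Rz·Ry·Rx and all numeric values are illustrative assumptions.

```python
import numpy as np

def rotation(alpha: float, beta: float, gamma: float) -> np.ndarray:
    """R(alpha, beta, gamma): rotations about the x, y and z axes.
    The composition order R = Rz @ Ry @ Rx is one common convention."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

K = np.array([[800., 0., 320.],        # intrinsics, as in equation (12)
              [0., 800., 240.],
              [0., 0., 1.]])

R = rotation(0.0, 0.1, 0.0)            # extrinsic rotation (illustrative angles)
t = np.array([[0.02], [0.0], [0.5]])   # translation (T_x, T_y, T_z)^T

P = K @ np.hstack([R, t])              # camera matrix P = K(R, t), eq. (15)

X = np.array([0.1, 0.0, 1.0, 1.0])     # homogeneous world point
m = P @ X                              # m = P·X
u, v = m[:2] / m[2]                    # pixel coordinates on the image
```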
Fig. 10 illustrates an augmented reality based smart manufacturing network device according to another aspect of the present application, wherein the device includes a next step instruction sending module 21. The next step instruction sending module 21 is configured to send next-step operation instruction information corresponding to the operation instruction information to the corresponding first user equipment.
For example, a first user wears augmented reality glasses, the glasses shoot image information related to the current operation area through a camera device, and the glasses establish a communication connection with the network device. Based on this communication, the network device determines the current operation guidance information of the first user, determines the corresponding next-step operation guidance information in the database, and sends it to the first user equipment. The first user equipment receives the next-step operation guidance information and superimposes it on the target object in real time according to the position of the target object on the current screen.
In some embodiments, the next step instruction sending module 21 is configured to receive a next-step operation request corresponding to job instruction information in a corresponding first user equipment, where the first user equipment and the second user equipment serve the same intelligent manufacturing task, to determine the next-step job instruction information corresponding to the job instruction information, and to send it to the second user equipment. For example, the second user equipment includes a human-computer interaction device that establishes a communication connection with the first user equipment, and the second user equipment may also establish a communication connection with the network device. The second user equipment acquires the next-step operation request corresponding to the current operation guidance information of the first user equipment, either based on an operation performed by the second user (e.g., acquiring the next-step operation request corresponding to the current operation information of the target object) or by receiving the next-step operation request sent by the first user equipment; the second user equipment then sends the next-step operation request to the network device. The network device determines the corresponding next-step operation guidance information according to the received request and sends it to the second user equipment.
For example, the second user has a human-computer interaction device (e.g., a tablet computer), the human-computer interaction device establishes communication connections with the augmented reality glasses and the network device, and the first user and the second user serve the same production execution task. The human-computer interaction device and the augmented reality glasses present the same operation guidance information; for example, the human-computer interaction device presents the operation guidance information on its screen while the augmented reality glasses display the corresponding operation guidance information superimposed on theirs. The first user is installing the electronic control box on the operation console and has installed the circuit board at the corresponding position according to the presented position of the virtual circuit board in the operation guidance information; the first user clicks to proceed to the next operation, and the augmented reality glasses send a next-step operation request to the human-computer interaction device based on that operation, wherein the request includes information related to the current operation guidance information (such as the current operation guidance information itself, or the corresponding image information captured when the operation guidance was completed). The MES production execution system determines, in the database, the next-step operation guidance information corresponding to the target object according to the request; for instance, from the image information of the circuit board installed in the outer box of the electronic control box, it determines that the next step is to fix the corresponding screws at the corresponding positions of the circuit board. The MES production execution system sends the next-step operation guidance information to the human-computer interaction device, which forwards it to the augmented reality glasses.
For another example, a first user is installing the electronic control box on the console and has installed the circuit board at the corresponding position according to the presented position of the virtual circuit board in the operation guidance information. A second user selects the next operation on the screen of the human-computer interaction device, and the human-computer interaction device sends a next-step operation request to the network device based on that operation, wherein the request includes information related to the current operation guidance information (such as the current operation guidance information itself, or the corresponding image information captured when the operation guidance was completed). The MES production execution system receives the request and determines, in the database, the next-step operation guidance information corresponding to the target object; for instance, from the image information of the circuit board installed in the outer box of the electronic control box, it determines that the next step is to fix the corresponding screws at the corresponding positions of the circuit board. The MES production execution system sends the next-step operation guidance information to the human-computer interaction device, which forwards it to the augmented reality glasses.
In some embodiments, the next step instruction sending module 21 is configured to receive a next-step operation request related to job instruction information sent by the first user equipment, determine the next-step job instruction information corresponding to the job instruction information, and send it to the second user equipment. For example, after completing the current job guidance information, the user selects the operation of entering the next job on the first user equipment, or the first user equipment determines from a captured image of the job that the current job guidance is completed; the first user equipment then sends a next-step operation request to the network device based on the user operation. The network device receives the request, determines the corresponding next-step operation guidance information in the database, and sends it to the second user equipment.
For example, a first user is installing the electronic control box on the operation console and has installed the circuit board at the corresponding position according to the presented position of the virtual circuit board in the operation guidance information; the first user clicks to proceed to the next operation, and the augmented reality glasses send a next-step operation request to the MES production execution system based on that operation. For another example, the augmented reality glasses shoot, through the camera device, image information of the operation guidance related to the electronic control box on the current operation platform and determine from that image information that the current operation guidance information is completed; the augmented reality glasses then send a next-step operation request to the MES production execution system, wherein the request includes information related to the current operation guidance information (such as the current operation guidance information itself, or the corresponding image information captured when the operation guidance was completed). The MES production execution system receives the request and determines, in the database, the next-step operation guidance information corresponding to the target object; for instance, from the image information of the circuit board installed in the outer box of the electronic control box, it determines that the next step is to fix the corresponding screws at the corresponding positions of the circuit board. The MES production execution system sends the next-step operation guidance information to the augmented reality glasses.
In some embodiments, the apparatus also includes a job completion determining module 22 (not shown). The job completion determining module 22 is configured to receive image information about the target object sent by the corresponding first user equipment, where the image information is obtained by shooting the target object being operated on through a shooting device in the first user equipment, and to determine from the image information whether the job guidance information in the first user equipment is completed; the next step instruction sending module 21 is configured to send, to the first user equipment, the next-step operation instruction information corresponding to the operation instruction information if the operation instruction information is completed. For example, the first user equipment captures, in real time, image information of the target object while the current user is working and sends it to the network device. The network device receives the image information and matches it, using a computer vision algorithm, against a preset image of the completed operation; if the target object in the captured image matches the preset image, the network device determines that the operation on the current target object is completed, determines the next-step operation guidance information corresponding to the current operation guidance information, and sends it to the first user equipment.
For example, a first user is installing the electronic control box on the operation console and has installed the circuit board at the corresponding position according to the presented position of the virtual circuit board in the operation guidance information. The augmented reality glasses shoot image information of the corresponding position where the circuit board has been installed in the outer box of the electronic control box and send it to the MES production execution system. The MES production execution system receives the image information and matches it against the completed-state model preset for the execution task in the database; if the preset model matches, the current operation guidance information is determined to be completed, and the corresponding next-step operation guidance information is obtained, such as fixing the corresponding screws at the corresponding positions of the circuit board. The MES production execution system then sends the next-step operation guidance information to the augmented reality device.
Fig. 11 shows a second augmented reality based smart manufacturing user equipment according to yet another aspect of the present application, wherein the equipment comprises a next request obtaining module 31, a next request sending module 32, a next instruction receiving module 33 and a next instruction sending module 34. The next request obtaining module 31 is configured to obtain a next-step operation request corresponding to job guidance information in a corresponding first user equipment, where the first user equipment and the second user equipment serve the same intelligent manufacturing task and the next-step operation request is submitted by a user through the second user equipment; the next request sending module 32 is configured to send the next-step operation request to the corresponding network device; the next instruction receiving module 33 is configured to receive the next-step operation instruction information corresponding to the operation instruction information, sent by the network device based on the next-step operation request; and the next instruction sending module 34 is configured to send the next-step operation instruction information to the first user equipment corresponding to the operation instruction information. For example, a first user holds a first user equipment, a second user holds a human-computer interaction device (such as a PC), the first user equipment captures image information related to the target object in the current operating area through a camera, and the first user equipment establishes communication connections with the second user equipment and the network device. When the current operation guidance of the first user is finished, the second user equipment sends a next-step operation request to the network device based on an operation by the second user; the network device receives the request, determines the corresponding next-step operation guidance information in the database, and sends it to the second user equipment. The second user equipment receives the next-step operation guidance information and sends it to the first user equipment. The first user equipment receives the next-step operation guidance information and superimposes it on the target object in real time according to the position of the target object on the current screen.
For example, a first user holds augmented reality glasses and a second user holds a human-computer interaction device; the augmented reality glasses are in communication connection with the human-computer interaction device, and the human-computer interaction device is in communication connection with the MES production execution system. The human-computer interaction device and the augmented reality glasses present the same operation guidance information, such as installation guidance information related to the outer box of the electronic control box on the current operation console. The first user is installing the electronic control box on the operation desk and has installed the circuit board at the corresponding position according to the presented position of the virtual circuit board in the operation guidance information; the second user selects the next operation on the screen of the human-computer interaction device, and the human-computer interaction device sends a next-step operation request to the network device based on that operation, wherein the request includes information related to the current operation guidance information (such as the current operation guidance information itself, or the corresponding image information captured when the operation guidance was completed). The MES production execution system receives the request and determines, in the database, the next-step operation guidance information corresponding to the target object; for instance, from the image information of the circuit board installed in the outer box of the electronic control box, it determines that the next step is to fix the corresponding screws at the corresponding positions of the circuit board. The MES production execution system sends the next-step operation guidance information to the human-computer interaction device, which forwards it to the augmented reality glasses. On receiving the next-step operation guidance information about fixing screws at the corresponding positions of the circuit board installed in the outer box of the electronic control box, the glasses superimpose, at the corresponding position of the circuit board within the outer box on the current screen, the information about the screws to be fixed, such as their positions, sequence and models.
In some embodiments, the next step request obtaining module 31 is configured to receive a next step operation request corresponding to job guidance information in a first user equipment, where the first user equipment and the second user equipment serve a same smart manufacturing task.
For example, a first user is installing the electronic control box on the console and has installed the circuit board at the corresponding position according to the presented position of the virtual circuit board in the operation guidance information; the first user clicks to proceed to the next operation, and the augmented reality glasses send a next-step operation request to the human-computer interaction device based on that operation, wherein the request includes information related to the current operation guidance information (such as the current operation guidance information itself, or the corresponding image information captured when the operation guidance was completed). The human-computer interaction device receives the request and sends it to the MES production execution system, which determines the corresponding next-step operation guidance information; the next-step operation guidance information is then forwarded to the augmented reality glasses via the human-computer interaction device.
FIG. 12 illustrates an augmented reality based smart manufacturing system according to an aspect of the subject application, wherein the system comprises a first user device as described above and a network device as described above.
FIG. 13 illustrates an augmented reality based smart manufacturing system according to an aspect of the subject application, wherein the system comprises a first user device as described above, a network device as described above, and a second user device as described above.
The present application also provides a computer readable storage medium having stored thereon computer code which, when executed, performs the method according to any one of the preceding embodiments.
The present application also provides a computer program product which, when executed by a computer device, performs the method according to any one of the preceding embodiments.
The present application further provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of the preceding embodiments.
FIG. 14 illustrates an exemplary system that can be used to implement the various embodiments described herein.
in some embodiments, as shown in FIG. 14, the system 300 can be implemented as any of the augmented reality based smart manufacturing apparatuses of the various described embodiments. In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules to perform the actions described herein.
For one embodiment, system control module 310 may include any suitable interface controllers to provide any suitable interface to at least one of processor(s) 305 and/or any suitable device or component in communication with system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or may be accessed by the device and not necessarily part of the device. For example, NVM/storage 320 may be accessible over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. System 300 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310 to form a system on a chip (SoC).
In various embodiments, system 300 may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Those skilled in the art will appreciate that the form in which the computer program instructions reside on a computer-readable medium includes, but is not limited to, source files, executable files, installation package files, and the like, and that the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Computer-readable media herein can be any available computer-readable storage media or communication media that can be accessed by a computer.
Communication media includes media by which communication signals, including, for example, computer readable instructions, data structures, program modules, or other data, are transmitted from one system to another. Communication media may include conductive transmission media such as cables and wires (e.g., fiber optics, coaxial, etc.) and wireless (non-conductive transmission) media capable of propagating energy waves such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied in a modulated data signal, for example, in a wireless medium such as a carrier wave or similar mechanism such as is embodied as part of spread spectrum techniques. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); and non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); and magnetic and optical storage devices (hard disk, tape, CD, DVD); or other now known media or later developed that can store computer-readable information/data for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Claims (33)
1. An augmented reality based smart manufacturing method at a first user equipment, wherein the method comprises:
acquiring first position information of a target object in an operation area in a first coordinate system corresponding to the operation area;
determining a coordinate mapping relation between a second coordinate system corresponding to the first user equipment and a first coordinate system according to a first coordinate mapping relation between the second coordinate system corresponding to the first user equipment and a human eye coordinate system, a second coordinate mapping relation between the human eye coordinate system and a camera coordinate system and a third coordinate mapping relation between the camera coordinate system and the first coordinate system, wherein the second coordinate system comprises a two-dimensional coordinate system established based on a screen of the first user equipment, and the human eye coordinate system comprises a three-dimensional coordinate system established based on a human eye and the screen;
determining second position information of the target object in the second coordinate system based on the first position information and the coordinate mapping relation between the second coordinate system and the first coordinate system;
acquiring operation guidance information corresponding to the target object;
and displaying the operation guidance information corresponding to the target object in a superposed manner according to the second position information.
2. The method of claim 1, wherein the method further comprises:
and sending the operation guide information to corresponding third user equipment so that the third user equipment can superpose and display the operation guide information on the image of the target object.
3. The method according to claim 1 or 2, wherein the method further comprises:
shooting image information of the target object in the operation area through a shooting device of the first user equipment;
the acquiring first position information of a target object in an operation area in a first coordinate system corresponding to the operation area includes:
and acquiring first position information of the target object in a first coordinate system corresponding to the operation area through the image information.
4. The method according to claim 3, wherein the acquiring of the operation guidance information corresponding to the target object comprises:
and acquiring the operation guidance information corresponding to the target object according to the image information.
5. The method of claim 1, wherein the method further comprises:
acquiring next step operation guide information corresponding to the operation guide information;
and overlapping and displaying the next operation guidance information on the target object.
6. The method of claim 5, wherein the obtaining of next step operation guidance information corresponding to the operation guidance information comprises:
and receiving next operation guide information which is sent by the corresponding network equipment and corresponds to the operation guide information.
7. The method of claim 6, wherein the next step job guidance information is transmitted by the network device based on the received next step operation request.
8. The method of claim 7, wherein the receiving next step operation guidance information corresponding to the operation guidance information and transmitted by the corresponding network device comprises:
and receiving next-step operation guide information which is sent by corresponding second user equipment and corresponds to the operation guide information, wherein the next-step operation guide information is sent to the second user equipment by the network equipment.
9. The method of claim 7, wherein the receiving next step operation guidance information corresponding to the operation guidance information and transmitted by the corresponding network device comprises:
acquiring a next operation request which is submitted by a user through the first user equipment and relates to the operation guide information;
sending the next operation request to the network equipment;
and receiving next operation guide information which is sent by the network equipment based on the next operation request and corresponds to the operation guide information.
10. The method of claim 6, wherein the method further comprises:
shooting the operated target object through a shooting device in the first user equipment, and sending the shot image information to corresponding network equipment;
wherein the receiving of the next operation guidance information corresponding to the operation guidance information and sent by the corresponding network device includes:
and receiving next step operation guide information which is sent by the network equipment after the operation guide information is determined to be finished according to the image information and corresponds to the operation guide information.
11. The method of claim 5, wherein the method further comprises:
determining the current position information of the operated target object in the second coordinate system;
wherein, the displaying the next operation guidance information in an overlapping manner on the target object comprises:
and overlapping and displaying the next operation guidance information on the target object according to the current position information.
12. The method of claim 5, wherein the method further comprises:
shooting real-time image information of the operated target object through a shooting device in the first user equipment;
determining whether the operation guidance information in the first user equipment is finished according to the real-time image information;
the acquiring of the next step of operation guidance information corresponding to the operation guidance information includes:
and if the operation guide information is finished, reading the next operation guide information corresponding to the operation guide information.
13. An intelligent manufacturing method based on augmented reality at a network device end, wherein the method comprises:
sending next step operation guiding information corresponding to the operation guiding information to corresponding first user equipment, wherein the next step operation guiding information comprises next step operation of the operation guiding information corresponding to a target object, the operation guiding information is used for being superposed and displayed on the target object according to second position information, the second position information comprises position information of the target object in a corresponding second coordinate system, the second position information is determined based on first position information of the target object in a first coordinate system corresponding to an operation area, a coordinate mapping relation between the second coordinate system and a first coordinate system, the coordinate mapping relation is determined according to a first coordinate mapping relation between the second coordinate system and a human eye coordinate system, a second coordinate mapping relation between the human eye coordinate system and a camera coordinate system and a third coordinate mapping relation between the camera coordinate system and the first coordinate system, the second coordinate system comprises a two-dimensional coordinate system established based on a screen of the first user equipment, and the human eye coordinate system comprises a three-dimensional coordinate system established based on a human eye and the screen.
14. The method of claim 13, wherein the sending next step work guidance information corresponding to the work guidance information to the corresponding first user equipment comprises:
receiving a next operation request which is sent by corresponding second user equipment and corresponds to operation guidance information in corresponding first user equipment, wherein the first user equipment and the second user equipment serve the same intelligent manufacturing task;
determining next step operation guide information corresponding to the operation guide information;
and sending the next operation guide information to the second user equipment.
15. The method of claim 13, wherein the sending next step work guidance information corresponding to the work guidance information to the corresponding first user equipment comprises:
receiving a next operation request which is sent by corresponding first user equipment and relates to the operation guidance information;
determining next step operation guide information corresponding to the operation guide information;
and sending the next step of operation guidance information to corresponding second user equipment, wherein the first user equipment and the second user equipment serve the same intelligent manufacturing task.
16. The method of claim 13, wherein the method further comprises:
receiving image information which is sent by corresponding first user equipment and is about a target object, wherein the image information is obtained by shooting the target object which is operated through a shooting device in the first user equipment;
determining whether the job guidance information in the first user equipment is finished according to the image information;
the step of sending the next operation guidance information corresponding to the operation guidance information to the corresponding first user equipment comprises the following steps:
and if the operation guide information is finished, sending next operation guide information corresponding to the operation guide information to the first user equipment.
17. An augmented reality based smart manufacturing method at a second user equipment, wherein the method comprises:
acquiring a next operation request which is submitted by a user through the second user equipment and corresponds to the operation guidance information in the corresponding first user equipment, wherein the first user equipment and the second user equipment serve the same intelligent manufacturing task;
sending the next operation request to corresponding network equipment;
receiving next operation guiding information which is sent by the network equipment based on the next operation request and corresponds to the operation guiding information, wherein the next operation guiding information comprises next operation of the operation guiding information corresponding to a target object, the operation guiding information is used for being superposed and displayed on the target object according to second position information, the second position information comprises position information of the target object in a corresponding second coordinate system, the second position information is determined based on first position information of the target object in a first coordinate system corresponding to an operation area and a coordinate mapping relation between the second coordinate system and the first coordinate system, and the coordinate mapping relation is determined according to a first coordinate mapping relation between the second coordinate system and a human eye coordinate system, a second coordinate mapping relation between the human eye coordinate system and a camera coordinate system, and a third coordinate mapping relation between the camera coordinate system and the first coordinate system, wherein the second coordinate system comprises a two-dimensional coordinate system established based on a screen of the first user equipment, and the human eye coordinate system comprises a three-dimensional coordinate system established based on human eyes and the screen;
and sending the next operation guide information to the first user equipment corresponding to the operation guide information.
18. The method of claim 17, wherein the obtaining of the next operation request corresponding to the job guidance information in the corresponding first user equipment submitted by the user through the second user equipment, wherein the first user equipment and the second user equipment serve the same intelligent manufacturing task comprises:
and receiving a next operation request which is sent by first user equipment and corresponds to the operation guidance information in the first user equipment, wherein the first user equipment and the second user equipment serve the same intelligent manufacturing task.
19. An augmented reality based smart manufacturing method, wherein the method comprises:
the method comprises the steps that first user equipment obtains first position information of a target object in an operation area in a first coordinate system corresponding to the operation area, and determines a coordinate mapping relation between a second coordinate system corresponding to the first user equipment and the first coordinate system according to a first coordinate mapping relation between the second coordinate system and a human eye coordinate system, a second coordinate mapping relation between the human eye coordinate system and a camera coordinate system, and a third coordinate mapping relation between the camera coordinate system and the first coordinate system, wherein the second coordinate system comprises a two-dimensional coordinate system established based on a screen of the first user equipment, and the human eye coordinate system comprises a three-dimensional coordinate system established based on human eyes and the screen; determining second position information of the target object in the second coordinate system based on the first position information and the coordinate mapping relation between the second coordinate system and the first coordinate system, acquiring operation guide information corresponding to the target object, and displaying the operation guide information corresponding to the target object in a superposed manner on the target object according to the second position information;
the network equipment sends next step operation guide information corresponding to the operation guide information to the first user equipment;
and the first user equipment receives the next operation guidance information and displays the next operation guidance information on the target object in an overlapping manner.
20. An augmented reality based smart manufacturing method, wherein the method comprises:
the method comprises the steps that first user equipment obtains first position information of a target object in an operation area in a first coordinate system corresponding to the operation area, and determines a coordinate mapping relation between a second coordinate system corresponding to the first user equipment and the first coordinate system according to a first coordinate mapping relation between the second coordinate system and a human eye coordinate system, a second coordinate mapping relation between the human eye coordinate system and a camera coordinate system, and a third coordinate mapping relation between the camera coordinate system and the first coordinate system, wherein the second coordinate system comprises a two-dimensional coordinate system established based on a screen of the first user equipment, and the human eye coordinate system comprises a three-dimensional coordinate system established based on human eyes and the screen; determining second position information of the target object in the second coordinate system based on the first position information and the coordinate mapping relation between the second coordinate system and the first coordinate system, acquiring operation guide information corresponding to the target object, and displaying the operation guide information corresponding to the target object in a superposed manner on the target object according to the second position information;
the method comprises the steps that a second user device obtains a next operation request which is submitted by a user through the second user device and corresponds to operation guide information in a first user device, wherein the first user device and the second user device serve the same intelligent manufacturing task;
the second user equipment sends the next operation request to corresponding network equipment;
the network equipment receives the next operation request, determines next operation guide information corresponding to the operation guide information, and sends the next operation guide information to the second user equipment;
the second user equipment receives the next operation guide information and sends the next operation guide information to the first user equipment;
and the first user equipment receives the next operation guidance information and displays the next operation guidance information on the target object in an overlapping manner.
21. An augmented reality based first user device for smart manufacturing, wherein the device comprises:
the first position acquisition module is used for acquiring first position information of a target object in an operation area in a first coordinate system corresponding to the operation area;
the second position acquisition module is used for determining a coordinate mapping relation between a second coordinate system corresponding to the first user equipment and the first coordinate system according to a first coordinate mapping relation between the second coordinate system corresponding to the first user equipment and a human eye coordinate system, a second coordinate mapping relation between the human eye coordinate system and a camera coordinate system and a third coordinate mapping relation between the camera coordinate system and the first coordinate system, wherein the second coordinate system comprises a two-dimensional coordinate system established based on a screen of the first user equipment, and the human eye coordinate system comprises a three-dimensional coordinate system established based on a human eye and the screen; determining second position information of the target object in the second coordinate system based on the first position information and the coordinate mapping relation between the second coordinate system and the first coordinate system;
the guidance information acquisition module is used for acquiring operation guidance information corresponding to the target object;
and the guidance information superposition module is used for superposing and displaying the operation guidance information corresponding to the target object on the target object according to the second position information.
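As a toy illustration of the guidance information superposition module on a video see-through device, the sketch below draws the guidance at the computed second position information; the OpenCV rendering choices (font, color, offsets) are invented, not taken from the patent.

```python
import cv2

# Illustrative sketch only: superpose the operation guidance on the camera
# frame at the target object's screen position (second position information).
def overlay_guidance(frame, screen_xy, guidance_text):
    x, y = int(round(screen_xy[0])), int(round(screen_xy[1]))
    cv2.circle(frame, (x, y), 6, (0, 255, 0), -1)          # mark the target
    cv2.putText(frame, guidance_text, (x + 10, y - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    return frame
```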
22. The equipment of claim 21, wherein the equipment further comprises:
and the operation guidance sending module is used for sending the operation guidance information to corresponding third user equipment so that the third user equipment can superpose and display the operation guidance information on the image of the target object.
23. The equipment of claim 21 or 22, wherein the equipment further comprises:
a next-step guidance acquisition module for acquiring next-step operation guidance information corresponding to the operation guidance information;
and a next-step guidance superposition module for superposing and displaying the next-step operation guidance information on the target object.
24. The equipment of claim 23, wherein the next-step guidance acquisition module is used for:
receiving next-step operation guidance information which is sent by the corresponding network equipment and corresponds to the operation guidance information.
25. The equipment of claim 24, wherein the next-step guidance acquisition module is used for:
acquiring a next-step operation request which is submitted by a user through the first user equipment and relates to the operation guidance information;
sending the next-step operation request to the network equipment;
and receiving next-step operation guidance information which is sent by the network equipment based on the next-step operation request and corresponds to the operation guidance information.
26. The equipment of claim 24, wherein the equipment further comprises:
a shooting operation module for shooting the operated target object through a shooting device in the first user equipment and sending the shot image information to the corresponding network equipment;
wherein the next-step guidance acquisition module is used for:
receiving next-step operation guidance information which corresponds to the operation guidance information and is sent by the network equipment after the network equipment determines, according to the image information, that the operation indicated by the operation guidance information is finished.
27. The equipment of claim 24, wherein the equipment further comprises a next-step reading module for:
shooting real-time image information of the operated target object through a shooting device in the first user equipment;
and determining, according to the real-time image information, whether the operation indicated by the operation guidance information in the first user equipment is finished;
wherein the next-step guidance acquisition module is used for:
if the operation indicated by the operation guidance information is finished, reading the next-step operation guidance information corresponding to the operation guidance information.
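Claim 27's on-device completion check could, for example, be approximated by template-matching the live camera frame against a reference image of the completed step. The sketch below shows this under that assumption; the matching method, threshold, and camera index are all illustrative choices, not taken from the patent.

```python
import cv2

# Illustrative sketch only: decide on-device whether the current operation
# step looks finished. Threshold and method are invented assumptions.

COMPLETION_THRESHOLD = 0.8

def step_completed(frame, reference, threshold=COMPLETION_THRESHOLD):
    """Return True when the completed-step reference appears in the frame."""
    scores = cv2.matchTemplate(frame, reference, cv2.TM_CCOEFF_NORMED)
    return float(scores.max()) >= threshold

def watch_for_completion(reference, camera_index=0):
    """Poll the device camera until the current step appears finished."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                return False
            if step_completed(frame, reference):
                return True   # now read the next-step operation guidance
    finally:
        cap.release()
```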
28. A network equipment for augmented reality-based intelligent manufacturing, wherein the equipment comprises:
a next-step guidance sending module, configured to send next-step operation guidance information corresponding to operation guidance information to a corresponding first user equipment, wherein the next-step operation guidance information comprises the next operation step, for a target object, of the operation guidance information, the operation guidance information is displayed on the target object in an overlapping manner according to second position information, the second position information comprises position information of the target object in a corresponding second coordinate system and is determined based on first position information of the target object in a first coordinate system corresponding to an operation area and a coordinate mapping relationship between the second coordinate system and the first coordinate system, the coordinate mapping relationship is determined according to a first coordinate mapping relationship between the second coordinate system and a human eye coordinate system, a second coordinate mapping relationship between the human eye coordinate system and a camera coordinate system, and a third coordinate mapping relationship between the camera coordinate system and the first coordinate system, the second coordinate system comprises a two-dimensional coordinate system established based on a screen of the first user equipment, and the human eye coordinate system comprises a three-dimensional coordinate system established based on a human eye and the screen.
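On the network side, the minimal behavior claim 28 requires is a lookup from the current guidance to its next step. A toy sketch, with invented guidance identifiers:

```python
# Illustrative sketch only: the network equipment's next-step lookup.
# The guidance identifiers and their ordering are invented examples.

NEXT_STEP = {
    "fit-bracket": "tighten-bolts",
    "tighten-bolts": "connect-cable",
    "connect-cable": None,  # last step of the task
}

def next_guidance(current_guidance_id):
    """Return the next-step operation guidance for the current step, if any."""
    return NEXT_STEP.get(current_guidance_id)
```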
29. A second user equipment for augmented reality-based intelligent manufacturing, wherein the equipment comprises:
a next-step request acquisition module, configured to acquire a next-step operation request corresponding to operation guidance information in a corresponding first user equipment, wherein the first user equipment and the second user equipment serve the same intelligent manufacturing task, and the next-step operation request is submitted by a user through the second user equipment;
a next-step request sending module, configured to send the next-step operation request to a corresponding network equipment;
a next-step guidance receiving module, configured to receive next-step operation guidance information corresponding to the operation guidance information, wherein the next-step operation guidance information comprises the next operation step, for a target object, of the operation guidance information, the operation guidance information is displayed on the target object in an overlapping manner according to second position information, the second position information comprises position information of the target object in a corresponding second coordinate system and is determined according to first position information of the target object in a first coordinate system corresponding to an operation area and a coordinate mapping relationship between the second coordinate system and the first coordinate system, and the coordinate mapping relationship is determined according to a first coordinate mapping relationship between the second coordinate system and a human eye coordinate system, a second coordinate mapping relationship between the human eye coordinate system and a camera coordinate system, and a third coordinate mapping relationship between the camera coordinate system and the first coordinate system, wherein the second coordinate system comprises a two-dimensional coordinate system established based on a screen of the first user equipment, and the human eye coordinate system comprises a three-dimensional coordinate system established based on human eyes and the screen;
and a next-step guidance sending module, configured to send the next-step operation guidance information corresponding to the operation guidance information to the first user equipment.
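The relay that claim 29's four modules describe could be sketched as a single round-trip, assuming a REST-style transport (the patent fixes no protocol); all URLs and JSON field names below are invented for illustration.

```python
import requests  # hypothetical REST transport; the patent fixes no protocol

# Illustrative sketch only of the claim-29 relay: the second user equipment
# submits the user's next-step request to the network equipment and forwards
# the returned guidance to the first user equipment.

NETWORK_EQUIPMENT_URL = "http://network-equipment.local/next-step"
FIRST_EQUIPMENT_URL = "http://first-equipment.local/guidance"

def relay_next_step(task_id, current_guidance_id):
    # next-step request sending module
    reply = requests.post(NETWORK_EQUIPMENT_URL,
                          json={"task": task_id, "guidance": current_guidance_id},
                          timeout=5)
    reply.raise_for_status()
    next_guidance = reply.json()      # next-step guidance receiving module
    # next-step guidance sending module
    requests.post(FIRST_EQUIPMENT_URL, json=next_guidance,
                  timeout=5).raise_for_status()
    return next_guidance
```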
30. An augmented reality-based intelligent manufacturing system, wherein the system comprises a first user equipment as claimed in any one of claims 21 to 27 and a network equipment as claimed in claim 28.
31. The system of claim 30, further comprising a second user equipment as claimed in claim 29.
32. An augmented reality-based intelligent manufacturing apparatus, wherein the apparatus comprises:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the operations of the method of any of claims 1 to 18.
33. A computer-readable medium comprising instructions that, when executed, cause a system to perform the operations of the method of any one of claims 1 to 18.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2018106678609 | 2018-06-26 | ||
CN201810667860 | 2018-06-26 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109032348A CN109032348A (en) | 2018-12-18 |
CN109032348B (en) | 2021-09-14
Family
ID=64641502
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810743981.7A Active CN109032348B (en) | 2018-06-26 | 2018-07-09 | Intelligent manufacturing method and equipment based on augmented reality |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109032348B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110196580B (en) * | 2019-05-29 | 2020-12-15 | 中国第一汽车股份有限公司 | Assembly guidance method, system, server and storage medium |
CN110728756B (en) * | 2019-09-30 | 2024-02-09 | 亮风台(上海)信息科技有限公司 | Remote guidance method and device based on augmented reality |
CN113191717A (en) * | 2020-01-14 | 2021-07-30 | 海尔数字科技(青岛)有限公司 | Data processing method, device, equipment and medium |
CN111462341A (en) * | 2020-04-07 | 2020-07-28 | 江南造船(集团)有限责任公司 | Augmented reality construction assisting method, device, terminal and medium |
CN111583419A (en) * | 2020-05-25 | 2020-08-25 | 重庆忽米网络科技有限公司 | 5G-based reality augmentation auxiliary assembly method and system |
CN112288882A (en) * | 2020-10-30 | 2021-01-29 | 北京市商汤科技开发有限公司 | Information display method and device, computer equipment and storage medium |
CN112365607A (en) * | 2020-11-06 | 2021-02-12 | 北京市商汤科技开发有限公司 | Augmented reality AR interaction method, device, equipment and storage medium |
CN112365574A (en) * | 2020-11-06 | 2021-02-12 | 北京市商汤科技开发有限公司 | Method, device, equipment and storage medium for displaying augmented reality AR information |
CN112734588A (en) * | 2021-01-05 | 2021-04-30 | 新代科技(苏州)有限公司 | Augmented reality processing auxiliary system and use method thereof |
CN114063512B (en) * | 2021-11-15 | 2023-09-19 | 中国联合网络通信集团有限公司 | Maintenance service guiding and monitoring method, cloud platform, AR glasses and system |
CN117528399A (en) * | 2022-07-29 | 2024-02-06 | 华为技术有限公司 | Method for installing intelligent device and electronic device |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4532856B2 (en) * | 2003-07-08 | 2010-08-25 | キヤノン株式会社 | Position and orientation measurement method and apparatus |
IL169934A (en) * | 2005-07-27 | 2013-02-28 | Rafael Advanced Defense Sys | Real-time geographic information system and method |
US20150170256A1 (en) * | 2008-06-05 | 2015-06-18 | Aisle411, Inc. | Systems and Methods for Presenting Information Associated With a Three-Dimensional Location on a Two-Dimensional Display |
KR101561913B1 (en) * | 2009-04-17 | 2015-10-20 | 엘지전자 주식회사 | Method for displaying image for mobile terminal and apparatus thereof |
JP4679661B1 (en) * | 2009-12-15 | 2011-04-27 | 株式会社東芝 | Information presenting apparatus, information presenting method, and program |
US8994558B2 (en) * | 2012-02-01 | 2015-03-31 | Electronics And Telecommunications Research Institute | Automotive augmented reality head-up display apparatus and method |
US9293118B2 (en) * | 2012-03-30 | 2016-03-22 | Sony Corporation | Client device |
US9699375B2 (en) * | 2013-04-05 | 2017-07-04 | Nokia Technology Oy | Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103797472A (en) * | 2011-07-12 | 2014-05-14 | 谷歌公司 | Systems and methods for accessing an interaction state between multiple devices |
CN102789514A (en) * | 2012-04-20 | 2012-11-21 | 青岛理工大学 | Induction method for 3D online induction system for mechanical equipment disassembly and assembly |
CN104820585A (en) * | 2014-01-30 | 2015-08-05 | 卡雷风险投资有限责任公司 | Apparatus and Method for Multi-User Editing of Computer-Generated Content |
CN104484523A (en) * | 2014-12-12 | 2015-04-01 | 西安交通大学 | Equipment and method for realizing augmented reality induced maintenance system |
CN106993181A (en) * | 2016-11-02 | 2017-07-28 | 大辅科技(北京)有限公司 | Many VR/AR equipment collaborations systems and Synergistic method |
CN106817568A (en) * | 2016-12-05 | 2017-06-09 | 网易(杭州)网络有限公司 | A kind of augmented reality display methods and device |
CN106814457A (en) * | 2017-01-20 | 2017-06-09 | 杭州青杉奇勋科技有限公司 | Augmented reality glasses and the method that household displaying is carried out using the glasses |
CN107168537A (en) * | 2017-05-19 | 2017-09-15 | 山东万腾电子科技有限公司 | A kind of wearable task instruction method and system of collaborative augmented reality |
CN107065810A (en) * | 2017-06-05 | 2017-08-18 | 深圳增强现实技术有限公司 | A kind of method and system of augmented reality industrial operation auxiliary |
CN207319184U (en) * | 2017-10-23 | 2018-05-04 | 北京华锐同创系统技术有限公司 | A kind of field device maintenance maintenance and current check system |
Also Published As
Publication number | Publication date |
---|---|
CN109032348A (en) | 2018-12-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109032348B (en) | Intelligent manufacturing method and equipment based on augmented reality | |
CN108304075B (en) | Method and device for performing man-machine interaction on augmented reality device | |
CN109887003B (en) | Method and equipment for carrying out three-dimensional tracking initialization | |
Ahn et al. | 2D drawing visualization framework for applying projection-based augmented reality in a panelized construction manufacturing facility: Proof of concept | |
CN109615664B (en) | Calibration method and device for optical perspective augmented reality display | |
CN113741698A (en) | Method and equipment for determining and presenting target mark information | |
CN113240769B (en) | Spatial link relation identification method and device and storage medium | |
CN109191554B (en) | Super-resolution image reconstruction method, device, terminal and storage medium | |
JP6160290B2 (en) | Information processing apparatus, determination method, and determination program | |
CN109584377B (en) | Method and device for presenting augmented reality content | |
CN109582147B (en) | Method for presenting enhanced interactive content and user equipment | |
CN111652946B (en) | Display calibration method and device, equipment and storage medium | |
US20180204387A1 (en) | Image generation device, image generation system, and image generation method | |
CN109656363B (en) | Method and equipment for setting enhanced interactive content | |
CN110728756B (en) | Remote guidance method and device based on augmented reality | |
CN112669392B (en) | Map positioning method and system applied to indoor video monitoring system | |
CN110751735A (en) | Remote guidance method and device based on augmented reality | |
CN112288826A (en) | Calibration method and device of binocular camera and terminal | |
CN113869231B (en) | Method and equipment for acquiring real-time image information of target object | |
CN116109684B (en) | Online video monitoring two-dimensional and three-dimensional data mapping method and device for variable electric field station | |
CA3120722C (en) | Method and apparatus for planning sample points for surveying and mapping, control terminal and storage medium | |
CN112950759B (en) | Three-dimensional house model construction method and device based on house panoramic image | |
WO2022160406A1 (en) | Implementation method and system for internet of things practical training system based on augmented reality technology | |
Askarian Bajestani et al. | Scalable and view-independent calibration of multi-projector display for arbitrary uneven surfaces | |
CN110675445B (en) | Visual positioning method, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CP02 | Change in the address of a patent holder | |
Address after: 201210 7th Floor, No. 1, Lane 5005, Shenjiang Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai
Patentee after: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.
Patentee after: CHINA ELECTRONICS STANDARDIZATION INSTITUTE
Address before: Room 501 / 503-505, 570 shengxia Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai, 201203
Patentee before: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.
Patentee before: CHINA ELECTRONICS STANDARDIZATION INSTITUTE