CN109584376B - Composition teaching method, device and equipment based on VR technology and storage medium - Google Patents
- Publication number
- Publication number: CN109584376B; Application number: CN201811468300.7A
- Authority
- CN
- China
- Prior art keywords
- composition
- user
- objects
- picture
- virtual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Educational Technology (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Business, Economics & Management (AREA)
- Educational Administration (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
- Electrically Operated Instructional Devices (AREA)
Abstract
The invention discloses a composition teaching method based on VR technology, applied to a wearable device, comprising the following steps: receiving a composition instruction input by a user, and displaying to the user, through the wearable device, a virtual environment containing a composition picture frame interface; receiving an instruction of the user selecting from pre-constructed composition creation elements, receiving an instruction of the user arranging the selected elements, and displaying the selected elements in the arrangement area corresponding to the composition picture frame interface; and generating a final composition picture. The present application combines VR technology with education: new technical means let students appreciate the appeal of design and art, and let them master and apply specific composition forms in their own creative work, improving the user's teaching experience and enriching the diversity of teaching. In addition, the application also provides a composition teaching device, equipment and a computer-readable storage medium based on VR technology, which achieve the same technical effects.
Description
Technical Field
The invention relates to the technical field of VR virtual reality, in particular to a composition teaching method, a composition teaching device, composition teaching equipment and a computer readable storage medium based on VR technology.
Background
Virtual Reality (VR) is a computer simulation technology for creating and experiencing a virtual world. A computer generates a simulated environment, a system simulation that fuses multi-source information with interactive, three-dimensional dynamic views and physical behaviors, and immerses the user in that environment.
Virtual reality technology is an important branch of simulation technology. It is a collection of technologies including simulation, computer graphics, human-machine interfaces, multimedia, sensing and networking, and is a challenging, cross-disciplinary frontier research field. VR technology mainly involves the simulated environment, perception, natural skills and sensing devices. The simulated environment is a real-time, dynamic, three-dimensional realistic image generated by a computer. Perception means that an ideal VR system should provide all the kinds of perception a person has: in addition to the visual perception generated by computer graphics technology, there are auditory, tactile, force and motion perception, and even smell and taste, collectively called multi-perception. Natural skills refer to head rotation, eye movement, gestures and other human body actions; the computer processes data matching the participant's actions, responds to the user's input in real time, and feeds the results back to the user's senses. Sensing devices are the three-dimensional interaction devices.
In recent years, VR roaming has been applied in some tourist attractions to recreate ancient culture, and in interactive games VR provides a new form of entertainment that gives players an immersive experience. However, the combination of VR technology with education is still at an experimental stage, and applying this new technology to education has a broad market. In view of this, it is highly desirable to provide a composition teaching method based on VR technology.
Disclosure of Invention
The invention aims to provide a composition teaching method, a composition teaching device, composition teaching equipment and a computer readable storage medium based on a VR (virtual reality) technology, so that the VR technology is combined with education, the application market of the VR technology is expanded, and the teaching experience of a user is improved.
In order to solve the technical problem, the invention provides a picture composition teaching method based on VR technology, which is applied to wearable equipment, and the method comprises the following steps:
receiving a composition instruction input by a user, and displaying a virtual environment containing a composition picture frame interface to the user through the wearable device;
receiving an instruction of a user for selecting from pre-constructed composition creation elements, receiving an instruction of the user for arranging the selected composition creation elements, and displaying the selected composition creation elements on an arrangement area corresponding to the composition frame interface;
a final composition picture is generated.
Optionally, after generating the final composition picture, the method further includes:
scoring the composition picture according to a preset scoring rule to obtain a composition score.
Optionally, scoring the composition picture according to a preset scoring rule to obtain a composition score includes:
dividing the composition picture frame interface into n cells by acquiring the coordinates of two diagonal corner points of the interface, and emitting n rays to perform ray detection;
acquiring, through ray detection, the proportion of the frame occupied by each object in the composition picture frame interface, and acquiring the maximum difference among the numbers of cells occupied by the objects; determining a guest-host score according to the difference, wherein the larger the difference, the higher the guest-host score;
acquiring the cell matrix returned by detection and traversing it in order; when an unvisited object cell is found, recording it into an array and spreading the search to the four neighbouring cells; if a further unvisited object cell is found, continuing the search; recording the number of adjacent object cells found into the array, acquiring the maximum difference in cluster size, and determining a density score according to that maximum difference, wherein the larger the maximum difference, the higher the density score;
acquiring the coordinates of each object through ray detection and comparing them to judge whether the distance between the object and the shooting device exceeds a specified distance; if it does, the object is classified as virtual, otherwise as real; determining a virtual-real score according to the ratio of the numbers of virtual and real objects, wherein the smaller the difference between that ratio and a preset first ratio, the higher the virtual-real score;
and acquiring the cell matrix returned by ray detection, counting and comparing the numbers of occupied and empty cells, and determining a blank score according to the ratio of occupied to empty cells, wherein the smaller the difference between that ratio and a preset second ratio, the higher the blank score.
Optionally, after the composition picture is scored according to the preset scoring rule and a composition score is obtained, the method further includes:
generating display information according to the composition scores of a plurality of composition pictures, and displaying the display information on a ranking list, wherein the display information includes any one or any combination of the following: the composition picture, the composition score, and role information.
Optionally, the method further comprises:
and receiving a preview instruction input by a user, and displaying a pre-stored scene virtual picture to the user through the wearable device.
Optionally, the method further comprises:
receiving an instruction input by a user for performing grabbing control on an object in a scene virtual picture, and performing movement and/or rotation control on the object in the scene virtual picture.
Optionally, the method further comprises:
and receiving an instruction input by a user for shooting a current scene virtual picture displayed by the display device, capturing the current scene virtual picture, and generating a shot image.
The present application further provides a composition teaching device based on VR technology, applied to a wearable device, the device comprising:
the display module is used for receiving a composition instruction input by a user and displaying a virtual environment containing a composition picture frame interface to the user through the wearable device;
the composition module is used for receiving an instruction of a user for selecting composition creation elements constructed in advance, receiving an instruction of the user for arranging the selected composition creation elements, and displaying the selected composition creation elements on an arrangement area corresponding to the composition picture frame interface;
and the generating module is used for generating a final composition picture.
The present application further provides composition teaching equipment based on VR technology, comprising:
a memory for storing a computer program;
a processor for implementing the steps of any one of the VR technology based composition teaching methods when executing the computer program.
The present application further provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the steps of any of the VR technology-based composition teaching methods described above.
The invention provides a composition teaching method based on VR technology, applied to a wearable device, comprising: receiving a composition instruction input by a user, and displaying to the user, through the wearable device, a virtual environment containing a composition picture frame interface; receiving an instruction of the user selecting from pre-constructed composition creation elements, receiving an instruction of the user arranging the selected elements, and displaying the selected elements in the arrangement area corresponding to the composition picture frame interface; and generating a final composition picture. The present application combines VR technology with education: new technical means let students appreciate the appeal of design and art, and let them master and apply specific composition forms in their own creative work, improving the user's teaching experience and enriching the diversity of teaching. In addition, the application also provides a composition teaching device, equipment and a computer-readable storage medium based on VR technology, which achieve the same technical effects.
Drawings
In order to more clearly illustrate the embodiments and technical solutions of the present invention, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of an embodiment of a composition teaching method based on VR technology according to the present invention;
FIG. 2 is a flowchart of another embodiment of a VR-based composition teaching method in accordance with the present invention;
fig. 3 is a schematic view of a cloud window preview roaming scenario in an embodiment of the present application;
FIG. 4 is a diagram illustrating a roaming scenario in an embodiment of the present application;
FIG. 5 is a schematic diagram of composition training in an embodiment of the present application;
FIG. 6 is an interface diagram of an interactive menu;
FIG. 7 is a schematic view of an appreciation of the work in an embodiment of the present application;
FIG. 8 is a functional diagram of taking a photograph in an embodiment of the present application;
FIG. 9 is a functional diagram illustrating a function of viewing a photograph in an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating scoring of a current composition according to an embodiment of the present application;
FIG. 11 is a diagram illustrating a composition creation element being grabbed in an embodiment of the present application;
FIG. 12 is a block diagram of a VR technology based composition teaching device according to an embodiment of the present invention;
fig. 13 is a block diagram of a composition teaching apparatus based on VR technology according to an embodiment of the present invention.
Detailed Description
VR roaming has been applied in some tourist attractions to recreate ancient culture, and in interactive games VR provides a new form of entertainment that gives players an immersive experience. However, the combination of VR technology with education is still at an experimental stage, and applying this new technology to education has a broad market. The core of the invention is to provide a composition teaching method, device, equipment and computer-readable storage medium based on VR technology, so that VR technology is combined with education, the application market of VR technology is expanded, and the user's teaching experience is improved.
In order that those skilled in the art will better understand the disclosure, reference will now be made in detail to the embodiments illustrated in the accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention and not restrictive of its full scope. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
A flowchart of a specific embodiment of the VR technology-based composition teaching method provided by the present invention is shown in fig. 1. The method is applied to a wearable device and specifically includes:
step S101: receiving a composition instruction input by a user, and displaying a virtual environment containing a composition picture frame interface to the user through the wearable device;
after the user wears the wearable device, man-machine interaction with the wearable device can be achieved. The wearable device presents a virtual environment containing a composition picture frame interface to a user. The wearable device may be embodied as a wearable helmet, but may also be other wearable devices, which is not limited herein.
Step S102: receiving an instruction of a user for selecting from pre-constructed composition creation elements, receiving an instruction of the user for arranging the selected composition creation elements, and displaying the selected composition creation elements on an arrangement area corresponding to the composition frame interface;
composition creation element can carry out corresponding setting in advance according to the demand of difference in this application, can be used for creating different theme works through setting up different composition creation elements. The user can directly select a desired one from a plurality of composition authoring elements. And arranging in the composition picture frame interface, and displaying at a corresponding position on the composition picture frame interface.
Step S103: a final composition picture is generated.
The invention provides a composition teaching method based on VR technology, applied to a wearable device, comprising: receiving a composition instruction input by a user, and displaying to the user, through the wearable device, a virtual environment containing a composition picture frame interface; receiving an instruction of the user selecting from pre-constructed composition creation elements, receiving an instruction of the user arranging the selected elements, and displaying the selected elements in the arrangement area corresponding to the composition picture frame interface; and generating a final composition picture. The present application combines VR technology with education: new technical means let students appreciate the appeal of design and art, and let them master and apply specific composition forms in their own creative work, improving the user's teaching experience and enriching the diversity of teaching.
The present application closely integrates VR technology with artistic thinking. Its greatest advantage is that it constructs a dialogue between the work and the participant, creating an immersive artistic environment and dreamlike settings that cannot be realized under real conditions. Participants can create within the project: the various ideas in their minds become visible virtual objects and environments, helping people understand and experience design composition and improving aesthetic perception.
As a specific implementation, the embodiment of the application can be applied to composition training in foundational design courses. For example, with the traditional Chinese painting style as the creative background, a user wearing the wearable device can experience virtual scenes such as the artistic conception of the water towns south of the Yangtze River, cultivating students' artistic literacy.
On the basis of the above embodiments, the VR technology-based composition teaching method provided by this application further includes, after generating the final composition picture: scoring the composition picture according to a preset scoring rule to obtain a composition score.
As a specific implementation manner, referring to fig. 2, the process of scoring the composition picture according to a preset scoring rule to obtain a composition score may specifically include:
step S201, dividing the composition picture frame interface into n parts by acquiring coordinates of two diagonal points of the composition picture frame interface, and emitting n rays for ray detection;
step S202: carrying out guest-host scoring;
acquiring, through ray detection, the proportion of the frame occupied by each object in the composition picture frame interface, and acquiring the maximum difference among the numbers of cells occupied by the objects. A large difference shows that one object occupies far more cells than the others and therefore reads as the main subject; the larger the difference, the stronger the host-guest contrast. The guest-host score is determined according to this difference: the larger the difference, the higher the score.
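The rule above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the function name is invented, the "maximum difference" is read here as the gap between the two largest per-object cell counts, and the normalisation by a total cell count is an assumption:

```python
# Hypothetical sketch of the guest-host rule: the dominant object
# should occupy clearly more cells than any other object.
def guest_host_score(cell_counts, total_cells=100):
    # cell_counts: number of grid cells occupied by each object
    if len(cell_counts) < 2:
        return 0.0
    ranked = sorted(cell_counts, reverse=True)
    diff = ranked[0] - ranked[1]        # dominance of the largest object
    return 100.0 * diff / total_cells   # larger gap -> higher score

print(guest_host_score([40, 10, 5]))  # 30.0
```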
step S203: carrying out density scoring;
acquiring the cell matrix returned by detection and traversing it in order, for example from left to right and top to bottom; when an unvisited object cell is found, recording it into an array and spreading the search to the four neighbouring cells; if a further unvisited object cell is found, continuing the search; recording the number of adjacent object cells found into the array; acquiring the maximum difference in cluster size and determining the density score from it, where the larger the maximum difference, the higher the density score;
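The traversal described above is essentially a 4-connected flood fill over the occupancy grid. A minimal sketch, assuming a boolean matrix as input (the function name and grid representation are assumptions for illustration):

```python
from collections import deque

# Hypothetical sketch of the density rule: flood-fill 4-connected
# clusters of occupied cells; a bigger gap between the largest and
# smallest cluster means stronger dense/sparse contrast.
def cluster_sizes(grid):
    rows, cols = len(grid), len(grid[0])
    seen, sizes = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and (r, c) not in seen:
                size, queue = 0, deque([(r, c)])
                seen.add((r, c))
                while queue:
                    cr, cc = queue.popleft()
                    size += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and grid[nr][nc] and (nr, nc) not in seen):
                            seen.add((nr, nc))
                            queue.append((nr, nc))
                sizes.append(size)
    return sizes

grid = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 0, 0],
        [1, 0, 0, 0]]
print(sorted(cluster_sizes(grid)))  # [1, 1, 2]
```

The density score would then be derived from `max(sizes) - min(sizes)`, larger meaning higher, as stated in the text.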
step S204: performing virtual-actual scoring;
acquiring the coordinates of each object through ray detection and comparing them to judge whether the distance between the object and the shooting device exceeds a specified distance; if it does, the object is classified as virtual, otherwise as real. The virtual-real score is determined according to the ratio of the numbers of virtual and real objects: the smaller the difference between that ratio and a preset first ratio, the higher the score. The preset first ratio may be 1.
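A sketch of this rule, under stated assumptions: Euclidean distance from the camera position, a linear penalty for deviating from the target ratio, and a score of zero when there are no "real" objects. None of these specifics appear in the source:

```python
import math

# Hypothetical sketch of the virtual-real rule: objects farther from
# the camera than a threshold count as "virtual", the rest as "real";
# the closer the virtual:real ratio is to the preset target (1 in the
# patent's example), the higher the score.
def virtual_real_score(positions, camera, threshold, target_ratio=1.0):
    virtual = sum(1 for p in positions if math.dist(p, camera) > threshold)
    real = len(positions) - virtual
    if real == 0:
        return 0.0
    ratio = virtual / real
    return max(0.0, 100.0 * (1.0 - abs(ratio - target_ratio) / target_ratio))

score = virtual_real_score(
    [(0, 0, 1), (0, 0, 9), (0, 0, 2), (0, 0, 8)],
    camera=(0, 0, 0), threshold=5.0)
print(score)  # 100.0
```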
Step S205: carrying out a blank scoring;
and acquiring the cell matrix returned by ray detection, counting and comparing the numbers of occupied and empty cells, and determining the blank score according to the ratio of occupied to empty cells: the smaller the difference between that ratio and a preset second ratio, the higher the blank score. The preset second ratio may be 3.
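A sketch of the blank (negative-space) rule. Note that the translated text is ambiguous about the direction of the ratio; this illustration assumes the target of 3 means empty:occupied, which matches the negative-space aesthetic of Chinese painting, and the linear scoring curve is likewise an assumption:

```python
# Hypothetical sketch of the blank rule: score by how close the
# empty:occupied cell ratio is to a preset target (3 here, i.e. about
# three quarters of the frame left blank).
def blank_score(occupied, empty, target=3.0):
    if occupied == 0:
        return 0.0
    ratio = empty / occupied
    return max(0.0, 100.0 * (1.0 - abs(ratio - target) / target))

print(blank_score(25, 75))  # 100.0
```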
Step S206: a composition score is obtained based on the above scores.
The scores of the individual items may be averaged to obtain the composition score. Alternatively, different weights may be assigned to the different scoring items to calculate the final composition score.
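Both combination options described above, the plain average and the weighted sum, can be sketched in one helper; the function name and the particular weights shown are illustrative assumptions:

```python
# Hypothetical sketch of step S206: combine the four sub-scores
# (guest-host, density, virtual-real, blank) into one composition score,
# as an unweighted mean by default or a weighted sum if weights given.
def composition_score(scores, weights=None):
    if weights is None:
        weights = [1.0 / len(scores)] * len(scores)
    return sum(s * w for s, w in zip(scores, weights))

print(composition_score([80, 60, 90, 70]))  # 75.0
print(round(composition_score([80, 60, 90, 70],
                              weights=[0.4, 0.2, 0.2, 0.2]), 2))
```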
Further, after the composition picture is scored according to the preset scoring rule and a composition score is obtained, the method further includes: generating display information according to the composition scores of the composition pictures, and displaying the display information on the ranking list, wherein the display information includes any one or any combination of the following: the composition picture, the composition score, and role information. The score ranking of each designed composition can thus be clearly recorded on the ranking list.
The method takes the composition principles of traditional Chinese painting as the scoring standard; works are freely created by triggering functions such as moving, zooming and rotating objects within the picture frame.
On the basis of any of the above embodiments, the VR technology-based composition teaching method provided by the present application may further include: and receiving a preview instruction input by a user, and displaying a pre-stored scene virtual picture to the user through the wearable device.
Specifically, a cloud window previewing the roaming scene can be provided to stimulate the interest of participants in entering the tour, as shown in fig. 3, a schematic diagram of the cloud-window preview of the roaming scene in the embodiment of the present application. Participants can enter a scene themed on design composition and roam and gather material freely within it, as shown in the schematic diagram of a roaming scene in fig. 4; users can personally experience the mood of traditional Chinese painting and draw more artistic inspiration and creative power from hands-on experience.
On the basis of any of the above embodiments, the VR technology-based composition teaching method provided by the present application may further include: receiving an instruction input by a user for performing grabbing control on an object in a scene virtual picture, and performing movement and/or rotation control on the object in the scene virtual picture.
Participants can grab objects and place them into the picture frame, and can rotate, zoom and move them to freely create artistic works. A scoring algorithm based on Chinese painting composition has been developed for the scene, and the score ranking of each designed work is recorded; see fig. 5 for a schematic diagram of composition training in the embodiment of the present application.
In addition, the composition teaching method based on VR technology provided by the present application may further include: and receiving an instruction input by a user for shooting the current scene virtual picture displayed by the display device, capturing the current scene virtual picture, and generating a shot image.
Through experiencing the artistic conception of traditional Chinese painting, and with the photographing function, participants can collect creative material at any time.
The following describes in detail a specific implementation process of the VR technology-based composition teaching method provided in the present application, with reference to the drawings. In this embodiment, the home-page scene includes a home interface, a role interface and a level interface. On the level interface, the user can choose to jump to the roaming interface or to the composition training interface.
The topmost menu button can display or hide all UI menus, and pressing the controller's trigger button confirms a click for all UI functions. The interactive menu includes a works appreciation function, a photographing function, a help function, and functions for switching between composition training, the tour level and the home page, as shown in the interface diagram of the interactive menu in fig. 6.
The works appreciation function has a secondary menu, appreciation of master works, as shown in fig. 7, an appreciation schematic diagram of works in the embodiment of the present application. The photographing function also has a secondary menu, taking and viewing photos, as shown in fig. 8 and 9. Finally, the help function has a secondary menu: instructions for using the controller.
The interactive design comprises the following steps:
(1) Teleportation: pressing the large button in the middle of the controller emits a ray; releasing it moves the user to the targeted location.
(2) Grab interaction: objects marked with a blue icon in the scene can be grabbed by pressing the side button.
The UI interface design comprises the following steps:
(1) Scoring function: the menu button at the top of the controller can display or hide the score of the current composition, as shown in fig. 10, a schematic diagram of scoring the current composition in the embodiment of the present application.
(2) Ranking list: pressing the ranking list button jumps to the role ranking interface.
(3) And returning to the home page: pressing the back button may return to the home interface.
During operation of the composition training scene, the interactive design includes:
(1) Grab interaction: grabbing is performed by pressing the button on the side of the controller, as shown in fig. 11, a schematic diagram of grabbing a composition creation element in the embodiment of the present application.
(2) Placing and moving: pressing the controller's trigger button selects and moves the object; the large button in the middle of the controller zooms the object up and down and rotates it left and right.
(3) Scoring function: pressing the menu button in front of the picture frame shows the score of the current composition.
Furthermore, operations such as controller clicks, teleportation and grabbing have corresponding sound-effect feedback, and grabbable objects such as birds, fish, grass and lotus have their own personalized sound effects.
In addition, voice explanations can be triggered in certain parts; for example, the works appreciation function can play an appreciation sound effect at the same time.
In the following, the VR technology-based composition teaching device provided by the embodiment of the present invention is described; the device described below and the method described above correspond to each other and may be cross-referenced.
Fig. 12 is a block diagram of a VR technology-based composition teaching device according to an embodiment of the present invention; referring to fig. 12, the device may include:
the display module 100 is configured to receive a composition instruction input by a user, and display a virtual environment including a composition picture frame interface to the user through the wearable device;
the composition module 200 is configured to receive an instruction of a user selecting from pre-constructed composition creation elements, receive an instruction of a user arranging the selected composition creation elements, and display the selected composition creation elements on an arrangement region corresponding to the composition frame interface;
a generating module 300 for generating a final composition picture.
The composition teaching apparatus of this embodiment implements the VR-technology-based composition teaching method described above, so specific implementations of the apparatus can be found in the foregoing method embodiments. For example, the display module 100, the composition module 200, and the generation module 300 implement steps S101, S102, and S103 of the method, respectively; their details therefore follow the descriptions of the corresponding embodiments and are not repeated here.
The present invention provides a composition teaching apparatus based on VR technology, applied to a wearable device, which performs the following steps: receiving a composition instruction input by a user, and displaying a virtual environment containing a composition frame interface to the user through the wearable device; receiving an instruction of the user selecting from pre-constructed composition creation elements, receiving an instruction of the user arranging the selected elements, and displaying the selected elements in the arrangement region of the composition frame interface; and generating a final composition picture. This application combines VR technology with education, using new technical means to let students experience the appeal of design and art and to master and apply specific composition forms in their own creative work, thereby improving the user's teaching experience and enriching the diversity of teaching.
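As a rough illustration of the three-step pipeline above (display, compose, generate), the division of responsibility among the three modules could be sketched as follows. All class and method names here are hypothetical — the patent names only the modules and their roles, not any API:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class CompositionFrame:
    """The arrangement region of the composition frame interface."""
    placed_elements: List[str] = field(default_factory=list)


class DisplayModule:
    # Step S101: on a composition instruction, present the virtual
    # environment containing the composition frame interface.
    def handle_instruction(self, instruction: str) -> CompositionFrame:
        assert instruction == "compose"
        return CompositionFrame()


class CompositionModule:
    # Step S102: place elements the user selects from the
    # pre-constructed composition creation elements.
    def place(self, frame: CompositionFrame, element: str) -> None:
        frame.placed_elements.append(element)


class GeneratingModule:
    # Step S103: generate the final composition picture from the
    # arranged elements.
    def generate(self, frame: CompositionFrame) -> str:
        return "picture(" + ", ".join(frame.placed_elements) + ")"
```

A usage pass would hand the frame from module to module: the display module creates it on the user's instruction, the composition module fills it, and the generation module renders the final picture.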
In addition, the present application further provides a composition teaching device based on VR technology. As shown in Fig. 13, the device specifically includes:
a memory 11 for storing a computer program;
a processor 12 for implementing the steps of any of the VR-technology-based composition teaching methods described above when executing the computer program.
Furthermore, the present application also provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of any of the VR technology-based composition teaching methods described above.
To sum up, this application combines VR technology with education, using new technical means to let students experience the appeal of design and art and to master and apply specific composition forms in their own creative work, thereby improving the user's teaching experience and enriching the diversity of teaching.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed in the embodiment corresponds to the method disclosed in the embodiment, so that the description is simple, and the relevant points can be referred to the description of the method part.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The VR technology-based composition teaching method, apparatus, device, and computer-readable storage medium provided by the present invention are described in detail above. The principles and embodiments of the present invention are explained herein using specific examples, which are presented only to assist in understanding the method and its core concepts. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.
Claims (9)
1. A composition teaching method based on VR technology, applied to a wearable device, characterized in that the method comprises the following steps:
receiving a composition instruction input by a user, and displaying a virtual environment containing a composition picture frame interface to the user through the wearable device;
receiving an instruction of selecting the composition creation elements from the pre-constructed composition creation elements by a user, receiving an instruction of arranging the selected composition creation elements by the user, and displaying the selected composition creation elements on the arrangement area corresponding to the composition frame interface;
generating a final composition picture;
wherein scoring the composition picture according to a preset scoring rule to obtain a composition score comprises:
dividing the composition frame interface into n cells by acquiring the coordinates of two diagonal corner points of the composition frame interface, and emitting n rays for ray detection;
obtaining, through the ray detection, the proportion of the frame occupied by each object in the composition frame interface, and obtaining the maximum difference between the numbers of cells occupied by the objects; and determining a guest-host score value according to the difference, wherein the larger the difference, the higher the guest-host score value;
acquiring the cell matrix returned by the detection and retrieving the cells in order; when an object that has not yet been retrieved is found, recording it into an array and spreading to the four cells around it; if a further unretrieved object is found, continuing the retrieval process and recording the number of adjacent objects retrieved into the array; acquiring the maximum difference between these counts, and determining a density score value according to that maximum difference, wherein the larger the maximum difference, the higher the density score value;
acquiring the coordinates of each object through ray detection, and comparing the object coordinates to determine whether the distance between the object and the shooting device exceeds a specified distance; if it does, the object is determined to be virtual, and otherwise real; and determining a virtual-real score value according to the ratio of virtual to real objects, wherein the smaller the difference between that ratio and a preset first ratio, the higher the virtual-real score value;
and acquiring the cell matrix returned by the ray detection, counting and comparing the number of cells containing objects and the number of empty cells, and determining a blank score value according to the ratio of occupied to empty cells, wherein the smaller the difference between that ratio and a preset second ratio, the higher the blank score value.
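The grid division and the four scoring rules recited above can be illustrated with a short sketch. The parameter defaults, the 100-point scale, and the exact score formulas below are assumptions of ours — the claims only fix the monotonic relationships (larger spread or smaller deviation from a preset ratio gives a higher score) — and the density rule is interpreted here as flood-filling connected occupied cells:

```python
def score_composition(grid, object_depths, first_ratio=0.5,
                      second_ratio=0.4, depth_threshold=5.0):
    """Score a composition laid out on a grid of cells.

    grid          -- list of rows; 0 = empty cell, k > 0 = id of the object
                     hit by the ray cast through that cell
    object_depths -- dict: object id -> distance from the shooting device
    """
    rows, cols = len(grid), len(grid[0])
    ids = sorted({cell for row in grid for cell in row if cell})

    # Guest-host score: rises with the gap between the largest and
    # smallest per-object cell counts.
    counts = [sum(row.count(i) for row in grid) for i in ids]
    guest_host = max(counts) - min(counts) if counts else 0

    # Density score: 4-neighbour flood fill over occupied cells; rises
    # with the gap between the largest and smallest cluster sizes.
    seen = [[False] * cols for _ in range(rows)]
    clusters = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                size, stack = 0, [(r, c)]
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for ny, nx in ((y + 1, x), (y - 1, x),
                                   (y, x + 1), (y, x - 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                clusters.append(size)
    density = max(clusters) - min(clusters) if clusters else 0

    # Virtual-real score: objects farther than the threshold count as
    # virtual; the closer the virtual:real ratio is to the preset first
    # ratio, the higher the score.
    virtual = sum(1 for i in ids
                  if object_depths.get(i, 0.0) > depth_threshold)
    real = len(ids) - virtual
    vr_ratio = virtual / real if real else float(virtual)
    virtual_real = max(0.0, 100.0 - abs(vr_ratio - first_ratio) * 100.0)

    # Blank score: the closer the occupied:empty cell ratio is to the
    # preset second ratio, the higher the score.
    occupied = sum(1 for row in grid for cell in row if cell)
    empty = rows * cols - occupied
    blank_ratio = occupied / empty if empty else float(occupied)
    blank = max(0.0, 100.0 - abs(blank_ratio - second_ratio) * 100.0)

    return {"guest_host": guest_host, "density": density,
            "virtual_real": virtual_real, "blank": blank}
```

In a real implementation the `grid` would come from the n rays cast across the rectangle spanned by the two diagonal corner points, with each ray reporting which object (if any) it hits and at what depth.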
2. The VR-technology-based composition teaching method of claim 1, further comprising, after generating the final composition picture:
and scoring the composition picture according to a preset scoring rule to obtain a composition score.
3. The VR technology-based composition teaching method of claim 2, wherein after scoring the composition picture according to a preset scoring rule to obtain a composition score, the VR technology-based composition teaching method further comprises:
generating display information according to the composition scores of a plurality of composition pictures, and displaying the display information on a ranking list, wherein the display information comprises any one or any combination of the following information: composition picture, composition score, role information.
4. The VR technology based composition teaching method of claim 3, further comprising:
and receiving a preview instruction input by a user, and displaying a pre-stored scene virtual picture to the user through the wearable device.
5. The VR technology based composition teaching method of claim 4, further comprising:
receiving an instruction input by a user for performing grabbing control on an object in a scene virtual picture, and performing movement and/or rotation control on the object in the scene virtual picture.
6. The VR technology-based composition teaching method of claim 5, further comprising:
and receiving an instruction input by a user for shooting the current scene virtual picture displayed by the display device, capturing the current scene virtual picture, and generating a shot image.
7. A composition teaching apparatus based on VR technology, applied to a wearable device, characterized in that the apparatus comprises:
the display module is used for receiving a composition instruction input by a user and displaying a virtual environment containing a composition picture frame interface to the user through the wearable device;
the composition module is used for receiving an instruction of selecting the composition creation elements from the composition creation elements constructed in advance by a user, receiving an instruction of arranging the selected composition creation elements by the user, and displaying the selected composition creation elements on the arrangement area corresponding to the composition frame interface;
the generating module is used for generating a final composition picture;
the apparatus is further configured to: divide the composition frame interface into n cells by acquiring the coordinates of two diagonal corner points of the composition frame interface, and emit n rays for ray detection; obtain, through the ray detection, the proportion of the frame occupied by each object in the composition frame interface, obtain the maximum difference between the numbers of cells occupied by the objects, and determine a guest-host score value according to the difference, wherein the larger the difference, the higher the guest-host score value; acquire the cell matrix returned by the detection and retrieve the cells in order, record any object that has not yet been retrieved into an array and spread to the four cells around it, continue the retrieval process if a further unretrieved object is found, record the number of adjacent objects retrieved into the array, acquire the maximum difference between these counts, and determine a density score value according to that maximum difference, wherein the larger the maximum difference, the higher the density score value; acquire the coordinates of each object through ray detection, compare the object coordinates to determine whether the distance between the object and the shooting device exceeds a specified distance, determine the object to be virtual if it does and real otherwise, and determine a virtual-real score value according to the ratio of virtual to real objects, wherein the smaller the difference between that ratio and a preset first ratio, the higher the virtual-real score value; and acquire the cell matrix returned by the ray detection, count and compare the number of cells containing objects and the number of empty cells, and determine a blank score value according to the ratio of occupied to empty cells, wherein the smaller the difference between that ratio and a preset second ratio, the higher the blank score value.
8. A composition teaching device based on VR technology, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the VR technology based composition teaching method of any of claims 1 to 6 when the computer program is executed.
9. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, carries out the steps of the VR technology based composition teaching method as claimed in any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811468300.7A CN109584376B (en) | 2018-12-03 | 2018-12-03 | Composition teaching method, device and equipment based on VR technology and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109584376A CN109584376A (en) | 2019-04-05 |
CN109584376B true CN109584376B (en) | 2023-04-07 |
Family
ID=65927000
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811468300.7A Active CN109584376B (en) | 2018-12-03 | 2018-12-03 | Composition teaching method, device and equipment based on VR technology and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109584376B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111796846B (en) * | 2020-07-06 | 2023-12-12 | 广州一起精彩艺术教育科技有限公司 | Information updating method, device, terminal equipment and readable storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108076384A (en) * | 2018-01-02 | 2018-05-25 | 京东方科技集团股份有限公司 | A kind of image processing method based on virtual reality, device, equipment and medium |
WO2018194306A1 (en) * | 2017-04-20 | 2018-10-25 | Samsung Electronics Co., Ltd. | System and method for two dimensional application usage in three dimensional virtual reality environment |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101281654A (en) * | 2008-05-20 | 2008-10-08 | 上海大学 | Method for processing cosmically complex three-dimensional scene based on eight-fork tree |
US20190347865A1 (en) * | 2014-09-18 | 2019-11-14 | Google Inc. | Three-dimensional drawing inside virtual reality environment |
US20160092405A1 (en) * | 2014-09-30 | 2016-03-31 | Microsoft Technology Licensing, Llc | Intent Based Authoring |
US10560678B2 (en) * | 2016-11-09 | 2020-02-11 | Mediatek Inc. | Method and apparatus having video encoding function with syntax element signaling of rotation information of content-oriented rotation applied to 360-degree image content or 360-degree video content represented in projection format and associated method and apparatus having video decoding function |
EP3358462A1 (en) * | 2017-02-06 | 2018-08-08 | Tata Consultancy Services Limited | Context based adaptive virtual reality (vr) assistant in vr environments |
2018
- 2018-12-03: CN application CN201811468300.7A filed (granted as CN109584376B (en), status Active)
Non-Patent Citations (1)
Title |
---|
魏海涛; 鲁汉榕; 吴彩华; 郑国杰; 冯亚军. Improving the teaching of the computer graphics course with a scientific-thinking-oriented teaching method. Computer Education (计算机教育). 2016, (08), full text. *
Also Published As
Publication number | Publication date |
---|---|
CN109584376A (en) | 2019-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Argyriou et al. | Engaging immersive video consumers: Challenges regarding 360-degree gamified video applications | |
Mortara et al. | 3D Virtual environments as effective learning contexts for cultural heritage | |
Tsampounaris et al. | Exploring visualizations in real-time motion capture for dance education | |
Bishko | Animation principles and Laban movement analysis: movement frameworks for creating empathic character performances | |
Potts | The DC comics guide to creating comics: Inside the art of visual storytelling | |
CN109584376B (en) | Composition teaching method, device and equipment based on VR technology and storage medium | |
Gomez | Immersive virtual reality for learning experiences | |
Kritikos et al. | Interactive Historical Documentary in Virtual Reality | |
Borba | Towards a full body narrative: a communicational approach to techno-interactions in virtual reality | |
Hossain et al. | UEmbed: An Authoring Tool to Make Game Development Accessible for Users Without Knowledge of Coding | |
EP1395897A2 (en) | System for presenting interactive content | |
Huseinovic et al. | Interactive animated storytelling in presenting intangible cultural heritage | |
Flynn | Imaging Gameplay–The Design and Construction of Spatial Worlds | |
Vogiatzakis | Edutainment: Development of a video with an indirect goal of education. | |
Fröhlich | Natural and playful interaction for 3d digital content creation | |
Escudeiro et al. | Virtualsign translator as a base for a serious game | |
Geigel et al. | Virtual theatre: a collaborative curriculum for artists and technologists | |
Lam | Exploring virtual reality painting technology and its potential for artistic purposes | |
Lin et al. | Out of theater: Interactive Mixed-reality Performance for Intangible Culture Heritage Glove Puppetry | |
Maughan | The Return of Flânerie: Walter Benjamin and the Experience of Videogames | |
Yuly et al. | Application of the Seegmiller-Tillman Combination Method for Character Design of Capability Test Games | |
Moon et al. | Designing AR game enhancing interactivity between virtual objects and hand for overcoming space limit | |
Goates | Out of Bounds: A Visual Exploration of the Glitch | |
Naas | How to Cheat in Maya 2017: Tools and Techniques for Character Animation | |
Edmonds | Case Studies and Lessons |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||