CN117135530A - Method, device, equipment and storage medium for acquiring hearing space perception information - Google Patents
- Publication number
- CN117135530A CN117135530A CN202311395669.0A CN202311395669A CN117135530A CN 117135530 A CN117135530 A CN 117135530A CN 202311395669 A CN202311395669 A CN 202311395669A CN 117135530 A CN117135530 A CN 117135530A
- Authority
- CN
- China
- Prior art keywords
- space
- virtual
- real
- sound
- objective
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/20—Arrangements for obtaining desired frequency or directional characteristics
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T90/00—Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation
Abstract
The application relates to a method, a device, equipment and a storage medium for acquiring hearing space perception information, in the technical field of sound-system tuning. The method comprises the following steps: constructing a subjective-objective association model, which associates subjective hearing space perception information with objective sound system parameters, wherein the hearing space perception information describes space-related subjective hearing experience, and the objective sound system parameters describe parameters of a sound system that support quantitative definition; acquiring current objective parameters of a target sound system; and inputting the objective parameters of the target sound system into the subjective-objective association model to obtain target hearing space perception information, which indicates the current subjective evaluation result for hearing space perception. The technical scheme provided by the application improves the scientific rigor and repeatability of subjective listening tuning.
Description
Technical Field
The application relates to the technical field of sound-system tuning, and in particular to a method, a device, equipment and a storage medium for acquiring hearing space perception information.
Background
Tuning a sound system generally involves two stages: objective parameter testing and subjective listening testing. Correspondingly, sound evaluation also requires both objective parameter evaluation and subjective listening evaluation. Subjective listening includes obtaining hearing space perception information, that is, information describing the subjective spatial impression of sound.
Subjective listening must be performed by a professional listener and is constrained by factors such as the listener's skill level, hearing condition, memory, and judgment; in terms of time consumed, it accounts for the larger share of sound system tuning and evaluation.
Existing schemes therefore cannot complete the subjective evaluation of a sound system's hearing space perception quickly and scientifically.
Disclosure of Invention
The application provides a method, a device, equipment and a storage medium for acquiring hearing space perception information.
In one aspect, a method for obtaining hearing space perception information is provided, where the method includes:
constructing a subjective-objective association model, wherein the subjective-objective association model associates subjective hearing space perception information with objective sound system parameters, the hearing space perception information describes space-related subjective hearing experience, and the objective sound system parameters describe parameters of a sound system that support quantitative definition;
acquiring current objective parameters of a target sound system;
and inputting the objective parameters of the target sound system into the subjective-objective association model to obtain target hearing space perception information, wherein the target hearing space perception information indicates the current subjective evaluation result for hearing space perception.
In yet another aspect, an apparatus for acquiring auditory space perception information is provided, the apparatus comprising:
an association model construction module, configured to construct a subjective-objective association model, wherein the subjective-objective association model associates subjective hearing space perception information with objective sound system parameters, the hearing space perception information describes space-related subjective hearing experience, and the objective sound system parameters describe parameters of a sound system that support quantitative definition;
an objective parameter acquisition module, configured to acquire current objective parameters of a target sound system;
and a subjective result acquisition module, configured to input the objective parameters of the target sound system into the subjective-objective association model to obtain target hearing space perception information, wherein the target hearing space perception information indicates the current subjective evaluation result for hearing space perception.
In yet another aspect, a computer device is provided, where the computer device includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, at least one program, a code set, or an instruction set is loaded and executed by the processor to implement the method for obtaining hearing space perception information described above.
In yet another aspect, a computer readable storage medium is provided, where at least one instruction, at least one program, a code set, or an instruction set is stored, where the at least one instruction, at least one program, code set, or instruction set is loaded and executed by a processor to implement the method for obtaining hearing space perception information described above.
In yet another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the above-mentioned method for acquiring the hearing space perception information.
The technical scheme provided by the application can comprise the following beneficial effects:
by establishing a subjective-objective association model, subjective hearing space perception information, which is otherwise difficult to calibrate, is associated with the corresponding objective sound system parameters; inputting the objective parameters of a target sound system into the model then outputs the corresponding target hearing space perception information. Obtaining hearing space perception information in this way accelerates the subjective evaluation of a sound system, reduces labor consumption, and improves the scientific rigor and repeatability of subjective listening tuning.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show some embodiments of the present application; other drawings can be derived from them by a person skilled in the art without inventive effort.
Fig. 1 is a method flow diagram illustrating a method of acquiring auditory spatial perception information according to an exemplary embodiment.
Fig. 2 is a method flow diagram illustrating a method of acquiring auditory spatial perception information according to an exemplary embodiment.
FIG. 3 is a schematic diagram illustrating different sound image locations, according to an example embodiment.
Fig. 4 is a schematic diagram of a tilted virtual sound source space region, shown according to an example embodiment.
Fig. 5 is a schematic diagram of a perceived sound field, shown according to an exemplary embodiment.
Fig. 6 is a schematic diagram of a perceived sound field, shown according to an example embodiment.
Fig. 7 is a schematic diagram of a perceived surround area shown in accordance with an exemplary embodiment.
Fig. 8 is a block diagram showing a structure of an apparatus for acquiring auditory spatial perception information according to an exemplary embodiment.
Fig. 9 is a schematic diagram of a computer device provided in accordance with an exemplary embodiment.
Detailed Description
The embodiments of the present application are described below clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the application are shown. All other embodiments obtained by those skilled in the art, based on the embodiments of the application and without inventive effort, fall within the scope of the application.
It should be understood that an "indication" mentioned in the embodiments of the present application may be direct, indirect, or express an association. For example, "A indicates B" may mean that A indicates B directly (B can be obtained from A); that A indicates B indirectly (A indicates C, and B can be obtained from C); or that there is an association between A and B.
In the description of the embodiments of the present application, the term "corresponding" may indicate a direct or indirect correspondence between two items, an association between them, or a relationship such as indicating and being indicated, or configuring and being configured.
In the embodiments of the present application, "predefining" may be implemented by pre-storing corresponding codes, tables, or other information in devices (including, for example, terminal devices and network devices); the present application does not limit the specific implementation.
Tuning an existing sound system requires both objective parameter testing and subjective listening tuning, and sound evaluation likewise requires both objective and subjective evaluation. However, existing schemes cannot complete the subjective tuning or subjective evaluation of a sound system quickly and scientifically.
Subjective listening tests account for the larger share of the time spent on sound system tuning and evaluation, and adjusting the sound effect requires a great deal of manpower and material resources for subjective listening tuning.
Moreover, because the domestic audio industry started late, professionals in acoustics and electronics are scarce, and only some enterprises have personnel with solid professional knowledge and well-trained hearing who focus on subjective sound tuning. Even with professionals, subjective listening tuning typically involves adjusting and testing hundreds to thousands of parameters. Because human hearing is unstable, listening fatigue sets in during long tuning sessions; even a trained professional may report noticeably different impressions of two listening sessions under identical audio parameters, or forget an earlier impression before it is recorded, which breaks the link between subjective and objective data. Furthermore, because tuning and data-logging schemes produce vague and complex records, analysis is very difficult, and much of the earlier testing and adjustment work is easily invalidated, requiring a great deal of repeated effort.
To address these problems, the embodiments of the present application provide a method for establishing a model that correlates subjective listening experience with the objective parameters of a vehicle-mounted sound system, so that subjective listening work is supported and guided by objective data and becomes more scientific and efficient.
The technical scheme provided by the application is further described below.
Fig. 1 is a method flow diagram illustrating a method of acquiring auditory spatial perception information according to an exemplary embodiment. The method is applied to the computer equipment. As shown in fig. 1, the method for obtaining the listening space perception information may include the following steps:
step 110: and constructing a subjective and objective association model.
The subjective-objective association model associates subjective hearing space perception information with objective sound system parameters, wherein the hearing space perception information describes space-related subjective hearing experience, and the objective sound system parameters describe parameters of a sound system that support quantitative definition.
In the embodiment of the application, a subjective-objective association model is constructed that associates subjective hearing space perception information with objective sound system parameters.
In one possible implementation, the listening space perception information comprises at least one of: perceived sound image location, perceived sound image width, perceived sound image symmetry, perceived sound image drift, perceived sound field width, perceived sound field symmetry, distance perception, and surround perception.
In one possible implementation, the sound system objective parameters include at least one of:
(1) The objective parameters of the audio hardware system are used for describing objective parameters related to hardware settings in the audio system.
Exemplary audio hardware system objective parameters include: speaker parameters, power amplifier parameters, door-panel coupling-cavity parameters, usage of sound-absorbing and sound-insulating materials, speaker placement information, acoustic treatment, and the like.
(2) Objective parameters of the audio software system are used for describing objective parameters related to software settings in the audio system.
Exemplary audio software system objective parameters include: the power allocation of each channel, the crossover (frequency-division) configuration, filter types and parameters, delay settings, various sound-effect algorithms, and the like.
(3) Test data, describing objective parameters related to listening tests performed with test audio files.
Exemplary test data include: the synthesized signal frequency response acquired by the microphone, distortion, binaural (interaural) time differences, binaural level differences, acquired audio time-domain signals, the distribution of occupants within the sound application space, the listener's acquisition position, the test audio files used, and the like.
It will be appreciated that before subjective listening begins, the listener's acquisition position, including the binaural position and head position, needs to be confirmed: if it changes too much between listening sessions, it can affect subjective listening judgments, and sometimes the acquisition position must be deliberately changed to determine whether the sound system has problems. It is therefore necessary to determine and record the listener's acquisition position within the entire sound system application space; this position is itself one item of test data.
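Two of the test-data items listed above, the binaural (interaural) time difference and level difference, can be estimated directly from a two-channel recording at the listener's ear positions. The sketch below is a minimal illustration, not part of the application: it recovers the ITD from the cross-correlation peak and the ILD from the RMS ratio, using a synthetic signal in which the right channel is a delayed, attenuated copy of the left.

```python
import numpy as np

def interaural_differences(left, right, sample_rate):
    """Estimate the interaural time difference (ITD, seconds; positive
    means the right channel lags the left) and the interaural level
    difference (ILD, dB) from two microphone channels."""
    # ITD: lag at the peak of the full cross-correlation.
    corr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(corr)) - (len(left) - 1)
    itd = lag / sample_rate
    # ILD: ratio of RMS levels, expressed in decibels.
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    ild = 20.0 * np.log10(rms(left) / rms(right))
    return itd, ild

# Synthetic check: the right channel is the left channel delayed by
# 20 samples and attenuated by half (a source toward the listener's left).
sr = 48_000
rng = np.random.default_rng(0)
left = rng.standard_normal(sr // 10)
right = 0.5 * np.concatenate([np.zeros(20), left[:-20]])
itd, ild = interaural_differences(left, right, sr)
```

With this convention, the estimate is 20/48000 seconds of right-ear lag and roughly 6 dB of level difference; a real deployment would operate on calibrated microphone captures rather than synthetic noise.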
In one possible implementation, the sound system application space in which the sound system is located includes: a vehicle; or, home theatres. The sound system application space is used for defining the space type where the sound system is located.
It will be appreciated that the specific audio hardware parameters, audio software parameters, and test data collected may differ for different types of sound system application space. For example, in the application space of a home theatre, the hardware parameters mainly concern speaker placement and acoustic treatment; in the application space of a vehicle, they mainly concern speaker and power amplifier parameters.
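The three groups of objective parameters described above can be pictured as one nested record per application space. The sketch below is purely illustrative; every field name is an assumption, since the application defines the parameter groups only by example.

```python
from dataclasses import dataclass, field

@dataclass
class HardwareParams:
    speaker_params: dict = field(default_factory=dict)     # e.g. driver size, sensitivity
    amplifier_params: dict = field(default_factory=dict)
    speaker_positions: list = field(default_factory=list)  # (x, y, z) per speaker
    acoustic_treatment: str = ""

@dataclass
class SoftwareParams:
    channel_power: dict = field(default_factory=dict)      # channel -> watts
    crossover: dict = field(default_factory=dict)          # channel -> cutoff Hz
    delays_ms: dict = field(default_factory=dict)
    effect_algorithms: list = field(default_factory=list)

@dataclass
class TestData:
    frequency_response: list = field(default_factory=list)
    itd_s: float = 0.0                                     # interaural time difference
    ild_db: float = 0.0                                    # interaural level difference
    listener_position: tuple = (0.0, 0.0, 0.0)

@dataclass
class SoundSystemObjectiveParams:
    application_space: str                                 # "vehicle" or "home theatre"
    hardware: HardwareParams
    software: SoftwareParams
    test_data: TestData

params = SoundSystemObjectiveParams(
    application_space="vehicle",
    hardware=HardwareParams(speaker_positions=[(0.4, 1.1, 0.8)]),
    software=SoftwareParams(delays_ms={"front_left": 2.5}),
    test_data=TestData(listener_position=(0.5, 1.2, 0.9)),
)
```

A record like this makes the later training step concrete: each tuning round produces one such parameter record paired with the perception information the listener reports.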
Step 120: and obtaining the objective parameters of the current target sound system.
In the embodiment of the application, the objective sound system parameters collected at the current moment, that is, when subjective evaluation is to be performed, are recorded as the objective parameters of the target sound system.
Step 130: and inputting objective parameters of the target sound system into the subjective and objective association model to obtain the target hearing space perception information.
The target hearing space perception information indicates the current subjective evaluation result for hearing space perception.
In the embodiment of the application, the subjective-objective association model links subjective hearing space perception information with objective sound system parameters; therefore, inputting the objective parameters of the target sound system into the model yields the corresponding target hearing space perception information.
In summary, the method for acquiring hearing space perception information provided by this embodiment associates subjective hearing space perception information, which is difficult to calibrate, with the corresponding objective sound system parameters by establishing a subjective-objective association model; inputting the objective parameters of a target sound system into the model outputs the corresponding target hearing space perception information.
In an exemplary embodiment, mixed reality technology is used to align the real-world space of the sound system application space with a space model of it in the virtual world; information acquired in the real world is then converted into the space model, and the hearing space perception information is determined from the converted information.
Fig. 2 is a method flow diagram illustrating a method of acquiring auditory spatial perception information according to an exemplary embodiment. The method is applied to the computer equipment. As shown in fig. 2, the method for obtaining the listening space perception information may include the following steps:
step 210: under the condition that the objective parameters of the sound system are preset, corresponding listening space perception information is acquired by using mixed reality equipment, and a group of training data is obtained, wherein the training data comprises: preset objective parameters of the sound system and corresponding hearing space perception information.
In the embodiment of the application, the acquisition equipment is used for acquiring the objective parameters of the sound system to obtain the preset objective parameters of the sound system, and the mixed reality equipment is used for acquiring corresponding hearing space perception information, so that the preset objective parameters of the sound system and the corresponding hearing space perception information form a group of training data.
The test data in the objective parameters of the sound system may be collected by placing the collection sensor or by directly using a mixed reality device with integrated collection sensor, for example. Such as: in collecting test data in a vehicle, 2 microphones (test microphones) may be placed at least at the ear positions of each occupant; since the deployed microphones may interfere with the listener's hearing, if the mixed reality device has available measurement microphones meeting the frequency and accuracy ranges to test the audio signal at the human ear, the microphones on the mixed reality device are used preferentially for data acquisition.
It will be appreciated that the mixed reality device must provide a clear field of view, allowing the surrounding environment to be seen accurately. A device in the form of everyday glasses is preferable, so that the listener's listening judgment is affected as little as possible and no significant reflection or shielding is introduced.
It will be appreciated that if the acquisition equipment for the objective sound system parameters strongly affects subjective listening, the objective test may be performed first, and the subjective listening then performed under the same software and hardware parameters.
It will be appreciated that at least one objective parameter of the sound system, such as a speaker parameter or a power amplifier parameter, is then adjusted to obtain a new preset, and step 210 is repeated until the subjective listening test ends, thereby obtaining multiple sets of training data.
In one possible implementation, step 210 includes:
(1) In the mixed reality device, a space model of the sound system application space is spatially calibrated with a real space of the sound system application space.
In this implementation, a space model of the sound system application space is built first, and the space model is then spatially calibrated against the real space, so that virtual space points in the model correspond one-to-one with real space points.
The space model is mainly built in one of two ways: the first is to build it manually with computer graphics; for example, when modeling a vehicle, the CAD model of the whole vehicle can be used directly, with part of the external space added. The second is to scan the sound system application space with the mixed reality device, combined to some extent with manual adjustment.
The main purpose of spatial calibration is to establish the relationship between the space model and the real space, so that space points perceived in the mixed reality device correspond one-to-one, without deviation, to space points in the real space.
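The spatial calibration described here amounts to estimating a rigid transform between the space model's coordinate frame and the real-world frame from a set of matched anchor points. The application does not prescribe an algorithm; the sketch below uses the standard Kabsch (orthogonal Procrustes) solution as one way such a calibration could be computed.

```python
import numpy as np

def rigid_transform(model_pts, real_pts):
    """Least-squares rotation R and translation t mapping model-space
    points onto their matched real-space counterparts (Kabsch method)."""
    P = np.asarray(model_pts, float)
    Q = np.asarray(real_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Calibration check: real anchor points are model anchor points rotated
# 90 degrees about z and shifted; the recovered transform maps them back.
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.0]])
real = model @ Rz.T + np.array([2.0, 0.5, 1.0])
R, t = rigid_transform(model, real)
mapped = model @ R.T + t
```

In practice the anchor points would come from features visible both in the space model and through the mixed reality device; with noisy anchors, the same solver gives the least-squares best alignment.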
(2) With the objective sound system parameters preset, mark the real space information using the mixed reality device, the real space information comprising: the real perception space region perceived in real space, and the real acquisition position in real space, the real acquisition position including the head position and/or binaural position.
In this implementation, the preset objective sound system parameters are collected along with the real space information that the listener marks with the mixed reality device. This information includes the real perception space region in which the listener perceives the sound, and may also include the listener's current real acquisition position. The real acquisition position serves both as part of the objective sound system parameters and as input for determining the hearing space perception information.
(3) Convert the real space information into virtual space information comprising: the virtual perception space region corresponding to the real perception space region in the space model, and the virtual acquisition position corresponding to the real acquisition position in the space model.
In this implementation, once spatial calibration is complete, the marked real space information is converted into virtual space information in the space model; this correspondingly includes the virtual perception space region for the perceived sound, and may also include the listener's current virtual acquisition position.
(4) Determine the hearing space perception information by combining the virtual space information; this information and the preset objective sound system parameters form one set of training data.
In this implementation, the virtual space information is processed to determine the final hearing space perception information, and this subjective information, together with the objective sound system parameters, forms one set of training data.
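Steps (1) through (4) can be summarized as: map the marked real-space information through the calibrated transform into the space model, derive perception information from the virtual marks, and pair it with the preset objective parameters. The sketch below illustrates this assembly; the function names, the dictionary layout, and the choice of the region centroid as the perception value are all assumptions for illustration.

```python
import numpy as np

def real_to_virtual(points, R, t):
    """Map real-space points into the space model, given the calibrated
    rigid transform (R, t) that maps model space to real space."""
    pts = np.asarray(points, float)
    return (pts - t) @ R          # inverse transform, row-vector form

def make_training_sample(objective_params, real_region, real_position, R, t):
    """One set of training data: preset objective parameters plus the
    perception information derived from the converted virtual marks."""
    virtual_region = real_to_virtual(real_region, R, t)
    virtual_position = real_to_virtual([real_position], R, t)[0]
    perception = {
        # e.g. perceived sound image position as the region centroid,
        # expressed relative to the listener's virtual acquisition position
        "sound_image_position": virtual_region.mean(axis=0) - virtual_position,
    }
    return {"params": objective_params, "perception": perception}

# Identity calibration for the check: real space coincides with the model.
R, t = np.eye(3), np.zeros(3)
sample = make_training_sample(
    objective_params={"delay_front_left_ms": 2.5},
    real_region=[[1.0, 2.0, 1.0], [1.2, 2.2, 1.0]],
    real_position=[0.0, 0.0, 1.0],
    R=R, t=t,
)
```

Repeating this for each preset of the objective parameters yields the multiple sets of training data used in step 220.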
Step 220: and performing model training by using a plurality of groups of training data to obtain a subjective and objective correlation model.
In the embodiment of the application, after multiple sets of training data are obtained, the model associating the hearing space perception information with the objective sound system parameters, namely the subjective-objective association model, can be computed. The amount of training data is generally more than 1,000 sets.
The model may be computed using, but is not limited to, neural network algorithms, Bayesian algorithms, clustering algorithms, support vector machines (SVM), and other techniques.
For example, a neural network algorithm is preferably used for fitting to obtain the subjective-objective association model. If strongly associated data are discovered during tuning and testing, they can be supplied to the algorithm as prior knowledge to compensate for a shortage of training data. The fitted model is then given some untested objective sound system parameters as input, producing predicted hearing space perception information whose accuracy a listener can confirm. If the required confidence level is reached after repeated experiments, the fitted subjective-objective association model is considered accurate; otherwise, the algorithm parameters are adjusted and the model refitted until the required confidence level is reached.
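The fit-and-validate loop described above can be illustrated with a deliberately simple stand-in model. The application names neural networks, Bayesian methods, clustering, and SVMs; the sketch below instead fits a linear least-squares model on synthetic data (with the suggested more-than-1,000 training sets) and accepts or rejects it based on held-out prediction error, mirroring the confirm-or-refit procedure. All numbers and shapes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1200 synthetic samples: 4 objective parameters -> 2 perception values,
# generated from a known linear ground truth plus small noise.
W_true = np.array([[0.8, -0.2], [0.1, 0.5], [-0.3, 0.7], [0.4, 0.0]])
X = rng.standard_normal((1200, 4))
Y = X @ W_true + 0.01 * rng.standard_normal((1200, 2))

X_train, Y_train = X[:1000], Y[:1000]   # training sets (>1000 suggested)
X_test, Y_test = X[1000:], Y[1000:]     # held-out "untested" parameters

# Fit: W minimizing ||X W - Y||^2, solved by least squares.
W_fit, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)

# Confidence check: prediction error on held-out parameters decides
# whether the fitted subjective-objective model is accepted or refitted.
pred = X_test @ W_fit
rmse = float(np.sqrt(np.mean((pred - Y_test) ** 2)))
accepted = rmse < 0.05
```

Swapping the least-squares fit for a neural network or SVM changes only the estimator; the accept-or-refit loop around it stays the same.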
Step 230: and obtaining the objective parameters of the current target sound system.
The specific implementation manner of this step may be referred to the above embodiments, and will not be described herein.
Step 240: and inputting objective parameters of the target sound system into the subjective and objective association model to obtain the target hearing space perception information.
The specific implementation manner of this step may be referred to the above embodiments, and will not be described herein.
In summary, in the method for acquiring hearing space perception information provided by this embodiment, the mixed reality device first spatially calibrates the real-world space of the sound system application space against its space model in the virtual world; information acquired in the real world is then converted into the space model, and the hearing space perception information can be obtained quickly by combining the virtual space information in the model.
In an exemplary embodiment, the mixed reality device may be used to mark the space regions in which the listener perceives sound sources, sound fields, surround regions, and so on; accordingly, the hearing space perception information may include the following kinds of information:
(1) The listening space perception information includes: the perceived sound image position.
Where the perceived sound image position refers to the localization, in human subjective hearing, of the perceived sound source produced by multi-channel sound reproduction.
In order to determine the perceived sound image position, a spatial region corresponding to a perceived sound source in real space is marked by using a mixed reality device, and is marked as a real sound source spatial region, and a real acquisition position is marked, wherein the real sound source spatial region and the real acquisition position form real space information, and the corresponding virtual space information comprises: the real sound source space region corresponds to the virtual sound source space region in the space model, and the real acquisition position corresponds to the virtual acquisition position in the space model.
In this embodiment, the perceived sound image position may be determined using two kinds of virtual space information, that is, a virtual sound source space region and a virtual acquisition position: and taking the space area of the virtual sound source at the virtual acquisition position as the perceived sound image position.
Illustratively, the perceived sound image location may be generally subdivided into a left sound image location, a middle sound image location, and a right sound image location, depending on the location of the perceived sound image location relative to the listener.
For example, the real/virtual sound source space region corresponding to the perceived sound image position may be represented by a three-dimensional region such as an ellipsoid or a cuboid, with components in three dimensions: height, left-right position, and front-back position; the perceived sound image position may also be represented by a point cloud in a discrete coordinate system. It is understood that, when the thickness of the perceived sound image (the depth along the short axis of the ellipsoid or the depth direction of the cuboid) is not of interest, the real/virtual sound source space region can be simplified to a two-dimensional region such as an elliptical or rectangular surface at a given distance from the listener. In addition, for some requirements, the real/virtual sound source space region can be further simplified to a single spatial point; for example, the coordinates of the center point of the region are used to represent the whole perceived sound image position.
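The ellipsoid, two-dimensional, and center-point simplifications above can be sketched as a small data structure (the names and the (left-right, height, front-back) axis convention are assumptions for illustration, not from the patent):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class SoundImageRegion:
    """Ellipsoidal sound source region: centre in (left-right, height,
    front-back) metres plus half-extents along the same axes."""
    center: Tuple[float, float, float]
    semi_axes: Tuple[float, float, float]

    def as_point(self) -> Tuple[float, float, float]:
        """Simplify the whole region to its centre point."""
        return self.center

    def as_2d(self):
        """Drop the depth axis when the image thickness is not of interest."""
        (x, y, _), (ax, ay, _) = self.center, self.semi_axes
        return (x, y), (ax, ay)

region = SoundImageRegion(center=(0.2, 1.1, 1.5), semi_axes=(0.3, 0.1, 0.05))
```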
Referring to fig. 3, there are shown a left sound image position 301, a middle sound image position 302 and a right sound image position 303 perceived by a listener at the corresponding acquisition position within the application space of a vehicle sound system. In this case, the virtual acquisition position and the virtual sound source space regions corresponding to the left, middle and right sound image positions should be saved as the final perceived sound image positions.
(2) The listening space perception information includes: the perceived sound image width.
The perceived sound image width refers to the width of a perceived sound source produced for multi-channel sound reproduction in human subjective hearing.
In order to determine the perceived sound image width, a spatial region corresponding to a perceived sound source in real space is marked by using a mixed reality device, and is marked as a real sound source spatial region, and a real acquisition position is marked, wherein the real sound source spatial region and the real acquisition position form real space information, and the corresponding virtual space information comprises: the real sound source space region corresponds to the virtual sound source space region in the space model, and the real acquisition position corresponds to the virtual acquisition position in the space model.
In this embodiment, the perceived sound image width may be determined using two kinds of virtual space information, i.e., a virtual sound source space region and a virtual acquisition position: and projecting the length of the virtual sound source space area at the virtual acquisition position in the transverse axis direction as the perceived sound image width.
The transverse axis direction may be the left-right direction corresponding to the perceived sound image position when the listener looks straight ahead; alternatively, the length of the virtual sound source space region along its long axis may be used instead.
Referring to fig. 4, an inclined virtual sound source space region is shown. In this case the length of the virtual sound source space region along its long axis is taken as the perceived sound image width, because taking the length along the listener's left-right direction when looking straight ahead would not match a person's subjective intuition.
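The two width conventions can be contrasted numerically. The sketch below (an assumed geometry, modelling the region as a tilted ellipse) computes both the projection onto the listener's left-right axis and the length along the region's own long axis:

```python
import math

def width_lr_projection(semi_major, semi_minor, tilt_rad):
    """Width of a tilted ellipse projected onto the listener's left-right
    axis (support-function formula for an ellipse)."""
    return 2.0 * math.hypot(semi_major * math.cos(tilt_rad),
                            semi_minor * math.sin(tilt_rad))

def width_long_axis(semi_major):
    """Length of the region along its own long axis, per fig. 4's convention."""
    return 2.0 * semi_major
```

With semi-axes of 0.5 m and 0.1 m, both conventions agree at zero tilt (1.0 m), but the projection shrinks to 0.2 m at a 90° tilt while the long-axis width stays 1.0 m, which is why the long-axis convention better matches subjective intuition for inclined regions.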
(3) The listening space perception information includes: the perceived sound image symmetry.
The perceived sound image symmetry refers to the symmetry of perceived sound sources generated for multi-channel sound reproduction on human subjective hearing.
In order to determine the perceived sound image symmetry, a spatial region corresponding to a perceived sound source in real space is marked by using a mixed reality device and is marked as a real sound source spatial region, the real sound source spatial region forms real space information, and corresponding virtual space information comprises: the real sound source space region corresponds to the virtual sound source space region in the space model.
In this embodiment, the perceived sound image symmetry can be determined using virtual space information, which is a virtual sound source space region: taking the self symmetry of the first type virtual sound source space region as perceived sound image symmetry; and/or, taking the relative symmetry of the two second type virtual sound source space regions with the first type virtual sound source space region as a symmetry axis as perceived sound image symmetry; the first type virtual sound source space region is a virtual sound source space region opposite to the virtual acquisition position, and the second type virtual sound source space region is a virtual sound source space region oblique to the virtual acquisition position.
Wherein the first type virtual sound source space region may be understood as a virtual sound source space region corresponding to the intermediate sound image position, and the second type virtual sound source space region may be understood as a virtual sound source space region corresponding to the left/right sound image position.
In this embodiment, when determining the perceived sound image symmetry, a first aspect is to evaluate whether the middle sound image position itself is symmetrical, and a second aspect is to evaluate the symmetry of the left and right sound image positions with respect to the middle sound image position. For example, a central axis is taken through the center point of the middle sound image position in the up-down and front-back directions, and it is evaluated whether the left and right sound image positions are symmetrical about this axis, specifically whether their respective center points are symmetrical about the axis.
Illustratively, the perceived sound image symmetry corresponding to the left and right sound image positions is quantified using deviation values, where a deviation value is calculated from the center points of the sound image positions and/or the sound image widths.
For example: subtract the distance from the center point of the right sound image position to the central axis from the distance from the center point of the left sound image position to the central axis to obtain a deviation value. The closer the deviation value is to 0, the more symmetrical the left and right sound image positions are in left-right distance, with 0 indicating complete symmetry. The sign of the deviation value indicates the direction: a positive value means the center point of the left sound image position is farther from the central axis. The absolute value indicates the degree of asymmetry: the larger it is, the more asymmetric the left and right sound image positions are.
For example: record the plane passing through the virtual acquisition position and perpendicular to the central axis as the listener reference plane, and subtract the distance from the center point of the right sound image position to this plane from the distance from the center point of the left sound image position to this plane to obtain a deviation value. The closer the deviation value is to 0, the more symmetrical the left and right sound image positions are in front-back distance, with 0 indicating complete symmetry. The sign indicates the near-far relationship: a negative value means the center point of the left sound image position is closer to the listener reference plane. The larger the absolute value, the more asymmetric the left and right sound image positions are in front-back distance.
For example: calculate the difference in volume, or the difference in width, between the virtual sound source space regions corresponding to the left and right sound image positions and take it as a deviation value. The closer the deviation value is to 0, the more symmetrical the left and right sound image positions are in size; a positive value indicates that the left sound image position is larger, and the larger the absolute value, the greater the size deviation between the two sides.
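The first two deviation values above can be sketched directly (a minimal sketch; the coordinate convention x = left-right and z = front-back is an assumption):

```python
def lr_deviation(left_center, right_center, axis_x=0.0):
    """Left-right deviation: distance of the left image centre to the central
    axis minus that of the right image centre (0 = symmetric, positive =
    left image farther from the axis)."""
    return abs(left_center[0] - axis_x) - abs(right_center[0] - axis_x)

def fb_deviation(left_center, right_center, plane_z=0.0):
    """Front-back deviation relative to the listener reference plane
    (negative = left image centre closer to the plane)."""
    return abs(left_center[2] - plane_z) - abs(right_center[2] - plane_z)
```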
(4) The listening space perception information includes: the perceived sound image drift.
Wherein perceived sound image drift refers to the drift of perceived sound sources that occur for multi-channel sound reproduction in human subjective hearing.
In order to determine perceived sound image drift, a spatial region corresponding to a perceived sound source in real space is marked by using a mixed reality device, and is marked as a real sound source spatial region, and a real acquisition position is marked, wherein the real sound source spatial region and the real acquisition position form real space information, and the corresponding virtual space information comprises: the real sound source space region corresponds to the virtual sound source space region in the space model, and the real acquisition position corresponds to the virtual acquisition position in the space model.
In this embodiment, the perceived sound image drift may be determined using one kind of virtual space information, that is, the virtual sound source space region corresponding to different times, or using two kinds of virtual space information, that is, the virtual sound source space region corresponding to different times and the virtual acquisition position. Taking the movement condition of the central position point of the virtual sound source space area as perceived sound image drift; or, the movement condition of the central position point of the virtual sound source space area relative to the virtual acquisition position is taken as the perceived sound image drift.
One definition of perceived sound image drift is drift in the static case: with the acquisition position fixed, the perceived sound image represented by the virtual sound source space region moves by itself. If there is no drift, the perceived sound image drift index is 0; if there is, the distance between the two farthest corresponding points along the drift path can be calculated as the index, and the larger the index, the more obvious the drift. Another definition is drift in the micro-dynamic case: when the acquisition position moves, the sound image moves to a degree significantly exceeding that of the acquisition position. For example, if the acquisition position moves 2 cm to the right but the perceived middle sound image position moves 10 cm to the left, this is perceived sound image drift, and the corresponding index considers the change in the relative relationship between the virtual acquisition position and the center point of the virtual sound source space region.
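Both drift definitions can be sketched as indices (an assumed realization; the patent describes the indices only qualitatively):

```python
import math
from itertools import combinations

def static_drift_index(centers):
    """Static case: largest distance between any two samples of the virtual
    sound source region's centre point over time (0 = no drift)."""
    if len(centers) < 2:
        return 0.0
    return max(math.dist(a, b) for a, b in combinations(centers, 2))

def relative_drift(image_move_m, listener_move_m):
    """Micro-dynamic case: image movement per unit of listener movement;
    magnitudes well above 1 suggest perceived drift (e.g. 0.10 m of image
    motion for 0.02 m of listener motion)."""
    return image_move_m / listener_move_m
```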
(5) The listening space perception information includes: the perceived sound field width.
Where the perceived sound field width refers to the width, in human subjective hearing, of the perceived sound field produced by multi-channel sound reproduction, i.e. the auditory image formed by the individual sound sources over the spatial width.
In order to determine the perceived sound field width, a spatial region corresponding to the perceived sound field in real space is marked by using a mixed reality device, and is marked as a real sound field spatial region, and a real acquisition position is marked, wherein the real sound field spatial region and the real acquisition position form real space information, and the corresponding virtual space information comprises: the virtual sound field space region corresponding to the real sound field space region in the space model and the virtual acquisition position corresponding to the real acquisition position in the space model.
In this embodiment, the perceived sound field width may be determined using two kinds of virtual space information, i.e., a virtual sound field space region and a virtual acquisition position: and taking an included angle formed by the virtual sound field space region and the virtual acquisition position as a perceived sound field width.
The perceived sound field width is typically expressed as an angle. Referring to fig. 5, the elliptical area spanned from the leftmost instrument sound to the rightmost instrument sound perceived by the listener is the perceived sound field; while listening, the listener marks the range of the perceived sound field in space, generally as a large ellipsoid. The angle formed at the listener's corresponding acquisition position by the leftmost and rightmost edges of the sound field is the perceived sound field width. By way of example, an ideal perceived sound field width is around 60°, though the application is not limited to this particular value.
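The included angle can be computed from the virtual acquisition position and the sound field extremes; the sketch below assumes a 2-D top-down coordinate frame with +y pointing straight ahead of the listener:

```python
import math

def sound_field_width_deg(acq, leftmost, rightmost):
    """Angle (degrees) subtended at the acquisition position by the leftmost
    and rightmost extremes of the virtual sound field region."""
    def bearing(p):  # signed angle from straight ahead (+y); left is negative
        return math.atan2(p[0] - acq[0], p[1] - acq[1])
    return math.degrees(abs(bearing(rightmost) - bearing(leftmost)))
```

Extremes 30° to either side of straight ahead reproduce the roughly 60° ideal width mentioned above.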
(6) The listening space perception information includes: the perceived sound field symmetry.
Where perceived sound field symmetry refers to the symmetry of the perceived sound field produced for multi-channel sound reproduction in human subjective hearing.
In order to determine the symmetry of the perceived sound field, a spatial region corresponding to the perceived sound field in real space is marked by using a mixed reality device, and is marked as a real sound field spatial region, and a real acquisition position is marked, wherein the real sound field spatial region and the real acquisition position form real space information, and the corresponding virtual space information comprises: the virtual sound field space region corresponding to the real sound field space region in the space model and the virtual acquisition position corresponding to the real acquisition position in the space model.
In this embodiment, the perceived sound field symmetry may be determined using two kinds of virtual space information, i.e., a virtual sound field space region and a virtual acquisition position: and taking the self symmetry of the virtual sound field space region taking the virtual acquisition position as a symmetry axis as the perceived sound field symmetry.
The perceived sound field symmetry may specifically refer to whether the perceived sound field is symmetrical about the direction directly in front of the acquisition position. Referring to fig. 6, the direction directly in front of the acquisition position is marked as the central axis, and the perceived sound field width forms a sector. The angle of the sector to the left of the central axis and the angle to the right can then be calculated; if the two angles are unequal, the perceived sound field is asymmetric relative to the front of the listener. The perceived sound field symmetry can further be represented by the left angle minus the right angle (both expressed as angles greater than 0): a value of 0 means the perceived sound field is completely symmetric; the sign indicates the side toward which the field deviates, positive for the left and negative for the right; and the larger the absolute value, the worse the symmetry of the perceived sound field.
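The left-minus-right angle measure can be sketched in the same top-down frame (+y straight ahead at the acquisition position; the frame choice is an assumption):

```python
import math

def field_symmetry_deg(acq, leftmost, rightmost):
    """Left half-angle minus right half-angle of the perceived sound field,
    both measured from the straight-ahead axis at the acquisition position:
    0 = symmetric, positive = field skewed left, larger magnitude = worse."""
    left = math.degrees(math.atan2(acq[0] - leftmost[0], leftmost[1] - acq[1]))
    right = math.degrees(math.atan2(rightmost[0] - acq[0], rightmost[1] - acq[1]))
    return left - right
```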
(7) The listening space perception information includes: the sense of distance.
The sense of distance refers to the perceived distance, in human subjective hearing, of the perceived sound source and perceived sound field produced by multi-channel sound reproduction.
In order to determine the sense of distance, a spatial region corresponding to a perceived sound source in real space is marked by using a mixed reality device, and is marked as a real sound source spatial region, a spatial region corresponding to a perceived sound field in real space is marked as a real sound field spatial region, and a real acquisition position is marked, wherein the real sound source spatial region, the real sound field spatial region and the real acquisition position form real space information, and the corresponding virtual space information comprises: the virtual sound source space region corresponding to the real sound source space region in the space model, the virtual sound field space region corresponding to the real sound field space region in the space model and the virtual acquisition position corresponding to the real acquisition position in the space model.
In this embodiment, the sense of distance may be determined using three kinds of virtual space information, namely the virtual sound source space region, the virtual sound field space region, and the virtual acquisition position: the distances from the virtual sound source space region and the virtual sound field space region to the virtual acquisition position are taken as the sense of distance.
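A minimal sketch of the distance computation (assuming region centre points are used, per the centre-point simplification described earlier):

```python
import math

def distance_sense(source_center, field_center, acq):
    """Distances (metres) from the virtual sound source and virtual sound
    field region centres to the virtual acquisition position."""
    return math.dist(source_center, acq), math.dist(field_center, acq)
```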
(8) The listening space perception information includes: the sense of surround.
The sense of surround refers to the perceived surround angle, surround thickness, and the like, which are produced for multi-channel sound reproduction, in human subjective hearing.
In order to determine the sense of surrounding, a spatial region corresponding to a perceived surrounding region in real space is marked with a mixed reality device, and is marked as a real surrounding spatial region, and a real collecting position is marked, wherein the real surrounding spatial region and the real collecting position form real space information, and the corresponding virtual space information comprises: the real surrounding space region corresponds to a virtual surrounding space region in the space model, and the real acquisition position corresponds to a virtual acquisition position in the space model.
In this embodiment, the sense of surround can be determined using two kinds of virtual space information, namely the virtual surrounding space region and the virtual acquisition position: the width of the virtual surrounding space region and the included angle formed by the virtual surrounding space region and the virtual acquisition position are taken as the sense of surround.
Referring to fig. 7, the listener marks the perceived surrounding area, including the surround angle and the surround thickness. For example, if the listener perceives surround over only a limited angle and with low thickness, a narrow arc such as perceived surrounding area 701 is marked; if the perception is full surround with substantial thickness, an annular area such as perceived surrounding area 702 is marked; and if the perceived surround is stronger and thicker still, an annular area such as perceived surrounding area 703 is marked. After the perceived surrounding area is converted into the virtual surrounding space area, the width of the virtual surrounding space area and the included angle it forms with the virtual acquisition position can be taken as the sense of surround.
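The surround descriptor thus reduces to an angular coverage plus a radial thickness; the annular parameterization below is an assumption for illustration, not the patent's data format:

```python
def surround_sense(arc_span_deg, inner_radius_m, outer_radius_m):
    """Sense of surround as (angular coverage, radial thickness) of the
    virtual surrounding region around the virtual acquisition position:
    a narrow arc like region 701 yields small values of both, while a
    thick full ring like region 703 approaches (360, large thickness)."""
    coverage = min(float(arc_span_deg), 360.0)
    thickness = outer_radius_m - inner_radius_m
    return coverage, thickness
```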
It will be appreciated that the above method embodiments may be implemented alone or in combination, and the application is not limited in this regard.
Fig. 8 is a block diagram showing a structure of an apparatus for acquiring auditory spatial perception information according to an exemplary embodiment. The device comprises:
the association model construction module 801 is configured to construct a subjective and objective correlation model, which associates subjective hearing space perception information with objective parameters of a sound system, where the hearing space perception information is used to describe subjective listening experiences related to space, and the objective parameters of the sound system are used to describe objective parameters in the sound system that support quantitative definition;
an objective parameter obtaining module 802, configured to obtain an objective parameter of a current target sound system;
the subjective result obtaining module 803 is configured to input the objective parameters of the target sound system into the subjective and objective correlation model to obtain target hearing space perception information, where the target hearing space perception information indicates the current subjective evaluation result of hearing space perception.
In one possible implementation manner, the association model building module 801 includes:
The training data acquisition unit is used for acquiring corresponding listening space perception information by using mixed reality equipment under the condition that objective parameters of a sound system are preset, so as to obtain a group of training data, wherein the training data comprises: preset objective parameters of a sound system and corresponding hearing space perception information;
and the model training unit is used for carrying out model training by using a plurality of groups of training data to obtain the subjective and objective correlation model.
In a possible implementation manner, the training data acquisition unit is configured to:
in the mixed reality equipment, a space model of an application space of a sound system and a real space of the application space of the sound system are subjected to space calibration;
under the condition that objective parameters of the sound system are preset, marking real space information by using the mixed reality device, where the real space information includes: a real perception space region perceived in real space and a real acquisition position in real space, the real acquisition position including: a head position and/or a binaural position;
converting the real space information into virtual space information, the virtual space information including: the virtual sensing space region corresponding to the real sensing space region in the space model and the virtual collecting position corresponding to the real collecting position in the space model;
And combining the virtual space information to determine the hearing space perception information, wherein the hearing space perception information and preset objective parameters of the sound system form a group of training data.
In one possible implementation, the listening space perception information includes: sensing the position of the sound image; the real world perception spatial region includes: and perceiving a real sound source space region corresponding to the sound source in the real space, wherein the virtual perception space region comprises: the real sound source space region corresponds to a virtual sound source space region in the space model;
the training data acquisition unit is used for:
and taking the virtual sound source space area at the virtual acquisition position as the perceived sound image position.
In one possible implementation, the listening space perception information includes: sensing the width of the sound image; the real world perception spatial region includes: and perceiving a real sound source space region corresponding to the sound source in the real space, wherein the virtual perception space region comprises: the real sound source space region corresponds to a virtual sound source space region in the space model;
the training data acquisition unit is used for:
and projecting the length of the virtual sound source space area at the virtual acquisition position in the transverse axis direction as the perceived sound image width.
In one possible implementation, the listening space perception information includes: sensing the symmetry of the sound image; the real world perception spatial region includes: and perceiving a real sound source space region corresponding to the sound source in the real space, wherein the virtual perception space region comprises: the real sound source space region corresponds to a virtual sound source space region in the space model;
the training data acquisition unit is used for:
taking the self symmetry of the first type virtual sound source space area as the perceived sound image symmetry;
and/or,
taking the relative symmetry of two second-type virtual sound source space regions with the first-type virtual sound source space region as a symmetry axis as the perceived sound image symmetry;
wherein the first type virtual sound source space region is a virtual sound source space region directly facing the virtual acquisition position, and the second type virtual sound source space region is a virtual sound source space region oblique to the virtual acquisition position.
In one possible implementation, the listening space perception information includes: sensing sound image drift; the real world perception spatial region includes: and perceiving a real sound source space region corresponding to the sound source in the real space, wherein the virtual perception space region comprises: the real sound source space region corresponds to a virtual sound source space region in the space model;
The training data acquisition unit is used for:
taking the movement condition of the central position point of the virtual sound source space area as the perceived sound image drift;
or,
and taking the movement condition of the central position point of the virtual sound source space area relative to the virtual acquisition position as the perceived sound image drift.
In one possible implementation, the listening space perception information includes: sensing the width of the sound field; the real world perception spatial region includes: and perceiving a real sound field space region corresponding to the sound field in a real space, wherein the virtual perception space region comprises: the virtual sound field space region corresponding to the real sound field space region in the space model;
the training data acquisition unit is used for:
and taking an included angle formed by the virtual sound field space region and the virtual acquisition position as the perceived sound field width.
In one possible implementation, the listening space perception information includes: sensing sound field symmetry; the real world perception spatial region includes: and perceiving a real sound field space region corresponding to the sound field in a real space, wherein the virtual perception space region comprises: the virtual sound field space region corresponding to the real sound field space region in the space model;
The training data acquisition unit is used for:
and taking the self symmetry of the virtual sound field space region taking the virtual acquisition position as a symmetry axis as the perceived sound field symmetry.
In one possible implementation, the listening space perception information includes: sensing distance; the real world perception spatial region includes: a real sound source space region corresponding to a perceived sound source in real space, a real sound field space region corresponding to a perceived sound field in real space, the virtual perceived space region comprising: a virtual sound source space region corresponding to the real sound source space region in the space model and a virtual sound field space region corresponding to the real sound field space region in the space model;
the training data acquisition unit is used for:
and taking the distances from the virtual sound source space region and the virtual sound field space region to the virtual acquisition position as the sense of distance.
In one possible implementation, the listening space perception information includes: a sense of wrap around; the real world perception spatial region includes: and sensing a real surrounding space region corresponding to the surrounding region in real space, wherein the virtual sensing space region comprises: the real surrounding space region corresponds to a virtual surrounding space region in the space model;
The training data acquisition unit is used for:
and taking the width of the virtual surrounding space area and an included angle formed by the virtual surrounding space area and the virtual acquisition position as the surrounding sense.
In one possible implementation, the sound system application space includes:
a vehicle;
or, home theatres.
In one possible implementation, the objective parameters of the sound system include at least one of:
objective parameters of the audio hardware system, which are used for describing objective parameters related to hardware setting in the audio system;
objective parameters of the sound software system, which are used for describing objective parameters related to software settings in the sound system;
test data describing objective parameters related to listening tests performed on the test audio file.
It should be noted that: the device for acquiring the hearing space perception information provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the apparatus and the method embodiments provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the apparatus and the method embodiments are detailed in the method embodiments and are not repeated herein.
Referring to fig. 9, a schematic diagram of a computer device according to an exemplary embodiment of the present application is provided. The computer device includes a memory and a processor, the memory being configured to store a computer program which, when executed by the processor, implements the above method for acquiring listening space perception information.
The processor may be a central processing unit (Central Processing Unit, CPU). The processor may also be any other general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof.
The memory, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the methods in embodiments of the present application. By running the non-transitory software programs, instructions, and modules stored in the memory, the processor executes various functional applications and performs data processing, i.e., implements the methods of the method embodiments described above.
The memory may include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required for a function, and the data storage area may store data created by the processor, etc. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some implementations, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In an exemplary embodiment, a computer readable storage medium is also provided for storing at least one computer program that is loaded and executed by a processor to implement all or part of the steps of the above method. For example, the computer readable storage medium may be Read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), compact disc Read-Only Memory (CD-ROM), magnetic tape, floppy disk, optical data storage device, and the like.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (16)
1. A method for obtaining auditory space perception information, the method comprising:
constructing a subjective-objective association model, wherein the subjective-objective association model associates subjective listening space perception information with objective sound system parameters, the listening space perception information is used for describing subjective listening perception related to space, and the sound system objective parameters are used for describing parameters of a sound system that support quantitative definition;
acquiring current sound system objective parameters of a target sound system;
and inputting the sound system objective parameters of the target sound system into the subjective-objective association model to obtain target listening space perception information, wherein the target listening space perception information indicates a current subjective evaluation result for listening space perception.
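As a rough illustration of the three claimed steps, the sketch below stands in a trivial 1-nearest-neighbour lookup for the subjective-objective association model; the claim leaves the model family open (the description elsewhere mentions support vector machines and neural networks), and all parameter values and the perception-score key are invented:

```python
# Minimal sketch of claim 1: build an association model from
# (objective parameters, perception info) training pairs, then query it
# with a target system's current objective parameters.

def build_model(training_data):
    # training_data: list of (objective_params, perception_info) pairs.
    # A 1-nearest-neighbour lookup is a stand-in for the trained model.
    def model(query_params):
        def dist(pair):
            p, _ = pair
            return sum((a - b) ** 2 for a, b in zip(p, query_params))
        return min(training_data, key=dist)[1]
    return model

# Step 1: construct the association model from collected training pairs.
model = build_model([
    ([6.0, 3.0], {"sound_field_width_deg": 60}),
    ([2.0, 0.0], {"sound_field_width_deg": 30}),
])
# Steps 2-3: acquire the target system's objective parameters, run inference.
target_params = [5.5, 2.5]
print(model(target_params))  # {'sound_field_width_deg': 60}
```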
2. The method of claim 1, wherein the constructing the subjective and objective relevance model comprises:
under the condition that objective parameters of a sound system are preset, corresponding listening space perception information is acquired by using mixed reality equipment, and a set of training data is obtained, wherein the training data comprises: preset objective parameters of a sound system and corresponding hearing space perception information;
and performing model training by using a plurality of groups of training data to obtain the subjective and objective correlation model.
3. The method according to claim 2, wherein the acquiring, using the mixed reality device, the corresponding listening space perception information under the condition that the sound system objective parameters are preset, to obtain a set of training data, comprises:
performing, in the mixed reality device, spatial calibration between a space model of the sound system application space and the real space of the sound system application space;
under the condition that the sound system objective parameters are preset, marking real space information by using the mixed reality device, the real space information comprising: a real perception space region perceived in real space, and a real acquisition position in real space, the real acquisition position comprising: a head position and/or a binaural position;
converting the real space information into virtual space information, the virtual space information including: the virtual sensing space region corresponding to the real sensing space region in the space model and the virtual collecting position corresponding to the real collecting position in the space model;
and combining the virtual space information to determine the hearing space perception information, wherein the hearing space perception information and preset objective parameters of the sound system form a group of training data.
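The conversion from real space information to virtual space information presupposes the spatial calibration of the first step. One common way to realize such a calibration result is a rigid transform from real coordinates into the space model's frame; the sketch below assumes a 2-D rotation plus translation, which is an illustrative representation rather than one prescribed by the claim:

```python
import math

# Sketch of the real-to-virtual conversion of claim 3: after calibration, a
# rigid transform (yaw rotation + translation) maps a marked real-space
# point (e.g. the head position) into the space model's coordinates.
# All transform values below are illustrative assumptions.

def real_to_virtual(point, yaw_rad, translation):
    x, y = point
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    tx, ty = translation
    return (c * x - s * y + tx, s * x + c * y + ty)

real_head = (1.0, 0.0)
virtual_head = real_to_virtual(real_head, yaw_rad=math.pi / 2,
                               translation=(0.0, 0.0))
print([round(v, 6) for v in virtual_head])  # [0.0, 1.0]
```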
4. A method according to claim 3, wherein the listening space perception information comprises: sensing the position of the sound image; the real world perception spatial region includes: and perceiving a real sound source space region corresponding to the sound source in the real space, wherein the virtual perception space region comprises: the real sound source space region corresponds to a virtual sound source space region in the space model;
The determining the listening space perception information by combining the virtual space information comprises the following steps:
and taking the virtual sound source space area at the virtual acquisition position as the perceived sound image position.
5. A method according to claim 3, wherein the listening space perception information comprises: sensing the width of the sound image; the real world perception spatial region includes: and perceiving a real sound source space region corresponding to the sound source in the real space, wherein the virtual perception space region comprises: the real sound source space region corresponds to a virtual sound source space region in the space model;
the determining the listening space perception information by combining the virtual space information comprises the following steps:
and taking, as the perceived sound image width, the projected length of the virtual sound source space region along the transverse axis direction at the virtual acquisition position.
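A minimal sketch of this projection, assuming the virtual sound source region is given as corner points and the transverse (left-right) axis at the virtual acquisition position is the x-axis; the region coordinates are invented:

```python
# Sketch of claim 5: perceived sound image width as the length of the
# virtual sound source region projected onto the transverse axis.

def sound_image_width(region_points):
    # Project every corner onto the transverse (x) axis and measure the span.
    xs = [p[0] for p in region_points]
    return max(xs) - min(xs)

region = [(-0.4, 2.0), (0.6, 2.0), (0.6, 2.5), (-0.4, 2.5)]
print(sound_image_width(region))  # 1.0
```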
6. A method according to claim 3, wherein the listening space perception information comprises: sensing the symmetry of the sound image; the real world perception spatial region includes: and perceiving a real sound source space region corresponding to the sound source in the real space, wherein the virtual perception space region comprises: the real sound source space region corresponds to a virtual sound source space region in the space model;
The determining the listening space perception information by combining the virtual space information comprises the following steps:
taking the self symmetry of the first type virtual sound source space area as the perceived sound image symmetry;
and/or,
taking the relative symmetry of two second-type virtual sound source space regions with the first-type virtual sound source space region as a symmetry axis as the perceived sound image symmetry;
wherein the first type virtual sound source space region is a virtual sound source space region directly facing the virtual acquisition position, and the second type virtual sound source space region is a virtual sound source space region diagonal to the virtual acquisition position.
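The second branch above (relative symmetry of two side regions about the front-facing region's axis) can be sketched with a simple mirror comparison; the scoring formula and all coordinates below are illustrative choices, not defined by the claim:

```python
# Sketch of claim 6: relative symmetry of two second-type regions mirrored
# about the symmetry axis (taken here as the y-axis through the first-type
# region). Score 1.0 means perfectly symmetric.

def relative_symmetry(region_a, region_b):
    mirrored = sorted((-x, y) for x, y in region_a)
    b = sorted(region_b)
    err = sum(abs(ma - mb) + abs(na - nb)
              for (ma, na), (mb, nb) in zip(mirrored, b))
    return 1.0 / (1.0 + err)

left = [(-1.0, 2.0), (-0.5, 2.0)]
right = [(0.5, 2.0), (1.0, 2.0)]
print(relative_symmetry(left, right))  # 1.0
```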
7. A method according to claim 3, wherein the listening space perception information comprises: sensing sound image drift; the real world perception spatial region includes: and perceiving a real sound source space region corresponding to the sound source in the real space, wherein the virtual perception space region comprises: the real sound source space region corresponds to a virtual sound source space region in the space model;
the determining the listening space perception information by combining the virtual space information comprises the following steps:
taking the movement condition of the central position point of the virtual sound source space area as the perceived sound image drift;
or,
and taking the movement condition of the central position point of the virtual sound source space area relative to the virtual acquisition position as the perceived sound image drift.
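Both branches of claim 7 reduce to tracking the region's centre point over time; the sketch below measures its displacement between two frames relative to the virtual acquisition position, with invented coordinates:

```python
import math

# Sketch of claim 7: perceived sound image drift as the movement of the
# virtual sound source region's centre point across successive frames,
# measured relative to the virtual acquisition position.

def centre(region):
    xs = [p[0] for p in region]
    ys = [p[1] for p in region]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def drift(region_t0, region_t1, acquisition_pos):
    ax, ay = acquisition_pos
    (x0, y0), (x1, y1) = centre(region_t0), centre(region_t1)
    # Displacement of the centre point in the acquisition position's frame.
    return math.hypot((x1 - ax) - (x0 - ax), (y1 - ay) - (y0 - ay))

frame0 = [(0.0, 2.0), (1.0, 2.0)]
frame1 = [(0.3, 2.0), (1.3, 2.0)]
print(round(drift(frame0, frame1, (0.0, 0.0)), 3))  # 0.3
```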
8. A method according to claim 3, wherein the listening space perception information comprises: sensing the width of the sound field; the real world perception spatial region includes: and perceiving a real sound field space region corresponding to the sound field in a real space, wherein the virtual perception space region comprises: the virtual sound field space region corresponding to the real sound field space region in the space model;
the determining the listening space perception information by combining the virtual space information comprises the following steps:
and taking an included angle formed by the virtual sound field space region and the virtual acquisition position as the perceived sound field width.
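The included angle of claim 8 can be computed as the angular span that the region's boundary points subtend at the acquisition position; the sketch below does this naively in 2-D (it would need extra care for regions crossing the ±180° seam), with invented coordinates:

```python
import math

# Sketch of claim 8: perceived sound field width as the angle the virtual
# sound field region subtends at the virtual acquisition position.

def subtended_angle_deg(region_points, acquisition_pos):
    ax, ay = acquisition_pos
    angles = [math.atan2(y - ay, x - ax) for x, y in region_points]
    return math.degrees(max(angles) - min(angles))

# Region spanning from 45 deg to 135 deg as seen from the origin.
region = [(1.0, 1.0), (-1.0, 1.0)]
print(round(subtended_angle_deg(region, (0.0, 0.0))))  # 90
```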
9. A method according to claim 3, wherein the listening space perception information comprises: sensing sound field symmetry; the real world perception spatial region includes: and perceiving a real sound field space region corresponding to the sound field in a real space, wherein the virtual perception space region comprises: the virtual sound field space region corresponding to the real sound field space region in the space model;
The determining the listening space perception information by combining the virtual space information comprises the following steps:
and taking the self symmetry of the virtual sound field space region taking the virtual acquisition position as a symmetry axis as the perceived sound field symmetry.
10. A method according to claim 3, wherein said listening to spatial perception information comprises: sensing distance; the real world perception spatial region includes: a real sound source space region corresponding to a perceived sound source in real space, a real sound field space region corresponding to a perceived sound field in real space, the virtual perceived space region comprising: a virtual sound source space region corresponding to the real sound source space region in the space model and a virtual sound field space region corresponding to the real sound field space region in the space model;
the determining the listening space perception information by combining the virtual space information comprises the following steps:
and taking the distance between the virtual sound source space region and the virtual sound field space region and the virtual acquisition position as the distance sense.
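As a minimal sketch of claim 10, the distance sense can be taken as the distance from the virtual acquisition position to the virtual region; measuring to the region's nearest point is an illustrative choice, since the claim does not fix which point of the region is used:

```python
import math

# Sketch of claim 10: sense of distance as the distance from the virtual
# acquisition position to the virtual sound source / sound field region
# (here: to the region's nearest boundary point).

def distance_sense(region_points, acquisition_pos):
    ax, ay = acquisition_pos
    return min(math.hypot(x - ax, y - ay) for x, y in region_points)

region = [(0.0, 3.0), (0.0, 4.0)]
print(distance_sense(region, (0.0, 0.0)))  # 3.0
```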
11. A method according to claim 3, wherein said listening to spatial perception information comprises: a sense of wrap around; the real world perception spatial region includes: and sensing a real surrounding space region corresponding to the surrounding region in real space, wherein the virtual sensing space region comprises: the real surrounding space region corresponds to a virtual surrounding space region in the space model;
The determining the listening space perception information by combining the virtual space information comprises the following steps:
and taking the width of the virtual surrounding space area and an included angle formed by the virtual surrounding space area and the virtual acquisition position as the surrounding sense.
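Claim 11 combines two quantities, the surround region's width and its included angle at the acquisition position; returning them as a pair is an illustrative choice, and the coordinates below are invented:

```python
import math

# Sketch of claim 11: surround sense from (a) the virtual surround region's
# width along the transverse axis and (b) the angle it subtends at the
# virtual acquisition position.

def surround_sense(region_points, acquisition_pos):
    xs = [p[0] for p in region_points]
    width = max(xs) - min(xs)
    ax, ay = acquisition_pos
    angles = [math.atan2(y - ay, x - ax) for x, y in region_points]
    angle_deg = math.degrees(max(angles) - min(angles))
    return width, angle_deg

region = [(-1.0, 1.0), (1.0, 1.0)]
width, angle = surround_sense(region, (0.0, 0.0))
print(width, round(angle))  # 2.0 90
```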
12. A method according to claim 3, wherein the sound system application space comprises:
a vehicle;
or, home theatres.
13. The method of claim 1, wherein the sound system objective parameters include at least one of:
sound hardware system objective parameters, used for describing objective parameters related to hardware settings in the sound system;
sound software system objective parameters, used for describing objective parameters related to software settings in the sound system;
test data, used for describing objective parameters related to listening tests performed on a test audio file.
14. An apparatus for obtaining auditory space perception information, the apparatus comprising:
the association model construction module, configured to construct a subjective-objective association model, wherein the subjective-objective association model associates subjective listening space perception information with objective sound system parameters, the listening space perception information is used for describing subjective listening perception related to space, and the sound system objective parameters are used for describing parameters of a sound system that support quantitative definition;
the objective parameter acquisition module, configured to acquire current sound system objective parameters of a target sound system;
the subjective result acquisition module, configured to input the sound system objective parameters of the target sound system into the subjective-objective association model to obtain target listening space perception information, the target listening space perception information indicating a current subjective evaluation result for listening space perception.
15. A computer device, comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, and the at least one instruction, at least one program, set of codes, or set of instructions are loaded and executed by the processor to implement a method for obtaining auditory space perception information according to any one of claims 1 to 13.
16. A computer readable storage medium having stored therein at least one instruction, at least one program, code set, or instruction set, the at least one instruction, at least one program, code set, or instruction set being loaded and executed by a processor to implement a method of obtaining auditory spatial perception information according to any one of claims 1 to 13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311395669.0A CN117135530B (en) | 2023-10-26 | 2023-10-26 | Method, device, equipment and storage medium for acquiring hearing space perception information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117135530A true CN117135530A (en) | 2023-11-28 |
CN117135530B CN117135530B (en) | 2024-03-29 |
Family
ID=88858576
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311395669.0A Active CN117135530B (en) | 2023-10-26 | 2023-10-26 | Method, device, equipment and storage medium for acquiring hearing space perception information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117135530B (en) |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090238371A1 (en) * | 2008-03-20 | 2009-09-24 | Francis Rumsey | System, devices and methods for predicting the perceived spatial quality of sound processing and reproducing equipment |
CN102711010A (en) * | 2012-05-29 | 2012-10-03 | 苏州上声电子有限公司 | Method and device for controlling broadband sound field of loudspeaker array by utilizing secondary residual sequence |
JP2014007556A (en) * | 2012-06-25 | 2014-01-16 | Nippon Hoso Kyokai <Nhk> | Auditory impression amount estimation device and program thereof |
CN106535076A (en) * | 2016-11-22 | 2017-03-22 | 深圳埃蒙克斯科技有限公司 | Spatial calibration method of stereo system and mobile terminal device thereof |
CN110191392A (en) * | 2019-05-07 | 2019-08-30 | 广州市迪士普音响科技有限公司 | A kind of virtual reality public address implementation method |
CN110974247A (en) * | 2019-12-18 | 2020-04-10 | 华南理工大学 | Device of spatial auditory clinical audiometry system |
CN111935624A (en) * | 2020-09-27 | 2020-11-13 | 广州汽车集团股份有限公司 | Objective evaluation method, system, equipment and storage medium for in-vehicle sound space sense |
CN112487865A (en) * | 2020-11-02 | 2021-03-12 | 杭州兆华电子有限公司 | Automatic loudspeaker classification method based on machine learning |
CN112312278A (en) * | 2020-12-28 | 2021-02-02 | 汉桑(南京)科技有限公司 | Sound parameter determination method and system |
CN113194384A (en) * | 2020-12-28 | 2021-07-30 | 汉桑(南京)科技有限公司 | Sound parameter determination method and system |
CN113207062A (en) * | 2020-12-28 | 2021-08-03 | 汉桑(南京)科技有限公司 | Sound parameter determination method and system |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118310620A (en) * | 2024-06-07 | 2024-07-09 | 深圳市声菲特科技技术有限公司 | Sound field testing method and system based on feature analysis |
CN118310620B (en) * | 2024-06-07 | 2024-08-06 | 深圳市声菲特科技技术有限公司 | Sound field testing method and system based on feature analysis |
Also Published As
Publication number | Publication date |
---|---|
CN117135530B (en) | 2024-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11706582B2 (en) | Calibrating listening devices | |
US20190364378A1 (en) | Calibrating listening devices | |
CN109076305B (en) | Augmented reality headset environment rendering | |
JP5702852B2 (en) | Method for selecting perceptually optimal HRTF filters in a database according to morphological parameters | |
Gupta et al. | HRTF database at FIU DSP lab | |
CN117135530B (en) | Method, device, equipment and storage medium for acquiring hearing space perception information | |
CN112073891B (en) | System and method for generating head-related transfer functions | |
Stitt et al. | Sensitivity analysis of pinna morphology on head-related transfer functions simulated via a parametric pinna model | |
Durin et al. | Acoustic analysis of the directional information captured by five different hearing aid styles | |
Liu et al. | An improved anthropometry-based customization method of individual head-related transfer functions | |
US10743128B1 (en) | System and method for generating head-related transfer function | |
Li et al. | Towards mobile 3d hrtf measurement | |
Pausch et al. | Hybrid multi-harmonic model for the prediction of interaural time differences in individual behind-the-ear hearing-aid-related transfer functions | |
EP4383756A1 (en) | Method and system for generating a personalised head-related transfer function | |
Di Giusto et al. | Evaluation of the accuracy of photogrammetry for head-related transfer functions acquisition using numerical methods | |
US20240056756A1 (en) | Method for Generating a Personalised HRTF | |
CN117437367B (en) | Early warning earphone sliding and dynamic correction method based on auricle correlation function | |
FANTINI et al. | TOWARD A NOVEL SET OF PINNA ANTHROPOMETRIC FEATURES FOR INDIVIDUALIZING HEAD-RELATED TRANSFER FUNCTIONS | |
García | Fast Individual HRTF Acquisition with Unconstrained Head Movements for 3D Audio | |
CN117729503A (en) | Method for measuring auricle parameters in real time and dynamically correcting and reminding sliding of earmuffs | |
Mathew et al. | Measuring Auditory Localization Potential on XR Devices | |
Liu | Generating Personalized Head-Related Transfer Function (HRTF) using Scanned Mesh from iPhone FaceID | |
CN118250628A (en) | Audio signal processing method, system, equipment and storage medium | |
FI20195495A1 (en) | System and method for generating head-related transfer function | |
Lokki | Objective comparison of measured and modeled binaural room responses |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||