CN110347862B - Recording processing method, device, equipment, system and audio equipment - Google Patents
- Publication number: CN110347862B
- Application number: CN201910550414.4A
- Authority
- CN
- China
- Prior art keywords
- recording
- user
- instruction
- recording file
- file
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/61—Indexing; Data structures therefor; Storage structures
- G06F16/63—Querying
- G06F16/638—Presentation of query results
- G06F16/68—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/683—Retrieval characterised by using metadata automatically derived from the content
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C7/00—Arrangements for writing information into, or reading information out from, a digital store
- G11C7/16—Storage of analogue signals in digital stores using an arrangement comprising analogue/digital [A/D] converters, digital memories and digital/analogue [D/A] converters
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Library & Information Science (AREA)
- Software Systems (AREA)
- Signal Processing For Digital Recording And Reproducing (AREA)
- Management Or Editing Of Information On Record Carriers (AREA)
- Telephonic Communication Services (AREA)
Abstract
The invention discloses a recording processing method, apparatus, device, system, and audio device. The method is implemented by an audio device and comprises: in response to a recording instruction issued by a user, recording the voice content uttered by the user, generating a corresponding recording file, and storing the recording file in a specified storage area; and in response to a recording extraction instruction issued by the user, extracting the recording file corresponding to that instruction from the specified storage area and playing it back to the user.
Description
Technical Field
The present invention relates to the field of recording processing technologies, and in particular, to a recording processing method, apparatus, device, system, and audio device.
Background
In recent years, with advances in artificial intelligence and device manufacturing technology, artificial intelligence devices have surged in popularity. For example, smart speakers have become essential household devices in many homes: through natural-language interaction, users can retrieve information, enjoy entertainment, and control application services such as home appliances, giving them a new and convenient home experience.
However, smart speakers currently offer a narrow set of functions, mainly providing information or entertainment services, and cannot meet users' increasingly diverse needs — for example, the need to record important information.
Disclosure of Invention
An object of the present invention is to provide a new technical solution in which an audio device is configured to record audio and play it back.
According to a first aspect of the present invention, there is provided a recording processing method, implemented by an audio device, comprising:
in response to a recording instruction issued by a user, recording the voice content uttered by the user, generating a corresponding recording file, and storing the recording file in a specified storage area;
and in response to a recording extraction instruction issued by the user, extracting the recording file corresponding to the recording extraction instruction from the specified storage area and playing it to the user.
According to a second aspect of the present invention, there is provided a recording processing apparatus, provided on an audio device side, comprising:
a voice recording unit configured to, in response to a recording instruction issued by a user, record the voice content uttered by the user, generate a corresponding recording file, and store the recording file in a specified storage area;
and a recording extraction unit configured to, in response to a recording extraction instruction issued by the user, extract the recording file corresponding to the recording extraction instruction from the specified storage area and play it to the user.
According to a third aspect of the present invention, there is provided a recording processing device comprising:
a memory for storing executable instructions;
and a processor configured to operate the recording processing device under the control of the executable instructions, to perform the recording processing method according to the first aspect of the present invention.
According to a fourth aspect of the present invention, there is provided an audio apparatus comprising:
the recording processing apparatus according to the second aspect of the present invention, or the recording processing device according to the third aspect of the present invention.
According to a fifth aspect of the present invention, there is provided a recording processing system comprising:
an audio device according to the fourth aspect of the invention;
at least one mobile terminal;
the mobile terminal comprises a memory and a processor, the memory storing executable instructions, and the processor being configured to operate the mobile terminal under the control of the executable instructions to perform the following recording processing method:
establishing a connection with the audio device;
and recording voice content sent by a user, generating a corresponding recording file, and sending the recording file to the audio equipment.
According to one embodiment of the disclosure, the audio device is configured to respond to a user's recording instruction by recording a file and storing it in the designated storage area, and to respond to the user's recording extraction instruction by extracting the corresponding file from that area and playing it to the user. This realizes a recording playback function on the audio device, enriches the functions of audio devices (such as smart speakers and smart headsets), meets users' increasingly diverse needs, and improves the user experience.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a block diagram showing an example hardware configuration of an audio device 1000 that can be used to implement an embodiment of the present invention.
Fig. 2 shows a flowchart of a recording processing method of an embodiment of the present invention.
Fig. 3 is a schematic diagram of an example of a recording processing method according to an embodiment of the present invention.
Fig. 4 is a diagram showing an example of the relationship between the data transfer risk of the audio device and the user's attention.
Fig. 5 is a diagram showing an example of the preset time length index relationship.
Fig. 6 shows a block diagram of the recording processing apparatus 3000 of the embodiment of the present invention.
Fig. 7 shows a block diagram of the recording processing device 4000 of an embodiment of the present invention.
Fig. 8 shows a block diagram of the recording processing system 7000 of an embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as exemplary only and not as limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
< hardware configuration >
Fig. 1 is a block diagram showing a hardware configuration of an audio device 1000 that can implement an embodiment of the present invention.
The audio device 1000 may be, for example, a smart speaker, a smart headset, or the like. As shown in fig. 1, the audio device 1000 may include a processor 1100, a memory 1200, an interface device 1300, a communication device 1400, a display device 1500, an input device 1600, a speaker 1700, a microphone 1800, and so on. The processor 1100 may be a central processing unit (CPU), a microcontroller (MCU), or the like. The memory 1200 includes, for example, ROM (read-only memory), RAM (random access memory), and nonvolatile memory such as a hard disk. The interface device 1300 includes, for example, a USB interface, a headphone interface, and the like. The communication device 1400 is capable of wired or wireless communication, including, for example, Wi-Fi, Bluetooth, and 2G/3G/4G/5G communication. The display device 1500 is, for example, a liquid crystal display panel or a touch panel. The input device 1600 may include, for example, a touch screen, a keyboard, or somatosensory input. A user may issue voice instructions through the microphone 1800; under the control of the processor 1100, and according to executable instructions stored in the memory 1200, the audio device 1000 processes the voice instructions and plays the results to the user through the speaker 1700.
The audio device shown in fig. 1 is merely illustrative and is in no way meant to limit the invention, its application, or uses. In an embodiment of the present invention, the memory 1200 of the audio device 1000 stores instructions that control the processor 1100 to execute any recording processing method provided by the embodiments of the present invention. Although fig. 1 shows multiple components of the audio device 1000, the present invention may involve only some of them; for example, the audio device 1000 may involve only the processor 1100 and the memory 1200. A skilled person can design the instructions according to the disclosed solution; how instructions control the operation of a processor is well known in the art and will not be described in detail here.
< example >
In this embodiment, a recording processing method implemented by an audio device is provided. The audio device is a product such as a speaker or headset implemented using artificial intelligence technology (for example, intelligent voice technology), which provides application services by interacting with the user — for example, receiving the user's voice instructions to play songs, shop, or query weather information. In one example, the hardware configuration of the audio device may be as shown in fig. 1.
As shown in fig. 2, the recording processing method includes: steps S2100-S2200.
Step S2100, in response to a recording instruction issued by the user, record the voice content uttered by the user, generate a corresponding recording file, and store the recording file in a designated storage area.
In this embodiment, after the recording instruction issued by the user is recognized, the voice content that the user subsequently utters and wishes to record is captured to generate a corresponding recording file. For example, as shown in fig. 3, when the audio device is a smart speaker, after receiving the recording instruction it may play recording guidance prompting the user to utter the content to be recorded, perform the recording, and generate a corresponding recording file.
The generated recording file can be assigned a unique file identifier. The file identifier uniquely identifies the recording file; it can be a time tag marking when the file was generated, or a custom identifier set by the user through an external operation.
After the recording file is generated, it can be encrypted according to a preset encryption scheme before being stored in the designated storage area, improving the security of the recording file. The preset encryption scheme may be a default scheme or one customized by the user, which is not limited here.
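The patent leaves the preset encryption scheme open. Purely as an illustrative sketch — not the patented scheme, and not production-grade cryptography — encrypting a recording with a user-derived key before storage could look like the following Python, where all names are hypothetical and a hash-derived XOR keystream stands in for a real cipher such as AES:

```python
import hashlib
from itertools import count

def _keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key || counter (illustration only).
    out = b""
    for i in count():
        if len(out) >= length:
            break
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
    return out[:length]

def encrypt_recording(audio: bytes, key: bytes) -> bytes:
    # XOR the recording bytes with the keystream before writing to storage.
    ks = _keystream(key, len(audio))
    return bytes(a ^ b for a, b in zip(audio, ks))

# XOR is symmetric, so the same function decrypts on extraction.
decrypt_recording = encrypt_recording
```

A real implementation would use an authenticated cipher from a vetted library; the point here is only that the file is stored in encrypted form and decrypted when extracted.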
The recording file is stored in the designated storage area, and storage can be organized by file identifier. For example, when the file identifier is the time tag of the recording's generation, the recording files can be sorted and stored in order of their time tags, so that when a recording is extracted, the corresponding file can be found quickly by its time tag, improving extraction efficiency.
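The time-tag ordering described above can be sketched as follows. This is a hypothetical illustration (the class and method names are invented here, not taken from the patent): tags are kept in sorted order so that a recording can be located by binary search when the user asks for it by time.

```python
import bisect
import time

class RecordingStore:
    """Keeps recording files sorted by time-tag identifier (illustrative sketch)."""

    def __init__(self):
        self._tags = []    # sorted generation time tags (seconds since epoch)
        self._files = []   # recording payloads, kept parallel to _tags

    def save(self, audio, time_tag=None):
        tag = time.time() if time_tag is None else time_tag
        i = bisect.bisect(self._tags, tag)   # insertion point keeps the lists sorted
        self._tags.insert(i, tag)
        self._files.insert(i, audio)
        return tag

    def latest(self):
        # The most recently generated recording is always last.
        return self._files[-1]

    def find_near(self, when):
        # Return the recording whose time tag is closest to `when`.
        i = bisect.bisect(self._tags, when)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(self._tags)]
        best = min(candidates, key=lambda j: abs(self._tags[j] - when))
        return self._files[best]
```

With this layout, both "play the latest recording" and "play the recording from around time T" resolve in logarithmic time over the stored tags.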
The designated storage area for the recording files can be a local storage area of the audio device; a nonvolatile storage area in the audio device can be designated in advance, so that recordings stored there are not lost when the audio device is powered off and restarted.
Step S2200, in response to a recording extraction instruction issued by the user, extract the recording file corresponding to the instruction from the designated storage area and play it to the user.
In the present embodiment, the recording extraction instruction indicates the recording file the user wishes to extract. For example, as shown in fig. 3, the instruction may indicate that the user wants the most recently recorded file extracted and played. Alternatively, the instruction may indicate that the user wants all recording files currently stored on the audio device.
The recording file may have a unique file identifier. The recording extraction instruction issued by the user may include the file identifier, directly indicating the file to be extracted. Alternatively, when the file identifier is the generation time tag, the instruction may include the time for which the user wants a recording, and the file with the matching time tag can be located from that time information.
In this embodiment, when multiple recording files corresponding to the extraction instruction are extracted from the designated storage area, they may be played in order of their generation times, or in an order customized by the user.
In the recording processing method shown in fig. 2, the audio device responds to the user's recording instruction by recording a file and storing it in the designated storage area, and responds to the user's recording extraction instruction by extracting the corresponding file and playing it back. This realizes a recording playback function on the audio device, enriches the device's functions, meets users' increasingly diverse needs, and improves the user experience.
In one example, the recording instruction includes a recording operation instruction and extracted user identity information. The extracted user identity information indicates the user who is allowed to extract the corresponding recording file; specifically, it may include a user identifier that uniquely identifies that user, such as a user name or user ID.
In this example, as shown in fig. 2, recording the voice content uttered by the user in response to the recording instruction uttered by the user to generate a corresponding recording file may include: steps S2110-S2130.
Step S2110, recognizing the recording instruction, and acquiring a recording operation instruction included in the recording instruction and the extracted user identity information.
And S2120, recording the voice content according to the recording operation instruction.
Step S2130, generate a recording file associated with the extracted user identity information, according to the voice content and the extracted user identity information.
In this example, when issuing the recording instruction to the audio device, the user can specify who is allowed to listen to the recording via the extracted user identity information in the instruction, and the recording file generated from the user's voice content is associated with that identity information.
In this example, as shown in fig. 2, in response to a recording extraction instruction issued by a user, extracting a recording file corresponding to the recording extraction instruction from the specified storage area, and playing the recording file to the user may include: steps S2210-S2220.
Step S2210, after receiving the recording extraction instruction, obtaining the target identity information of the user who sent the recording extraction instruction.
In this example, the target identity information is user identity information that the user who issued the recording extraction instruction has. The user identity information may be a user identification for uniquely identifying the user, which may be a user name, a user ID, or the like.
Step S2220, obtain the extracted user identity information associated with the recording file corresponding to the extraction instruction; when the target identity information matches it, extract the recording file from the designated storage area and play it to the user.
In this example, when the extracted user identity information associated with the recording file matches the target identity information of the user who issued the extraction instruction, that user is allowed to extract the file; the recording file corresponding to the instruction is then extracted from the designated storage area and played to the user.
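As a hypothetical illustration of this identity check (the function name and the record layout are invented here, not specified by the patent), the playback gate of steps S2210-S2220 could be sketched as:

```python
def handle_extraction(target_user_id, recording):
    # `recording` is a hypothetical record whose "extract_user_id" field holds
    # the extracted user identity information stored with the file (step S2130).
    if target_user_id == recording.get("extract_user_id"):
        return ("play", recording["audio"])   # identities match: play the file
    return ("deny", None)                     # mismatch: refuse extraction
```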
In another example, the recording processing method shown in fig. 2 may further include: steps S2310-S2330.
Step S2310, when the audio device is triggered to send a recording file saved in the designated storage area to a remote object, obtain a data security risk index.
During use, the audio device may be triggered to send recording files stored in the designated storage area to a remote object (for example, a cloud server that provides background services to the audio device, or a mobile terminal paired with the audio device via Wi-Fi or Bluetooth). Such data transfers may occur without the user's awareness: the user may not know the specific remote object the recording is sent to, or how the recording may be used after it is sent, which can create a risk of privacy disclosure.
In an example, a data transfer interface supported by the audio device (an application program interface or a hardware interface) may be monitored to detect whether the audio device is triggered to send a recording file to a remote object. When such a trigger is detected, the current data security risk index of the audio device is obtained. The data security risk index characterizes the risk of the audio device transferring data in the current interaction environment: the higher the index, the greater the risk.
After the data security risk index is obtained, the subsequent steps control whether the audio device sends the recording file to the remote object according to the index, avoiding the privacy disclosure risk that such a transfer could bring to the user.
In one example, the step of obtaining a data security risk index may include: step S2110-step S2120.
Step S2110, obtain the silence duration between the moment the audio device last completed a voice interaction with the user and the current moment.
Completing a voice interaction may mean that the user uttered a wake word and successfully woke the device, or that the device actively played a voice message and obtained the user's voice response, and so on. In this example, the interaction between the audio device and the user can be monitored in real time, recording the completion time of each voice interaction, so that the most recent completion time is available.
At the moment of interaction completion, the audio device has just finished a voice interaction with the user and has most recently held the user's attention. Accordingly, the silence duration between that moment and the current moment characterizes how the user's attention to the audio device has changed. In this example, the user's attention to the audio device is treated as inversely proportional to the device's data transfer risk: the higher the user's attention, the lower the transfer risk, as illustrated in fig. 4.
And S2120, determining a data security risk index according to the silent duration and a preset duration index relation.
The preset duration index relationship describes the correspondence between silence durations and data security risk indexes. It can be obtained by mining the audio device's historical usage data, or computed from a data transfer risk model built for the device. Given a silence duration, the corresponding data security risk index is determined through this relationship. The silence duration reflects the change in the user's attention, and the index reflects the data transfer risk: the longer the silence, the larger the corresponding index.
For example, suppose an interaction between the audio device and the user comprises three stages: wake-word activation, (receiving and processing) the voice instruction, and playing information (including the result of processing the instruction). Take the interaction time to be the moment the device was last activated by the user's wake word. When the current time equals that interaction time, the silence duration is 0 and the corresponding data security risk index is also 0. As the current time moves forward, the silence duration grows; suppose the index increases by 1 for every 10 seconds of silence, until it reaches 60 once the silence duration reaches 10 minutes, after which it stays unchanged until the device is next activated by a wake word. The corresponding preset duration index relationship may be as shown in fig. 5.
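The worked example just given (index 0 at the interaction moment, +1 per 10 seconds of silence, capped at 60 once silence reaches 10 minutes) can be written directly. This sketch only restates that example; it is not an implementation prescribed by the patent:

```python
def risk_index(silence_seconds):
    # 0 at the interaction moment, +1 for each full 10 s of silence,
    # capped at 60 once silence reaches 10 minutes (600 s), per fig. 5.
    return min(int(silence_seconds // 10), 60)
```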
By determining the data security risk index from the silence duration and the preset duration index relationship, the silence duration (which reflects the change in user attention) yields an index that accurately reflects the data transfer risk, so that in the subsequent steps the audio device's data transfers are controlled precisely on the basis of that index and security risks in data transfer are avoided.
After the current data security risk index is obtained, the following steps are carried out:
step S2320, when the data security risk index belongs to the preset low risk index range, the audio file is sent to the remote object.
The preset low risk index range is a numerical range of the data security risk index that reflects a low data transfer risk. It can be set according to the specific application scenario or requirements, based on how the data security risk index is obtained.
For example, with the index determined from the silence duration via the preset duration index relationship shown in fig. 5, the data security risk index ranges over 0-60, and the preset low risk index range may be set to 0-30 (i.e., an index not less than 0 and not greater than 30), reflecting that the risk of transferring data is low and will not cause data leakage.
When the data security risk index falls within the preset low risk index range, it is determined that the transfer carries no leakage risk, and the recording file is sent to the remote object; this avoids exposing the user to privacy leakage when the audio device sends the recording file to the remote object.
Step S2330, when the data security risk index does not fall within the preset low risk index range, send the recording file to the remote object only after obtaining data transfer authorization from the user.
When the data security risk index is outside the preset low risk index range, the transfer carries a high leakage risk. Sending the recording file only after obtaining the user's data transfer authorization prevents the file from being sent to the remote object in a high-risk environment without the user's knowledge, which would create a privacy leakage risk.
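Steps S2320-S2330 together form a gate, sketched below for illustration only. `send` and `request_authorization` are hypothetical callbacks standing in for the actual transfer and for the user authorization flow; the 0-30 range is the example value from fig. 5.

```python
LOW_RISK = range(0, 31)  # example preset low risk index range: 0..30 inclusive

def try_send_recording(index, send, request_authorization):
    """Send when low-risk; otherwise require the user's transfer authorization."""
    if index in LOW_RISK:
        send()                       # step S2320: low risk, send directly
        return True
    if request_authorization():      # step S2330: e.g. voice prompt or mobile terminal
        send()
        return True
    return False                     # authorization denied: recording is not sent
```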
Data transfer authorization is the permission the user grants allowing the audio device to transfer data. In one example, obtaining data transfer authorization from the user may include: steps S2331-S2333.
Step S2331, a data transfer application is sent to the user.
The data transfer application requests data transfer authorization from the user. It may include an application indication signaling that the audio device is requesting transfer permission, and may include other content according to the specific application requirements.
For example, the data transfer application may also list the permissions the audio device needs in order to transfer the data. If the transferred data is audio data, the required permissions include the microphone permission; if video data, the camera permission; if geographic location data, the positioning permission; and so on.
In this example, the audio device may deliver the data transfer application to the user through voice interaction, or send it to at least one mobile terminal used by the user and connected to the audio device. The audio device can pair with the user's mobile terminal via Wi-Fi, Bluetooth, or another wireless connection; the mobile terminal may be a mobile phone, a tablet computer, or the like. Sending the application via a connected mobile terminal extends beyond the voice range of the audio device and delivers the application with better timeliness and confidentiality. To further improve the security of this data interaction, the connection between the audio device and the mobile terminal can additionally be encrypted or protected for data integrity.
After the data transfer application is sent to the user, the user is triggered to return a corresponding data transfer response.
In step S2332, after the data transfer response returned by the user indicates that data transfer is allowed, the user is triggered to perform authentication.
After sending the data transfer application to the user, the audio device may wait for the user to return a corresponding data transfer response. After the data transfer response indicates that the data transfer is allowed, the user may be triggered to perform identity verification through the audio device, for example, by performing voiceprint authentication on the user's voice or by requesting the user to speak a corresponding voice permission password.
Alternatively, after the audio device sends the data transfer application to a connected mobile terminal, it may wait for the mobile terminal to return a corresponding data transfer response over the connection. After the data transfer response indicates that the data transfer is allowed, the audio device may, through the connection established with the mobile terminal, send a corresponding instruction that triggers the user to perform identity verification on the mobile terminal. The verification mode may be any that the mobile terminal supports, such as fingerprint recognition, face recognition, a digital password, a voice password, or a gesture. Implementing identity verification through the mobile terminal directly reuses the terminal's existing authentication module; compared with building an identity verification module into the audio device, no change to the audio device is needed, so the scheme is simpler, less complex to implement, and easier to popularize.
It should be understood that, when the data transfer response indicates that data transfer is not allowed, it may be determined that the user has not granted the data transfer permission; acquisition of the permission fails, the subsequent steps of this embodiment are not executed, and the data transfer is intercepted. Likewise, after data transfer applications are sent to a plurality of mobile terminals, if data transfer responses allowing the transfer are not collected from all of them, it may be determined that the user has not granted the permission; acquisition fails, and the subsequent steps are not executed, intercepting the data transfer.
In step S2333, after the obtained authentication result indicates that the authentication is passed, it is determined to obtain the data transfer authorization.
After the identity verification result indicates that the verification has passed, the user who allowed the data transfer is known to be a legal and valid user, so the data transfer authorization can be determined accordingly. This guarantees the authenticity and validity of the obtained authorization and improves the security of data transfer performed on its basis.
It should be understood that, when the verification result indicates that the verification failed, it may be determined that the user granting the data transfer permission does not have a legal identity; acquisition of the permission fails, and the subsequent step of this embodiment is not performed, intercepting the data transfer.
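The authorization flow of steps S2331-S2333 (send the application, check the response, then verify identity) can be sketched as follows; all function names and the application payload are illustrative assumptions, not part of the embodiment:

```python
# Hypothetical sketch of steps S2331-S2333. The callables stand in for
# the real interaction channels (voice or mobile terminal).

def request_data_transfer_authorization(send_application, await_response, authenticate):
    """Return True only when the user allows the transfer AND passes
    identity verification; otherwise the transfer is intercepted."""
    # S2331: apply for the data transfer permission (payload is illustrative)
    send_application({"permission": "data_transfer", "content": "microphone"})
    # S2332: a denying response means the permission was not granted
    if not await_response():
        return False
    # S2333: voiceprint / fingerprint / password verification, etc.
    return authenticate()

# Example: the user allows the transfer and verification succeeds.
granted = request_data_transfer_authorization(
    send_application=lambda app: None,
    await_response=lambda: True,
    authenticate=lambda: True,
)
```

If either the response denies the transfer or verification fails, the function returns False and the subsequent sending step would be skipped, matching the interception behavior described above.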
Steps S2310-S2330 of this example have been described above: by obtaining the current data security risk index and controlling, according to that index, whether the audio device sends the recording file to the remote object, the audio device is prevented from sending recording files that would expose the user to a privacy disclosure risk.
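The risk-index gating summarized above can be sketched as below; the thresholds, index values, and low-risk range are invented for illustration only, since the embodiment leaves the preset duration-index relationship open:

```python
# Illustrative sketch: derive a data security risk index from the silence
# duration since the last completed voice interaction, and gate the send.

def risk_index(silence_seconds):
    """Map the silence duration to a risk index via a preset
    duration-index relationship (longer silence -> higher risk)."""
    for threshold, index in [(60, 1), (600, 2), (3600, 3)]:
        if silence_seconds < threshold:
            return index
    return 4

def may_send_directly(silence_seconds, low_risk_range=(1, 2)):
    """True when the index falls in the preset low risk range, so the
    recording file may be sent without asking for permission first."""
    return low_risk_range[0] <= risk_index(silence_seconds) <= low_risk_range[1]
```

When `may_send_directly` returns False, the flow of steps S2331-S2333 (apply, respond, verify) would run before any sending.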
In another example, the method for extracting a sound recording provided in this embodiment may further include:
step S2410, responding to the recording deleting instruction sent by the user, and deleting the recording file corresponding to the recording deleting instruction in the appointed storage area.
The recording deletion instruction indicates the recording file(s) the user wishes to delete and may include a file identifier of the recording file to delete, a file range to delete, and the like. It may also instruct deletion of all recording files saved in the designated storage area, i.e., emptying all recording files.
Through the recording deletion instruction, the user can delete or empty recording files stored in the audio device, improving the storage efficiency of the audio device.
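A minimal sketch of step S2410, assuming a dict-backed stand-in for the designated storage area; the instruction fields `file_ids` and `empty_all` are hypothetical names:

```python
# Illustrative handling of a recording deletion instruction: delete the
# named files, or empty the whole designated storage area.

def delete_recordings(store, instruction):
    """Apply a deletion instruction to the designated storage area."""
    if instruction.get("empty_all"):
        store.clear()          # empty all recording files
        return
    for file_id in instruction.get("file_ids", []):
        store.pop(file_id, None)   # ignore ids that are already gone

store = {"rec1": b"...", "rec2": b"...", "rec3": b"..."}
delete_recordings(store, {"file_ids": ["rec2"]})   # targeted deletion
delete_recordings(store, {"empty_all": True})      # empty everything
```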
In another example, the method for extracting a recording provided in this embodiment may further include:
step S2420, receiving the recording file sent by a mobile terminal with which a connection has been established, and storing it in the designated storage area, so that it can be extracted and played when the corresponding recording extraction instruction is received.
In this example, the mobile terminal, such as a mobile phone, tablet computer, or palmtop computer, may be paired with the audio device through Wi-Fi, Bluetooth, or the like to establish a connection.
Through a mobile terminal connected to the audio device, the user can record voice, generate a recording file, and send it to the audio device, which saves it in the designated storage area; when the audio device receives a recording extraction instruction from the user, the file is extracted and played. This implements a remote record-and-playback function through the audio device, enriches the device's application functions, and improves the user experience.
The step of extracting and playing the recording when the corresponding recording extraction instruction is received may be implemented as step S2200 shown in fig. 2, which is not described here again.
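Step S2420 can be sketched as follows, with an in-memory class standing in for the audio device and its designated storage area; all names are illustrative:

```python
# Illustrative audio device: store a recording file received from a
# paired mobile terminal, then extract it on a matching instruction.

class AudioDevice:
    def __init__(self):
        self.storage = {}        # the "designated storage area"

    def receive_from_terminal(self, file_id, audio_bytes):
        """Step S2420: save the received recording file."""
        self.storage[file_id] = audio_bytes

    def extract_and_play(self, file_id):
        """Return the audio to play, or None if no such recording exists."""
        return self.storage.get(file_id)

device = AudioDevice()
device.receive_from_terminal("memo-1", b"\x00\x01")
clip = device.extract_and_play("memo-1")
```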
In another example, the method for extracting a sound recording provided in this embodiment may further include:
step S2430, when an interactive operation performed by the user is detected, sending a recording extraction prompt to the user according to the recording files stored in the designated storage area.
In this example, whether the user performs an interactive operation may be detected by listening on an interactive interface (software or hardware) of the audio device. When such an operation is detected, a recording extraction prompt may be sent according to the recording files stored in the designated storage area: for example, prompting the user whether to extract a currently stored recording file for playback, or, when the user identity information associated with a stored recording file matches that of the interacting user, prompting that another user has designated a recording for this user and asking whether to extract it for playback.
In this embodiment, when the user is detected performing an interactive operation, a recording extraction prompt is sent according to the recording files stored in the designated storage area. Recording files newly acquired by the audio device can thus be dynamically prompted to the user in real time, and the user may choose to extract and play them, which enriches the record-and-playback function of the audio device and further improves the user experience.
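One hedged way to build the extraction prompt of step S2430 from stored files and their associated user identity information; the store layout and the `allowed_user` field are invented for illustration:

```python
# Illustrative prompt builder: offer only the recordings whose associated
# user identity matches the interacting user.

def extraction_prompts(store, interacting_user):
    """List prompts for recordings the interacting user may extract."""
    prompts = []
    for file_id, meta in store.items():
        if meta["allowed_user"] == interacting_user:
            prompts.append(f"Extract and play recording '{file_id}'?")
    return prompts

store = {
    "memo-1": {"allowed_user": "alice"},
    "memo-2": {"allowed_user": "bob"},
}
```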
< recording processing device >
In this embodiment, there is further provided a recording processing apparatus 3000, as shown in fig. 6, including a voice recording unit 3100 and a recording extraction unit 3200, which are used to implement the recording processing method provided in this embodiment and are not described in detail again.
The voice recording unit 3100 is configured to record voice content sent by a user in response to a recording instruction sent by the user, generate a corresponding recording file, and store the recording file in a designated storage area;
the recording extracting unit 3200 is configured to, in response to a recording extracting instruction issued by a user, extract, from the specified storage area, a recording file corresponding to the recording extracting instruction, and play the recording file to the user.
Optionally, the recording processing apparatus 3000 is further configured to:
and in response to a recording deletion instruction sent by a user, deleting the recording file corresponding to the recording deletion instruction in the specified storage area.
Optionally, the recording processing apparatus 3000 is further configured to:
and receiving the recording file sent by the mobile terminal establishing the connection, and storing the recording file in the appointed storage area for extracting and playing when receiving the corresponding recording extracting instruction.
Optionally, the recording instruction includes a recording operation instruction and extracted user identity information, where the extracted user identity information is used to indicate a user who is allowed to extract a corresponding recording file;
the voice logging unit 3100 is further configured to:
identifying the recording instruction, and acquiring the recording operation instruction and the extracted user identity information included in the recording instruction;
recording the voice content according to the recording operation instruction;
generating the sound recording file associated with the extracted user identity information according to the voice content and the extracted user identity information;
and an audio record extraction unit 3200, further configured to:
after receiving the recording extraction instruction, acquiring target identity information of a user sending the recording extraction instruction;
and acquiring extracted user identity information associated with the recording file corresponding to the recording extraction instruction, and extracting the recording file corresponding to the recording extraction instruction from the specified storage area to play the recording file to a user when the target identity information is consistent with the extracted user identity information.
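The identity check performed by the extraction unit can be sketched as follows; the `extracted_user` field name and store layout are assumptions for illustration:

```python
# Illustrative identity-matched extraction: the recording is extracted
# only when the target identity of the requesting user is consistent
# with the extracted-user identity associated with the recording file.

def extract_recording(store, file_id, target_identity):
    """Return the recording for playback when identities match, else None."""
    entry = store.get(file_id)
    if entry is None:
        return None              # no such recording in the storage area
    if entry["extracted_user"] != target_identity:
        return None              # identity mismatch: refuse extraction
    return entry["audio"]

store = {"memo-1": {"extracted_user": "alice", "audio": b"hi"}}
```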
Optionally, the recording processing apparatus 3000 is further configured to:
when the recording file stored in the designated storage area is triggered to be sent to a remote object, acquiring a data security risk index;
when the data security risk index belongs to a preset low risk index range, sending the sound recording file to the remote object;
and when the data security risk index does not belong to a preset low risk index range, after the data transmission permission of the user is acquired, the recording file is sent to the remote object.
Optionally, the obtaining of the data security risk index includes:
acquiring the interaction completion time when the audio equipment completes voice interaction with a user last time and the silence duration between the interaction completion time and the current time;
determining the data security risk index according to the silent duration and a preset duration index relationship;
wherein the duration index relationship is used for describing the corresponding relationship between the different silent durations and the data security risk index;
and/or,
the method further includes obtaining data transfer authorization to the user, including:
when the data transmitted by the audio equipment belong to preset sensitive data, sending a data transmission application to a user;
after the data transmission response returned by the user indicates that the data transmission is allowed, triggering the user to carry out identity authentication;
and determining to acquire the data transfer authorization after the acquired authentication result indicates that the authentication is passed.
Optionally, the recording processing apparatus 3000 is further configured to:
when interactive operation carried out by a user is detected, sending a recording extracting prompt to the user according to the recording file stored in the designated storage area;
and/or,
and encrypting the generated sound recording file according to a preset encryption mode, and then storing the sound recording file in the appointed storage area.
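As a sketch of "encrypt according to a preset encryption mode, then store", the toy cipher below XORs the audio with a SHA-256-derived keystream. It is purely illustrative and NOT a secure scheme; the embodiment does not fix the encryption mode:

```python
# Illustrative encrypt-then-store: the recording file never reaches the
# designated storage area in plaintext. XOR keystream = toy cipher only.

import hashlib
from itertools import cycle

def encrypt(audio_bytes, key):
    """XOR the audio with a keystream cycled from SHA-256(key).
    XOR is its own inverse, so the same call also decrypts."""
    keystream = cycle(hashlib.sha256(key).digest())
    return bytes(b ^ k for b, k in zip(audio_bytes, keystream))

def store_encrypted(store, file_id, audio_bytes, key):
    """Encrypt the generated recording file, then store it."""
    store[file_id] = encrypt(audio_bytes, key)

store = {}
store_encrypted(store, "memo-1", b"secret voice", b"device-key")
```

A real device would use an authenticated cipher from a vetted library rather than this stand-in.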
Those skilled in the art will appreciate that the recording processing apparatus 3000 can be implemented in various ways. For example, it can be realized by a processor configured by instructions: the instructions are stored in a ROM and, when the device starts, are read from the ROM into a programmable device to construct the apparatus. Alternatively, the apparatus 3000 may be hardwired into a dedicated device (e.g., an ASIC). The apparatus may be divided into mutually independent units, or its units may be merged together. It may be implemented by one of the above implementations or by a combination of two or more of them.
In this embodiment, the recording processing apparatus 3000 is provided on the audio device side. It may be a software module provided in the audio device, a patch or plug-in loaded in the audio device, or an application program in a device that establishes a connection with the audio device. In one example, the apparatus 3000 may be packaged in a software development kit (SDK) to be installed and run by the audio device.
< recording processing apparatus >
In the present embodiment, there is also provided an audio recording processing apparatus 4000, as shown in fig. 7, including:
a memory 4100 for storing executable instructions;
a processor 4200, configured to operate the recording processing device 4000 under the control of the executable instructions to perform the recording processing method provided in this embodiment.
In this embodiment, the recording processing device 4000 may be provided on the audio device side, may be provided in the audio device, or may be an independent device that establishes a wired or wireless connection with the audio device.
< Audio apparatus >
In the present embodiment, there is also provided an audio device 5000, including:
such as the recording processing apparatus 3000 shown in fig. 6 or the recording processing device 4000 shown in fig. 7.
In the present embodiment, the hardware configuration of the audio device 5000 may be as shown in fig. 1, for example, by storing the recording processing apparatus 3000 in the memory 1200, loading the recording processing apparatus 3000 in the processor 1100, and implementing the recording processing method in the present embodiment, or by storing executable instructions in the memory 1200 and implementing the recording processing method in the present embodiment by the processor 1100 according to the control of the executable instructions. The audio device 5000 may include a smart speaker, a smart headset, and the like.
< data processing System >
In this embodiment, a data processing system 7000 is further provided, as shown in fig. 8, including:
the audio device 5000 provided in the present embodiment;
at least one mobile terminal 6000.
In this embodiment, the mobile terminal 6000 may be a mobile phone, a tablet computer, a palm computer, a notebook computer, or the like.
The mobile terminal 6000 includes a memory 6100 and a processor 6200. The processor 6200 may be a central processing unit (CPU), a microcontroller (MCU), or the like. The memory 6100 includes, for example, a ROM (read-only memory), a RAM (random access memory), and a nonvolatile memory such as a hard disk. The memory 6100 stores executable instructions, and the processor 6200 is configured to operate the mobile terminal 6000 under the control of the executable instructions to execute the following data processing method, which includes steps S6100 to S6200.
In step S6100, a connection is established with the audio device 5000.
In this embodiment, the mobile terminal 6000 may pair with the audio device 5000 to establish a connection through WIFI, bluetooth or other connection methods. The connection between the mobile terminal 6000 and the audio device 5000 may be encrypted or data integrity protected to ensure the security of the data transfer over the connection.
Step S6200, the voice content sent by the user is recorded, and a corresponding recording file is generated and sent to the audio device 5000.
In this embodiment, the mobile terminal 6000 records the voice content uttered by the user, generates a corresponding recording file, and sends it to the audio device 5000. This triggers the audio device 5000 to store the file in the designated storage area and to extract and play it when a corresponding recording extraction instruction is received, realizing a remote record-and-playback function and enriching the user experience of the audio device.
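Steps S6100-S6200 on the mobile terminal side can be sketched as follows, with a fake in-memory connection standing in for the Wi-Fi/Bluetooth link; all names are illustrative:

```python
# Illustrative mobile terminal: establish a connection (S6100), then
# record voice content into a recording file and send it (S6200).

class MobileTerminal:
    def __init__(self, connection):
        self.connection = connection   # paired link to the audio device (S6100)

    def record_and_send(self, file_id, voice_content):
        """Generate a recording file and send it over the connection (S6200)."""
        recording_file = {"id": file_id, "audio": voice_content}
        self.connection.send(recording_file)
        return recording_file

class FakeConnection:
    """In-memory stand-in for the encrypted wireless link."""
    def __init__(self):
        self.sent = []
    def send(self, payload):
        self.sent.append(payload)

conn = FakeConnection()
terminal = MobileTerminal(conn)
terminal.record_and_send("memo-1", b"hello")
```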
The recording processing method, apparatus, device, system, and audio device provided in this embodiment have been described above with reference to the accompanying drawings and examples. The audio device responds to a recording instruction issued by the user by recording the voice content, generating a recording file, and storing it in a designated storage area, and responds to a recording extraction instruction issued by the user by extracting the corresponding recording file from the designated storage area and playing it to the user. A record-and-playback function based on the audio device is thereby implemented, enriching the functions of the audio device (e.g., a smart speaker, smart headphones, etc.), meeting the increasingly diverse requirements of users, and improving the user experience.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable sound recording processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable sound recording processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable sound processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer-readable program instructions may also be loaded onto a computer, other programmable sound recording processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable sound recording processing apparatus, or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable sound recording processing apparatus, or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.
Claims (9)
1. A recording processing method, characterized by being implemented through an audio device, wherein the audio device is a speaker or an earphone that provides corresponding application services for user interaction; the method comprises the following steps:
recording voice content sent by a user in response to a recording instruction sent by the user, generating a corresponding recording file, and storing the recording file in a specified storage area;
responding to a recording extracting instruction sent by a user, extracting a recording file corresponding to the recording extracting instruction from the specified storage area, and playing the recording file to the user;
and,
when the recording file stored in the designated storage area is triggered to be sent to a remote object, acquiring a data security risk index;
when the data security risk index belongs to a preset low risk index range, sending the sound recording file to the remote object;
when the data security risk index does not belong to a preset low risk index range, after the data transmission permission of a user is obtained, the recording file is sent to the remote object;
the obtaining of the data security risk index comprises the following steps:
acquiring the interaction completion time when the audio equipment completes voice interaction with a user last time and the silent time length between the current time and the interaction completion time;
determining the data security risk index according to the silent duration and a preset duration index relationship;
wherein the duration index relationship is used for describing the corresponding relationship between the different silence durations and the data security risk index.
2. The method of claim 1, further comprising:
in response to a recording deletion instruction sent by a user, deleting the recording file corresponding to the recording deletion instruction in the specified storage area;
and/or,
and receiving the sound recording file sent by the mobile terminal establishing the connection, and storing the sound recording file in the appointed storage area for extracting and playing when receiving the corresponding sound recording extracting instruction.
3. The method of claim 1,
the recording instruction comprises a recording operation instruction and extracted user identity information, and the extracted user identity information is used for indicating a user allowed to extract a corresponding recording file;
the responding to the recording instruction sent by the user, recording the voice content sent by the user, and generating the corresponding recording file comprise:
identifying the recording instruction, and acquiring the recording operation instruction and the extracted user identity information included in the recording instruction;
recording the voice content according to the recording operation instruction;
generating the sound recording file associated with the extracted user identity information according to the voice content and the extracted user identity information;
and,
the responding to a recording extracting instruction sent by a user, extracting the recording file corresponding to the recording extracting instruction from the specified storage area, and playing the recording file to the user comprises the following steps:
after receiving the recording extracting instruction, acquiring target identity information of a user sending the recording extracting instruction;
and acquiring extracted user identity information associated with the recording file corresponding to the recording extraction instruction, and extracting the recording file corresponding to the recording extraction instruction from the specified storage area to play the recording file to a user when the target identity information is consistent with the extracted user identity information.
4. The method of claim 1,
the method further includes obtaining data transfer authorization to the user, including:
when the data transmitted by the audio equipment belong to preset sensitive data, sending a data transmission application to a user;
after the data transmission response returned by the user indicates that the data transmission is allowed, triggering the user to carry out identity authentication;
and determining to acquire the data transfer authorization after the acquired authentication result indicates that the authentication is passed.
5. The method of claim 1, further comprising:
when the interactive operation implemented by the user is detected, sending a record extracting prompt to the user according to the record file stored in the designated storage area;
and/or,
and encrypting the sound recording file according to a preset encryption mode, and then storing the sound recording file in the appointed storage area.
6. A recording processing apparatus, characterized by being arranged on the audio device side, wherein the audio device is a speaker or an earphone that provides corresponding application services for user interaction; the apparatus comprises:
the voice recording unit is used for recording voice contents sent by a user in response to a recording instruction sent by the user, generating a corresponding recording file and storing the recording file in a specified storage area;
the recording extracting unit is used for responding to a recording extracting instruction sent by a user, extracting a recording file corresponding to the recording extracting instruction from the specified storage area and playing the recording file to the user;
wherein, when sending of the recording file stored in the designated storage area to a remote object is triggered, a data security risk index is acquired;
when the data security risk index falls within a preset low-risk index range, the recording file is sent to the remote object;
when the data security risk index does not fall within the preset low-risk index range, the recording file is sent to the remote object only after the user's data transmission permission is obtained;
wherein acquiring the data security risk index comprises:
acquiring the interaction completion time at which the audio equipment last completed a voice interaction with the user, and the silence duration between the interaction completion time and the current time;
determining the data security risk index according to the silence duration and a preset duration-index relationship;
wherein the duration-index relationship describes the correspondence between different silence durations and data security risk indices.
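The duration-index relationship above can be sketched as a threshold table: longer silence since the last voice interaction maps to a higher risk index. The thresholds, index values, and the low-risk cutoff below are assumptions for illustration, not values taken from the patent.

```python
import bisect

SILENCE_THRESHOLDS = [60, 600, 3600]   # seconds of silence (illustrative)
RISK_INDICES = [1, 3, 7, 10]           # longer silence -> higher risk index

def risk_index(silence_seconds):
    """Map silence duration onto the preset duration-index relationship."""
    return RISK_INDICES[bisect.bisect_right(SILENCE_THRESHOLDS, silence_seconds)]

def may_send_directly(index, low_risk_max=3):
    # Within the preset low-risk range the recording file may be sent
    # directly; otherwise the user's permission must be obtained first.
    return index <= low_risk_max
```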
7. A recording processing device, comprising:
a memory for storing executable instructions;
a processor for, under control of the executable instructions, operating the recording processing device to execute the recording processing method of any one of claims 1 to 5.
8. An audio device, comprising:
the recording processing apparatus as claimed in claim 6 or the recording processing device as claimed in claim 7.
9. A recording processing system, comprising:
the audio device of claim 8;
at least one mobile terminal;
the mobile terminal comprises a memory and a processor, the memory storing executable instructions, and the processor being configured, under control of the executable instructions, to operate the mobile terminal to execute a recording processing method comprising:
establishing a connection with the audio device;
and recording voice content sent by a user, generating a corresponding recording file, and sending the recording file to the audio equipment.
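The mobile-terminal method of claim 9 is a three-step pipeline: connect, record, transfer. A sketch under assumed names, where `connect`, `record_voice`, and `send` are hypothetical callbacks for the device link, the microphone, and the transfer step:

```python
def mobile_record_and_send(connect, record_voice, send):
    """Connect to the audio device, record the user's voice, and hand the
    resulting recording file to the device (sketch of claim 9)."""
    channel = connect()          # establish a connection with the audio device
    recording = record_voice()   # capture the user's voice as a recording file
    send(channel, recording)     # transfer the recording file to the device
    return recording
```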
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910550414.4A CN110347862B (en) | 2019-06-24 | 2019-06-24 | Recording processing method, device, equipment, system and audio equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910550414.4A CN110347862B (en) | 2019-06-24 | 2019-06-24 | Recording processing method, device, equipment, system and audio equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110347862A CN110347862A (en) | 2019-10-18 |
CN110347862B true CN110347862B (en) | 2022-09-06 |
Family
ID=68182861
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910550414.4A Active CN110347862B (en) | 2019-06-24 | 2019-06-24 | Recording processing method, device, equipment, system and audio equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110347862B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111586050A (en) * | 2020-05-08 | 2020-08-25 | 上海明略人工智能(集团)有限公司 | Audio file transmission method and device, storage medium and electronic equipment |
CN112187721B (en) * | 2020-09-01 | 2022-02-11 | 珠海格力电器股份有限公司 | Voice processing method and device, intelligent voice message leaving equipment and storage medium |
CN112423194B (en) * | 2020-10-19 | 2022-03-29 | 北京搜狗智能科技有限公司 | Distribution method and device and recording equipment |
CN112287691B (en) * | 2020-11-10 | 2024-02-13 | 深圳市天彦通信股份有限公司 | Conference recording method and related equipment |
CN113779546B (en) * | 2021-06-01 | 2024-03-26 | 武汉深之度科技有限公司 | Recording authority management method, computing device and storage medium |
CN113688422A (en) * | 2021-08-26 | 2021-11-23 | 上海明略人工智能(集团)有限公司 | Method and device for checking recording data, electronic equipment and storage medium |
CN114627869A (en) * | 2022-03-17 | 2022-06-14 | 广东美的厨房电器制造有限公司 | Audio output method, output device, cooking apparatus, server, and storage medium |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2882599B1 (en) * | 2005-02-25 | 2007-05-04 | Somfy Soc Par Actions Simplifi | COMMUNICATION SYSTEM WITH CROSS ACCOUNTING AND ASSOCIATED COMMUNICATION FRAME |
CN101754248A (en) * | 2009-12-28 | 2010-06-23 | 中兴通讯股份有限公司 | Method and device for controlling data sending of access terminal |
KR20130133629A (en) * | 2012-05-29 | 2013-12-09 | 삼성전자주식회사 | Method and apparatus for executing voice command in electronic device |
CN105430721A (en) * | 2015-10-30 | 2016-03-23 | 东莞酷派软件技术有限公司 | Data interaction control method, data interaction control device and wearable equipment |
CN105718809A (en) * | 2016-01-15 | 2016-06-29 | 珠海格力电器股份有限公司 | Mobile communication terminal and data security monitoring method and device thereof |
CN107066229A (en) * | 2017-01-24 | 2017-08-18 | 广东欧珀移动通信有限公司 | The method and terminal of recording |
CN107403623A (en) * | 2017-07-31 | 2017-11-28 | 努比亚技术有限公司 | Store method, terminal, Cloud Server and the readable storage medium storing program for executing of recording substance |
CN108920927A (en) * | 2018-07-30 | 2018-11-30 | 比奥香港有限公司 | A kind of recording based on biological identification, speech playing method and equipment |
CN109525459B (en) * | 2018-11-23 | 2020-05-22 | 上海控创信息技术股份有限公司 | Reliability test method for train control system after loading information safety monitoring engine |
CN109510891B (en) * | 2018-12-29 | 2020-12-01 | 深圳市趣创科技有限公司 | Voice-controlled recording device and method |
Worldwide applications (1)
2019-06-24 CN CN201910550414.4A patent/CN110347862B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN110347862A (en) | 2019-10-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110347862B (en) | Recording processing method, device, equipment, system and audio equipment | |
CN108595970B (en) | Configuration method and device of processing assembly, terminal and storage medium | |
JP6592583B1 (en) | Method and apparatus for information exchange | |
JP6738809B2 (en) | Method and system for purchasing, sharing and transferring ownership of digital music from a near field communication (NFC) chip using an authenticated data file | |
KR20180072148A (en) | Method for managing contents and electronic device for the same | |
KR102391784B1 (en) | A primary device, an accessory device, and methods for processing operations on the primary device and the accessory device | |
CN105407098A (en) | Identity verification method and device | |
CN104484593B (en) | terminal verification method and device | |
CN106778295B (en) | File storage method, file display method, file storage device, file display device and terminal | |
US20210326429A1 (en) | Access control method and device, electronic device and storage medium | |
CN110334529B (en) | Data processing method, device, equipment, system and audio equipment | |
CN110400562B (en) | Interactive processing method, device, equipment and audio equipment | |
CN103281375A (en) | Contact management method, device and system for third-party application | |
CN103914520A (en) | Data query method, terminal equipment and server | |
KR20150121892A (en) | Payment method, apparatus and sytem for recognizing information of line body service in the system | |
KR20140105343A (en) | Device and method for securing datausing a plurality of mode in the device | |
CN110278273B (en) | Multimedia file uploading method, device, terminal, server and storage medium | |
CN106782498B (en) | Voice information playing method and device and terminal | |
CN106060050B (en) | Auth method and terminal device | |
CN108881766B (en) | Video processing method, device, terminal and storage medium | |
CN105303120A (en) | Short message reading method and apparatus | |
KR20140105681A (en) | Apparatus and method for encryption data in secure mode | |
CN111050209A (en) | Multimedia resource playing method and device | |
CN110347248B (en) | Interactive processing method, device, equipment and audio equipment | |
CN109104759B (en) | Interaction method of electronic equipment, electronic equipment and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||