WO2007105436A1 - Wearable Terminal - Google Patents
Wearable Terminal
- Publication number
- WO2007105436A1 (PCT/JP2007/053187; JP2007053187W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- terminal
- wearable
- wearable terminal
- terminals
- communication
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
- H04N7/185—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/414—Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
- H04N21/41407—Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42203—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/4223—Cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/462—Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
- H04N21/4622—Retrieving content or additional data from different sources, e.g. from a broadcast channel and the Internet
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4788—Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/023—Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/52—Network services specially adapted for the location of the user terminal
Definitions
- the present invention relates to a wearable terminal.
- A wearable terminal is a terminal worn on the body in the manner of clothing, a bag, a pen, or a wristwatch. In recent years such terminals have become lighter, and terminals equipped with a microphone and a camera are no longer uncommon.
- As an example of a wearable device, a wearable camera can shoot automatically and can use voice as a trigger to release the shutter or start movie recording (see Patent Document 1).
- Patent Document 1 Japanese Patent Laid-Open No. 2004-356970
- The profile creation function is a function that, when a user wearing the wearable terminal participates in an event such as a party or a group trip, creates a memoir of that event, that is, documents and materials serving as a so-called "profile" of the event, using the data automatically collected by the wearable terminal as its components. Since the wearable terminal is equipped with a camera and a microphone, the video data and audio data collected by such a wearable terminal can be used as data material for creating a profile.
- An object of the present invention is to provide a wearable terminal that can broaden the range of expression when creating a profile, by using the appearance of the people participating in the event as data material.
- The present invention is a wearable terminal worn by a person participating in an event in which a plurality of people take part, comprising: request means for transmitting a request signal to a plurality of wearable terminals existing within a predetermined range and receiving responses to the request signal; and communication means for determining, based on the received responses, one or more wearable terminals as communication partners and performing data communication with the determined wearable terminals.
- The data collected by the wearable terminals determined as communication partners becomes a component of the profile when the profile of the event is created.
- With the above configuration, when the other participants wear wearable terminals capable of transmitting the response described above, the present invention can determine communication partners from among the wearable terminals worn by those participants, and the data received from the communication partners can be used as components of the profile. Therefore, even data material that cannot be acquired by the own wearable terminal can, if it can be acquired by a wearable terminal worn by another participant of the event, be obtained from that terminal and used to create the profile, so the range of expression in creating a profile can be broadened.
- Here, the event is a concept that includes a meeting, a lecture, a meal, a casual standing conversation, a group trip, a party, and the like.
- The communication means may determine, for each of the plurality of wearable terminals and based on the received response, whether that wearable terminal belongs to the same conversation group as the own terminal.
- The wearable terminals determined to belong to the same conversation group may then be determined as the communication partners.
- The response may include voice information acquired by each wearable terminal that received the request signal, and the communication means may include overlap rate calculation means for calculating, for the received voice information of each wearable terminal, the overlap rate between the utterance intervals in that voice information and the utterance intervals in the voice information of the own terminal.
- The wearable terminals determined to belong to the same conversation group as the own terminal may then be those wearable terminals, among the plurality of wearable terminals, whose overlap rate with the utterance intervals in the own terminal's voice information is within a predetermined threshold.
- The overlap rate calculation means in this claim corresponds to the same conversation group detection unit in the embodiments.
- The utterance intervals in the voice information may include aizuchi (back-channel responses) in the conversation, which are portions where a vowel continues for a predetermined period, and the overlap rate calculation means may calculate the overlap rate with the aizuchi intervals excluded.
- The response may include position information of each wearable terminal that received the request signal, and the communication means may include distance calculation means for calculating the distance between each wearable terminal and the own terminal from the received position information of each wearable terminal and the position information of the own terminal.
- The wearable terminals determined to belong to the same conversation group as the own terminal may then be those wearable terminals, among the plurality of wearable terminals, whose distance from the own terminal falls within a predetermined threshold.
- the distance calculation means in this claim corresponds to the same conversation group detection unit in the embodiment.
- The response may include direction information of each wearable terminal that received the request signal, and the communication means may include azimuth change amount calculation means for calculating the amount of change in direction information per predetermined time from the received direction information of each wearable terminal.
- The wearable terminals determined to belong to the same conversation group as the own terminal may then be those wearable terminals, among the plurality of wearable terminals, whose azimuth change amount differs from that of the own terminal by an amount within a predetermined range.
- The azimuth change amount calculation means in this claim corresponds to the same conversation group detection unit in the embodiments.
- The profile of the event may be created by the own terminal, and the communication means may transmit the created profile to the one or more wearable terminals.
- This eliminates the need for the user of the wearable terminal to perform editing work; for example, a profile of the user of the wearable terminal, or a profile that captures only the current speaker, can be created easily.
- the created profile can be shared with one or more wearable terminals that are communication partners.
- The collected data may be images.
- The profile may be created based on pairs, among the determined wearable terminals, of the wearable terminal associated with each speaker and a wearable terminal that captures an image of that speaker.
- the profile may be created using a voice collected by a wearable terminal associated with a speaker among the determined wearable terminals.
- the wearable terminal may include recording means for recording data relating to the determined wearable terminal.
- A server device may include clustering means for acquiring position information indicating the positions of the plurality of wearable terminals and clustering the plurality of wearable terminals into a plurality of clusters based on the acquired position information.
- The communication means may then determine one or more wearable terminals as communication partners for each cluster.
- FIG. 1 is a diagram showing a state in which a user wears a wearable terminal.
- FIG. 2 is a diagram showing a situation in which there are multiple users wearing wearable terminals in the vicinity.
- FIG. 3 is a diagram showing the state of mutual communication when there are a plurality of wearable terminals.
- FIG. 4 is a diagram showing the appearance of a wearable terminal.
- FIG. 5 is a diagram showing a shooting direction of the wearable terminal.
- FIG. 6 is a diagram showing a position detection system using an infrared wide-angle camera and an infrared tag.
- FIG. 7 is a diagram showing a communication sequence.
- FIG. 8 is a diagram showing the data received from each wearable terminal.
- FIG. 9 is a diagram showing the data received from the location server 400.
- FIG. 10 is a diagram showing a hardware configuration of wearable terminal 100.
- FIG. 11 shows functional blocks of wearable terminal 100.
- FIG. 12 shows a terminal ID list.
- FIG. 13 is a diagram showing the flow of the same conversation group detection process.
- FIG. 14 is a flowchart showing the same conversation group detection process.
- FIG. 15 is a diagram showing a flow of creation processing.
- FIG. 16 is a diagram showing a flow of creation processing.
- FIG. 19 is a diagram schematically showing the created profile.
- FIG. 20 is a diagram showing an internal configuration of profile information.
- FIG. 21 is a diagram showing the communication situation in the server centralized management type.
- FIG. 22 is a diagram showing functional blocks of the creation server 500.
- FIG. 23 is a diagram showing the internal configuration of the same conversation group detection unit 520.
- FIG. 24 shows functional blocks of wearable terminal 600.
- FIG. 25 is a diagram showing a processing flow of the creation server.
- FIG. 26 is a diagram showing a flow of clustering processing.
- FIG. 27 is a diagram showing a flow of the same conversation group detection processing 2.
- FIG. 28 (a) is a bird's-eye-view map of the positions of 21 persons at a certain time.
- (b) is a diagram showing the result after clustering by the clustering unit 521.
- (c) is a diagram showing the conversation activity and direction of each person.
- (d) is a diagram showing the result after the conversation groups have been detected.
- (e) is a diagram showing the result of assigning all participants to one of the conversation groups.
- FIG. 29 is a diagram showing the flow of creation process 2.
- FIG. 30 is a diagram showing terminal ID list 2.
- The wearable terminals 100, 100a, 100b, 100c, 100d, 100e, 100f, 100g, and 100h shown in FIG. 2 are connected to each other by a communication network as shown in FIG. 3.
- the communication network is a wireless LAN.
- Figure 3 is a diagram showing the state of communication between multiple wearable terminals.
- FIG. 3 illustrates a situation in which the persons wearing the wearable terminals 100 to 100h are all within communication range, conversation group 1 is formed by the wearable terminals 100 to 100e, and conversation group 2 is formed by the wearable terminals 100f and 100g.
- The person wearing the wearable terminal 100h is not included in either conversation group but shares the same place.
- each wearable terminal is configured to include a camera and a microphone, and records images and sounds acquired by them on a recording medium.
- With a wearable terminal of this configuration, the voice of its own wearer can be acquired well, but in general the wearer himself or herself cannot be photographed; as shown in Fig. 5, image information in the direction the wearer is facing is acquired.
- A feature of the wearable terminal 100 is that, from among the wearable terminals worn by the other people (wearable terminals 100a to 100h), it determines one or more wearable terminals whose users belong to the same conversation group as its own user (wearable terminals 100a to 100e in the example of Fig. 3) and performs data communication with the determined wearable terminals without prior registration or authentication.
- When creating a profile, the images and sounds captured by these wearable terminals become the components of the profile.
- A position detection method using infrared wide-angle cameras and infrared tags is used. Specifically, in this method, an infrared tag is attached to each user wearing a wearable terminal, the infrared tag is captured as a bright spot in images taken by the infrared wide-angle cameras, and a coordinate transformation from the coordinates in the acquired bright-spot image to real-space coordinates is used to determine the three-dimensional position of the infrared tag. The determined three-dimensional position of the infrared tag is regarded as the position of the wearable terminal.
- FIG. 6 shows a position detection system using an infrared wide-angle camera and an infrared tag.
- This position detection system includes the wearable terminals 100 to 100h worn by the users, infrared tags 200 to 200h attached to the users (not shown), six infrared wide-angle cameras 300a to 300f, and a location server 400.
- the infrared tag 200 to 200h is an infrared emission marker composed of an LED that emits infrared rays and a device that controls the blinking thereof.
- For example, as shown in Fig. 1, each tag has the shape of a name tag and is worn on the user's chest.
- Infrared wide-angle cameras 300a to 300f are cameras each including a camera that acquires a moving image, a filter that blocks visible light and transmits light in the infrared region, and a wide-angle lens.
- The location server 400 processes the images obtained by the six infrared wide-angle cameras 300a to 300f, calculates the position of each infrared tag 200 to 200h (wearable terminal 100 to 100h), and manages the calculated position information. More specifically, the position of each bright spot in the infrared images is converted into real-space coordinates based on the installation positions of the infrared wide-angle cameras 300a to 300f, and the converted coordinates of each bright spot, that is, the position of each infrared tag 200 to 200h (each wearable terminal), are stored in a storage device and managed.
- The wearable terminal 100 has a wireless LAN communication function as described above, and can receive the position information of each wearable terminal 100 to 100h from the location server 400.
- Alternatively, the location server 400 may send each piece of position information to each of the wearable terminals 100 to 100h, and the wearable terminal 100 may obtain the position information from each of the wearable terminals 100a to 100h.
- FIG. 8 shows the data received from each wearable terminal. As shown in FIG. 8, the received data consists of a wearable terminal ID, direction information, and voice information.
- Figure 9 shows the data received from the location server 400. As shown in FIG. 9, the received data consists of the terminal ID and position information of each wearable terminal.
- The wearable terminal 100 performs the above-described processing every predetermined time, detects the dynamically changing conversation group, and creates a profile using the images and sounds acquired by the wearable terminals belonging to the detected conversation group.
- FIG. 10 is a diagram showing a hardware configuration of the wearable terminal 100.
- The wearable terminal 100 comprises a CPU 101, a ROM 102, a RAM 103, a microphone 104, a camera 107, A/D converters 105 and 108, encoders 106 and 109, an electronic compass 110, a memory card 111, and a communication unit (wireless LAN interface) 112.
- the CPU 101, ROM 102, and RAM 103 constitute a computer system, and a program stored in the ROM 102 is read into the CPU 101, and functions are achieved by cooperation between the program and hardware resources.
- the electronic compass 110 determines the direction using the geomagnetism and detects the direction in which the terminal is facing.
- the memory card 111 is a portable medium for recording profile information and the like.
- The communication unit 112 sends a poll to each wearable terminal and to the location server 400, and receives the response from each wearable terminal (terminal ID, direction information, and voice information) and the response from the location server 400 (terminal ID and position information of the wearable terminals 100 to 100h).
- In addition, P2P communication is performed with the wearable terminals belonging to the same conversation group in order to acquire the images and sounds that become the components of the profile, and after profile creation the created profile is sent to each wearable terminal belonging to the same conversation group, again by P2P communication.
- FIG. 11 shows functional blocks of wearable terminal 100.
- The wearable terminal 100 includes an imaging unit 121, a sound collection unit 122, a direction detection unit 125, an utterance timing extraction unit 126, a communication unit 127, a same conversation group detection unit 128, a subject detection unit 129, an imaging condition determination unit 130, a creation unit 131, and a recording unit 132.
- The imaging unit 121 includes a CCD or CMOS image sensor, and has a function of converting external light into an electrical signal and outputting the converted electrical signal to the creation unit 131.
- The sound collection unit 122 includes four microphones, A/D-converts the audio signals acquired by the respective microphones, and outputs the converted signals to the utterance timing extraction unit 126 and the same conversation group detection unit 128. More specifically, the sound collection unit 122 includes a wearer direction voice acquisition unit 123 and a non-wearer direction voice acquisition unit 124.
- The wearer direction voice acquisition unit 123 performs directivity control so that the wearer's voice, arriving from the direction of the wearer's mouth, can be collected at a high S/N ratio.
- This directivity control can be realized using the directivity control method of a subtractive array microphone, which subtracts the audio signals of the respective microphones.
- The non-wearer direction voice acquisition unit 124 performs directivity control so that the various environmental sounds arriving from directions other than the wearer's mouth can be collected at a high S/N ratio.
- This directivity control can be realized using the directivity control method of an additive array microphone, which adds the audio signals of the respective microphones.
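- As a rough illustration of these two directivity modes, the sketch below uses only two microphones and simple delay alignment; the terminal's actual four-microphone directivity control is not specified here, so the signal names and geometry are assumptions.

```python
import numpy as np

def delay_and_sum(mic_a, mic_b, delay_samples):
    """Additive array: align the two microphone signals toward the target
    direction and average them, reinforcing sound from that direction."""
    shifted = np.roll(mic_b, -delay_samples)
    return 0.5 * (mic_a + shifted)

def delay_and_subtract(mic_a, mic_b, delay_samples):
    """Subtractive array: align toward a chosen direction and subtract,
    placing a null on it so sound from other directions dominates."""
    shifted = np.roll(mic_b, -delay_samples)
    return mic_a - shifted

# Hypothetical setup: two microphones 5 cm apart, 16 kHz sampling, c = 340 m/s.
fs, d, c = 16000, 0.05, 340.0
delay = int(round(fs * d / c))                 # inter-microphone delay in samples
mic_a = np.random.randn(fs)                    # stand-ins for recorded signals
mic_b = np.roll(mic_a, delay) + 0.1 * np.random.randn(fs)
enhanced = delay_and_sum(mic_a, mic_b, delay)        # emphasises the aligned source
suppressed = delay_and_subtract(mic_a, mic_b, delay) # cancels the aligned source
```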
- the direction detection unit 125 includes an electronic compass 110 and the like, and has a function of detecting the direction of the wearable terminal 100 and outputting it to the same conversation group detection unit 128.
- the utterance timing extraction unit 126 receives the audio signal from the wearer direction audio acquisition unit 123, detects the utterance from the received audio signal, and extracts the utterance timing of the detected utterance. Specifically, the start and end times of the voice section of the user wearing the wearable terminal 100 collected by the wearer direction voice acquisition unit 123 are obtained.
- As extraction methods, for example, an extraction method using speech power, an extraction method using the cepstrum, and a speech segment extraction method using statistical methods can be considered. Any of these methods may be adopted depending on the required extraction accuracy and cost.
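- As an illustration of the simplest of these options, the following sketch extracts utterance intervals from short-time speech power; the frame size and thresholds are assumptions chosen for the example, not values from the embodiment.

```python
import numpy as np

def extract_utterance_intervals(signal, fs, frame_ms=20, threshold_db=-35, min_dur=0.2):
    """Return (start, end) times in seconds where short-time power stays above
    a threshold relative to the loudest frame, merging frames into intervals."""
    frame = int(fs * frame_ms / 1000)
    n_frames = len(signal) // frame
    power = np.array([np.mean(signal[i * frame:(i + 1) * frame] ** 2)
                      for i in range(n_frames)])
    power_db = 10 * np.log10(power + 1e-12)
    active = power_db > (power_db.max() + threshold_db)

    intervals, start = [], None
    for i, is_active in enumerate(active):
        if is_active and start is None:
            start = i
        elif not is_active and start is not None:
            if (i - start) * frame / fs >= min_dur:   # drop very short bursts
                intervals.append((start * frame / fs, i * frame / fs))
            start = None
    if start is not None:
        intervals.append((start * frame / fs, n_frames * frame / fs))
    return intervals
```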
- The communication unit 127 includes an antenna; it receives the data transmitted from the other wearable terminals 100a to 100h and the location server 400 via the antenna, and transmits the profile created by the creation unit 131 to the other wearable terminals 100a to 100h.
- the received image is sent to the subject detection unit 129, and the received position information, azimuth information, and voice information are sent to the same conversation group detection unit 128.
- An IEEE 802.11g wireless LAN is used as the wireless communication system. Since the radio wave strength of the wireless LAN can be set freely, the communication range can be adjusted accordingly.
- The same conversation group detection unit 128 requests, via the communication unit 127, each terminal to transmit its direction information and voice information, and requests the location server 400 to transmit the position information of the wearable terminals 100 to 100h.
- It then obtains, from the communication unit 127, the direction information and voice information of the wearable terminals 100a to 100h and the position information of the wearable terminals 100 to 100h.
- The direction information of the own terminal is acquired from the direction detection unit 125, and the voice information collected by the own terminal is acquired from the sound collection unit 122. Then, wearable terminals belonging to the same conversation group are detected using the position information, the direction information, and the voice information. The specific processing flow will be described later.
- The subject detection unit 129 receives the images transmitted from the communication unit 127, detects the subject in each image, and sends the detection results to the imaging condition determination unit 130.
- The shooting condition determination unit 130 receives the subject detection results from the subject detection unit 129 and determines whether the shooting conditions of the detected subject are good. Specifically, based on the position information and direction information of the terminal associated with the speaker and the position information and direction information of each of the other terminals, it identifies the images from the other terminals that capture the speaker as a subject and determines the image with the best shooting conditions.
- The best shooting conditions mean, for example, that the target speaker appears large and clear, that the subject is clearly visible under direct light, and that the subject is accurately framed without being blocked by other objects. The specific flow will be described later.
- The creation unit 131 creates a profile relating to the users belonging to the same conversation group, using the video and audio acquired by the detected wearable terminals belonging to that group. For example, from the images acquired from the terminals belonging to the same conversation group, it selects the image that captures the current speaker in the group and that the shooting condition determination unit 130 has judged to have the best shooting conditions, and then combines the selected image with the corresponding sound collected by the own terminal 100 to create a video of the speaker.
- Alternatively, from images that capture the user of the own terminal 100, the image judged by the shooting condition determination unit 130 to have the best shooting conditions is selected. Then, by combining the selected image with the corresponding sound collected by the own terminal 100, a profile of the user of the own terminal 100 is created.
- The recording unit 132 includes the memory card 111, the RAM 103, and the like; the images and sounds acquired by the own terminal 100 are associated with the results of the same conversation group detection unit 128 and recorded in the RAM 103.
- FIG. 12 shows the terminal ID list.
- the terminal ID list is a list in which terminal IDs of terminals constituting the same conversation group as this terminal are associated with voice, image, and time.
- In this example, the terminals with terminal IDs aaa, bbb, ccc, ddd, and eee form the same conversation group.
- Terminal IDs aaa, bbb, ccc, ddd, and eee are the terminal IDs of the wearable terminals 100a, 100b, 100c, 100d, and 100e, respectively.
- FIGS. 13 and 14 are flowcharts showing the same conversation group detection process.
- Wearable terminal 100 dynamically detects a wearable terminal belonging to the same conversation group by executing the processes shown in FIGS. 13 and 14 at regular intervals. Here, processing is performed every 3 minutes.
- i is a variable that identifies one wearable terminal.
- the same conversation group detection unit 128 requests each terminal to transmit azimuth information and voice information (step S101).
- It also requests the location server 400 to transmit the position information of each terminal and of the own terminal (step S102).
- Responses are sent back from each terminal and from the location server 400 that received the requests, and the communication unit 127 receives them.
- the same conversation group detection unit 128 determines whether or not the communication unit 127 has received a response (direction information, voice information, and position information) (step S103). If it has been received (Yes in step S103), then it acquires the orientation information and audio information of its own terminal from the orientation detection unit 125 and the sound collection unit 122 (step S104). After obtaining the direction information and voice information of the terminal itself, i is initialized (step S105), and the following processing is performed for each terminal (steps S106 to 119).
- The distance between the own terminal and the other terminal is calculated from the acquired position information (step S106). For example, if the other terminal is the terminal 100a, the two pieces of position information are p1(x1, y1, z1) and p2(x2, y2, z2) from FIG. 9, and the distance r12 between the two points is given by [Equation 1]
- r12 = √((x1 − x2)² + (y1 − y2)²)
- The values of z1 and z2 are assumed to be the same.
- Here, the predetermined range is 5 m, and it is determined whether the distance between the two points is within 5 m (step S107).
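- A minimal sketch of this distance check (steps S106 and S107) is shown below; the coordinates are hypothetical and the helper names are not from the patent.

```python
import math

def horizontal_distance(p1, p2):
    """Distance between two terminals, ignoring height (z1 == z2 assumed)."""
    (x1, y1, _), (x2, y2, _) = p1, p2
    return math.hypot(x1 - x2, y1 - y2)

def within_range(p1, p2, threshold_m=5.0):
    """Step S107: true when the two terminals are within the 5 m range."""
    return horizontal_distance(p1, p2) <= threshold_m

# Example with made-up coordinates in metres.
print(within_range((1.0, 2.0, 1.5), (4.0, 5.5, 1.5)))   # distance ~4.6 m -> True
```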
- If the distance exceeds 5 m (No in step S107), it is determined in step S108 whether the voice of the other terminal's user is mixed into the environmental sound of the own terminal. Specifically, the voice acquired by the wearer direction voice acquisition unit of the other terminal is compared with the voice acquired by the non-wearer direction voice acquisition unit 124 of the own terminal, and it is determined whether the start and end of the voice match. If they match, it is determined that the other user's voice is mixed in. This takes into account the case where the user of the other terminal uses a loudspeaker or the like: even if the distance between the two points exceeds 5 m, the wearer of the own terminal may still be listening to the voice of the other terminal's user, so it is determined that the two terminals may belong to the same conversation group and the process continues.
- Next, mobility information is calculated for each of the own terminal and the other terminal (step S109). The mobility information consists of the distance traveled and the amount of change in direction per predetermined time (here, 3 minutes). After calculating the mobility information, it is determined whether both terminals are stationary (step S110). This is because if both terminals are stationary, it is highly likely that they belong to the same conversation group. Specifically, it is determined whether the travel distance and the direction change amount are both zero.
- If a negative determination is made in step S110, it is determined in step S111 whether both terminals are moving in the same direction. This is because even if they are moving, they are likely to belong to the same conversation group if they are moving in the same direction. Specifically, it is determined whether the amounts of change in position information and direction information are the same for both terminals; if they are the same, the terminals are determined to be moving in the same direction.
- If both terminals are determined to be stationary (Yes in step S110), or if both terminals are determined to be moving in the same direction (Yes in step S111), it is determined whether both terminals have utterance intervals (step S112).
- If a negative determination is made in step S112, it is determined whether there is an utterance interval in only one of the terminals (step S114). If one terminal has an utterance interval (Yes in step S114), it is determined whether the voice of that terminal's user is mixed into the environmental sound of the other terminal (step S115).
- If a negative determination is made in step S114, it is determined in step S116 whether the same third party's voice is mixed into the environmental sounds of both terminals. Specifically, the voice acquired by the non-wearer direction voice acquisition unit 124 of the own terminal is compared with the voice acquired by the non-wearer direction voice acquisition unit of the other terminal, and it is determined whether they match. The reason is that even if neither terminal has an utterance interval, if the same third party's voice is mixed into both terminals, both users may be listening to the same speaker, so they can be regarded as belonging to the same conversation group.
- If both terminals have utterance intervals (Yes in step S112), it is determined whether the overlap rate of their utterance intervals is within 5% (step S113). If the overlap rate is determined to be within 5% (Yes in step S113), if the voice of one terminal is determined to be mixed into the environmental sound of the other terminal (Yes in step S115), or if the same third party's voice is determined to be mixed into the environmental sounds of both terminals (Yes in step S116), it is determined that the own terminal and the other terminal belong to the same conversation group (step S117).
- If a negative determination is made in any of steps S108, S111, S113, S115, and S116, it is determined that the own terminal and the other terminal do not belong to the same conversation group (step S118).
- The terminal ID of the other terminal and the determination result are stored in association with each other (step S119), and it is determined whether i is the last terminal (step S120). If it is not the last, i is incremented by 1 (step S121) and the process returns to step S106. If i is the last, the same conversation group detection process ends.
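- The 5% check of step S113 presupposes an overlap rate between two sets of utterance intervals. One plausible way to compute it, assuming the rate is the overlapped time divided by the own terminal's total utterance time, is sketched below.

```python
def overlap_rate(intervals_a, intervals_b):
    """Fraction of terminal A's total utterance time that overlaps terminal B's
    utterance intervals (step S113 compares this rate against 5%)."""
    total_a = sum(end - start for start, end in intervals_a)
    if total_a == 0:
        return 0.0
    overlap = 0.0
    for a_start, a_end in intervals_a:
        for b_start, b_end in intervals_b:
            overlap += max(0.0, min(a_end, b_end) - max(a_start, b_start))
    return overlap / total_a

# Speakers taking turns overlap very little, so they fall in the same group.
a = [(0.0, 3.0), (6.0, 8.0)]
b = [(3.1, 5.9), (8.2, 10.0)]
print(overlap_rate(a, b) <= 0.05)   # True
```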
- FIGS 15 and 16 show the flow of the creation process.
- i is a variable that identifies one wearable terminal in the same conversation group other than the terminal related to the speaker
- j is a variable that identifies one utterance section.
- In step S206, it is determined whether the speaker's terminal and the other terminal are facing the same direction. Specifically, this is determined from the direction information of both terminals. This is because if the two terminals are facing different directions, they are likely to be facing each other, so a good image of the speaker can be obtained.
- In step S207, it is determined whether the distance between the two terminals is 2 m or more. If the distance is less than 2 m (No in step S207), it is highly likely that a good image can be acquired without obstacles in between, so the image from that terminal is selected as a candidate image for profile creation (step S209). If the distance is 2 m or more (Yes in step S207), it is determined whether there is an obstacle between the two terminals (step S208). Specifically, in addition to determining from the position information whether a third party's terminal in the same conversation group exists between the terminal and the speaker's terminal, the image acquired by the terminal is analyzed and the determination is made based on whether a face image can be detected; if a face can be detected, it is determined that there is no obstacle. If it is determined that there is no obstacle (No in step S208), the image from the terminal is selected as a candidate image for profile creation (step S209).
- When it is determined that the speaker's terminal and the other terminal are facing the same direction (Yes in step S206), or after an image of the terminal has been selected as a candidate, it is determined whether i is the last terminal (step S210). If it is not the last, i is incremented by 1 (step S211) and the process returns to step S206. When i is the last, the image to be finally used is determined from the selected candidates based on an evaluation function.
- F = f(d, r, snr) is used as the evaluation function, where d is the angle of the speaker's face, r is the distance between the two terminals, and snr is the sharpness of the image.
- d is calculated from the direction information of the speaker's terminal and of the other terminal; the closer the face is to frontal, the higher the evaluation.
- r is calculated from the position information of the speaker's terminal and of the other terminal; the shorter the distance, the higher the evaluation.
- snr is calculated from quantities such as the contrast and the S/N ratio; the clearer the image, the higher the evaluation.
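- The patent does not give a concrete form for F = f(d, r, snr); the sketch below uses an assumed weighted sum purely to illustrate how candidate images could be ranked.

```python
def evaluate_candidate(d_deg, r_m, snr_db,
                       w_face=0.5, w_dist=0.3, w_sharp=0.2):
    """Hypothetical scoring F = f(d, r, snr): frontal faces, short distances,
    and sharp images score higher. The weights are illustrative only."""
    face_score = 1.0 - min(abs(d_deg), 90.0) / 90.0    # 0 deg (frontal) -> 1.0
    dist_score = 1.0 / (1.0 + r_m)                     # closer -> higher
    sharp_score = min(max(snr_db, 0.0), 40.0) / 40.0   # clearer -> higher
    return w_face * face_score + w_dist * dist_score + w_sharp * sharp_score

# Hypothetical candidates keyed by terminal ID: (face angle, distance, S/N).
candidates = {"ccc": (10.0, 1.8, 28.0), "eee": (55.0, 3.5, 22.0)}
best = max(candidates, key=lambda t: evaluate_candidate(*candidates[t]))
print(best)   # "ccc": more frontal, closer, and sharper
```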
- Next, it is determined whether j is the last utterance interval (step S213). If it is not the last, j is incremented by 1 (step S214) and the process returns to step S204. If j is the last, the sound corresponding to each image is acquired (step S215), and a video combining each image and its sound is created (step S216). The created video is transmitted to each terminal belonging to the same conversation group (step S217). Each terminal receives and records the video. This makes it possible to share the video created for the same conversation group.
- FIG. 17 is a diagram schematically showing the utterance timing of each utterer and from which terminal the image of the utterer at the time of each utterance is acquired.
- the utterance timing of the user related to terminal 100 is shown in the first level.
- the second level shows the user's utterance timing for terminal 100a.
- the third level shows the user's utterance timing related to terminal 100c.
- The fourth level shows the terminal ID of the determined image acquisition terminal.
- For example, the user associated with the terminal 100c speaks from time t1 to t2 and from t7 to t8, and for those periods the image acquired by the terminal with terminal ID 000 (wearable terminal 100) is used as the image of the speaker.
- When the user of the terminal 100 speaks, the image acquired by the terminal with ID ccc (wearable terminal 100c) is used.
- When the user of the terminal 100a speaks, the image acquired by the terminal with ID eee (wearable terminal 100e) is used.
- FIG. 18 is a diagram corresponding to FIG. 17, and shows the relationship between the utterance timing of each speaker, the terminal ID of the terminal that is the image acquisition target, and the acquired image. By recording the table shown in this figure, it is possible to determine from which terminal the video in a certain scene was obtained.
- FIG. 19 is a diagram schematically showing the created profile.
- By combining the sound acquired by the wearable terminal 100 with the images acquired by the image acquisition terminals determined for each utterance timing, a stream in which the current speaker is always shown can be created.
- FIG. 20 shows the internal structure of profile information.
- the profile information includes playlist information, audio files, and image files.
- the playlist information defining the playlist includes audio file link information indicating an audio file, image file link information indicating a corresponding image file, a playback start time, and a playback end time. Thereby, it is possible to link audio information and a plurality of pieces of image information.
- the audio file is a file that stores the audio information of the terminal itself.
- An image file is a file that stores image information acquired from wearable terminals belonging to the same conversation group.
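- A minimal data-structure sketch of this profile information, with hypothetical field and file names, might look like the following.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PlaylistEntry:
    audio_file: str      # link to the audio file of the own terminal
    image_file: str      # link to the image file acquired from another terminal
    start_time: float    # playback start time in seconds
    end_time: float      # playback end time in seconds

@dataclass
class ProfileInfo:
    playlist: List[PlaylistEntry] = field(default_factory=list)

# One audio track linked to images from different terminals per utterance interval.
profile = ProfileInfo(playlist=[
    PlaylistEntry("audio_000.wav", "img_ccc_0001.jpg", 0.0, 12.5),
    PlaylistEntry("audio_000.wav", "img_eee_0042.jpg", 12.5, 20.0),
])
```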
- As described above, the wearable terminal 100 acquires the position information, direction information, and voice information of the wearable terminals 100 to 100h, and detects the wearable terminals belonging to the same conversation group from the acquired information. In the example of Fig. 3, the wearable terminals 100a to 100e (conversation group 1) are detected. Therefore, it is not necessary to register in advance the wearable terminals from which data is acquired. In addition, a video that always shows the current speaker can be created from the images and sounds captured by the wearable terminals belonging to conversation group 1, without any effort on the user's part.
- the wearable terminal 100 detects a wearable terminal that provides an image necessary for a profile, acquires an image from the detected wearable terminal, and creates a profile using the acquired image.
- In Embodiment 1, communication is performed in a P2P type ad-hoc mode; in Embodiment 2, a creation server that manages and controls a plurality of wearable terminals efficiently manages the conversation groups that share images and sounds.
- A configuration will be described in which, for each of conversation groups 1 to k, a profile is created using the images and sounds acquired by the wearable terminals belonging to that conversation group, and the created profile is transmitted by communication to the wearable terminals belonging to that conversation group.
- Server centralized management type
- a communication sequence in the server centralized management type will be described. This is basically the same communication sequence as described in Fig. 7.
- the creation server 500 includes a communication unit 510, an identical conversation group detection unit 520, an utterance timing extraction unit 540, a subject detection unit 550, an imaging condition determination unit 560, a recording unit 570, and a creation unit 580.
- The communication unit 510 receives the direction information and voice information transmitted from each wearable terminal, and the position information of each wearable terminal transmitted from the location server 400.
- the received azimuth information, voice information, and position information are sent to the same conversation group detection unit 520, and the voice information is sent to the utterance timing extraction unit 540 and the recording unit 570.
- It also receives the image information of the detected wearable terminals belonging to the same conversation group and sends it to the recording unit 570.
- the created profile is sent to each wearable terminal.
- the same conversation group detection unit 520 includes a clustering unit 521, an intra-cluster conversation group detection unit 522, an utterance information calculation unit 523, and a fitness level calculation unit 524.
- the clustering unit 521 receives the position information of each wearable terminal from the communication unit 510, and clusters a plurality of wearable terminals into a predetermined number of clusters kO based on the received position information of each wearable terminal.
- the clustering result is transmitted to the intra-cluster conversation group detection unit 522.
- clustering is performed using k-means. A specific processing flow will be described later.
- the intra-cluster conversation group detection unit 522 receives the clustering result from the clustering unit 521. Within each cluster, utterance overlap is calculated from the voice information of each terminal, and k identical conversation groups are detected according to the calculated overlap, position information, and direction information. The detection result is sent to the utterance information calculation unit 523.
- The utterance information calculation unit 523 has a function of receiving the detection results of the same conversation groups from the intra-cluster conversation group detection unit 522 and calculating the utterance information (utterance time rate and speaker change frequency) of the speakers belonging to each conversation group.
- Speaking time rate refers to the proportion of the time that an individual speaks in the total conversation time.
- the utterance time rate is calculated for each speaker.
- the speaker change frequency refers to the number of speaker changes that occur within a conversation group per unit time.
- the conversation activity level of the conversation group is calculated from the utterance information, and the calculated conversation activity level is sent to the fitness level calculation unit 524.
- The conversation activity level is defined to take a larger value, indicating a more active conversation, the more equal the utterance time rates of the speakers are and the higher the speaker change frequency is.
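- The exact formula for the conversation activity level is not given; the sketch below assumes one possible definition that rewards evenly distributed speaking time (via entropy) and frequent speaker changes.

```python
import math

def conversation_activity(utterance_times, changes, total_time):
    """Assumed activity measure: larger when speaking time is spread evenly
    across speakers and when speaker changes per unit time are frequent."""
    rates = [t / total_time for t in utterance_times if t > 0]
    if not rates:
        return 0.0
    norm = sum(rates)
    entropy = -sum((r / norm) * math.log(r / norm) for r in rates)
    evenness = entropy / math.log(len(rates)) if len(rates) > 1 else 0.0
    change_freq = changes / total_time           # speaker changes per second
    return evenness * change_freq

# A balanced, lively group scores higher than one dominated by a single speaker.
print(conversation_activity([50, 45, 40], changes=30, total_time=180))
print(conversation_activity([120, 10, 5], changes=6, total_time=180))
```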
- the fitness level calculation unit 524 receives position information from the clustering unit 521 and conversation activity and direction information from the utterance information calculation unit 523, and calculates mobility information from the position information and direction information.
- the conversation group suitability for each conversation group is calculated for those who do not belong to the conversation group.
- The conversation group fitness is calculated from the position information, direction information, mobility, and conversation activity level of the target conversation group, and is defined to take a larger value the closer the person's position is to the conversation group, the more the person faces toward the conversation group, the less the person is moving, and the higher the activity level of that conversation group. As a result, a person who does not belong to any conversation group is assigned to the group with the highest conversation group fitness.
- The utterance timing extraction unit 540, subject detection unit 550, imaging condition determination unit 560, and creation unit 580 are the same as the utterance timing extraction unit 126, subject detection unit 129, imaging condition determination unit 130, and creation unit 131 described in Embodiment 1.
- The recording unit 570 appropriately records the audio information and image information acquired by each terminal and received by the communication unit 510, and also records the profile created by the creation unit 580.
- <Configuration of wearable terminal>
- FIG. 24 is a diagram showing functional blocks of wearable terminal 600.
- The wearable terminal 600 includes an imaging unit 601, a sound collection unit 602, a direction detection unit 603, a communication unit 604, and a recording unit 605.
- the imaging unit 601, the sound collection unit 602, and the direction detection unit 603 are the same as the imaging unit 121, the sound collection unit 122, and the direction detection unit 125 described in the first embodiment.
- the communication unit 604 transmits azimuth information and audio information to the creation server 500, and receives a profile from the creation server.
- the received profile is sent to the recording unit 605.
- the recording unit 605 records the profile transmitted from the communication unit 604 on a recording medium.
- The creation server 500 requests each wearable terminal to transmit its direction information and audio information (step S301). It also requests the location server 400 to transmit the position information of each wearable terminal (step S302).
- clustering processing is performed (step S304), and each terminal is classified into a plurality of clusters.
- the same conversation group detection process 2 is performed in each cluster (step S305), and the same conversation group is detected.
- a creation process is performed (step S306), and a profile is created. Note that the creation process is the same as the creation process shown in FIGS. 15 and 16 described in the first embodiment.
- Fig. 26 shows the flow of the clustering process.
- i is a variable indicating one terminal
- j is a variable indicating one cluster
- n is the number of terminals
- k is the number of clusters.
- In the loop beginning at step S404, it is determined whether there is a cluster whose center is closer to terminal xi than the cluster to which it was randomly assigned (step S407). If there is such a cluster, terminal xi is reassigned to the cluster with the nearest center (step S408).
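- A compact sketch of k-means clustering over terminal positions (nearest-center assignment followed by center recomputation) is shown below; the iteration count, initialization, and coordinates are assumptions for illustration, not the exact steps of FIG. 26.

```python
import random

def kmeans(positions, k, iterations=20):
    """Assign each terminal to the nearest cluster center, then recompute the
    centers, and repeat; a minimal stand-in for the clustering of FIG. 26."""
    centers = random.sample(positions, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in positions:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        centers = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters, centers

# Hypothetical (x, y) positions of terminals in metres, grouped into 3 clusters.
positions = [(0, 0), (1, 1), (0, 1), (10, 10), (11, 10), (10, 11), (20, 0), (21, 1)]
clusters, centers = kmeans(positions, k=3)
```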
- FIG. 27 is a diagram showing the same conversation group detection process 2.
- j is a variable indicating one cluster.
- the speech overlap is calculated from the voice information of each terminal in each cluster (step S501).
- The same conversation groups are detected based on the calculated utterance overlap, position information, and direction information (step S502).
- the processes of steps S106 to S117 in FIGS. 13 and 14 are performed for the combination of wearable terminals belonging to the cluster.
- Wearable terminals whose pairwise determinations indicate the same conversation group are treated as forming one conversation group.
- For example, saying that wearable terminals 600, 600a, and 600b form conversation group 1 is synonymous with the pairs 600 and 600a, 600a and 600b, and 600b and 600 each having been determined to belong to the same conversation group.
- utterance information is calculated for each conversation group (step S503), and the conversation activity of each conversation group is calculated based on the calculated utterance information (step S504).
- Next, j is initialized (step S505), and it is determined whether there is a person in cluster j who does not belong to any conversation group and whether there are multiple conversation groups in the cluster (step S506). If the determination is affirmative, the conversation group fitness for each conversation group is calculated for that person (step S507), and the person is determined to belong to the conversation group with the highest fitness (step S508). It is then determined whether there is another person in the cluster who does not belong to any conversation group (step S509).
- If there is such a person, the process returns to step S507. If there is no person who does not belong to any conversation group, or if the determination in step S506 is negative, it is determined whether j is the last cluster (step S510). If it is not the last, j is incremented by 1 (step S511) and the process returns to step S506. If j is the last, the process ends.
- FIG. 28(a) is a bird's-eye-view map of the positions of 21 people at a certain time.
- The position information of each terminal is acquired by the same system as in FIG. 6.
- FIG. 28 (b) shows the result after clustering by the clustering unit 521.
- FIG. 28(c) shows the conversation activity and direction of each person.
- FIG. 28 (d) shows the result after the conversation group is detected by the intra-cluster conversation group detection unit 522.
- Two conversation groups, conversation group 1-1 and conversation group 1-2, are detected in cluster 1; two conversation groups, conversation group 2-1 and conversation group 2-2, are detected in cluster 2; and one conversation group, conversation group 3, is detected in cluster 3.
- FIG. 28(e) shows the result of finally assigning all participants to one of the conversation groups. From this figure it can be seen that conversation group 1-1 and conversation group 1-2 have each been expanded, so that people with a low frequency of participation in the conversation, or who are not speaking at all, are included in one of the conversation groups.
- As described above, the creation server 500 classifies the wearable terminals it manages into clusters, determines the conversation groups within each cluster, and can thus easily create a profile for each conversation group using the images and sounds acquired by the wearable terminals belonging to that conversation group.
- In addition, because the search for conversation groups is performed within each cluster, the number of combinations to be tried can be reduced and the amount of computation can be greatly reduced.
- In Embodiments 1 and 2, wearable terminals belonging to the same conversation group are detected based on the degree of overlap of utterance intervals.
- Aizuchi can also be used in the same conversation group detection process.
- The term "aizuchi" refers to back-channel responses containing a long vowel, such as "hee", "hou", and "fuun". An aizuchi often consists of such a long vowel and frequently forms a short phrase by itself.
- Vowel detection is performed using parameters whose values are characteristic of vowels, such as the cepstrum and LPC coefficients, and an aizuchi can be detected easily by a condition on the duration of the vowel, for example 200 [msec] or more and 2 [sec] or less. Of course, the detection method is not limited to the one described here.
- Aizuchi are often inserted so as to overlap the other party's utterance interval.
- A typical pattern is that speaker A is speaking and speaker B, who is listening, speaks next. Based on this, when the utterances of speakers A and B overlap and the overlapping portion at the beginning of speaker B's utterance interval is an aizuchi, that portion is not treated as an utterance; this reduces the apparent overlap time of the utterances and improves the detection performance for the same conversation group.
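- The duration condition and the overlap exclusion described above can be sketched as follows; the interval handling is a simplification that assumes an aizuchi covers the whole overlapped portion, and the helper names are not from the patent.

```python
def is_aizuchi(vowel_segments, min_dur=0.2, max_dur=2.0):
    """Treat a detected vowel run as an aizuchi when its duration falls
    between 200 ms and 2 s, per the condition described above."""
    return [(start, end) for start, end in vowel_segments
            if min_dur <= (end - start) <= max_dur]

def overlap_excluding_aizuchi(intervals_a, intervals_b, aizuchi_b):
    """Overlap time between A's and B's utterance intervals after dropping
    B's intervals that are entirely aizuchi."""
    def keep_non_aizuchi(intervals, cuts):
        kept = []
        for s, e in intervals:
            for cs, ce in cuts:
                if cs <= s and e <= ce:   # interval fully inside an aizuchi: drop it
                    break
            else:
                kept.append((s, e))
        return kept

    b_clean = keep_non_aizuchi(intervals_b, aizuchi_b)
    return sum(max(0.0, min(a_end, b_end) - max(a_start, b_start))
               for a_start, a_end in intervals_a
               for b_start, b_end in b_clean)

# A 0.4 s "hee" overlapping A's utterance is ignored, reducing the overlap time.
a = [(0.0, 3.0)]
b = [(2.5, 2.9), (3.0, 6.0)]
print(overlap_excluding_aizuchi(a, b, aizuchi_b=is_aizuchi([(2.5, 2.9)])))   # 0.0
```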
- Although the present invention has been described above based on the embodiments, it is not limited to the above embodiments.
- The position may be detected using GPS, ultrasonic waves, a wireless LAN, an attached RFID tag, or other methods.
- In such cases, each wearable terminal may obtain its own position information.
- In the embodiments, the wearable terminal is a camera type, but it may be a watch type, a pen type, a glasses type, or the like.
- In the embodiments, the wearable terminal is worn as shown in FIG. 1, but the wearing method is not limited to this; the terminal may be pinned to the chest, worn in the shape of glasses, worn like headphones, or worn in some other form.
- In the embodiments, a wireless LAN is used as the communication method, but Bluetooth or another communication method may be used instead. Any form of communication, such as wireless communication, wired communication, or packet communication over an IP network, may be used as long as it can transmit information such as images, sounds, positions, and directions.
- the same conversation group is detected using the voice information, the position information, and the direction information.
- Alternatively, the same conversation group may be detected using the voice information alone.
- The range over which a voice can be picked up is about several meters; since a voice cannot be collected beyond that distance, a rough distance can be estimated from whether or not a voice can be picked up. That is, when a user's voice cannot be collected, that user's wearable terminal is not regarded as belonging to the same conversation group.
- the same conversation group may be detected from the voice information and the position information, or the same conversation group may be detected from the voice information and the direction information.
- The detection target of the present invention is not limited to wearable terminals belonging to the same conversation group.
- For example, wearable terminals sharing an object of interest may be detected, or simply wearable terminals whose positions are close to each other may be detected.
- In the embodiments, the terminal that provides the image of the speaker at a given utterance timing is a single terminal determined by the evaluation function.
- Alternatively, a plurality of terminals with high evaluations may be determined, and a profile may be created by combining the images they acquired. As a result, images from various angles can be obtained.
- Further, instead of always stitching together videos of the current speaker, the creation unit 131 may create a single stream that shows only a specific person for whom the profile is created.
- In that case, the own terminal 100 may combine images of its own user, selected by the evaluation function without regard to utterance timing, with the sound acquired by the own terminal 100.
- the flow is as follows.
- FIG. 29 is a diagram showing a flow of creation processing 2.
- i is a variable that identifies one wearable terminal.
- each wearable terminal determined to belong to the same conversation group is requested to transmit image information (step S601).
- When image information is received from each wearable terminal (Yes in step S602), i is initialized (step S603). After i is initialized, it is determined whether or not the own terminal and terminal i are facing the same direction (step S604).
- If the two terminals are facing different directions (No in step S604), it is determined whether or not the distance between them is 2 m or more (step S605). If the distance is less than 2 m (No in step S605), the image of terminal i is selected as a candidate image for profile creation (step S607). If the distance is 2 m or more (Yes in step S605), it is determined whether there is an obstacle between the terminals (step S606). If it is determined that there is no obstacle (No in step S606), the image of terminal i is selected as a candidate image for profile creation (step S607).
- When it is determined that the own terminal and terminal i are facing the same direction (Yes in step S604), when it is determined that there is an obstacle between the two terminals (Yes in step S606), or after step S607, it is determined whether i is the last terminal (step S608). If it is not the last, i is incremented by 1 (step S609) and the process returns to step S604. When i is the last, which image is finally used is determined from the selected candidates based on the evaluation function. Then the audio of the section corresponding to that image is acquired (step S611), and a video combining the image and the audio is created (step S612). A minimal sketch of this selection loop is given below.
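The following sketch mirrors the candidate-selection loop of steps S604 to S608; the record layout, the placeholder evaluation function, and the handling of the final selection are assumptions for illustration, not the specification's implementation.

```python
# Hypothetical candidate record: (terminal_id, same_direction, distance_m,
# obstacle, image). The field layout is an assumption for illustration.
candidates_in = [
    ("T1", False, 1.2, False, "img_T1"),
    ("T2", False, 3.5, True,  "img_T2"),
    ("T3", True,  1.0, False, "img_T3"),
    ("T4", False, 4.0, False, "img_T4"),
]

def select_candidates(records):
    """Mirror of steps S604-S608: keep a terminal's image if it faces a
    different direction than the own terminal and either is closer than 2 m
    or has no obstacle in between."""
    selected = []
    for tid, same_dir, dist, obstacle, image in records:
        if same_dir:                  # S604: facing the same direction -> skip
            continue
        if dist >= 2.0 and obstacle:  # S605/S606: far away and blocked -> skip
            continue
        selected.append((tid, image))  # S607: candidate for profile creation
    return selected

def evaluate(candidate):
    # Placeholder evaluation function (assumption); the specification's
    # evaluation function would score each candidate here.
    return 1.0

candidates = select_candidates(candidates_in)
best = max(candidates, key=evaluate)   # the image finally used for the profile
print(best)
```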
- the image used for the profile is selected based on the evaluation function.
- The presence or absence of an obstacle may be determined not only by checking whether a third party's terminal exists between the own terminal and the speaker's terminal in the same conversation group; the terminal may also analyze the acquired image and determine that there is no obstacle if a face image can be detected. In that method, it may further be determined whether the detected face image matches the direction vector formed between the two terminals, and if they match, it may be determined that there is no obstacle.
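As one possible illustration of this face-based check (assuming OpenCV Haar face detection, a linear mapping from horizontal pixel offset to bearing, and an arbitrary angle tolerance; none of these come from the specification):

```python
import cv2

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def no_obstacle(image_bgr, bearing_to_speaker_deg, camera_fov_deg=60.0,
                tolerance_deg=10.0):
    """bearing_to_speaker_deg: angle of the speaker's terminal relative to the
    camera's optical axis, derived from the two terminals' positions."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    width = gray.shape[1]
    for (x, y, w, h) in faces:
        # Horizontal bearing of the detected face, assuming the field of view
        # is spread linearly across the image width (an approximation).
        face_center = x + w / 2.0
        face_bearing = (face_center / width - 0.5) * camera_fov_deg
        if abs(face_bearing - bearing_to_speaker_deg) <= tolerance_deg:
            return True   # a face is visible roughly where the speaker should be
    return False          # no matching face -> treat as possibly obstructed

# Usage (hypothetical): no_obstacle(cv2.imread("frame.jpg"), bearing_to_speaker_deg=5.0)
```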
- The evaluation function is not limited to the one described above. Redundancy caused by continuously selecting the same image (the same terminal being selected repeatedly) may also be made a subject of evaluation. This redundancy is calculated from the length of time the same image has been selected.
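A minimal sketch of such an evaluation, assuming a simple linear penalty per second of continuous selection (the weight and the penalty form are assumptions):

```python
# Penalize selecting the same terminal's image for too long, so the created
# video does not stay on one camera indefinitely.
def evaluate_with_redundancy(base_score, same_terminal_duration_s,
                             penalty_per_second=0.1):
    """base_score: score from the ordinary evaluation function (distance,
    direction, obstacles, ...). same_terminal_duration_s: how long this
    terminal's image has already been selected continuously."""
    return base_score - penalty_per_second * same_terminal_duration_s

# The longer one camera has been on screen, the lower its score becomes,
# so another candidate eventually wins.
print(evaluate_with_redundancy(0.9, 0.0))   # -> 0.9
print(evaluate_with_redundancy(0.9, 6.0))   # -> roughly 0.3
```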
- In step S111 in FIG. 13, it is determined that terminals are moving in the same direction when their position and orientation change amounts are the same; however, it may instead be determined that they are moving in the same direction when the change amounts of position and orientation are within a predetermined range of each other.
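For illustration, the relaxed test might look like the following sketch; the tolerance values are assumptions.

```python
# Relaxed "moving in the same direction" test: instead of requiring identical
# change amounts, allow the differences to fall within a predetermined range.
def moving_same_direction(dpos_a, dpos_b, dori_a, dori_b,
                          pos_tol=0.5, ori_tol_deg=15.0):
    """dpos_*: position change (metres), dori_*: orientation change (degrees)
    of the two terminals over the same interval."""
    return (abs(dpos_a - dpos_b) <= pos_tol and
            abs(dori_a - dori_b) <= ori_tol_deg)

print(moving_same_direction(1.0, 1.2, 30.0, 25.0))  # True
print(moving_same_direction(1.0, 3.0, 30.0, 25.0))  # False
```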
- Image information is acquired from the wearable terminals belonging to the same conversation group and used to create the profile, but voice information may be acquired instead, or a log related to the conversation may be acquired.
- The voice collected by one terminal may not clearly contain the voice of a person who is in the same conversation group but at a slightly distant location. By acquiring the voice collected by each terminal, clear voice can be recorded, and by using the conversation logs, clear conversations can be recorded.
- Acquiring data (images, sound, logs, etc.) from the wearable terminals belonging to the same conversation group does not have to be done in real time.
- The conversation group detection process is performed at fixed intervals, but it may also be performed each time a change occurs in the position information of a wearable terminal. If there is a change in position information, the video to be shared should be provided according to the conversation group to which the user of that wearable terminal belongs. For example, in FIG. 2, the user of wearable terminal 100b acquires the conversation video of conversation group 1 up to the time the user stayed in it, and acquires the conversation video of conversation group 2, which the user joined afterwards, from that time onward. Note that a user who joins a conversation partway through will often want to catch up on the conversation content of that group up to that point, so a mechanism for playing it back in a short time may be provided.
- In Embodiment 1 described above, voice, images, and times are recorded in association with the terminal IDs of the terminals that make up the same conversation group, but the position information, the voice acquired by each terminal, and the like may be recorded together. If the azimuth and position at which each image was captured remain as a record, it becomes possible, for example, to determine the presence or absence of backlight using the azimuth information, or to automatically record where each image was captured using the position information.
- The wearable terminal 100 detects the wearable terminals belonging to the same conversation group and creates the profile, but a configuration including a separate server that detects the wearable terminals and creates the profile may also be used.
- The wearable terminal 100 performs both the same conversation group detection process and the profile creation process, but the terminal worn by the person who spoke first may perform the same conversation group detection process or the profile creation process on behalf of the conversation members. Alternatively, the same conversation group detection process may always be performed at fixed intervals determined individually for each terminal, and the profile creation process may be performed on behalf of the conversation members by the terminal worn by the person who spoke first, by any one terminal among the conversation members, or by the terminal of the person with the longest utterance time.
- Clustering is performed based on the position information, but each person's centroid position may be shifted in the direction that person is facing, or shifted using mobility information, so as to influence the position centroid of the entire group.
- The cluster number k0 may be changed according to the detected number of participants; for example, the larger the number of participants, the larger k0 may be made.
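The sketch below illustrates this idea with plain k-means over facing-shifted positions; the shift distance, the rule for choosing k0, and the use of k-means itself are assumptions for illustration only.

```python
import math
import random

random.seed(0)  # reproducible centroid initialization for this sketch

def shifted_point(pos, facing_deg, shift=0.5):
    """Move a participant's position `shift` metres toward where they face."""
    rad = math.radians(facing_deg)
    return (pos[0] + shift * math.cos(rad), pos[1] + shift * math.sin(rad))

def kmeans(points, k, iters=20):
    centroids = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            groups[i].append(p)
        centroids = [
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return groups

# Hypothetical participants: (position, facing direction in degrees).
participants = [((0, 0), 0), ((1, 0.2), 180), ((6, 5), 90), ((6.5, 5.5), 270)]
points = [shifted_point(pos, deg) for pos, deg in participants]
k0 = max(1, len(participants) // 2)   # more participants -> larger k0
print(kmeans(points, k0))
```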
- In Embodiment 2 described above, even a person who does not speak is made to belong to some conversation group by obtaining a degree of fitness; however, as shown in FIG. 28(d), it is not necessary to forcibly decide that such a person belongs to some conversation group, and the decision may simply be withheld.
- The determination of whether terminals are in the same conversation group is not limited to the same conversation group detection processes described in Embodiments 1 and 2; any method may be used as long as it determines terminals to be in the same conversation group when their utterances overlap less, their positions are closer, and they face each other more directly.
- The profile created by the creation server 500 is transmitted to the wearable terminal, and the wearable terminal that receives it stores the profile. Alternatively, the wearable terminal may be provided with a playback unit, and the creation server may deliver the profile by streaming. If the wearable terminal does not serve as a viewer, it may be a simple sensor terminal.
- The same conversation group is detected using the degree of overlap of voices, but voice recognition may instead be performed on the voice information acquired by the sound collection unit, and the same conversation group may be detected from the resulting character information together with the character information acquired via the communication unit, that is, the voice recognition results of the other terminals. For example, if the same keyword information is included in each piece of character information at a certain frequency or more, the terminals may be regarded as belonging to the same conversation group, as illustrated in the sketch below.
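This is a minimal sketch of such a keyword-based check; the tokenization, the keyword set, and the frequency threshold are assumptions, not part of the specification.

```python
from collections import Counter

def keyword_counts(recognized_text, keywords):
    # Count how often each keyword appears in one terminal's recognition result.
    words = recognized_text.lower().split()
    return Counter(w for w in words if w in keywords)

def same_group_by_keywords(text_a, text_b, keywords, min_count=2):
    ca = keyword_counts(text_a, keywords)
    cb = keyword_counts(text_b, keywords)
    # Shared keywords that occur at least `min_count` times on both sides.
    shared = [k for k in keywords if ca[k] >= min_count and cb[k] >= min_count]
    return len(shared) > 0

keywords = {"project", "schedule", "demo"}
a = "the project schedule slipped so the project demo moved"
b = "we should fix the project schedule before the project review"
print(same_group_by_keywords(a, b, keywords))  # True
```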
- The present invention is applicable to a party-type terminal arrangement, a school-type terminal arrangement, and a viewing-type terminal arrangement in which a large number of people talk to each other while watching something.
- The server device performs the clustering process, but the clustering process may also be performed by a wearable terminal, as in the case of the P2P ad-hoc mode of Embodiment 1.
- the present invention is useful in a situation where a plurality of persons in the vicinity are each wearing a wearable terminal.
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Computer Networks & Wireless Communication (AREA)
- Telephonic Communication Services (AREA)
- Studio Devices (AREA)
- User Interface Of Digital Computer (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/279,011 US8581700B2 (en) | 2006-02-28 | 2007-02-21 | Wearable device |
JP2008505022A JP4669041B2 (ja) | 2006-02-28 | 2007-02-21 | ウェアラブル端末 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006-053030 | 2006-02-28 | ||
JP2006053030 | 2006-02-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007105436A1 true WO2007105436A1 (ja) | 2007-09-20 |
Family
ID=38509272
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2007/053187 WO2007105436A1 (ja) | 2006-02-28 | 2007-02-21 | ウェアラブル端末 |
Country Status (4)
Country | Link |
---|---|
US (1) | US8581700B2 (ja) |
JP (1) | JP4669041B2 (ja) |
CN (1) | CN101390380A (ja) |
WO (1) | WO2007105436A1 (ja) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010092454A (ja) * | 2008-09-12 | 2010-04-22 | Nec Tokin Corp | Rfidリーダシステム |
JP2013072978A (ja) * | 2011-09-27 | 2013-04-22 | Fuji Xerox Co Ltd | 音声解析装置および音声解析システム |
JP2013142843A (ja) * | 2012-01-12 | 2013-07-22 | Fuji Xerox Co Ltd | 動作解析装置、音声取得装置、および、動作解析システム |
JP2013164468A (ja) * | 2012-02-09 | 2013-08-22 | Fuji Xerox Co Ltd | 音声解析装置、音声解析システムおよびプログラム |
JP2013164299A (ja) * | 2012-02-09 | 2013-08-22 | Univ Of Tsukuba | 測定装置、測定方法、測定プログラム及び測定システム |
JP2013181899A (ja) * | 2012-03-02 | 2013-09-12 | Fuji Xerox Co Ltd | 音声解析装置、音声解析システムおよびプログラム |
JP2014044172A (ja) * | 2012-08-28 | 2014-03-13 | Fuji Xerox Co Ltd | 位置特定システムおよび端末装置 |
JP2014164164A (ja) * | 2013-02-26 | 2014-09-08 | Fuji Xerox Co Ltd | 音声解析装置、信号解析装置、音声解析システムおよびプログラム |
JP2014191201A (ja) * | 2013-03-27 | 2014-10-06 | Fuji Xerox Co Ltd | 音声解析システム、音声端末装置およびプログラム |
JP2016144134A (ja) * | 2015-02-04 | 2016-08-08 | 富士ゼロックス株式会社 | 音声解析装置、音声解析システムおよびプログラム |
WO2016158267A1 (ja) * | 2015-03-27 | 2016-10-06 | ソニー株式会社 | 情報処理装置、情報処理方法、およびプログラム |
JP2016539403A (ja) * | 2013-10-14 | 2016-12-15 | ノキア テクノロジーズ オサケユイチア | コンテキスト上の関係に基づくメディア・ファイルを識別するための方法と装置 |
JP2018042061A (ja) * | 2016-09-06 | 2018-03-15 | 株式会社デンソーテン | 電子機器、接続対象の電子機器、通信システム及び通信方法 |
WO2019097674A1 (ja) * | 2017-11-17 | 2019-05-23 | 日産自動車株式会社 | 車両用操作支援装置 |
JP2020114021A (ja) * | 2016-09-26 | 2020-07-27 | 株式会社Jvcケンウッド | 無線機 |
JP2020135532A (ja) * | 2019-02-21 | 2020-08-31 | 前田建設工業株式会社 | アラート出力システム、アラート出力方法、及びプログラム |
JP2022012926A (ja) * | 2020-07-02 | 2022-01-18 | 秋夫 湯田 | 人との接触記録装置。 |
Families Citing this family (64)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8019091B2 (en) | 2000-07-19 | 2011-09-13 | Aliphcom, Inc. | Voice activity detector (VAD) -based multiple-microphone acoustic noise suppression |
US9066186B2 (en) | 2003-01-30 | 2015-06-23 | Aliphcom | Light-based detection for acoustic applications |
US9099094B2 (en) | 2003-03-27 | 2015-08-04 | Aliphcom | Microphone array with rear venting |
WO2008032329A2 (en) * | 2006-09-13 | 2008-03-20 | Alon Atsmon | Providing content responsive to multimedia signals |
US8120486B2 (en) * | 2008-06-10 | 2012-02-21 | Symbol Technologies, Inc. | Methods and systems for tracking RFID devices |
US20110181409A1 (en) * | 2010-01-28 | 2011-07-28 | Chastie Samms | Interchangeable communication device |
JP5581329B2 (ja) * | 2010-06-30 | 2014-08-27 | パナソニック株式会社 | 会話検出装置、補聴器及び会話検出方法 |
CN203435060U (zh) * | 2010-07-15 | 2014-02-12 | 艾利佛有限公司 | 无线电话会议的电话系统和电话网关 |
US9064501B2 (en) * | 2010-09-28 | 2015-06-23 | Panasonic Intellectual Property Management Co., Ltd. | Speech processing device and speech processing method |
US20120189140A1 (en) * | 2011-01-21 | 2012-07-26 | Apple Inc. | Audio-sharing network |
CN102625077B (zh) * | 2011-01-27 | 2015-04-22 | 深圳市宇恒互动科技开发有限公司 | 一种会议记录方法、会议摄像装置、客户机及系统 |
ITTO20110530A1 (it) * | 2011-06-16 | 2012-12-17 | Fond Istituto Italiano Di Tecnologia | Sistema di interfaccia per interazione uomo-macchina |
EP2611110A1 (en) * | 2011-12-30 | 2013-07-03 | Alcatel Lucent, S.A. | Method for producing professional-like shared user-generated media content |
WO2013142642A1 (en) | 2012-03-23 | 2013-09-26 | Dolby Laboratories Licensing Corporation | Clustering of audio streams in a 2d/3d conference scene |
TW201342819A (zh) * | 2012-04-09 | 2013-10-16 | Motionstek Inc | 可攜式電子裝置遙控系統與遙控裝置、及行車安全裝置 |
US10142496B1 (en) * | 2013-01-26 | 2018-11-27 | Ip Holdings, Inc. | Mobile device image capture and image modification including filters, superimposing and geofenced comments in augmented reality |
US10282791B2 (en) * | 2013-02-22 | 2019-05-07 | Curate, Inc. | Communication aggregator |
EP3005026A4 (en) * | 2013-06-07 | 2017-01-11 | Sociometric Solutions, Inc. | Social sensing and behavioral analysis system |
KR20150009072A (ko) * | 2013-07-12 | 2015-01-26 | 삼성전자주식회사 | 동작모드 제어 방법 및 그 방법을 처리하는 전자 장치 |
US20150185839A1 (en) * | 2013-12-28 | 2015-07-02 | Aleksander Magi | Multi-screen wearable electronic device for wireless communication |
USD750070S1 (en) | 2013-12-28 | 2016-02-23 | Intel Corporation | Wearable computing device |
US10360907B2 (en) | 2014-01-14 | 2019-07-23 | Toyota Motor Engineering & Manufacturing North America, Inc. | Smart necklace with stereo vision and onboard processing |
US10248856B2 (en) | 2014-01-14 | 2019-04-02 | Toyota Motor Engineering & Manufacturing North America, Inc. | Smart necklace with stereo vision and onboard processing |
US9578307B2 (en) * | 2014-01-14 | 2017-02-21 | Toyota Motor Engineering & Manufacturing North America, Inc. | Smart necklace with stereo vision and onboard processing |
US9915545B2 (en) * | 2014-01-14 | 2018-03-13 | Toyota Motor Engineering & Manufacturing North America, Inc. | Smart necklace with stereo vision and onboard processing |
US10024679B2 (en) | 2014-01-14 | 2018-07-17 | Toyota Motor Engineering & Manufacturing North America, Inc. | Smart necklace with stereo vision and onboard processing |
US9349277B2 (en) * | 2014-04-01 | 2016-05-24 | Prof4Tech Ltd. | Personal security devices and methods |
WO2016018796A1 (en) * | 2014-07-28 | 2016-02-04 | Flir Systems, Inc. | Systems and methods for video synopses |
US10375081B2 (en) * | 2014-08-13 | 2019-08-06 | Intel Corporation | Techniques and system for extended authentication |
US10024678B2 (en) | 2014-09-17 | 2018-07-17 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable clip for providing social and environmental awareness |
US9922236B2 (en) | 2014-09-17 | 2018-03-20 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable eyeglasses for providing social and environmental awareness |
CN104202805B (zh) * | 2014-09-22 | 2018-02-27 | 联想(北京)有限公司 | 通信控制方法和电子设备 |
US10656906B2 (en) | 2014-09-23 | 2020-05-19 | Levaughn Denton | Multi-frequency sensing method and apparatus using mobile-based clusters |
US11150868B2 (en) | 2014-09-23 | 2021-10-19 | Zophonos Inc. | Multi-frequency sensing method and apparatus using mobile-clusters |
US11068234B2 (en) | 2014-09-23 | 2021-07-20 | Zophonos Inc. | Methods for collecting and managing public music performance royalties and royalty payouts |
US10127005B2 (en) * | 2014-09-23 | 2018-11-13 | Levaughn Denton | Mobile cluster-based audio adjusting method and apparatus |
US11544036B2 (en) | 2014-09-23 | 2023-01-03 | Zophonos Inc. | Multi-frequency sensing system with improved smart glasses and devices |
CN104320163B (zh) * | 2014-10-10 | 2017-01-25 | 安徽华米信息科技有限公司 | 一种通讯方法及装置 |
WO2016073363A1 (en) * | 2014-11-03 | 2016-05-12 | Motion Insight LLC | Motion tracking wearable element and system |
JP6300705B2 (ja) * | 2014-11-19 | 2018-03-28 | キヤノン株式会社 | 装置連携による認証管理方法、情報処理装置、ウェアラブルデバイス、コンピュータプログラム |
US10490102B2 (en) | 2015-02-10 | 2019-11-26 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for braille assistance |
US9972216B2 (en) | 2015-03-20 | 2018-05-15 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for storing and playback of information for blind users |
CN104811903A (zh) * | 2015-03-25 | 2015-07-29 | 惠州Tcl移动通信有限公司 | 一种组建交流群的方法及其可穿戴设备 |
US10560135B1 (en) | 2015-06-05 | 2020-02-11 | Life365, Inc. | Health, wellness and activity monitor |
US11329683B1 (en) * | 2015-06-05 | 2022-05-10 | Life365, Inc. | Device configured for functional diagnosis and updates |
US10185513B1 (en) | 2015-06-05 | 2019-01-22 | Life365, Inc. | Device configured for dynamic software change |
US9974492B1 (en) | 2015-06-05 | 2018-05-22 | Life365, Inc. | Health monitoring and communications device |
GB2539949A (en) * | 2015-07-02 | 2017-01-04 | Xovia Ltd | Wearable Devices |
US10728488B2 (en) * | 2015-07-03 | 2020-07-28 | H4 Engineering, Inc. | Tracking camera network |
JP6544088B2 (ja) * | 2015-07-06 | 2019-07-17 | 富士通株式会社 | 端末、情報漏洩防止方法および情報漏洩防止プログラム |
US9923941B2 (en) | 2015-11-05 | 2018-03-20 | International Business Machines Corporation | Method and system for dynamic proximity-based media sharing |
US9854529B2 (en) * | 2015-12-03 | 2017-12-26 | Google Llc | Power sensitive wireless communication radio management |
US10024680B2 (en) | 2016-03-11 | 2018-07-17 | Toyota Motor Engineering & Manufacturing North America, Inc. | Step based guidance system |
US9958275B2 (en) | 2016-05-31 | 2018-05-01 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for wearable smart device communications |
US10561519B2 (en) | 2016-07-20 | 2020-02-18 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable computing device having a curved back to reduce pressure on vertebrae |
US10432851B2 (en) | 2016-10-28 | 2019-10-01 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable computing device for detecting photography |
USD827143S1 (en) | 2016-11-07 | 2018-08-28 | Toyota Motor Engineering & Manufacturing North America, Inc. | Blind aid device |
US10012505B2 (en) | 2016-11-11 | 2018-07-03 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable system for providing walking directions |
US10521669B2 (en) | 2016-11-14 | 2019-12-31 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for providing guidance or feedback to a user |
US11830380B2 (en) * | 2019-01-10 | 2023-11-28 | International Business Machines Corporation | System and method for social learning utilizing user devices |
US11178504B2 (en) * | 2019-05-17 | 2021-11-16 | Sonos, Inc. | Wireless multi-channel headphone systems and methods |
CN111127734B (zh) * | 2019-12-31 | 2022-01-14 | 中国银行股份有限公司 | 银行柜台网点排队管理系统、装置及方法 |
JP7400531B2 (ja) * | 2020-02-26 | 2023-12-19 | 株式会社リコー | 情報処理システム、情報処理装置、プログラム、情報処理方法及び部屋 |
US11386804B2 (en) * | 2020-05-13 | 2022-07-12 | International Business Machines Corporation | Intelligent social interaction recognition and conveyance using computer generated prediction modeling |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002024113A (ja) * | 1998-06-30 | 2002-01-25 | Masanobu Kujirada | 出会い・連絡支援システム |
JP2002523855A (ja) * | 1998-08-21 | 2002-07-30 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | 情報処理装置 |
JP2004356970A (ja) * | 2003-05-29 | 2004-12-16 | Casio Comput Co Ltd | ウエアラブルカメラの撮影方法、撮像装置、及び撮影制御プログラム |
Family Cites Families (61)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3723667A (en) * | 1972-01-03 | 1973-03-27 | Pkm Corp | Apparatus for speech compression |
US4706117A (en) * | 1984-06-01 | 1987-11-10 | Arnold Schoolman | Stereo laser disc viewing system |
US4867442A (en) * | 1987-10-09 | 1989-09-19 | Matthews H Gerard | Physical exercise aid |
JPH0253636A (ja) | 1988-08-18 | 1990-02-22 | Kubota Ltd | 作業車の自動変速構造 |
US5144294A (en) * | 1990-10-17 | 1992-09-01 | Ldj Industries, Inc. | Radio frequency message apparatus for aiding ambulatory travel of visually impaired persons |
JPH05122689A (ja) | 1991-10-25 | 1993-05-18 | Seiko Epson Corp | テレビ会議システム |
US5426425A (en) * | 1992-10-07 | 1995-06-20 | Wescom, Inc. | Intelligent locator system with multiple bits represented in each pulse |
US5568516A (en) * | 1993-07-02 | 1996-10-22 | Phonic Ear Incorporated | Very low power cordless headset system |
US5572401A (en) * | 1993-12-13 | 1996-11-05 | Key Idea Development L.L.C. | Wearable personal computer system having flexible battery forming casing of the system |
JP2737682B2 (ja) | 1995-02-13 | 1998-04-08 | 日本電気株式会社 | テレビ会議システム |
US5815579A (en) * | 1995-03-08 | 1998-09-29 | Interval Research Corporation | Portable speakers with phased arrays |
US5617477A (en) * | 1995-03-08 | 1997-04-01 | Interval Research Corporation | Personal wearable communication system with enhanced low frequency response |
US6301367B1 (en) * | 1995-03-08 | 2001-10-09 | Interval Research Corporation | Wearable audio system with acoustic modules |
US5687245A (en) * | 1995-06-07 | 1997-11-11 | Interval Research Corporation | Sampled chamber transducer with enhanced low frequency response |
US5682434A (en) * | 1995-06-07 | 1997-10-28 | Interval Research Corporation | Wearable audio system with enhanced performance |
US5694475A (en) * | 1995-09-19 | 1997-12-02 | Interval Research Corporation | Acoustically transparent earphones |
AU7154196A (en) * | 1995-09-19 | 1997-04-09 | Interval Research Corporation | Earphones with eyeglass attachments |
US5945988A (en) * | 1996-06-06 | 1999-08-31 | Intel Corporation | Method and apparatus for automatically determining and dynamically updating user preferences in an entertainment system |
US6091832A (en) * | 1996-08-12 | 2000-07-18 | Interval Research Corporation | Wearable personal audio loop apparatus |
US6405213B1 (en) * | 1997-05-27 | 2002-06-11 | Hoyt M. Layson | System to correlate crime incidents with a subject's location using crime incident data and a subject location recording device |
US6014080A (en) * | 1998-10-28 | 2000-01-11 | Pro Tech Monitoring, Inc. | Body worn active and passive tracking device |
JP3492895B2 (ja) | 1997-10-29 | 2004-02-03 | 京セラ株式会社 | コードレス監視カメラシステム |
US6097927A (en) * | 1998-01-27 | 2000-08-01 | Symbix, Incorporated | Active symbolic self design method and apparatus |
US20010011954A1 (en) * | 1998-03-11 | 2001-08-09 | Monty M. Shelton | Public area locator system |
US7266498B1 (en) * | 1998-12-18 | 2007-09-04 | Intel Corporation | Method and apparatus for reducing conflicts between speech-enabled applications sharing speech menu |
US7230582B1 (en) * | 1999-02-12 | 2007-06-12 | Fisher-Rosemount Systems, Inc. | Wearable computer in a process control environment |
US20020175990A1 (en) * | 1999-03-31 | 2002-11-28 | Jacquelyn Annette Martino | Mirror based interface for computer vision applications |
US6850773B1 (en) * | 1999-10-21 | 2005-02-01 | Firooz Ghassabian | Antenna system for a wrist communication device |
US6694034B2 (en) * | 2000-01-07 | 2004-02-17 | Etymotic Research, Inc. | Transmission detection and switch system for hearing improvement applications |
US20010011025A1 (en) * | 2000-01-31 | 2001-08-02 | Yuji Ohki | Receiver wearable on user's wrist |
US6757719B1 (en) * | 2000-02-25 | 2004-06-29 | Charmed.Com, Inc. | Method and system for data transmission between wearable devices or from wearable devices to portal |
US20010042014A1 (en) * | 2000-05-15 | 2001-11-15 | Lowry Brian C. | System and method of providing communication between a vendor and client using an interactive video display |
US6714233B2 (en) * | 2000-06-21 | 2004-03-30 | Seiko Epson Corporation | Mobile video telephone system |
US7031924B2 (en) * | 2000-06-30 | 2006-04-18 | Canon Kabushiki Kaisha | Voice synthesizing apparatus, voice synthesizing system, voice synthesizing method and storage medium |
US6754632B1 (en) * | 2000-09-18 | 2004-06-22 | East Carolina University | Methods and devices for delivering exogenously generated speech signals to enhance fluency in persons who stutter |
JP2002123878A (ja) | 2000-10-16 | 2002-04-26 | Matsushita Electric Ind Co Ltd | 音センサー付き監視カメラ装置およびそれを用いた監視方法 |
US20060126861A1 (en) * | 2000-11-20 | 2006-06-15 | Front Row Advantage, Inc. | Personal listening device for events |
JP2002160883A (ja) * | 2000-11-27 | 2002-06-04 | Hitachi Building Systems Co Ltd | 車椅子用エスカレーター |
US20020094845A1 (en) * | 2001-01-16 | 2002-07-18 | Rei Inasaka | Body worn display system |
US7013009B2 (en) * | 2001-06-21 | 2006-03-14 | Oakley, Inc. | Eyeglasses with wireless communication features |
GB2389742B (en) * | 2002-06-11 | 2006-03-01 | Adam Raff | Communications device and method |
US20040086141A1 (en) * | 2002-08-26 | 2004-05-06 | Robinson Arthur E. | Wearable buddy audio system |
US7502627B2 (en) * | 2002-12-23 | 2009-03-10 | Systems Application Engineering, Inc. | System for product selection |
US20040146172A1 (en) * | 2003-01-09 | 2004-07-29 | Goswami Vinod Kumar | Wearable personal audio system |
US20040204168A1 (en) * | 2003-03-17 | 2004-10-14 | Nokia Corporation | Headset with integrated radio and piconet circuitry |
JP2004318828A (ja) * | 2003-03-31 | 2004-11-11 | Seiko Epson Corp | データバックアップシステム及びデータバックアップ方法、装着可能なコンピュータ、メール送信システム、画像情報送信システム並びにデータバックアッププログラム |
US7096048B2 (en) * | 2003-04-01 | 2006-08-22 | Sanders Donald T | Portable communications device |
US20040212637A1 (en) * | 2003-04-22 | 2004-10-28 | Kivin Varghese | System and Method for Marking and Tagging Wireless Audio and Video Recordings |
WO2004110099A2 (en) * | 2003-06-06 | 2004-12-16 | Gn Resound A/S | A hearing aid wireless network |
US7738664B2 (en) * | 2003-10-07 | 2010-06-15 | Kddi Corporation | Apparatus for fault detection for parallelly transmitted audio signals and apparatus for delay difference detection and adjustment for parallelly transmitted audio signals |
EP1524586A1 (en) * | 2003-10-17 | 2005-04-20 | Sony International (Europe) GmbH | Transmitting information to a user's body |
EP1531478A1 (en) | 2003-11-12 | 2005-05-18 | Sony International (Europe) GmbH | Apparatus and method for classifying an audio signal |
US7702728B2 (en) * | 2004-01-30 | 2010-04-20 | Microsoft Corporation | Mobile shared group interaction |
JP2005227555A (ja) * | 2004-02-13 | 2005-08-25 | Renesas Technology Corp | 音声認識装置 |
US7664558B2 (en) * | 2005-04-01 | 2010-02-16 | Apple Inc. | Efficient techniques for modifying audio playback rates |
US7605714B2 (en) * | 2005-05-13 | 2009-10-20 | Microsoft Corporation | System and method for command and control of wireless devices using a wearable device |
EP1657958B1 (en) * | 2005-06-27 | 2012-06-13 | Phonak Ag | Communication system and hearing device |
US20070241862A1 (en) * | 2006-04-12 | 2007-10-18 | Dimig Steven J | Transponder authorization system and method |
US7586418B2 (en) * | 2006-11-17 | 2009-09-08 | General Electric Company | Multifunctional personal emergency response system |
EP2408222A1 (en) * | 2006-12-20 | 2012-01-18 | Phonak AG | Wireless communication system |
JP2008198028A (ja) * | 2007-02-14 | 2008-08-28 | Sony Corp | ウェアラブル装置、認証方法、およびプログラム |
- 2007-02-21 CN CNA2007800067349A patent/CN101390380A/zh active Pending
- 2007-02-21 WO PCT/JP2007/053187 patent/WO2007105436A1/ja active Search and Examination
- 2007-02-21 JP JP2008505022A patent/JP4669041B2/ja not_active Expired - Fee Related
- 2007-02-21 US US12/279,011 patent/US8581700B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002024113A (ja) * | 1998-06-30 | 2002-01-25 | Masanobu Kujirada | 出会い・連絡支援システム |
JP2002523855A (ja) * | 1998-08-21 | 2002-07-30 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | 情報処理装置 |
JP2004356970A (ja) * | 2003-05-29 | 2004-12-16 | Casio Comput Co Ltd | ウエアラブルカメラの撮影方法、撮像装置、及び撮影制御プログラム |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010092454A (ja) * | 2008-09-12 | 2010-04-22 | Nec Tokin Corp | Rfidリーダシステム |
JP2013072978A (ja) * | 2011-09-27 | 2013-04-22 | Fuji Xerox Co Ltd | 音声解析装置および音声解析システム |
JP2013142843A (ja) * | 2012-01-12 | 2013-07-22 | Fuji Xerox Co Ltd | 動作解析装置、音声取得装置、および、動作解析システム |
JP2013164468A (ja) * | 2012-02-09 | 2013-08-22 | Fuji Xerox Co Ltd | 音声解析装置、音声解析システムおよびプログラム |
JP2013164299A (ja) * | 2012-02-09 | 2013-08-22 | Univ Of Tsukuba | 測定装置、測定方法、測定プログラム及び測定システム |
JP2013181899A (ja) * | 2012-03-02 | 2013-09-12 | Fuji Xerox Co Ltd | 音声解析装置、音声解析システムおよびプログラム |
JP2014044172A (ja) * | 2012-08-28 | 2014-03-13 | Fuji Xerox Co Ltd | 位置特定システムおよび端末装置 |
JP2014164164A (ja) * | 2013-02-26 | 2014-09-08 | Fuji Xerox Co Ltd | 音声解析装置、信号解析装置、音声解析システムおよびプログラム |
JP2014191201A (ja) * | 2013-03-27 | 2014-10-06 | Fuji Xerox Co Ltd | 音声解析システム、音声端末装置およびプログラム |
JP2016539403A (ja) * | 2013-10-14 | 2016-12-15 | ノキア テクノロジーズ オサケユイチア | コンテキスト上の関係に基づくメディア・ファイルを識別するための方法と装置 |
US10437830B2 (en) | 2013-10-14 | 2019-10-08 | Nokia Technologies Oy | Method and apparatus for identifying media files based upon contextual relationships |
JP2016144134A (ja) * | 2015-02-04 | 2016-08-08 | 富士ゼロックス株式会社 | 音声解析装置、音声解析システムおよびプログラム |
JPWO2016158267A1 (ja) * | 2015-03-27 | 2018-01-25 | ソニー株式会社 | 情報処理装置、情報処理方法、およびプログラム |
WO2016158267A1 (ja) * | 2015-03-27 | 2016-10-06 | ソニー株式会社 | 情報処理装置、情報処理方法、およびプログラム |
WO2016157642A1 (ja) * | 2015-03-27 | 2016-10-06 | ソニー株式会社 | 情報処理装置、情報処理方法、およびプログラム |
JP2018042061A (ja) * | 2016-09-06 | 2018-03-15 | 株式会社デンソーテン | 電子機器、接続対象の電子機器、通信システム及び通信方法 |
JP2020114021A (ja) * | 2016-09-26 | 2020-07-27 | 株式会社Jvcケンウッド | 無線機 |
WO2019097674A1 (ja) * | 2017-11-17 | 2019-05-23 | 日産自動車株式会社 | 車両用操作支援装置 |
CN111801667A (zh) * | 2017-11-17 | 2020-10-20 | 日产自动车株式会社 | 车辆用操作辅助装置 |
JPWO2019097674A1 (ja) * | 2017-11-17 | 2020-12-03 | 日産自動車株式会社 | 車両用操作支援装置 |
JP7024799B2 (ja) | 2017-11-17 | 2022-02-24 | 日産自動車株式会社 | 車両用操作支援装置 |
CN111801667B (zh) * | 2017-11-17 | 2024-04-02 | 日产自动车株式会社 | 车辆用操作辅助装置和车辆用操作辅助方法 |
JP2020135532A (ja) * | 2019-02-21 | 2020-08-31 | 前田建設工業株式会社 | アラート出力システム、アラート出力方法、及びプログラム |
JP7263043B2 (ja) | 2019-02-21 | 2023-04-24 | 前田建設工業株式会社 | アラート出力システム、アラート出力方法、及びプログラム |
JP2022012926A (ja) * | 2020-07-02 | 2022-01-18 | 秋夫 湯田 | 人との接触記録装置。 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2007105436A1 (ja) | 2009-07-30 |
CN101390380A (zh) | 2009-03-18 |
JP4669041B2 (ja) | 2011-04-13 |
US20090058611A1 (en) | 2009-03-05 |
US8581700B2 (en) | 2013-11-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4669041B2 (ja) | ウェアラブル端末 | |
CN108683972B (zh) | 声音处理系统 | |
US9633270B1 (en) | Using speaker clustering to switch between different camera views in a video conference system | |
US9554091B1 (en) | Identifying conference participants and active talkers at a video conference endpoint using user devices | |
JP6759406B2 (ja) | カメラ撮影制御方法、装置、インテリジェント装置およびコンピュータ記憶媒体 | |
US20090315974A1 (en) | Video conferencing device for a communications device and method of manufacturing and using the same | |
US11047965B2 (en) | Portable communication device with user-initiated polling of positional information of nodes in a group | |
US20210350823A1 (en) | Systems and methods for processing audio and video using a voice print | |
US9277178B2 (en) | Information processing system and storage medium | |
JP7100824B2 (ja) | データ処理装置、データ処理方法及びプログラム | |
JP2005059170A (ja) | 情報収集ロボット | |
EP4113452A1 (en) | Data sharing method and device | |
CN114845081A (zh) | 信息处理装置、记录介质及信息处理方法 | |
TW200804852A (en) | Method for tracking vocal target | |
CN113965715A (zh) | 一种设备协同控制方法和装置 | |
KR100583987B1 (ko) | 정보 수집 로봇 | |
WO2018154902A1 (ja) | 情報処理装置、情報処理方法、及びプログラム | |
Danninger et al. | The connector: facilitating context-aware communication | |
JP6286289B2 (ja) | 管理装置、会話システム、会話管理方法及びプログラム | |
CN112565598A (zh) | 聚焦方法与装置、终端、计算机可读存储介质和电子设备 | |
WO2021129444A1 (zh) | 文件聚类方法及装置、存储介质和电子设备 | |
CN113707165B (zh) | 音频处理方法、装置及电子设备和存储介质 | |
CN113170023A (zh) | 信息处理装置和方法以及程序 | |
CN114374903B (zh) | 拾音方法和拾音装置 | |
JP2004248125A (ja) | 映像切り替え装置、映像切り替え方法、この方法のプログラムおよびこのプログラムを記録した記録媒体 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 07714687; Country of ref document: EP; Kind code of ref document: A1 |
| DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | |
| WWE | Wipo information: entry into national phase | Ref document number: 12279011; Country of ref document: US |
| ENP | Entry into the national phase | Ref document number: 2008505022; Country of ref document: JP; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 200780006734.9; Country of ref document: CN |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 07714687; Country of ref document: EP; Kind code of ref document: A1 |
| DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | |