WO2013046590A1 - Information processing device, information processing method, and program - Google Patents
- Publication number
- WO2013046590A1 (PCT/JP2012/005906)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- word
- candidate
- language
- information processing
- vector
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/284—Lexical analysis, e.g. tokenisation or collocates
Definitions
- the present invention relates to an information processing apparatus, an information processing method, and a program for converting input data in which two words are continuous into a first language.
- Non-Patent Document 1 uses a class N-gram using a thesaurus.
- The technique described in Non-Patent Document 1, however, incurs a high thesaurus construction cost.
- An object of the present invention is to reduce the cost when converting input data in which two words are continuous into a first language.
- an information processing apparatus that converts input data in which at least two words are continuous into a first language, wherein
- the first word has a plurality of candidate words that are candidates for the corresponding word in the first language, and the second word has one candidate word, the apparatus comprising:
- candidate vector generation means for generating, for the first word, a candidate vector for each candidate word by selecting, based on statistical data of the first language, each simultaneously used word (a word used together with the candidate word) along with its usage count;
- context vector generation means for generating a context vector for the second word by selecting, based on the statistical data of the first language, each simultaneously used word of the second word along with its usage count; and
- selection means for selecting the candidate word whose candidate vector has the highest similarity to the context vector as the word of the first language corresponding to the first word.
- an information processing method for converting input data in which at least two words are continuous into a first language, wherein the first word has a plurality of candidate words that are candidates for the corresponding word in the first language, and the second word has one candidate word,
- the method comprising: a computer generating, for the first word, a candidate vector for each candidate word by selecting, based on statistical data of the first language, each simultaneously used word (a word used together with the candidate word) along with its usage count;
- the computer generating a context vector for the second word by selecting, based on the statistical data of the first language, each simultaneously used word of the second word along with its usage count; and the computer selecting the candidate word whose candidate vector has the highest similarity to the context vector as the word of the first language corresponding to the first word.
- a program for converting input data in which at least two words are continuous into a first language, wherein the first word has a plurality of candidate words that are candidates for the corresponding word in the first language, and the second word has one candidate word,
- the program causing a computer to realize: a function of generating, for the first word, a candidate vector for each candidate word by selecting, based on statistical data of the first language, each simultaneously used word (a word used together with the candidate word) along with its usage count; a function of generating a context vector for the second word by selecting, based on the statistical data of the first language, each simultaneously used word of the second word along with its usage count; and a function of selecting the candidate word whose candidate vector has the highest similarity to the context vector as the word of the first language corresponding to the first word.
- FIG. 2 is a flowchart of the processing performed by the information processing apparatus illustrated in FIG. 1.
- FIG. 3 is a block diagram showing the functional configuration of the information processing apparatus according to the second embodiment.
- FIG. 4 is a flowchart of the processing performed by the information processing apparatus shown in FIG. 3.
- FIGS. 5 to 7 are diagrams for explaining Example 1.
- FIGS. 8 to 10 are diagrams for explaining Example 2.
- FIG. 1 is a block diagram illustrating a functional configuration of the information processing apparatus according to the first embodiment.
- This information processing apparatus is an apparatus that converts input data in which two words are continuous into character data of a first language.
- the first word has a plurality of candidate words (hereinafter referred to as candidate words) in the first language after conversion.
- the second word has one candidate word.
- the information processing device is, for example, a machine translation device or a speech recognition device.
- when the information processing device is a machine translation device, the input data is character data in a second language different from the first language. Whether there are a plurality of candidate words is determined using, for example, external dictionary data.
- This information processing apparatus includes a candidate vector generation unit 10, a context vector generation unit 20, and a selection unit 30.
- the candidate vector generation unit 10 performs the following processing for the first word, for each candidate word. First, it selects, based on the statistical data of the first language, each simultaneously used word (a word used together with the candidate word) along with its usage count. The candidate vector generation unit 10 then generates a candidate vector for each candidate word from at least one simultaneously used word and its usage count.
- the statistical data used here is, for example, statistical data of chain information of two words in the first language.
- the context vector generation unit 20 generates a context vector for the second word by selecting at least one simultaneously used word together with the number of uses based on the statistical data of the first language.
- the selection unit 30 calculates the similarity between the context vector and the candidate vector for each candidate word. Then, the selection unit 30 selects the candidate word having the highest calculated similarity as the first language word corresponding to the first word.
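As an illustrative sketch only (hypothetical words and counts, not the patent's implementation), the processing of the candidate vector generation unit 10, the context vector generation unit 20, and the selection unit 30 can be written as follows, assuming the statistical data is a simple mapping from two-word chains to usage counts:

```python
from math import sqrt

# Hypothetical bigram statistics: (word_a, word_b) -> usage count.
STATS = {
    ("bat", "break1"): 1, ("bone", "break1"): 1,
    ("cable", "break2"): 2,
    ("tv", "break3"): 2,
    ("radio", "start"): 1, ("tv", "start"): 2,
    ("radio", "close"): 1, ("bakery", "close"): 2,
}

def cooccurrence_vector(word):
    """Select the simultaneously used words of `word`, with their counts."""
    return {a: n for (a, b), n in STATS.items() if b == word}

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def select_candidate(candidate_words, context_word):
    """Pick the candidate whose candidate vector is most similar
    to the context vector of the context word."""
    context_vec = cooccurrence_vector(context_word)
    return max(candidate_words,
               key=lambda c: cosine(cooccurrence_vector(c), context_vec))
```

For example, `select_candidate(["break1", "break2", "break3"], "start")` selects `"break3"`, whose vector shares the element `tv` with the context vector of `"start"`.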
- each component of the information processing apparatus shown in FIG. 1 is a functional block rather than a hardware unit.
- Each component is realized by an arbitrary combination of hardware and software, centered on the CPU of an arbitrary computer, memory, a program loaded into the memory that realizes the illustrated components, a storage unit such as a hard disk that stores the program, and a network connection interface. Various modifications of the implementation method and apparatus are possible.
- FIG. 2 is a flowchart of processing performed by the information processing apparatus shown in FIG.
- input data is input to the information processing apparatus.
- the candidate vector generation unit 10 selects a word that is the first word (a word having a plurality of candidate words) from among the words included in the input data by using dictionary data stored outside.
- the candidate vector generation unit 10 generates a candidate vector for each candidate word by using statistical data for the selected first word (step S20).
- the context vector generation unit 20 uses the dictionary data to select a word that is the second word (a word having one candidate word) among the words included in the input data. Then, the context vector generation unit 20 generates a context vector for the selected second word (step S40).
- the selection unit 30 calculates the similarity between the context vector and the candidate vector of each candidate word (step S60). The selection unit 30 then selects the candidate word with the highest calculated similarity as the word of the first language corresponding to the first word (step S80).
- input data in which two words are continuous can be converted into the first language with high accuracy without constructing a thesaurus.
- the accuracy of conversion may be higher than when using a thesaurus.
- with a thesaurus, accuracy may deteriorate for content involving words that the thesaurus does not separate within a node.
- For example, in a thesaurus, things may be classified by material, such as "wood" and "metal". If "branch" and "board" both belong to the "wood" node, the appearance probability of "branch breaks", which rarely occurs in the real world, may be estimated from learning data in which "board breaks" appears.
- In the present embodiment, since such a node-based appearance probability is not calculated, the conversion accuracy is higher than when a thesaurus is used.
- FIG. 3 is a block diagram illustrating a functional configuration of the information processing apparatus according to the second embodiment.
- FIG. 4 is a flowchart of processing performed by the information processing apparatus illustrated in FIG.
- the information processing apparatus according to the present embodiment has the same configuration as the information processing apparatus according to the first embodiment, except that the conversion unit 40 is provided.
- the conversion unit 40 converts the input data into the first language based on the statistical data.
- the statistical data used here is, for example, statistical data of chain information of two words in the first language. Specifically, the conversion unit 40 converts each of the two words constituting the input data into words of the first language using dictionary data. The conversion unit 40 then generates chains of the converted first-language words and selects, among the generated chains, the chain with the largest statistical count as the conversion result (step S10 in FIG. 4).
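A minimal sketch of this chain-lookup stage (hypothetical dictionary entries and statistics, not the patent's implementation): the conversion unit first tries a direct chain lookup, and signals a fallback when no generated chain appears in the statistics.

```python
# Hypothetical per-word candidate lists and two-word chain statistics.
DICTIONARY = {
    "radio": ["radio"],
    "tv": ["tv"],
    "breaks": ["break1", "break2", "break3"],  # placeholder translations
}
CHAIN_STATS = {("tv", "break3"): 2, ("radio", "start"): 1}

def convert_by_chains(first_word, second_word):
    """Step S10: among all candidate chains, pick the one with the
    largest statistical count; return None if no generated chain
    appears in the statistics (fallback to vector-based processing)."""
    best, best_count = None, 0
    for a in DICTIONARY[first_word]:
        for b in DICTIONARY[second_word]:
            count = CHAIN_STATS.get((a, b), 0)
            if count > best_count:
                best, best_count = (a, b), count
    return best
```

Here `convert_by_chains("radio", "breaks")` returns `None`, so the candidate-vector / context-vector processing of the first embodiment would take over for that input.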
- the candidate vector generation unit 10, the context vector generation unit 20, and the selection unit 30 perform the processing shown in the first embodiment on input data that could not be converted by the conversion unit 40, that is, input data for which none of the generated first-language word chains were included in the statistical data.
- the same effect as that of the first embodiment can be obtained.
- the conversion unit 40 performs conversion processing based on the statistical data, the conversion accuracy is further increased.
- (Example 1) An operation will be described in which the information processing apparatus according to the second embodiment selects a Japanese translation of "break" for the original data "radio breaks".
- "Break" is used here as an intransitive verb in the English SV (subject-verb) syntax.
- by referring to the dictionary data, the conversion unit 40 recognizes that "radio" has a single translated word; in this case, "radio" forms the context. The conversion unit 40 also recognizes that "breaks" has the prototype "break", and that "break" has a plurality of Japanese translated words. (The distinct Japanese verbs are not preserved in this translation; they are referred to below as break-1, break-2, break-3, ....)
- the selection candidates are the combinations of the verb prototype "break" with each translated word (break-1, break-2, break-3), and the context is the combination of the subject "radio" with its prototype and translated word. These are used as the input data, as shown in FIG. 5.
- the statistical data holds, as two-word chain information, pairs of a sentence subject in the SV syntax and a verb prototype with its translated word. In step S10 of FIG. 4, the conversion unit 40 checks whether each of the sets "radio + break-1", "radio + break-2", "radio + break-3", ... exists in the statistical data. Here, it is assumed that none of these sets exists in the statistical data.
- in step S20 of FIG. 4, the candidate vector generation unit 10 refers to the statistical data for each of the selection candidates break-1, break-2, and break-3. Assume that the statistical data is as shown in FIG. 6: for break-1, "bat + break-1" appears once and "bone + break-1" appears once; for break-2, "cable + break-2" appears twice; and for break-3, "tv + break-3" appears twice.
- continuing in step S20 of FIG. 4, the candidate vector generation unit 10 generates candidate vectors as shown in FIG. 6.
- the candidate vector corresponding to break-1 is "bat: 1, bone: 1, ...".
- the candidate vector corresponding to break-2 is "cable: 2, ...".
- the candidate vector corresponding to break-3 is "tv: 2, ...".
- in step S40 of FIG. 4, the context vector generation unit 20 refers to the statistical data for "radio".
- suppose that in the statistical data "radio + start" appears once, "radio + close" appears once, and "radio + open" appears once.
- the context vector generation unit 20 therefore extracts "start", "close", "open", and so on, and extracts the statistical data for each of these words:
- "radio + start" appears once, "tv + start" appears twice, ...
- "radio + close" appears once, "bakery + close" appears twice, "radio + open" appears once, "opera + open" appears once, ...
- the context vector generation unit 20 accordingly selects "radio: 1, tv: 2, ..." as the context vector for "start", "radio: 1, bakery: 2, ..." for "close", and so on.
- in step S60 of FIG. 4, the selection unit 30 calculates the similarity between each candidate vector and each context vector, for example the cosine similarity: between break-1 and "start", between break-1 and "close", between break-1 and "open", between break-2 and "start", ..., between break-3 and "open".
- in step S80 of FIG. 4, the selection unit 30 selects the candidate word having the candidate vector with the highest similarity to a context vector.
- here, the cosine similarity between the break-3 candidate vector and the "start" context vector is the highest among all candidate vector / context vector pairs, so break-3 is selected.
- in this way, the translated word of "break" appropriate to the context "radio" is selected for the original data "radio breaks". Even when the statistical data contains no data on the chain "radio + break", the appropriate translation can be accurately chosen from among break-1, break-2, and break-3.
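The selection in Example 1 can be checked with a short computation. The three candidate translations of "break" are abbreviated break1 to break3 here, since the distinct Japanese verbs are not preserved in the translated text; the vectors follow the counts described in the example.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Candidate vectors (subjects used with each translation of "break").
candidates = {
    "break1": {"bat": 1, "bone": 1},
    "break2": {"cable": 2},
    "break3": {"tv": 2},
}
# Context vectors for the words used together with "radio".
contexts = {
    "start": {"radio": 1, "tv": 2},
    "close": {"radio": 1, "bakery": 2},
    "open": {"radio": 1, "opera": 1},
}

# Highest-similarity candidate vector / context vector pair.
best = max(((cosine(cv, xv), c)
            for c, cv in candidates.items()
            for xv in contexts.values()),
           key=lambda t: t[0])
print(best[1])  # the translation paired with the most similar context vector
```

Only break3 shares an element (`tv`) with any context vector, so it attains the highest cosine similarity and is selected.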
- (Example 2) An operation will be described for the case where the information processing apparatus according to the second embodiment is a Japanese speech recognition apparatus and receives a speech input corresponding to "the ema breaks".
- the input data is a network (voice data) in a state before the language model is applied.
- the first word has three candidate words: "bait", "picture horse" (ema), and "branch".
- the second word has one candidate word: "break".
- A context is formed by these two words.
- the conversion unit 40 confirms that the statistical data does not include any of the chains "the bait breaks", "the ema breaks", and "the branch breaks".
- the candidate vector generation unit 10 extracts, from the statistical data, the data that includes each candidate word ("bait", "ema", and "branch").
- A candidate vector is then generated for each candidate word, as shown in FIG. 9, from the portions of the extracted data other than the candidate word itself (that is, the simultaneously used words) and their frequencies.
- the context vector generation unit 20 extracts data including “breaking” in the context part from the statistical data.
- for example, "the plate breaks" and "the cup breaks" can be extracted.
- the context vector generation unit 20 recognizes the words other than the context part ("breaks"), namely "plate" and "cup", as simultaneously used words, and extracts data including these from the statistical data.
- for "plate", "the plate breaks" and "the plate is thin" can be extracted; for "cup", "the cup breaks" and "drink with a cup" can be extracted.
- the context vector generation unit 20 generates a context vector as shown in FIG.
- the selection unit 30 calculates the similarity between the candidate vector and the context vector.
- the selection unit 30 calculates the degree of similarity between the two vectors by, for example, treating each element with a frequency of 1 or more as present and counting the number of elements the vectors have in common, in the manner of a Hamming distance.
- the selection unit 30 then selects the candidate word whose candidate vector has the highest similarity to the context vector.
- here, the context vector for "plate" and the candidate vector for "picture horse" (ema) share the common element "thin", giving the highest similarity, so "picture horse" is selected.
- the speech recognition result "the ema breaks" can thus be output with high accuracy using the statistical data.
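The common-element similarity used in Example 2 can be sketched as follows. The vectors below are assumed for illustration only; the actual vectors are in FIG. 9 and FIG. 10, which are not reproduced in this text.

```python
def common_elements(u, v):
    """Count elements present (frequency >= 1) in both sparse vectors --
    the Hamming-distance-style similarity described in Example 2."""
    return len({k for k, n in u.items() if n >= 1} &
               {k for k, n in v.items() if n >= 1})

# Assumed vectors for illustration.
context = {"plate": 1, "cup": 1, "thin": 1, "drink": 1}
candidates = {
    "bait": {"eat": 1},
    "picture horse": {"thin": 1, "shrine": 1},
    "branch": {"tree": 1},
}

# The candidate sharing the most elements with the context wins.
best = max(candidates, key=lambda c: common_elements(candidates[c], context))
print(best)  # "picture horse", via the shared element "thin"
```

Unlike the cosine similarity of Example 1, this measure ignores the magnitudes of the counts and only asks whether an element occurs at all in both vectors.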
- the information processing device is not limited to language processing systems such as machine translation devices and speech recognition devices; it may also be a recommendation system that uses an action history, as in "customers who bought this product also bought ...".
- in the above description the context is single, but there may be a plurality of contexts.
- In that case, the similarity may be calculated against the context vector of each of the plurality of contexts, and the selection unit may extract the candidate vector / context vector pair having the highest similarity, thereby also selecting one context.
Abstract
A candidate-vector generator (10) selects, on the basis of statistical data for a first language, a simultaneously used word that is a word used at the same time as a candidate word, the simultaneously used word being selected together with the usage count thereof. The candidate vector generator (10) then generates a candidate vector for individual candidate words according to at least one of the simultaneously used words and the usage count thereof. A context vector generator (20) generates a context vector for a second word by selecting, on the basis of statistical data for the first language, at least one simultaneously used word together with the usage count thereof. A selection unit (30) calculates the degree of similarity between the context vector and the candidate vector for individual candidate words. The selection unit (30) then selects the candidate word with the highest calculated degree of similarity as a word for the first language corresponding to a first word.
Description
The present invention relates to an information processing apparatus, an information processing method, and a program for converting input data in which two words are continuous into a first language.
In machine translation and speech recognition, input data in which two words are continuous is converted into a first language using statistical data. The language model most widely used at present for this purpose is the N-gram model. This model gives the appearance probability of the N-th word following a chain of up to (N-1) words, based on statistics of N-word chains. Since the N-gram model obtains this appearance probability from learning data, a word that happens not to appear in the learning data is assigned an appearance probability of zero. To avoid this, Non-Patent Document 1 uses a class N-gram based on a thesaurus.
With the technique described in Non-Patent Document 1, the construction cost of the thesaurus becomes high. An object of the present invention is to reduce the cost of converting input data in which two words are continuous into a first language.
According to the present invention, there is provided an information processing apparatus that converts input data in which at least two words are continuous into a first language, wherein the first word has a plurality of candidate words that are candidates for the corresponding word in the first language, and the second word has one candidate word, the apparatus comprising: candidate vector generation means for generating, for the first word, a candidate vector for each candidate word by selecting, based on statistical data of the first language, each simultaneously used word (a word used together with the candidate word) along with its usage count; context vector generation means for generating a context vector for the second word by selecting, based on the statistical data of the first language, each simultaneously used word of the second word along with its usage count; and selection means for selecting the candidate word whose candidate vector has the highest similarity to the context vector as the word of the first language corresponding to the first word.
According to the present invention, there is also provided an information processing method for converting input data in which at least two words are continuous into a first language, wherein the first word has a plurality of candidate words that are candidates for the corresponding word in the first language, and the second word has one candidate word, the method comprising: a computer generating, for the first word, a candidate vector for each candidate word by selecting, based on statistical data of the first language, each simultaneously used word (a word used together with the candidate word) along with its usage count; the computer generating a context vector for the second word by selecting, based on the statistical data of the first language, each simultaneously used word of the second word along with its usage count; and the computer selecting the candidate word whose candidate vector has the highest similarity to the context vector as the word of the first language corresponding to the first word.
According to the present invention, there is further provided a program for converting input data in which at least two words are continuous into a first language, wherein the first word has a plurality of candidate words that are candidates for the corresponding word in the first language, and the second word has one candidate word, the program causing a computer to realize: a function of generating, for the first word, a candidate vector for each candidate word by selecting, based on statistical data of the first language, each simultaneously used word (a word used together with the candidate word) along with its usage count; a function of generating a context vector for the second word by selecting, based on the statistical data of the first language, each simultaneously used word of the second word along with its usage count; and a function of selecting the candidate word whose candidate vector has the highest similarity to the context vector as the word of the first language corresponding to the first word.
According to the present invention, the cost of converting input data in which two words are continuous into a first language can be reduced.
The above-described object and other objects, features, and advantages will become more apparent from the preferred embodiments described below and the accompanying drawings.
Embodiments of the present invention will be described below with reference to the drawings. In all the drawings, the same reference numerals are given to the same components, and their description is omitted as appropriate.
(First embodiment)
FIG. 1 is a block diagram illustrating the functional configuration of the information processing apparatus according to the first embodiment. This information processing apparatus converts input data in which two words are continuous into character data of a first language. The first word has a plurality of candidate words (hereinafter, candidate words) in the first language after conversion; the second word has one candidate word. The information processing device is, for example, a machine translation device or a speech recognition device. When it is a machine translation device, the input data is character data in a second language different from the first language. Whether there are a plurality of candidate words is determined using, for example, external dictionary data.
The information processing apparatus includes a candidate vector generation unit 10, a context vector generation unit 20, and a selection unit 30.
The candidate vector generation unit 10 performs the following processing for each candidate word of the first word. First, based on statistical data of the first language, it selects simultaneously used words, i.e., words that co-occur with the candidate word, together with their numbers of uses. It then generates a candidate vector for each candidate word from at least one simultaneously used word and its number of uses. The statistical data used here is, for example, statistics on chains of two words in the first language.
Similarly, for the second word, the context vector generation unit 20 generates a context vector by selecting, based on the statistical data of the first language, at least one simultaneously used word together with its number of uses.
The selection unit 30 calculates, for each candidate word, the similarity between the context vector and the candidate vector, and selects the candidate word with the highest calculated similarity as the first-language word corresponding to the first word.
Each component of the information processing apparatus shown in FIG. 1 represents a functional block rather than a hardware unit. The components are realized by an arbitrary combination of hardware and software, centered on the CPU and memory of any computer, a program loaded into the memory that implements the components shown in the figure, a storage unit such as a hard disk that stores the program, and a network connection interface. Various modifications of this realization method and apparatus are possible.
FIG. 2 is a flowchart of the processing performed by the information processing apparatus shown in FIG. 1. First, input data is supplied to the apparatus. The candidate vector generation unit 10 then uses externally stored dictionary data to identify, among the words in the input data, the first word (the word with a plurality of candidate words). Using the statistical data, it generates a candidate vector for each candidate word of the identified first word (step S20).
The context vector generation unit 20 likewise uses the dictionary data to identify the second word (the word with a single candidate word) among the words in the input data, and generates a context vector for the identified second word (step S40).
Next, the selection unit 30 calculates, for each candidate word, the similarity between the context vector and the candidate vector (step S60). Then, using the statistical data, the selection unit 30 selects the candidate word with the highest calculated similarity as the first-language word corresponding to the first word (step S80).
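The selection among candidate vectors can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names are my own, vectors are represented as word-to-count dictionaries, and cosine similarity is used as one example of a similarity measure, as in Example 1 below.

```python
import math

def cosine(u, v):
    # Cosine similarity between two sparse count vectors (dicts word -> count).
    dot = sum(c * v.get(w, 0) for w, c in u.items())
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def select_word(candidate_vectors, context_vector):
    # Steps S60/S80: return the candidate word whose candidate vector is
    # most similar to the context vector.
    return max(candidate_vectors,
               key=lambda c: cosine(candidate_vectors[c], context_vector))
```

A candidate vector that shares no simultaneously used word with the context vector gets similarity 0, so the winner is the candidate whose co-occurrence profile best matches that of the context.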
As described above, according to the present embodiment, input data in which two words are consecutive can be converted into the first language with high accuracy without constructing a thesaurus. Moreover, as described below, the conversion accuracy can in some cases exceed that of a thesaurus-based approach.
When a thesaurus is used, accuracy can degrade for content whose distinctions are not separated by the thesaurus. Suppose, for example, that the thesaurus classifies objects by material, such as "wood" and "metal". If the "wood" node contains both "branch" and "board", the occurrence probability of the learning-data phrase "the board breaks" may be used to estimate the occurrence probability of "the branch breaks", a phrase that rarely occurs in the real world. The present embodiment does not compute such occurrence probabilities, so its conversion accuracy is higher than that of the thesaurus-based approach in such cases.
(Second Embodiment)
FIG. 3 is a block diagram illustrating the functional configuration of the information processing apparatus according to the second embodiment, and FIG. 4 is a flowchart of the processing it performs. The information processing apparatus according to this embodiment has the same configuration as that of the first embodiment, except that it further includes a conversion unit 40.
The conversion unit 40 converts the input data into the first language based on statistical data, for example statistics on chains of two words in the first language. Specifically, the conversion unit 40 uses dictionary data to convert each of the two words constituting the input data into first-language words, generates the chains of the converted first-language words, and selects, among the generated chains, the chain with the largest count in the statistics as the converted first-language output (step S10 in FIG. 4).
The candidate vector generation unit 10, the context vector generation unit 20, and the selection unit 30 then apply the processing described in the first embodiment to the input data that the conversion unit 40 could not convert, that is, input data for which none of the first-language word chains appeared in the statistical data.
This embodiment provides the same effects as the first embodiment. In addition, because the conversion unit 40 first performs the conversion based on the statistical data, the conversion accuracy is further improved.
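The two-stage behavior of the second embodiment can be sketched as follows. The function and data layout are illustrative assumptions: chains are represented as tuples, and returning `None` stands for "fall back to the vector-based selection of the first embodiment".

```python
def convert(chain_counts, candidate_chains):
    # Step S10: keep only the chains that appear in the statistics; return the
    # most frequent one, or None when no chain appears (triggering the
    # candidate-/context-vector fallback of the first embodiment).
    seen = [c for c in candidate_chains if chain_counts.get(c, 0) > 0]
    if not seen:
        return None
    return max(seen, key=lambda c: chain_counts[c])
```

When the direct lookup succeeds, the most frequent chain is taken as the output; only unseen chains reach the vector-based stage.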
(Example 1)
The following describes how the information processing apparatus according to the second embodiment selects a Japanese translation of "break" for the original data "radio breaks". Here, "break" is an intransitive verb in the English SV construction.
As shown in FIG. 5, when translating the original data "radio breaks", the conversion unit 40 refers to the dictionary data and recognizes that "radio" has a single translation, "ラジオ". In this case, "radio" forms the context. The conversion unit 40 also recognizes that "breaks" has the base form "break", and that "break" has multiple translations: "折れる" (snap, as a stick), "切れる" (snap, as a cable), "壊れる" (break down, as a machine), and so on. The selection candidates are thus pairs of the verb's base form and a translation, such as "break (折れる)", "break (切れる)", and "break (壊れる)", while the context is the pair of the subject's base form and its translation, "radio (ラジオ)". These constitute the input data to the apparatus of FIG. 3.
The statistical data holds, as two-word chain information, pairs consisting of the subject of an SV sentence and the verb's base form with its translation. In step S10 of FIG. 4, the conversion unit 40 checks whether the pairs "radio (ラジオ) break (折れる)", "radio (ラジオ) break (切れる)", "radio (ラジオ) break (壊れる)", and so on exist in the statistical data. Assume here that none of these pairs exists.
Then, in step S20 of FIG. 4, the candidate vector generation unit 10 consults the statistical data for each of the selection candidates "break (折れる)", "break (切れる)", and "break (壊れる)". Assume the statistical data is as shown in FIG. 6: for "break (折れる)", "bat (バット) break (折れる)" appears once and "bone (骨) break (折れる)" appears once; for "break (切れる)", "cable (ケーブル) break (切れる)" appears twice; and for "break (壊れる)", "tv (テレビ) break (壊れる)" appears twice.
Continuing step S20 of FIG. 4, the candidate vector generation unit 10 generates the candidate vectors shown in FIG. 6. The candidate vector for "break (折れる)" is "bat (バット): 1, bone (骨): 1, ..."; the candidate vector for "break (切れる)" is "cable (ケーブル): 2, ..."; and the candidate vector for "break (壊れる)" is "tv (テレビ): 2, ...".
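The construction of these candidate vectors from the FIG. 6 counts can be sketched as follows. The data layout (subject-verb pairs as dictionary keys) and the function name are illustrative assumptions.

```python
def candidate_vector(bigram_counts, candidate):
    # Step S20: collect every subject that co-occurs with this verb sense
    # in the statistics, together with its count.
    return {subj: n for (subj, verb), n in bigram_counts.items()
            if verb == candidate}

# Subject-verb chain counts as described for FIG. 6.
fig6_bigrams = {
    ("bat", "break(折れる)"): 1, ("bone", "break(折れる)"): 1,
    ("cable", "break(切れる)"): 2,
    ("tv", "break(壊れる)"): 2,
}
```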
In step S40 of FIG. 4, the context vector generation unit 20 consults the statistical data for "radio (ラジオ)". Assume that in this statistical data, "radio (ラジオ) start (始まる)" appears once, "radio (ラジオ) close (近くだ)" appears once, and "radio (ラジオ) open (始まる)" appears once.
The context vector generation unit 20 then extracts "start (始まる)", "close (近くだ)", "open (始まる)", and so on, and extracts the statistical data for each of these words. As shown in FIG. 7, assume that "radio (ラジオ) start (始まる)" appears once, "tv (テレビ) start (始まる)" twice, "radio (ラジオ) close (近くだ)" once, "bakery (パン屋) close (近くだ)" twice, "radio (ラジオ) open (始まる)" once, "opera (オペラ) open (始まる)" once, and so on. The context vector generation unit 20 generates the context vectors "radio (ラジオ): 1, tv (テレビ): 2, ..." for "start (始まる)", "radio (ラジオ): 1, bakery (パン屋): 2, ..." for "close (近くだ)", and "radio (ラジオ): 1, opera (オペラ): 1, ..." for "open (始まる)".
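This two-hop lookup can be sketched as follows: hop one finds the verbs seen with the context word, and hop two builds one context vector per verb from the subjects seen with that verb. The function name and data layout are illustrative assumptions, using the FIG. 7 counts described above.

```python
def context_vectors(bigram_counts, context_word):
    # Hop 1: verbs that co-occur with the context word ("radio").
    verbs = {v for (s, v) in bigram_counts if s == context_word}
    # Hop 2: one vector per verb, mapping each co-occurring subject to its count.
    return {v: {s: n for (s, w), n in bigram_counts.items() if w == v}
            for v in verbs}

# Subject-verb chain counts as described for FIG. 7.
fig7_bigrams = {
    ("radio", "start"): 1, ("tv", "start"): 2,
    ("radio", "close"): 1, ("bakery", "close"): 2,
    ("radio", "open"): 1, ("opera", "open"): 1,
}
```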
In step S60 of FIG. 4, the selection unit 30 calculates the similarity between each candidate vector and each context vector, for example using cosine similarity: the similarity between the vector of "break (折れる)" and the vector of "start (始まる)", between "break (折れる)" and "close (近くだ)", between "break (折れる)" and "open (始まる)", between "break (切れる)" and "start (始まる)", ..., and between "break (壊れる)" and "open (始まる)".
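With the FIG. 6 and FIG. 7 counts above, the decisive cosine similarity can be checked by hand; the arithmetic below is a worked illustration, not part of the patent's text.

```python
import math

# break(壊れる) = {tv: 2}; context vector of start(始まる) = {radio: 1, tv: 2}.
dot = 2 * 2  # "tv" is the only word the two vectors share
sim = dot / (math.sqrt(2 ** 2) * math.sqrt(1 ** 2 + 2 ** 2))
# sim = 4 / (2 * sqrt(5)), about 0.894. By contrast, break(折れる) =
# {bat: 1, bone: 1} and break(切れる) = {cable: 2} share no words with any
# context vector, so their cosine similarity is 0.
```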
In step S80 of FIG. 4, the selection unit 30 selects the candidate word whose candidate vector has the highest similarity to a context vector. Here, the cosine similarity of the pair formed by the candidate vector of "break (壊れる)" and the context vector of "start (始まる)" is the highest among all candidate-vector/context-vector pairs, so the candidate "break (壊れる)" is selected.
To summarize, this example selects the translation of "break" in the context of "radio" for the original data "radio breaks". Even when the statistical data contains no data on "radio break", the translation "壊れる" can be accurately selected from among "break (折れる)", "break (切れる)", and "break (壊れる)".
(Example 2)
The following describes the operation when the information processing apparatus according to the second embodiment is a Japanese speech recognition apparatus and the utterance "絵馬が割れる" ("the ema breaks"; an ema is a votive picture tablet) is input. The input data is the recognition network (speech data) in the state before the language model is applied.
As shown in FIG. 8, assume the speech recognition candidates are "餌が割れる" ("the bait breaks"), "絵馬が割れる" ("the ema breaks"), and "枝が割れる" ("the branch breaks"). In this case, the candidate words for the first word are the single words "餌" (bait), "絵馬" (ema), and "枝" (branch), and the second word (the word with a single candidate) consists of the two words "が" and "割れる", which together form one context. Assume also that none of these three-word chains is contained in the statistical data.
In step S10 of FIG. 4, the conversion unit 40 confirms that the statistical data contains none of the three-word chains "餌が割れる", "絵馬が割れる", and "枝が割れる".
Next, in step S20 of FIG. 4, the candidate vector generation unit 10 extracts from the statistical data the entries containing each candidate word (餌, 絵馬, and 枝). Assume that "餌を食べる" (eat the bait) and "餌が少ない" (the bait is scarce) are extracted for "餌"; "絵馬を書く" (write an ema) and "絵馬が薄い" (the ema is thin) for "絵馬"; and "枝が折れる" (the branch snaps) and "枝が長い" (the branch is long) for "枝". Candidate vectors are then generated as shown in FIG. 9 from the portions of the extracted entries other than the candidate words (i.e., the simultaneously used words) and their frequencies.
Next, in step S40 of FIG. 4, the context vector generation unit 20 extracts from the statistical data the entries containing the context part "が割れる" ("... breaks"). Assume that "板が割れる" (the board breaks) and "コップが割れる" (the cup breaks) are extracted. The context vector generation unit 20 recognizes "板" (board) and "コップ" (cup), the portions other than the context part, as simultaneously used words, and extracts the entries containing them from the statistical data. Assume that "板が割れる" and "板が薄い" (the board is thin) are extracted for "板", and "コップが割れる" and "コップで飲む" (drink from the cup) for "コップ". The context vector generation unit 20 then generates context vectors as shown in FIG. 10.
In step S60 of FIG. 4, the selection unit 30 calculates the similarity between each candidate vector and each context vector. For example, the selection unit 30 computes a Hamming-distance-style similarity by counting the elements with frequency 1 or more that the two vectors have in common.
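This overlap count can be sketched as follows, using the FIG. 9/FIG. 10 vectors described above; the function name and data layout are illustrative assumptions.

```python
def shared_elements(u, v):
    # Count the simultaneously used words (frequency >= 1) that two
    # count vectors have in common.
    return len({w for w, n in u.items() if n >= 1} &
               {w for w, n in v.items() if n >= 1})

ema = {"を書く": 1, "が薄い": 1}       # candidate vector of 絵馬 (FIG. 9)
esa = {"を食べる": 1, "が少ない": 1}   # candidate vector of 餌 (FIG. 9)
ita = {"が割れる": 1, "が薄い": 1}     # context vector of 板 (FIG. 10)
```

Only the 絵馬 vector shares an element ("が薄い") with the 板 context vector, which is why 絵馬 is selected in step S80 below.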
In step S80 of FIG. 4, the selection unit 30 selects the candidate word whose candidate vector has the highest similarity to a context vector. In this example, it selects the context vector of "板" and the candidate vector of "絵馬", which share the common element "が薄い" (is thin).
Thus, in this example, even when the statistical data contains no entry for "絵馬が割れる", the statistical data can be used to accurately output "絵馬が割れる" as the speech recognition result.
Although embodiments and examples of the present invention have been described with reference to the drawings, they are illustrations of the present invention, and various configurations other than the above can also be adopted. For example, the information processing apparatus is not limited to a language processing system such as a machine translation apparatus or a speech recognition apparatus; it may be a recommendation system that uses action histories, such as "customers who bought this item also bought".
In the examples above, a single context was described, but there may be a plurality of contexts. In that case, the similarity calculation unit computes similarities against the context vectors of the plurality of contexts, and the selection unit extracts the candidate-vector/context-vector pair with the highest similarity, thereby also narrowing the context down to one.
This application claims priority based on Japanese Patent Application No. 2011-208706 filed on September 26, 2011, the entire disclosure of which is incorporated herein.
Claims (8)

- An information processing apparatus that converts input data in which at least two words are consecutive into a first language, wherein a first one of the words has a plurality of candidate words that are candidates for the corresponding word in the first language, and a second one of the words has a single candidate word, the apparatus comprising: candidate vector generation means for generating a candidate vector for each candidate word of the first word by selecting, based on statistical data of the first language, simultaneously used words, which are words used at the same time as the candidate word, together with their numbers of uses; context vector generation means for generating a context vector for the second word by selecting, based on the statistical data of the first language, the simultaneously used words of the second word together with their numbers of uses; and selection means for selecting the candidate word having the candidate vector with the highest similarity to the context vector as the word of the first language corresponding to the first word.
- The information processing apparatus according to claim 1, wherein the information processing apparatus is a translation apparatus, and the input data is character data of a second language that is a language different from the first language.
- The information processing apparatus according to claim 1, wherein the information processing apparatus is a speech recognition apparatus.
- The information processing apparatus according to any one of claims 1 to 3, further comprising conversion means for converting the input data into the first language based on statistical data, wherein the candidate vector generation means, the context vector generation means, and the selection means process the input data that the conversion means could not convert.
- An information processing method for converting input data in which at least two words are consecutive into a first language, wherein a first one of the words has a plurality of candidate words that are candidates for the corresponding word in the first language, and a second one of the words has a single candidate word, the method comprising: generating, by a computer, a candidate vector for each candidate word of the first word by selecting, based on statistical data of the first language, simultaneously used words, which are words used at the same time as the candidate word, together with their numbers of uses; generating, by the computer, a context vector for the second word by selecting, based on the statistical data of the first language, the simultaneously used words of the second word together with their numbers of uses; and selecting, by the computer, the candidate word having the candidate vector with the highest similarity to the context vector as the word of the first language corresponding to the first word.
- A program for converting input data in which at least two words are consecutive into a first language, wherein a first one of the words has a plurality of candidate words that are candidates for the corresponding word in the first language, and a second one of the words has a single candidate word, the program causing a computer to implement: a function of generating a candidate vector for each candidate word of the first word by selecting, based on statistical data of the first language, simultaneously used words, which are words used at the same time as the candidate word, together with their numbers of uses; a function of generating a context vector for the second word by selecting, based on the statistical data of the first language, the simultaneously used words of the second word together with their numbers of uses; and a function of selecting the candidate word having the candidate vector with the highest similarity to the context vector as the word of the first language corresponding to the first word.
- The program according to claim 6, wherein the program is a machine translation program, and the input data is character data of a second language that is a language different from the first language.
- The program according to claim 6, wherein the program is a speech recognition program.
Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
JP2011-208706 | 2011-09-26 | |
JP2011208706 | 2011-09-26 | |
Publications (1)

Publication Number | Publication Date
---|---
WO2013046590A1 (en) | 2013-04-04
Family
ID=47994676
Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/JP2012/005906 (WO2013046590A1) | Information processing device, information processing method, and program | 2011-09-26 | 2012-09-14

Country Status (1)

Country | Link
---|---
WO | WO2013046590A1 (en)
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017142746A (en) * | 2016-02-12 | 2017-08-17 | 日本電信電話株式会社 | Word vector learning device, natural language processing device, program, and program |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH01314373A (en) * | 1988-06-15 | 1989-12-19 | Hitachi Ltd | Translated word selecting system in machine translating system |
JP2000163441A (en) * | 1998-11-30 | 2000-06-16 | Nippon Telegr & Teleph Corp <Ntt> | Method and device for preparing dictionary, storage medium storing dictionary preparation program, method and device for preparing retrieval request, storage medium storing retrieval request preparation program and multi-language correspondence information retrieval system |
JP2006338342A (en) * | 2005-06-02 | 2006-12-14 | Nippon Telegr & Teleph Corp <Ntt> | Word vector generation device, word vector generation method and program |
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 12835389; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 12835389; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: JP