CN111782060B - Object display method and device and electronic equipment - Google Patents
- Publication number
- CN111782060B (application number CN202010559109.4A)
- Authority
- CN
- China
- Prior art keywords
- character information
- candidate
- character
- target object
- memory word
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
- G06F3/023—Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
- G06F3/0233—Character input methods
- G06F3/0237—Character input methods using prediction or retrieval techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
- G06F40/216—Parsing using statistical methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Human Computer Interaction (AREA)
- Machine Translation (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application discloses an object display method, an object display device and electronic equipment. The method comprises the following steps: acquiring a candidate object corresponding to first character information, where the first character information is character information input based on a keyboard control; searching a memory word stock for a target object, where the target object is an object corresponding to second character information, and the second character information is the character information in the memory word stock with the minimum edit distance to the first character information; and if the first number of occurrences of the target object in the memory word stock is greater than the second number of occurrences of the candidate object in the memory word stock, displaying the target object in a candidate display area. Implementing the method can simplify user operation.
Description
Technical Field
The application belongs to the technical field of communication, and particularly relates to an object display method, an object display device and electronic equipment.
Background
With the rapid development of electronic devices, the functions of the input methods configured on them are increasingly refined; for example, a memory word stock function has been developed for input methods. The memory word stock means that when a user inputs a repeated character string, such as a pinyin string, again, the candidate object, such as a candidate word, that the user previously clicked is arranged in a more forward position in the candidate display area.
In input scenarios involving the memory word stock, because the keyboard letters are close together, adjacent keys are easily mistyped. When a user inputs a mistyped character string and then clicks a candidate once, the clicked object is added to the memory word stock; when the user inputs the same character string again, this object appears in a front position in the candidate display area. In such a scenario, the user may have to perform a cancel operation and re-input the correct character string, so the existing object display mode has the problem of complicated operation.
Disclosure of Invention
The embodiment of the application aims to provide an object display method, an object display device and electronic equipment, which can solve the problem of complicated operation of an object display mode in the prior art.
In order to solve the technical problems, the application is realized as follows:
in a first aspect, an embodiment of the present application provides an object display method, including:
acquiring a candidate object corresponding to the first character information; the first character information is character information input based on a keyboard control;
Searching a memory word stock for a target object; the target object is an object corresponding to second character information, and the second character information is the character information in the memory word stock with the minimum edit distance to the first character information;
And if the first number of occurrences of the target object in the memory word stock is greater than the second number of occurrences of the candidate object in the memory word stock, displaying the target object in a candidate display area.
In a second aspect, an embodiment of the present application provides an object display apparatus, including:
The acquisition module is used for acquiring a candidate object corresponding to the first character information; the first character information is character information input based on a keyboard control;
The searching module is used for searching the memory word stock for a target object; the target object is an object corresponding to second character information, and the second character information is the character information in the memory word stock with the minimum edit distance to the first character information;
And the first display module is used for displaying the target object in a candidate display area if the first number of occurrences of the target object in the memory word stock is greater than the second number of occurrences of the candidate object in the memory word stock.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instruction stored on the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In the embodiment of the application, the target object corresponding to the second character information most similar to the first character information is searched for in the memory word stock, and the target object is displayed when it is determined that the target object appears more frequently in the memory word stock than the candidate object. In this way, when the first character information does not meet the user's expectation due to an input error, the second character information that the user intended to input is predicted and the target object corresponding to it is displayed in the candidate display area, so the user does not need to cancel the operation and re-input characters, which simplifies user operation.
Drawings
FIG. 1 is a flow chart of an object display method provided by an embodiment of the present application;
FIG. 2 is a schematic illustration of a display of an object in a candidate display area in the prior art;
FIG. 3 is a schematic illustration of displaying objects in a candidate display area according to an embodiment of the application;
FIG. 4 is a second schematic diagram of displaying objects in candidate display areas according to an embodiment of the application;
fig. 5 is a block diagram of an object display device provided in an embodiment of the present application;
Fig. 6 is a block diagram of an electronic device according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
Detailed Description
The following clearly and completely describes the embodiments of the present application with reference to the accompanying drawings; obviously, the described embodiments are only some, rather than all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without creative effort shall fall within the protection scope of the application.
The terms "first", "second" and the like in the description and the claims are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, where appropriate, so that the embodiments of the present application can be implemented in sequences other than those illustrated or described herein. The objects distinguished by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and the claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates that the associated objects are in an "or" relationship.
The following describes in detail the object display method provided by the embodiments of the present application through specific embodiments and application scenarios thereof with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of an object display method according to an embodiment of the present application, as shown in fig. 1, including the following steps:
Step 101, obtaining a candidate object corresponding to first character information; the first character information is character information input based on a keyboard control.
In this step, the keyboard control may be a virtual keyboard control or a physical keyboard control, which is not specifically limited herein.
The first character information may be character information input through a keyboard control using a native or installed input method application. The first character information may include at least one character, and the at least one character may include letters, numbers, other symbols, or the like.
The number of candidate objects may be at least one, and a candidate object may be an expression, a word, a sentence, or the like. For example, if the pinyin string "kaixin" is input, the number of candidate objects corresponding to the pinyin may be two, namely the word "happy" and a happy-face expression; if the pinyin string "senlinzhenmei" is input, the candidate object corresponding to the pinyin may be the sentence "the forest is really beautiful".
The candidate object may be in Chinese, English, or another language. The type of the candidate object may be determined according to the type of the input method. For example, if the input method is a pinyin input method and the first character information input through the keyboard control is pinyin composed of letters, the candidate object is Chinese; if the input method is an English input method and the first character information input through the keyboard control is an English word composed of letters, the candidate object is English. In the following description, Chinese is taken as an example of the candidate object type.
The candidate object corresponding to the first character information may be acquired according to the actual situation. In one scenario, the user inputs the first character information for the first time, or inputs it again but has not previously performed an input confirmation operation on it; at this time, the first character information is not stored in the memory word stock configured by the input method application program, and the candidate object of the first character information can be obtained according to semantic understanding. For example, the user enters the first character information "zhenengshurufa" for the first time, which may be semantically understood as "this input-enabled method".
In another scenario, the candidate object corresponding to the first character information may be an object corresponding to the first character information in the memory word stock; in this scenario, the first character information is typically character information input by the user again. For example, the user inputs "zhenengshurufa" again; at this time, the candidate object corresponding to the first character information, namely "this input-enabled method", can be queried from the memory word stock.
However, there is a case where the user inputs first character information, but the first character information cannot be understood semantically, and the flow is ended at this time. For example, the user inputs first character information "ji@dkj", which cannot be understood semantically, and the flow ends at this time.
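As a minimal sketch of this acquisition flow (the function names, the dictionary-based memory word stock layout, and the injected semantic-understanding step are illustrative assumptions, not part of the original disclosure):

```python
from typing import Callable, Optional

def get_candidate(first_chars: str,
                  memory_stock: dict,
                  semantic_understanding: Callable[[str], Optional[str]]) -> Optional[str]:
    """Step 101: return the candidate object for the input string, or None to end the flow.

    memory_stock maps a character string to a record such as
    {"candidate": "...", "frequency": 3}; semantic_understanding stands in for
    the input method engine's own conversion step, which the patent text does not specify.
    """
    record = memory_stock.get(first_chars)
    if record is not None:                      # the string was input and confirmed before
        return record["candidate"]
    return semantic_understanding(first_chars)  # None if the string cannot be understood
```

If the call returns None, the flow simply ends, matching the case described above.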
Step 102, searching a memory word stock for a target object; the target object is an object corresponding to second character information, and the second character information is the character information in the memory word stock with the minimum edit distance to the first character information.
The memory word stock generally stores related parameters of objects that the user has previously input and confirmed, such as a character string previously input by the user, the separation mode of the character string, the full spelling corresponding to the character string, the word or sentence corresponding to the full spelling, and the word frequency of that word or sentence, where the word frequency represents the number of times the user has input and confirmed the word or sentence.
In this step, in the process of searching for the target object, the second character information may be first searched for from the memory word stock.
Specifically, the character information in the memory word stock may be traversed, and the edit distance between the first character information and each piece of character information in the memory word stock other than the first character information may be calculated. The edit distance represents the degree of difference between the first character information and a piece of character information, that is, how many editing operations are needed to change the first character information into that character information.
For example, the first character information input by the user is the character string "zhenengshurufa", and one piece of character information in the memory word stock is the character string "zhinengshurufa"; the edit distance between them is 1, because replacing the character "e" with the character "i" converts the character string "zhenengshurufa" into the character string "zhinengshurufa".
The edit distance may be a Levenshtein distance, the calculation formula of which is shown in the following formula (1).
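A standard statement of the Levenshtein recurrence, consistent with the explanation of formula (1) given below, is:

$$
\operatorname{lev}_{a,b}(i,j)=
\begin{cases}
\max(i,j), & \text{if } \min(i,j)=0,\\[4pt]
\min\!\left\{
\begin{aligned}
&\operatorname{lev}_{a,b}(i-1,j)+1,\\
&\operatorname{lev}_{a,b}(i,j-1)+1,\\
&\operatorname{lev}_{a,b}(i-1,j-1)+\mathbf{1}_{(a_i\neq b_j)}
\end{aligned}\right\}, & \text{otherwise,}
\end{cases}
\tag{1}
$$

where the indicator $\mathbf{1}_{(a_i\neq b_j)}$ equals 1 when the $i$-th character of $a$ differs from the $j$-th character of $b$, and 0 otherwise.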
In the above formula (1), a represents first character information, b represents character information currently traversed by the memory word stock, i represents a subscript of the first character information a, and j represents a subscript of character information b currently traversed by the memory word stock. The editing distance between the first character information and other character information except the first character information in the memory word stock can be calculated by the formula (1).
Then, the second character information, that is, the character information in the memory word stock with the minimum edit distance to the first character information, is acquired from the memory word stock; the second character information is the character information most similar to the first character information input by the user.
After the second character information is acquired, an object corresponding to the second character information is acquired from the memory word stock, and the object is the target object. The target object may be an object that the user expects to appear.
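A minimal sketch of this search step, assuming the memory word stock is a plain mapping from character strings to records holding the previously confirmed object (this layout and the helper names are illustrative assumptions):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between strings a and b, following formula (1)."""
    m, n = len(a), len(b)
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i
    for j in range(n + 1):
        dist[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + cost) # substitution
    return dist[m][n]

def find_target(first_chars: str, memory_stock: dict):
    """Return (second_chars, target_object) for the stock entry closest to
    first_chars, skipping first_chars itself; None if nothing else is stored."""
    best = None
    for chars, record in memory_stock.items():
        if chars == first_chars:
            continue
        d = edit_distance(first_chars, chars)
        if best is None or d < best[0]:
            best = (d, chars, record["candidate"])
    return (best[1], best[2]) if best is not None else None
```

With the example above, edit_distance("zhenengshurufa", "zhinengshurufa") returns 1.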
And step 103, if the first number of occurrences of the target object in the memory word stock is greater than the second number of occurrences of the candidate object in the memory word stock, displaying the target object in a candidate display area.
Wherein the target object is different from the candidate object.
The word frequency of each object can be stored in the memory word stock. This step implicitly includes a judging step of determining whether the word frequency of the target object is greater than that of the candidate object, that is, whether the first number of occurrences of the target object in the memory word stock is greater than the second number of occurrences of the candidate object in the memory word stock.
When the first number of occurrences of the target object in the memory word stock is judged to be greater than the second number of occurrences of the candidate object in the memory word stock, the target object is marked as an error-correction memory object, and the error-correction memory object is displayed in the candidate display area. The error-correction memory object may be understood as an object in the memory word stock used to correct the candidate object corresponding to the first character information, and the error-correction memory object is different from that candidate object.
The error-correction memory object may be an expression, a word, or a sentence. For example, the user inputs the character string "zhenengshurufa"; since the character string "zhinengshurufa" is recorded in the memory word stock and the word frequency of "intelligent input method" is far greater than the word frequency of "this input-enabled method", the target object "intelligent input method" may serve as the error-correction memory object of the candidate object "this input-enabled method". That is, the candidate object "this input-enabled method" may be an object that appears because the user input the first character information by mistake, while the target object "intelligent input method" may be the object the user actually wants to appear.
In order to make the target object displayed in the candidate display area better match the user's expectation and to ensure the quality of the displayed target object, the target object may be displayed only when its word frequency is far greater than that of the candidate object. Here, "far greater" means that the word frequency of the target object is more than twice the word frequency of the candidate object.
And if the first number of occurrences of the target object in the memory word stock is less than or equal to the second number of occurrences of the candidate object in the memory word stock, the target object may not be displayed.
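A minimal sketch of this display decision; the twofold threshold follows the "far greater" reading above, and returning a plain boolean is an illustrative simplification:

```python
def should_show_target(target_freq: int, candidate_freq: int, strict: bool = True) -> bool:
    """Decide whether the target object is shown in the candidate display area.

    Basic rule: the first number of occurrences must exceed the second.
    With strict=True, the "far greater" rule is applied: the target's word
    frequency must be more than twice the candidate object's word frequency.
    """
    if strict:
        return target_freq > 2 * candidate_freq
    return target_freq > candidate_freq
```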
Further, in order to ensure, on the one hand, that the object displayed in the candidate display area matches the first character information input by the user, and to avoid, on the other hand, the problem that the object displayed in the candidate display area does not match the object the user expects because the user input the first character information by mistake, the candidate object and the target object may be displayed simultaneously in the candidate display area. While the target object is displayed in the candidate display area, the method further comprises:
displaying the candidate object in the candidate display area;
Wherein the displaying the target object in the candidate display area includes:
displaying the target object in the candidate display area in a preset display mode; the preset display mode is different from the display mode of the candidate object.
Specifically, in the candidate display area, the candidate object may be displayed before the target object or may be displayed after the target object, which is not specifically limited herein.
The target object is displayed in a preset display mode different from the display mode of the candidate object, so that the target object is highlighted in the candidate display area, for example, by bold display, italic display, an enlarged font, or a color different from that of the candidate object.
By additionally displaying the target object in the candidate display area, it can be ensured, on the one hand, that the displayed objects match the first character information input by the user, and, on the other hand, the problem that the displayed objects do not match the object the user expects because of a mistaken input can be avoided. Moreover, highlighting the target object in the candidate display area allows the user to quickly distinguish it from the other objects, which improves the speed with which the user confirms the target object.
Referring to fig. 2, fig. 2 is a display schematic diagram of an object in a conventional candidate display area, and as shown in fig. 2, a candidate object "this input-enabled method" corresponding to the first character information is displayed in a position in front of the candidate display area. Referring to fig. 3, fig. 3 is one of display diagrams of objects in a candidate display area according to an embodiment of the present application, and as shown in fig. 3, a target object "smart input method" is also highlighted after the candidate object.
In this embodiment, the target object corresponding to the second character information most similar to the first character information is searched for in the memory word stock, and the target object is displayed when it is determined that the target object appears more frequently in the memory word stock than the candidate object. In this way, when the first character information does not meet the user's expectation due to an input error, the second character information that the user intended to input is predicted and the target object corresponding to it is displayed in the candidate display area, so the user does not need to cancel the operation and re-input characters, which simplifies user operation.
Optionally, based on the first embodiment, the step 101 specifically includes:
character separation is carried out on the first character information, and a character combination result is obtained;
judging whether the character combination result exists in the memory word stock or not under the condition that the character combination result meets a preset matching rule;
and under the condition that the character combination result exists in the memory word stock, acquiring the candidate object corresponding to the character combination result from the memory word stock.
In this embodiment, first, an existing or new semantic understanding algorithm may be adopted to perform character separation on the first character information, so as to obtain all possible character combination results. The character combination results may include some character combination results which do not conform to the input habit of the user, and the character combination results may carry separation marks.
And then judging whether the character combination result meets a preset matching rule or not, wherein the preset matching rule is different according to the type of the input method. For example, if the type of the input method is a pinyin input method and the first character information input by the user is a pinyin string, the character combination result is a pinyin combination result, and the preset matching rule is that the pinyin combination result is obtained by performing character separation on the first character information in a full spelling mode and/or a simple spelling mode.
Taking the pinyin input method as an example, according to the Chinese pinyin scheme there are 63 pinyin components in total, including 23 initials, 24 finals, and 16 whole-syllable readings, which can be permuted and combined into 428 pinyin syllables allowed by the legal full-spelling mode. In addition, during input, the user also usually adopts a simple-spelling mode to input the pinyin string; common simple-spelling modes include end simple spelling, full simple spelling, and other simple spellings such as middle simple spelling and first simple spelling.
If the pinyin combination result is obtained by performing character separation on the first character information in the full-spelling mode and/or the simple-spelling mode, it is determined that the pinyin combination result meets the preset matching rule. For example, if the user inputs the pinyin string "xians" and a pinyin combination result is "x/ian/s", that combination result does not satisfy the preset matching rule, because "ian" conforms to neither the full-spelling mode nor the simple-spelling mode.
In addition, the input behavior of a user generally follows certain rules: the pinyin string input by the user generally uses the full-spelling mode first, then end simple spelling, and then other simple spellings. Therefore, the pinyin string can first be separated in the full-spelling mode from front to back; if a part of the pinyin string does not conform to the full-spelling mode during the separation, it can be separated in a simple-spelling mode. For example, for the pinyin string "xians", the trailing "s" does not conform to the full-spelling mode, so the final pinyin combination result is obtained by separating it in the simple-spelling mode. Thus, where a pinyin string entered by a user can be understood semantically, the pinyin combination result obtained is typically one that matches the user's input habits.
For example, if the user inputs the pinyin string "xians", the pinyin combination result "xi/an/s" carrying separation marks may be obtained according to the preset matching rule, and the corresponding candidate objects may be "Xi'an is", "Xi'an City", etc. The pinyin combination result "xian/s" may also be obtained, and the corresponding candidate objects may be "first is", "display", etc.
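A simplified sketch of the separation step described above, using a deliberately tiny syllable table (the legal full-spelling table has 428 entries) and allowing only a trailing single letter as an end simple spelling; both the table and the greedy longest-prefix strategy are illustrative assumptions:

```python
# Illustrative subset of legal full-spelling syllables; the real table has 428 entries.
FULL_SYLLABLES = {"xi", "an", "xian", "zhi", "neng", "shu", "ru", "fa"}

def separate(pinyin: str) -> list[list[str]]:
    """Return all separations of the pinyin string into full syllables,
    allowing a lone trailing letter as an end simple spelling."""
    if not pinyin:
        return [[]]
    results = []
    for end in range(len(pinyin), 0, -1):               # prefer longer (full-spelling) pieces first
        piece = pinyin[:end]
        is_full = piece in FULL_SYLLABLES
        is_simple_tail = end == 1 and len(pinyin) == 1   # final single letter -> end simple spelling
        if is_full or is_simple_tail:
            for rest in separate(pinyin[end:]):
                results.append([piece] + rest)
    return results

# With this toy table, separate("xians") yields [["xian", "s"], ["xi", "an", "s"]],
# i.e. the two combination results discussed in the example above.
```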
Then, if the character combination result meets the preset matching rule, it is judged whether the character combination result exists in the memory word stock. Specifically, the first character information may first be matched in the memory word stock; if it is matched, it is determined whether the separation mode corresponding to the first character information in the memory word stock is consistent with the separation mode indicated by the separation marks in the character combination result, and if the two are consistent, it is determined that the character combination result exists in the memory word stock.
And finally, under the condition that the character combination result exists in the memory word stock, acquiring the candidate object corresponding to the character combination result from the memory word stock.
In this embodiment, a character combination result is obtained by performing character separation on the first character information; if the character combination result meets the preset matching rule, it is judged whether the character combination result exists in the memory word stock, and if the character combination result exists in the memory word stock, the candidate object corresponding to the character combination result is acquired from the memory word stock. Therefore, the character combination result obtained by character separation of the first character information conforms well to the input habit of the user, and the candidate object can be accurately acquired and displayed.
Optionally, based on embodiment one, after the step 103, the method further includes:
Under the condition that a first input to the target object is received, deleting the mapping relation if the memory word stock comprises the mapping relation between the first character information and the candidate object; the first input is used for carrying out input confirmation on the target object;
And associating the first character information with the target object.
In this embodiment, after the target object, that is, the object the user more likely expects, is displayed in the candidate display area, whether the displayed target object coincides with the object the user expects can be determined according to the user's input behavior toward the target object. When the user confirms the input of the target object, it is determined that the first character information was input by the user by mistake, that the user actually intended to input the second character information, and that the object the user expects is the target object. At this time, the memory word stock may be revised.
Specifically, when the first input to the target object is received, it is determined that the displayed target object is consistent with the object the user expects; at this time, if the memory word stock includes the mapping relationship between the first character information and the candidate object, the mapping relationship is deleted. For example, the mapping relationship between the first character information "zhenengshurufa" and the candidate object "this input-enabled method" in the memory word stock is deleted.
The first input may be a voice input, a somatosensory input, or a touch input. Wherein, the touch input may include: single click operation, double click operation, multi-click operation, drag operation, etc.
Then, the first character information is associated with the target object, so that the memory word stock records another possible input of the target object. For example, the mapping relationship of the target object "intelligent input method" is revised so that both the character information "zhinengshurufa" and "zhenengshurufa" map to "intelligent input method". Thus, referring to fig. 4, fig. 4 is the second schematic diagram of displaying objects in a candidate display area; as shown in fig. 4, if the user again inputs the character string "zhenengshurufa" by mistake, the candidate display area can directly display "intelligent input method".
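A sketch of this revision of the memory word stock, reusing the illustrative dictionary layout assumed earlier (the keys and record fields are not mandated by the patent text):

```python
def revise_memory_stock(memory_stock: dict, first_chars: str, target_object: str) -> None:
    """Revise the stock after the first input (confirmation) on the target object:
    delete the old mapping of the mistyped string, then map it to the target object."""
    old = memory_stock.get(first_chars)
    if old is not None and old.get("candidate") != target_object:
        del memory_stock[first_chars]   # e.g. drop "zhenengshurufa" -> "this input-enabled method"
    memory_stock[first_chars] = {"candidate": target_object, "frequency": 1}
    # now both "zhinengshurufa" and "zhenengshurufa" resolve to "intelligent input method"
```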
In this embodiment, when the first input to the target object is received, deleting the mapping relationship and associating the first character information with the target object can, on the one hand, reduce the storage burden of the memory word stock and, on the other hand, better fit the user's input behavior.
It should be noted that, in the object display method provided in the embodiments of the present application, the execution subject may be an object display device, or a control module in the object display device for executing the object display method. In the embodiments of the present application, the object display device provided by the embodiments of the present application is described by taking the case where an object display device executes the object display method as an example.
Referring to fig. 5, fig. 5 is a structural diagram of an object display device provided in an embodiment of the present application, and as shown in fig. 5, an object display device 500 includes:
An obtaining module 501, configured to obtain a candidate object corresponding to the first character information; the first character information is character information input based on a keyboard control;
the searching module 502 is configured to search a memory word stock for a target object; the target object is an object corresponding to second character information, and the second character information is the character information in the memory word stock with the minimum edit distance to the first character information;
A first display module 503, configured to display the target object in a candidate display area if the first number of occurrences of the target object in the memory word stock is greater than the second number of occurrences of the candidate object in the memory word stock.
Optionally, the obtaining module 501 includes:
the character separation unit is used for carrying out character separation on the first character information to obtain a character combination result;
The judging unit is used for judging whether the character combination result exists in the memory word stock or not under the condition that the character combination result meets a preset matching rule;
The obtaining unit is used for obtaining the candidate object corresponding to the character combination result in the memory word stock under the condition that the character combination result exists in the memory word stock.
Optionally, the character combination result is a pinyin combination result, and the preset matching rule is that the pinyin combination result is obtained by performing character separation on the first character information according to a full spelling mode and/or a simple spelling mode.
Optionally, the apparatus further includes:
The deleting module is used for deleting the mapping relation if the memory word stock comprises the mapping relation between the first character information and the candidate object under the condition that the first input to the target object is received; the first input is used for carrying out input confirmation on the target object;
and the association module is used for associating the first character information with the target object.
Optionally, the apparatus further includes:
The second display module is used for displaying the candidate objects in the candidate display area;
The first display module 503 is specifically configured to display the target object in the candidate display area in a preset display manner; the preset display mode is different from the display mode of the candidate object.
In this embodiment, the searching module 502 searches the memory word stock for the target object corresponding to the second character information most similar to the first character information, and the first display module 503 displays the target object when it is determined that the target object appears more frequently in the memory word stock than the candidate object. In this way, when the first character information does not meet the user's expectation due to an input error, the second character information that the user intended to input is predicted and the target object corresponding to it is displayed in the candidate display area, so the user does not need to cancel the operation and re-input characters, which simplifies user operation.
The object display device in the embodiment of the application can be a device, and can also be a component, an integrated circuit or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc., and the non-mobile electronic device may be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a Television (TV), a teller machine, a self-service machine, etc., and the embodiments of the present application are not limited in particular.
The object display device in the embodiments of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The object display device provided in the embodiment of the present application can implement each process implemented by the method embodiment of fig. 1, and in order to avoid repetition, a description is omitted here.
Optionally, referring to fig. 6, fig. 6 is a block diagram of an electronic device provided by the embodiment of the present application, as shown in fig. 6, the embodiment of the present application further provides an electronic device 600, including a processor 601, a memory 602, and a program or an instruction stored in the memory 602 and capable of running on the processor 601, where the program or the instruction implements each process of the above object display method embodiment when executed by the processor 601, and the process can achieve the same technical effect, and is not repeated herein.
It should be noted that, the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: radio frequency unit 701, network module 702, audio output unit 703, input unit 704, sensor 705, display unit 706, user input unit 707, interface unit 708, memory 709, and processor 710.
Those skilled in the art will appreciate that the electronic device 700 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 710 via a power management system so as to perform functions such as managing charge, discharge, and power consumption via the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The processor 710 is configured to: obtain a candidate object corresponding to first character information, where the first character information is character information input based on a keyboard control; and search a memory word stock for a target object, where the target object is an object corresponding to second character information, and the second character information is the character information in the memory word stock with the minimum edit distance to the first character information;
And a display unit 706, configured to display the target object in a candidate display area if the first number of occurrences of the target object in the memory word stock is greater than the second number of occurrences of the candidate object in the memory word stock.
In the embodiment of the present application, the processor 710 searches the memory word stock for the target object corresponding to the second character information most similar to the first character information, and the display unit 706 displays the target object when it is determined that the target object appears more frequently in the memory word stock than the candidate object. In this way, when the first character information does not meet the user's expectation due to an input error, the second character information that the user intended to input is predicted and the target object corresponding to it is displayed in the candidate display area, so the user does not need to cancel the operation and re-input characters, which simplifies user operation.
Optionally, the processor 710 is further configured to perform character separation on the first character information to obtain a character combination result; judging whether the character combination result exists in the memory word stock or not under the condition that the character combination result meets a preset matching rule; and under the condition that the character combination result exists in the memory word stock, acquiring the candidate object corresponding to the character combination result from the memory word stock.
Optionally, the processor 710 is further configured to, in a case of receiving a first input to the target object, delete the mapping relationship if the memory word stock includes the mapping relationship between the first character information and the candidate object; the first input is used for carrying out input confirmation on the target object; and associating the first character information with the target object.
Optionally, the display unit 706 is further configured to display the candidate object in the candidate display area; displaying the target object in the candidate display area in a preset display mode; the preset display mode is different from the display mode of the candidate object.
It should be appreciated that in embodiments of the present application, the input unit 704 may include a graphics processor (Graphics Processing Unit, GPU) 7041 and a microphone 7042, with the graphics processor 7041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts, a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. Memory 709 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 710 may integrate an application processor that primarily processes operating systems, user interfaces, applications, etc., with a modem processor that primarily processes wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 710.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above-mentioned object display method embodiment, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the embodiment of the object display method, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.
Claims (10)
1. An object display method, comprising:
Acquiring a candidate object corresponding to the first character information; the candidate objects are obtained according to semantic understanding or are objects corresponding to the first character information stored in a memory word stock;
Searching a memory word stock for a target object; the target object is an object corresponding to second character information, and the second character information is the character information in the memory word stock with the minimum edit distance to the first character information;
If the first number of occurrences of the target object in the memory word stock is greater than the second number of occurrences of the candidate object in the memory word stock, displaying the target object in a candidate display area;
The method further comprises, while displaying the target object in the candidate display area:
displaying the candidate object in the candidate display area;
Wherein the displaying the target object in the candidate display area includes:
displaying the target object in the candidate display area in a preset display mode; the preset display mode is different from the display mode of the candidate object.
2. The method according to claim 1, wherein the obtaining the candidate object corresponding to the first character information includes:
character separation is carried out on the first character information, and a character combination result is obtained;
judging whether the character combination result exists in the memory word stock or not under the condition that the character combination result meets a preset matching rule;
and under the condition that the character combination result exists in the memory word stock, acquiring the candidate object corresponding to the character combination result from the memory word stock.
3. The method according to claim 2, wherein the character combination result is a pinyin combination result, and the preset matching rule is that the pinyin combination result is obtained by character separation of the first character information in a full-spelling manner and/or a simple-spelling manner.
4. The method of claim 1, wherein after the target object is displayed in the candidate display area, the method further comprises:
Under the condition that a first input to the target object is received, deleting the mapping relation if the memory word stock comprises the mapping relation between the first character information and the candidate object; the first input is used for carrying out input confirmation on the target object;
And associating the first character information with the target object.
5. An object display device, comprising:
The acquisition module is used for acquiring a candidate object corresponding to the first character information; the candidate objects are obtained according to semantic understanding or are objects corresponding to the first character information stored in a memory word stock;
The searching module is used for searching the memory word stock for a target object; the target object is an object corresponding to second character information, and the second character information is the character information in the memory word stock with the minimum edit distance to the first character information;
the first display module is used for displaying the target object in a candidate display area if the first number of occurrences of the target object in the memory word stock is greater than the second number of occurrences of the candidate object in the memory word stock;
the apparatus further comprises:
The second display module is used for displaying the candidate objects in the candidate display area;
The first display module is specifically configured to display the target object in the candidate display area in a preset display manner; the preset display mode is different from the display mode of the candidate object.
6. The apparatus of claim 5, wherein the acquisition module comprises:
the character separation unit is used for carrying out character separation on the first character information to obtain a character combination result;
The judging unit is used for judging whether the character combination result exists in the memory word stock or not under the condition that the character combination result meets a preset matching rule;
The obtaining unit is used for obtaining the candidate object corresponding to the character combination result in the memory word stock under the condition that the character combination result exists in the memory word stock.
7. The apparatus of claim 6, wherein the character combination result is a pinyin combination result, and the preset matching rule is that the pinyin combination result is obtained by character separation of the first character information in a full-spelling manner and/or a simple-spelling manner.
8. The apparatus of claim 5, wherein the apparatus further comprises:
The deleting module is used for deleting the mapping relation if the memory word stock comprises the mapping relation between the first character information and the candidate object under the condition that the first input to the target object is received; the first input is used for carrying out input confirmation on the target object;
and the association module is used for associating the first character information with the target object.
9. An electronic device comprising a processor, a memory, and a program or instruction stored on the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the object display method as claimed in any one of claims 1-4.
10. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the object display method as claimed in any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010559109.4A CN111782060B (en) | 2020-06-18 | 2020-06-18 | Object display method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010559109.4A CN111782060B (en) | 2020-06-18 | 2020-06-18 | Object display method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111782060A CN111782060A (en) | 2020-10-16 |
CN111782060B (en) | 2024-07-26 |
Family
ID=72756678
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010559109.4A Active CN111782060B (en) | 2020-06-18 | 2020-06-18 | Object display method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111782060B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110488990A (en) * | 2019-08-12 | 2019-11-22 | 腾讯科技(深圳)有限公司 | Input error correction method and device |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005182487A (en) * | 2003-12-19 | 2005-07-07 | Nec Software Chubu Ltd | Character input apparatus, method and program |
CN102184028A (en) * | 2011-04-11 | 2011-09-14 | 百度在线网络技术(北京)有限公司 | Method and equipment for acquiring candidate character strings corresponding to input key sequence |
CN102360250A (en) * | 2011-10-13 | 2012-02-22 | 广东步步高电子工业有限公司 | Memory type input method and system and mobile handheld device applying same |
CN103677299A (en) * | 2012-09-12 | 2014-03-26 | 深圳市世纪光速信息技术有限公司 | Method and device for achievement of intelligent association in input method and terminal device |
CN105204663A (en) * | 2015-10-30 | 2015-12-30 | 维沃移动通信有限公司 | Method of virtual keyboard input and terminal |
CN107102746B (en) * | 2016-02-19 | 2023-03-24 | 北京搜狗科技发展有限公司 | Candidate word generation method and device and candidate word generation device |
CN107229348B (en) * | 2016-03-23 | 2021-11-02 | 北京搜狗科技发展有限公司 | Input error correction method and device for input error correction |
CN107340880B (en) * | 2016-05-03 | 2021-11-02 | 北京搜狗科技发展有限公司 | Association input method and device and electronic equipment for realizing association input |
CN106896937A (en) * | 2017-02-28 | 2017-06-27 | 百度在线网络技术(北京)有限公司 | Method and apparatus for being input into information |
CN107329585A (en) * | 2017-06-28 | 2017-11-07 | 北京百度网讯科技有限公司 | Method and apparatus for inputting word |
CN109308126B (en) * | 2017-07-27 | 2022-09-13 | 北京搜狗科技发展有限公司 | Candidate word display method and device |
CN109976548B (en) * | 2017-12-28 | 2022-07-19 | 北京搜狗科技发展有限公司 | Input method and input device |
CN110780751B (en) * | 2019-10-25 | 2024-04-05 | 维沃移动通信有限公司 | Information processing method and electronic equipment |
2020
- 2020-06-18 CN CN202010559109.4A patent/CN111782060B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110488990A (en) * | 2019-08-12 | 2019-11-22 | 腾讯科技(深圳)有限公司 | Input error correction method and device |
Also Published As
Publication number | Publication date |
---|---|
CN111782060A (en) | 2020-10-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||