US20150100562A1 - Contextual insights and exploration - Google Patents
- Publication number: US20150100562A1
- Application number: US 14/508,431
- Authority: US (United States)
- Prior art keywords: results, query, context, attention, request
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F3/04842 — Selection of displayed objects or displayed text elements
- G06F16/24575 — Query processing with adaptation to user needs using context
- G06F16/24578 — Query processing with adaptation to user needs using ranking
- G06F16/332 — Query formulation
- G06F16/3322 — Query formulation using system suggestions
- G06F16/9535 — Search customisation based on user profiles and personalisation
- G06N3/04 — Neural networks; architecture, e.g. interconnection topology
- G06N7/01 — Probabilistic graphical models, e.g. probabilistic networks
- G06N20/00 — Machine learning
- G06N20/20 — Ensemble learning
- G06F17/3053 (legacy)
- G06F17/30867 (legacy)
Definitions
- Applications for creating or consuming content include reader applications and productivity applications such as notebooks, word processors, spreadsheets, or presentation programs. Users of these applications often research topics and rely on Internet search services to find additional information related to the content being created or consumed. To research a topic, a user will often leave the application and go to a web browser to perform a search and review the results.
- One or more queries to search services may be formulated on behalf of the application for creating or consuming content, without requiring entry of a search query directly by a user.
- Techniques and systems may leverage the context of the content the user is consuming or authoring, as well as user, device, and application metadata, to construct the queries and to organize and filter the results into relevant contextual insights.
- A method for facilitating contextual insights can include: determining a focus of attention for contextual insights from information provided with a request for contextual insights with respect to at least some text; performing context analysis to determine query terms from context provided with the request; formulating at least one query using one or more of the query terms; initiating a search by sending the at least one query to at least one search service; and organizing and filtering results received from the at least one search service according to at least some of the context.
- FIG. 1A shows an example operating environment in which certain implementations of systems and techniques for contextual insights may be carried out.
- FIGS. 1B-1E show example interactions indicating an initial selection of text for contextual insights.
- FIG. 2 illustrates an example process flow for contextual insights and exploration.
- FIG. 3 shows an example interface displaying contextual insights.
- FIG. 4 shows a block diagram illustrating components of a computing device or system used in some implementations of the described contextual insights service.
- FIG. 5 illustrates an example system architecture in which an implementation of techniques for contextual insights may be carried out.
- The contextual insights can include, without limitation, people/contact information, documents, meeting information, and advertisements that relate to a determined focus of attention (e.g., determined topics of interest) for the user.
- One or more queries to search services may be formulated by the application without requiring entry of a search query directly by a user.
- Techniques and systems may leverage the context of the content the user is consuming or authoring, as well as other context from user, device, and application metadata, to construct the queries and to organize and filter the results into relevant contextual insights.
- The techniques and systems described herein may improve a user's workflow and/or productivity while consuming or authoring content in an application for creating or consuming content.
- When a user wants to research a topic while in the application for creating or consuming content, the user does not need to move to a separate application to conduct a search.
- The techniques enable a user to immerse themselves in a topic without having to leave the application.
- Context within (or accessible by) the application for creating or consuming content can be used to provide relevant results and may reduce the number of times a user must narrow or modify a search query to achieve a relevant result.
- A “query” is a request for information from a data storage system.
- A query is a command that informs a data storage system of the “query terms” that are desired by the requestor and of the terms' relationship to one another.
- In some cases, the data storage system includes a web search engine, such as those available from a variety of search services.
- For example, a query might contain the query terms “Russian” and “Syria” and indicate that the relationship between the two query terms is conjunctive (i.e., “AND”).
- In that case, the search service may return only content having both words somewhere in the content.
- More generally, a query is a command requesting additional content or information from a search service, where the content or information is associated with specific terms (e.g., words) in the query.
- A query is sometimes written in a special formatting language that is interpretable by a search service.
- The queries may be shaped by the user's “context,” which may include both content surrounding the user's indicated interest and additional factors determined from attributes of the user, device, or application.
- “Surrounding content” refers to text or other content in a position before and/or after the user's indicated interest (e.g., selection).
- For example, while reading an article about the Syrian civil war, the user may highlight the term “China” and request contextual insights (via a separate command or as a result of the highlighting action).
- In response, the application can return information for the contextual insights that may include articles from an online encyclopedia such as “China”, “China's role in the Syrian civil war”, and “Russian-Syrian relations”. If the user instead highlights a different term, “weapons”, the returned information may be an article titled “Syria and weapons of mass destruction.” The returned information is dependent both on the user's indicated interest and on the context of the document that the user is reading.
- The contextual insights service includes functionality and logic for producing “contextual insights”: results that are related through context, not just returned from a conventional search.
- The portion of the text indicated by the user, along with additional text around the portion of the text selected by the user, is sent to the contextual insights service.
- The contextual insights service can perform a determination as to the intended item or topic for search.
- The contextual insights service can propose one or more terms found in the associated text that forms the context of the user's selection, as well as determine additional query terms or limitations that take account of contextual factors relating to the user, device, or application.
- Relevant results may be organized and filtered (including sorting and grouping) based on context and other factors.
- Techniques may be iteratively applied to progressively improve the relevance of contextual insights. Multiple modes of interacting with the contextual insights may be supported.
- FIG. 1A shows an example operating environment in which certain implementations of systems and techniques for contextual insights may be carried out.
- The example operating environment in FIG. 1A may include a client device 100, user 101, application 102, contextual insights component 105, contextual insights service 110, and one or more search services 120.
- Client device 100 may be a general-purpose device that has the ability to run one or more applications.
- The client device 100 may be, but is not limited to, a personal computer, a laptop computer, a desktop computer, a tablet computer, a reader, a mobile device, a personal digital assistant, a smart phone, a gaming device or console, a wearable computer (including one with an optical head-mounted display), a computer watch, or a smart television.
- Application 102 may be a program for creating or consuming content.
- Example applications for creating or consuming content include word processing applications such as MICROSOFT WORD; email applications; layout applications; note-taking applications such as MICROSOFT ONENOTE, EVERNOTE, and GOOGLE KEEP; presentation applications; and reader applications such as GOOGLE READER, APPLE iBooks, ACROBAT eBook Reader, MICROSOFT Reader, and AMAZON KINDLE READER, including those available on designated hardware readers.
- Contextual insights component 105 may be integrated with application 102 as an inherent feature of application 102 or as a plug-in or extension for an existing application 102 to provide the contextual insights feature. Although primarily described herein as being incorporated with application 102 at the client device 100 , contextual insights component 105 may, in some cases, be available through a separate device from the client device 100 .
- Contextual insights component 105 facilitates the interaction between the application 102 and contextual insights service 110 , for example through an application programming interface (API) of the contextual insights service 110 .
- An API is generally a set of programming instructions and standards for enabling two or more applications to communicate with each other and is commonly implemented as a set of Hypertext Transfer Protocol (HTTP) request messages and a specified format or structure for response messages according to a REST (Representational state transfer) or SOAP (Simple Object Access Protocol) architecture.
- The contextual insights component 105 may facilitate a call (or invocation) of a contextual insights service 110 using the API of the contextual insights service 110.
- The contextual insights component 105 sends a request 130 for contextual insights to the contextual insights service 110 so that contextual insights service 110 may execute one or more operations to provide the contextual insights 135, including those described with respect to FIG. 2.
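- As a rough illustration of such an API call over HTTP, the hedged Python sketch below POSTs a selection and its surrounding text to a REST-style endpoint. The endpoint URL, field names, and response shape are illustrative assumptions, not the actual interface of the contextual insights service.

```python
import json
import urllib.request

def request_contextual_insights(selection: str, surrounding_text: str) -> dict:
    """POST a contextual-insights request to a hypothetical REST endpoint."""
    payload = json.dumps({
        "selection": selection,          # user's indicated text
        "context": surrounding_text,     # content before/after the selection
    }).encode("utf-8")
    req = urllib.request.Request(
        "https://insights.example.com/v1/insights",  # hypothetical URL
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)           # parsed contextual insights
```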
- Contextual insights component 105 may also, in some cases, facilitate the presentation of contextual insights 135 for application 102 , for example, by rendering the contextual insights 135 in a user interface.
- Contextual insights service 110 receives the request 130 for contextual insights and generates contextual insights 135 .
- The request 130 may contain text, text markup, and/or other usage context from application 102.
- The contextual insights service 110 may process the request via one or more components, shown in FIG. 1A as smart selection 131, context analysis 132, and query formulation 133.
- The contextual insights service 110 may direct one or more requests to one or more search service(s) 120, and may interpret or manipulate the results received from search service(s) 120 in a post-processing component 134 before returning contextual insights 135 to client device 100 via contextual insights component 105.
- The contextual insights service 110 can perform a determination of the user's intended content selection based on information provided by the contextual insights component 105, analyze the context of the content selection with respect both to other content the user is perusing and to various device and user metadata, and construct and send one or more queries requesting a search from one or more search services 120.
- These operational aspects of the contextual insights service, including result post-processing, are discussed in more detail with respect to FIG. 2 .
- The contextual insights service 110 may determine that contextual insights can be further optimized after the result post-processing component 134 has run. Another iteration of the processing stages of smart selection 131, context analysis 132, and/or query formulation 133 might then be executed to produce improved insights through the modification of query terms.
- While sub-components of contextual insights service 110 are depicted in FIG. 1A (i.e., smart selection 131, context analysis 132, query formulation 133, and result post-processing 134), this arrangement of the contextual insights service 110 into components is exemplary only; other physical and logical arrangements of a contextual insights service capable of performing the operational aspects of the disclosed techniques are possible. Further, it should be noted that aspects of a contextual insights service 110 may be implemented on more than one device. In some cases, a contextual insights service 110 may include components located on user devices and on one or more services implemented on separate physical devices.
- Search service(s) 120 may take myriad forms.
- A familiar kind of search service is a web search engine such as, but not limited to, MICROSOFT BING and GOOGLE.
- However, any service or data storage system that may be queried for content appropriate to contextual insights may be a search service 120.
- A search service may also be built to optimize for the queries and context patterns in an application so that retrieval of information may be further focused and/or improved.
- An “intranet” search engine implemented on an internal or private network may be queried as a search service 120; an example is MICROSOFT FAST Search.
- A custom company knowledge-base or knowledge management system, if accessible through a query, may be a search service 120.
- A custom database implemented in a relational database system (such as MICROSOFT SQL SERVER) that has the capability to do textual information lookup may be a search service 120.
- A search service 120 may even access information such as a structured file in Extensible Markup Language (XML) format or a text file having a list of entries. Queries by the contextual insights service 110 to the search service(s) 120 may in some cases be performed via API.
- A request for contextual insights 130 may contain a variety of cues for the contextual insights service 110 that are relevant to generating contextual insights.
- The contextual insights component 105 generates and sends the request 130 to the contextual insights service 110 based on an indication by a user 101.
- The request for contextual insights 130 may be initiated by a user 101 interacting with an application 102 on client device 100.
- Content in the form of a document (of any format type), article, picture (e.g., one that may or may not undergo optical character recognition), book, and the like may be created or consumed (e.g., read) by a user 101 via the application 102 running on the client device 100.
- A user may interact with the content and/or an interface to application 102 to indicate that a request for contextual insights 130 is desired.
- Contextual insights component 105 may interact with application 102, client device 100, and even other applications or user-specific resources to generate and send the request 130 to the contextual insights service 110 in response to the indication by the user 101 for the request 130.
- A user can indicate an initial selection of text for contextual insights.
- A user may indicate an interest in certain text in, for example, a document, email, notes taken in a note-taking application, e-book, or other electronic content.
- The indication of interest does not require the entering of search terms into a search field.
- However, a search box may be available as a tool in the application so that a user may enter terms or a natural language expression indicating a topic of interest.
- Interaction by the user 101 indicating the initial text selection may take myriad forms.
- The input indicating an initial text selection can include, but is not limited to, a verbal selection (of one or more words or phrases), contact or contact-less gestural selection, touch selection (finger or stylus), swipe selection, cursor selection, encircling using a stylus/pen, or any other available technique that can be detected by the client device 100 (via a user interface system of the device).
- Contextual insights may also initiate without an active selection by a user.
- The user 101 may, for instance, utilize a device which is capable of detecting eye movements.
- The device may detect that the user's eye lingers on a particular portion of content for a length of time, indicating the user's interest in selecting the content for contextual insights.
- Likewise, a computing device capable of detecting voice commands can be used to recognize a spoken command to initially select content for contextual insights.
- Many other user interface elements, as diverse as drop-down menus, buttons, a search box, or right-click context menus, may signify that the user has set an initial text selection. Further, it can be understood that an initial text selection may involve some or all of the text available on the document, page, or window.
- FIGS. 1B-1E show example interactions indicating an initial selection of text for contextual insights.
- The contextual insights component provides the selection, as well as context including content before and/or after the selection, as part of the request. Therefore, the indication by the user of text for contextual insight and exploration may be of varying specificity.
- A word (or phrase) 151 may be selected.
- The selection of a word (or phrase) may be a swipe gesture 152 on a touch-enabled display screen such as illustrated in FIG. 1B.
- Other gestures such as insertion point, tap, double tap, and pinch could be used.
- Non-touch selection of a word (as well as cursor selection of the word) may also be used as an indication.
- A cursor 153 may be used to indicate, for example via a mouse click, a point on the content surface of the user interface 150.
- The cursor 153 may be placed within a term without highlighting a word or words.
- A similar selection may be conducted by touch (e.g., using a finger or pen/stylus) or even by eye gaze detection. This type of selection may be referred to as a selection of a region.
- An initial selection may include a contiguous series of words (a phrase).
- For example, multiple words may be “marked” by the user using interface techniques such as illustrated in FIG. 1D, where a cursor 154 is shown selecting multiple words 155 of a sentence.
- The user is not limited to selecting a particular amount of text.
- Multiple, non-contiguous words or phrases may be selected by highlighting, circling, or underlining with a digital stylus.
- Multiple words or phrases of interest also may be prioritized by the user. For example, one word or phrase may be marked as the primary text selection of interest, and other related words may be marked as supporting words or phrases which are of secondary, but related, interest.
- Several words 156 may be indicated on user interface 150.
- The input for initial text selection may also be discerned from passive, rather than active, interactions by the user. For example, while the user is scrolling through the text rendered by an application, a paragraph on which the user lingers for a significant time might constitute an initial text selection.
- If the client device allows the user's eye movements to be tracked, words or phrases on which the user's eye lingers may form the input for initial text selection.
- In some cases, the entire document, window, or page may be considered to be selected based on a passive interaction.
- Additional information may be sent as part of the request 130 containing the user's indicated initial text selection.
- The additional information may be used by the contextual insights service 110 to improve the relevance or clarity of searches directed by the initial text selection.
- The additional information may vary by embodiment and scenario, but in some embodiments will include such information as the text surrounding the selection (which can also be referred to as an expanded portion of text, for example, a certain number of symbols or characters before and/or after the selection), information about the application in which the content is displayed, information about the device on which the application runs, and information about the specific user. This information may be referred to herein as “application metadata,” “device metadata,” and “user metadata,” respectively.
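- As a rough sketch, the hedged Python dataclass below models one plausible shape for such a request; the field names are assumptions for illustration, since the description only enumerates the kinds of information involved.

```python
from dataclasses import dataclass, field

@dataclass
class InsightsRequest:
    """Hypothetical shape of a request 130 for contextual insights."""
    selection: str                                             # user-indicated text selection
    surrounding_text: str                                      # expanded portion before/after the selection
    application_metadata: dict = field(default_factory=dict)   # e.g., application type
    device_metadata: dict = field(default_factory=dict)        # e.g., device type, screen size
    user_metadata: dict = field(default_factory=dict)          # e.g., locale, privacy settings
```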
- The contextual insights service 110 can then return contextual insights 135 to the user.
- Contextual insights component 105 may operate to render, or to facilitate the application 102 in rendering or displaying, one or more user interfaces to show the contextual insights to the user on a client device 100.
- FIG. 2 illustrates an example process flow for contextual insights and exploration.
- A contextual insights service 110, such as described with respect to FIG. 1A, may implement the process.
- An indication of a request for contextual insights with respect to at least some text may be received (201).
- The request can include a selection such as described with respect to FIGS. 1B-1E and context including content before and/or after the selection.
- The focus of attention for the contextual insights may be determined from information provided with the request (202), for example by the smart selection component 131 of contextual insights service 110 of FIG. 1A.
- The “focus of attention” refers to the concept (or “topic”) that the user is understood to want to explore and gain contextual insights about.
- A user's selection of text may, on its own, sufficiently indicate the focus of attention.
- However, the user may improperly or incompletely indicate a focus of attention, for example by indicating a word that is near to but not actually the focus of attention, or by indicating only one word of a phrase that consists of multiple words.
- In such cases, the focus of attention may need to be adjusted from the selection indicated with the request.
- A variety of techniques may be used to predict candidates for the user's intended focus of attention based on a given user selection and the surrounding text or content. These may include, for example, iterative selection expansion, character n-gram probabilities, term frequency-inverse document frequency (tf-idf) information for terms, and capitalization properties. In some implementations, more than one technique may be used to select one or more candidate foci of attention. Candidate foci of attention determined from these multifarious techniques may then be scored and ranked by the contextual insights service 110, or the smart selection component 131 thereof, to determine one or more likely foci of attention from among multiple possibilities.
- Smart selection component 131 may iteratively determine for every current selection whether the selection should be expanded by one character or word to the right or to the left.
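- The hedged sketch below illustrates one way such iterative expansion could work at the word level (the description also contemplates character-level steps); the score() callback is a placeholder assumption standing in for any of the ranking signals described here, such as anchor-text or n-gram statistics.

```python
def expand_selection(words, start, end, score):
    """Grow the span [start:end) one word left or right while the score improves."""
    best = (start, end)
    improved = True
    while improved:
        improved = False
        for s, e in ((best[0] - 1, best[1]), (best[0], best[1] + 1)):
            if 0 <= s and e <= len(words) and score(words[s:e]) > score(words[best[0]:best[1]]):
                best, improved = (s, e), True
    return " ".join(words[best[0]:best[1]])
```

- For example, given the sentence “the Russian Federation announced...” and an initial selection of “Federation”, a suitable score() could drive the expansion to “Russian Federation”.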
- The smart selection component 131 may rank or score candidates for selection using “anchor texts” that may be obtained from an online encyclopedia or knowledge-base.
- Anchor texts, sometimes known as “link titles,” are text descriptions of hyperlinks. Anchor texts may give the user relevant descriptive or contextual information about the content at the hyperlink's destination. Anchor texts form a source of words and phrases that are positively correlated with one another as related concepts. Examples of online encyclopedias and knowledge bases are MICROSOFT ENCARTA, ENCYCLOPEDIA BRITANNICA, and WIKIPEDIA.
- Character n-gram probabilities are based on n-gram models, a type of probabilistic language model for predicting the next item in a sequence of characters, phonemes, syllables, or words.
- A character n-gram probability may allow prediction of the next character that will be typed based on a probability distribution derived from a training data set.
- A smart selection component 131 may be trained using machine learning techniques via character n-gram probability data from anchor texts.
- A smart selection component 131 may also interact with or use available commercial or free cloud-based services providing n-gram probability information.
- An example of a cloud-based service is “Microsoft Web N-gram Services.” This service continually analyzes all content indexed by the MICROSOFT BING search engine. Similar services are available from GOOGLE's N-gram corpus.
- A cloud-based service may include the analysis of search engine logs for the words that internet users add or change to disambiguate their searches. Smart selection component 131 may interoperate with such a cloud-based service via API.
- In some implementations, tf-idf techniques may be used in a smart selection component 131.
- Tf-idf is a numerical statistic intended to reflect how important a word is to a document in a collection or corpus.
- The tf-idf value increases in proportion to the number of times a term (e.g., a word or phrase) appears in a document, but is negatively weighted by the number of documents that contain the word, in order to control for the fact that some words are generally more common than others.
- One way of using tf-idf techniques is by summing tf-idf values for each term in a candidate focus of attention.
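- A minimal sketch of that summation, assuming raw term counts for tf and a smoothed logarithmic idf (one common variant among several):

```python
import math
from collections import Counter

def tfidf_sum(candidate_terms, document_terms, corpus):
    """Score a candidate focus of attention by summing tf-idf over its terms."""
    tf = Counter(document_terms)                      # term frequencies in this document
    n_docs = len(corpus)
    total = 0.0
    for term in candidate_terms:
        df = sum(1 for doc in corpus if term in doc)  # document frequency in the corpus
        idf = math.log((1 + n_docs) / (1 + df)) + 1   # smoothed inverse document frequency
        total += tf[term] * idf
    return total
```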
- Capitalization properties of terms may be used to identify nouns or noun phrases as focus of attention candidates.
- Capitalization properties may be used both to rank the importance of certain terms and as further scoring filters when final rankings are calculated.
- Other implementations may additionally use dictionary-based techniques to identify a large dictionary of known, named entities, such as the titles of albums, songs, movies, and TV shows.
- A natural language analyzer can also be used to identify the part of speech of words, term boundaries, and constituents (noun phrases, verb phrases, etc.). It should be noted that the techniques described for predicting the focus of attention are examples and are not intended to be limiting.
- Scoring data from the various described techniques, and others, may be used to produce candidate foci of attention from a user-indicated focus of attention.
- The scores may be assembled by the smart selection component 131, and scores assigned by one or more of these techniques may be compiled, averaged, and weighted.
- The scores may be further modified by the capitalization and stop-word properties of the words in the candidate focus of attention (stop-words are semantically irrelevant words, such as the articles “a,” “an,” and “the”).
- A final score and ranking for each candidate focus of attention may then be calculated, which may be used to find the top candidate focus (or foci) of attention for a given user selection.
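- A hedged sketch of that compilation step; the weights and the specific capitalization and stop-word adjustments below are illustrative assumptions, not values given in the description:

```python
STOP_WORDS = {"a", "an", "the"}

def final_score(candidate: str, technique_scores: dict, weights: dict) -> float:
    """Combine weighted per-technique scores, then adjust for word properties."""
    score = sum(weights.get(name, 1.0) * s for name, s in technique_scores.items())
    words = candidate.split()
    if all(w.lower() in STOP_WORDS for w in words):
        return 0.0                        # all stop-words: semantically irrelevant
    if any(w[:1].isupper() for w in words):
        score *= 1.2                      # capitalized words often mark named entities
    return score

def rank_candidates(candidates: dict, weights: dict) -> list:
    """candidates maps each candidate string to its per-technique scores."""
    return sorted(candidates, key=lambda c: final_score(c, candidates[c], weights), reverse=True)
```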
- The initial text selection provided with the request may be referred to as a “user-indicated” focus of attention.
- The request can include a range of text or content before and/or after the user-indicated focus of attention.
- The user-indicated focus of attention may then be analyzed for expansion, contraction, or manipulation to find an intended focus of attention, in response to rankings emerging as the various predictive techniques are applied.
- One or more foci of attention may be chosen that may be different from the user's indicated foci of attention.
- Next, context analysis may be performed to determine query terms for formulating a query (203).
- Query items, including operators such as OR, NOT, and BOOST, as well as meta-information (e.g., derived from user metadata) such as the user's location (if available through privacy permissions), time of day, client device, and the like, may also be determined so as to facilitate the generation of the queries.
- Context analysis may identify representative terms in the context that can be used to query the search engine in conjunction with the focus of attention. Context analysis may be performed, for example, by a context analysis component 132 such as described with respect to FIG. 1A .
- Context analysis is a technique by which a query to a search service (e.g., one or more of search services 120) may be refined to become more relevant to a particular user.
- Various forms of context may be analyzed, including, for example: the content of the article, document, e-book, or other electronic content a user is reading or manipulating (including techniques by which to analyze content for its contextual relationship to the focus of attention); application and device properties; and metadata associated with the client device user's identity, locality, environment, language, privacy settings, search history, interests, or access to computing resources.
- The content of the article, document, e-book, or other electronic content with which a user is interacting is one possible aspect of the “context” that may refine a search query.
- For example, a user who selected “Russian Federation” as a focus of attention may be interested in different information about Russia when reading an article about the Syrian civil war than when reading an article about the Olympics.
- In these cases, the query terms might be modified from “Russian Federation” (the user-indicated focus) to “Russian Federation involvement in Syrian civil war” or “Russian Federation 2014 Sochi Olympics,” respectively.
- The electronic content surrounding the focus of attention may undergo context analysis to determine query terms in one or more of a variety of ways.
- In some cases, the entire document, article, or e-book may be analyzed for context to determine query terms.
- In other cases, the electronic content undergoing context analysis may be less than the entire document, article, or e-book.
- The amount and type of surrounding content analyzed for candidate context terms may vary according to application, content type, and other factors.
- The contextually analyzed content may be defined by a range of words, pages, or paragraphs surrounding the focus of attention.
- For an e-book, the content for contextual analysis may be limited to only that portion of the e-book that the user has actually read, rather than the unread pages or chapters.
- The content for contextual analysis may also include the title, author, publication date, index, table of contents, bibliography, or other metadata about the electronic content.
- The contextual insights component 105 at the client may be used to determine and/or apply the rules for the amount of contextual content provided in a request to a contextual insights service.
- Context analysis of an appropriate range of content surrounding the focus of attention may be conducted in some implementations by selecting candidate context terms from the surrounding content and analyzing them in relation to a focus of attention term. For example, a technique that scores candidate context terms independently of each other but in relation to a focus of attention term may be used. The technique may determine a score for each pair of focus-candidate context terms and then rank the scores.
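- A hedged sketch of such pairwise scoring; relatedness() is a placeholder assumption standing in for any of the pairwise signals discussed below (query-log heuristics, n-gram services, anchor-text statistics), and the distance penalty anticipates the distance factor also discussed below:

```python
def rank_context_terms(focus, candidates, positions, focus_pos, relatedness):
    """Score each candidate context term against the focus of attention and rank them."""
    scored = []
    for term in candidates:
        distance = abs(positions[term] - focus_pos)       # words between term and focus
        pair_score = relatedness(focus, term) / (1 + distance)
        scored.append((pair_score, term))
    return [term for _, term in sorted(scored, reverse=True)]
```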
- The relevance of the relationship between a candidate term from the surrounding content and the focus of attention may be analyzed with reference to the query logs of search engines.
- The query logs may indicate, using heuristics gathered from prior searches run by a multiplicity of users, that certain relationships between focus of attention terms and candidate terms from the surrounding content are stronger than others.
- A context analysis component 132 may be trained on the terms by culling term relationships from web content crawls.
- The strength of a relationship between terms may also be available as part of a cloud-based service, such as the “Microsoft Web N-gram Services” system discussed above, from which relative term strengths may be obtained, for example via API call or other communication mechanism.
- A candidate context term may be part of a dictionary of known, named entities such as the titles of albums, songs, movies, and TV shows; if the candidate context term is a named entity, the relevance of the candidate term may be adjusted.
- Distance between the candidate context term and the focus of attention may also be considered in context analysis. Distance may be determined by the number of words or terms interceding between the candidate context term and a focus of attention.
- The relevance of a candidate context term with reference to focus of attention terms may also be determined with respect to anchor text available from an online knowledge-base.
- Statistical measurements of the occurrence frequencies of terms in anchor texts may indicate whether candidate terms and focus of attention terms are likely to be related, or whether the juxtaposition of the terms is random.
- Highly entropic relationship values between the candidate context term and the focus of attention term(s) in anchor text may signify that the candidate context term is a poor choice for a query term.
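- One standard statistic for this kind of co-occurrence test (not named in the description, so this is an assumption) is pointwise mutual information over anchor-text counts; a PMI near zero suggests the juxtaposition is essentially random:

```python
import math

def pmi(count_both: int, count_a: int, count_b: int, total: int) -> float:
    """Pointwise mutual information of two terms from anchor-text co-occurrence counts."""
    if count_both == 0:
        return float("-inf")              # the terms never co-occur
    p_ab = count_both / total             # joint probability
    p_a = count_a / total                 # marginal probability of term a
    p_b = count_b / total                 # marginal probability of term b
    return math.log2(p_ab / (p_a * p_b))  # ~0 when co-occurrence looks random
```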
- Some techniques of context analysis may use metadata associated with the application, device, or user in addition to (or in lieu of) the gathering and analysis of terms from the content surrounding a focus of attention. These techniques may be used by context analysis component 132 to refine, expand, or reduce the query terms selected for the search query.
- The type of application 102 or device 100 may be a factor in context analysis. For example, if a user is writing a paper in a content authoring application such as a word processor, then context analysis for query terms may be different than the analysis would be for a reader application.
- In the word processor case, context analysis may determine via the application type that a narrower focus for query terms is appropriate, perhaps limiting query terms to definitions and scholarly materials. In the case of the reader, more interest-based and informal materials may be appropriate, so candidate query terms can be more wide-ranging.
- Factors derived from user device metadata may also be considered in certain implementations.
- The type of user device may be a factor in the query terms determined from context analysis. For example, if the user device is a phone-sized mobile device, then candidate context terms may be selected from a different classification than those selected if the user device were a desktop computer. In the case of the small mobile device, a user's interests may be more casual, and the screen may have less space, so candidate terms which produce more summarized information may be selected. Further, context analysis may consider device mobility by selecting candidate terms related to nearby attractions. In contrast, if the user device is a desktop device, then the user may be at work and want more detailed and informative results; query terms might be added which obtain results from additional sources of information.
- Factors derived from user metadata may also be used as part of context analysis to define query terms.
- One such factor may be the type of user, e.g., whether the user's current role is as a corporate employee or a consumer.
- The type of user may be determined, for example, by the internet protocol (IP) address from which the user is accessing a communications network.
- In the former case, work-oriented query terms may be preferentially selected by the context analysis component; in the latter case, more home- or consumer-related terms may be preferred.
- The user type may also determine the availability of computing resources such as a company knowledge management system accessible via a company intranet. Availability of company-related resources might enable a context analysis component to select query terms targeted toward such specialized systems.
- Another factor in context analysis may be the user's history of prior searches or interests.
- The historical record of previous foci of attention selected by the user may be analyzed to generate or predict candidate query terms.
- These candidate terms might be refined or ranked with respect to the user's current foci of attention using techniques similar to those described with respect to candidate terms from surrounding content, e.g., by using n-gram services or anchor text analysis.
- Candidate terms may be selected by the context analysis engine on the basis of prior user internet searches. The historical record of these searches may generate or predict candidate query terms. Similarly, internet browser cookies or browser history of websites visited may be used to discern user interests which may predict or refine candidate terms. Candidate terms generated may be ranked or refined using similar techniques to those described above with respect to historical foci of attention terms.
- Other factors which may be analyzed during the context analysis component's determination of query terms might be the time of day that the user is requesting contextual insights and the current geographic locality of the client device.
- User profile and demographic information such as age, gender, ethnicity, religion, profession, and preferred language may also be used as factors in query term determination. It should be noted that, in some implementations, privacy settings of the user may impact whether user profile metadata is available for context analysis and to what extent profile metadata may be used.
- Next, a query may be formulated using one or more of the query terms (204).
- Query formulation may include a pre-processing determination in which a mode of operation is decided with reference to user preferences; the mode of operation may inform which context-related terms are used to formulate the query.
- Query formulation may include the assembly of the actual queries that may be sent to one or more search services. Query formulation may be performed, for example, by a query formulation component 133 described with respect to FIG. 1A .
- The query formulation component 133 may engage in a pre-processing determination in which a mode of operation is decided with reference to user preferences.
- The mode of operation may determine one or more classes of appropriate or desirable search results. For example, two modes of operation may be “lookup” and “exploration.”
- A “lookup” mode may give targeted results directed narrowly toward a focus of attention (e.g., a dictionary lookup).
- An “exploration” mode may give more general search results, and, for example, present several options to the user with respect to which search results or topics to further explore.
- Other modes of operation representing different classes of search result are possible, as are scenarios in which multiple modes of operation are provided.
- An operation of the query formulation component may be to determine to what extent the query terms from the contextual analysis phase may take over from or supersede the user's indicated/determined foci of attention (or explicit search query, if the user provided one).
- A mode of operation may be selected by the user actively, such as by affirmative selection of the mode, or passively, such as based on some factor determined from user, device, or application metadata.
- A mode of operation may also be determined by the query formulation component 133 based on outcomes from context analysis or other factors. For example, the query formulation component 133 may determine which mode of operation to use based on ambiguity of a focus of attention. If, during or after context analysis, the contextual insights service determines that, because of ambiguity in the focus of attention, terms or results cannot be acceptably narrowed for a lookup mode, an exploration mode may be chosen.
- The query formulation component 133 may determine that certain additional context terms may return search results that inappropriately overwhelm the focus of attention.
- The query formulation component 133 may also modify query terms that may be likely to return adult or offensive content; user profile metadata (e.g., age of the user) may be a factor in such a modification of query terms.
- Contextual insights service 110 may make this determination, for example, by formulating and sending one or more probative queries to search services. Probative queries may enable the contextual insights service 110 to preview search results for several trial formulations of query terms so that terms added by context analysis may be adjusted or modified.
- Query formulation may include the assembly of actual queries that may be sent to one or more search services.
- A query formulation component may assemble and send a single query, consisting of one or more query terms joined conjunctively, to a single search service.
- Alternatively, the query formulation component may, based on a determined need for different classes of search results, formulate disjunctive queries, formulate separate queries with differing terms, split queries into multiple execution phases, and/or send different queries to different search services.
- Query terms may also be ordered in a particular sequence to obtain particular search results.
- The query formulation component 133 may determine that a particular focus of attention and context analysis reveal query terms that may be best presented to the user in segmented fashion. In such a case, the query formulation component 133 may construct a disjunctive query of the form “focus-term AND (context-term1 OR context-term2 OR . . . )”. Moreover, the query formulation component 133 may sometimes construct multiple queries: a query that targets the focus of attention more narrowly, and one or more queries that target exploratory search results on a handful of related themes. In some cases, a query may be targeted toward a particular search service in order to retrieve results from a given class.
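- A minimal sketch of assembling that disjunctive query form; the quoting and operator syntax below assume a typical keyword query language rather than any specific search service:

```python
def build_query(focus_term: str, context_terms: list) -> str:
    """Build a query of the form: focus-term AND (context-term1 OR context-term2 OR ...)."""
    if not context_terms:
        return f'"{focus_term}"'
    disjunction = " OR ".join(f'"{t}"' for t in context_terms)
    return f'"{focus_term}" AND ({disjunction})'

# Example:
# build_query("Russian Federation", ["Syrian civil war", "sanctions"])
# -> '"Russian Federation" AND ("Syrian civil war" OR "sanctions")'
```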
- Query formulation can be carried out based on intended search services so that the contextual insights service initiates a search by sending a query to one or more search services ( 205 ).
- The search may occur when contextual insights service 110, or some component thereof (e.g., query formulation component 133), issues or sends the query to one or more search services 120 as described in FIG. 1A.
- Results of the search may then be received (206).
- In some cases, the search query will be issued, and search results will be returned, via an API call to the search service, as noted with respect to FIG. 1A.
- Multiple sets of search results may be received.
- The results may be organized and filtered according to at least some of the context (207).
- The contextual insights service 110, or some component thereof (e.g., result post-processing component 134 described in FIG. 1A), may receive the results and/or perform the organizing and filtering operations.
- Organization and filtering of the results may include, for example: ranking of results according to various criteria; assembly, sorting, and grouping of result sets, including those from multiple queries and/or multiple search services; and removal of spurious or less relevant results.
- Organization and filtering of the results may include ranking of results according to various criteria; some of the criteria may be determined from context.
- The result post-processing component may assess aspects of search results received from the search service according to varied techniques. Assessments of rank emerging from one or more techniques may be used in concert with one another; some of the techniques may be weighted according to their aptitude for producing relevant answers in a given context.
- Search results may be received from a search service with a ranking position; such a ranking may constitute a natural starting point for determining relevance.
- Another technique may include a linguistic assessment of how closely the title or URL of a search result matches the query; e.g., if words in the title are an almost exact match to terms in the query, the result may be more relevant.
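- A hedged sketch of such a title-match assessment blended with the engine's own rank position; the term-overlap measure and the 50/50 blend are illustrative assumptions:

```python
def relevance(result_title: str, query_terms: list, engine_rank: int) -> float:
    """Blend query-term overlap in the title with the engine's rank position."""
    title_words = set(result_title.lower().split())
    overlap = sum(1 for t in query_terms if t.lower() in title_words)
    title_score = overlap / len(query_terms) if query_terms else 0.0
    rank_score = 1.0 / engine_rank        # rank 1 scores highest
    return 0.5 * title_score + 0.5 * rank_score
```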
- Factors determined from context analysis may also be applied in the result post-processing phase to assist in congruency of the search results with respect to context. For example, results may be assessed to ensure that they are congruous with user profile metadata. Results that are age-inappropriate, for example, might be removed entirely; in other cases, results that may be more appropriate to a user's location may be ranked higher by the result post-processing component 134 than other results.
- Factors such as whether a search result corresponds to a disambiguation page (e.g., on Wikipedia), the length of the query, the length of the context, and other query performance indicators may also be used in an assessment of search result relevance.
- The manner in which results are received may also be relevant to organizing and filtering the results for contextual insights. For example, when multiple result sets from several queries or search services have been received, the results may be grouped or re-sorted. Furthermore, when there is disagreement or lack of congruity between different search services, determinations may be needed as to which result sets to prioritize.
- Multiple queries may have been issued by the query formulation component 133, perhaps to multiple search services.
- The queries themselves may naturally reflect intended groupings of result sets. For example, if a focus of attention relates to a geographic location and the query formulation component 133 directed a query specifically at a search service with travel information, search results returned from that service may be grouped together under a “Travel” category by the result post-processing component 134. Similarly, if the query formulation component 133 had issued a separate query to a search service focusing on the history of the geographic location, those results may also be grouped together. Naturally, search results may also be ranked and refined within each group or category.
- Result sets returned from different queries may be reconsolidated by the result post-processing component 134.
- Results received from a search service as a single result set may be segmented and grouped more logically, for example according to topic, domain of the website having the result, type of result (e.g., text, photos, multimedia), content rating, or other criteria.
- Some implementations may include the detection of result thresholds by the result post-processing component 134 .
- Result “thresholds” are juncture points in one or more results or sets of results that indicate that particular groups of results may be related to one another, such as by relevance or by category/topic. These thresholds may be used to group, refine, remove, or re-sort results.
- Result thresholds may sometimes be used to determine how many insights 135 to return from a given contextual insights request 130 .
- Characteristics of a given result threshold may be adapted to user, application, or device metadata (e.g., the size of the device screen).
- Result thresholds may be recognized from patterns that allow detection of content groups. Examples of patterns include when several results show similarities (or dissimilarities) in their title, site origin, or ranking scores.
- When such a pattern is recognized, the result post-processing component 134 may group those results together as a single result or segmented category of results. For example, multiple postings of a similar news release to various websites may be determined to have very similar titles or brief descriptions; the result post-processing component 134 may recognize the pattern and either group or truncate the news releases into a single insight.
- Result thresholds may be detected from patterns of disagreement between sources. For instance, a level of entropy—the degree to which there is or is not overlap between results returned by different sources—may indicate a result threshold. If, for example, results from one source have a low overlap with the results from another source, this pattern may indicate that the results may be separated into multiple groups having different contextual insights.
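- As a hedged illustration of detecting such a threshold from source disagreement, the sketch below uses Jaccard overlap between the URL sets returned by two sources; the measure and the 0.2 cutoff are assumptions chosen for the example:

```python
def should_split(results_a: list, results_b: list, cutoff: float = 0.2) -> bool:
    """Split results into separate insight groups when two sources barely overlap."""
    urls_a = {r["url"] for r in results_a}
    urls_b = {r["url"] for r in results_b}
    if not urls_a or not urls_b:
        return False
    jaccard = len(urls_a & urls_b) / len(urls_a | urls_b)
    return jaccard < cutoff               # low overlap suggests distinct groups
```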
- Certain functions of the contextual insights service 110 may be executed repeatedly to determine appropriate contextual insights (for example, an adjustment may be made at operation 202 and the processes repeated).
- The result post-processing component 134 may determine that another iteration of the processing stages of smart selection, context analysis, and/or query formulation might produce improved insights.
- In such an iteration, the focus of attention, context terms from content and metadata, and query terms may be modified.
- A pattern of disagreement between sources might occur, for instance, when the formulated query terms were ambiguous with respect to one or more of the sources or search services. If, for example, a request is made for contextual insights about “John Woo,” and John Woo is both a prominent movie director and a high-ranking statesman at the United Nations, at least two distinct patterns of results would be returned. A further iteration of processing using additional context or a modified focus of attention may be used to determine the most relevant insights. The homograph “row” (a homograph is each of two or more words spelled the same but not necessarily pronounced the same and having different meanings and origins) could produce a similar divergence of results.
- Finally, contextual insights 135 may be returned (208) to the calling component 105 or client device 100 by the contextual insights service 110.
- FIG. 3 shows an example interface displaying contextual insights.
- The example interface is provided to illustrate one way that contextual insights 135 may be displayed on the user device 100.
- An example interface such as the one shown in FIG. 3 may be generated by the application 102, or rendered by a contextual insights component 105 in cooperation with the application 102 or device 100.
- The example is for illustrative purposes only and is not intended to limit the ways and varieties in which contextual insights may be organized and filtered by the contextual insights service 110.
- A contextual insights preview 300 can be displayed non-obtrusively atop the existing application surface 301, only partly obscuring the content displayed in the application surface 301.
- A quick summary can be provided that may include a title 302 (as provided by the identified text 303), an image (still or moving) 304 (if available), and summary text 305 (if available).
- The contextual insights preview 300 may preview various modes of operation that may form groupings in the contextual insights, or various other groupings 320 determined by the contextual insights service 110.
- The relevant results 310 can be grouped into modules 320 that may be indicative of modes of operation or other groupings.
- In the example shown, the results 310 are grouped by source.
- “Source,” in this context, may mean a network location, website, type of application, type of result (such as an image) or other logical method of grouping results.
- Some examples of sources might be the Wikipedia online encyclopedia; a local network source, such as an internal web server and/or social graph, privately available to the users in a company; a particular news website; image files from a photo-sharing website; structured data from a database; or private files on the user's drives or personal cloud storage.
- The modular groupings may be displayed differently based on contextual information about the user. For example, a user at home may receive consumer- or entertainment-oriented information sources. The same user might receive different groupings (and, as noted above, different results) when at work. Many such forms of groupings are possible. In some cases, as noted, the groupings or modules may be formed by the strength of the relationship between focus of attention terms or concepts and context terms. These aspects were discussed with respect to FIG. 2.
- FIG. 4 shows a block diagram illustrating components of a computing device or system used in some implementations of the described contextual insights service.
- Any computing device operative to run a contextual insights service 110, or intermediate devices facilitating interaction between other devices in the environment, may be implemented as described with respect to system 400, which can itself include one or more computing devices.
- The system 400 can include one or more blade server devices, standalone server devices, personal computers, routers, hubs, switches, bridges, firewall devices, intrusion detection devices, mainframe computers, network-attached storage devices, and other types of computing devices.
- The hardware can be configured according to any suitable computer architecture, such as a Symmetric Multi-Processing (SMP) architecture or a Non-Uniform Memory Access (NUMA) architecture.
- the system 400 can include a processing system 401 , which may include a processing device such as a central processing unit (CPU) or microprocessor and other circuitry that retrieves and executes software 402 from storage system 403 .
- Processing system 401 may be implemented within a single processing device but may also be distributed across multiple processing devices or sub-systems that cooperate in executing program instructions.
- Examples of processing system 401 include general-purpose central processing units, application-specific processors, and logic devices, as well as any other type of processing device, or combinations or variations thereof.
- the one or more processing devices may include multiprocessors or multi-core processors and may operate according to one or more suitable instruction sets including, but not limited to, a Reduced Instruction Set Computing (RISC) instruction set, a Complex Instruction Set Computing (CISC) instruction set, or a combination thereof.
- Storage system 403 may comprise any computer readable storage media readable by processing system 401 and capable of storing software 402 including contextual insights components 404 (such as smart selection 131 , context analysis 132 , query formulation 133 , and result post processing 134 ).
- Storage system 403 may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.
- Examples of storage media include random access memory (RAM), read only memory (ROM), magnetic disks, optical disks, CDs, DVDs, flash memory, solid state memory, phase change memory, or any other suitable storage media. Certain implementations may involve either or both virtual memory and non-virtual memory. In no case do storage media consist of a propagated signal.
- storage system 403 may also include communication media over which software 402 may be communicated internally or externally.
- Storage system 403 may be implemented as a single storage device but may also be implemented across multiple storage devices or sub-systems co-located or distributed relative to each other. Storage system 403 may include additional elements, such as a controller, capable of communicating with processing system 401 .
- Software 402 may be implemented in program instructions and among other functions may, when executed by system 400 in general or processing system 401 in particular, direct system 400 or processing system 401 to operate as described herein for enabling contextual insights.
- Software 402 may provide program instructions 404 that implement a contextual insights service.
- Software 402 may implement, on system 400, components, programs, agents, or layers that implement, in machine-readable processing instructions 404, the methods described herein as performed by the contextual insights service.
- Software 402 may also include additional processes, programs, or components, such as operating system software or other application software.
- Software 402 may also include firmware or some other form of machine-readable processing instructions executable by processing system 401 .
- software 402 may, when loaded into processing system 401 and executed, transform system 400 overall from a general-purpose computing system into a special-purpose computing system customized to facilitate contextual insights.
- encoding software 402 on storage system 403 may transform the physical structure of storage system 403 .
- the specific transformation of the physical structure may depend on various factors in different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the storage media of storage system 403 and whether the computer-storage media are characterized as primary or secondary storage.
- System 400 may represent any computing system on which software 402 may be staged and from where software 402 may be distributed, transported, downloaded, or otherwise provided to yet another computing system for deployment and execution, or yet additional distribution.
- one or more communications networks may be used to facilitate communication among the computing devices.
- the one or more communications networks can include a local, wide area, or ad hoc network that facilitates communication among the computing devices.
- One or more direct communication links can be included between the computing devices.
- the computing devices can be installed at geographically distributed locations. In other cases, the multiple computing devices can be installed at a single geographic location, such as a server farm or an office.
- a communication interface 405 may be included, providing communication connections and devices that allow for communication between system 400 and other computing systems (not shown) over a communication network or collection of networks (not shown) or the air.
- Examples of connections and devices that together allow for inter-system communication may include network interface cards, antennas, power amplifiers, RF circuitry, transceivers, and other communication circuitry.
- the connections and devices may communicate over communication media, such as metal, glass, air, or any other suitable communication media, to exchange communications with other computing systems or networks of systems.
- the aforementioned communication media, network, connections, and devices are well known and need not be discussed at length here.
- Elements of system 400 may be included in a system-on-a-chip (SoC) device. These elements may include, but are not limited to, the processing system 401, a communications interface 405, and even elements of the storage system 403 and software 402.
- FIG. 5 illustrates an example system architecture in which an implementation of techniques for contextual insights may be carried out.
- an application 501 for interacting with textual content can be implemented on a client device 500 , which may be or include computing systems such as a laptop, desktop, tablet, reader, mobile phone, and the like.
- Contextual insights component 502 can be integrated with application 501 to facilitate communication with contextual insights service 511.
- Contextual insights service 511 may be implemented as software or hardware (or a combination thereof) on server 510 , which may be an instantiation of system 400 .
- the features and functions of a contextual insights service 511 may be callable by device 500 , application 501 , or contextual insights component 502 via an API.
- the contextual insights service 511 may initiate and send search queries to search service 521 .
- Search service 521 may be implemented on server 520 , which may itself be an instantiation of a system similar to that described with respect to system 400 or aspects thereof. Many search services may be available for querying in a given environment.
- the network 550 can include, but is not limited to, a cellular network (e.g., wireless phone), a point-to-point dial up connection, a satellite network, the Internet, a local area network (LAN), a wide area network (WAN), a Wi-Fi network, an ad hoc network, an intranet, an extranet, or a combination thereof.
- the network may include one or more connected networks (e.g., a multi-network environment) including public networks, such as the Internet, and/or private networks such as a secure enterprise private network.
- the functionality, methods and processes described herein can be implemented, at least in part, by one or more hardware modules (or logic components).
- the hardware modules can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field programmable gate arrays (FPGAs), system-on-a-chip (SoC) systems, complex programmable logic devices (CPLDs) and other programmable logic devices now known or later developed.
- a method for facilitating contextual insights comprising: receiving a request for contextual insights with respect to at least some text; determining from information provided with the request a focus of attention for the contextual insights; performing context analysis from context provided with the request to determine query terms; formulating at least one query using one or more of the query terms; initiating a search by sending the at least one query to at least one search service; receiving results of the search; and organizing and filtering the results according to at least some of the context.
- query items including operators such as OR, NOT, and BOOST; and metadata such as the user's location, time of day, and client device are also determined from the information provided with the request, the formulating of the at least one query further using one or more of the query items.
- determining from the indication the focus of attention comprises predicting the focus of attention by: modifying an initially indicated text section from the information provided with the request with additional text selected from the context provided with the request to form one or more candidate foci of attention; determining a probability or score for each of the one or more candidate foci of attention; and selecting at least one of the candidate foci of attention having the highest probability or score.
- the context comprises one or more of content surrounding the indication, device metadata, application metadata, and user metadata.
- formulating the at least one query further comprises: determining a mode of operation from one or more of a level of ambiguity in the focus of attention and a user preference; and modifying the query in response to the mode of operation.
- formulating the at least one query further comprises modifying the query in response to user metadata.
- organizing and filtering the results further comprises: detecting a pattern in the results, wherein the pattern is based on a level of similarity of one or more of rankings of the results, content of the results, and origin of the results; and using the pattern to group, re-sort, or remove results.
- determining the query terms comprises: performing context analysis of one or more of: content of a file being consumed or created in an application that is a source of the request; application properties of the application; device properties of a device on which the application is executed; or metadata associated with a user's identity, locality, environment, language, privacy settings, search history, interests and/or access to computing resources.
- performing context analysis of the content further comprises selecting candidate context terms from content surrounding a focus-of-attention term and analyzing the candidate context terms in relation to the focus-of-attention term.
- determining the query terms further comprises scoring the candidate context terms independently of each other but in relation to the focus-of-attention term; and ranking the scores for each pair of candidate context term and focus-of-attention term.
- determining the query terms further comprises using query logs of search engines to analyze a relevance of a candidate context term to the focus-of-attention-term.
- determining the query terms further comprises determining whether a candidate context term is a named entity and adjusting a relevance of the candidate context term according to whether or not the candidate context term is the named entity.
- determining the query terms further comprises determining a distance value of a number of words or terms between the candidate context term and the focus-of-attention term.
- a computer-readable storage medium having instructions stored thereon to perform the method of any of examples 1-18.
- a service comprising: one or more computer readable storage media; program instructions stored on at least one of the one or more computer readable storage media that, when executed by a processing system, direct the processing system to: in response to receiving a request for contextual insights with respect to at least some text: determine a focus of attention from the information provided with the request; perform context analysis from context provided with the request to determine one or more context terms; formulate at least one query using one or more of the focus of attention and the context terms; send the at least one query to at least one search service to initiate a search; and in response to receiving one or more results from the at least one search service, organize and filter the results according to at least some of the context.
- program instructions that direct the processing system to determine the focus of attention from the indication direct the processing system to: modify an initially indicated text section from the information provided with the request with additional text selected from the context provided with the request to form one or more candidate foci of attention; determine a probability or score for each of the one or more candidate foci of attention; and select at least one of the candidate foci of attention having the highest probability or score.
- context comprises one or more of the content surrounding the indication, device metadata, application metadata, and user metadata.
- program instructions that direct the processing system to formulate the at least one query direct the processing system to: determine a mode of operation from one or more of a level of ambiguity in the focus of attention and a user preference; and modify the query in response to the mode of operation.
- program instructions that direct the processing system to organize and filter the results direct the processing system to: detect a pattern in the results, wherein the pattern is based on a level of similarity of one or more of rankings of the results, content of the results, and origin of the results; and use the pattern to group, re-sort, or remove results.
- a system comprising: a processing system; one or more computer readable storage media; program instructions stored on at least one of the one or more storage media that, when executed by the processing system, direct the processing system to: determine, from information provided with a request for contextual insights with respect to at least some text, a focus of attention for the contextual insights; perform context analysis of a context provided with the request to determine query terms; formulate at least one query using one or more of the query terms; send the at least one query to at least one search service; organize and filter results received from the at least one search service according to at least some of the context; and provide the organized and filtered results to a source of the request.
- program instructions that direct the processing system to determine the focus of attention from the indication direct the processing system to: modify an initially indicated text section from the information provided with the request with additional text selected from the context provided with the request to form one or more candidate foci of attention; determine a probability or score for each of the one or more candidate foci of attention; and select at least one of the candidate foci of attention having the highest probability or score.
- request context comprises one or more of the content surrounding the indication, device metadata, application metadata, and user metadata.
- program instructions that direct the processing system to organize and filter results received from the at least one search service according to at least some of the context direct the processing system to: detect a pattern in the results, wherein the pattern is based on a level of similarity of one or more of rankings of the results, content of the results, and origin of the results; and use the pattern to group, re-sort, or remove results.
- a system comprising: a means for receiving a request for contextual insights with respect to at least some text; a means for determining from information provided with the request a focus of attention for the contextual insights; a means for performing context analysis from context provided with the request to determine query terms; a means for formulating at least one query using one or more of the query terms; a means for initiating a search by sending the at least one query to at least one search service; a means for receiving results of the search; and a means for organizing and filtering the results according to at least some of the context.
Description
- This application claims the benefit of U.S. Provisional Application Ser. No. 61/887,954, filed Oct. 7, 2013.
- Applications for creating or consuming content include reader applications and productivity applications like notebooks, word processors, spreadsheets, or presentation programs. Users of these applications often research topics and rely on Internet search services to find additional information related to the content being created or consumed. To research topics, a user will often leave the application and go to a web browser to perform a search and review the results.
- Techniques and systems are presented for providing “contextual insights,” or information that is tailored to the context of the content a user is consuming or authoring.
- Given a request for information about a topic from within an application for creating or consuming content, one or more queries to search services may be formulated for the application for creating or consuming content without requiring entry of a search query directly by a user. Moreover, techniques and systems may leverage the context of the content the user is consuming or authoring, as well as user, device, and application metadata, to construct the queries and to organize and filter the results into relevant contextual insights.
- A method for facilitating contextual insights can include: determining a focus of attention for contextual insights from information provided with a request for contextual insights with respect to at least some text; performing context analysis to determine query terms from context provided with the request; formulating at least one query using one or more of the query terms; initiating a search by sending the at least one query to at least one search service; and organizing and filtering results received from the at least one search service according to at least some of the context.
- This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
-
FIG. 1A shows an example operating environment in which certain implementations of systems and techniques for contextual insights may be carried out. -
FIGS. 1B-1E show example interactions indicating an initial selection of text for contextual insights. -
FIG. 2 illustrates an example process flow for contextual insights and exploration. -
FIG. 3 shows an example interface displaying contextual insights. -
FIG. 4 shows a block diagram illustrating components of a computing device or system used in some implementations of the described contextual insights service. -
FIG. 5 illustrates an example system architecture in which an implementation of techniques for contextual insights may be carried out. - Techniques and systems are presented for providing “contextual insights,” or information that is tailored to the context of the content a user is consuming or authoring. The contextual insights can include, without limitation, people/contact information, documents, meeting information, and advertisements that relate to a determined focus of attention (e.g., determined topics of interest) for the user.
- Given a request for information about a topic, which may be a direct or indirect request by a user of an application for creating or consuming content, one or more queries to search services may be formulated by the application without requiring entry of a search query directly by a user. Moreover, techniques and systems may leverage the context of the content the user is consuming or authoring, as well as other context of the user, device, and application metadata, to construct the queries and to organize and filter the results into relevant contextual insights.
- Advantageously, the techniques and systems described herein may improve a user's workflow and/or productivity while consuming or authoring content in an application for creating or consuming content. When a user wants to research a topic while in the application for creating or consuming content, the user does not need to move to a separate application to conduct a search. The techniques enable a user to immerse themselves in a topic without having to leave the application. In addition, context within (or accessible by) the application for creating or consuming content can be used to provide relevant results and may reduce the number of times a user must narrow or modify a search query to achieve a relevant result.
- A “query” is a request for information from a data storage system. A query is a command that instructs a data storage system of the “query terms” that are desired by the requestor and the terms' relationship to one another. For example, if the data storage system includes a web search engine, such as available from a variety of search services, a query might contain the query terms “Russia” and “Syria” and indicate that the relationship between the two query terms is conjunctive (i.e., “AND”). In response, the search service may return only content having both words somewhere in the content. As frequently used here, a query is a command requesting additional content or information from a search service, where the content or information is associated with specific terms (e.g., words) in the query. A query is sometimes written in a special formatting language that is interpretable to a search service.
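- For instance, a conjunctive query of this kind might be serialized as follows (a minimal sketch; actual query syntax varies by search service and is not specified here):

    def build_query(terms, relationship="AND"):
        # Join query terms with the requested Boolean relationship,
        # quoting multi-word terms so they are treated as phrases.
        quoted = ['"%s"' % t if " " in t else t for t in terms]
        return (" %s " % relationship).join(quoted)

    print(build_query(["Russia", "Syria"]))                    # Russia AND Syria
    print(build_query(["Russian Federation", "Sochi"], "OR"))  # "Russian Federation" OR Sochi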
- The queries may be shaped by the user's “context,” which may include both content surrounding the user's indicated interest and additional factors determined from attributes of the user, device, or application. “Surrounding content” refers to text or other content in a position before and/or after the user's indicated interest (e.g., selection). By organizing and filtering the results received from search services, information is fashioned into contextual insights that are tailored to a user's particular usage context. Implementations of the described systems and techniques may not only provide more relevant related content, but may do so without the interruptions associated with a web search using a separate application such as a web browser, and hence may improve user productivity.
- As an example, consider a user who is reading an article on President Obama's 2013 address to the nation on the Syrian crisis. While authoring or reading an article in an application for creating or consuming content that incorporates the described techniques for contextual insights, the user may highlight the term “Russia” and request contextual insights (via a separate command or as a result of the highlighting action). The application can return information for the contextual insights that may include articles from an online encyclopedia such as “Russia”, “Russia's role in the Syrian civil war”, and “Russia-Syrian relations”. If the user instead highlights a different term, “weapons”, the returned information may be an article titled “Syria and weapons of mass destruction.” The returned information is dependent both on the user's indicated interest and on the context of the document that the user is reading.
- Certain implementations utilize a contextual insights service. The contextual insights service includes functionality and logic for producing “contextual insights,” which include results that are related through context and not just through a conventional search. In one such implementation, the portion of the text indicated by the user, along with additional text around the portion of the text selected by the user, is sent to the contextual insights service. The contextual insights service can perform a determination as to the intended item or topic for search. The contextual insights service can provide one or more proposed terms found in the associated text that forms the context of the user's selection, as well as determine additional query terms or limitations that take account of contextual factors relating to the user, device, or application. After return of the search results from one or more search services, relevant results may be organized and filtered (including sorting and grouping) based on context and other factors.
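- The overall flow can be pictured as a simple pipeline. In the Python skeleton below, the stage functions are trivial stand-ins named after the components of FIG. 1A; none of this is an actual API of the described service, and the stand-in logic is fabricated for illustration:

    def smart_selection(selection, context):
        # Stand-in: a real implementation would refine the user-indicated
        # text into one or more likely foci of attention.
        return selection

    def context_analysis(focus, context):
        # Stand-in: pick a few capitalized surrounding words as context terms.
        return [w for w in context.split() if w.istitle() and w != focus][:3]

    def formulate_queries(focus, terms):
        return [" ".join([focus] + terms)]

    def search_service(query):
        # Stand-in for a call to an external search service.
        return [{"title": query, "snippet": "...", "source": "web"}]

    def post_process(results, context):
        # Stand-in: organize and filter results according to the context.
        return results

    def contextual_insights(request):
        focus = smart_selection(request["selection"], request["context"])
        terms = context_analysis(focus, request["context"])
        queries = formulate_queries(focus, terms)
        results = [r for q in queries for r in search_service(q)]
        return post_process(results, request["context"])

    print(contextual_insights({
        "selection": "Russia",
        "context": "President Obama addressed the nation on Syria and Russia",
    }))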
- In some embodiments, techniques may be iteratively applied to progressively improve the relevance of contextual insights. Multiple modes of interacting with the contextual insights may be supported.
-
FIG. 1A shows an example operating environment in which certain implementations of systems and techniques for contextual insights may be carried out. The example operating environment in FIG. 1A may include a client device 100, a user 101, an application 102, a contextual insights component 105, a contextual insights service 110, and one or more search services 120.
- Client device 100 may be a general-purpose device that has the ability to run one or more applications. The client device 100 may be, but is not limited to, a personal computer, a laptop computer, a desktop computer, a tablet computer, a reader, a mobile device, a personal digital assistant, a smart phone, a gaming device or console, a wearable computer, a wearable computer with an optical head-mounted display, a computer watch, or a smart television.
- Application 102 may be a program for creating or consuming content. Example applications for creating or consuming content include word processing applications such as MICROSOFT WORD; email applications; layout applications; note-taking applications such as MICROSOFT ONENOTE, EVERNOTE, and GOOGLE KEEP; presentation applications; and reader applications such as GOOGLE READER, APPLE iBooks, ACROBAT eBook Reader, AMAZON KINDLE READER, and MICROSOFT Reader, including those available on designated hardware readers.
- Contextual insights component 105 may be integrated with application 102 as an inherent feature of application 102 or as a plug-in or extension for an existing application 102 to provide the contextual insights feature. Although primarily described herein as being incorporated with application 102 at the client device 100, contextual insights component 105 may, in some cases, be available through a separate device from the client device 100.
- Contextual insights component 105 facilitates the interaction between the application 102 and contextual insights service 110, for example through an application programming interface (API) of the contextual insights service 110.
- An API is generally a set of programming instructions and standards for enabling two or more applications to communicate with each other and is commonly implemented as a set of Hypertext Transfer Protocol (HTTP) request messages and a specified format or structure for response messages according to a REST (Representational State Transfer) or SOAP (Simple Object Access Protocol) architecture.
- In response to receiving particular user interactions with the client device 100 by the user 101 of application 102, the contextual insights component 105 may facilitate a call (or invocation) of a contextual insights service 110 using the API of the contextual insights service 110. For example, the contextual insights component 105 sends a request 130 for contextual insights to the contextual insights service 110 so that contextual insights service 110 may execute one or more operations to provide the contextual insights 135, including those described with respect to FIG. 2. Contextual insights component 105 may also, in some cases, facilitate the presentation of contextual insights 135 for application 102, for example, by rendering the contextual insights 135 in a user interface.
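- By way of illustration, such a request might be carried as a structured payload over HTTP. The field names and endpoint below are hypothetical, not the actual API of the service:

    import json

    request_130 = {
        "selection": "Russia",                          # user-indicated text
        "surrounding_text": "...address on the Syrian crisis...",
        "application_metadata": {"type": "reader"},
        "device_metadata": {"form_factor": "tablet"},
        "user_metadata": {"locale": "en-US", "time_of_day": "evening"},
    }

    # e.g., POST to a (hypothetical) endpoint such as /api/contextual-insights
    body = json.dumps(request_130)
    print(body)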
- Contextual insights service 110 receives the request 130 for contextual insights and generates contextual insights 135. The request 130 may contain text, text markup, and/or other usage context from application 102. The contextual insights service 110 may process the request via one or more components, shown in FIG. 1A as smart selection 131, context analysis 132, and query formulation 133. As part of its operations, contextual insights service 110 may direct one or more requests to one or more search service(s) 120, and may interpret or manipulate the results received from search service(s) 120 in a post-processing component 134 before returning contextual insights 135 to client device 100 via contextual insights component 105.
- For example, upon receipt of request 130, the contextual insights service 110 can perform a determination of the user's intended content selection based on information provided by the contextual insights component 105, analyze the context of the content selection with respect both to other content the user is perusing and also to various device and user metadata, and construct and send one or more queries for requesting a search from one or more search services 120. These operational aspects of the contextual insights service, including result post-processing, are discussed in more detail with respect to FIG. 2.
contextual insights service 110 may determine that contextual insights can be further optimized after resultpost-processing component 134 activities. Another iteration of the processing stages ofsmart selection 131,context analysis 132, and/orquery formulation 133 might be executed to produce improved insights through the modification of query terms. - It should be noted that, while sub-components of
contextual insights service 110 are depicted inFIG. 1A (i.e.,smart selection 131,context analysis 132,query formulation 133, and result post-processing 134), this arrangement of thecontextual insights service 110 into components is exemplary only; other physical and logical arrangements of a contextual insights service capable of performing the operational aspects of the disclosed techniques are possible. Further, it should be noted that aspects of acontextual insights service 110 may be implemented on more than one device. In some cases, acontextual insights service 110 may include components located on user devices and on one or more services implemented on separate physical devices. - Search service(s) 120 may take myriad forms. A familiar kind of search service is a web search engine such as, but not limited to, MICROSOFT BING and GOOGLE. However, any service or data storage system having content that may be queried for content appropriate to contextual insights may be a
search service 120. A search service may also be built to optimize for the queries and context patterns in an application so that retrieval of information may be further focused and/or improved. Sometimes, an “intranet” search engine implemented on an internal or private network may be queried as asearch service 120; an example is Microsoft FAST Search. A custom company knowledge-base or knowledge management system, if accessible through a query, may be asearch service 120. In some implementations, a custom database implemented in a relational database system (such as MICROSOFT SQL SERVER) that may have the capability to do textual information lookup may be asearch service 120. Asearch service 120 may access information such as a structured file in Extended Markup Language (XML) format, or even a text file having a list of entries. Queries by thecontextual insights service 110 to the search service(s) 120 may be performed in some cases via API. - A request for
contextual insights 130 may contain a variety of cues for thecontextual insights service 110 that are relevant to generating contextual insights. Thecontextual insights component 105 generates and sends therequest 130 to thecontextual insights service 110 based on an indication by auser 101. - The request for
contextual insights 130 may be initiated by auser 101 interacting with anapplication 102 onclient device 100. For example, content in the form of a document (including any format type document), article, picture (e.g., that may or may not undergo optical character recognition), book, and the like may be created or consumed (e.g., read) by auser 101 via theapplication 100 running on theclient device 100. A user may interact with the content and/or an interface toapplication 102 to indicate a request forcontextual insights 130 is desired.Contextual insights component 105 may interact withapplication 102,client device 100 and even other applications or user-specific resources to generate and send therequest 130 to thecontextual insights service 110 in response to the indication by theuser 101 for therequest 130. - As one example of an indication of a request for
contextual insights 130, a user can indicate an initial selection of text for contextual insights. In theapplication 102 containing text or other readily searchable content, a user may indicate an interest in certain text in, for example, a document, email, notes taken in a note-taking application, e-book, or other electronic content. The indication of interest does not require the entering of search terms into a search field. Of course, in some implementation, a search box may be available as a tool in the application so that a user may enter terms or a natural language expression indicating a topic of interest. - Interaction by the
user 101 indicating the initial text selection may take myriad forms. The input indicating an initial text selection can include, but is not limited to, a verbal selection (of one or more words or phrases), contact or contact-less gestural selection, touch selection (finger or stylus), swipe selection, cursor selection, encircling using a stylus/pen, or any other available technique that can be detected by the client device 100 (via a user interface system of the device). In some implementations, contextual insights may initiate without an active selection by a user. - The
user 101 may also, for instance, utilize a device which is capable of detecting eye movements. In this scenario, the device detects that the user's eye lingers on a particular portion of content for a length of time, indicating the user's interest in selecting the content for contextual insights. A computing device capable of detecting voice commands can be used to recognize a spoken command to initially select content for contextual insights. It should also be noted that many other user interface elements, as diverse as drop-down menus, buttons, search box, or right-click context menus, may signify that the user has set an initial text selection. Further, it can be understood that an initial text selection may involve some or all of the text available on the document, page, or window. -
FIGS. 1B-1E show example interactions indicating an initial selection of text for contextual insights. The contextual insights component provides the selection as well as context including content before and/or after the selection as part of the request. Therefore, the indication by the user of text for contextual insight and exploration may be of varying specificity. - As one example, in a
graphical user interface 150 ofapplication 102 in which text is depicted, the user may select a word (or phrase) 151. The selection of a word (or phrase) may be aswipe gesture 152 on a touch enabled display screen such as illustrated inFIG. 1B . Other gestures such as insertion point, tap, double tap, and pinch could be used. Of course, non-touch selection of a word (as well as cursor selection of the word) may be used as an indication. In the example shown inFIG. 1C , acursor 153 may be used to indicate, for example, via a mouse click, a point on the content surface of theuser interface 150. Thecursor 153 may be placed within a term without highlighting a word or words. A similar selection may be conducted by touch (e.g., using a finger or pen/stylus) or even by eye gaze detection. This type of selection may be referred to as a selection of a region. - Just as less than a full word can be indicated by the user as the initial selection of text, a user may select more than a single word using any of the methods of user interaction described above. In some scenarios an initial selection may include a contiguous series of words (a phrase). For example, multiple words may be “marked” by the user using interface techniques such as illustrated in
FIG. 1D , where acursor 154 is shown selectingmultiple words 155 of a sentence. Thus, as illustrated by the example scenarios, the user is not limited to selecting a particular amount of text. - In some scenarios, multiple, non-contiguous words or phrases may be selected by highlighting, circling or underlining with a digital stylus. Multiple words or phrases of interest also may be prioritized by the user. For example, one word or phrase may be marked as the primary text selection of interest, and other related words may be marked as supporting words or phrases which are of secondary, but related interest. For example, using interface techniques such as illustrated in
FIG. 1E ,several words 156 may be indicated onuser interface 150. - Furthermore, even a scenario in which the user selects no specific words or phrases for the contextual information lookup is envisioned. In one such scenario, the input for initial text selection may be discerned from passive, rather than active, interactions by the user. For example, while the user is scrolling through the text rendered by an application, a paragraph on which the user lingers for a significant time might constitute an initial text selection. As an additional example, if the client device allows the user's eye movements to be tracked, words or phrases on which the user's eye lingers may form the input for initial text selection. In yet another example, the entire document, window, or page may be considered to be selected based on a passive interaction.
- Returning to
FIG. 1A , in some cases, additional information may be sent as part of therequest 130 containing the user's indicated initial text selection. The additional information may be used by thecontextual insights service 110 to improve the relevance or clarity of searches directed by the initial text selection. The additional information may vary by embodiment and scenario, but in some embodiments will include such information as the text surrounding the selection (which can also be referred to as an expanded portion of text, for example, a certain number of symbols or characters before and/or after the selection), information about the application in which the content is displayed, information about the device on which the application runs, and information about the specific user. In some cases, this information may be referred to herein as “application metadata”, “device metadata”, and “user metadata,” respectively. - Once
contextual insights service 110 has processed the user's selection (and context) and has received and processed query results,contextual insights service 110 can returncontextual insights 135 to the user. In some embodiments,contextual insights component 105 may operate to render or facilitate theapplication 102 in rendering or displaying one or more user interfaces to show the contextual insights to the user on aclient device 100. -
FIG. 2 illustrates an example process flow for contextual insights and exploration. Acontextual insights service 110, such as described with respect toFIG. 1A , may implement the process. - Referring to
FIG. 2 , an indication of a request for contextual insights with respect to at least some text may be received (201). The request can include a selection such as described with respect toFIGS. 1B-1E and context including content before and/or after the selection. - The focus of attention for the contextual insights may be determined from information provided with the request (202), for example by the
smart selection component 131 ofcontextual insights service 110 ofFIG. 1A . The “focus of attention” refers to the concept (or “topic”) considered to be about what the user would like to explore and gain contextual insights. - Sometimes a user's selection of text may, on its own, sufficiently indicate the focus of attention. However, sometimes the user may improperly or incompletely indicate a focus of attention, for example by indicating a word that is near to but not actually the focus of attention, or by indicating only one word of a phrase that consists of multiple words. As a specific example, if the user selects the word “San” in the sentence, “The San Francisco 49ers scored big in last Monday's game,” the true focus of attention is likely to be “San Francisco 49ers” and not “San”; hence, the focus of attention may need to be adjusted from the selection indicated with the request.
- In cases where the user's indication of the focus of attention is incomplete or improper, the intended focus of attention may sometimes be predictable. A variety of techniques may be used to predict candidates for the user's intended focus of attention based on a given user selection and the surrounding text or content. These processes may include, for example, iterative selection expansion, character n-gram probabilities, term frequency-inverse document frequency (tf-idf) information for terms, and capitalization properties. In some implementations, more than one technique may be used to select one or more candidate foci of attention. Candidate foci of attention determined from these multifarious techniques may then be scored and ranked by the
contextual insights service 110, orsmart selection component 131 thereof, to determine one or more likely foci of attention from among multiple possibilities. -
Smart selection component 131 may iteratively determine for every current selection whether the selection should be expanded by one character or word to the right or to the left. In some implementations,smart selection component 131 may rank or score candidates for selection using “anchor texts” that may be obtained from an online encyclopedia or knowledge-base. “Anchor texts,” sometimes known as “link titles,” are text descriptions of hyperlinks. Anchor texts may give the user relevant descriptive or contextual information about the content at the hyperlink's destination. Anchor texts form a source of words and phrases that are positively correlated with one another as related concepts. Examples of online encyclopedias and knowledge bases are MICROSOFT ENCARTA, ENCYCLOPEDIA BRITTANICA, and WIKIPEDIA. - Character n-gram probabilities are based on n-gram models, a type of probabilistic language model for predicting the next item in a sequence of characters, phonemes, syllables, or words. A character n-gram probability may allow prediction of the next character that will be typed based on a probability distribution derived from a training data set. In some cases, a
smart selection component 131 may be trained using machine learning techniques via character n-gram probability data from anchor texts. - In some implementations, a
smart selection component 131 may interact with or use available commercial or free cloud-based services providing n-gram probability information. An example of a cloud-based service is “Microsoft Web N-gram Services”. This service continually analyzes all content indexed by the MICROSOFT BING search engine. Similar services are available from GOOGLE's N-gram corpus. A cloud-based service may include the analysis of search engine logs for the words that internet users add or change to disambiguate their searches.Smart selection component 131 may interoperate with such a cloud-based service via API. - In some cases, tf-idf techniques may be used in a
smart selection component 131. The tf-idf is a numerical statistic intended to reflect how important a word is to a document in a collection or corpus. The tf-idf value increases in proportion to the number of times a term (e.g., a word or phrase) appears in a document, but is negatively weighted by the number of documents that contain the word in order to control for the fact that some words are generally more common than others. One way of using tf-idf techniques is by summing tf-idf values for each term in a candidate focus of attention. - In some cases, capitalization properties of terms may be used to identify nouns or noun phrases for focus of attention candidates. Capitalization properties may be used both to rank the importance of certain terms and as further scoring filters when final rankings are calculated. Other implementations may use dictionary-based techniques to additionally identify a large dictionary of known, named entities, such as the titles of albums, songs, movies, and TV shows. In some cases, a natural language analyzer can be used to identify the part of speech of words, term boundaries, and constituents (noun phrases, verb phrases, etc.). It should be noted that the techniques described for predicting the focus of attention are examples and are not intended to be limiting.
- Scoring data from the various described techniques, and others, may be used to produce candidate foci of attention from a user-indicated focus of attention. The scores may be assembled by the
smart selection component 131, and scores assigned by one or more of these techniques may be compiled, averaged and weighted. The scores may be further modified by the capitalization and stop-word properties of the words in the candidate focus of attention (stop-words are semantically irrelevant words, such as the articles “A”, “an”, and “the”). A final score and ranking for each candidate focus of attention may be calculated which may be used to find the top candidate focus (or foci) of attention for a given user selection. - Accordingly, the initial text selection provided with the request may be referred to as a “user-indicated” focus of attention. In addition to an indication of the user-indicated focus of attention, the request can include a range of text or content before and/or after the user-indicated focus of attention. The user-indicated focus of attention may then be analyzed for expansion, contraction, or manipulation to find an intended focus of attention in response to rankings emerging as various predictive techniques are applied. One or more foci of attention may be chosen that may be different from the user's indicated foci of attention.
- Once one or more foci of attention are determined, context analysis may be performed to determine query terms for formulating a query (203). As part of the determination of query terms, query items including operators such as OR, NOT, and BOOST, as well as meta-information (e.g., derived from user metadata) such as the user's location (if available through privacy permissions), time of day, client device and the like may also be determined so as to facilitate the generation of the queries. Context analysis may identify representative terms in the context that can be used to query the search engine in conjunction with the focus of attention. Context analysis may be performed, for example, by a
context analysis component 132 such as described with respect toFIG. 1A . - Here, context analysis is a technique by which a query to a search service (e.g., one or more of search services 120) may be refined to become more relevant to a particular user. Various forms of context may be analyzed, including, for example: the content of the article, document, e-book, or other electronic content a user is reading or manipulating (including techniques by which to analyze content for its contextual relationship to the focus of attention); application and device properties; and metadata associated with the client device user's identity, locality, environment, language, privacy settings, search history, interests, or access to computing resources. The use of these various forms of context with respect to query refinement will now be discussed.
- The content of the article, document, e-book, or other electronic content with which a user is interacting is one possible aspect of the “context” that may refine a search query. For example, a user who selected “Russian Federation” as a focus of attention may be interested in different information about Russia when reading an article about the Syrian civil war than when reading an article about the Olympics. If context analysis of the article content were performed in this example, the query terms might be modified from “Russian Federation” (the user-indicated focus) to “Russian Federation involvement in Syrian civil war” or “Russian Federation 2014 Sochi Olympics,” respectively.
- The electronic content surrounding the focus of attention may undergo context analysis to determine query terms in one or more of a variety of ways. In some cases, the entire document, article, or e-book may be analyzed for context to determine query terms. In some cases, the electronic content undergoing context analysis may be less than the entire document, article, or e-book. The amount and type of surrounding content analyzed for candidate context terms may vary according to application, content type, and other factors.
- For example, the contextually analyzed content may be defined by a range of words, pages, or paragraphs surrounding the focus of attention. In an e-book, for example, the content for contextual analysis may be limited to only that portion of the e-book that the user has actually read, rather than the unread pages or chapters. In some cases, the content for contextual analysis may include the title, author, publication date, index, table of contents, bibliography, or other metadata about the electronic content. In some implementations, the
contextual insights component 105 at the client may be used to determine and/or apply the rules for the amount of contextual content provided in a request to a contextual insights service. - Context analysis of an appropriate range of content surrounding the focus of attention may be conducted in some implementations by selecting candidate context terms from the surrounding content and analyzing them in relation to a focus of attention term. For example, a technique that scores candidate context terms independently of each other but in relation to a focus of attention term may be used. The technique may determine a score for each pair of focus-candidate context terms and then rank the scores.
- In some implementations, the relevance of the relationship between the candidate term from the surrounding content and the focus of attention may be analyzed with reference to the query logs of search engines. The query logs may indicate, using heuristics gathered from prior searches run by a multiplicity of users, that certain relationships between foci of attention terms and candidate terms from the surrounding content are stronger than others. In some implementations, a
context analysis component 132 may be trained on the terms by culling term relationships from web content crawls. In some cases, the strength of a relationship between terms may be available as part of a cloud-based service, such as the “Microsoft Web N-gram Services” system discussed above, from which relative term strengths may be obtained, for example via API call or other communication mechanism. - Another technique that may be used for determining the relevance of candidate context terms, used either alone or in concert with other techniques, is by determining whether a candidate context is a named entity. For example, a candidate context term may be part of a dictionary of known, named entities such as the titles of albums, songs, movies, and TV shows; if the candidate context term is a named entity, the relevance of the candidate term may be adjusted.
- Distance between the candidate context term and the focus of attention may also be considered in context analysis. Distance may be determined by the number of words or terms interceding between the candidate context term and a focus of attention.
- In some implementations, the relevance of a candidate context term with reference to focus of attention terms may be determined with respect to anchor text available from an online knowledge-base. Statistical measurements of the occurrence frequencies of terms in anchor texts may indicate whether candidate terms and focus of attention terms are likely to be related, or whether the juxtaposition of the terms is random. For example, highly entropic relationship values between the candidate context term and the focus of attention term(s) in anchor text may signify that the candidate context term is a poor choice for a query term.
- Some techniques of context analysis may use metadata associated with the application, device, or user in addition to (or in lieu of) the gathering and analysis of terms from the content surrounding a focus of attention. These techniques may be used by
context analysis component 132 to refine, expand, or reduce the query terms selected for the search query. - In some implementations, the type of
application 102 ordevice 100 may be a factor in context analysis. For example, if a user is writing a paper in a content authoring application such as a word processor, then context analysis for query terms may be different than the analysis would be for a reader application. In this example of the authoring application, a context analysis may determine via the application type that a narrower focus to find query terms may be appropriate, perhaps limiting query terms to definitions and scholarly materials. In the case of the reader, more interest-based and informal materials may be appropriate, so candidate query terms are more wide-ranging. - Factors derived from user device metadata may also be considered in certain implementations. Sometimes, the type of user device may be a factor in the query terms determined from context analysis. For example, if the user device is a phone-sized mobile device, then candidate context terms may be selected from a different classification than those selected if the user device were a desktop computer. In the case of the small mobile device, a user's interests may be more casual, and the screen may have less space, so candidate terms which produce more summarized information may be selected. Further, context analysis may consider device mobility by selecting candidate terms that may be related to nearby attractions. In contrast, if the user device is a desktop device, then user may be at work and want more detailed and informative results; query terms might be added which obtain results from additional sources of information.
- In some implementations, factors derived from user metadata may be used as part of context analysis to define query terms. Sometimes, a factor may be the type of user—e.g., whether the user's current role is as a corporate employee or consumer. The type of user may be determined, for example, by the internet protocol (IP) address from which the user is accessing a communications network. In the former case, work-oriented query terms may be preferentially selected by the context analysis component; in the latter case, more home or consumer-related terms may be preferred. In some implementations, the user type may determine the availability of computing resources such as a company knowledge management system accessible by a company intranet. Availability of company-related resources might enable a context analysis component to select query terms targeted toward such specialized systems.
- In some implementations, a factor in context analysis may be the user's history of prior searches or interests. In some cases, the historical record of previous foci of attention selected by the user may be analyzed to generate or predict candidate query terms. Those candidate terms might be refined or ranked with respect to the user's current foci of attention using techniques similar to those described with respect to candidate terms for surrounding content; e.g., by using n-gram services or anchor text analysis.
- Candidate terms may be selected by the context analysis engine on the basis of prior user internet searches. The historical record of these searches may be used to generate or predict candidate query terms. Similarly, internet browser cookies or browser history of websites visited may be used to discern user interests which may predict or refine candidate terms. Candidate terms generated may be ranked or refined using similar techniques to those described above with respect to historical foci of attention terms.
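- An illustrative sketch of generating candidate terms from search history and ranking them against the current focus; the relatedness function is a stand-in for the n-gram or anchor-text analyses described above:

```python
from collections import Counter

def candidates_from_history(history, focus, relatedness, top_n=3):
    """Count terms across past queries, weight each by its relatedness to the
    current focus of attention, and return the top-ranked candidates."""
    counts = Counter(term for query in history for term in query.lower().split())
    scored = {term: count * relatedness(term, focus)
              for term, count in counts.items()}
    return sorted(scored, key=scored.get, reverse=True)[:top_n]

history = ["bach organ works", "baroque counterpoint", "organ registration tips"]
# Toy relatedness; in practice this might come from an n-gram or anchor-text service.
relatedness = lambda term, focus: {"organ": 0.9, "baroque": 0.7}.get(term, 0.1)
print(candidates_from_history(history, "Bach", relatedness))  # ['organ', 'baroque', ...]
```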
- Other factors which may be analyzed during the context analysis component's determination of query terms might be the time of day that the user is requesting contextual insights and the current geographic locality of the client device. User profile and demographic information, such as age, gender, ethnicity, religion, profession, and preferred language may also be used as factors in query term determination. It should be noted that, in some implementations, privacy settings of the user may impact whether user profile metadata is available for context analysis and to what extent profile metadata may be used.
- Continuing with the process illustrated in
FIG. 2 , a query may be formulated using one or more of the query terms (204). Query formulation may include a pre-processing determination in which a mode of operation is decided with reference to user preferences; the mode of operation may inform which context-related terms are used to formulate the query. Query formulation may include the assembly of the actual queries that may be sent to one or more search services. Query formulation may be performed, for example, by a query formulation component 133 described with respect to FIG. 1A . - In some embodiments,
query formulation component 133 may engage in a pre-processing determination in which a mode of operation is decided with reference to user preferences. The mode of operation may determine one or more classes of appropriate or desirable search results. For example, two modes of operation may be “lookup” and “exploration.” A “lookup” mode may give targeted results directed narrowly toward a focus of attention (e.g., a dictionary lookup). An “exploration” mode may give more general search results, and, for example, present several options to the user with respect to which search results or topics to further explore. Naturally, other modes of operation representing different classes of search result are possible, as are scenarios in which multiple modes of operation are provided. - Thus, an operation of the query formulation component may be to determine to what extent the query terms from the contextual analysis phase may take over from or supersede the user's indicated/determined foci of attention (or explicit search query if the user provided one). A mode of operation may be selected by the user actively, such as by affirmative selection of the mode, or passively, such as based on some factor determined from user, device, or application metadata.
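- A hedged sketch of this pre-processing determination; the preference keys, the ambiguity score, and its threshold are assumptions, and the ambiguity-driven fallback anticipates the determination described next:

```python
def choose_mode(user_prefs: dict, focus_ambiguity: float) -> str:
    """Pick a mode of operation: an active user selection wins; otherwise an
    ambiguous focus of attention falls back to exploration."""
    if "mode" in user_prefs:          # active selection by the user
        return user_prefs["mode"]
    if focus_ambiguity > 0.5:         # terms cannot be acceptably narrowed
        return "exploration"
    return "lookup"                   # targeted, dictionary-style results

print(choose_mode({}, focus_ambiguity=0.8))   # exploration
print(choose_mode({"mode": "lookup"}, 0.8))   # user's explicit choice wins
```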
- In some cases, a mode of operation may be determined by the
query formulation component 133 based on outcomes from context analysis or other factors. For example, the query formulation component 133 may determine which mode of operation to use based on ambiguity of a focus of attention. If, during or after context analysis, the contextual insights service determines that, because of ambiguity in the focus of attention, terms or results may not be acceptably narrowed for a lookup mode, an exploration mode may be chosen. - Sometimes,
query formulation component 133 may determine that certain additional context terms may return search results that inappropriately overwhelm the focus of attention. In some cases, query formulation component 133 may modify query terms that may be likely to return adult or offensive content; user profile metadata (e.g., age of the user) may be a factor in such a modification of query terms. Contextual insights service 110 may make this determination, for example, by formulating and sending one or more probative queries to search services. Probative queries may enable the contextual insights service 110 to preview search results for several trial formulations of query terms so that terms added by context analysis may be adjusted or modified.
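- A hedged sketch of the probative-query idea just described: preview results for trial term sets and keep only the context terms whose results do not drown out the focus of attention. The `search_preview` callable is a hypothetical stand-in for a search-service API:

```python
def refine_terms(focus, context_terms, search_preview, min_focus_hits=3):
    """Keep a context term only if the focus remains prominent in a preview
    of the results returned for the trial query (focus + term)."""
    kept = []
    for term in context_terms:
        titles = search_preview([focus, term])        # probative query
        hits = sum(focus.lower() in t.lower() for t in titles)
        if hits >= min_focus_hits:
            kept.append(term)
    return kept

# Toy preview in which "organ" keeps the focus prominent and "gossip" does not.
fake_preview = lambda terms: ([f"{terms[0]} overview"] * 4 if terms[1] == "organ"
                              else ["Unrelated story"] * 4)
print(refine_terms("Bach", ["organ", "gossip"], fake_preview))  # ['organ']
```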
- Query formulation may include the assembly of actual queries that may be sent to one or more search services. In some cases, a query formulation component may assemble and send a single query consisting of one or more query terms joined conjunctively to a single search service. - In some cases, however, context analysis may reveal that the context covers multiple aspects of the focus of attention, leading the user to want to explore different context terms in different ways. The query formulation component may, based on a determined need for different classes of search results, formulate disjunctive queries, formulate separate queries with differing terms, split queries into multiple execution phases, and/or send different queries to different search services. In some cases, query terms may be ordered in a particular sequence to obtain particular search results.
- For example,
query formulation component 133 may determine that a particular focus of attention and context analysis reveal query terms that may be best presented to the user in segmented fashion. In such a case, the query formulation component 133 may construct a disjunctive query of the form "focus-term AND (context-term1 OR context-term2 OR . . . )". Moreover, the query formulation component 133 may sometimes construct multiple queries: a query that targets the focus of attention more narrowly, and one or more queries that target exploratory search results on a handful of related themes. In some cases, a query may be targeted toward a particular search service in order to retrieve results from a given class.
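- A minimal sketch of assembling the disjunctive form quoted above; the string layout follows the text, while the helper itself is illustrative:

```python
def disjunctive_query(focus_term: str, context_terms: list[str]) -> str:
    """Build "focus-term AND (context-term1 OR context-term2 OR ...)"."""
    if not context_terms:
        return focus_term
    return f"{focus_term} AND ({' OR '.join(context_terms)})"

print(disjunctive_query("Bach", ["organ works", "Leipzig", "counterpoint"]))
# Bach AND (organ works OR Leipzig OR counterpoint)
```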
- Query formulation can be carried out based on intended search services so that the contextual insights service initiates a search by sending a query to one or more search services (205). The search may occur when the contextual insights service 110, or some component thereof (e.g., query formulation component 133), issues or sends the query to one or more search services 120 as described in FIG. 1A . Once sent, results of the search may be received (206). In many cases, the search query will be issued, and search results will be returned, via an API call to the search service, as noted in FIG. 1A . In situations where multiple queries have been sent, either to segment results or to target specific search services, multiple sets of search results may be received. - After receipt of the results, the results may be organized and filtered according to at least some of the context (207). The
contextual insights service 110, or some component thereof (e.g., result post-processing component 134 described in FIG. 1A ) may receive the results and/or perform organizing and filtering operations. Organization and filtering of the results may include, for example: ranking of results according to various criteria; assembly, sorting, and grouping of result sets, including those from multiple queries and/or multiple search services; and removal of spurious or less relevant results. - In some implementations, organization and filtering of the results may include ranking of results according to various criteria; some of the criteria may be determined from context. The result post-processing component may assess aspects of search results received from the search service according to varied techniques. Assessments of rank emerging from one or more techniques may be used in concert with one another; some of the techniques may be weighted according to their aptitude for producing relevant answers in a given context.
- In some cases, search results may be received from a search service with a ranking position; such a ranking may constitute a natural starting point for determining relevance. Another technique may include a linguistic assessment of how closely the title or URL of a search result matches the query; e.g., if words in the title are an almost exact match to terms in the query, the result may be more relevant.
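- A hedged sketch combining the two signals just described, the service's own ranking position and a term-overlap match against the title; the weights and the overlap measure are illustrative assumptions:

```python
def relevance(service_rank: int, title: str, query_terms: list[str],
              w_rank: float = 0.5, w_title: float = 0.5) -> float:
    """Blend the search service's ranking position with title/query overlap."""
    rank_score = 1.0 / service_rank                  # position 1 scores highest
    title_words = set(title.lower().split())
    overlap = sum(t.lower() in title_words for t in query_terms) / len(query_terms)
    return w_rank * rank_score + w_title * overlap

print(relevance(1, "Bach organ works", ["bach", "organ"]))   # 1.0: strong on both
print(relevance(5, "Unrelated article", ["bach", "organ"]))  # 0.1: weak on both
```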
- Factors determined from context analysis may also be applied in the result post-processing phase to help ensure congruency of the search results with respect to context. For example, results may be assessed to ensure that the results are congruous with user profile metadata. Results that are age-inappropriate, for example, might be removed entirely; in other cases, results that may be more appropriate to a user's location may be ranked higher by the
result post-processing component 134 than other results. - Factors such as whether a search result corresponds to a disambiguation page (e.g., on Wikipedia), the length of the query, the length of the context, and other query performance indicators may also be used in an assessment of search result relevance.
- However, at times, techniques other than ranking may be relevant to organizing and filtering the results for contextual insights. For example, when multiple result sets from several queries or search services have been received, the results may be grouped or re-sorted. Furthermore, when there is disagreement or lack of congruity between different search services, determinations may be needed as to which result sets to prioritize.
- In some cases, multiple queries may have been issued by the
query formulation component 133, and perhaps to multiple search services. In those cases, the queries themselves may naturally reflect intended groupings of result sets. For example, if a focus of attention relates to a geographic location and the query formulation component 133 directed a query specifically at a search service with travel information, search results returned from that service may be grouped together under a "Travel" category by the result post-processing component 134. Similarly, if the query formulation component 133 had issued a separate query to a search service focusing on the history of the geographic location, those results may also be grouped together. Naturally, search results may also be ranked and refined within the group or category. In some cases, result sets returned from different queries may be reconsolidated by the result post-processing component 134. Moreover, sometimes results received from the search service as a single result set may be segmented and grouped more logically, for example according to topic, domain of the website having the result, type of result (e.g., text, photos, multimedia), content rating, or other criteria.
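- An illustrative sketch of grouping per-query result sets under the categories of the queries that produced them, mirroring the "Travel"/"History" example above:

```python
from collections import defaultdict

def group_results(result_sets):
    """result_sets: (category, results) pairs, one per issued query."""
    groups = defaultdict(list)
    for category, results in result_sets:
        groups[category].extend(results)
    return dict(groups)

grouped = group_results([
    ("Travel", ["Top hotels in Leipzig", "Leipzig day trips"]),
    ("History", ["Leipzig in the Baroque era"]),
])
print(grouped)  # {'Travel': [...], 'History': [...]}
```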
- Some implementations may include the detection of result thresholds by the result post-processing component 134. Result “thresholds” are juncture points in one or more results or sets of results that indicate particular groups of results may be related to one another, such as by relevance or by category/topic. These thresholds may be used to group, refine, remove, or re-sort results. - For example, in a given search, if the first three search results are ranked at the top because they have a high ranking score, but the next seven search results form a group having a low ranking score, the first group of three results may be highly relevant to the focus of attention and context. Here, a result threshold may exist beyond which results may either be truncated or presented differently to the user, for example in an interface displaying a different mode of operation. In another example scenario, perhaps all ten results have a similar ranking score, and the relevance of the results would be difficult to distinguish; in this example, there is no result threshold with respect to relevance, and different presentation or grouping options may be used. Result thresholds may sometimes be used to determine how
many insights 135 to return from a given contextual insights request 130. In some cases, characteristics of a given result threshold may be adapted to user, application, or device metadata (e.g., the size of the device screen).
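- A hedged sketch of detecting such a threshold as the largest gap in ranking scores, matching the three-high/seven-low example above; the minimum gap that counts as a juncture point is an assumed parameter:

```python
def find_threshold(scores, min_gap=0.3):
    """Return the index separating a high-scoring group from the rest, or
    None when scores are too uniform to show a juncture point."""
    if len(scores) < 2:
        return None
    gap, cut = max((scores[i] - scores[i + 1], i + 1)
                   for i in range(len(scores) - 1))
    return cut if gap >= min_gap else None

print(find_threshold([0.9, 0.88, 0.85, 0.4, 0.38, 0.35]))  # 3: keep the top group
print(find_threshold([0.60, 0.58, 0.57, 0.56, 0.55]))      # None: no clear break
```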
- Sometimes, result thresholds may be recognized from patterns that allow detection of content groups. Examples of patterns include when several results show similarities (or dissimilarities) in their title, site origin, or ranking scores. When the result post-processing component 134 receives results that can be determined to match a particular pattern, the result post-processing component 134 may group those results together as a single result or segmented category of results. For example, multiple postings of a similar news release to various websites may be determined to have very similar titles or brief descriptions; the result post-processing component 134 may recognize a pattern and either group or truncate the news releases into a single insight. - Result thresholds may be detected from patterns of disagreement between sources. For instance, a level of entropy (the degree to which there is or is not overlap between results returned by different sources) may indicate a result threshold. If, for example, results from one source have a low overlap with the results from another source, this pattern may indicate that the results may be separated into multiple groups having different contextual insights.
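- Two illustrative pattern checks, assuming word-set Jaccard similarity as the measure: collapsing near-duplicate titles (the news-release case above) and scoring overlap between sources as a simple stand-in for the entropy signal:

```python
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

def collapse_duplicates(titles, threshold=0.8):
    """Group titles whose word sets are nearly identical into single insights."""
    groups, used = [], set()
    for i, title in enumerate(titles):
        if i in used:
            continue
        group = [title]
        for j in range(i + 1, len(titles)):
            if j not in used and jaccard(set(title.lower().split()),
                                         set(titles[j].lower().split())) >= threshold:
                group.append(titles[j])
                used.add(j)
        groups.append(group)
    return groups

titles = ["Acme announces new widget", "Acme announces new widget today",
          "Widget review"]
print(collapse_duplicates(titles))  # first two collapse into one group

# Low overlap of result URLs between two sources may indicate a threshold
# separating the results into groups with different contextual insights.
print(jaccard({"u1", "u2", "u3"}, {"u3", "u4"}))  # 0.25: mostly disjoint
```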
- In some cases, as for example when a threshold is detected from an identifiable pattern of disagreement between sources, certain functions of the
contextual insights service 110 may be executed repeatedly to determine appropriate contextual insights (for example, an adjustment may be made at operation 202 and the process repeated). For example, as described with respect to FIG. 1A , result post-processing component 134 may determine that another iteration of the processing stages of smart selection, context analysis, and/or query formulation might produce improved insights. As a result of additional iterations of processing, the focus of attention, context terms from content and metadata, and query terms may be modified. - A pattern of disagreement between sources might occur, for instance, when the formulated query terms were ambiguous with respect to one or more of the sources or search services. If, for example, a request is made for contextual insights about "John Woo," and John Woo is both a prominent movie director and a high-ranking statesman at the United Nations, at least two distinct patterns of results would be returned. A further iteration of processing using additional context or a modified focus of attention may be used to determine the most relevant insights. Or, consider the homograph "row" (a homograph is each of two or more words spelled the same but not necessarily pronounced the same and having different meanings and origins). British people frequently use the word "row" to mean an argument or quarrel, but Americans seldom do; Americans tend to use the word in its verb form, e.g., "to paddle a boat, as with an oar". If a threshold is determined that hinges upon the two meanings in a given query, a further context analysis might identify that, for example, the user is British (and hence means "argument"), or that the word "row" is being used in its verb form in the context of the content being consumed.
- When the organizing and filtering of the results (207) have been completed,
contextual insights 135 may be returned (208) to the calling component 105 or client device 100 by the contextual insights service 110. -
FIG. 3 shows an example interface displaying contextual insights. The example interface is provided to illustrate one way that contextual insights 135 may be displayed on the user device 100. An example interface such as the one shown in FIG. 3 may be generated by the application 102, or rendered by a contextual insights component 105 in cooperation with the application 102 or device 100. The example is for illustrative purposes only and is not intended to be limiting of the ways and varieties in which contextual insights may be organized and filtered by the contextual insights service 110. - Referring to
FIG. 3 , a contextual insights preview 300 can be displayed non-obtrusively atop the existing application surface 301, only partly obscuring the content displayed in the application surface 301. In the example preview 300, a quick summary can be provided that may include a title 302 (as provided by the identified text 303), an image (still or moving) 304 (if available), and summary text 305 (if available). - Also included in the example contextual insights preview 300 may be a preview of various modes of operation that may form groupings in the contextual insights, or various
other groupings 320 determined by the contextual insights service 110. To enable a user to navigate the contextual insights, the relevant results 310 can be grouped into modules 320 that may be indicative of modes of operation or other groupings. - In the example illustrated in
FIG. 3 , the results 310 are grouped by source. "Source," in this context, may mean a network location, website, type of application, type of result (such as an image), or other logical method of grouping results. Some examples of sources might be the Wikipedia online encyclopedia; a local network source, such as an internal web server and/or social graph, privately available to the users in a company; a particular news website; image files from a photo-sharing website; structured data from a database; or private files on the user's drives or personal cloud storage. - It should be noted that the modular groupings may be displayed differently based on contextual information about the user. For example, a user at home may receive consumer or entertainment-oriented information sources. The same user might receive different groupings (and, as noted above, different results) when at work. Many such forms of groupings are possible. In some cases, as noted, the groupings or modules may be formed by the strength of the relationship between focus of attention terms or concepts and context terms. These aspects were discussed with respect to
FIG. 2 . -
FIG. 4 shows a block diagram illustrating components of a computing device or system used in some implementations of the described contextual insights service. For example, any computing device operative to run a contextual insights service 110 or intermediate devices facilitating interaction between other devices in the environment may each be implemented as described with respect to system 400, which can itself include one or more computing devices. The system 400 can include one or more blade server devices, standalone server devices, personal computers, routers, hubs, switches, bridges, firewall devices, intrusion detection devices, mainframe computers, network-attached storage devices, and other types of computing devices. The hardware can be configured according to any suitable computer architecture, such as a Symmetric Multi-Processing (SMP) architecture or a Non-Uniform Memory Access (NUMA) architecture. - The
system 400 can include a processing system 401, which may include a processing device such as a central processing unit (CPU) or microprocessor and other circuitry that retrieves and executes software 402 from storage system 403. Processing system 401 may be implemented within a single processing device but may also be distributed across multiple processing devices or sub-systems that cooperate in executing program instructions. - Examples of
processing system 401 include general purpose central processing units, application specific processors, and logic devices, as well as any other type of processing device, combinations, or variations thereof. The one or more processing devices may include multiprocessors or multi-core processors and may operate according to one or more suitable instruction sets including, but not limited to, a Reduced Instruction Set Computing (RISC) instruction set, a Complex Instruction Set Computing (CISC) instruction set, or a combination thereof. In certain embodiments, one or more digital signal processors (DSPs) may be included as part of the computer hardware of the system in place of or in addition to a general purpose CPU. -
Storage system 403 may comprise any computer readable storage media readable by processing system 401 and capable of storing software 402, including contextual insights components 404 (such as smart selection 131, context analysis 132, query formulation 133, and result post-processing 134). Storage system 403 may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. - Examples of storage media include random access memory (RAM), read only memory (ROM), magnetic disks, optical disks, CDs, DVDs, flash memory, solid state memory, phase change memory, or any other suitable storage media. Certain implementations may involve either or both virtual memory and non-virtual memory. In no case do storage media consist of a propagated signal. In addition to storage media, in some implementations,
storage system 403 may also include communication media over which software 402 may be communicated internally or externally. -
Storage system 403 may be implemented as a single storage device but may also be implemented across multiple storage devices or sub-systems co-located or distributed relative to each other. Storage system 403 may include additional elements, such as a controller, capable of communicating with processing system 401. -
Software 402 may be implemented in program instructions and among other functions may, when executed by system 400 in general or processing system 401 in particular, direct system 400 or processing system 401 to operate as described herein for enabling contextual insights. Software 402 may provide program instructions 404 that implement a contextual insights service. Software 402 may implement on system 400 components, programs, agents, or layers that implement in machine-readable processing instructions 404 the methods described herein as performed by the contextual insights service. -
Software 402 may also include additional processes, programs, or components, such as operating system software or other application software. Software 402 may also include firmware or some other form of machine-readable processing instructions executable by processing system 401. - In general,
software 402 may, when loaded into processing system 401 and executed, transform system 400 overall from a general-purpose computing system into a special-purpose computing system customized to facilitate contextual insights. Indeed, encoding software 402 on storage system 403 may transform the physical structure of storage system 403. The specific transformation of the physical structure may depend on various factors in different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the storage media of storage system 403 and whether the computer-storage media are characterized as primary or secondary storage. -
System 400 may represent any computing system on which software 402 may be staged and from where software 402 may be distributed, transported, downloaded, or otherwise provided to yet another computing system for deployment and execution, or yet additional distribution. - In embodiments where the
system 400 includes multiple computing devices, one or more communications networks may be used to facilitate communication among the computing devices. For example, the one or more communications networks can include a local, wide area, or ad hoc network that facilitates communication among the computing devices. One or more direct communication links can be included between the computing devices. In addition, in some cases, the computing devices can be installed at geographically distributed locations. In other cases, the multiple computing devices can be installed at a single geographic location, such as a server farm or an office. - A
communication interface 405 may be included, providing communication connections and devices that allow for communication between system 400 and other computing systems (not shown) over a communication network or collection of networks (not shown) or the air. Examples of connections and devices that together allow for inter-system communication may include network interface cards, antennas, power amplifiers, RF circuitry, transceivers, and other communication circuitry. The connections and devices may communicate over communication media, such as metal, glass, or air, to exchange communications with other computing systems or networks of systems. The aforementioned communication media, network, connections, and devices are well known and need not be discussed at length here. -
system 400 may be included in a system-on-a-chip (SoC) device. These elements may include, but are not limited to, theprocessing system 401, acommunications interface 405, and even elements of thestorage system 403 andsoftware 402. -
FIG. 5 illustrates an example system architecture in which an implementation of techniques for contextual insights may be carried out. In the example illustrated in FIG. 5 , an application 501 for interacting with textual content can be implemented on a client device 500, which may be or include computing systems such as a laptop, desktop, tablet, reader, mobile phone, and the like. Contextual insights component 502 can be integrated with application 501 to facilitate communication with contextual insights service 511. -
Contextual insights service 511 may be implemented as software or hardware (or a combination thereof) on server 510, which may be an instantiation of system 400. The features and functions of a contextual insights service 511 may be callable by device 500, application 501, or contextual insights component 502 via an API. - The
contextual insights service 511 may initiate and send search queries to search service 521. Search service 521 may be implemented on server 520, which may itself be an instantiation of a system similar to that described with respect to system 400 or aspects thereof. Many search services may be available for querying in a given environment. - Communications and interchanges of data between components in the environment may take place over
network 550. The network 550 can include, but is not limited to, a cellular network (e.g., wireless phone), a point-to-point dial up connection, a satellite network, the Internet, a local area network (LAN), a wide area network (WAN), a Wi-Fi network, an ad hoc network, an intranet, an extranet, or a combination thereof. The network may include one or more connected networks (e.g., a multi-network environment) including public networks, such as the Internet, and/or private networks such as a secure enterprise private network. - Alternatively, or in addition, the functionality, methods and processes described herein can be implemented, at least in part, by one or more hardware modules (or logic components). For example, the hardware modules can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field programmable gate arrays (FPGAs), system-on-a-chip (SoC) systems, complex programmable logic devices (CPLDs) and other programmable logic devices now known or later developed. When the hardware modules are activated, the hardware modules perform the functionality, methods and processes included within the hardware modules.
- Certain aspects of the invention provide the following non-limiting embodiments:
- A method for facilitating contextual insights comprising: receiving a request for contextual insights with respect to at least some text; determining from information provided with the request a focus of attention for the contextual insights; performing context analysis from context provided with the request to determine query terms; formulating at least one query using one or more of the query terms; initiating a search by sending the at least one query to at least one search service; receiving results of the search; and organizing and filtering the results according to at least some of the context.
- The method of example 1, wherein query items, including operators such as OR, NOT, and boost, and metadata such as the user's location, time of day, and client device, are also determined from the information provided with the request, the formulating of the at least one query further using one or more of the query items.
- The method of any of examples 1-2, wherein the information provided with the request for contextual insights comprises an indication of a selection of text.
- The method of any of examples 1-2, wherein the information provided with the request for contextual insights comprises an indication of a selection of a region.
- The method of any of examples 1-4, wherein determining from the indication the focus of attention comprises predicting the focus of attention by: modifying an initially indicated text section from the information provided with the request with additional text selected from the context provided with the request to form one or more candidate foci of attention; determining a probability or score for each of the one or more candidate foci of attention; and selecting at least one of the candidate foci of attention having the highest probability or score.
- The method of any of examples 1-5, wherein the context comprises one or more of content surrounding the indication, device metadata, application metadata, and user metadata.
- The method of any of examples 1-6, wherein formulating the at least one query further comprises: determining a mode of operation from one or more of a level of ambiguity in the focus of attention and a user preference; and modifying the query in response to the mode of operation.
- The method of any of examples 1-7, wherein formulating the at least one query further comprises modifying the query in response to user metadata.
- The method of any of examples 1-8, wherein organizing and filtering the results further comprises: detecting a pattern in the results, wherein the pattern is based on a level of similarity of one or more of rankings of the results, content of the results, and origin of the results; and using the pattern to group, re-sort, or remove results.
- The method of any of examples 1-9, wherein determining the query terms comprises: performing context analysis of one or more of: content of a file being consumed or created in an application that is a source of the request; application properties of the application; device properties of a device on which the application is executed; or metadata associated with a user's identity, locality, environment, language, privacy settings, search history, interests and/or access to computing resources.
- The method of example 10, wherein performing context analysis of the content of the file performs context analysis of all content of the file being consumed or created in the application or performs context analysis of a particular amount of content of the file.
- The method of example 10 or 11, wherein performing context analysis of the content further comprises selecting candidate context terms from content surrounding a focus-of-attention term and analyzing the candidate context terms in relation to the focus-of-attention term.
- The method of example 12, wherein determining the query terms further comprises scoring the candidate context terms independently of each other but in relation to the focus-of-attention term; and ranking the scores for each pair of candidate context term and focus-of-attention term.
- The method of any of examples 12-13, wherein determining the query terms further comprises using query logs of search engines to analyze a relevance of a candidate context term to the focus-of-attention-term.
- The method of any of examples 12-14, comprising requesting a strength relationship value for the candidate context terms from an n-gram service.
- The method of any of examples 12-15, wherein determining the query terms further comprises determining whether a candidate context term is a named entity and adjusting a relevance of the candidate context term according to whether or not the candidate context term is the named entity.
- The method of any of examples 12-16, wherein determining the query terms further comprises determining a distance value of a number of words or terms between the candidate context term and the focus-of-attention term.
- The method of any of examples 12-17, wherein the relevance of a candidate context term to the focus-of-attention term is determined using anchor text available from an online knowledge-base.
- A computer-readable storage medium having instructions stored thereon to perform the method of any of examples 1-18.
- A service comprising: one or more computer readable storage media; program instructions stored on at least one of the one or more computer readable storage media that, when executed by a processing system, direct the processing system to: in response to receiving a request for contextual insights with respect to at least some text: determine a focus of attention from the information provided with the request; perform context analysis from context provided with the request to determine one or more context terms; formulate at least one query using one or more of the focus of attention and the context terms; send the at least one query to at least one search service to initiate a search; and in response to receiving one or more results from the at least one search service, organize and filter the results according to at least some of the context.
- The service of example 20, wherein the program instructions that direct the processing system to determine the focus of attention from the indication direct the processing system to: modify an initially indicated text section from the information provided with the request with an additional text selected from the context provided with the request to form one or more candidate foci of attention; determine a probability or score for each of the one or more candidate foci of attention; and select at least one of the candidate foci of attention having the highest probability or score.
- The service of any of examples 20-21, wherein the context comprises one or more of the content surrounding the indication, device metadata, application metadata, and user metadata.
- The service of any of examples 20-22, wherein the program instructions that direct the processing system to formulate the at least one query direct the processing system to: determine a mode of operation from one or more of a level of ambiguity in the focus of attention and a user preference; and modify the query in response to the mode of operation.
- The service of any of examples 20-23, wherein the program instructions that direct the processing system to formulate the at least one query directs the processing system to modify the query in response to user metadata.
- The service of any of examples 20-24, wherein the program instructions that direct the processing system to organize and filter the results direct the processing system to: detect a pattern in the results, wherein the pattern is based on a level of similarity of one or more of rankings of the results, content of the results, and origin of the results; and use the pattern to group, re-sort, or remove results.
- The service of any of examples 20-25, wherein the program instructions direct the processing system to perform any of the steps of the methods in examples 1-19.
- A system comprising: a processing system; one or more computer readable storage media; program instructions stored on at least one of the one or more storage media that, when executed by the processing system, direct the processing system to: determine, from information provided with a request for contextual insights with respect to at least some text, a focus of attention for the contextual insights; perform context analysis of a context provided with the request to determine query terms; formulate at least one query using one or more of the query terms; send the at least one query to at least one search service; organize and filter results received from the at least one search service according to at least some of the context; and provide the organized and filtered results to a source of the request.
- The system of example 27, wherein the program instructions that direct the processing system to determine the focus of attention from the indication direct the processing system to: modify an initially indicated text section from the information provided with the request with an additional text selected from the context provided with the request to form one or more candidate foci of attention; determine a probability or score for each of the one or more candidate foci of attention; and select at least one of the candidate foci of attention having the highest probability or score.
- The system of any of examples 27-28, wherein the request context comprises one or more of the content surrounding the indication, device metadata, application metadata, and user metadata.
- The system of any of examples 27-29, wherein the program instructions that direct the processing system to formulate the at least one query direct the processing system to: determine a mode of operation from one or more of a level of ambiguity in the focus of attention and a user preference; and modify the query in response to the mode of operation.
- The system of any of examples 27-30, wherein the program instructions that direct the processing system to formulate the at least one query direct the processing system to modify the query in response to user metadata.
- The system of any of examples 27-31, wherein the program instructions that direct the processing system to organize and filter results received from the at least one search service according to at least some of the context direct the processing system to: detect a pattern in the results, wherein the pattern is based on a level of similarity of one or more of rankings of the results, content of the results, and origin of the results; and use the pattern to group, re-sort, or remove results.
- The system of any of examples 27-32, wherein the program instructions direct the processing system to perform any of the steps of the methods in examples 1-19.
- A system comprising: a means for receiving a request for contextual insights with respect to at least some text; a means for determining from information provided with the request a focus of attention for the contextual insights; a means for performing context analysis from context provided with the request to determine query terms; a means for formulating at least one query using one or more of the query terms; a means for initiating a search by sending the at least one query to at least one search service; a means for receiving results of the search; and a means for organizing and filtering the results according to at least some of the context.
- It should be understood that the examples and embodiments described herein are for illustrative purposes only and that various modifications or changes in light thereof will be suggested to persons skilled in the art and are to be included within the spirit and purview of this application.
- Although the subject matter has been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims and other equivalent features and acts are intended to be within the scope of the claims.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/508,431 US20150100562A1 (en) | 2013-10-07 | 2014-10-07 | Contextual insights and exploration |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361887954P | 2013-10-07 | 2013-10-07 | |
US14/508,431 US20150100562A1 (en) | 2013-10-07 | 2014-10-07 | Contextual insights and exploration |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150100562A1 true US20150100562A1 (en) | 2015-04-09 |
Family
ID=51790877
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/245,646 Active 2034-12-05 US9436918B2 (en) | 2013-10-07 | 2014-04-04 | Smart selection of text spans |
US14/508,431 Abandoned US20150100562A1 (en) | 2013-10-07 | 2014-10-07 | Contextual insights and exploration |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/245,646 Active 2034-12-05 US9436918B2 (en) | 2013-10-07 | 2014-04-04 | Smart selection of text spans |
Country Status (6)
Country | Link |
---|---|
US (2) | US9436918B2 (en) |
EP (2) | EP3055789A1 (en) |
KR (1) | KR20160067202A (en) |
CN (2) | CN105637507B (en) |
TW (1) | TW201519075A (en) |
WO (2) | WO2015053993A1 (en) |
Families Citing this family (171)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8677377B2 (en) | 2005-09-08 | 2014-03-18 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US10002189B2 (en) | 2007-12-20 | 2018-06-19 | Apple Inc. | Method and apparatus for searching using an active ontology |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US20100030549A1 (en) | 2008-07-31 | 2010-02-04 | Lee Michael M | Mobile device having human language translation capability with positional feedback |
US8676904B2 (en) | 2008-10-02 | 2014-03-18 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US10255566B2 (en) | 2011-06-03 | 2019-04-09 | Apple Inc. | Generating and processing task items that represent tasks to perform |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US8682667B2 (en) | 2010-02-25 | 2014-03-25 | Apple Inc. | User profiling for selecting user specific voice input processing information |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US11568331B2 (en) * | 2011-09-26 | 2023-01-31 | Open Text Corporation | Methods and systems for providing automated predictive analysis |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US10417037B2 (en) | 2012-05-15 | 2019-09-17 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
JP2014099052A (en) * | 2012-11-14 | 2014-05-29 | International Business Maschines Corporation | Apparatus for editing text, data processing method and program |
AU2014214676A1 (en) | 2013-02-07 | 2015-08-27 | Apple Inc. | Voice trigger for a digital assistant |
US10652394B2 (en) | 2013-03-14 | 2020-05-12 | Apple Inc. | System and method for processing voicemail |
US10748529B1 (en) | 2013-03-15 | 2020-08-18 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
WO2014197335A1 (en) | 2013-06-08 | 2014-12-11 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
CN110442699A (en) | 2013-06-09 | 2019-11-12 | 苹果公司 | Operate method, computer-readable medium, electronic equipment and the system of digital assistants |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
AU2014306221B2 (en) | 2013-08-06 | 2017-04-06 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US9721002B2 (en) * | 2013-11-29 | 2017-08-01 | Sap Se | Aggregating results from named entity recognition services |
US10296160B2 (en) | 2013-12-06 | 2019-05-21 | Apple Inc. | Method for extracting salient dialog usage from live data |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
EP3480811A1 (en) | 2014-05-30 | 2019-05-08 | Apple Inc. | Multi-command single utterance input method |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
US11334720B2 (en) | 2019-04-17 | 2022-05-17 | International Business Machines Corporation | Machine learned sentence span inclusion judgments |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9811352B1 (en) | 2014-07-11 | 2017-11-07 | Google Inc. | Replaying user input actions using screen capture images |
US9965559B2 (en) | 2014-08-21 | 2018-05-08 | Google Llc | Providing automatic actions for mobile onscreen content |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10152299B2 (en) | 2015-03-06 | 2018-12-11 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9703541B2 (en) | 2015-04-28 | 2017-07-11 | Google Inc. | Entity action suggestion on a mobile device |
US10460227B2 (en) | 2015-05-15 | 2019-10-29 | Apple Inc. | Virtual assistant in a communication session |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10200824B2 (en) | 2015-05-27 | 2019-02-05 | Apple Inc. | Systems and methods for proactively identifying and surfacing relevant content on a touch-sensitive device |
US9578173B2 (en) | 2015-06-05 | 2017-02-21 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US20160378747A1 (en) | 2015-06-29 | 2016-12-29 | Apple Inc. | Virtual assistant for media playback |
US9971940B1 (en) * | 2015-08-10 | 2018-05-15 | Google Llc | Automatic learning of a video matching system |
US10740384B2 (en) | 2015-09-08 | 2020-08-11 | Apple Inc. | Intelligent automated assistant for media search and playback |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10331312B2 (en) | 2015-09-08 | 2019-06-25 | Apple Inc. | Intelligent automated assistant in a media environment |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US10970646B2 (en) | 2015-10-01 | 2021-04-06 | Google Llc | Action suggestions for user-selected content |
US10178527B2 (en) | 2015-10-22 | 2019-01-08 | Google Llc | Personalized entity repository |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10956666B2 (en) | 2015-11-09 | 2021-03-23 | Apple Inc. | Unconventional virtual assistant interactions |
US10055390B2 (en) * | 2015-11-18 | 2018-08-21 | Google Llc | Simulated hyperlinks on a mobile device based on user intent and a centered selection of text |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) * | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
CN105975540A (en) | 2016-04-29 | 2016-09-28 | Beijing Xiaomi Mobile Software Co., Ltd. | Information display method and device |
US11227589B2 (en) | 2016-06-06 | 2022-01-18 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant |
DK179415B1 (en) | 2016-06-11 | 2018-06-14 | Apple Inc | Intelligent device arbitration and control |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
WO2018053735A1 (en) * | 2016-09-21 | 2018-03-29 | Zhu Xiaojun | Search method and system |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
KR101881439B1 (en) * | 2016-09-30 | 2018-07-25 | Saltlux Inc. | System and method for actively recommending knowledge for document writing |
US10535005B1 (en) | 2016-10-26 | 2020-01-14 | Google Llc | Providing contextual actions for mobile onscreen content |
US11032410B2 (en) * | 2016-11-08 | 2021-06-08 | Microsoft Technology Licensing, Llc | Mobile data insight platforms for data analysis |
EP3545425A4 (en) * | 2016-11-23 | 2020-07-15 | Primal Fusion Inc. | System and method for using a knowledge representation with a machine learning classifier |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
US11237696B2 (en) | 2016-12-19 | 2022-02-01 | Google Llc | Smart assist for repeated actions |
TWI603320B (en) * | 2016-12-29 | 2017-10-21 | Tajen University | Global spoken dialogue system |
CN108279828A (en) * | 2016-12-30 | 2018-07-13 | Beijing Sogou Technology Development Co., Ltd. | Method, apparatus, and terminal for starting an application program |
US11138208B2 (en) * | 2016-12-30 | 2021-10-05 | Microsoft Technology Licensing, Llc | Contextual insight system |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
US20180253638A1 (en) * | 2017-03-02 | 2018-09-06 | Accenture Global Solutions Limited | Artificial Intelligence Digital Agent |
US11003839B1 (en) | 2017-04-28 | 2021-05-11 | I.Q. Joe, Llc | Smart interface with facilitated input and mistake recovery |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
DK201770383A1 (en) | 2017-05-09 | 2018-12-14 | Apple Inc. | User interface for correcting recognition errors |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
DK201770439A1 (en) | 2017-05-11 | 2018-12-13 | Apple Inc. | Offline personal assistant |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
DK180048B1 (en) | 2017-05-11 | 2020-02-04 | Apple Inc. | MAINTAINING THE DATA PROTECTION OF PERSONAL INFORMATION |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
DK179496B1 (en) | 2017-05-12 | 2019-01-15 | Apple Inc. | USER-SPECIFIC ACOUSTIC MODELS |
DK179745B1 (en) | 2017-05-12 | 2019-05-01 | Apple Inc. | SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT |
DK201770427A1 (en) | 2017-05-12 | 2018-12-20 | Apple Inc. | Low-latency intelligent automated assistant |
DK201770432A1 (en) | 2017-05-15 | 2018-12-21 | Apple Inc. | Hierarchical belief states for digital assistants |
DK201770411A1 (en) | 2017-05-15 | 2018-12-20 | Apple Inc. | Multi-modal interfaces |
DK201770431A1 (en) | 2017-05-15 | 2018-12-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US20180336892A1 (en) | 2017-05-16 | 2018-11-22 | Apple Inc. | Detecting a trigger of a digital assistant |
DK179549B1 (en) | 2017-05-16 | 2019-02-12 | Apple Inc. | Far-field extension for digital assistant services |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US10303715B2 (en) | 2017-05-16 | 2019-05-28 | Apple Inc. | Intelligent automated assistant for media exploration |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
US11108709B2 (en) * | 2017-05-25 | 2021-08-31 | Lenovo (Singapore) Pte. Ltd. | Provide status message associated with work status |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
US11816622B2 (en) | 2017-08-14 | 2023-11-14 | ScoutZinc, LLC | System and method for rating of personnel using crowdsourcing in combination with weighted evaluator ratings |
US10445429B2 (en) | 2017-09-21 | 2019-10-15 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
US11113608B2 (en) | 2017-10-30 | 2021-09-07 | Accenture Global Solutions Limited | Hybrid bot framework for enterprises |
CN107844327B (en) * | 2017-11-03 | 2020-10-27 | 南京大学 | Detection system and detection method for realizing context consistency |
JP7311509B2 (en) * | 2017-11-20 | 2023-07-19 | Rovi Guides, Inc. | Systems and methods for filtering supplemental content for e-books |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
CN109917988B (en) * | 2017-12-13 | 2021-12-21 | Tencent Technology (Shenzhen) Co., Ltd. | Method, apparatus, terminal, and computer-readable storage medium for displaying selected content |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10831812B2 (en) | 2018-03-20 | 2020-11-10 | Microsoft Technology Licensing, Llc | Author-created digital agents |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US11715042B1 (en) | 2018-04-20 | 2023-08-01 | Meta Platforms Technologies, Llc | Interpretability of deep reinforcement learning models in assistant systems |
US10782986B2 (en) | 2018-04-20 | 2020-09-22 | Facebook, Inc. | Assisting users with personalized and contextual communication content |
US11676220B2 (en) | 2018-04-20 | 2023-06-13 | Meta Platforms, Inc. | Processing multimodal user input for assistant systems |
US10978056B1 (en) * | 2018-04-20 | 2021-04-13 | Facebook, Inc. | Grammaticality classification for natural language generation in assistant systems |
US11307880B2 (en) | 2018-04-20 | 2022-04-19 | Meta Platforms, Inc. | Assisting users with personalized and contextual communication content |
US11886473B2 (en) | 2018-04-20 | 2024-01-30 | Meta Platforms, Inc. | Intent identification for agent matching by assistant systems |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
DK201870355A1 (en) | 2018-06-01 | 2019-12-16 | Apple Inc. | Virtual assistant operation in multi-device environments |
DK179822B1 (en) | 2018-06-01 | 2019-07-12 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
DK180639B1 (en) | 2018-06-01 | 2021-11-04 | Apple Inc | DISMISSAL OF AN ATTENTION-AWARE VIRTUAL ASSISTANT |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
US10944859B2 (en) | 2018-06-03 | 2021-03-09 | Apple Inc. | Accelerated task performance |
JP6965846B2 (en) * | 2018-08-17 | 2021-11-10 | Nippon Telegraph and Telephone Corporation | Language model score calculation device, learning device, language model score calculation method, learning method and program |
US11010561B2 (en) | 2018-09-27 | 2021-05-18 | Apple Inc. | Sentiment prediction from textual data |
US10839159B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Named entity normalization in a spoken dialog system |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US11170166B2 (en) | 2018-09-28 | 2021-11-09 | Apple Inc. | Neural typographical error modeling via generative adversarial networks |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
EP3857431A1 (en) * | 2018-10-30 | 2021-08-04 | Google LLC | Automatic hyperlinking of documents |
CN109543022B (en) * | 2018-12-17 | 2020-10-13 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Text error correction method and device |
US11200461B2 (en) * | 2018-12-21 | 2021-12-14 | Capital One Services, Llc | Methods and arrangements to identify feature contributions to erroneous predictions |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
US20220075944A1 (en) * | 2019-02-19 | 2022-03-10 | Google Llc | Learning to extract entities from conversations with neural networks |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US11275892B2 (en) | 2019-04-29 | 2022-03-15 | International Business Machines Corporation | Traversal-based sentence span judgements |
DK201970509A1 (en) | 2019-05-06 | 2021-01-15 | Apple Inc | Spoken notifications |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
US11140099B2 (en) | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
DK180129B1 (en) | 2019-05-31 | 2020-06-02 | Apple Inc. | User activity shortcut suggestions |
US11289073B2 (en) | 2019-05-31 | 2022-03-29 | Apple Inc. | Device text to speech |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
DK201970511A1 (en) | 2019-05-31 | 2021-02-15 | Apple Inc | Voice identification in digital assistant systems |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
US11468890B2 (en) | 2019-06-01 | 2022-10-11 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
WO2021056255A1 (en) | 2019-09-25 | 2021-04-01 | Apple Inc. | Text detection using global geometry estimators |
US10956295B1 (en) * | 2020-02-26 | 2021-03-23 | Sap Se | Automatic recognition for smart declaration of user interface elements |
US11768945B2 (en) * | 2020-04-07 | 2023-09-26 | Allstate Insurance Company | Machine learning system for determining a security vulnerability in computer software |
US11061543B1 (en) | 2020-05-11 | 2021-07-13 | Apple Inc. | Providing relevant data items based on context |
US11043220B1 (en) | 2020-05-11 | 2021-06-22 | Apple Inc. | Digital assistant hardware abstraction |
US11490204B2 (en) | 2020-07-20 | 2022-11-01 | Apple Inc. | Multi-device audio adjustment coordination |
US11438683B2 (en) | 2020-07-21 | 2022-09-06 | Apple Inc. | User identification using headphones |
US11829720B2 (en) | 2020-09-01 | 2023-11-28 | Apple Inc. | Analysis and validation of language models |
JP7534673B2 (en) | 2020-11-20 | 2024-08-15 | Fujitsu Limited | Machine learning program, machine learning method and natural language processing device |
US20220366513A1 (en) * | 2021-05-14 | 2022-11-17 | Jpmorgan Chase Bank, N.A. | Method and apparatus for check fraud detection through check image analysis |
CN113641724B (en) * | 2021-07-22 | 2024-01-19 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Knowledge tag mining method and apparatus, electronic device, and storage medium |
WO2023028599A1 (en) * | 2021-08-27 | 2023-03-02 | Rock Cube Holdings LLC | Systems and methods for time-dependent hyperlink presentation |
IL313797A (en) * | 2022-01-13 | 2024-08-01 | High Sec Labs Ltd | System and method for secure copy-and-paste operations between hosts through a peripheral sharing device |
Family Cites Families (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7000197B1 (en) * | 2000-06-01 | 2006-02-14 | Autodesk, Inc. | Method and apparatus for inferred selection of objects |
US6907581B2 (en) * | 2001-04-03 | 2005-06-14 | Ramot At Tel Aviv University Ltd. | Method and system for implicitly resolving pointing ambiguities in human-computer interaction (HCI) |
JP4278050B2 (en) * | 2004-01-30 | 2009-06-10 | SoftBank Mobile Corp. | Search device and information providing system |
US7536382B2 (en) * | 2004-03-31 | 2009-05-19 | Google Inc. | Query rewriting with entity detection |
GB0407816D0 (en) * | 2004-04-06 | 2004-05-12 | British Telecomm | Information retrieval |
US7603349B1 (en) * | 2004-07-29 | 2009-10-13 | Yahoo! Inc. | User interfaces for search systems using in-line contextual queries |
US7856441B1 (en) * | 2005-01-10 | 2010-12-21 | Yahoo! Inc. | Search systems and methods using enhanced contextual queries |
US8838562B1 (en) * | 2004-10-22 | 2014-09-16 | Google Inc. | Methods and apparatus for providing query parameters to a search engine |
WO2007118424A1 (en) * | 2006-04-13 | 2007-10-25 | Zhang, Sheng | Web search on mobile devices |
US20100241663A1 (en) * | 2008-02-07 | 2010-09-23 | Microsoft Corporation | Providing content items selected based on context |
US8786556B2 (en) * | 2009-03-12 | 2014-07-22 | Nokia Corporation | Method and apparatus for selecting text information |
US20100289757A1 (en) * | 2009-05-14 | 2010-11-18 | Budelli Joey G | Scanner with gesture-based text selection capability |
US9262063B2 (en) | 2009-09-02 | 2016-02-16 | Amazon Technologies, Inc. | Touch-screen user interface |
US8489390B2 (en) * | 2009-09-30 | 2013-07-16 | Cisco Technology, Inc. | System and method for generating vocabulary from network data |
EP2488963A1 (en) * | 2009-10-15 | 2012-08-22 | Rogers Communications Inc. | System and method for phrase identification |
US9811507B2 (en) | 2010-01-11 | 2017-11-07 | Apple Inc. | Presenting electronic publications on a graphical user interface of an electronic device |
US8704783B2 (en) * | 2010-03-24 | 2014-04-22 | Microsoft Corporation | Easy word selection and selection ahead of finger |
US9069416B2 (en) * | 2010-03-25 | 2015-06-30 | Google Inc. | Method and system for selecting content using a touchscreen |
US8719246B2 (en) * | 2010-06-28 | 2014-05-06 | Microsoft Corporation | Generating and presenting a suggested search query |
US9002701B2 (en) * | 2010-09-29 | 2015-04-07 | Rhonda Enterprises, Llc | Method, system, and computer readable medium for graphically displaying related text in an electronic document |
US8818981B2 (en) * | 2010-10-15 | 2014-08-26 | Microsoft Corporation | Providing information to users based on context |
US20120102401A1 (en) * | 2010-10-25 | 2012-04-26 | Nokia Corporation | Method and apparatus for providing text selection |
JP5087129B2 (en) * | 2010-12-07 | 2012-11-28 | Toshiba Corporation | Information processing apparatus and information processing method |
US9645986B2 (en) | 2011-02-24 | 2017-05-09 | Google Inc. | Method, medium, and system for creating an electronic book with an umbrella policy |
KR20120102262A (en) * | 2011-03-08 | 2012-09-18 | Samsung Electronics Co., Ltd. | Method and device for selecting desired content from text in a portable terminal |
DE112011105305T5 (en) * | 2011-06-03 | 2014-03-13 | Google, Inc. | Gestures for text selection |
US8612584B2 (en) | 2011-08-29 | 2013-12-17 | Google Inc. | Using eBook reading data to generate time-based information |
US9612670B2 (en) * | 2011-09-12 | 2017-04-04 | Microsoft Technology Licensing, Llc | Explicit touch selection and cursor placement |
US8842085B1 (en) * | 2011-09-23 | 2014-09-23 | Amazon Technologies, Inc. | Providing supplemental information for a digital work |
US20150205490A1 (en) * | 2011-10-05 | 2015-07-23 | Google Inc. | Content selection mechanisms |
US8626545B2 (en) | 2011-10-17 | 2014-01-07 | CrowdFlower, Inc. | Predicting future performance of multiple workers on crowdsourcing tasks and selecting repeated crowdsourcing workers |
US9691381B2 (en) * | 2012-02-21 | 2017-06-27 | Mediatek Inc. | Voice command recognition method and related electronic device and computer-readable medium |
CN103294706A (en) * | 2012-02-28 | 2013-09-11 | Tencent Technology (Shenzhen) Co., Ltd. | Text search method and device for touch-enabled terminals |
US9292192B2 (en) * | 2012-04-30 | 2016-03-22 | Blackberry Limited | Method and apparatus for text selection |
US9916396B2 (en) * | 2012-05-11 | 2018-03-13 | Google Llc | Methods and systems for content-based search |
WO2014000251A1 (en) * | 2012-06-29 | 2014-01-03 | Microsoft Corporation | Input method editor |
Application events (2014):
- 2014-04-04: US application US14/245,646, published as US9436918B2 (Active)
- 2014-08-19: TW application TW103128489, published as TW201519075A (status unknown)
- 2014-10-01: WO application PCT/US2014/058506, published as WO2015053993A1 (Application Filing)
- 2014-10-01: CN application CN201480055252.2, published as CN105637507B (Active)
- 2014-10-01: EP application EP14796315.1, published as EP3055789A1 (Ceased)
- 2014-10-07: EP application EP14787348.3, published as EP3055787A1 (Ceased)
- 2014-10-07: US application US14/508,431, published as US20150100562A1 (Abandoned)
- 2014-10-07: CN application CN201480055402.X, published as CN105612517A (Pending)
- 2014-10-07: KR application KR1020167009112, published as KR20160067202A (Application Discontinuation)
- 2014-10-07: WO application PCT/US2014/059451, published as WO2015054218A1 (Application Filing)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5832528A (en) * | 1994-08-29 | 1998-11-03 | Microsoft Corporation | Method and system for selecting text with a mouse input device in a computer system |
US6385602B1 (en) * | 1998-11-03 | 2002-05-07 | E-Centives, Inc. | Presentation of search results using dynamic categorization |
US20070136251A1 (en) * | 2003-08-21 | 2007-06-14 | Idilia Inc. | System and Method for Processing a Query |
US20060074883A1 (en) * | 2004-10-05 | 2006-04-06 | Microsoft Corporation | Systems, methods, and interfaces for providing personalized search and information access |
US8706748B2 (en) * | 2007-12-12 | 2014-04-22 | Decho Corporation | Methods for enhancing digital search query techniques based on task-oriented user activity |
US20090228842A1 (en) * | 2008-03-04 | 2009-09-10 | Apple Inc. | Selecting of text using gestures |
US20140081993A1 (en) * | 2012-09-20 | 2014-03-20 | Intelliresponse Systems Inc. | Disambiguation framework for information searching |
Non-Patent Citations (2)
Title |
---|
Eric J. Glover, "Architecture of a metasearch engine that supports user information needs," in CIKM '99: Proceedings of the Eighth International Conference on Information and Knowledge Management, ACM, January 1999, pp. 210-216 *
Lev Finkelstein, "Placing search in context: the concept revisited," ACM Transactions on Information Systems (TOIS), ACM, January 2002, pp. 116-126 *
Cited By (123)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11019161B2 (en) | 2005-10-26 | 2021-05-25 | Cortica, Ltd. | System and method for profiling users interest based on multimedia content analysis |
US10848590B2 (en) | 2005-10-26 | 2020-11-24 | Cortica Ltd | System and method for determining a contextual insight and providing recommendations based thereon |
US11758004B2 (en) | 2005-10-26 | 2023-09-12 | Cortica Ltd. | System and method for providing recommendations based on user profiles |
US11003706B2 (en) | 2005-10-26 | 2021-05-11 | Cortica Ltd | System and methods for determining access permissions on personalized clusters of multimedia content elements |
US11403336B2 (en) | 2005-10-26 | 2022-08-02 | Cortica Ltd. | System and method for removing contextually identical multimedia content elements |
US11216498B2 (en) | 2005-10-26 | 2022-01-04 | Cortica, Ltd. | System and method for generating signatures to three-dimensional multimedia data elements |
US11032017B2 (en) | 2005-10-26 | 2021-06-08 | Cortica, Ltd. | System and method for identifying the context of multimedia content elements |
US11604847B2 (en) | 2005-10-26 | 2023-03-14 | Cortica Ltd. | System and method for overlaying content on a multimedia content element based on user interest |
US10902049B2 (en) | 2005-10-26 | 2021-01-26 | Cortica Ltd | System and method for assigning multimedia content elements to users |
US10831814B2 (en) | 2005-10-26 | 2020-11-10 | Cortica, Ltd. | System and method for linking multimedia data elements to web pages |
US10776585B2 (en) | 2005-10-26 | 2020-09-15 | Cortica, Ltd. | System and method for recognizing characters in multimedia content |
US10742340B2 (en) | 2005-10-26 | 2020-08-11 | Cortica Ltd. | System and method for identifying the context of multimedia content elements displayed in a web-page and providing contextual filters respective thereto |
US10193990B2 (en) * | 2005-10-26 | 2019-01-29 | Cortica Ltd. | System and method for creating user profiles based on multimedia content |
US10706094B2 (en) | 2005-10-26 | 2020-07-07 | Cortica Ltd | System and method for customizing a display of a user device based on multimedia content element signatures |
US10691642B2 (en) | 2005-10-26 | 2020-06-23 | Cortica Ltd | System and method for enriching a concept database with homogenous concepts |
US10621988B2 (en) | 2005-10-26 | 2020-04-14 | Cortica Ltd | System and method for speech to text translation using cores of a natural liquid architecture system |
US10614626B2 (en) | 2005-10-26 | 2020-04-07 | Cortica Ltd. | System and method for providing augmented reality challenges |
US10331737B2 (en) | 2005-10-26 | 2019-06-25 | Cortica Ltd. | System for generation of a large-scale database of heterogeneous speech |
US10607355B2 (en) | 2005-10-26 | 2020-03-31 | Cortica, Ltd. | Method and system for determining the dimensions of an object shown in a multimedia content item |
US10372746B2 (en) | 2005-10-26 | 2019-08-06 | Cortica, Ltd. | System and method for searching applications using multimedia content elements |
US10387914B2 (en) | 2005-10-26 | 2019-08-20 | Cortica, Ltd. | Method for identification of multimedia content elements and adding advertising content respective thereof |
US10585934B2 (en) | 2005-10-26 | 2020-03-10 | Cortica Ltd. | Method and system for populating a concept database with respect to user identifiers |
US9794636B2 (en) | 2013-11-12 | 2017-10-17 | Google Inc. | Methods, systems, and media for presenting suggestions of media content |
US11381880B2 (en) | 2013-11-12 | 2022-07-05 | Google Llc | Methods, systems, and media for presenting suggestions of media content |
US9485543B2 (en) | 2013-11-12 | 2016-11-01 | Google Inc. | Methods, systems, and media for presenting suggestions of media content |
US10341741B2 (en) | 2013-11-12 | 2019-07-02 | Google Llc | Methods, systems, and media for presenting suggestions of media content |
US10880613B2 (en) | 2013-11-12 | 2020-12-29 | Google Llc | Methods, systems, and media for presenting suggestions of media content |
US11023542B2 (en) | 2013-11-13 | 2021-06-01 | Google Llc | Methods, systems, and media for presenting recommended media content items |
US9552395B2 (en) * | 2013-11-13 | 2017-01-24 | Google Inc. | Methods, systems, and media for presenting recommended media content items |
US20150134653A1 (en) * | 2013-11-13 | 2015-05-14 | Google Inc. | Methods, systems, and media for presenting recommended media content items |
US20150205451A1 (en) * | 2014-01-23 | 2015-07-23 | Lg Electronics Inc. | Mobile terminal and control method for the same |
US9733787B2 (en) * | 2014-01-23 | 2017-08-15 | Lg Electronics Inc. | Mobile terminal and control method for the same |
US20160048326A1 (en) * | 2014-08-18 | 2016-02-18 | Lg Electronics Inc. | Mobile terminal and method of controlling the same |
US10528597B2 (en) | 2014-09-28 | 2020-01-07 | Microsoft Technology Licensing, Llc | Graph-driven authoring in productivity tools |
US10402061B2 (en) | 2014-09-28 | 2019-09-03 | Microsoft Technology Licensing, Llc | Productivity tools for content authoring |
US10210146B2 (en) | 2014-09-28 | 2019-02-19 | Microsoft Technology Licensing, Llc | Productivity tools for content authoring |
US10534502B1 (en) * | 2015-02-18 | 2020-01-14 | David Graham Boyers | Methods and graphical user interfaces for positioning the cursor and selecting text on computing devices with touch-sensitive displays |
US10402410B2 (en) * | 2015-05-15 | 2019-09-03 | Google Llc | Contextualizing knowledge panels |
US20190347265A1 (en) * | 2015-05-15 | 2019-11-14 | Google Llc | Contextualizing knowledge panels |
US11720577B2 (en) | 2015-05-15 | 2023-08-08 | Google Llc | Contextualizing knowledge panels |
KR20200052992A (en) * | 2015-05-15 | 2020-05-15 | 구글 엘엘씨 | Contextualizing knowledge panels |
KR102249436B1 (en) | 2015-05-15 | 2021-05-07 | 구글 엘엘씨 | Contextualizing knowledge panels |
WO2016196697A1 (en) * | 2015-06-03 | 2016-12-08 | Microsoft Technology Licensing, Llc | Graph-driven authoring in productivity tools |
US20170140055A1 (en) * | 2015-11-17 | 2017-05-18 | Dassault Systemes | Thematic web corpus |
US10783196B2 (en) * | 2015-11-17 | 2020-09-22 | Dassault Systemes | Thematic web corpus |
US11037015B2 (en) | 2015-12-15 | 2021-06-15 | Cortica Ltd. | Identification of key points in multimedia data elements |
US11195043B2 (en) | 2015-12-15 | 2021-12-07 | Cortica, Ltd. | System and method for determining common patterns in multimedia content elements based on key points |
US20170228459A1 (en) * | 2016-02-05 | 2017-08-10 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method and device for mobile searching based on artificial intelligence |
US10650007B2 (en) | 2016-04-25 | 2020-05-12 | Microsoft Technology Licensing, Llc | Ranking contextual metadata to generate relevant data insights |
US10783178B2 (en) * | 2016-05-17 | 2020-09-22 | Google Llc | Generating a personal database entry for a user based on natural language user interface input of the user and generating output based on the entry in response to further natural language user interface input of the user |
US11494427B2 (en) * | 2016-05-17 | 2022-11-08 | Google Llc | Generating a personal database entry for a user based on natural language user interface input of the user and generating output based on the entry in response to further natural language user interface input of the user |
US11907276B2 (en) | 2016-05-17 | 2024-02-20 | Google Llc | Generating a personal database entry for a user based on natural language user interface input of the user and generating output based on the entry in response to further natural language user interface input of the user |
US20170337265A1 (en) * | 2016-05-17 | 2017-11-23 | Google Inc. | Generating a personal database entry for a user based on natural language user interface input of the user and generating output based on the entry in response to further natural language user interface input of the user |
US10831763B2 (en) * | 2016-06-10 | 2020-11-10 | Apple Inc. | System and method of generating a key list from multiple search domains |
US20170357696A1 (en) * | 2016-06-10 | 2017-12-14 | Apple Inc. | System and method of generating a key list from multiple search domains |
US10769182B2 (en) | 2016-06-10 | 2020-09-08 | Apple Inc. | System and method of highlighting terms |
US12050857B2 (en) | 2017-05-16 | 2024-07-30 | Apple Inc. | Device, method, and graphical user interface for editing screenshot images |
US11681866B2 (en) | 2017-05-16 | 2023-06-20 | Apple Inc. | Device, method, and graphical user interface for editing screenshot images |
US11210458B2 (en) * | 2017-05-16 | 2021-12-28 | Apple Inc. | Device, method, and graphical user interface for editing screenshot images |
US11760387B2 (en) | 2017-07-05 | 2023-09-19 | AutoBrains Technologies Ltd. | Driving policies determination |
US11899707B2 (en) | 2017-07-09 | 2024-02-13 | Cortica Ltd. | Driving policies determination |
US20190095482A1 (en) * | 2017-09-28 | 2019-03-28 | Oracle International Corporation | Recommending fields for a query based on prior queries |
US10747756B2 (en) * | 2017-09-28 | 2020-08-18 | Oracle International Corporation | Recommending fields for a query based on prior queries |
US20190155955A1 (en) * | 2017-11-20 | 2019-05-23 | Rovi Guides, Inc. | Systems and methods for filtering supplemental content for an electronic book |
US10909193B2 (en) * | 2017-11-20 | 2021-02-02 | Rovi Guides, Inc. | Systems and methods for filtering supplemental content for an electronic book |
US10909191B2 (en) | 2017-11-20 | 2021-02-02 | Rovi Guides, Inc. | Systems and methods for displaying supplemental content for an electronic book |
US11861477B2 (en) | 2018-02-14 | 2024-01-02 | Capital One Services, Llc | Utilizing machine learning models to identify insights in a document |
US10489512B2 (en) | 2018-02-14 | 2019-11-26 | Capital One Services, Llc | Utilizing machine learning models to identify insights in a document |
US10303771B1 (en) | 2018-02-14 | 2019-05-28 | Capital One Services, Llc | Utilizing machine learning models to identify insights in a document |
US11227121B2 (en) | 2018-02-14 | 2022-01-18 | Capital One Services, Llc | Utilizing machine learning models to identify insights in a document |
US10853332B2 (en) * | 2018-04-19 | 2020-12-01 | Microsoft Technology Licensing, Llc | Discovering schema using anchor attributes |
US20190325046A1 (en) * | 2018-04-19 | 2019-10-24 | Microsoft Technology Licensing, Llc | Discovering schema using anchor attributes |
US11320983B1 (en) * | 2018-04-25 | 2022-05-03 | David Graham Boyers | Methods and graphical user interfaces for positioning a selection, selecting, and editing, on a computing device running applications under a touch-based operating system |
US10846544B2 (en) | 2018-07-16 | 2020-11-24 | Cartica Ai Ltd. | Transportation prediction system and method |
US11673583B2 (en) | 2018-10-18 | 2023-06-13 | AutoBrains Technologies Ltd. | Wrong-way driving warning |
US10839694B2 (en) | 2018-10-18 | 2020-11-17 | Cartica Ai Ltd | Blind spot alert |
US11029685B2 (en) | 2018-10-18 | 2021-06-08 | Cartica Ai Ltd. | Autonomous risk assessment for fallen cargo |
US11126870B2 (en) | 2018-10-18 | 2021-09-21 | Cartica Ai Ltd. | Method and system for obstacle detection |
US12128927B2 (en) | 2018-10-18 | 2024-10-29 | Autobrains Technologies Ltd | Situation based processing |
US11282391B2 (en) | 2018-10-18 | 2022-03-22 | Cartica Ai Ltd. | Object detection at different illumination conditions |
US11685400B2 (en) | 2018-10-18 | 2023-06-27 | Autobrains Technologies Ltd | Estimating danger from future falling cargo |
US11181911B2 (en) | 2018-10-18 | 2021-11-23 | Cartica Ai Ltd | Control transfer of a vehicle |
US11718322B2 (en) | 2018-10-18 | 2023-08-08 | Autobrains Technologies Ltd | Risk based assessment |
US11087628B2 (en) | 2018-10-18 | 2021-08-10 | Cartica Ai Ltd. | Using rear sensor for wrong-way driving warning |
US11373413B2 (en) | 2018-10-26 | 2022-06-28 | Autobrains Technologies Ltd | Concept update and vehicle to vehicle communication |
US11126869B2 (en) | 2018-10-26 | 2021-09-21 | Cartica Ai Ltd. | Tracking after objects |
US11700356B2 (en) | 2018-10-26 | 2023-07-11 | AutoBrains Technologies Ltd. | Control transfer of a vehicle |
US11270132B2 (en) | 2018-10-26 | 2022-03-08 | Cartica Ai Ltd | Vehicle to vehicle communication and signatures |
US11244176B2 (en) | 2018-10-26 | 2022-02-08 | Cartica Ai Ltd | Obstacle detection and mapping |
US10789535B2 (en) | 2018-11-26 | 2020-09-29 | Cartica Ai Ltd | Detection of road elements |
US11436282B2 (en) * | 2019-02-20 | 2022-09-06 | Baidu Online Network Technology (Beijing) Co., Ltd. | Methods, devices and media for providing search suggestions |
US11643005B2 (en) | 2019-02-27 | 2023-05-09 | Autobrains Technologies Ltd | Adjusting adjustable headlights of a vehicle |
US11285963B2 (en) | 2019-03-10 | 2022-03-29 | Cartica Ai Ltd. | Driver-based prediction of dangerous events |
US11694088B2 (en) | 2019-03-13 | 2023-07-04 | Cortica Ltd. | Method for object detection using knowledge distillation |
US11755920B2 (en) | 2019-03-13 | 2023-09-12 | Cortica Ltd. | Method for object detection using knowledge distillation |
US11132548B2 (en) | 2019-03-20 | 2021-09-28 | Cortica Ltd. | Determining object information that does not explicitly appear in a media unit signature |
US12055408B2 (en) | 2019-03-28 | 2024-08-06 | Autobrains Technologies Ltd | Estimating a movement of a hybrid-behavior vehicle |
US10796444B1 (en) | 2019-03-31 | 2020-10-06 | Cortica Ltd | Configuring spanning elements of a signature generator |
US12067756B2 (en) | 2019-03-31 | 2024-08-20 | Cortica Ltd. | Efficient calculation of a robust signature of a media unit |
US11222069B2 (en) | 2019-03-31 | 2022-01-11 | Cortica Ltd. | Low-power calculation of a signature of a media unit |
US11481582B2 (en) | 2019-03-31 | 2022-10-25 | Cortica Ltd. | Dynamic matching a sensed signal to a concept structure |
US11275971B2 (en) | 2019-03-31 | 2022-03-15 | Cortica Ltd. | Bootstrap unsupervised learning |
US10846570B2 (en) | 2019-03-31 | 2020-11-24 | Cortica Ltd. | Scale invariant object detection |
US11741687B2 (en) | 2019-03-31 | 2023-08-29 | Cortica Ltd. | Configuring spanning elements of a signature generator |
US10748038B1 (en) | 2019-03-31 | 2020-08-18 | Cortica Ltd. | Efficient calculation of a robust signature of a media unit |
US10776669B1 (en) | 2019-03-31 | 2020-09-15 | Cortica Ltd. | Signature generation and object detection that refer to rare scenes |
US11488290B2 (en) | 2019-03-31 | 2022-11-01 | Cortica Ltd. | Hybrid representation of a media unit |
US10789527B1 (en) | 2019-03-31 | 2020-09-29 | Cortica Ltd. | Method for object detection using shallow neural networks |
US11550865B2 (en) | 2019-08-19 | 2023-01-10 | Dropbox, Inc. | Truncated search results that preserve the most relevant portions |
US11347756B2 (en) | 2019-08-26 | 2022-05-31 | Microsoft Technology Licensing, Llc | Deep command search within and across applications |
US11921730B2 (en) | 2019-08-26 | 2024-03-05 | Microsoft Technology Licensing, Llc | Deep command search within and across applications |
US11593662B2 (en) | 2019-12-12 | 2023-02-28 | Autobrains Technologies Ltd | Unsupervised cluster generation |
US10748022B1 (en) | 2019-12-12 | 2020-08-18 | Cartica Ai Ltd | Crowd separation |
US11590988B2 (en) | 2020-03-19 | 2023-02-28 | Autobrains Technologies Ltd | Predictive turning assistant |
US11827215B2 (en) | 2020-03-31 | 2023-11-28 | AutoBrains Technologies Ltd. | Method for training a driving related object detector |
US11756424B2 (en) | 2020-07-24 | 2023-09-12 | AutoBrains Technologies Ltd. | Parking assist |
US11900046B2 (en) * | 2020-08-07 | 2024-02-13 | Microsoft Technology Licensing, Llc | Intelligent feature identification and presentation |
US12049116B2 (en) | 2020-09-30 | 2024-07-30 | Autobrains Technologies Ltd | Configuring an active suspension |
US11983208B2 (en) * | 2021-02-16 | 2024-05-14 | International Business Machines Corporation | Selection-based searching using concatenated word and context |
US20220261428A1 (en) * | 2021-02-16 | 2022-08-18 | International Business Machines Corporation | Selection-based searching using concatenated word and context |
US12110075B2 (en) | 2021-08-05 | 2024-10-08 | AutoBrains Technologies Ltd. | Providing a prediction of a radius of a motorcycle turn |
US12142005B2 (en) | 2021-10-13 | 2024-11-12 | Autobrains Technologies Ltd | Camera based distance measurements |
US12139166B2 (en) | 2022-06-07 | 2024-11-12 | Autobrains Technologies Ltd | Cabin preferences setting that is based on identification of one or more persons in the cabin |
Also Published As
Publication number | Publication date |
---|---|
TW201519075A (en) | 2015-05-16 |
US9436918B2 (en) | 2016-09-06 |
CN105612517A (en) | 2016-05-25 |
CN105637507A (en) | 2016-06-01 |
WO2015053993A1 (en) | 2015-04-16 |
WO2015054218A1 (en) | 2015-04-16 |
EP3055787A1 (en) | 2016-08-17 |
EP3055789A1 (en) | 2016-08-17 |
US20150100524A1 (en) | 2015-04-09 |
KR20160067202A (en) | 2016-06-13 |
CN105637507B (en) | 2019-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20150100562A1 (en) | Contextual insights and exploration | |
US11294970B1 (en) | Associating an entity with a search query | |
CN107787487B (en) | Deconstructing documents into component blocks for reuse in productivity applications | |
US20200004882A1 (en) | Misinformation detection in online content | |
US20140019460A1 (en) | Targeted search suggestions | |
US9665643B2 (en) | Knowledge-based entity detection and disambiguation | |
US9418128B2 (en) | Linking documents with entities, actions and applications | |
US9245022B2 (en) | Context-based person search | |
US8332748B1 (en) | Multi-directional auto-complete menu | |
US9378283B2 (en) | Instant search results with page previews | |
EP2109050A1 (en) | Facilitating display of an interactive and dynamic cloud of terms related to one or more input terms | |
US20160357842A1 (en) | Graph-driven authoring in productivity tools | |
US10296644B2 (en) | Salient terms and entities for caption generation and presentation | |
US8700594B2 (en) | Enabling multidimensional search on non-PC devices | |
US20110307432A1 (en) | Relevance for name segment searches | |
US10909202B2 (en) | Information providing text reader | |
US20160224621A1 (en) | Associating A Search Query With An Entity | |
US10242033B2 (en) | Extrapolative search techniques | |
US20180349500A1 (en) | Search engine results for low-frequency queries | |
US9811592B1 (en) | Query modification based on textual resource context | |
EP2189917A1 (en) | Facilitating display of an interactive and dynamic cloud with advertising and domain features | |
Kravi et al. | One query, many clicks: Analysis of queries with multiple clicks by the same user | |
Zhang et al. | A knowledge base approach to cross-lingual keyword query interpretation | |
Wang et al. | Cross-modal search on social networking systems by exploring wikipedia concepts | |
Jeon | Lexicon-based context-sensitive reference comments crawler |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MICROSOFT CORPORATION, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KOHLMEIER, BERNHARD S.J.;CHILAKAMARRI, PRADEEP;SAAD, KRISTEN M.;AND OTHERS;SIGNING DATES FROM 20141006 TO 20150528;REEL/FRAME:035915/0387 |
|
AS | Assignment |
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:036100/0048 Effective date: 20150702 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |