
CN110992098A - Method, device, equipment and medium for obtaining object information - Google Patents

Method, device, equipment and medium for obtaining object information

Info

Publication number
CN110992098A
CN110992098A (application CN201911222250.9A)
Authority
CN
China
Prior art keywords
information
data
obtaining
activity data
offline
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911222250.9A
Other languages
Chinese (zh)
Inventor
甘泰玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Cloud Computing Beijing Co Ltd
Original Assignee
Tencent Cloud Computing Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Cloud Computing Beijing Co Ltd filed Critical Tencent Cloud Computing Beijing Co Ltd
Priority to CN201911222250.9A priority Critical patent/CN110992098A/en
Publication of CN110992098A publication Critical patent/CN110992098A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201Market modelling; Market analysis; Collecting market data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70Multimodal biometrics, e.g. combining information from different biometric modalities

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The method for obtaining object information comprises: obtaining information of each associated device of the object according to the object's online behavior data; acquiring offline activity data of each associated device through collection devices according to that information; extracting the object's offline data from the offline activity data of the associated devices; and obtaining the object information according to the object's online behavior data and offline data. The online behavior data and offline data of the object are thus combined, improving the accuracy and integrity of the object information.

Description

Method, device, equipment and medium for obtaining object information
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method, an apparatus, a device, and a medium for obtaining object information.
Background
With the development of internet technology and the popularization of intelligent terminals, people spend an increasing share of their time online.
In the prior art, to better provide targeted services for users, different object information is generally constructed for a user or a class of users according to different service requirements. However, the accuracy and integrity of such object information is poor.
Thus, a technical solution that can improve the accuracy and integrity of object information is urgently needed.
Disclosure of Invention
The embodiment of the application provides a method, a device, equipment and a medium for obtaining object information, which are used for improving the accuracy and the integrity of the object information when the object information is obtained.
In one aspect, a method for obtaining object information is provided, including:
obtaining information of each associated device associated with the object according to the online behavior data of the object;
acquiring offline activity data of each associated device through acquisition equipment according to the information of each associated device associated with the object;
extracting the offline data of the object according to the offline activity data of each associated device;
and obtaining the object information according to the online behavior data and the offline data of the object.
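The claimed steps can be sketched as a small pipeline. Everything below — the function name, the `associated_devices`, `device_id`, and `timestamp` record fields — is an illustrative assumption, not something defined in this application:

```python
def build_object_info(object_id, online_data, offline_records):
    """Illustrative sketch of the claimed method (record layouts assumed).

    online_data: the object's online behavior data, containing an
        'associated_devices' list (step 1: associated-device information).
    offline_records: records reported by collection devices for any device
        (step 2: offline activity data of each associated device).
    """
    devices = set(online_data["associated_devices"])
    # Step 3: extract the object's offline data -- keep only records
    # belonging to this object's associated devices, ordered in time.
    offline_data = sorted(
        (r for r in offline_records if r["device_id"] in devices),
        key=lambda r: r["timestamp"],
    )
    # Step 4: combine online behavior data and offline data.
    return {"object_id": object_id,
            "online": online_data,
            "offline": offline_data}
```

The key point of the scheme is step 3: records from collection devices are attributed to the object only through the device associations derived in step 1.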
In one aspect, an apparatus for obtaining object information is provided, including:
the first obtaining unit is used for obtaining information of each associated device associated with the object according to the online behavior data of the object;
the acquisition unit is used for acquiring offline activity data of each associated device through the acquisition device according to the information of each associated device associated with the object;
the extraction unit is used for extracting the offline data of the object according to the offline activity data of each associated device;
and the second obtaining unit is used for obtaining the object information according to the online behavior data and the offline data of the object.
In one aspect, a control device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to perform any of the steps of the method for obtaining object information.
In one aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of any of the above-mentioned methods of obtaining object information.
In the method, apparatus, device and medium for obtaining object information, the information of each associated device of the object is obtained according to the object's online behavior data; the offline activity data of each associated device is acquired through the collection devices according to that information; the object's offline data is extracted from the offline activity data of the associated devices; and the object information is obtained according to the object's online behavior data and offline data. The online behavior data and offline data of the object are thus combined, improving the accuracy and integrity of the object information.
Furthermore, the object is identified through the collection devices to obtain the object's activity data offline; the offline data is generated from the object activity data and the offline activity data of the associated devices; and the object information is obtained from the offline data and the online behavior data.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic diagram of a system architecture for obtaining object information according to an embodiment of the present disclosure;
FIG. 2a is a diagram illustrating an example of an architecture for obtaining object information according to an embodiment of the present disclosure;
FIG. 2b is a diagram illustrating an example of a detailed architecture for obtaining object information according to an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating an implementation of a method for obtaining object information according to an embodiment of the present disclosure;
FIG. 4a is a diagram illustrating an example of a wireless-based object information obtaining architecture according to an embodiment of the present application;
fig. 4b is an exemplary diagram of an object information obtaining architecture based on bluetooth in the embodiment of the present application;
fig. 4c is a diagram illustrating an example of an object information obtaining architecture based on a camera according to an embodiment of the present disclosure;
fig. 5a is an exemplary diagram of office object information in an embodiment of the present application;
FIG. 5b is an exemplary diagram of a gender attribute recommendation page according to an embodiment of the present disclosure;
FIG. 5c is an exemplary diagram of a promotion page according to an embodiment of the present disclosure;
FIG. 5d is a diagram illustrating an example of an object region according to an embodiment of the present disclosure;
FIG. 5e is a diagram illustrating a business statistics example according to an embodiment of the present disclosure;
FIG. 5f is a diagram illustrating an example of an object trajectory in an embodiment of the present application;
FIG. 5g is an exemplary heat-distribution diagram of mall objects according to an embodiment of the present application;
fig. 5h is an exemplary diagram of a public security scenario in the embodiment of the present application;
FIG. 6 is a schematic structural diagram of an apparatus for obtaining object information according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a control device in an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solution and beneficial effects of the present application more clear and more obvious, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
First, some terms referred to in the embodiments of the present application are explained to facilitate understanding by those skilled in the art.
Terminal device: an electronic device, mobile or fixed, on which various applications can be installed and which can display objects provided in the installed applications. For example, a mobile phone, a tablet computer, various wearable devices, a vehicle-mounted device, a Personal Digital Assistant (PDA), a point-of-sale (POS) terminal, or other electronic devices capable of implementing the above functions.
Application program: a computer program that performs one or more tasks, typically with a visual display through which a target object interacts, such as an electronic map or WeChat.
Artificial Intelligence (AI): a theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly comprises computer vision, voice processing, natural language processing, and machine learning/deep learning.
Computer Vision technology (CV): computer vision is a science that studies how to make machines "see"; more specifically, it uses cameras and computers, in place of human eyes, to perform machine vision tasks such as identification, tracking, and measurement on targets, and further performs image processing so that the processed images are more suitable for human observation or for transmission to instruments for detection. Theories and techniques of computer vision research attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, and also include common biometric technologies such as face recognition and fingerprint recognition.
Object information: information describing the concrete characteristics of an object, typically a set of tags obtained by abstracting related data of the object, which may be used to provide targeted services for the object. Different service requirements lead to different object information for the same user.
Online behavior data: object-related data obtained from the object's various operations on the network, or from object-related information stored in network databases. Online behavior data includes identity information submitted by the object, online behavior information of the object, and the like.
Offline data: object-related data obtained from the object's offline activities. The offline activity data of the object can be obtained by collecting activity data of the object itself, or by collecting activity data of associated devices the object uses offline.
The activity data of the object and the activity data of the associated devices are obtained using various collection devices arranged in different areas or venues.
Associated device of the object: a hardware device used by the object, which may be obtained from device information included in the object's online behavior data. For example, the mobile phone used by the object is determined from the mobile equipment identifier contained in the object's online behavior data.
Offline activity data: activity data of the object's associated devices that can be acquired by the collection devices. For example, information about the object's mobile phone collected by the collection devices of a mall, or driving-track data of the object's vehicle captured by the monitoring system of a transportation department. The collection modes may include surveillance cameras, biometric collection, and detection of nearby devices by collection devices containing a Wireless Fidelity (Wi-Fi) module, a Bluetooth module, or other wireless communication modules.
Object activity data: activity data of the object itself, collected when a collection device recognizes a physical medium (e.g., an identity card or passport) or biometric information associated with the object. For example, a face recognition system in a public place recognizes the object and obtains the object's activity data.
Object identity information: information indicating the identity of the object, such as the object's account number, name, identification number, student number, or passport number. It may also include information about the object as a natural person, such as gender, mobile phone number, birthday, photo, and occupation.
Biometric information: information describing a biometric feature of the object, which may include, but is not limited to: voiceprint, face image, fingerprint, iris, behavioral posture, and skeletal information.
Ultra-Wideband (UWB): a carrierless communication technology that transmits data using nanosecond- to microsecond-level non-sinusoidal narrow pulses. It is regarded as a revolutionary advance in the radio field and is expected to become a mainstream technology for short-range wireless communication.
Software Development Kit (SDK): typically a collection of development tools used by software engineers to build application software for a particular software package, software framework, hardware platform, or operating system.
The design concept of the embodiment of the present application is described below.
With the development of internet technology and intelligent terminal technology, intelligent terminals gradually become necessities of people's daily life, and people can realize entertainment, study, work, life service and the like through intelligent terminals. Through the online behavior data of people, data analysis can be carried out on one object or a group of objects, so that product optimization, targeted sales and the like can be carried out according to the analysis result.
In conventional technology, online behavior data of an object is usually collected through the object's intelligent terminals, such as a mobile phone, a smart band, or a notebook computer; identity information, location information, behavior information and the like of the object are determined from the collected online behavior data; and corresponding object information is then constructed according to service requirements.
For example, the service demand may be a mall object thermal distribution, a shopping recommendation, and the like. Different object information can be constructed for different service requirements.
However, object information obtained only from online behavior data is limited, fragmented, and discontinuous. Moreover, when a Global Positioning System (GPS) is used for terminal positioning, the positioning accuracy in indoor and similar scenes is low. As a result, the accuracy and integrity of object information constructed from online behavior data alone are poor.
Obviously, the conventional technology does not provide a technical solution capable of improving the accuracy and integrity of the object information, and therefore, a technical solution capable of improving the accuracy and integrity of the object information is urgently needed.
In the embodiments of the present application, object activity data and offline activity data of the object's associated devices are acquired through collection devices such as cameras and access-control devices; the object's offline data is determined from the offline activity data and the object activity data; and the offline data is combined with the object's online behavior data to obtain the object information. Specifically, information of each associated device of the object is obtained according to the object's online behavior data; offline activity data of each associated device is acquired through the collection devices according to that information; object activity data is acquired through collection devices around the object; and the object information is obtained according to the online behavior data and the offline data of the object.
To further illustrate the technical solutions provided by the embodiments of the present application, the following detailed description is made with reference to the accompanying drawings and the detailed description. Although the embodiments of the present application provide method steps as shown in the following embodiments or figures, more or fewer steps may be included in the method based on conventional or non-inventive efforts. In steps where no necessary causal relationship exists logically, the order of execution of the steps is not limited to that provided by the embodiments of the present application. The method can be executed in sequence or in parallel according to the method shown in the embodiment or the figure when the method is executed in an actual processing procedure or a device.
Fig. 1 is a schematic diagram of a system architecture for obtaining object information. The system comprises a server 100, a plurality of associated devices 101, and a plurality of collection devices 102.
Server 100: a device or group of devices used to obtain the object's online behavior data from the object's network operations and from object-related information stored in network databases. It is further configured to obtain information of each associated device of the object from the online behavior data, acquire offline activity data of each associated device 101 and object activity data through the collection devices 102, extract the object's offline data from the offline activity data and the object activity data, and construct the object information from the object's online behavior data and offline data.
Associated device 101: a hardware device used by the object, determined according to the object's online behavior data.
Collection device 102: used to collect offline activity data of the object's associated devices 101 and activity data of the object itself, and to send the collected offline activity data and object activity data to the server 100.
The collection device can acquire object activity data and associated-device activity data through a biometric collection module, a camera module, a Wi-Fi module, a Bluetooth module, other wireless communication modules, and the like.
Optionally, the associated device 101 may be obtained in the following ways:
Mode 1: obtaining the associated device 101 of the object according to the object identity information and associated-device information stored in network databases.
For example, according to a user's vehicle purchase record, the user's identity information and the identification information of the vehicle owned by the user are obtained, so that the vehicle associated with the user is identified.
Mode 2: the server 100 receives upload information containing hardware device information and object identity information sent by an application program, and determines the associated device 101 of the object.
For example, if the object logs in to a communication application on terminal A through a communication application account, terminal A is an associated device 101 of the object.
For another example, the mobile phone in which the object's SIM card is installed is determined as an associated device 101 of the object.
Further, a hardware device connected to the object's associated device 101 via Bluetooth or another wireless connection, such as a smart band or a Bluetooth headset, may also be determined as an associated device 101 of the object. The embodiments of the present application describe only some ways of determining the associated device 101; in practical applications, the associated device 101 may also be determined in other ways, which is not limited herein.
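The two resolution modes above (network databases and application uploads) might be combined as follows. The record layouts and function name are hypothetical, chosen only to illustrate the idea:

```python
def resolve_associated_devices(object_id, db_records, app_uploads):
    """Collect the IDs of devices associated with an object.

    db_records: rows like {"object_id": ..., "device_id": ...} drawn from
        network databases (mode 1, e.g. a vehicle purchase record).
    app_uploads: rows like {"account": ..., "device_id": ...} reported by
        applications the object has logged in to (mode 2).
    """
    devices = set()
    for row in db_records:                   # mode 1: database lookup
        if row["object_id"] == object_id:
            devices.add(row["device_id"])
    for row in app_uploads:                  # mode 2: application upload
        if row["account"] == object_id:
            devices.add(row["device_id"])
    return devices
```

Devices reachable from an already-associated device over Bluetooth (e.g. a smart band paired with the phone) could then be added to the same set transitively.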
In the embodiments of the application, object activity data of the object and offline activity data of the object's associated devices are acquired through the collection devices, and the object's offline data is integrated from the object activity data and the offline activity data, so that more complete object information is constructed by combining the offline data with the object's online behavior data, improving the accuracy of the object information.
It should be noted that, according to the offline data and online behavior data of each object, object information corresponding to each user may be generated, or object information corresponding to a group of users may be generated.
FIG. 2a is a diagram illustrating an example of an architecture for obtaining object information. FIG. 2b is a diagram illustrating an example of a detailed architecture for obtaining object information. The architecture for obtaining object information is described in further detail below with reference to FIGS. 2a and 2b.
In fig. 2a, the architecture comprises: a hardware device layer, an identification module layer, an object information layer and an application layer. In fig. 2b, the architecture includes: space, identity module, online behavior and offline behavior, and object information services.
In fig. 2a, the hardware device layer includes an acquisition device and an association device. The hardware device layer may include hardware devices: the system comprises a camera, a POS machine, an advertising machine, an Access control device, a wireless broadband WiFi Access Point (AP), an object mobile phone, an Internet of vehicles device and a card punching machine. In fig. 2b, the space comprises: home, elevator, door access, office, conference room, mall, store, front desk, public transport, etc.
As can be seen from fig. 2a and 2b, the hardware devices may include, but are not limited to: cameras installed for security, access control, monitoring, or driving safety in offices, streets, malls, hospitals, entertainment venues, and Internet-of-Vehicles scenarios; POS machines used in enterprise canteens, malls, hospitals, stores, and outdoor sales; advertising machines (i.e., advertising screens) in enterprise offices, stores, homes, bus stations, and elevators; and card punches in office, factory, and subway scenes. The smartphone or smart wearable device with communication capability may be a smart watch, a smart blood glucose meter, a weight scale, or the like.
In fig. 2a, the identification module layer comprises: a Wi-Fi module, a Bluetooth module, an image recognition module, a biometric recognition module, other wireless communication modules, and a software identification module. The software identification module includes an identification Software Development Kit (SDK) or an embedded Operating System (OS).
It should be noted that each module in the identification module layer packages the acquired information, or performs partial edge preprocessing, using a general SDK or OS to obtain offline activity data, and sends the offline activity data to the server. The image recognition module may be a camera or a light sensor, the biometric recognition module may be a module for iris or fingerprint recognition, and the other wireless communication modules may be UWB, Near Field Communication (NFC) devices, and Radio Frequency Identification (RFID) devices.
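The "partial edge preprocessing" mentioned above could, for example, mean dropping weak detections and deduplicating readings per nearby MAC address before upload. The threshold and field names below are illustrative assumptions, not values from this application:

```python
def edge_preprocess(raw_readings, min_rssi=-90):
    """Keep the strongest reading per nearby MAC and drop very weak ones
    before sending offline activity data to the server (sketch only).

    raw_readings: dicts like {"mac": ..., "rssi": ...} from a probe scan.
    min_rssi: cutoff (dBm) below which a device is not counted as nearby.
    """
    best = {}
    for r in raw_readings:
        if r["rssi"] < min_rssi:
            continue  # too weak to count as a nearby device
        mac = r["mac"]
        # Keep only the strongest observation for each MAC address.
        if mac not in best or r["rssi"] > best[mac]["rssi"]:
            best[mac] = r
    return sorted(best.values(), key=lambda r: r["mac"])
```

Filtering at the edge reduces what each identification module must upload, leaving the server to merge the remaining records into offline data.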
Specifically, in fig. 2b, the identification module performs identification through probe detection or feature recognition, and includes but is not limited to: a camera, a POS machine, a Wi-Fi module, a Bluetooth module, a near field communication module, a UWB module, a fingerprint recognition module, a voice recognition module, and the like.
The camera can be used to obtain the following information: device Universally Unique Identifier (UUID), face features, iris features, Optical Character Recognition (OCR) results, nearby Wi-Fi physical addresses (MAC), and behavioral posture determination.
The POS may be used to obtain the following information: device UUID, consumption details, consumption amount, order number, and Geographic Information System (GIS) location.
The Wi-Fi module may be used to obtain the following information: device UUID, nearby Wi-Fi MACs, and nearby device signal strength. The signal strength of nearby devices acquired by the Wi-Fi module is generally expressed in decibel-milliwatts (dBm).
Note that the nearby Wi-Fi MACs include Wi-Fi MACs that have been connected and Wi-Fi MACs that have been detected but not connected.
The Bluetooth module may be used to obtain the following information: device UUID, nearby Wi-Fi MACs, and nearby device signal strength. The signal strength of nearby devices obtained by the Bluetooth module is usually represented by a Received Signal Strength Indication (RSSI) value.
The near field communication module is used to acquire the following information: device UUID, nearby NFC devices, nearby RFIDs, and nearby device signal strength. The signal strength of nearby devices acquired by the near field communication module is typically expressed in dBm.
The UWB module is used to obtain the following information: device UUID, nearby UWB devices, and nearby device signal strength, typically expressed in dBm.
Fingerprint recognition module: device UUID and fingerprint features.
Voice recognition module: device UUID and voiceprint features.
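The dBm/RSSI signal strengths listed above are commonly converted into rough range estimates with a log-distance path-loss model. The model and the default constants below are standard illustrative choices, not parameters specified in this application:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Estimate distance (metres) from an RSSI reading using the
    log-distance path-loss model: rssi = tx_power - 10 * n * log10(d).

    tx_power_dbm: expected RSSI at 1 m (device-dependent assumption).
    path_loss_exp: environment-dependent exponent (2.0 = free space;
        indoor spaces are typically higher).
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))
```

Such estimates are coarse (multipath and obstructions distort RSSI), which is one reason the scheme fuses several detection modes rather than relying on a single signal.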
In fig. 2a, the object information layer includes: identity detection, biometric detection, geographic information detection, object identity determination, object behavior determination, and object location determination.
Identity detection: used to detect the identity information of an object. Detection modes include: Wi-Fi probes, Bluetooth active probes, Bluetooth passive probes, internet service identities, UWB pulse ultra-wideband, RFID/NFC, and the like.
Biological characteristic detection: for detecting biometric information, comprising: face recognition, iris recognition, fingerprint recognition, voiceprint recognition and the like.
Detecting geographic information: the method is used for acquiring positioning information, and the geographic information detection mode comprises the following steps: geographic location identification, terrain height identification, and the like.
Object identity determination is used to determine the identity of an object according to the detected identity information and the feature identification information (such as biometric information and asset identification information) stored in a database.
The asset identification information is coded information indicating an asset possessed by the object. The biometric information and the asset identification information are both feature identification information. Object behavior determination is used to classify the online and offline behaviors of the object according to the object's preferences.
Specific object behaviors include, but are not limited to: offline payment, e-commerce payment, navigation data, parking payment, bus data, and office data.
Object position determination is used to determine the position information of the object according to the Point of Interest (POI) corresponding to the detected hardware or software position.
In fig. 2b, the offline part includes: identity detection and biometric detection.
Specifically, the physical media for identity detection include: identification cards, bus cards, access cards, passports, license plate numbers, various tickets (e.g., high-speed railway tickets or movie tickets), RFIDs (e.g., work cards, house cards), and the like.
The acquisition devices or associated devices for identity detection include: mobile phones, smart watches, UWB vehicles, shared bicycles, computers, game machines, Bluetooth devices (e.g., headphones), televisions, Electronic Toll Collection (ETC) vehicles, NFC TAGs (TAG), and the like.
The biometric features of the biometric detection include: fingerprints, voice prints, human faces, irises, and the like.
In fig. 2b, the online behavior includes the operation of the object on each application.
Specifically, a payment Application (APP) is used to obtain the object UUID, the Mobile Equipment Identifier (MEID) of the mobile phone, the Bluetooth MAC, the Wi-Fi MAC, the GIS location, and the like.
A social APP is used to obtain the object UUID, MEID, Bluetooth MAC, Wi-Fi MAC, GIS location, UUIDs of the friend circle, work and rest times, dialogue summaries, intimacy, and the like.
A citizen service applet is used to obtain the object UUID, mobile phone number, identification card number, photo, income status, license plate number, gender, address, GIS location, and the like.
A travel applet is used to obtain the object UUID, bus type, departure place, destination, and GIS location.
A navigation APP is used to obtain the object UUID, departure place, destination, driving track, and GIS location.
An e-commerce APP is used to obtain the object UUID, the MEID of the mobile phone, shopping details, and GIS location.
An asset system is used to obtain the object UUID, asset user, asset number, pickup date, and pickup location.
In fig. 2b, the online and offline behavior data include: object identity information, object location information, and object behavior information.
Specifically, the object identity information includes: APP1 UUID, APP2 UUID, MEID, Bluetooth MAC, Wi-Fi MAC, RFID Electronic Product Code (EPC), NFC EPC, UWB EPC, fingerprint, iris, voiceprint, face, name, birthday, gender, occupation, constellation, address, income status, financial status, credit status, personal portrait, and the like.
The object behavior information includes: timestamp, online/offline, behavior category, behavior description, and GIS location.
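As an illustrative sketch only (all field and type names here are hypothetical, not part of this application), an object behavior record with the fields listed above could be represented as:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class BehaviorRecord:
    """One object behavior record, mirroring the fields listed above."""
    timestamp: float                    # acquisition time of the behavior
    is_online: bool                     # whether the behavior is online or offline
    category: str                       # behavior category, e.g. "payment"
    description: str                    # free-text behavior description
    gis_location: Tuple[float, float]   # GIS location as (longitude, latitude)

# Sample offline payment behavior record.
record = BehaviorRecord(1575340000.0, False, "payment", "POS purchase", (116.40, 39.90))
```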
In fig. 2a, the object information layer obtains the object information according to the offline behavior data and the online behavior data; the object information may include: an online consumer representation, an offline consumer representation, a life representation, a social representation, a travel representation, an office representation, object tags, and group classifications.
It should be noted that the object information may be constructed for one user or a group of users, that is, corresponding object information of one user or a group of users may be obtained.
In FIG. 2b, the image information service includes: industry portrayal services, online/offline portrayal services, trajectory services, and group classification.
In fig. 2a, the application layer applies the object information, including: enterprise B-end applications, enterprise C-end applications, smart city/smart municipal G-end applications, and the like.
Referring to fig. 3, a flowchart of an implementation of a method for obtaining object information according to the present application is shown.
The method comprises the following specific processes:
step 300: the server acquires the information of each associated device associated with the object according to the online behavior data of the object.
Specifically, the server obtains the online behavior data of the object from the data generated by the object's network operations and from the object-related information stored in each network database, and extracts from the online behavior data the information of each associated device associated with the object.
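A minimal sketch of this extraction step, assuming the online behavior data is a list of records with hypothetical key names:

```python
def associated_devices_from_online(online_records):
    """Collect associated-device identifiers (MEID, Bluetooth MAC, Wi-Fi MAC)
    that appear in the object's online behavior data."""
    devices = set()
    for record in online_records:
        for key in ("meid", "bluetooth_mac", "wifi_mac"):
            if key in record:
                devices.add(record[key])
    return devices

# Hypothetical online behavior data for one object.
online_data = [
    {"app": "payment", "meid": "MEID1", "wifi_mac": "FF:88:66:3A"},
    {"app": "social", "bluetooth_mac": "AA:BB:CC:DD:EE:01"},
]
devices = associated_devices_from_online(online_data)
# devices now holds all three identifiers
```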
In this way, the associated device of the object can be acquired, so that in the subsequent step, the offline activity data of the object is acquired according to the associated device of the object.
Step 301: the server obtains, through the acquisition devices, the offline activity data of each associated device according to the information of each associated device associated with the object.
Specifically, each acquisition device acquires the offline activity data of its peripheral hardware devices and sends the acquired offline activity data to the server. According to the information of each associated device of the object, the server extracts the offline activity data of each associated device associated with the object from the offline activity data uploaded by the acquisition devices distributed in the activity areas of those associated devices.
In one embodiment, when it is determined that the received offline activity data of the acquisition device includes associated device information associated with the object, the server determines the offline activity data as offline activity data of an associated device of the object.
For example, the information of the associated devices of object B is: mobile phone MEID1, smart band MEID2, and computer MEID3. A Point of Sale (POS) machine sends payment transaction data containing MEID1 to the server. The server determines that the payment transaction data contains the associated device information MEID1 of object B, and therefore determines the payment transaction data to be offline activity data of an associated device of object B.
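The matching described in this embodiment can be sketched as follows (the sample identifiers and field names are hypothetical):

```python
# Associated-device identifiers registered for each object (hypothetical sample data).
ASSOCIATED_DEVICES = {
    "object_B": {"MEID1", "MEID2", "MEID3"},  # mobile phone, smart band, computer
}

def match_offline_record(record):
    """Return the object one of whose associated devices appears in the offline
    activity record, or None if the record matches no registered object."""
    for obj, devices in ASSOCIATED_DEVICES.items():
        if record.get("device_id") in devices:
            return obj
    return None

# A POS payment record containing MEID1 is attributed to object_B.
payment = {"device_id": "MEID1", "type": "payment", "amount": 35.0}
owner = match_offline_record(payment)
```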
Further, the server can also obtain object activity data from the acquisition devices distributed in the activity area of the object, and can then extract the offline behavior data of the object from the object activity data.
The object activity data includes feature identification information for identifying the identity of the object and acquisition device information; the feature identification information includes, but is not limited to, biometric information and asset identification information.
When the feature identification information is biometric information, the server obtains object activity data containing the biometric information of the object from the acquisition devices distributed in the activity area of the object; when the feature identification information is asset identification information, the server obtains object activity data containing the asset identification information of the object from those acquisition devices.
In the embodiment of the present application, the feature identification information is described as biometric information or asset identification information only by way of example; in practical applications, the feature identification information may also be other types of information for identifying the identity of an object, which is not limited herein.
An application scenario in which the acquisition device acquires offline activity data of the device associated with the object is illustrated below.
For example, a collection device containing a Wi-Fi module, such as an AP, a POS machine, an advertisement machine, etc., performs Wi-Fi probing on a peripheral device in a Wi-Fi Probe (Probe) mode or a hybrid mode, so that offline activity data containing a MAC address list of the peripheral device can be obtained.
For another example, the acquisition device including the bluetooth module performs bluetooth detection on peripheral devices, so as to obtain offline activity data including a MAC address list of the peripheral devices.
An application scenario in which the acquisition device acquires the activity data of the object is illustrated below.
For example, an acquisition device including a camera shoots its surroundings to obtain a surveillance video; face recognition is performed on the surveillance video by the acquisition device or the server to obtain offline activity data containing the identity feature information of the object, and license plate numbers, certificate numbers, and the like may also be recognized from the surveillance video to obtain such offline activity data.
For another example, the acquisition device including an iris recognition module may perform biometric recognition to obtain offline activity data of the object containing identity feature information.
For another example, the acquisition device including an RFID module, an NFC module, or a UWB module may identify bus cards, peripheral devices, and the like, and obtain offline activity data containing asset identification information.
It should be noted that a device associated with one object may also serve as an acquisition device for another object. An application scenario in which an associated device of the object serves as an acquisition device to acquire offline activity data or object activity data is illustrated below.
For example, relying on the fact that a process of an application (e.g., a social application) on the object's mobile phone can still work in the background, Wi-Fi detection is performed through the hotspot of the mobile phone, and offline activity data containing the detected device identifiers (e.g., a MAC address list) is sent to the server by the process.
For another example, an application program of the object's mobile phone performs Bluetooth detection on peripheral devices, obtains the device information of the detected peripheral devices (such as a Bluetooth beacon placed on a wall in a mall), and sends offline activity data containing the device information to the server through the process.
In the embodiment of the application, the acquisition device may collect offline activity data of the devices associated with the object; for example, the POS machine collects offline activity data of peripheral devices associated with the object. The associated devices of the object may also collect in reverse, that is, act as acquisition devices that collect data of the associated devices of other objects and obtain the corresponding offline activity data. For example, the object's mobile phone serves as a probe that searches for nearby devices through its Wi-Fi hotspot or Bluetooth; the mobile phone can then scan the MAC addresses of other people's devices or scan Bluetooth beacons posted offline, so that the positions, paths, behaviors, and other information of the object and of other objects can be obtained.
In this way, the offline activity data of each associated device of the object, as well as the object activity data, can be obtained.
Step 302: the server extracts the offline behavior data of the object according to the offline activity data of each associated device.
Further, the server can also extract offline behavior data from the object activity data, as follows:
the server obtains the object identity information according to the feature identification information contained in the object activity data, determines the object behavior information according to the acquisition device information contained in the object activity data, and thereby obtains offline behavior data containing the object identity information and the object behavior information.
In one embodiment, the server sets object behavior information corresponding to the acquisition device information in advance, so that the object behavior information can be determined according to the acquisition device information included in the object activity data.
For example, if the acquisition device is determined to be a POS machine according to the device information, it is determined that the object made a purchase.
For another example, if the acquisition device is determined to be a cash dispenser according to the device information, it is determined that the object withdrew cash.
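A minimal sketch of the preset correspondence between acquisition device information and object behavior described above (the device-type strings are hypothetical):

```python
# Preset mapping from acquisition-device type to the object behavior it implies.
DEVICE_BEHAVIOR = {
    "POS": "consumption",
    "cash_dispenser": "cash withdrawal",
    "access_control": "entry/exit",
}

def infer_behavior(device_type):
    """Determine the object behavior from the acquisition device information."""
    return DEVICE_BEHAVIOR.get(device_type, "unknown")

behavior = infer_behavior("POS")  # consumption behavior
```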
It should be noted that information such as the position of the object may also be determined from the information contained in the object activity data, which is not limited herein.
When the offline activity data contains object identity information or associated device information, the object identity is determined; when the offline activity data contains acquisition device information or service data, the object behavior is determined; and when the offline activity data contains location information, the object position is determined.
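The three determinations above can be sketched as a single extraction routine (field names are hypothetical):

```python
def extract_offline_behavior(activity):
    """Pull object identity, behavior, and position out of one offline activity
    record, depending on which fields the record contains."""
    extracted = {}
    if "object_identity" in activity or "associated_device" in activity:
        extracted["identity"] = activity.get("object_identity",
                                             activity.get("associated_device"))
    if "device_info" in activity or "service_data" in activity:
        extracted["behavior"] = activity.get("service_data",
                                             activity.get("device_info"))
    if "location" in activity:
        extracted["position"] = activity["location"]
    return extracted

# A POS detection carrying a device identifier and a GIS location.
sample = {"associated_device": "MEID1", "device_info": "POS", "location": (116.4, 39.9)}
result = extract_offline_behavior(sample)
```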
Further, the server can extract the offline behavior data of the object from the object activity data in combination with the acquired online behavior data.
In one embodiment, the server determines the object identity information, the object location information, and the object behavior information according to the acquired offline activity data and object activity data.
The following description will take the determination of the identity information of the object as an example.
For example, based on the personal information base (online behavior data), the server recognizes the identification card photo, the large face photo in a resume, and the photo taken during municipal services (object activity data), and thereby identifies the object.
For another example, the server obtains the real-name authentication information (offline activity data) of the object in an internet service or APP, and determines the identity of the object according to the names contained in the personal information base.
For another example, the server obtains the asset identification information of the object in a municipal service or life service, and determines the identity of the object according to the association between the asset identification information and the object identity information contained in the personal information base.
For another example, the server obtains the MAC address of the object's smart wearable device or mobile phone (an associated device of the object), and determines the identity of the object according to the association between the object identity information and the associated device information contained in the online behavior data.
In one embodiment, the behavior category of the object can be determined from the device type of the associated device, and the spatial position of the object can be inferred from the location of the associated device.
The following description will take the determination of the object behavior information as an example.
For example, the POS machine of each merchant detects peripheral devices and obtains the device information and position of the object's mobile phone (offline activity data); the server determines the walking path of the object according to the timestamps (offline behavior data), and, according to the received payment service of the payment APP and the detected device information of the object (offline activity data), associates the object's mobile phone with the object identity information and with the consumption details in the shopping mall (object behavior information).
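The walking-path step can be sketched as ordering the POS detections by timestamp (the data shapes are hypothetical):

```python
def walking_path(detections):
    """Recover the object's walking path by ordering POS detections of the
    object's phone by timestamp; each detection is (timestamp, merchant_location)."""
    return [location for _, location in sorted(detections)]

# Detections arriving out of order from three merchants' POS machines.
path = walking_path([(1002, "shop B"), (1000, "entrance"), (1001, "shop A")])
```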
The following description will take the determination of the object position information as an example.
For example, when a new device (an associated device or acquisition device) connects, the server acquires the geographic location information of the device (offline activity data). According to the POI corresponding to the device's geographic location information in the GIS system, the business scene where the object is located is determined, and the behavior of the object can then be judged or the track of the object generated.
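The POI lookup can be sketched as a nearest-neighbor search over a GIS POI table (the table contents and coordinates are hypothetical):

```python
import math

# Hypothetical GIS POI table: (longitude, latitude) -> business scene.
POI_TABLE = {
    (116.40, 39.90): "shopping mall",
    (116.41, 39.95): "office building",
}

def business_scene(lon, lat):
    """Determine the business scene of a device from the POI closest to its GIS position."""
    return min(POI_TABLE.items(),
               key=lambda item: math.dist(item[0], (lon, lat)))[1]

scene = business_scene(116.401, 39.901)  # closest POI is the shopping mall
```

A production system would use proper geodesic distance and a spatial index; Euclidean distance over raw coordinates is enough to illustrate the lookup.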
Therefore, the offline behavior data of the object can be obtained through the acquisition devices to supplement the online behavior data.
Step 303: the server obtains the object information according to the online behavior data and the offline behavior data of the object.
Specifically, the server screens out data meeting a preset attribute condition from the online behavior data and the offline behavior data of the object, and generates the object information according to the screened data.
In practical applications, the preset attribute condition can be set according to the actual portrait service requirements. The object information may be used for the following services: industry portrayal services, online/offline portrayal services, trajectory services, and group classification. The service requirements may be, for example, determining the trajectory change of a user or classifying users. The object information may include: an online consumer representation, an offline consumer representation, a life representation, a social representation, a travel representation, an office representation, object tags, and group classifications.
In one embodiment, different types of data are screened from the online behavior data and offline behavior data of the object to generate different object information for different service requirements.
For example, the preset attribute condition is to acquire the shopping list data of the user so as to construct a consumption representation of the user.
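The screening step can be sketched as filtering behavior records by a preset attribute condition (the record shape is hypothetical):

```python
def screen_records(records, condition):
    """Keep only the online/offline behavior records meeting a preset attribute condition."""
    return [record for record in records if condition(record)]

records = [
    {"category": "shopping", "detail": "coffee beans"},
    {"category": "navigation", "detail": "route to mall"},
    {"category": "shopping", "detail": "headphones"},
]
# Preset attribute condition: keep shopping records to build a consumption representation.
shopping = screen_records(records, lambda r: r["category"] == "shopping")
```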
A specific online and offline scenario is exemplified below.
In the online scenario, the object registers an APP through the mobile phone; the APP acquires the object UUID and the Wi-Fi MAC address of the mobile phone, the name and identification card number of the object are acquired from the registration information, and the license plate number, photo, bus card, and other information of the object are acquired through the object's operation of the municipal service applet and parking service applet in the APP. Thus, the server determines the online behavior data of the object according to the acquired UUID, Wi-Fi MAC address, name, identification card number, license plate number, photo, bus card, and other information, and determines each identity of the object (object identity information) and the associated devices.
In the offline scenario, assume that each store in a shopping mall is provided with a POS machine, and the POS machine can detect peripheral devices through its Wi-Fi or Bluetooth module to obtain their MAC addresses. The POS machine of each merchant detects peripheral devices and obtains the device information and position of the object's mobile phone (offline activity data). Combined with the above online behavior data, the object identity can be determined; the walking path of the object (offline behavior data) can be determined according to the timestamps; and, according to the received payment service of the object's payment APP and the detected device information (offline activity data), the object's mobile phone can be associated with the object identity information and with where and what the object consumed in the mall (object behavior information).
Further explanation follows regarding the acquisition of object information based on application scenarios employing several different recognition modules.
For example, referring to fig. 4a, a diagram of an exemplary wireless-based object information obtaining architecture is shown. The architecture includes: an associated device, an acquisition device, and a server. The associated device of the object is a mobile phone whose MAC address is FF:88:66:3X. The acquisition device is a POS machine including a Wi-Fi module, a Bluetooth module, and a consumption module.
The acquisition device detects the MAC address FF:88:66:3X of the object's mobile phone through the Wi-Fi module. The object pays the POS machine through the payment application of the mobile phone. The POS machine acquires the object UUID through the consumption module and sends the acquired MAC address, object UUID, POS unique identification number, and GIS location to the server. The server determines the device information according to the MAC address and the POS unique identification number, determines the object behavior according to the POS machine and the payment service data, determines the object position according to the POI corresponding to the GIS location, and determines the object identity information according to the object's UUID and MAC address. The object information is then obtained from the obtained object identity information, device information, object behavior, and object position.
For another example, referring to fig. 4b, a diagram of an exemplary Bluetooth-based object information obtaining architecture is shown. The architecture includes: an associated device, an acquisition device, and a server. The associated device of the object is a mobile phone whose MAC address is FF:88:66:3X. The acquisition device is a POS machine including a Wi-Fi module, a Bluetooth module, and a consumption module.
The associated device of the object, i.e., the mobile phone, detects its surroundings to obtain the Bluetooth beacon of the POS machine, and sends the mobile phone's MAC address and the Bluetooth beacon to the server. The server determines the device information, i.e., the MEID of the object's mobile phone, and the object identity information according to the MAC address, and determines the object position according to the mobile phone's MAC address and the Bluetooth beacon. Finally, the server generates the object information according to the acquired object identity information and object position.
For another example, referring to fig. 4c, a diagram of an exemplary camera-based object information obtaining architecture is shown. The architecture includes a server and a camera.
The object passes through an access control gate equipped with a camera. The camera records video and sends the obtained face image and the camera's unique identification number to the server. The server determines the object identity information according to the face image, obtains the object position according to the location determined from the camera's unique identification number, and determines that the object behavior is travel because the device type obtained from the camera's unique identification number is an access control gate. Finally, the server obtains the object information according to the acquired object identity information, object position, and object behavior.
The following uses a specific application scenario to further illustrate in detail the process of generating object information according to the online and offline behavior data of the object.
S401: the object connects the mobile phone to the WiFi AP at home, arranges a meeting time and place with a friend through the mobile phone's social application, and then goes out with the mobile phone.
Specifically, the server can determine the WiFi AP position and the mobile phone MAC address, acquire the disconnection time of the mobile phone through the WiFi AP, and acquire the object's social account (object identity information) and the friend's social account through the social application.
S402: the object passes through the apartment access control device, arrives at the parking lot, and drives away from home.
Specifically, the server can acquire the object's access card number (object identity information) through the access control device, and can acquire the object's license plate number, face image, and parking card identification information through the camera.
S403: the object searches for a certain mall through the navigation APP.
Specifically, the server can acquire the object's navigation APP account (object identity information) and travel track through the navigation APP, and capture the license plate number during the journey through the Internet-of-Vehicles camera.
S404: the object arrives at a parking lot of a mall.
Specifically, the server may determine the arrival location, arrival time, and arrival number of the object through the camera.
S405: the object takes an elevator to a certain floor.
Specifically, the server determines the number of people in the elevator and the floors the object entered and exited through the advertising machine in the elevator.
S406: the object enters a coffee shop and pays via the mobile phone.
Specifically, the server detects the object's mobile phone through each merchant's POS machine to determine the travel track of the object, and determines the object's preferences, consumption details, amounts, and identity information according to the transaction service data between the object's mobile phone and the POS machine.
S407: a friend of the object also comes to the coffee shop to drink tea and chat with the object.
Specifically, the server acquires, through the POS machine, the object identity information, preferences, consumption details, and amounts, and acquires the interpersonal relationships of the object.
Thus, the server determines the natural-person identity and digital identity of the object, the associated device information, the object behavior information, and the object location information according to the acquired object activity data, offline activity data, and online behavior data of the object, and then generates the object information.
Further, the object information can be refined through other application scenarios of the object.
The above embodiments are further exemplified below using several application scenarios of the object information.
The first application scenario of the object information is an office scenario: the server acquires the online behavior data of the object through the WiFi AP device, acquires the object activity data through the camera, work card, and access control device, and obtains the object information from the acquired online behavior data and object activity data. The object information is determined according to structured data of the object such as employee information, timestamps, places, and behaviors, and is used to describe the employee's office time investment, movement paths, and the like.
Fig. 5a is a diagram illustrating an example of office object information. Fig. 5a displays the mobile phone number and work engagement of the object, including the time-shared investment in workstation work, meetings, and other work, the time-shared office efficiency obtained through social software and through email, and the staying time of the object in each place.
It should be noted that fig. 5a is only used to show that the object information can describe the employee's identity and office investment level; if the lines and characters in fig. 5a are unclear, the clarity of the description is not affected.
The second application scenario of the object information is an e-commerce scenario: the server extracts the offline activity data of the object (such as stay times and consumption records) acquired by the acquisition devices of each merchant, extracts the offline behavior data of the object (such as consumption preferences), and determines, according to the offline behavior data and the online behavior data of the object, object information supplied to the e-commerce APP. Because the object information established for different objects differs, the e-commerce APP can provide targeted e-commerce services for different objects according to the corresponding object information.
Fig. 5b is a diagram of exemplary gender-based recommendation pages. Assuming that corresponding object information is determined for a female group and a male group respectively, the e-commerce APP recommends a corresponding page according to the obtained object information: it pushes diagram (a) in fig. 5b for the female group and diagram (b) in fig. 5b for the male group.
Fig. 5c is a diagram illustrating exemplary promotion pages. Assuming that corresponding object information is determined for new objects and old objects respectively, the e-commerce APP recommends corresponding promotion pages according to the obtained object information: it pushes diagram (a) in fig. 5c for new objects and diagram (b) in fig. 5c for old objects.
The third application scenario of the object information is an offline retail scenario: the server acquires the object information of each object in a designated area. According to the information of each object, a merchant can make decisions on site selection, product selection, operation mode, the composition of resident merchants, and the like.
Fig. 5d is a diagram illustrating an example of an object region; in fig. 5d, the number of objects within different ranges of a specified location can be determined. Fig. 5e is a diagram illustrating an example of merchant statistics: according to the offline behavior data and online behavior data of each object in each residential area, the server obtains the object information corresponding to the merchant, namely the distance, the number of potential customers, the number of Taobao members, and the penetration rate of each residential area near the merchant.
It should be noted that fig. 5e is only used to show the distance, the number of potential customers, the number of Taobao members, and the penetration rate of each residential area near the merchant; if the text or lines in fig. 5e are unclear, the clarity of the description is not affected.
Referring to fig. 5f, an exemplary diagram of an object track: fig. 5f shows a shopping mall containing a main venue, shop 1, shop 2, and a check-in desk, with the object located in shop 1. From the object information generated from the offline behavior data and online behavior data of the object, the position and behavior trajectory of the object within the mall can be determined.
Fig. 5g is a heat map of object distribution in a shopping mall. According to the object information of each object in the mall, a heat map of the object distribution in the mall can be obtained.
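A rough sketch of how such a distribution heat map could be derived: bin the object positions into a grid and count the objects per cell. The grid size and the position format are assumptions for illustration.

```python
# Illustrative sketch: bin (x, y) object positions into grid cells to
# produce the counts underlying a heat map like Fig. 5g.
from collections import Counter

def heat_grid(positions, cell_size=5.0):
    """positions: list of (x, y) in metres; returns {(gx, gy): count}."""
    return Counter((int(x // cell_size), int(y // cell_size))
                   for x, y in positions)

positions = [(1.0, 2.0), (3.0, 4.0), (12.0, 2.0)]
print(heat_grid(positions))
```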
The fourth application scenario of the object information is a public security scenario. Referring to fig. 5h, an exemplary diagram of a public security scenario: object information is established for each person on a blacklist, such as criminals; the behavior trajectory of an object is determined according to the object information of each blacklisted person; and when suspicious behavior is detected, an alarm is issued, all permissions of the person are frozen, and a security interception is triggered.
The fifth application scenario of the object information is a city management scenario. When a public security incident, dispute, traffic accident, or the like occurs, the object information generated from data collected by cameras, Wi-Fi, Bluetooth, and the like is obtained; witnesses at the scene and their behavior trajectories are determined according to the object information and timestamps, so that the facts can be reconstructed and the relevant people located and questioned without laboriously reviewing surveillance footage.
The sixth application scenario of the object information is a family and social scenario. A home address can be determined through a Wi-Fi AP; the family members of an object can be determined according to the associated device information of all objects connected to the Wi-Fi AP and Bluetooth; object identity information, e-commerce transaction records, purchase frequency, social circle, and the like can be obtained according to the associated device information of each object; and data such as the daily times at which each family member connects to and disconnects from the Wi-Fi AP can be obtained. Object information of the family is then obtained from the collected data of each object.
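The household-inference idea above can be sketched as follows: objects whose devices connect to the same home Wi-Fi AP on enough distinct days are grouped as one family, while one-off guests are excluded. The record format and the day-count threshold are assumptions, not defined by this embodiment.

```python
# Illustrative sketch: group objects into households by recurring
# connections to the same Wi-Fi AP.
from collections import defaultdict

def infer_households(connections, min_days=5):
    """connections: list of (ap_id, object_id, day) records."""
    days_seen = defaultdict(set)
    for ap_id, object_id, day in connections:
        days_seen[(ap_id, object_id)].add(day)
    households = defaultdict(set)
    for (ap_id, object_id), days in days_seen.items():
        if len(days) >= min_days:   # recurring presence, not a one-off guest
            households[ap_id].add(object_id)
    return dict(households)

conns = ([("ap1", "alice", d) for d in range(6)]
         + [("ap1", "bob", d) for d in range(5)]
         + [("ap1", "guest", 3)])
print(infer_households(conns))
# {'ap1': {'alice', 'bob'}}
```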
The seventh application scenario of the object information is a financial credit scenario: object information covering the online and offline credit of an object is obtained based on the object's online behavior data on internet finance and payment tools, together with the online behavior data and offline behavior data obtained through its associated devices. Through this object information, it can be decided whether to provide the object with a financial loan or VIP service, and financial products can be promoted to the target customer group.
The eighth application scenario of the object information is an advertising service scenario: the advertising service provider divides the public into groups such as high-end men, high-end women, young middle-class households, and paycheck-to-paycheck spenders according to the object information, and delivers different materials on advertising screens for different types of objects. For example, if a young middle-class consumer enters an elevator, corresponding advertisement content can be accurately delivered to that person within the 10-15 seconds of the ride.
Therefore, for general consumer applications such as e-commerce and food-delivery or review apps, developers do not need to handle the customized requirements of objects themselves; they only need to obtain the individual or group preference labels of the objects from the object information service and provide the corresponding services. For enterprise applications, the object information of staff in an office scenario can be mastered, for example, how the 8 working hours of a day are distributed among meetings, meals, workstations, and the like. For government applications, the trajectories of objects on the blacklist and whitelist can be mastered according to the object information, and flagged behavior can be stopped in time. When a public security event occurs, comparison can easily be carried out according to the object information without laboriously reviewing surveillance footage.
In the embodiment of the application, multiple means are provided to obtain the online behavior data and offline behavior data of an object, including but not limited to object identity information, biometric information, asset identification information, preference labels, time-behavior information, object trajectories, and the like. The online behavior data of the object is important data for obtaining the object information, and the offline behavior data of the object plays a very important role in enriching the object information. The two are integrated into structured data of the object, from which the object information is obtained and a portrait service is provided. Therefore, the online behavior data and offline behavior data of the object are combined, the accuracy and integrity of the object information are improved, and the problem of single-point output in the traditional technology is solved.
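The integration step above can be sketched as follows: online behavior data and offline behavior data keyed to the same object identity are merged into one structured record, from which the object information (portrait) service reads labels. The record layout and field names below are assumptions for illustration.

```python
# Illustrative sketch: merge online and offline behavior data for one
# object into a single structured object-information record.

def build_object_info(object_id, online_data, offline_data):
    return {
        "object_id": object_id,
        "identity": online_data.get("identity", {}),
        "preference_tags": online_data.get("tags", []),
        # Offline trajectory and time-behavior information enrich the profile.
        "trajectory": offline_data.get("trajectory", []),
        "time_behavior": offline_data.get("time_behavior", {}),
    }

info = build_object_info(
    "obj1",
    {"identity": {"name": "Alice"}, "tags": ["sports", "tea"]},
    {"trajectory": ["check-in desk", "shop 1"], "time_behavior": {"09:00": "arrive"}},
)
print(info["preference_tags"])
```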
Based on the same inventive concept, an embodiment of the present application further provides an apparatus for obtaining object information. Because the principle by which the apparatus solves the problem is similar to that of the method for obtaining object information, the implementation of the apparatus can refer to the implementation of the method, and repeated details are not described again.
Fig. 6 is a schematic structural diagram of an apparatus for obtaining object information according to an embodiment of the present application. An apparatus for obtaining object information includes:
a first obtaining unit 601, configured to obtain, according to online behavior data of an object, associated device information associated with the object;
the acquisition unit 602 is configured to acquire offline activity data of each associated device through the acquisition device according to information of each associated device associated with the object;
an extracting unit 603, configured to extract offline behavior data of the object according to the offline activity data of each associated device;
a second obtaining unit 604, configured to obtain the object information according to the online behavior data and offline behavior data of the object.
Preferably, the second obtaining unit is further configured to:
acquiring object activity data from acquisition equipment distributed in an activity area of an object, wherein the object activity data comprises characteristic identification information and acquisition equipment information for identifying the identity of the object;
obtaining object identity information according to feature identification information contained in the object activity data;
determining object behavior information according to acquisition equipment information contained in the object activity data;
and obtaining offline behavior data containing the object identity information and the object behavior information.
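The preferred flow above can be sketched as follows: feature identification information in an activity record is mapped to object identity information, collection device information is mapped to object behavior information, and the two are combined into offline behavior data. The record layout and the index tables are assumptions for illustration.

```python
# Illustrative sketch: turn raw collection-device activity records into
# offline behavior data by resolving identity and behavior per record.

def to_offline_behavior(activity_records, identity_index, device_index):
    """activity_records: [{'feature_id': ..., 'device_id': ..., 'ts': ...}]
    identity_index: feature_id -> object identity information
    device_index:   device_id  -> (location, behavior type)."""
    out = []
    for rec in activity_records:
        identity = identity_index.get(rec["feature_id"])
        device = device_index.get(rec["device_id"])
        if identity is None or device is None:
            continue  # sighting cannot be attributed; skip it
        location, behavior = device
        out.append({
            "identity": identity,
            "behavior": behavior,
            "location": location,
            "timestamp": rec["ts"],
        })
    return out

identity_index = {"f1": "Alice"}
device_index = {"cam1": ("shop 1", "visit")}
records = [
    {"feature_id": "f1", "device_id": "cam1", "ts": 100},
    {"feature_id": "f9", "device_id": "cam1", "ts": 101},  # unknown feature
]
print(to_offline_behavior(records, identity_index, device_index))
```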
Preferably, the on-line behavior data includes: object-related data obtained from various operations of the object on the network, and object-related data obtained from object-related information stored in each network database.
Preferably, the second obtaining unit is configured to:
screening out data meeting preset attribute conditions from the online behavior data and the offline behavior data of the object;
and generating object information according to the screened data.
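The screening step above can be sketched as follows: only records meeting a preset attribute condition are kept before object information is generated. The predicate shown (a required-field check) is one assumed example of such a condition.

```python
# Illustrative sketch: screen online/offline behavior records by a preset
# attribute condition before generating object information.

def screen(records, condition):
    """Keep only records for which the preset condition holds."""
    return [r for r in records if condition(r)]

records = [
    {"object_id": "obj1", "type": "offline", "location": "shop 1"},
    {"object_id": "obj1", "type": "online"},              # no location field
    {"object_id": "obj2", "type": "offline", "location": "shop 2"},
]
kept = screen(records, lambda r: "location" in r)
print(len(kept))  # 2
```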
In the method, apparatus, device, and medium for obtaining object information, the information of each associated device associated with the object is obtained according to the online behavior data of the object; the offline activity data of each associated device is acquired through collection devices according to the information of each associated device; the offline behavior data of the object is extracted according to the offline activity data of each associated device; and the object information is obtained according to the online behavior data and offline behavior data of the object. Therefore, the online behavior data and offline behavior data of the object are combined, and the accuracy and integrity of the object information are improved.
Furthermore, the object is identified through the collection devices, the offline object activity data is obtained, the offline behavior data is generated according to the object activity data, and the object information is obtained according to the offline behavior data and the online behavior data.
Fig. 7 shows a schematic configuration of a control device 7000. Referring to fig. 7, the control apparatus 7000 includes: a processor 7010, a memory 7020, a power supply 7030, a display unit 7040, and an input unit 7050.
The processor 7010 is a control center of the control apparatus 7000, connects the respective components by various interfaces and lines, and executes various functions of the control apparatus 7000 by running or executing software programs and/or data stored in the memory 7020, thereby monitoring the control apparatus 7000 as a whole.
In the embodiment of the present application, the processor 7010, when calling a computer program stored in the memory 7020, executes the method for obtaining object information as provided in the embodiment shown in fig. 3.
Optionally, the processor 7010 may include one or more processing units; preferably, the processor 7010 may integrate an application processor, which mainly handles operating systems, target object interfaces, applications, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 7010. In some embodiments, the processor and memory may be implemented on a single chip, or in some embodiments, they may be implemented separately on separate chips.
The memory 7020 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, various applications, and the like; the data storage area may store data created from the use of the control device 7000 and the like. In addition, the memory 7020 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The control device 7000 also includes a power supply 7030 (e.g., a battery) for powering the various components, which may be logically coupled to the processor 7010 via a power management system that may be used to manage charging, discharging, and power consumption.
Display unit 7040 may be configured to display information input by or provided to the target object, and various menus of control apparatus 7000, and the like. The display unit 7040 may include a display panel 7041. The Display panel 7041 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-Emitting Diode (OLED), or the like.
The input unit 7050 may be used to receive information such as numbers or characters input by the target object. The input unit 7050 may include a touch panel 7051 and other input devices 7052. Among other things, the touch panel 7051, also referred to as a touch screen, may collect touch operations of a target object on or near the touch panel 7051 (e.g., operations of the target object on or near the touch panel 7051 using any suitable object or attachment such as a finger, a stylus, etc.).
Specifically, the touch panel 7051 may detect a touch operation of a target object, detect signals resulting from the touch operation, convert the signals into touch point coordinates, transmit the touch point coordinates to the processor 7010, receive a command sent from the processor 7010, and execute the command. In addition, the touch panel 7051 can be implemented by various types such as resistive, capacitive, infrared, and surface acoustic wave. Other input devices 7052 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, power on and off keys, etc.), a trackball, a mouse, a joystick, and the like.
Of course, the touch panel 7051 may cover the display panel 7041, and when the touch panel 7051 detects a touch operation on or near the touch panel 7051, the touch operation is transmitted to the processor 7010 to determine the type of the touch event, and then the processor 7010 provides a corresponding visual output on the display panel 7041 according to the type of the touch event. Although in fig. 7, the touch panel 7051 and the display panel 7041 are two separate components to implement the input and output functions of the control device 7000, in some embodiments, the touch panel 7051 and the display panel 7041 may be integrated to implement the input and output functions of the control device 7000.
The control device 7000 may also comprise one or more sensors, such as pressure sensors, gravitational acceleration sensors, proximity light sensors, etc. Of course, the control device 7000 may also comprise other components such as a camera, which are not shown in fig. 7 and will not be described in detail, since they are not components used in the embodiments of the present application.
Those skilled in the art will appreciate that fig. 7 is merely an example of a control device and is not intended to be limiting and may include more or less components than those shown, or some components in combination, or different components.
Embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for obtaining object information in any of the above-mentioned method embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. Based on such understanding, the above technical solutions substantially or partially contributing to the related art may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes several instructions for enabling a control device (which may be a personal computer, a server, or a network device, etc.) to execute the methods of the various embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A method of obtaining object information, comprising:
obtaining information of each associated device associated with the object according to the online behavior data of the object;
acquiring offline activity data of each associated device through acquisition equipment according to the information of each associated device associated with the object;
extracting offline behavior data of the object according to the offline activity data of each associated device;
and obtaining object information according to the on-line behavior data and the off-line behavior data of the object.
2. The method of claim 1, prior to obtaining object information based on the on-line behavior data and the off-line behavior data for the object, further comprising:
acquiring object activity data from acquisition equipment distributed in an activity area of an object, wherein the object activity data comprises characteristic identification information and acquisition equipment information for identifying the identity of the object;
obtaining object identity information according to feature identification information contained in the object activity data;
determining object behavior information according to acquisition equipment information contained in the object activity data;
and obtaining offline behavior data containing the object identity information and the object behavior information.
3. The method of claim 1 or 2, wherein the online behavior data comprises: object-related data obtained from various operations of the object on the network, and object-related data obtained from object-related information stored in each network database.
4. The method of claim 1 or 2, wherein obtaining object information from the on-line behavior data and the off-line behavior data of the object comprises:
screening out data meeting preset attribute conditions from the online behavior data and the offline behavior data of the object;
and generating object information according to the screened data.
5. An apparatus for obtaining object information, comprising:
the first obtaining unit is used for obtaining information of each associated device associated with the object according to the online behavior data of the object;
the acquisition unit is used for acquiring offline activity data of each associated device through the acquisition device according to the information of each associated device associated with the object;
the extraction unit is used for extracting offline behavior data of the object according to the offline activity data of each associated device;
and the second obtaining unit is used for obtaining the object information according to the on-line behavior data and the off-line behavior data of the object.
6. The apparatus of claim 5, wherein the second obtaining unit is further to:
acquiring object activity data from acquisition equipment distributed in an activity area of an object, wherein the object activity data comprises characteristic identification information and acquisition equipment information for identifying the identity of the object;
obtaining object identity information according to feature identification information contained in the object activity data;
determining object behavior information according to acquisition equipment information contained in the object activity data;
and obtaining offline behavior data containing the object identity information and the object behavior information.
7. The apparatus of claim 5 or 6, wherein the online behavior data comprises: object-related data obtained from various operations of the object on the network, and object-related data obtained from object-related information stored in each network database.
8. The apparatus of claim 5 or 6, wherein the second obtaining unit is to:
screening out data meeting preset attribute conditions from the online behavior data and the offline behavior data of the object;
and generating object information according to the screened data.
9. A control device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1-4 are implemented when the program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 4.
CN201911222250.9A 2019-12-03 2019-12-03 Method, device, equipment and medium for obtaining object information Pending CN110992098A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911222250.9A CN110992098A (en) 2019-12-03 2019-12-03 Method, device, equipment and medium for obtaining object information


Publications (1)

Publication Number Publication Date
CN110992098A true CN110992098A (en) 2020-04-10

Family

ID=70089824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911222250.9A Pending CN110992098A (en) 2019-12-03 2019-12-03 Method, device, equipment and medium for obtaining object information

Country Status (1)

Country Link
CN (1) CN110992098A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291638A (en) * 2020-01-19 2020-06-16 上海云从汇临人工智能科技有限公司 Object comparison method, system, equipment and medium
CN114417120A (en) * 2020-10-28 2022-04-29 博泰车联网科技(上海)股份有限公司 Information pushing method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105138875A (en) * 2015-09-30 2015-12-09 百度在线网络技术(北京)有限公司 Identification method and device for user information
CN107767168A (en) * 2017-09-19 2018-03-06 神策网络科技(北京)有限公司 User behavior data processing method and processing device, electronic equipment and storage medium
CN108280368A (en) * 2018-01-22 2018-07-13 北京腾云天下科技有限公司 On a kind of line under data and line data correlating method and computing device
CN108564434A (en) * 2018-03-20 2018-09-21 北京车音网科技有限公司 User's portrait generation method and device
CN110033293A (en) * 2018-01-12 2019-07-19 阿里巴巴集团控股有限公司 Obtain the method, apparatus and system of user information




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40022952

Country of ref document: HK

WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200410
