
EP3616095A1 - Computer vision based monitoring system and method - Google Patents

Computer vision based monitoring system and method

Info

Publication number
EP3616095A1
Authority
EP
European Patent Office
Prior art keywords
sensors
monitoring system
territory
records
detects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP18790579.9A
Other languages
German (de)
French (fr)
Other versions
EP3616095A4 (en)
Inventor
Maxim GONCHAROV
Nikolay DAVYDOV
Stanislav VERETENNIKOV
Dmitry Gorilovsky
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cherry Labs Inc
Original Assignee
Cherry Labs Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US15/647,129 external-priority patent/US11037300B2/en
Priority claimed from GBGB1717244.6A external-priority patent/GB201717244D0/en
Application filed by Cherry Labs Inc filed Critical Cherry Labs Inc
Publication of EP3616095A1 publication Critical patent/EP3616095A1/en
Publication of EP3616095A4 publication Critical patent/EP3616095A4/en
Withdrawn legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/34 Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G06V 40/23 Recognition of whole body movements, e.g. for sport training
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 13/00 Burglar, theft or intruder alarms
    • G08B 13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B 13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B 13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B 13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B 13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B 13/19613 Recognition of a predetermined image pattern or behaviour pattern indicating theft or intrusion
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 13/00 Burglar, theft or intruder alarms
    • G08B 13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B 13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B 13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B 13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B 13/19663 Surveillance related processing done local to the camera
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 25/00 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B 25/01 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium
    • G08B 25/08 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium using communication transmission lines
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/32 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
    • H04L 9/3226 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials using a predetermined code, e.g. password, passphrase or PIN
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/06 Authentication
    • H04W 12/065 Continuous authentication

Definitions

  • the field of the invention relates to electronic systems for monitoring, security and control.
  • One implementation of the present invention relates to a method and a computer-based system for representing data from computer vision systems.
  • Conventional security, monitoring and control systems may comprise a user interface unit that allows a user to view information (e.g. video and audio signals) from a set of cameras and additional sensors on a display of a user processor-controlled device. Because the common approach is simply to stream the video and audio information, current solutions often require the user to watch the video and listen to the audio in order to understand what people do on the territory under surveillance and to know which screen to watch as a surveillance object moves.
  • a user of such a system may have to continuously view all data from the cameras to identify something suspicious, and then has the opportunity to respond to the monitoring system, based on the viewed information, by means of a GUI.
  • the video streams from different cameras may be shown on the display simultaneously as a grid of pictures. As a result, a user often has to shift his/her gaze from one picture to another to catch all the information and to form an overview of the situation in the observed territory.
  • the present invention addresses the above vulnerabilities and also other problems not described above.
  • a first aspect of the invention is a monitoring system comprising:
  • sensors that monitor activity within a designated territory, the sensors including visual or image sensors, for example sensors that make video recordings;
  • a local processing system located within or proximate to the designated territory, the local processing system receiving signals from the sensors, the local processing system processing and analyzing the signals from the sensors to produce messages that describe activity within the designated territory as monitored by the sensors, the messages not including audio, visual or other identifying information that directly reveals the identity of persons within the designated territory;
  • a monitoring station receiving the messages produced by the local processing system and making the messages available to external observers.
  • the messages describe actions performed by a person or animal within the designated territory sufficiently to allow an external observer to determine when external intervention is required.
  • the monitoring station produces an alarm when the messages describe actions performed by a person or animal that indicate an external intervention may be required.
  • the sensors are selected to monitor children, disabled, sick or old persons.
  • the monitoring station forwards to the local processing system a query from an external observer, and the local processing system responds with a message, wherein contents of the message are based on activity within the designated territory detected by processing and analyzing the signals from the sensors.
  • the monitoring station produces an alert when the messages indicate a predetermined gesture or predetermined phrase is detected by the sensors.
  • the monitoring station cancels the alert when the messages indicate a predetermined cancelation gesture or predetermined cancelation phrase is detected by the sensors.
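  • As an illustration of the anonymized messages described above, the following sketch shows how a local processing system might report activity without any audio, video or other directly identifying data, and how a monitoring station might raise an alarm. The field names and helper are assumptions for illustration, not taken from the patent:

```python
from dataclasses import dataclass
import time

# Hypothetical anonymized activity message: it reports *what* happened,
# never raw audio/video or other directly identifying data.
@dataclass
class ActivityMessage:
    timestamp: float                 # when the activity was observed
    zone: str                        # e.g. "kitchen", "pool area"
    subject: str                     # internal label only, e.g. "person-3", "pet-1"
    activity: str                    # e.g. "lying on floor", "approaching pool"
    needs_intervention: bool = False # set when the rules engine flags the activity

def monitoring_station_handler(msg: ActivityMessage) -> None:
    """Hypothetical monitoring-station side: publish the message to external
    observers and raise an alarm if intervention may be required."""
    print(f"[{time.ctime(msg.timestamp)}] {msg.subject} in {msg.zone}: {msg.activity}")
    if msg.needs_intervention:
        print("ALARM: external intervention may be required")

monitoring_station_handler(
    ActivityMessage(time.time(), "pool area", "child-1",
                    "approaching pool, no adult nearby", True)
)
```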
  • the local processing system internally identifies bodies of people and animals within the designated territory based on video data using at least one of the following techniques:
  • face metric identification that identifies and records facial features
  • clothes metric identification that detects and records types of clothes on a body
  • body metric identification that detects and records body shapes
  • activity metric identification that detects and records current activity of a body
  • hair-cut metric identification that detects and records hair-cut style
  • tool metric identification that detects and records objects held or in near vicinity to a body.
  • the local processing system recognizes changes made to bodies within the designated territory and updates affected metric identifications
  • the local processing system analyzes kinetic motion of identified bodies to ensure movements are consistent with the laws of physics, to detect and resolve inconsistencies when internally identifying bodies.
  • the local processing system additionally uses audio data to internally identify bodies of people and animals within the designated territory, the audio data including timbre metrics and relative sound amplitudes.
  • the local processing system internally identifies bodies of people and animals within the designated territory based on video data using more than one of the following techniques:
  • face metric identification that identifies and records facial features
  • clothes metric identification that detects and records types of clothes on a body
  • body metric identification that detects and records body shapes
  • activity metric identification that detects and records current activity of a body
  • hair-cut metric identification that detects and records hair-cut style
  • tool metric identification that detects and records objects held or in near vicinity to a body.
  • the designated territory includes an area in at least one of the following: a school, a prison, a hospital, a shopping mall, a street, an office or a parking lot.
  • the designated territory comprises a plurality of rooms, the local processing system constructing a layout plan of the territory using video data from the sensors.
  • the local processing system additionally uses audio data to construct the layout plan.
  • the local processing system, when constructing the layout plan of the territory, utilizes at least one of the following:
  • estimates of room geometry based on detecting horizontal and vertical lines; detection of major objects within field of view of video sensors;
  • Another aspect is a method for monitoring activity within a designated territory, comprising:
  • Optional steps include any of the following.
  • clothes metric identification that detects and records types of clothes on a body
  • body metric identification that detects and records body shapes
  • activity metric identification that detects and records current activity of a body
  • hair-cut metric identification that detects and records hair-cut style
  • tool metric identification that detects and records objects held or in near vicinity to a body.
  • the audio data being used to internally identify bodies of people and animals within the designated territory, the audio data including timbre metrics and relative sound amplitudes.
  • FIG. 1 illustrates a monitoring system.
  • FIG. 2 illustrates a sensor node activation diagram.
  • FIG. 3 illustrates an exemplary sensor.
  • FIG. 4 illustrates an exemplary computer processing unit.
  • FIG. 5 illustrates an exemplary block diagram of the system calibration.
  • FIG. 6 illustrates an exemplary block diagram representing the monitoring and alarm condition detection.
  • FIG. 7 illustrates an exemplary block diagram representing the monitoring and alarm condition detection.
  • FIG. 8 illustrates an exemplary screen of a user processor-controlled electronic device that contains a frame with identified objects in a viewfinder.
  • FIG. 9 illustrates an exemplary screen of a user processor-controlled electronic device that contains a frame with an identified object in a viewfinder.
  • FIG. 10 illustrates an exemplary block diagram of a system operation in different modes.
  • FIG. 11 illustrates an exemplary block diagram of a system operation in case of a secret signal detection.
  • FIG. 12 illustrates an exemplary user processor-controlled device with the user interface of a dashboard operating on the screen.
  • FIG. 13 illustrates an exemplary graphical user interface of a dashboard with the mini-map and a timeline with a slider on the screen of a user processor-controlled device.
  • FIG. 14 illustrates an exemplary block diagram of a user interaction with the system.
  • FIG. 15 illustrates a further example of a home monitoring and security system.
  • FIG. 16 illustrates the main components of a sensor node including a single camera.
  • FIG. 17 illustrates an exemplary sensor node.
  • FIG. 18 illustrates an exemplary sensor node.

DETAILED DESCRIPTION
  • Example aspects are described herein in the context of a monitoring system and a method for object identification and detection of an object's location, as well as recognition of the type of activity and the time of activity of the detected object.
  • the system is able to collect data from a plurality of sensors, process the data and send alerts based on the processed data to prevent or react to emergency situations.
  • the described system may react automatically in situations such as an appearance of an unknown person in the house during night hours, an appearance of an unknown person in the master bedroom during day hours, a broken window, a failure to shut off water in a bathroom, a baby crawling into a fireplace, a baby approaching a swimming pool with no adults in the vicinity, a person who has fallen down, a person who is lying on a floor or in any other unusual location for a long time, a person crying for help, a child not returning from school at a normal time, and so on.
  • the term "frame" herein may refer to a collection of video data, audio data and other available sensory data captured from all the cameras and sensors of the monitoring system at one particular moment in time, for example, in one second.
  • the term "object" herein may refer to an object observed by the monitoring system.
  • an object may refer to an animate object, for example, a person (in other words, an individual, a human), a pet, etc.
  • Objects may be "known", which means they are included in the system's internal database of known people and pets, in other words with an ID and characteristics that are stored in the memory of the system, or "unknown", which means that the object is not included in the database, or that its characteristics are new to the monitoring system.
  • the term “territory” herein may refer to territory to be observed by the monitoring system.
  • the term “territory” may refer to a living quarter, to an apartment, to a building of a hospital, a school, an old people's home, a private house, to an adjoining territory, etc. It may be a physical and logical location.
  • the term "user” herein may refer to any person or group of people who may interact with the monitoring system.
  • a user may be an owner of a house, or a user may be a medical officer who takes care of elderly persons.
  • the term "zone” herein may refer at least to the part of the territory, for example, to a swimming pool, a fireplace, a room, etc.
  • the term "forbidden" zone herein may refer to a specific part of the territory to be observed by the monitoring system that is not allowed for a specific object.
  • the monitoring system may react if it detects crossing of the border of the forbidden zone by an object.
  • the term “allowed” zone herein may refer to a specific part of the territory observed by the monitoring system that is allowed for a specific object.
  • the term "abnormal event” herein may refer to pre-defined types of activities within a frame to which the monitoring system may react.
  • pre-defined types of activities may include motions related to the intrusions (such as breaking into a window, fighting, presenting weapons, etc.).
  • Pre-defined types of activities also may relate to activities that may be typical for a specific medical condition (for example: falling down, agonizing, visible blood or injury, furniture movements, etc.) and so on.
  • alarm condition may refer to a set of rules, describing a situation that may be considered as dangerous or unsafe for normal wellbeing.
  • some of the rules may be predefined by the monitoring system.
  • the owner or other users may create and/ or adjust rules.
  • a defined set of rules depends on the scenarios or application of the monitoring system.
  • One of the illustrative examples is equipping a private house with the given monitoring system to protect people from a dangerous situation, to safeguard lives by detecting abnormal events and to send alert notifications to external services, such as police, or a user processor-controlled electronic device.
  • the monitoring system may protect from physical break-ins, protect people from medical problems, react if a baby is approaching a swimming pool with no adults in the vicinity, and react when someone collapses or has a stroke.
  • Another illustrative example is equipping a hospital or a home for elderly persons.
  • the monitoring system monitors a person of interest in a room in order to react and call an alert or a medical officer in case the monitored person has a stroke, loses consciousness, falls, faints and so on.
  • the monitored person may also provide an alarm signal to trigger an alarm notification.
  • the term "alarm signal" herein may refer to a specific gesture or a specific sound signal, which may switch the monitoring system into alarm mode and initiate the procedure of sending the alarm notification to the external services or devices.
  • the alarm signal is needed when the monitoring system is not able to recognize the problem automatically; in that case the monitored person may signal the problem.
  • FIG. 1 illustrates an exemplary monitoring system 100.
  • The components shown in FIG. 1 may vary without limiting the scope of the disclosure.
  • a monitoring system 100 includes a set of sensors 110 which may include at least video cameras, infrared lighting for night vision, and a plurality of microphones. Moreover, the described monitoring system may have a gyroscope, a motion detector, a smoke sensor, a thermographic camera (heat detector), a detector of intrusion (through a window or doors), a thermometer, a smell detector and other sources of information. Sensors 110 may be allocated within the observed territory in such a way that they capture signals including all information related to activity within the territory. For example, sensors 110 of different types produce a dense representation of signals and augment data received from other sensors to restore a complete frame. The list of detectors is illustrative.
  • a motion sensor may be used to record the camera positioning and to adjust it in case of replacement.
  • An accelerometer/ magnetometer may be used to adjust the camera according to the compass points.
  • a pressure sensor may be used to confirm the camera positioning height (the same mechanism phones use to determine their position).
  • More than one set of lenses (such as visible light and infra-red) may be used to create a depth picture along with creating a stereoscopic image.
  • a low quality mode may mean the following:
  • a sensor node is in the idle mode, meaning no video recording or feed takes place. It can be activated by movement within the active area.
  • when a moving object triggers the motion sensor, it activates a low quality video feed. If the central computing unit requests a high quality video feed, the sensor node switches to the high quality mode.
  • the high quality mode is triggered by the computer processing unit 120. It can utilize the full capability of a single lens array to output Full HD 60 FPS video or a UHD video feed with a lower number of frames per second. This mode is automatically used for facial recognition, and it can be requested by the user or required by the central computing unit in case of an emergency situation.
  • a sensor node can utilize both lens arrays to create a stereoscopic picture and compile it into a single image before ISP (image signal processor) utilizing an internal FPGA.
  • ISP image signal processor
  • a night mode utilizes the lens array with an infrared filter along with the infrared lighting.
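  • The idle, low quality, high quality and night modes described above can be thought of as a small state machine. A minimal sketch follows, assuming illustrative mode names and trigger inputs that are not specified in the patent:

```python
from enum import Enum, auto

class NodeMode(Enum):
    IDLE = auto()         # no recording, waiting for the motion sensor
    LOW_QUALITY = auto()  # low-resolution feed after motion is detected
    HIGH_QUALITY = auto() # Full HD / UHD feed requested by the processing unit
    NIGHT = auto()        # infrared lens array plus infrared lighting

def next_mode(mode: NodeMode, motion: bool, hq_requested: bool, is_dark: bool) -> NodeMode:
    """Hypothetical transition logic for the modes described above."""
    if is_dark:
        return NodeMode.NIGHT
    if hq_requested:
        return NodeMode.HIGH_QUALITY
    if motion:
        return NodeMode.LOW_QUALITY if mode is NodeMode.IDLE else mode
    return NodeMode.IDLE

# Example: an idle node sees motion, then the processing unit asks for high quality.
mode = NodeMode.IDLE
mode = next_mode(mode, motion=True, hq_requested=False, is_dark=False)  # LOW_QUALITY
mode = next_mode(mode, motion=True, hq_requested=True, is_dark=False)   # HIGH_QUALITY
print(mode)
```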
  • FIG.3 illustrates an exemplary sensor, and in particular, an exemplary video camera 200 in accordance with an implementation.
  • the video camera 200 includes, for example, a speaker 201, an infrared night light 202, a lens 203, a microphone 204, a pass-through adapter 205, a Wi-Fi unit 206, an ARM controller 207 and a backup battery 208.
  • computer processing unit 120 interacts with a set of sensors 110 administering them by a control loop.
  • computer processing unit 120 is a local processing system located within or proximate to a designated territory in which sensors 110 monitor activity.
  • computer processing unit 120 receives signals from sensors 110, and processes and analyzes the signals to produce messages that describe activity within the designated territory as monitored by sensors 110.
  • the messages do not include audio, visual or other types of data that directly reveal the identity of persons within the designated territory.
  • the designated territory can be an area in a school, a prison, hospital, a shopping mall, a street, an office, a parking lot or another type of area.
  • the processed data received from the set of sensors 110 may be stored in memory 130.
  • Machine learning techniques may also be used to provide the ability to determine who is home and if that person is a member of the household through training the computation unit using pictures and video.
  • the set of sensors are connected to a computer processing unit hub via a Wi-Fi connection.
  • For security reasons, the footage used for machine learning purposes does not leave the household when a central hub is present. However, the system can be operational without the central hub.
  • the central hub may be a standard PC with a small form factor GPU running the machine learning algorithms.
  • the computer processing unit preferably has the following interfaces:
  • Two Wi-Fi connectivity modules: one of the modules is used to connect to the Internet, while the other creates a separate channel for the camera nodes' traffic;
  • a Bluetooth™ direct connection to a user's phone for the purpose of adjusting settings and controlling the system, and as a reserve communication channel.
  • the system may operate through a cloud-provided server-based system but using more of the broadband connection.
  • the computer processing unit 120 includes a charging station. A battery remains at a 50% charge while installed in the charging station. If a sensor node sends a signal to the computer processing unit 120 indicating low power levels, the battery is charged up to 100%, prolonging the battery life and optimizing the battery life span. A separate LED indicator situated on the battery body shows the charge level only when the battery is removed from the sensor node.
  • the computer processing unit 120 internally identifies bodies of people and animals within the designated territory based on video data, using one technique or a combination of techniques: face metric identification that identifies and records facial features, clothes metric identification that detects and records types of clothes on a body, body metric identification that detects and records body shapes, activity metric identification that detects and records current activity of a body, hair-cut metric identification that detects and records hair-cut style, tool metric identification that detects and records objects held or in near vicinity to a body, or another technique.
  • a list of connected devices can be used to distinctively identify people within the household by studying the list of devices present in the presence of a certain number of people and the changes to that list while people are moving in and out of the operational zone.
  • computer processing unit 120 recognizes changes made to bodies within the designated territory and updates affected metric identifications.
  • computer processing unit 120 analyzes the kinetic motion of identified bodies to ensure movements are consistent with the laws of physics, to detect and resolve inconsistencies when internally identifying bodies.
  • computer processing unit 120 additionally uses audio data to internally identify bodies of people and animals within the designated territory where the audio data includes timbre metrics and relative sound amplitudes.
  • FIG.4 provides an example of the computer processing unit 120 in accordance with an implementation.
  • the processing unit 120 may include, for example, a Wi-Fi router 231, a backup battery 232, a backup mobile internet module 233, a CPU board 234 and a GPU board 235.
  • the computer processing unit 120 may be included in a local server 140.
  • the term "local" means that local server is allocated within the territory observed by monitoring system 100. Described allocation of the processing unit and a local server doesn't overload internet channels; increasing the effectiveness of monitoring system 100. Privacy, data protection and data security are protected, for example, by data encryption. Additionally, privacy, data protection and data security are enhanced by the independence of monitoring system 100 from the Internet. Such independence from the Internet makes monitoring system 100 more confidential and reliable for users. Monitoring system 100 does not send any private source data to external storage resources such as a cloud storage system or some other external storage systems located outside the territory.
  • the low-level video description may include recognized objects - animate and inanimate - on different levels of a hierarchy.
  • the low-level video description may include descriptions of a body, body parts, skeleton joint coordinates, body shape measurements and samples of clothes, skin colors and textures on different parts of the body.
  • the low-level video description may also include descriptions of a face, coordinates of facial features, a type of the haircut, external objects (such as earrings), and samples of color and texture on lips, ears, hairs, eyes and teeth.
  • the low-level video description may also include descriptions of hand, coordinates of finger joints, types of the finger and toe nails, samples of the color and texture of the fingers, the nails and the palms.
  • a low-level audio description may include, for example, recognized sounds on different levels of hierarchy.
  • low level audio description may include a single sound or morpheme, its spectrum, its relative amplitude on different microphones, or a longer duration audio sound which is recognized as known, such as, for example, a phrase to set an alarm on or off, or a phrase to enable/ disable surveillance.
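  • A minimal sketch of how such a low-level frame description might be structured follows; all class and field names are illustrative assumptions rather than the patent's actual schema:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class BodyDescription:
    joints: Dict[str, Tuple[float, float]]       # skeleton joint name -> (x, y)
    shape_measurements: Dict[str, float]         # e.g. {"height_px": 420.0}
    clothes_samples: List[Tuple[int, int, int]]  # sampled RGB colours/textures

@dataclass
class FaceDescription:
    landmarks: Dict[str, Tuple[float, float]]    # facial feature coordinates
    haircut: str                                 # e.g. "short", "ponytail"
    external_objects: List[str]                  # e.g. ["earrings"]

@dataclass
class AudioDescription:
    sound: str                                   # recognized sound or morpheme
    spectrum: List[float]                        # coarse spectral bins
    relative_amplitudes: Dict[str, float]        # microphone id -> amplitude

@dataclass
class LowLevelFrameDescription:
    timestamp: float
    bodies: List[BodyDescription] = field(default_factory=list)
    faces: List[FaceDescription] = field(default_factory=list)
    audio: List[AudioDescription] = field(default_factory=list)
```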
  • high-level description 122 may be generated.
  • the processed data received from the sensors 110 is stored in memory 130 in an accurate high-level description file 135, for example, in a textual log-file.
  • activity detection algorithms may be used to create the low-level description (see Serena Yeung, et al., "Learning of Action Detection from Frame Glimpses in Videos", arXiv:1511.06984, 2015).
  • the high-level description 135 may be extracted from the memory storage and processed at any time.
  • the high-level description file 135 may be customized and transferred into the script of any other external system.
  • the information from the file 135 may be processed from text to speech.
  • Computer processing unit 120 uses the source signal (video signal, audio signal, data from smoke detector, etc.) from sensors 110 to recognize and differentiate objects within a frame, and to identify the type and spatial position of the detected objects.
  • the results produced by computer processing unit 120 are stored in an accurate high-level description file 135.
  • Computer processing unit 120 may produce people and pet identification and re-identification units based on face metrics, body metrics, clothes metrics, hair-cut metrics, pose and limb coordinate estimations, detection of people and pets spatial position, detection of activity, detection of controlling visual gestures and audial signals, detections of specific medical conditions, detection of the "scenario" of people and pets interaction with the surrounding objects, and so on.
  • spatial localization based on the audio signal uses relative amplitudes. That is, as an object moves towards and away from a microphone, the detected amplitude of an audio signal based on sound from the object varies with location. The amplitude will also vary depending on which microphone generates the audio signal based on the sound.
  • computer processing unit 120 can statistically correlate the distribution of the audio signal amplitudes at all the microphones on the territory with the known spatial locations of the object. The result of this statistical correlation is a mapping of the space of audio amplitudes to the plan of the territory. So, when an object is not seen by any of the video cameras for a period, monitoring system 100 may restore the object's spatial location by using the microphone signals and the mapping performed during the calibration stage.
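  • A rough sketch of this idea follows, assuming a simple nearest-neighbour lookup in place of whatever statistical model the system actually uses: each microphone amplitude vector observed during calibration is paired with the object's known position, and a later amplitude vector is mapped back to the closest calibrated position:

```python
import math
from typing import Dict, List, Tuple

# Calibration samples: (microphone amplitudes, known (x, y) position).
calibration: List[Tuple[Dict[str, float], Tuple[float, float]]] = []

def record_calibration_sample(amplitudes: Dict[str, float],
                              position: Tuple[float, float]) -> None:
    calibration.append((amplitudes, position))

def estimate_position(amplitudes: Dict[str, float]) -> Tuple[float, float]:
    """Return the position whose calibration amplitudes are closest (Euclidean)."""
    def distance(sample: Dict[str, float]) -> float:
        mics = set(sample) | set(amplitudes)
        return math.sqrt(sum((sample.get(m, 0.0) - amplitudes.get(m, 0.0)) ** 2 for m in mics))
    _, best_pos = min(calibration, key=lambda item: distance(item[0]))
    return best_pos

# During calibration the user walks a known trajectory past the microphones.
record_calibration_sample({"mic-kitchen": 0.9, "mic-hall": 0.2}, (1.0, 4.0))
record_calibration_sample({"mic-kitchen": 0.1, "mic-hall": 0.8}, (6.0, 2.0))
# Later, when no camera sees the object, its location is restored from audio alone.
print(estimate_position({"mic-kitchen": 0.7, "mic-hall": 0.3}))  # -> (1.0, 4.0)
```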
  • Computer processing unit 120 interacts with a memory storage unit 130 to store the processed information, for example, in the form of a high-level description 135 and to extract stored information for further detailed analysis.
  • Memory storage unit 130 may include, for example, a database of known persons 474, a short-term memory unit of the previous landmark positions 425, a short-term memory of faces, bodies and clothes metrics 435, a short-term memory of geometrical positions and personal identifications 485, a database of pre-set alert conditions 492, and a short-term memory of audio frames 850.
  • the source unprocessed signals from the sensors may be stored in the memory just for a short period of time.
  • Monitoring system 100 may include an Application Programming Interface (API) module 155 that provides the opportunity for a user to interact with monitoring system 100.
  • Applications on user-processor controlled electronic devices 180 by means of a graphical user interface (GUI) 160 may access the server 140 through the Application Programming Interface (API) 155 - a software interface. Interaction of a user with monitoring system 100 may be performed for example, through the mobile application or through the user interface of a desktop program 180.
  • One embodiment, for example, may be represented by a client-server system or a request-response model.
  • GUI 160 on a user-processor controlled device 180 provides a user with the possibility to receive alerts from monitoring system 100, to view the collected, stored and analysed data from sensors, to calibrate monitoring system 100, to define rules that describe the dangerous situation and to add new objects to the database of monitoring system 100.
  • signals from the set of sensors 110 may be represented in real time via the display of a user-controlled device (mobile phone, laptop, smartphone or personal computer, etc.) without any modifications.
  • monitoring system 100 may provide information pertaining to what is happening at a home to a user.
  • GUI 160 of the system on the user controlled device may display all captured video and images from all cameras and play the recorded audio tracks.
  • the user may give a response to monitoring system 100 based on the viewed information by means of GUI 160.
  • the user may request monitoring system 100 to provide more detailed information from sensors 110, to give a report based on the analysis of data from sensors 110, to call an alert or to call off an alert, to call the police, ambulance or emergency service, or so on.
  • monitoring system 100 gives access to the user to view information, to obtain access to stored data in memory storage 130 of server 140 and to control monitoring system 100.
  • the user may be provided with the results of the data processed by unit 120.
  • the high-level description 135 of one or several frames may be pre-processed to display the information on a user-controlled device 180.
  • the information may be represented in several ways, for instance in the form of a textual report, in the form of a map, in the form of a graphical reproduction or cartoons, or in the form of a voice that describes main events during the requested time period.
  • the textual report may include textual information about the main events.
  • the textual report may say: "John woke up at 9 am, went to the kitchen, had breakfast with the nurse, spent 2 hours in the study room, spent one hour near the swimming pool."; "A nurse cooked breakfast at 8.30 am, cleaned the rooms, spent 2 hours with John in room B, ..."; "Patient Mary G. took a medicine at 8 am, read a book for 3 hours, left the room at 1 pm, ..."; and so on.
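  • A minimal sketch of producing such a textual report from high-level description entries follows; the entry format (time, subject, activity) is an assumption about file 135, not its actual schema:

```python
from typing import List, Tuple

def build_report(entries: List[Tuple[str, str, str]], subject: str) -> str:
    """Turn (time, subject, activity) entries into a one-line report for one subject."""
    sentences = [f"{activity} at {when}" for when, who, activity in entries if who == subject]
    if not sentences:
        return f"No events recorded for {subject}."
    return f"{subject} " + ", ".join(sentences) + "."

log = [
    ("9 am", "John", "woke up"),
    ("9.10 am", "John", "went to the kitchen"),
    ("8.30 am", "Nurse", "cooked breakfast"),
]
print(build_report(log, "John"))
# John woke up at 9 am, went to the kitchen at 9.10 am.
```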
  • a user's desktop or mobile applications 160 in processor-controlled user devices 180 exchanges information with monitoring system 100.
  • Electronic communications between computer systems generally includes packets of information, referred to as datagrams, transferred from processor-controlled user devices 180 to a local server and from a local server to processor-controlled user devices 180.
  • the elements of the system exchange the information via Wi-Fi.
  • In comparison to other known electronic communications, the current interaction of a user/client with monitoring system 100 involves adding and storing information within monitoring system 100 in such a way that this information is stored in the memory storage of the local server 140, which is located inside the territory where monitoring system 100 is installed. This ensures privacy of the information about the personal life of a user of the given system.
  • Monitoring system 100 is able to interact with external systems or services, for example, with smart house systems, with the police systems, with medical services, and so on.
  • the described system for monitoring, security and control 100 sends alert notifications to the external services and systems 170 and to user processor-controlled devices 180 based on the analysis of the data from sensors 110.
  • the alerts may be presented to the user with the help of a GUI on the display, for example.
  • external services and systems 170 include a monitoring station that receives messages produced by computer processing unit 120 and makes the messages available to external observers.
  • the messages describe actions performed by a person or animal within the designated territory sufficiently to allow an external observer to determine when external intervention is required.
  • external devices or services 170 includes a monitoring station that produces an alarm when the messages from computer processing unit 120 describe actions performed by a person or animal that indicate an external intervention may be required.
  • sensors 110 are selected to monitor children, disabled, sick or old persons.
  • the monitoring station forwards to computer processing unit 120 a query from an external observer and computer processing unit 120 responds with a message, wherein contents of the message are based on activity within the designated territory detected by processing and analysing the signals from sensors 110.
  • the owner can be notified via the mobile device.
  • Notifications can be accompanied by the video or pictures provided by the cameras according to settings defined by the user. To prevent false positive results, a user can view the video feed from the camera. After each situation, the system remembers the result to prevent future false positives.
  • Calibration is an initial step of running of monitoring system 100 within an unknown territory. During the calibration, monitoring system 100 identifies basic parameters of the territory, identifies the type of the territory, creates the map of the territory, and so on. For example, when the designated territory includes a plurality of rooms, the local processing system constructs a layout plan of the territory using video data from the sensors. For example, the local processing system additionally also uses audio data to construct the layout plan. For example, estimates of room geometry are based on detecting horizontal and vertical lines.
  • neural network algorithms are used to recognize a room type based on a database of known room types.
  • measurements can be based on requested configuration activities of a user within a room.
  • FIG.5 illustrates an exemplary block diagram of a calibration method of monitoring system 100 in accordance with one or more aspects.
  • the method of the calibration of monitoring system 100 starts at step 300 and proceeds to step 370.
  • source signals from a set of sensors 110 are synchronized.
  • monitoring system 100 determines the geometrical parameters of the territory. Particularly, monitoring system 100 may identify allocation of rooms inside the observed building, sizes of the rooms, their functions, the furniture inside the rooms, the parameters of adjoining territory, for example, garden, garage, playing ground etc. Monitoring system 100 may also determine an allocation of windows and entries, source of light, fireplaces, stairs, connection with other rooms, etc.
  • the type and function of territory may influence how monitoring system 100 will process the data from sensors, the type of abnormal event within the frame. For example, if the territory represents a private home, the scenarios will be different from the scenarios of monitoring system 100 in the hospital for elderly persons. Type and function of the territory may be indicated by the user at step 330.
  • monitoring system 100 may generate the plan of the territory based on the identified geometrical parameters.
  • the estimation of the room geometry may be based on detecting horizontal and vertical lines, angle positions of the gyroscopes, detection of major objects within the field of view of video sensors, and neural network algorithms that recognize a room type based on a database of known room types.
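  • As one hedged illustration of the horizontal/vertical line cue, the following sketch (assuming OpenCV, which the patent does not name) detects edge segments and keeps the near-horizontal and near-vertical ones as candidate room boundaries:

```python
import cv2
import numpy as np

def detect_room_lines(frame: np.ndarray, angle_tolerance_deg: float = 10.0):
    """Return (horizontal, vertical) line segments found in one camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                               minLineLength=60, maxLineGap=10)
    horizontal, vertical = [], []
    for seg in (segments if segments is not None else []):
        x1, y1, x2, y2 = seg[0]
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if angle < angle_tolerance_deg or angle > 180 - angle_tolerance_deg:
            horizontal.append((x1, y1, x2, y2))   # candidate floor/ceiling boundary
        elif abs(angle - 90) < angle_tolerance_deg:
            vertical.append((x1, y1, x2, y2))     # candidate wall edge or door frame
    return horizontal, vertical

# The resulting segments can feed a room-geometry estimate or a classifier of
# room types, as described above.
```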
  • a plan of the territory may be generated in other possible ways without limiting the scope of the present invention.
  • a plan of the territory may be generated by a user.
  • if the sensor node is removed from a previously designated position, it activates the sensor array and re-maps the surrounding area to redefine its position in space and relative to other sensor nodes while being moved.
  • if a new sensor node is added to the monitoring system, it configures itself automatically by adding its unique ID to the database of the sensor nodes (included in the system). It is automatically routed BLE and Wi-Fi access data by the central computing unit.
  • sensor nodes provided with the system are configured the same way; the only difference is that sensor nodes sold together are recorded in an online database and transferred to a computing unit on configuration.
  • a user may be requested during calibration to walk from one video camera to another, and from one microphone to another, allowing monitoring system 100 to create the plan of the territory based on the measurements as the user moves along a known trajectory.
  • Monitoring system 100 also, for example, correlates the relative sound amplitudes of all the microphones with the spatial location of the object on a territory plan.
  • a designed plan may include all available information about the territory, including, for example, the allocation of rooms, their geometrical sizes, windows, doors, stairs, source of light, the type of room, etc.
  • the generated plan depicts the graphical image of the territory and is created to schematically represent to a user the received and processed data from the sensors of monitoring system 100.
  • Generating of a territory map may be performed in a semi-automatic way based on the data from the sensors 110.
  • monitoring system 100 creates the description of the preliminary plan and the user may correct the results of the created plan at step 330 to increase accuracy of identified parameters of the territory. Step 330 is optional and may be skipped.
  • monitoring system 100 creates the description of the living space in an automatic way without operator assistance.
  • the plan of the territory may be conventionally divided into “forbidden” and "allowed” zones.
  • the "forbidden” and “allowed” zones may be common for all observed objects.
  • "allowed” and “forbidden” zones may be specifically indicated for every specific object.
  • a child may have “forbidden zones”, such as swimming pool, kitchen, stairs etc.
  • a nurse and housemaid may have different "forbidden zones” such as, for example, a private study room or a bedroom.
  • monitoring system 100 may automatically define forbidden zones for specific objects. For example, a fireplace and a pool may be defined as forbidden for a child. The operator may also indicate which zones are forbidden for which objects. For instance, a user may tag "forbidden" and "allowed" zones on the plan for each of the observed objects. These zones may be further changed or adjusted. The purpose of dividing the space into "allowed" and "forbidden" zones is control and safety. In one embodiment, monitoring system 100 may react if an alarm condition is fulfilled, for example, when monitoring system 100 recognizes that an object has crossed the border of a forbidden zone.
  • some user-defined rules may augment the procedure of verifying the "abnormal" event of a child approaching a swimming pool.
  • monitoring system 100 may check if there are some other objects (grown-ups) in the vicinity. Monitoring system 100 recognizes the "forbidden zones" and reacts to approaching and crossing the borders of these zones. The technique of verifying the alarm condition will be described further with reference to FIG. 6, FIG. 7 and FIG. 11.
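  • A minimal sketch of such a per-object forbidden-zone check follows; the zone polygons, object identifiers and point-in-polygon test are illustrative assumptions:

```python
from typing import Dict, List, Tuple

Point = Tuple[float, float]
Polygon = List[Point]

def point_in_polygon(p: Point, poly: Polygon) -> bool:
    """Ray-casting test: count edge crossings of a horizontal ray from p."""
    x, y = p
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Per-object forbidden zones, e.g. the pool polygon is forbidden for the child only.
forbidden_zones: Dict[str, Dict[str, Polygon]] = {
    "child-1": {"pool": [(0, 0), (4, 0), (4, 3), (0, 3)]},
}

def check_forbidden(object_id: str, position: Point) -> List[str]:
    zones = forbidden_zones.get(object_id, {})
    return [name for name, poly in zones.items() if point_in_polygon(position, poly)]

print(check_forbidden("child-1", (2.0, 1.5)))  # ['pool'] -> check for adults nearby, raise alert
```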
  • all observed objects are classified by monitoring system 100 as being "known" and "unknown". All people and pets living on this territory, and also the important inanimate objects, can be "introduced" to monitoring system 100 by the user, so that they are added to the database of "known" objects 474. The user gives them a name and grants access privileges to the "forbidden" and "allowed" zones.
  • a user may create and augment some rules which may describe the alarm condition.
  • Pre-defined rules represent some description of a situation that may be dangerous.
  • rules may be described with the help of a GUI of user processor-controlled devices that provides grammatical tools in the form of main questions such as: "Who is doing something?"; "Where is she/he doing it?"; "When is she/he doing it?"; and "What is she/he doing?".
  • Monitoring system 100 may already have some rules for alarm conditions. Also, the user may add some types of activities that may be considered as abnormal events.
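  • A sketch of how such a "Who / Where / When / What" rule might be represented and matched follows; the field names and matching helper are assumptions, not the patent's implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AlarmRule:
    who: Optional[str]     # object ID or category, None = anyone (e.g. "unknown person")
    where: Optional[str]   # zone name, None = anywhere
    when: Optional[range]  # allowed hours, e.g. range(0, 6) for night, None = any time
    what: Optional[str]    # activity description, None = any activity

    def matches(self, who: str, where: str, hour: int, what: str) -> bool:
        return ((self.who is None or self.who == who)
                and (self.where is None or self.where == where)
                and (self.when is None or hour in self.when)
                and (self.what is None or self.what == what))

# "An unknown person anywhere during night hours, doing anything" -> alarm condition.
night_intruder = AlarmRule(who="unknown person", where=None, when=range(0, 6), what=None)
print(night_intruder.matches("unknown person", "living room", 2, "walking"))  # True -> alert
```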
  • FIG. 6 and FIG. 7 provide an exemplary block diagram of a computer processing unit 120 operating in accordance with one or more aspects of the present disclosure to verify the alarm condition and send an alert notification to the external system or user processor-controlled devices.
  • the computer processing unit 120 receives data of a video frame from at least one sensor, for example, from a camera viewfinder 200.
  • the frame is analyzed based on known image processing techniques.
  • Anatomical landmarks of a human body may include, for example, a forehead, a chin, a right shoulder, a left shoulder, a right palm, a left palm, a right elbow, a left elbow, a right hip, a left hip, a right knee, a left knee, a right foot, a left foot, and so on. This list is illustrative, as other parts of a human body may be detected. A hierarchy of body parts exists and may be used for verification of a correct detection of an object.
  • FIG. 8 illustrates schematically an exemplary screen of a user processor-controlled electronic device that contains a frame with identified objects in a viewfinder, in accordance with an implementation.
  • Anatomical landmarks such as a forehead 501, a chin 502, a right shoulder 503, a left shoulder 504, a right palm 505, a left palm 506, a right elbow 507, a left elbow 508, a right hip 509, a left hip 510, a right knee 511, a left knee 512, a right foot 513, a left foot 514, etc. of human bodies are identified within the frame 500.
  • monitoring system 100 may interact with a short-term memory storage 425 to extract data about previous landmark positions.
  • the term "previous" may refer to the information extracted from the previously analysed frames.
  • the identified anatomical landmarks are the basis points to generate a human skeleton and to detect the pose and activity of a human.
  • the human body is identified based on a deep learning algorithm for object recognition. For example, the approach of "Real-time Multi-Person 2D Pose Estimation using Part Affinity Fields" by Zhe Cao, Tomas Simon, Shih-En Wei, Yaser Sheikh, arXiv:1611.08050, 2016, may be used. Any other machine learning algorithm, such as "long short-term memory" (a recurrent neural network), may also be used here.
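  • As a simplified illustration (an assumed heuristic, not the cited Part Affinity Fields method), the following sketch classifies a coarse pose from landmark coordinates such as those shown in FIG. 8:

```python
from typing import Dict, Tuple

Keypoint = Tuple[float, float]  # (x, y) in image coordinates, y grows downwards

def classify_pose(landmarks: Dict[str, Keypoint]) -> str:
    """Classify a coarse pose from the extent and layout of the detected skeleton."""
    xs = [x for x, _ in landmarks.values()]
    ys = [y for _, y in landmarks.values()]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    if height < 0.6 * width:
        return "lying"  # body mostly horizontal -> possible fall, check alarm rules
    hip_y = (landmarks["right_hip"][1] + landmarks["left_hip"][1]) / 2
    knee_y = (landmarks["right_knee"][1] + landmarks["left_knee"][1]) / 2
    return "sitting" if abs(hip_y - knee_y) < 0.15 * height else "standing"

pose = classify_pose({
    "forehead": (100, 40), "right_hip": (95, 160), "left_hip": (110, 160),
    "right_knee": (95, 230), "left_knee": (110, 230),
    "right_foot": (95, 300), "left_foot": (110, 300),
})
print(pose)  # standing
```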
  • FIG. 9 illustrates an example user interface of monitoring system 100 in accordance with an implementation. Namely, FIG. 9 illustrates exemplary results of human skeleton restoration, individual identification and pose detection on a larger scale.
  • a person can be reliably identified by his/her face or audio timbre. However, most of the time the face is not visible to the video cameras. In one of the embodiments, one of the video cameras is directed toward the entrance to the territory, so that it at least captures the face of everybody who enters. Once a person is identified by his/her face or audio timbre, all other metrics of this person are measured and updated: his/her current clothes, hairstyle, attached objects such as earrings, rings and bracelets, samples of his/her lip color, and body shape measurements. Some of such metrics are built into monitoring system 100. However, in one of the embodiments, monitoring system 100 can add new metrics to itself, as it challenges itself and learns automatically over time to recognize "known" people on this territory better and better using even minor clues. For example, after a certain amount of time monitoring system 100 will recognize a person by only a fragment of his/her hand.
  • Some metrics are known to be constant over a long time including, for example, facial features, audio timbre or body shape measurements.
  • the other metrics such as clothes, are only valid for a short time, as a person can change clothes.
  • such "short- time” metrics are extremely robust for short-term tracking. Clothes are often more visible to a camera than a face.
  • Some other metrics can have "medium duration", for example, a hair-style.
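  • A sketch of the long-, medium- and short-duration metric idea follows; the validity durations and weights are illustrative assumptions:

```python
import time
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    value: str
    measured_at: float
    validity_s: float  # how long the measurement can be trusted
    weight: float      # contribution to the identity score while valid

    def is_valid(self, now: float) -> bool:
        return now - self.measured_at <= self.validity_s

def identity_score(metrics: list, observed: dict, now: float) -> float:
    """Sum the weights of still-valid metrics that match the current observation."""
    return sum(m.weight for m in metrics
               if m.is_valid(now) and observed.get(m.name) == m.value)

now = time.time()
john = [
    Metric("face", "face-embedding-17", now - 86400 * 30, 86400 * 365, 0.6),  # long-term
    Metric("haircut", "short-dark", now - 86400 * 2, 86400 * 14, 0.2),        # medium duration
    Metric("clothes", "red-sweater", now - 3600, 12 * 3600, 0.2),             # short-term
]
# Face not visible (e.g. back to the camera): clothes and haircut still identify John.
print(identity_score(john, {"haircut": "short-dark", "clothes": "red-sweater"}, now))  # 0.4
```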
  • monitoring system 100 recognizes a detected person as "known” or "unknown".
  • the person may be identified precisely and identification of the person is assigned.
  • FIG. 8 also depicts the main information about the identified object, for example, the type of the object (person or pet), identification, name, pose (standing, sitting) and status.
  • reading of face metrics, body metrics, and clothes metrics is performed.
  • Monitoring system 100 recognizes the biometric parameters of humans within the frame of the video stream. Monitoring system 100 may use identification based on the shape of the body of a human, haircut or clothes. This type of identification is not reliable on its own but may be used as one of the levels of identification.
  • Behavioral characteristics are related to the pattern of behavior of a person, including but not limited to typing rhythm, gait, and voice.
  • the person may be identified based on the timbre of voice.
  • the voice may not be recorded for privacy reasons.
  • Monitoring system 100 tracks the sound intensity and the spectral characteristics of the sound. Behavioural characteristics are stored in the memory of monitoring system 100 and analysed.
  • Monitoring system 100 may know what behaviour, type and time of activities is typical for a specific object.
  • monitoring system 100 may interact with a short-term memory storage 435 of faces, bodies and clothes metrics, which contains information about the faces, bodies and clothes metrics detected in the previous frame.
  • person identification is performed based on the information assembled from previous steps and extracted from the database of known persons 474.
  • monitoring system 100 allows for identification of a "known" person who just changed clothes.
  • monitoring system 100 verifies whether any detected person has changed clothes. If somebody has changed clothes, monitoring system 100 goes to step 460 to update the clothes metrics data and to add it to the short-term memory of faces, bodies, and clothes metrics 435.
  • FIG. 7 illustrates the continuation of FIG.6.
  • monitoring system 100 updates the geometrical (spatial) positions of all objects based on the information from the short- term memory of geometrical positions and personal IDs 485.
  • monitoring system 100 compares people's IDs, their spatial position, timestamps and activity types with the list of alert triggers.
  • Database of pre-set alert conditions 492 may be used here.
  • the database of pre-set alert/alarm conditions 492 may include descriptions of specific objects, such as blood or weapons. If an alert is generated at step 491, monitoring system 100 may send an alert notification 494 to the external systems or devices and user-controlled devices. If an alert is not generated at step 491, monitoring system 100 continues 493 to process data from the next frame.
  • Techniques for improvements of identification may be used to increase the accuracy of monitoring system 100. For example, such techniques may be used to track a person sitting or staying with their back to the camera, moving from one room to another room, changing clothes or hidden behind furniture.
  • Techniques for tracking a person may be augmented using audio data from microphones.
  • Techniques for tracking a person are important when an identified object disappears from one camera viewfinder and does not appear in another camera viewfinder, so that the system does not lose track of such an identified object.
  • while the location of a person is known, it is correlated with the signals from the microphones. Then monitoring system 100 may restore the coordinates of a person based only on the signals from the microphones.
  • the proposed monitoring system 100 is able to identify animals, for example, cats, dogs, etc. Identification of pets is needed for further video frame decomposition and analysis. For example, monitoring system 100 does not react to the motion of pets when the house is temporarily left by the owners. On the other hand, when a pet bites a child, monitoring system 100 reacts and calls an alert or contacts a designated adult in the closest vicinity to the child. Such a rule describing this dangerous situation may be added by the user into monitoring system 100.
  • FIG.10 illustrates an exemplary block diagram of the running monitoring system 100 in several modes according to an implementation.
  • monitoring system 100 is launched. Most of the time monitoring system 100 works in an energy saving mode.
  • monitoring system 100 receives signals from sensors 110, namely low-resolution, low-frequency audio and video data.
  • monitoring system 100 detects only a few categories of the objects and their coarse activity. If activity is not detected in step 750 within the frame, monitoring system 100 returns to the step 720.
  • if activity is detected in step 750, monitoring system 100 switches into a full processing mode 760 to detect in step 770 the type of activity precisely, as was described above with reference to FIG. 6 and FIG. 7. Monitoring system 100 may verify some conditions as in step 780, for example, whether every object has left the territory and N seconds have passed. If these conditions are not fulfilled, monitoring system 100 returns to step 720.
  • the described monitoring system 100 is able to recognize previously defined alarm signals, such as, for example, a gesture, a spoken phrase, or an audio signal, to increase the accuracy in preventing dangerous events.
  • the alarm signal is needed in the case when monitoring system 100 is not able to recognize the problem automatically, requiring a person to signal about the problem.
  • FIG. 11 illustrates an exemplary block diagram to detect the alarm signal and send an alarm notification to external services or user processor-controlled devices according to an implementation.
  • monitoring system 100 receives a description of types of activity detected earlier.
  • the flowchart in FIG.11 provides more detail.
  • recognition of "alarm on” and “alarm off secret gestures is performed.
  • secret gestures are included in monitoring system 100 during the calibration stage of monitoring system 100.
  • Secret gestures are intended to notify monitoring system 100 about an abnormal event.
  • Secret gestures may trigger the sending of an alarm notification ("alarm on" secret gestures) or stop monitoring system 100 from performing an action based on a false trigger ("alarm off" secret gestures).
  • monitoring system 100 may have a pre-set list of pre-trained visual and audial alarm on/off gestures or phrases, which are known to be particularly easy for monitoring system 100 to recognize. The user would then choose one or a few from this list.
  • monitoring system 100 is able to recognize secret audio signals.
  • monitoring system 100 may receive audio frames.
  • recognition of "alarm on” and “alarm off secret phrases is performed.
  • a short-term memory 850 of audio frames is used for this purpose.
  • At step 860 it is verified whether an "alarm on" manual signal was received and not cancelled within M seconds by receipt of an "alarm off" manual signal. If these conditions are not fulfilled, in step 880 monitoring system 100 continues monitoring by going to the next frame and keeping track of detected objects. If these conditions are fulfilled, monitoring system 100 sends an alert notification at step 870.
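  • A minimal sketch of the check at step 860, assuming a simple list of timestamped "alarm on"/"alarm off" events (the event format is an assumption, and the list must cover at least M seconds after the last "alarm on" signal for the decision to be final):

```python
# An "alarm on" signal not cancelled by an "alarm off" signal within M seconds
# leads to an alert (step 870); otherwise monitoring continues (step 880).
def should_send_alert(events, m_seconds=10):
    """events: list of (timestamp, kind) with kind in {"alarm_on", "alarm_off"}."""
    events = sorted(events)
    for i, (t_on, kind) in enumerate(events):
        if kind != "alarm_on":
            continue
        cancelled = any(
            k == "alarm_off" and t_on <= t <= t_on + m_seconds
            for t, k in events[i + 1:]
        )
        if not cancelled:
            return True   # step 870: send the alert notification
    return False          # step 880: go to the next frame, keep tracking
```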
  • the high-level interface providing information about the observed territory in an informative and compact way is now described. The proposed representation technique does not require the user to watch the video and hear the audio in order to understand what people do in an area under surveillance.
  • FIG.12 illustrates an exemplary user processor-controlled device 1200 operating in accordance with one aspect of the present invention.
  • a user processor-controlled device may be represented by any electronic device, including a laptop computer, a personal computer, a smartphone, a tablet computer, etc.
  • a user processor-controlled device may comprise a processor, memory storage, display 1201, keyboard, input device, etc.
  • the user processor-controlled device may include touch screen input units that may be represented by the touch-input sensitive area.
  • An exemplary graphical user interface with the mini map on the screen of a user processor-controlled device 1200 is represented.
  • processed data from sensors of a monitoring system may be represented on a dashboard, wherein users can track the position of an observed object.
  • data processed by computer processing unit 120 may be extracted from the memory storage 130 to be represented in visual form on the display of a user processor-controlled device 150.
  • FIG. 13 illustrates an exemplary graphical user interface of a dashboard 1300 with a mini map 405 and a timeline 1340 with a slider 1350 on the screen of a user processor-controlled device 150.
  • a dashboard 1300 comprises a customizable screen that compactly and visually displays vital information on different aspects of the system.
  • the dashboard 1300 collects and represents the most important information, more particularly information about all observed objects, their activity, and their allocation within the territory.
  • dashboard may comprise a mini map (or generated plan) 1305.
  • Mini map 1305 (or in other words, plan of the territory) may be generated by the security system 105 based on the identified geometrical parameters. Generating of the territory mini map may be performed in the semi-automatic way based on the data from the sensors 110.
  • information about the angle of a camera, obtained from the gyroscope, makes it possible to define the plan of the territory more precisely.
  • the system creates the description of the preliminary plan and the user may correct the results of the created plan at step 330 (FIG. 5) to increase the accuracy of identified parameters of the territory.
  • the system creates the description of the living space in an automatic way without operator assistance.
  • the GUI represents an exemplary dashboard 1300 with a generated mini map 1300 of the territory, wherein the information about observed objects is indicated at a specific moment in time.
  • Designed plan 405 may include all available information about the territory, namely the allocation of rooms, their geometrical sizes, windows, doors, stairs, source of light, the type of room, etc.
  • the generated plan depicts the graphical image of the territory and it is created to schematically represent to the user the received and processed data from the sensors of the monitoring system.
  • the interface may connect multiple mini-maps to each other.
  • Dashboard 1305 represents a customized screen on a user processor-controlled device 150, wherein Graphical User Interface (GUI) 160 provides a user with the possibility to change the settings of the application.
  • GUI 160 allows the user to define the view mode of a mini map.
  • view mode may refer herein to the way a mini map is represented to a user on a display.
  • the mini map of the territory may be displayed on the screen of user processor-controlled device 150 as a schematic picture or drawing of the territory from above.
  • One of the main features of the present technique is representing the information in the form of a schematic drawing or cartoon.
  • existing security, monitoring and control systems show all the information in full on the screen of a display of a user processor-controlled device, thereby increasing the cognitive load on the user and decreasing the speed of the tracking process.
  • the schematic graphical image representation of data from sensors allows the system to show only the most important details, which helps focus the attention of a user on the most important information and reduces the time of the tracking process.
  • the mini map of the territory may be displayed from any specific angle, creating for a user a sense of presence. The user may move within the mini map to obtain an overview of the situation on the observed territory.
  • the described invention enables the user to add new details on the dashboard that should be represented on the screen of a user processor-controlled device 150.
  • the degree of detailing may be changed in the settings of the application.
  • the mini map may include only the plan of the territory with an indication of the allocation of rooms, the titles of rooms (such as "children's bedroom" 1321, "children's bathroom" 1322, "living room" 423, "master bedroom" 1324, "master bathroom" 1325), etc.
  • the mini map may include the allocation of doors and windows, furniture or other objects, such as fireplace, swimming pool, baths, etc.
  • the mini map 1300 may comprise the information about the location of cameras 420, microphones and additional sensors.
  • the observed object may be indicated on the mini map 400.
  • the observed objects within the mini map may be represented by means of avatars 1320 and schematic figures.
  • An avatar may be chosen by default or changed by the user of the system. This type of object representation preserves the privacy of information about the personal life of a user of the given system.
  • a dashboard 1300 may comprise a timeline 1340 with a slider 1350 that enables the user to choose a moment of time on the timeline 1340.
  • the user can pick any time in the past to check the situation at that time.
  • By scrolling the slider 1350, a user initiates a dynamic change of the graphical representation of the information within the mini map.
  • the term "dynamic" means the change of the information (objects, their activity, their allocation) over time.
  • the user may review the information by scrolling the slider by means of touch input or cursor 1202 in a short period of time, which considerably speeds up information provision. In a conventional system the user is required to view all video data.
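  • As a simple illustration (not taken from the disclosure), moving slider 1350 to a time t can be served by looking up the stored high-level description entry closest to, and not after, t and redrawing the mini map from it; the entry layout below is an assumption.

```python
# Sketch: return the last recorded state at or before time t
# (or the earliest one if t precedes all entries).
from bisect import bisect_right

def state_at(entries, t):
    """entries: list of (timestamp, {object_id: (x, y, activity)}), sorted by time."""
    timestamps = [ts for ts, _ in entries]
    i = bisect_right(timestamps, t)
    return entries[max(i - 1, 0)][1]
```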
  • the current user interface may comprise a track 1350 to show the spatial location of the object in the past.
  • the term "track” may refer to the trace of the object that is identified based on geometrical parameters of the object.
  • Representation of track 460 within the mini map 1305 gives the visual information about the allocation and the movement of the objects during the time within the territory. Time period for track representation may be installed by default, for example during 30 minutes, or this may be changed by user.
  • the track may be represented on the mini map in different ways.
  • a track 1360 may be painted as a curved line with a color of varying intensity.
  • the intensity of coloring may show the speed of the movement of the object within the territory.
  • a high intensity of coloring may indicate a high velocity of movement.
  • a low intensity of coloring may indicate a low velocity of movement.
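  • Purely as an illustration, painting track 1360 with intensity proportional to speed could be done as in the sketch below; the coordinate/timestamp formats are assumptions and the actual drawing call is left to the GUI layer.

```python
# Faster movement between two consecutive recorded positions gives a more
# intensely colored segment (intensity normalized to 0..1).
def track_segments(points):
    """points: list of (timestamp, x, y). Yields ((x1, y1), (x2, y2), intensity)."""
    segments = []
    for (t1, x1, y1), (t2, x2, y2) in zip(points, points[1:]):
        speed = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5 / max(t2 - t1, 1e-6)
        segments.append(((x1, y1), (x2, y2), speed))
    v_max = max((s for *_, s in segments), default=0) or 1.0
    for p1, p2, speed in segments:
        yield p1, p2, speed / v_max   # higher speed -> more intense color
```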
  • the mini map 1305 may provide short and precise information about the detected activity of the objects. For example, GUI may give some notes about the activities, such as “Lilly is washing hands” 1332, “Mum is sitting on the sofa” 1334, and “Bob went out 10 min ago” 1333, etc.
  • the user may indicate, for example by touch, the object and receive the precise and short information about the specific object.
  • the system may provide the full information about the activity of Lilly during the day.
  • the textual report may include textual information about the main events. For example, "Lilly woke up at 9 am, went to the kitchen, had breakfast with the nurse, spent 2 hours in the study room, spent one hour near the swimming pool...", "Mom cooked breakfast at 8.30 am, cleaned the rooms, spent 2 hours with Lilly in the children's bedroom...", "Bob took medicine at 8 am, read a book for 3 hours, left the house at 1 pm...", etc.
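  • As an illustration only, such a report could be assembled from stored high-level events as sketched below; the (timestamp, person_id, fragment) tuple format is an assumption made for clarity.

```python
# Sketch of assembling the textual daily report quoted above.
def daily_report(events, person):
    """events: e.g. [("09:00", "Lilly", "woke up at 9 am"), ...]."""
    fragments = [frag for _, pid, frag in sorted(events) if pid == person]
    if not fragments:
        return f"No events recorded for {person}."
    return f"{person} " + ", ".join(fragments) + "."

# daily_report(events, "Lilly") could yield:
# "Lilly woke up at 9 am, went to the kitchen, had breakfast with the nurse."
```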
  • FIG. 14 illustrates an exemplary block diagram of a user interacting with the graphical interface of the system in accordance with one aspect of the invention.
  • the method of a user interacting with the graphical interface of the system 100 starts at step 1400 and proceeds to step 1470.
  • a user launches the system application window and may select the view mode.
  • a user may send a request to show the information about the observed territory on the mini map on a dashboard 1300 on the display of a user processor-controlled device.
  • the user may select the time period of data representation by means of the timeline 1340 with slider 1350.
  • the user may use input devices, such as an optical mouse or touch screen display, to indicate the time moment on the time line.
  • When launched, the application may respond to the input of a user and to data received from sensors of a security and monitoring system 105.
  • data processed by computer processing unit 120 may be extracted from the memory storage 130 to be represented in visual form on the display of a user processor-controlled device 150.
  • the identified objects, their positions and the identified activity are displayed at step 1450 on the mini map of a dashboard 1305. If not all information is represented, the user may request additional information at step 1420, for example by indicating the specific time moment with the slider 1350 on the timeline 1340. Otherwise, the user may monitor the territory or close the system application window.
  • FIG.15 provides a further example with a drawing of a home monitoring and security surveillance system integrated within a home environment.
  • the system includes a processing unit connected to a set of sensors and provides intelligent home monitoring with a function of notification and emergency calling.
  • FIG.16 shows an example of the main components of a sensor node with a single camera, called the CherryCam, which provides sensor and video/audio data streams via a Wi-Fi/Bluetooth connection to the processing unit inside the house and acts according to software requirements. It comprises 3 main components:
  • FIG.17 and 18 show examples of the CherryCam design.
  • the main PCB includes:
  • the camera PCB includes:
  • the removable battery (exceeding 3500 mAh) includes:
  • USB-C port; charge & battery fuel gauge;
  • The application searches for the unit and connects via Bluetooth.
  • Ethernet-cable is used - no input data required.
  • connection can be established directly via Bluetooth.
  • Camera detects unit via Bluetooth, connects, and obtains internal Wi-Fi parameters.
  • connection can be established indirectly using phone application.
  • b. Phone detects camera via Bluetooth and establishes connection.
  • c. Phone establishes connection with unit via Wi-Fi, obtains internal Wi-Fi parameters and transmits them to CherryCam via Bluetooth.
  • LED indicator stops blinking and starts to glow in white for approx. 20 seconds and then turns off.
  • a standby state or mode is available. The following events may activate the standby state:
  • processing unit request via Wi-Fi/Bluetooth.
  • the motion sensor, microphone set and Bluetooth are active. Other components are in sleep mode; "keep alive" signals are also sent to the processing unit via Wi-Fi/Bluetooth at regular intervals.
  • 'single, 480p and day mode' may have the following parameters:
  • o Motion sensor trigger in case any motion is detected within the sensors' range;
  • o Microphone set trigger in case any noise above a threshold is detected by the microphone set;
  • Processing unit trigger: activation is performed by a processing unit signal via Bluetooth/Wi-Fi.
  • o Operation: transferring video/audio streams and sensor data via Wi-Fi/Bluetooth to the processing unit, and a 4MP still-shot on the processing unit's request.
  • On a 4MP still-shot request, the camera is switched to the corresponding single mode (for approximately 1 second), takes the still-shot and returns to the current mode.
  • 'single, 480p and night mode' may have the following parameters:
  • o Motion sensor trigger in case any motion is detected within the sensors' range;
  • o Microphone set trigger in case any noise above a threshold is detected by the microphone set;
  • Processing unit trigger: activation is performed by a processing unit signal via Bluetooth/Wi-Fi.
  • Operation: transferring video/audio streams and sensor data via Wi-Fi/Bluetooth to the processing unit.


Abstract

A monitoring system includes sensors that monitor activity within a designated territory. The sensors include visual sensors that make video recordings. A local processing system located within or proximate to the designated territory receives signals from the sensors. The local processing system processes and analyzes the signals from the sensors to produce messages that describe activity within the designated territory as monitored by the sensors. The messages do not include audio, visual or other direct identifying information that directly reveal identity of persons within the designated territory. A monitoring station outside the designated territory receives the messages produced by the local processing system and makes the messages available to external observers.

Description

COMPUTER VISION BASED MONITORING SYSTEM AND METHOD
BACKGROUND OF THE INVENTION
1. Field of the Invention
The field of the invention relates to electronic systems for monitoring, security and control. One implementation of the present invention relates to a method and a computer-based system for representing data from computer vision systems.
A portion of the disclosure of this patent document contains material, which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
2. Description of the Prior Art
Security, monitoring and control systems equipped with cameras and additional sensors can be used to detect various threats such as intrusions, fire, smoke, flood, and so on. Motion detection is also often used to detect intruders in a vacated or occupied building or home environment. Detection of an intruder may lead to an audio or silent alarm and contact of security personnel.
Due to the fact that computer-based technologies in the field of security have rapidly developed in recent years, a huge variety of security, monitoring and control systems equipped with cameras and additional sensors exist. Conventional security, monitoring and control systems may comprise a user interface unit that allows a user to view information (e.g. video and audio signals) from a set of cameras and additional sensors on a display of a user processor-controlled device. Due to the fact that the common approach is to scan the video and audio information, current solutions often require the user to watch the video and hear the audio information in order to understand what people do in the territory under surveillance and which screen to use during movements of a surveillance object.
As a result, a user of such a system may have to continuously view all data from cameras to identify something suspicious and to have an opportunity to give a response to the monitoring system based on the viewed information by means of a GUI. Moreover, the video streams from different cameras may be shown on the display simultaneously as a grid of pictures. Consequently, a user often has to shift his/her gaze from one picture to another picture to catch all the information and to receive an overview of the situation in the observed territory.
There is a need for a compact and informative high-level interface that would not require the user to carefully watch a video in order to understand what people do in an area under surveillance.
The present invention addresses the above vulnerabilities and also other problems not described above.
SUMMARY OF THE INVENTION
A first aspect of the invention is a monitoring system comprising:
sensors that monitor activity within a designated territory, the sensors including visual or image sensors, for example sensors that make video recordings;
a local processing system located within or proximate to the designated territory, the local processing system receiving signals from the sensors, the local processing system processing and analyzing the signals from the sensors to produce messages that describe activity within the designated territory as monitored by the sensors, the messages not including audio, visual or other direct identifying information that directly reveal identity of persons within the designated territory; and
a monitoring station outside the designated territory, the monitoring station receiving the messages produced by the local processing system and making the messages available to external observers.
Optional features include any of the following.
• The messages describe actions performed by a person or animal within the designated territory sufficiently to allow an external observer to determine when external intervention is required.
• the monitoring station produces an alarm when the messages describe actions performed by a person or animal that indicate an external intervention may be required.
• the sensors are selected to monitor children, disabled, sick or old persons.
• the monitoring station forwards to the local processing system a query from an external observer, the local processor responds with a message, wherein contents of the message are based on activity within the designated territory detected by processing and analyzing the signals from the sensors.
• the monitoring station produces an alert when the messages indicate a predetermined gesture or predetermined phrase is detected by the sensors.
  • the monitoring station cancels the alert when the messages indicate a predetermined cancelation gesture or predetermined cancelation phrase is detected by the sensors.
  • the local processing system internally identifies bodies of people and animals within the designated territory based on video data using at least one of the following techniques:
face metric identification that identifies and records facial features;
clothes metric identification that detects and records types of clothes on a body;
body metric identification that detects and records body shapes;
activity metric identification that detects and records current activity of a body;
hair-cut metric identification that detects and records hair-cut style;
tool metric identification that detects and records objects held or in near vicinity to a body.
  • the local processing system recognizes changes made to bodies within the designated territory and updates affected metric identifications.
  • the local processing system analyzes kinetic motion of identified bodies to ensure movements are consistent with the laws of physics to detect and resolve inconsistencies when internally identifying bodies.
  • the local processing system additionally uses audio data to internally identify bodies of people and animals within the designated territory, the audio data including timbre metrics and relative sound amplitudes.
  • the local processing system internally identifies bodies of people and animals within the designated territory based on video data using more than one of the following techniques:
face metric identification that identifies and records facial features;
clothes metric identification that detects and records types of clothes on a body;
body metric identification that detects and records body shapes;
activity metric identification that detects and records current activity of a body;
hair-cut metric identification that detects and records hair-cut style;
tool metric identification that detects and records objects held or in near vicinity to a body.
  • the designated territory includes an area in at least one of the following:
a school,
a prison,
a hospital,
a shopping mall,
a street,
an office,
a parking lot.
• the designated territory comprises a plurality of rooms, the local processing system constructing a layout plan of the territory using video data from the sensors.
• the local processing system additionally uses audio data to construct the layout plan.
• when constructing the layout plan of the territory, the local processing system utilizes at least one of the following:
estimates of room geometry based on detecting horizontal and vertical lines;
detection of major objects within field of view of video sensors;
neural network algorithms to recognize a room type based on a database of known room types;
measurements based on requested configuration activities of a user within a room;
configuration audio signals detected by audio sensors.
Another aspect is a method for monitoring activity within a designated territory, comprising:
using sensors to make video recordings;
forwarding signals from the sensors to a local processing system located within or proximate to the designated territory,
using the local processing system to process and analyze the signals from the sensors to produce messages that describe activity within the designated territory as monitored by the sensors, where the messages do not include audio, visual or other direct identifying information that directly reveal identity of persons within the designated territory;
sending the messages to a monitoring station outside the designated territory, the monitoring station making the messages available to external observers.
Optional steps include any of the following.
  • producing an alert by the monitoring station when the messages indicate a predetermined gesture or predetermined phrase is detected by the sensors.
  • internally identifying bodies of people and animals within the designated territory based on video data using one or more of the following techniques:
face metric identification that identifies and records facial features,
clothes metric identification that detects and records types of clothes on a body,
body metric identification that detects and records body shapes,
activity metric identification that detects and records current activity of a body,
hair-cut metric identification that detects and records hair-cut style, and
tool metric identification that detects and records objects held or in near vicinity to a body.
  • recording audio data by the sensors, the audio data being used to internally identify bodies of people and animals within the designated territory, the audio data including timbre metrics and relative sound amplitudes.
BRIEF DESCRIPTION OF THE FIGURES
Aspects of the invention will now be described, by way of example(s), with reference to the following Figures, which each show features of the invention:
FIG. 1 illustrates a monitoring system.
FIG.2 illustrates a sensor node activating diagram.
FIG.3 illustrates an exemplary sensor.
FIG.4 illustrates an exemplary computer processing unit.
FIG. 5 illustrates an exemplary block diagram of the system calibration.
FIG. 6 illustrates an exemplary block diagram representing the monitoring and alarm condition detection.
FIG. 7 illustrates an exemplary block diagram representing the monitoring and alarm condition detection.
FIG. 8 illustrates an exemplary screen of a user processor-controlled electronic device that contains a frame with identified objects in a viewfinder.
FIG. 9 illustrates an exemplary screen of user processor-controlled electronic device that contains a frame with identified object in a viewfinder.
FIG. 10 illustrates an exemplary block diagram of a system operation in different modes.
FIG. 11 illustrates an exemplary block diagram of a system operation in case of a secret signal detection.
FIG. 12 illustrates an exemplary user processor-controlled device with the user interface of a dashboard operating on the screen.
FIG. 13 illustrates an exemplary graphical user interface of a dashboard with the mini map and a timeline with a slider on the screen of a user processor-controlled device.
FIG. 14 illustrates an exemplary block diagram of a user interaction with the system.
FIG. 15 illustrates a further example of a home monitoring and security system.
FIG. 16 illustrates the main components of a sensor node including a single camera.
FIG. 17 illustrates an exemplary sensor node.
FIG. 18 illustrates an exemplary sensor node.
DETAILED DESCRIPTION
Example aspects are described herein in the context of a monitoring system and a method for object identification and object location detection, as well as recognition of the type of activity and the time of activity of the detected object. In particular, the system is able to collect data from a plurality of sensors, process the data and send alerts based on the processed data to prevent or react to emergency situations.
For example, the described system may react automatically in situations such as an appearance of an unknown person in the house during night hours, an appearance of an unknown person in the master bedroom during day hours, a broken window, a failure to shut off water in a bathroom, a baby crawling into a fireplace, a baby approaching a swimming pool with no adults in the vicinity, a person who has fallen down, a person who is lying on a floor or in any other unusual location for a long time, a person crying for help, a child not returning from school at a normal time, and so on.
Those of ordinary skill in the art will realize that the following description is illustrative only and is not intended to be in any way limiting. Other aspects will readily suggest themselves to those skilled in the art having the benefit of this disclosure. Reference will now be made in detail to implementations of the example aspects as illustrated in the accompanying drawings. The same reference indicators will be used to the extent possible throughout the drawings and the following description to refer to the same or like items. Reference in this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments. In the context of the present invention, the terms "a" and "an" mean "at least one".
The term "frame" herein may refer to a collection of video data, audio data and other available sensory data, captured from all the cameras and sensors of the monitoring system in one particular time moment, for example, in one second.
The term "object" herein may refer to an object observed by the monitoring system. The term "object" may refer to an animate object, for example, a person (in other words, an individual, a human), a pet, etc. Objects may be "known" - which means they are included into a system internal database of known people and pets, in other words with ID and characteristics that are stored in the memory of the system, or "unknown"— which means that the object is not included into the database, or characteristics that are new for the monitoring system.
The term "territory" herein may refer to territory to be observed by the monitoring system. For example, the term "territory" may refer to a living quarter, to an apartment, to a building of a hospital, a school, an old people's home, a private house, to an adjoining territory, etc. It may be a physical and logical location.
The term "user" herein may refer to any person or group of people who may interact with the monitoring system. For example, a user may be an owner of a house, or a user may be a medical officer who takes care of elderly persons. The term "zone" herein may refer at least to the part of the territory, for example, to a swimming pool, a fireplace, a room, etc.
The term "forbidden" zone herein may refer to a specific part of the territory to be observed by the monitoring system that is not allowed for a specific object. The monitoring system may react if it detects crossing of the border of the forbidden zone by an object.
The term "allowed" zone herein may refer to a specific part of the territory observed by the monitoring system that is allowed for a specific object. The term "abnormal event" herein may refer to pre-defined types of activities within a frame to which the monitoring system may react. For example, pre-defined types of activities may include motions related to the intrusions (such as breaking into a window, fighting, presenting weapons, etc.). Pre-defined types of activities also may relate to activities that may be typical for a specific medical condition (for example: falling down, agonizing, visible blood or injury, furniture movements, etc.) and so on.
The term "alarm condition" herein may refer to a set of rules, describing a situation that may be considered as dangerous or unsafe for normal wellbeing.
In one embodiment, some of the rules may be predefined by the monitoring system. In another embodiment, the owner or other users may create and/ or adjust rules. A defined set of rules depends on the scenarios or application of the monitoring system.
There is a plurality of scenarios for how the monitoring system may be exploited. One of the illustrative examples is equipping a private house with the given monitoring system to protect people from a dangerous situation, to safeguard lives by detecting abnormal events and to send alert notifications to external services, such as police, or a user processor-controlled electronic device.
For example, the monitoring system may protect from physical break-ins, protect people from medical problems, react if a baby is approaching a swimming pool with no adults in the vicinity, and react when someone collapses or has a stroke.
Another illustrative example is equipping a hospital or elderly person home. The monitoring system monitors a person of interest in a room to react and call an alert or medical officer in case the monitored person has a stroke, loses consciousness, falls, faints and so on.
The monitored person may also provide an alarm signal to trigger an alarm notification. The term "alarm signal" herein may refer to a specific gesture, or a specific sound signal, which may turn the monitoring system into the alarm mode and initiate the procedure of sending the alarm notification to the external services or devices. The alarm signal is needed in the case when the monitoring system is not able to recognize the problem in an automatic way, then the monitored person may signal about the problem.
FIG.l illustrates an exemplary monitoring system 100. Those of ordinary skills in the art will appreciate that hardware and software units depicted in FIG. 1 may vary without limitation of a scope of the disclosure.
In one embodiment, a monitoring system 100 includes a set of sensors 110 which may include at least video cameras, infrared lighting for night vision, and a plurality of microphones. Moreover, the described monitoring system may have a gyroscope, a motion detector, a smoke sensor, a thermographic camera (heat detector), a detector of intrusion (through the window or doors), a thermometer, a smell detector and other sources of information. Sensors 110 may be allocated within the observed territory in such a way that they capture signals including all information related to the activity within the territory. For example, sensors 110 of different types produce a dense representation of signals and augment data received from other sensors to restore a complete frame. The list of detectors is illustrative.
Additionally, a motion sensor may be used to record the camera positioning and adjust in case of replacement. An accelerometer/magnetometer may be used to adjust the camera according to the compass points. A pressure sensor may be used to confirm the camera positioning height (the same mechanism phones use to determine their positioning). More than one set of lenses (such as visible light and infra-red) may be used to create a depth picture along with creating a stereoscopic image.
There are several camera modes, and the set of sensors (or sensor node) can utilize a state transition diagram as shown in FIG.2. Examples of camera modes are, but not limited to: idle mode (very low power consumption to conserve energy), low resolution mode (for example 360p), and maximum resolution mode (highest power consumption). When motion is detected by one or more sensors, the camera mode may switch from one mode to another mode. A low quality video mode may also preserve energy and a node's full potential is used only when required. The main camera mode provides the most battery life, since the sensor array uses the infra-red filtered lens during well-lit periods in a low quality mode. For example, a low quality mode may mean the following:
  • 5-30 frames per second (changeable by the central computer unit based on the recognizable events happening);
• Low resolution (360p);
• Single lens - no stereo picture;
• No additional sensors active.
By default, a sensor node is in the idle mode, meaning no video recording or feed takes place. It can be activated by movement within the active area.
Once a moving object triggers the motion sensor, it activates a low quality video feed. If the central computing unit requests a high quality video feed, the sensor node switches to the high quality mode.
The high quality mode is triggered by the computer processing unit 120. It can utilize the full capability of a single lens array to output FULL HD 60 FPS video or UHD video feed with a lower number of frames per second. This mode is automatically used for facial recognition, can be requested by the user or required by the central computing unit in case of an emergency situation.
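As a rough illustration only (not the actual node firmware), the mode switching just described could be captured by a small transition table; the state and event names below are assumptions mirroring the diagram of FIG. 2.

```python
# Sketch of the camera-mode transitions: idle by default, low quality video on
# motion, high quality only when the central computing unit requests it.
IDLE, LOW_QUALITY, HIGH_QUALITY = "idle", "low_quality_360p", "high_quality"

TRANSITIONS = {
    (IDLE, "motion_detected"): LOW_QUALITY,
    (LOW_QUALITY, "hub_requests_high_quality"): HIGH_QUALITY,
    (HIGH_QUALITY, "hub_releases_high_quality"): LOW_QUALITY,
    (LOW_QUALITY, "no_motion_timeout"): IDLE,
}

def next_mode(current, event):
    """Return the new camera mode, or stay in the current one."""
    return TRANSITIONS.get((current, event), current)
```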
To ensure correct depth of field, a sensor node can utilize both lens arrays to create a stereoscopic picture and compile it into a single image before the ISP (image signal processor) utilizing an internal FPGA. A night mode utilizes the lens array with an infrared filter along with the infrared lighting.
FIG.3 illustrates an exemplary sensor, and in particular, an exemplary video camera 200 in accordance with an implementation. The video camera 200 includes, for example, a speaker 201, an infrared night light 202, a lens 203, a microphone 204, a pass-through adapter 205, a Wi-Fi unit 206, an ARM controller 207 and a backup battery 208.
Referring to FIG.l, computer processing unit 120 interacts with a set of sensors 110 administering them by a control loop. For example, computer processing unit 120 is a local processing system located within or proximate to a designated territory in which sensors 110 monitor activity. For example, computer processing unit 120 receives signals from sensors 110, and processes and analyzes the signals to produce messages that describe activity within the designated territory as monitored by sensors 110. The messages do not include audio, visual or other types of data that directly reveal the identity of persons within the designated territory. For example, the designated territory can be an area in a school, a prison, hospital, a shopping mall, a street, an office, a parking lot or another type of area. The processed data received from the set of sensors 110 may be stored in memory 130.
Machine learning techniques may also be used to provide the ability to determine who is home and if that person is a member of the household through training the computation unit using pictures and video.
The set of sensors are connected to a computer processing unit hub via a Wi-Fi connection. For security reasons the footage used for machine learning purposes does not leave the household when a central hub is present. However a system can be operational without the central hub. The central hub may be a standard PC with a small form factor GPU running the machine learning algorithms.
The computer processing unit preferably has the following interfaces:
• Two Wi-Fi connectivity modules— one of the modules is used to connect to the Internet, while the other one to create a separate channel for the camera node's traffic;
• Ethernet;
• Bluetooth™ — direct connection to a user's phone for the purpose of adjusting settings and controlling the system and as a reserve communication channel.
In case the computer processing unit is not installed, the system may operate through a cloud-provided server-based system but using more of the broadband connection.
The computer processing unit 120 includes a charging station. The battery remains at a 50% charge while installed in the charging station. If a sensor node sends a signal to the computer processing unit 120 indicating low power levels, the battery is charged up to 100%, prolonging the battery life and optimizing the battery life span. A separate LED indicator situated on the battery body shows the charge level only when removed from the sensor node. For example, the computer processing unit 120 internally identifies bodies of people and animals within the designated territory based on video data using one or a combination of: face metric identification that identifies and records facial features, clothes metric identification that detects and records types of clothes on a body, body metric identification that detects and records body shapes, activity metric identification that detects and records current activity of a body, hair-cut metric identification that detects and records hair-cut style, tool metric identification that detects and records objects held or in near vicinity to a body, or another technique.
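As an illustration of how such metric identifications might be combined, the following sketch fuses per-metric similarity scores into a single re-identification decision; the weights, the threshold and the similarity functions are assumptions introduced here and are not taken from the disclosure.

```python
# Sketch of fusing face, clothes, body, activity, hair-cut and tool metrics
# into one re-identification decision.
METRIC_WEIGHTS = {
    "face": 0.45, "body": 0.20, "clothes": 0.15,
    "haircut": 0.10, "activity": 0.05, "tool": 0.05,
}

def reid_score(candidate, stored, similarity):
    """similarity(name, a, b) -> value in [0, 1] for one metric type."""
    score = weight_sum = 0.0
    for name, weight in METRIC_WEIGHTS.items():
        if name in candidate and name in stored:
            score += weight * similarity(name, candidate[name], stored[name])
            weight_sum += weight
    return score / weight_sum if weight_sum else 0.0

def identify(candidate, known_people, similarity, threshold=0.7):
    """known_people: {person_id: stored_metrics}. Returns best ID or "unknown"."""
    scored = [(reid_score(candidate, metrics, similarity), pid)
              for pid, metrics in known_people.items()]
    if not scored:
        return "unknown"
    best_score, best_id = max(scored)
    return best_id if best_score >= threshold else "unknown"
```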
For example, a list of connected devices can be used to distinctively identify people within the household by studying the list of devices present in the presence of a certain amount of people and the changes to that list while people are moving in and out of the operational zone.
For example, computer processing unit 120 recognizes changes made to bodies within the designated territory and updates affected metric identifications.
For example, computer processing unit 120 analyzes the kinetic motion of identified bodies to ensure movements are consistent with the laws of physics, to detect and resolve inconsistencies when internally identifying bodies. For example, computer processing unit 120 additionally uses audio data to internally identify bodies of people and animals within the designated territory, where the audio data includes timbre metrics and relative sound amplitudes.
FIG.4 provides an example of the computer processing unit 120 in accordance with an implementation. The processing unit 120 may include, for example, a Wi-Fi router 231, a backup battery 232, a backup mobile internet module 233, a CPU board 234 and a GPU board 235. In one embodiment, the computer processing unit 120 may be included in a local server 140. The term "local" means that local server is allocated within the territory observed by monitoring system 100. Described allocation of the processing unit and a local server doesn't overload internet channels; increasing the effectiveness of monitoring system 100. Privacy, data protection and data security are protected, for example, by data encryption. Additionally, privacy, data protection and data security are enhanced by the independence of monitoring system 100 from the Internet. Such independence from the Internet makes monitoring system 100 more confidential and reliable for users. Monitoring system 100 does not send any private source data to external storage resources such as a cloud storage system or some other external storage systems located outside the territory.
Signals from the set of sensors 110 are transferred to the central computer processing unit 120 where the signals are processed and analysed. Computer processing unit 120 processes signals from sensors 110 to create low-level description of source incoming data. The low-level video description may include recognized objects - animate and inanimate - on different levels of a hierarchy. For example, the low-level video description may include descriptions of a body, body parts, skeleton joint coordinates, body shape measurements and samples of clothes, skin colors and textures on different parts of the body. For example, the low-level video description may also include descriptions of a face, coordinates of facial features, a type of the haircut, external objects (such as earrings), and samples of color and texture on lips, ears, hairs, eyes and teeth. For example, the low-level video description may also include descriptions of hand, coordinates of finger joints, types of the finger and toe nails, samples of the color and texture of the fingers, the nails and the palms. A low-level audio description may include, for example, recognized sounds on different levels of hierarchy. For example, low level audio description may include a single sound or morpheme, its spectrum, its relative amplitude on different microphones, or a longer duration audio sound which is recognized as known, such as, for example, a phrase to set an alarm on or off, or a phrase to enable/ disable surveillance. Based on the low-level description 121 of data, high-level description 122 may be generated. The processed data received from the sensors 110 is stored in memory 130 in an accurate high-level description file 135, for example, in a textual log-file. In one embodiment, the information in the high-level description file 135 may contain the following information: "1:53pm 13sec: Person ID = "John", Person Location = (x,y); Person activity = 'reading'". For an example of activity detection used to create low-level description algorithms (See Serena Yeung, et al. "Learning of Action Detection from Frame Glimpses in Videos", arXiv:1511.06984, 2015.)
The high-level description 135 may be extracted from the memory storage and processed at any time. For example, the high-level description file 135 may be customized and transferred into the script of any other external system. For example, the information from the file 135 may be processed from text to speech.
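Purely as an illustration of the textual log format quoted above, entries of the high-level description file 135 could be written as follows; the helper names and the exact serialization are assumptions.

```python
# Sketch of appending entries to the high-level description file 135.
def format_entry(timestamp, person_id, location, activity):
    x, y = location
    return (f'{timestamp}: Person ID = "{person_id}", '
            f"Person Location = ({x},{y}); Person activity = '{activity}'")

def append_entry(path, timestamp, person_id, location, activity):
    with open(path, "a", encoding="utf-8") as log:
        log.write(format_entry(timestamp, person_id, location, activity) + "\n")

# append_entry("high_level.log", "1:53pm 13sec", "John", (3, 7), "reading")
```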
Computer processing unit 120 uses the source signal (video signal, audio signal, data from smoke detector, etc.) from sensors 110 to recognize and differentiate objects within a frame, and to identify the type and spatial position of the detected objects. The results produced by computer processing unit 120 are stored in an accurate high-level description file 135. Computer processing unit 120 may produce people and pet identification and re-identification units based on face metrics, body metrics, clothes metrics, hair-cut metrics, pose and limb coordinate estimations, detection of people and pets spatial position, detection of activity, detection of controlling visual gestures and audial signals, detections of specific medical conditions, detection of the "scenario" of people and pets interaction with the surrounding objects, and so on.
For example, spatial localization based on audio signals uses relative amplitudes. That is, as an object moves towards and away from a microphone, the detected amplitude of an audio signal based on sound from the object varies based on location. The amplitude will vary dependent upon which microphone generates the audio signal based on the sound. When computer processing unit 120 receives audio signal samples pertaining to the spatial position of an object and the corresponding microphone signal amplitudes collected during a calibration stage and further updated during system operation, computer processing unit 120 can statistically correlate the distribution of the audio signal amplitudes at all the microphones on the territory with the known spatial locations of the object. The result of this statistical correlation is a mapping of the space of audial amplitudes to the plan of the territory. So, when an object is for a period not seen by any of the video cameras, monitoring system 100 may restore the object's spatial location by using the microphone signals and the mapping performed during the calibration stage.
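A minimal sketch of this amplitude-based localization, assuming a simple nearest-neighbour lookup over calibration samples in place of the statistical correlation described above:

```python
# During calibration, vectors of relative microphone amplitudes are stored
# together with the known position of the object; later a position is
# restored by finding the closest stored amplitude vector.
import math

def restore_location(amplitudes, calibration_samples):
    """amplitudes: tuple of relative amplitudes, one value per microphone.
    calibration_samples: list of (amplitude_tuple, (x, y)) gathered earlier."""
    def distance(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    _, location = min(calibration_samples, key=lambda s: distance(s[0], amplitudes))
    return location
```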
Computer processing unit 120 interacts with a memory storage unit 130 to store the processed information, for example, in the form of a high-level description 135 and to extract stored information for further detailed analysis. Memory storage unit 130 may include, for example, a database of known persons 474, a short-term memory unit of the previous landmark positions 425, a short-term memory of faces, bodies and clothes metrics 435, a short-term memory of geometrical positions and personal identifications 485, a database of pre-set alert conditions 492, and a short-term memory of audio frames 850. To increase the reliability of monitoring system 100, the source unprocessed signals from the sensors may be stored in the memory just for a short period of time.
Monitoring system 100 may include an Application Programming Interface (API) module 155 that provides opportunity for interaction of a "user" with monitoring system 100. Applications on user-processor controlled electronic devices 180 by means of a graphical user interface (GUI) 160 may access the server 140 through the Application Programming Interface (API) 155 - a software interface. Interaction of a user with monitoring system 100 may be performed for example, through the mobile application or through the user interface of a desktop program 180. By means of a graphical user interface of user-processor controlled electronic devices 180, a user may send the request to monitoring system 100 and monitoring system 100 may perform actions in accordance with the request and send a response to the user based on the results of data analysis received from sensors 110 and processed by the processing unit 120. One embodiment, for example may be represented by a client— server system or a request— response model.
Graphical User Interface (GUI) 160 on a user-processor controlled device 180 provides a user with the possibility to receive alerts from monitoring system 100, to view the collected, stored and analysed data from sensors, to calibrate monitoring system 100, to define rules that describe the dangerous situation and to add new objects to the database of monitoring system 100.
In one embodiment, signals from the set of sensors 110 may be represented in real time via the display of a user-controlled device (mobile phone, laptop, smartphone or personal computer, etc.) without any modifications. In other words, monitoring system 100 may provide information pertaining to what is happening at a home to a user. For example, GUI 160 of the system on the user controlled device may display all captured video and images from all cameras and play the recorded audio tracks. The user may give a response to monitoring system 100 based on the viewed information by means of GUI 160. For example, the user may request monitoring system 100 to provide more detailed information from sensors 110, to give a report based on the analysis of data from sensors 110, to call an alert or to call off an alert, to call the police, ambulance or emergency service, or so on. In other words, monitoring system 100 gives access to the user to view information, to obtain access to stored data in memory storage 130 of server 140 and to control monitoring system 100.
In one embodiment, the user may be provided with the results of the data processed by unit 120. The high-level description 135 of one or several frames may be pre-processed to display the information on a user-controlled device 180. The information may be represented in several ways, for instance in the form of a textual report, in the form of a map, in the form of a graphical reproduction or cartoons, or in the form of a voice that describes main events during the requested time period. For example, the textual report may include textual information about the main events. For example, the textual report may say: "John woke up at 9 am, went to the kitchen, had breakfast with the nurse, spent 2 hours in the study room, spent one hour near the swimming pool..."; "A nurse cooked breakfast at 8.30 am, cleaned the rooms, spent 2 hours with John in room B..."; "Patient Mary G. took medicine at 8 am, read a book for 3 hours, left the room at 1 pm..."; and so on.
A user's desktop or mobile applications 160 in processor-controlled user devices 180 exchanges information with monitoring system 100. Electronic communications between computer systems generally includes packets of information, referred to as datagrams, transferred from processor-controlled user devices 180 to a local server and from a local server to processor-controlled user devices 180. The elements of the system exchange the information via Wi-Fi.
Other possible channels of information transfer may also be used. In comparison to other known electronic communications, the current interaction of a user/client with monitoring system 100 involves adding and storing information within monitoring system 100 in such way that this information is stored in the memory storage of the local server 140 that is located inside the territory where monitoring system 100 is installed. This ensures privacy for the information about the personal life of a user of the given system.
Monitoring system 100 is able to interact with external systems or services, for example, with smart house systems, with police systems, with medical services, and so on. For example, the described system for monitoring, security and control 100 sends alert notifications to the external services and systems 170 and user processor-controlled devices 180 based on the analysis of the data from sensors 110. The alerts may be represented to the user with the help of a GUI on the display, for example. For example, external services and systems 170 include a monitoring station that receives messages produced by computer processing unit 120 and makes the messages available to external observers.
For example, the messages describe actions performed by a person or animal within the designated territory sufficiently to allow an external observer to determine when external intervention is required. For example, external devices or services 170 includes a monitoring station that produces an alarm when the messages from computer processing unit 120 describe actions performed by a person or animal that indicate an external intervention may be required. For example, sensors 110 are selected to monitor children, disabled, sick or old persons. For example, the monitoring station forwards to computer processing unit 120 a query from an external observer and computer processing unit 120 responds with a message, wherein contents of the message are based on activity within the designated territory detected by processing and analysing the signals from sensors 110. In case of any pre-determined event occurring, the owner can be notified via the mobile device. Notifications can be accompanied by the video or pictures provided by the cameras according to settings defined by the user. To prevent false positive results, a user can view the video feed from the camera. After each situation, the system remembers the result to prevent future false positives. Calibration is an initial step of running of monitoring system 100 within an unknown territory. During the calibration, monitoring system 100 identifies basic parameters of the territory, identifies the type of the territory, creates the map of the territory, and so on. For example, when the designated territory includes a plurality of rooms, the local processing system constructs a layout plan of the territory using video data from the sensors. For example, the local processing system additionally also uses audio data to construct the layout plan. For example, estimates of room geometry are based on detecting horizontal and vertical lines.
For example, major objects within the field of view of video sensors are detected and used to construct the layout plan. For example, neural network algorithms are used to recognize a room type based on a database of known room types. For example, measurements can be based on requested configuration activities of a user within a room.
FIG.5 illustrates an exemplary block diagram of a calibration method of monitoring system 100 in accordance with one or more aspects. The method of the calibration of monitoring system 100 starts at step 300 and proceeds to step 370. At step 310, source signals from a set of sensors 110 are synchronized.
One of the key characteristics of monitoring system 100 is the synchronization of all signals with time. Time is a source of information which is used as additional coordinates for further analysis. At step 320, monitoring system 100 determines the geometrical parameters of the territory. Particularly, monitoring system 100 may identify allocation of rooms inside the observed building, sizes of the rooms, their functions, the furniture inside the rooms, the parameters of adjoining territory, for example, garden, garage, playing ground etc. Monitoring system 100 may also determine an allocation of windows and entries, source of light, fireplaces, stairs, connection with other rooms, etc.
The type and function of the territory may influence how monitoring system 100 will process the data from sensors and which types of abnormal events are detected within the frame. For example, if the territory represents a private home, the scenarios will be different from the scenarios of monitoring system 100 in a hospital for elderly persons. The type and function of the territory may be indicated by the user at step 330.
At step 320, monitoring system 100 may generate the plan of the territory based on the identified geometrical parameters. The estimation of room geometry may be based on detecting horizontal and vertical lines, angle positions of the gyroscopes, detection of major objects within the field of view of video sensors, and neural network algorithms that recognize a room type based on a database of known room types. Alternatively, a plan of the territory may be generated in other possible ways without limiting the scope of the present invention. For example, a plan of the territory may be generated by a user.
If a sensor node is removed from a previously designated position, it activates the sensor array and re-maps the surrounding area to redefine its position in space and relative to other sensor nodes while being moved.
If a new sensor node is added to the monitoring system, it configures itself automatically by adding its unique ID to the database of sensor nodes included in the system. It is automatically routed BLE and Wi-Fi access data by the central computing unit.
Sensor nodes provided with the system are configured in the same way; the only difference is that sensor nodes sold together are recorded in an online database and transferred to the computing unit on configuration.
In one of the embodiments, a user may be requested during calibration to walk from one video camera to another, and from one microphone to another, allowing monitoring system 100 to create the plan of the territory based on the measurements as the user moves along a known trajectory. Monitoring system 100 also, for example, correlates the relative sound amplitudes of all the microphones with the spatial location of the object on a territory plan. A designed plan may include all available information about the territory, including, for example, the allocation of rooms, their geometrical sizes, windows, doors, stairs, sources of light, the type of room, etc. In other words, the generated plan depicts a graphical image of the territory and is created to schematically represent to a user the received and processed data from the sensors of monitoring system 100. Generation of a territory map may be performed in a semi-automatic way based on the data from the sensors 110. In one embodiment, monitoring system 100 creates the description of the preliminary plan and the user may correct the results of the created plan at step 330 to increase the accuracy of the identified parameters of the territory. Step 330 is optional and may be skipped. In one embodiment, monitoring system 100 creates the description of the living space in an automatic way without operator assistance.
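The correlation of relative microphone amplitudes with a location on the territory plan may, for example, be approximated by an amplitude-weighted centroid, as in the following illustrative sketch; the fall-off model is an assumption, and a real deployment would rely on the walk-through calibration described above.

```python
def estimate_position(mic_positions, amplitudes):
    """Rough (x, y) position on the plan from relative microphone amplitudes.

    mic_positions: {mic_id: (x, y)} on the territory plan.
    amplitudes:    {mic_id: current RMS level} for the same microphones.
    Uses an amplitude-weighted centroid, a simplistic model assuming louder
    microphones are closer to the sound source.
    """
    total = sum(amplitudes.values())
    if total <= 0:
        return None
    x = sum(mic_positions[m][0] * a for m, a in amplitudes.items()) / total
    y = sum(mic_positions[m][1] * a for m, a in amplitudes.items()) / total
    return (x, y)
```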
The plan of the territory may be conventionally divided into "forbidden" and "allowed" zones. In one embodiment, the "forbidden" and "allowed" zones may be common for all observed objects. In another embodiment, "allowed" and "forbidden" zones may be specifically indicated for every specific object. For example, a child may have "forbidden" zones such as a swimming pool, kitchen, stairs, etc. A nurse and a housemaid may have different "forbidden" zones such as, for example, a private study room or a bedroom.
At step 340, monitoring system 100 may automatically define forbidden zones for specific objects. For example, a fireplace and a pool may be defined as forbidden for a child. The operator may also indicate which zones are forbidden for which objects. For instance, the user may tag on the plan "forbidden" and "allowed" zones for each of the observed objects. These zones may be further changed or adjusted. The purpose of dividing the space into "allowed" and "forbidden" zones is control and safety. In one embodiment, monitoring system 100 may react if an alarm condition is fulfilled, for example, when monitoring system 100 recognizes an "abnormal" event such as a child approaching a swimming pool.
Moreover, some user-defined rules may augment the procedure of verifying the "abnormal" event of a child approaching a swimming pool. For example, monitoring system 100 may check whether there are other objects (grownups) in the vicinity. Monitoring system 100 recognizes the "forbidden" zones and reacts to the approaching and crossing of the borders of these zones. The technique of verifying the alarm condition will be described further in FIG. 6, FIG. 7, and FIG. 11. At step 350, all observed objects are classified by monitoring system 100 as being "known" or "unknown". All people and pets living on this territory, and also important inanimate objects, can be "introduced" to monitoring system 100 by the user, so they are added to the database of "known" objects 474. The user gives them names and access privileges to the "forbidden" and "allowed" zones.
At step 360, a user may create and augment rules which describe the alarm condition. Pre-defined rules represent a description of a situation that may be dangerous. In other words, rules may be described with the help of a GUI of user processor-controlled devices that provides grammatical tools in the form of main questions such as: "Who is doing something?"; "Where is she/he doing it?"; "When is she/he doing it?"; and "What is she/he doing?". Monitoring system 100 may already have some rules for alarm conditions. Also, the user may add some types of activities that may be considered as abnormal events.
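A minimal sketch of how a user-defined rule built from the "who / where / when / what" questions might be represented and evaluated is shown below; the field names, the example rule and the grown-up-in-the-vicinity exception are illustrative assumptions, not the claimed rule format.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class AlarmRule:
    """One user-defined rule answering: who, where, when, what."""
    who: str                  # object ID or class, e.g. "child" or "unknown"
    where: str                # zone name on the territory plan, e.g. "swimming_pool"
    when: Optional[tuple] = None   # (start_hour, end_hour) or None for any time
    what: Optional[str] = None     # activity label, e.g. "approaching", or None
    exception: Optional[Callable[[dict], bool]] = None  # e.g. "an adult is nearby"

def rule_triggers(rule, event):
    """event: {'who', 'where', 'hour', 'what', 'context'} describing a detection."""
    if rule.who not in (event["who"], "any"):
        return False
    if rule.where != event["where"]:
        return False
    if rule.when is not None and not (rule.when[0] <= event["hour"] < rule.when[1]):
        return False
    if rule.what is not None and rule.what != event["what"]:
        return False
    if rule.exception is not None and rule.exception(event["context"]):
        return False          # exception holds, e.g. a grown-up is in the vicinity
    return True

# Example rule: alarm if a child approaches the pool and no adult is within 5 metres.
child_near_pool = AlarmRule(
    who="child", where="swimming_pool", what="approaching",
    exception=lambda ctx: any(d < 5.0 for d in ctx.get("adult_distances_m", [])))
```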
FIG. 6 and FIG. 7 provide an exemplary block diagram of a computer processing unit 120 operating in accordance with one or more aspects of the present disclosure to verify the alarm condition and send an alert notification to the external system or user processor-controlled devices.
At step 410, the computer processing unit 120 receives data of a video frame from at least one sensor, for example, from a camera viewfinder 200. The frame is analysed based on known image processing techniques.
At step 420, detection and localization of anatomical landmarks of a human body on the frame is performed. Anatomical landmarks of a human body may include, for example, a forehead, a chin, a right shoulder, a left shoulder, a right palm, a left palm, a right elbow, a left elbow, a right hip, a left hip, a right knee, a left knee, a right foot, a left foot, and so on. This list is illustrative, as other parts of a human body may be detected. A hierarchy of body parts exists and may be used for verification of a correct detection of an object. FIG. 8 illustrates schematically an exemplary screen of a user processor-controlled electronic device that contains a frame with identified objects in a viewfinder, according to an implementation. Anatomical landmarks such as a forehead 501, a chin 502, a right shoulder 503, a left shoulder 504, a right palm 505, a left palm 506, a right elbow 507, a left elbow 508, a right hip 509, a left hip 510, a right knee 511, a left knee 512, a right foot 513, a left foot 514, etc. of human bodies are identified within the frame 500. At step 420, monitoring system 100 may interact with a short-term memory storage 425 to extract data about previous landmark positions. The term "previous" may refer to the information extracted from the previously analysed frames.
The identified anatomical landmarks are the basis points to generate a human skeleton and to detect the pose and activity of a human. The human body is identified based on a deep learning object recognition algorithm. For example, the approach "Real-time Multi-Person 2D Pose Estimation using Part Affinity Fields" of Zhe Cao, Tomas Simon, Shih-En Wei, Yaser Sheikh, arXiv:1611.08050, 2016, may be used. Any other machine learning algorithm, such as "Long short-term memory" (a recurrent neural network), may also be used here.
Referring to FIG. 8, two persons 500a and 500b and a pet are identified within the frame. Each person is included in a describing rectangle that is designed based on the anatomical landmarks of the human body. FIG. 9 illustrates an example user interface of monitoring system 100 according to an implementation. Namely, FIG. 9 illustrates exemplary results of human skeleton restoration, individual identification, and pose detection on a larger scale.
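The describing rectangle and a coarse pose guess can be derived directly from the detected anatomical landmarks, as in the following illustrative sketch; the landmark naming and the sitting/standing heuristic are assumptions of this example, not the pose model itself.

```python
def describing_rectangle(landmarks):
    """Axis-aligned rectangle around the detected anatomical landmarks.

    landmarks: {name: (x, y)} with keys such as 'forehead', 'left_foot', ...
    """
    xs = [p[0] for p in landmarks.values()]
    ys = [p[1] for p in landmarks.values()]
    return min(xs), min(ys), max(xs), max(ys)

def coarse_pose(landmarks):
    """Very rough standing/sitting guess from relative hip, knee and foot heights.

    Illustrative heuristic only; the actual pose classification would come from
    the skeleton model described above.
    """
    hip_y = (landmarks["right_hip"][1] + landmarks["left_hip"][1]) / 2
    knee_y = (landmarks["right_knee"][1] + landmarks["left_knee"][1]) / 2
    foot_y = (landmarks["right_foot"][1] + landmarks["left_foot"][1]) / 2
    # In image coordinates y grows downwards; when sitting, hips drop towards knees.
    return "sitting" if (knee_y - hip_y) < 0.3 * (foot_y - hip_y) else "standing"
```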
A person can be reliably identified by his/her face or audio timbre. However, most of the time the face is not visible to the video cameras. In one of the embodiments, one of the video cameras is directed toward the entrance to the territory, so at least it captures a face of everybody who enters. Once a person is identified by his/her face or audio timbre, all other metrics of this person are measured and updated: his/her current clothes, hairstyle, attached objects such as earrings, rings and bracelets, samples of lip color, and body shape measurements. Some of such metrics are built into monitoring system 100. However, in one of the embodiments, monitoring system 100 can add new metrics to itself, as it challenges itself and learns automatically over time to recognize "known" people on this territory better and better using even minor clues. For example, after a certain amount of time monitoring system 100 will recognize a person by only a fragment of his/her hand.
Some metrics are known to be constant over a long time, including, for example, facial features, audio timbre or body shape measurements. Other metrics, such as clothes, are only valid for a short time, as a person can change clothes. However, such "short-time" metrics are extremely robust for short-term tracking. Clothes are often more visible to a camera than a face. Some other metrics can have a "medium duration", for example, a hairstyle.
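One possible way to combine long-lived and short-lived metrics when matching a detection against a "known" person is a weighted score that ignores expired metrics, as sketched below; the validity windows and weights are assumed values for illustration.

```python
import time

# Assumed validity windows (seconds) and weights for the metrics discussed above.
METRIC_PROFILES = {
    "face":    {"valid_for": None,       "weight": 0.45},  # effectively permanent
    "voice":   {"valid_for": None,       "weight": 0.20},
    "body":    {"valid_for": None,       "weight": 0.15},
    "haircut": {"valid_for": 30 * 86400, "weight": 0.10},  # "medium duration"
    "clothes": {"valid_for": 12 * 3600,  "weight": 0.10},  # "short-time" but robust
}

def match_score(similarities, stored_metric_times, now=None):
    """Weighted similarity between a detection and a stored "known" person.

    similarities: {metric: value in [0, 1]} comparing current features with the
    stored ones; stored_metric_times: {metric: unix time the stored metric was
    recorded}. Expired "short-time" metrics (e.g. yesterday's clothes) are skipped.
    """
    now = now or time.time()
    score, weight_sum = 0.0, 0.0
    for metric, profile in METRIC_PROFILES.items():
        if metric not in similarities or metric not in stored_metric_times:
            continue
        valid_for = profile["valid_for"]
        if valid_for is not None and now - stored_metric_times[metric] > valid_for:
            continue
        score += profile["weight"] * similarities[metric]
        weight_sum += profile["weight"]
    return score / weight_sum if weight_sum else 0.0
```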
Referring to FIG. 6, monitoring system 100 recognizes a detected person as "known" or "unknown". The person may be identified precisely and an identification of the person is assigned. FIG. 8 also depicts the main information about the identified object, for example, the type of the object (person or pet), identification, name, pose (standing, sitting) and status. Referring to FIG. 6, at step 430, reading of face metrics, body metrics, and clothes metrics is performed. Monitoring system 100 recognizes the biometric parameters of humans within the frame of the video stream. Monitoring system 100 may use identification based on the shape of the body of a human, haircut or clothes. This type of identification is not reliable on its own but may be used as one of the levels of identification. Behavioral characteristics are related to the pattern of behavior of a person, including but not limited to typing rhythm, gait, and voice. In one embodiment, the person may be identified based on the timbre of the voice. The voice may not be recorded for privacy reasons. Monitoring system 100 tracks the sound intensity and the spectral characteristics of the sound. Behavioural characteristics are stored in the memory of monitoring system 100 and analysed.
Monitoring system 100 may know what behaviour, type and time of activities is typical for a specific object. At step 430, monitoring system 100 may interact with a short-term memory storage 435 of faces, bodies, and clothes metrics, which contains the information about faces, bodies, and clothes metrics detected in the previous frame.
At step 440, person identification is performed based on the information assembled from previous steps and extracted from the database of known persons 474.
Moreover, monitoring system 100 allows for identification of a "known" person who has just changed clothes. At step 450, monitoring system 100 verifies whether any detected person has changed clothes. If somebody has changed clothes, monitoring system 100 goes to step 460 to update the clothes metrics data and to add it to the short-term memory of faces, bodies, and clothes metrics 435.
If nobody within the frame has changed clothes, monitoring system 100 checks at step 470 whether all persons are identified. If there are unidentified objects, monitoring system 100 checks at step 471 whether it is necessary to uniquely identify the person or not. For example, monitoring system 100 may send a request to the user to add, at step 473, a new "unknown" person to the database of "known" persons 474. The user may approve the request and give information (name, age, etc.) about detected objects, which may be further used in the person identification procedure. If it is not necessary to identify the person, monitoring system 100 assigns ID = "unknown" at step 472. In case of an emergency situation (alarm condition), the biometric data of this unknown person is transferred to external services, for example, to the police.
FIG. 7 illustrates the continuation of FIG. 6. At step 480, monitoring system 100 updates the geometrical (spatial) positions of all objects based on the information from the short-term memory of geometrical positions and personal IDs 485.
At step 490, monitoring system 100 compares people's IDs, their spatial positions, timestamps and activity types with the list of alert triggers. A database of pre-set alert conditions 492 may be used here. The database of pre-set alert/alarm conditions 492 may include descriptions of specific objects, such as blood or weapons. If an alert is generated at step 491, monitoring system 100 may send an alert notification 494 to the external systems or devices and user-controlled devices. If an alert is not generated at step 491, monitoring system 100 continues 493 to process data from the next frame. Techniques for improvement of identification may be used to increase the accuracy of monitoring system 100. For example, such techniques may be used to track a person sitting or standing with their back to the camera, moving from one room to another room, changing clothes or hidden behind furniture. Techniques for tracking a person may be augmented using audio data from microphones. Such tracking is particularly important when an identified object disappears from one camera viewfinder and does not appear in another camera viewfinder, to prevent losing track of the identified object. When the location of a person is known, it is correlated with the signals from the microphones. Then monitoring system 100 may restore the coordinates of a person based only on the signals from the microphones.
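The short-term memory of geometrical positions and personal IDs (480, 485) can be illustrated by a small tracker that remembers the last known plan coordinates per ID and reports identities that have not been seen for some time, so the system can fall back to microphone-based localisation; the timeout value below is an assumption of this sketch.

```python
import time

class PositionMemory:
    """Short-term memory of geometrical positions and person IDs (cf. 480/485)."""

    def __init__(self, lost_after=5.0):
        self.lost_after = lost_after   # seconds without a camera sighting (assumed)
        self._last = {}                # person_id -> (x, y, timestamp)

    def update(self, person_id, xy, timestamp=None):
        """Record the latest plan coordinates for an identified person."""
        self._last[person_id] = (xy[0], xy[1], timestamp or time.time())

    def position(self, person_id):
        """Return the last known (x, y) for the ID, or None if never seen."""
        entry = self._last.get(person_id)
        return None if entry is None else (entry[0], entry[1])

    def lost_ids(self, now=None):
        """IDs not seen recently; candidates for microphone-based localisation."""
        now = now or time.time()
        return [pid for pid, (_, _, t) in self._last.items()
                if now - t > self.lost_after]
```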
The proposed monitoring system 100 is able to identify animals, for example, cats, dogs, etc. Identification of pets is needed for further video frame decomposition and analysis. For example, monitoring system 100 does not react to the motion of pets when the house is temporarily left by the owners. On the other hand, when a pet bites a child, monitoring system 100 reacts and calls an alert or makes contact with a designated adult in the closest vicinity to the child. Such a rule describing this dangerous situation may be added by the user into monitoring system 100.
The described monitoring system 100 is adaptive. This means that monitoring system 100 may work in different modes to save energy and resources. FIG. 10 illustrates an exemplary block diagram of running monitoring system 100 in several modes according to an implementation. At step 720, monitoring system 100 is launched. Most of the time monitoring system 100 works in an energy saving mode. At step 730, monitoring system 100 receives signals from sensors 110, namely low-resolution, low-frequency audio and video data. At step 740, monitoring system 100 detects only a few categories of objects and their coarse activity. If activity is not detected at step 750 within the frame, monitoring system 100 returns to step 720.
If activity is detected at step 750, monitoring system 100 switches into a full processing mode 760 to detect at step 770 the type of activity precisely, as described above with reference to FIG. 6 and 7. Monitoring system 100 may verify some conditions at step 780, for example, whether every object has left the territory and N seconds have passed. If these conditions are not fulfilled, monitoring system 100 returns to step 720.
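A minimal sketch of the mode switching of FIG. 10 is given below: the system stays in an energy saving mode until coarse activity is detected, switches to full processing, and drops back once the territory is empty for a number of seconds; the idle interval is an assumed value of this example.

```python
import time

class AdaptiveMonitor:
    """Energy-saving / full-processing mode switching (cf. FIG. 10, steps 720-780)."""

    def __init__(self, idle_seconds=30):
        self.mode = "energy_saving"
        self.idle_seconds = idle_seconds   # N seconds of emptiness (assumed value)
        self._last_activity = None

    def on_frame(self, coarse_activity_detected, objects_present, now=None):
        """Update the mode from one low-resolution or full-resolution frame."""
        now = now or time.time()
        if coarse_activity_detected or objects_present:
            self._last_activity = now
            self.mode = "full_processing"
        elif (self.mode == "full_processing"
              and self._last_activity is not None
              and now - self._last_activity >= self.idle_seconds):
            # Every object has left the territory and the idle interval has passed.
            self.mode = "energy_saving"
        return self.mode
```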
The described monitoring system 100 is able to recognize pre-defined alarm signals, such as, for example, a gesture, a spoken phrase, or an audio signal, to increase the accuracy in preventing dangerous events. The alarm signal is needed in the case when monitoring system 100 is not able to recognize the problem automatically, requiring a person to signal the problem. FIG. 11 illustrates an exemplary block diagram to detect the alarm signal and send an alarm notification to external services or user processor-controlled devices according to an implementation.
At step 810, monitoring system 100 receives a description of types of activity detected earlier. The flowchart in FIG. 11 provides more detail. At step 820, recognition of "alarm on" and "alarm off" secret gestures is performed. For example, secret gestures are included in monitoring system 100 during the calibration stage of monitoring system 100. Secret gestures are intended to notify monitoring system 100 about an abnormal event. Secret gestures may trigger the sending of the alarm notification ("alarm on" secret gestures) or stop monitoring system 100 ("alarm off" secret gestures) from performing an action based on a false triggering. In one of the embodiments, monitoring system 100 may have a pre-set list of pre-trained visual and audial alarm on/off gestures or phrases that are known to be particularly easy for monitoring system 100 to recognize. The user then would choose one or a few from this list.
Moreover, monitoring system 100 is able to recognize secret audio signals.
So, at step 830, monitoring system 100 may receive audio frames. At step 840, recognition of "alarm on" and "alarm off" secret phrases is performed. For example, a short-term memory 850 of audio frames is used for this purpose.
At step 860, it is verified whether an "alarm on" manual signal was received and not cancelled during M seconds by receiving an "alarm off" manual signal. If these conditions are not fulfilled, at step 880 monitoring system 100 continues monitoring by going to the next frame and keeping track of detected objects. If these conditions are fulfilled, monitoring system 100 sends an alert notification at step 870.
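The manual alarm handling of steps 860 to 880 can be sketched as a small timer object: an "alarm on" signal arms it, an "alarm off" signal within M seconds cancels it, and otherwise the notification is sent; the window length and the callback are assumptions of this example.

```python
import time

class ManualAlarm:
    """"Alarm on" / "alarm off" secret signal handling (cf. FIG. 11, steps 860-880)."""

    def __init__(self, notify, cancel_window=10.0):
        self.notify = notify                 # callable that sends the alert notification
        self.cancel_window = cancel_window   # M seconds (assumed value)
        self._armed_at = None

    def on_signal(self, signal, now=None):
        """Handle a recognized secret gesture or phrase."""
        now = now or time.time()
        if signal == "alarm_on":
            self._armed_at = now
        elif signal == "alarm_off":
            self._armed_at = None            # false triggering cancelled

    def tick(self, now=None):
        """Called every frame; fires the alarm once the cancel window has expired."""
        now = now or time.time()
        if self._armed_at is not None and now - self._armed_at >= self.cancel_window:
            self.notify("manual alarm confirmed")
            self._armed_at = None
```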
The high-level interface providing information about the observed territory in an informative and compact way is now described. The proposed representation technique does not require the user to watch the video and listen to the audio in order to understand what people do in an area under surveillance.

The proposed representation technique allows the user to receive a precise overview of the observed territory in just a few seconds. As a result, aspects of the present disclosure allow an increase in the speed of the tracking process and a reduction of the cognitive load on the user. Moreover, aspects of the present disclosure enable the user to review data and pick any moment of time in the past to analyse the situation in the past. The system suggests tracks to show the spatial location of the objects in the past. This helps to check the situation on the territory not only at the current moment but also in the past, without picking an exact moment of time. FIG. 12 illustrates an exemplary user processor-controlled device 1200 operating in accordance with one aspect of the present invention. In illustrative examples, the user processor-controlled device may be represented by any electronic device, including a laptop computer, a personal computer, a smart phone, a tablet computer, etc. A user processor-controlled device may comprise a processor, memory storage, a display 1201, a keyboard, an input device, etc. The user processor-controlled device may include touch screen input units that may be represented by a touch-input sensitive area. An exemplary graphical user interface with the mini map on the screen of a user processor-controlled device 1200 is represented. In one embodiment, processed data from sensors of a monitoring system may be represented on a dashboard, wherein users can track a position of a surveillance object. In one embodiment, data processed by computer processing unit 120 may be extracted from the memory storage 130 to be represented in visual form on the display of a user processor-controlled device 150.
FIG. 13 illustrates an exemplary graphical user interface of a dashboard 1300 with a mini map 1305 and a timeline 1340 with a slider 1350 on the screen of a user processor-controlled device 150. A dashboard 1300 comprises a customizable screen that compactly and visually displays vital information on different aspects of the system. The dashboard 1300 collects and represents the most important information, more particularly information about all observed objects, their activity, and their allocation within the territory. For example, the dashboard may comprise a mini map (or generated plan) 1305. Mini map 1305 (or, in other words, the plan of the territory) may be generated by the security system 105 based on the identified geometrical parameters. Generation of the territory mini map may be performed in a semi-automatic way based on the data from the sensors 110. For example, information about the angle of a camera from the gyroscope makes it possible to define the plan of the territory more precisely. In one embodiment, the system creates the description of the preliminary plan and the user may correct the results of the created plan at step 330 (FIG. 5) to increase the accuracy of identified parameters of the territory. In another embodiment, the system creates the description of the living space in an automatic way without operator assistance.
Referring to FIG. 13, the GUI represents an exemplary dashboard 1300 with a generated mini map 1305 of the territory, wherein the information about observed objects is indicated at a specific moment of time. The designed plan 1305 may include all available information about the territory, namely the allocation of rooms, their geometrical sizes, windows, doors, stairs, sources of light, the type of room, etc. In other words, the generated plan depicts a graphical image of the territory and is created to schematically represent to the user the received and processed data from the sensors of the monitoring system. For example, to represent information for multiple floors of the observed territory, the interface may connect multiple mini maps to each other.
Dashboard 1300 represents a customized screen on a user processor-controlled device 150, wherein Graphical User Interface (GUI) 160 provides a user with the possibility to change the settings of the application.
For example, GUI 160 allows the user to define the view mode of the mini map. The term "view mode" may refer herein to the way the mini map is represented to a user on a display. In one embodiment, the mini map of the territory may be displayed on the screen of user processor-controlled device 150 as a schematic picture or drawing of the territory from above. One of the main features of the present technique is representing the information in the form of a schematic drawing or cartoon. Existing security, monitoring and control systems show all complete information on the screen of a display of a user processor-controlled device, thereby increasing the cognitive load on the user and decreasing the speed of the tracking process. The schematic graphical image representation of data from sensors allows the system to show only the most important details, which helps focus the attention of a user on the most important information and reduces the time of the tracking process. In another embodiment, the mini map of the territory may be displayed from any specific angle, creating for a user a sense of presence. The user may move within the mini map to make an overview of the situation on the territory to be observed.
The described invention enables the user to add new details on the dashboard that should be represented on the screen of a user processor-controlled device 150. The degree of detail may be changed in the settings of the application. In one exemplary embodiment, the mini map may include only the plan of the territory with an indication of the allocation of rooms and the titles of rooms (such as "children's bedroom" 1321, "children's bathroom" 1322, "living room" 1323, "master bedroom" 1324, "master bathroom" 1325), etc. In another exemplary embodiment, the mini map may include the allocation of doors and windows, furniture or other objects, such as a fireplace, swimming pool, baths, etc.
The mini map 1305 may comprise information about the location of cameras 420, microphones and additional sensors.
The observed objects may be indicated on the mini map 1305. In comparison to existing techniques, the observed objects within the mini map may be represented by means of avatars 1320 and schematic figures. An avatar may be chosen by default or changed by the user of the system. This type of object representation preserves the privacy of information about the personal life of a user of the given system.
Moreover, a dashboard 1300 may comprise a timeline 1340 with a slider 1350 that enables the user to choose a moment of time on the timeline 1340. The user can pick any time in the past to check the situation in the past. By scrolling the slider 1350, a user initiates a dynamic change of the graphical image representation of the information within the mini map. The term "dynamic" means the change of the information (objects, their activity, their allocation) over time. The user may overview the information by scrolling the slider by means of touch input or cursor 1202 in a short period of time, providing a considerable speeding-up of information provision. In a conventional system, the user is required to view all video data.
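Serving the slider position can be illustrated by looking up, for every tracked object, the latest stored record not later than the chosen time, as in the sketch below; the history layout is an assumption of this example.

```python
from bisect import bisect_right

def state_at(history, chosen_time):
    """Return objects and positions to draw on the mini map for a slider position.

    history: {object_id: [(timestamp, x, y, activity), ...]} sorted by timestamp.
    For each object the latest record not later than `chosen_time` is used.
    """
    snapshot = {}
    for object_id, records in history.items():
        times = [r[0] for r in records]
        i = bisect_right(times, chosen_time)
        if i:
            _, x, y, activity = records[i - 1]
            snapshot[object_id] = {"xy": (x, y), "activity": activity}
    return snapshot
```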
Moreover, the current user interface may comprise a track 1360 to show the spatial location of an object in the past. The term "track" may refer to the trace of the object that is identified based on geometrical parameters of the object. Representation of track 1360 within the mini map 1305 gives visual information about the allocation and the movement of the objects over time within the territory. The time period for track representation may be set by default, for example to 30 minutes, or may be changed by the user.
The track may be represented on the mini map in different ways. For example, a track 1360 may be painted as a curved line with a color of varying intensity. The intensity of coloring may show the speed of the movement of the object within the territory. A high intensity of coloring may indicate a high velocity of movement. A low intensity of coloring may indicate a low velocity of movement.
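One way to produce such a speed-coloured track from stored positions is sketched below; the normalisation of intensity to the fastest segment is an assumption of this example.

```python
def track_segments(points):
    """Convert a recorded track into drawable segments with speed-based intensity.

    points: [(timestamp, x, y), ...] ordered in time. Returns a list of
    ((x1, y1), (x2, y2), intensity) tuples with intensity in [0, 1], where faster
    movement maps to a more intense colouring, as described above.
    """
    segments, speeds = [], []
    for (t1, x1, y1), (t2, x2, y2) in zip(points, points[1:]):
        dt = max(t2 - t1, 1e-6)
        speed = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5 / dt
        speeds.append(speed)
        segments.append(((x1, y1), (x2, y2), speed))
    top = max(speeds) if speeds else 1.0
    return [(a, b, s / top if top else 0.0) for a, b, s in segments]
```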
The mini map 1305 may provide short and precise information about the detected activity of the objects. For example, the GUI may give some notes about the activities, such as "Lilly is washing hands" 1332, "Mum is sitting on the sofa" 1334, and "Bob went out 10 min ago" 1333, etc.
In one embodiment, the user may indicate, for example by touch, an object and receive precise and short information about that specific object. For example, in response to user selection of the Lilly avatar 1320, the system may provide full information about the activity of Lilly during the day. For example, the textual report may include textual information about the main events, such as "Lilly woke up at 9 am, went to the kitchen, had breakfast with the nurse, spent 2 hours in the study room, spent one hour nearby the swimming pool, ...", "Mom cooked breakfast at 8.30 am, cleaned the rooms, spent 2 hours with Lilly in the children's bedroom ...", "Bob took medicine at 8 am, read a book for 3 hours, left the house at 1 pm ...", etc. Any text-to-speech technique may be used here to reproduce events during the requested time period.
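Such a textual report can be assembled from a per-person event log, as in the following illustrative sketch; the sentence template and log layout are assumptions, and the resulting string could be passed to any text-to-speech engine.

```python
from datetime import datetime

def daily_report(name, events):
    """Compose a short textual report such as the "Lilly" example above.

    events: [(unix_timestamp, description)], e.g. (..., "went to the kitchen").
    Events are sorted chronologically and joined into one readable string.
    """
    lines = []
    for ts, description in sorted(events):
        at = datetime.fromtimestamp(ts).strftime("%H:%M")
        lines.append(f"{name} {description} at {at}")
    return ". ".join(lines) + "." if lines else f"No events recorded for {name}."
```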
FIG. 14 illustrates an exemplary block diagram of interaction of a user with the graphical interface of the system in accordance with one aspect of the invention. The method of interaction of a user with the graphical interface of the system 100 starts at step 1400 and proceeds to step 1470. At step 1410, a user launches the system application window and may select the view mode. Then, at step 1420, a user may send a request to show the information about the observed territory on the mini map on a dashboard 1300 on the display of a user processor-controlled device. Also, the user may select the time period of data representation by means of the timeline 1340 with slider 1350. The user may use input devices, such as an optical mouse or touch screen display, to indicate the time moment on the timeline.

When launched, the application may respond to the input of a user received from sensors of a security and monitoring system 105. In one embodiment, data processed by computer processing unit 120 may be extracted from the memory storage 130 to be represented in visual form on the display of a user processor-controlled device 150.
The identified objects, their positions and the identified activity are displayed on the mini map of dashboard 1300 at step 1450. If not all information is represented, then the user may request additional information at step 1420, for example, the user indicates a specific time moment with the slider 1350 on the timeline 1340. Otherwise, the user may monitor the territory or close the system application window.
FIG. 15 provides a further example with a drawing of a home monitoring and security surveillance system integrated within a home environment. The system includes a processing unit connected to a set of sensors and provides intelligent home monitoring with a function of notification and emergency calling. FIG. 16 shows an example of the main components of a sensor node with a single camera, called the CherryCam, which provides sensor and video/audio data streams via a Wi-Fi/Bluetooth connection to the processing unit inside the house and acts according to software requirements. It comprises three main components:
• Main PCB;
• Camera PCB;
• Battery (removable and rechargeable).
FIG.17 and 18 show examples of the CherryCam design.
The main PCB includes:
a. Low power 32-bit microcontroller STM32 F4-series;
b. Video processor OV798;
c. Barometer;
d. Accelerometer;
e. Gyroscope;
f. Magnetometer;
g. 2x MEMS microphones;
h. Light sensor;
i. LED indicator;
j. Flash memory;
k. WiFi, Bluetooth 4.0, and FM RX module;
l. Micro-SD socket;
m. Speaker and audio amplifier;
n. Display mode switch.
The camera PCB includes:
a. 2x CMOS RGB image sensors OV4689 with permanent IR-filter;
b. Infra-red LEDs;
c. Motion sensor (PIR).
The removable battery (exceeding 3500 mAh) includes:
a. Microcontroller STM32 LO-series;
b. USB-C port;
c. Charge & battery fuel gauge;
d. LED indicators;
e. Li-Pol 3.7V battery;
f. QI 1.2 charge module.
Examples of usage scenarios are now described.
Connection of processing unit to home Wi-Fi network:
1. Connecting the unit to the power supply. The LED indicator starts to blink in white.
2. Installing the application on a mobile phone. Signing up and creating an account if necessary.
3. Application searches unit and performs connection via Bluetooth.
4. User inputs required data via application: selects Wi-Fi network and enters the key.
5. If the connection was successful, the LED indicator stops blinking and starts to glow in white.
6. If an Ethernet cable is used, no input data is required.
Connection to the processing unit:
1. Inserting the battery in CherryCam. The LED indicator starts to blink in white and a sound signal occurs.
2. Direct connection via Bluetooth (close range):
a. In case CherryCam is close to the unit, the connection can be established directly via Bluetooth.
b. The camera detects the unit via Bluetooth, connects, and obtains internal Wi-Fi parameters from the unit.
3. Connection via phone application (distant range):
a. In case CherryCam is far from the unit, the connection can be established indirectly using the phone application.
b. Phone detects camera via Bluetooth and establishes connection.
c. The phone establishes a connection with the unit via Wi-Fi, obtains internal Wi-Fi parameters and transmits them to CherryCam via Bluetooth.
4. If the connection was successful, the LED indicator stops blinking, glows in white for approx. 20 seconds and then turns off. A standby state or mode is available. The following events may activate the standby state:
• default state of CherryCam after connection to the unit.
• if no movement/noise is detected by the corresponding sensors, a timeout starts; on expiration of the timeout, the standby state is initiated;
• processing unit request via Wi-Fi/Bluetooth.
In the standby state, the motion sensor, microphone set and Bluetooth are active. Other components are in sleep mode; "keep alive" signals are also sent to the processing unit via Wi-Fi/Bluetooth at regular intervals.
Other states or modes are available such as, but not limited to:
• Single, 480p and day mode;
• Single, 480p and night mode;
• Single, 720p and day mode;
• Single, 720p and night mode;
• Single, 1080p and night mode;
• Stereo, 480p and day mode;
• Stereo, 480p and night mode;
• Stereo, 720p and day mode;
• Stereo, 720p and night mode;
• Stereo, 1080p and night mode;
As an example, 'single, 480p and day mode' may have the following parameters:
• Examples of events which may activate the state are:
o Motion sensor trigger in case of any motion detected within the sensors' range;
o Microphone set trigger: in case any noise above a threshold is detected by the microphone set;
o Processing unit trigger: activation is performed by the processing unit signal via Bluetooth/Wi-Fi.
o light sensor command;
o processing unit request.
Operation: transferring video/audio streams and sensor data via Wi-Fi/Bluetooth to the processing unit, and a 4MP still-shot on the processing unit request. When there is a 4MP still-shot request, the camera is switched to the corresponding single mode (for approximately 1 second), takes the still-shot and returns to the current mode.
As another example, 'single, 480p and night mode' may have the following parameters:
Examples of events which may activate the state are:
o Motion sensor trigger in case of any motion detected within the sensors' range;
o Microphone set trigger: in case any noise above a threshold is detected by the microphone set;
o Processing unit trigger: activation is performed by the processing unit signal via Bluetooth/Wi-Fi.
o light sensor command;
o processing unit request.
Operation: transferring video/audio streams and sensor data via Wi-Fi/Bluetooth to the processing unit.
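As a rough illustration of how the trigger events above select an operating state, the following sketch combines the motion/noise/unit triggers with the light sensor reading to name a mode; the exact naming and signature are assumptions of this example, not the device firmware.

```python
def select_mode(stereo_requested, resolution, is_dark, motion, noise_above_threshold,
                unit_request):
    """Pick a CherryCam state from the trigger events listed above.

    Returns a (mode_name, active) pair. If no trigger fired, the node stays in
    standby; otherwise the single/stereo, resolution and day/night parts are
    combined into one of the listed mode names (e.g. "single, 480p, day mode").
    """
    activated = motion or noise_above_threshold or unit_request
    if not activated:
        return "standby", False
    parts = ["stereo" if stereo_requested else "single",
             resolution,                      # e.g. "480p", "720p", "1080p"
             "night" if is_dark else "day"]
    return ", ".join(parts) + " mode", True
```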
Note
It is to be understood that the above-referenced arrangements are only illustrative of the application for the principles of the present invention. Numerous modifications and alternative arrangements can be devised without departing from the spirit and scope of the present invention. While the present invention has been shown in the drawings and fully described above with particularity and detail in connection with what is presently deemed to be the most practical and preferred example(s) of the invention, it will be apparent to those of ordinary skill in the art that numerous modifications can be made without departing from the principles and concepts of the invention as set forth herein.

Claims

1. A monitoring system comprising:
sensors that monitor activity within a designated territory, the sensors including visual sensors that make video recordings;
a local processing system located within or proximate to the designated territory, the local processing system receiving signals from the sensors, the local processing system processing and analyzing the signals from the sensors to produce messages that describe activity within the designated territory as monitored by the sensors, the messages not including audio, visual or other direct identifying information that directly reveal identity of persons within the designated territory; and
a monitoring station outside the designated territory, the monitoring station receiving the messages produced by the local processing system and making the messages available to external observers.
2. A monitoring system as in claim 1, wherein the messages describe actions performed by a person or animal within the designated territory sufficiently to allow an external observer to determine when external intervention is required.
3. A monitoring system as in claim 1 or 2, wherein the monitoring station produces an alarm when the messages describe actions performed by a person or animal that indicate an external intervention may be required.
4. A monitoring system as in any preceding claim wherein the sensors are selected to monitor children, disabled, sick or old persons.
5. A monitoring system as in any preceding claim wherein when the monitoring station forwards to the local processing system a query from an external observer, the local processor responds with a message, wherein contents of the message are based on activity within the designated territory detected by processing and analyzing the signals from the sensors.
6. A monitoring system as in any preceding claim, wherein the monitoring station produces an alert when the messages indicate a predetermined gesture or predetermined phrase is detected by the sensors.
7. A monitoring system as in claim 6, wherein the monitoring station cancels the alert when the messages indicate a predetermined cancelation gesture or predetermined cancelation phrase is detected by the sensors.
8. A monitoring system as in any preceding claim, wherein the local processing system internally identifies bodies of people and animals within the designated territory based on video data using at least one of the following techniques:
face metric identification that identifies and records facial features;
clothes metric identification that detects and records types of clothes on a body;
body metric identification that detects and records body shapes;
activity metric identification that detects and records current activity of a body;
hair-cut metric identification that detects and records hair-cut style;
tool metric identification that detects and records objects held or in near vicinity to a body.
9. A monitoring system as in claim 8, wherein the local processing system recognizes changes made to bodies within the designated territory and updates affected metric identifications.
10. A monitoring system as in claim 8, wherein the local processing system analyzes kinetic motion of identified bodies to ensure movements are consistent with laws of physics to detect and resolve inconsistencies when internally identifying bodies.
11. A monitoring system as in claim 10, wherein the local processing system additionally uses audio data to internally identify bodies of people and animals within the designated territory, the audio data including timbre metrics and relative sound amplitudes.
12. A monitoring system as in any preceding claim, wherein the local processing system internally identifies bodies of people and animals within the designated territory based on video data using more than one of the following techniques:
face metric identification that identifies and records facial features;
clothes metric identification that detects and records types of clothes on a body;
body metric identification that detects and records body shapes;
activity metric identification that detects and records current activity of a body;
hair-cut metric identification that detects and records hair-cut style;
tool metric identification that detects and records objects held or in near vicinity to a body.
13. A monitoring system as in any preceding claim, wherein the designated territory includes an area in at least one of the following:
a school,
a prison,
a hospital,
a shopping mall,
a street,
an office,
a parking lot.
14. A monitoring system as in any preceding claim, wherein the designated territory comprises a plurality of rooms, the local processing system constructing a layout plan of the territory using video data from the sensors.
15. A monitoring system as in claim 14, wherein the local processing system additionally uses audio data to construct the layout plan.
16. A monitoring system as in claim 14, wherein when constructing the layout plan of the territory, the local processing system utilizes at least one of the following:
estimates of room geometry based on detecting horizontal and vertical lines;
detection of major objects within field of view of video sensors;
neural network algorithms to recognize a room type based on a database of known room types;
measurements based on requested configuration activities of a user within a room;
configuration audio signals detected by audio sensors.
17. A method for monitoring activity within a designated territory, comprising:
using sensors to make video recordings;
forwarding signals from the sensors to a local processing system located within or proximate to the designated territory,
using the local processing system to process and analyze the signals from the sensors to produce messages that describe activity within the designated territory as monitored by the sensors, where the messages do not include audio, visual or other direct identifying information that directly reveal identity of persons within the designated territory;
sending the messages to a monitoring station outside the designated territory, the monitoring station making the messages available to external observers.
18. A method as in claim 17, additionally comprising:
producing an alert by the monitoring station when the messages indicate a predetermined gesture or predetermined phrase is detected by the sensors.
19. A method as in claim 17, additionally comprising:
internally identifying bodies of people and animals within the designated territory based on video data using one or more of the following techniques:
face metric identification that identifies and records facial features,
clothes metric identification that detects and records types of clothes on a body,
body metric identification that detects and records body shapes,
activity metric identification that detects and records current activity of a body,
hair-cut metric identification that detects and records hair-cut style, and
tool metric identification that detects and records objects held or in near vicinity to a body.
20. A method as in claim 17, additionally comprising:
recording audio data by the sensors, the audio data being used to internally identify bodies of people and animals within the designated territory, the audio data including timbre metrics and relative sound amplitudes.
EP18790579.9A 2017-04-28 2018-04-30 Computer vision based monitoring system and method Withdrawn EP3616095A4 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201762491832P 2017-04-28 2017-04-28
US201762501545P 2017-05-04 2017-05-04
US15/647,129 US11037300B2 (en) 2017-04-28 2017-07-11 Monitoring system
GBGB1717244.6A GB201717244D0 (en) 2017-10-20 2017-10-20 Cherry I
PCT/US2018/030103 WO2018201121A1 (en) 2017-04-28 2018-04-30 Computer vision based monitoring system and method

Publications (2)

Publication Number Publication Date
EP3616095A1 true EP3616095A1 (en) 2020-03-04
EP3616095A4 EP3616095A4 (en) 2020-12-30

Family

ID=63920130

Family Applications (1)

Application Number Title Priority Date Filing Date
EP18790579.9A Withdrawn EP3616095A4 (en) 2017-04-28 2018-04-30 Computer vision based monitoring system and method

Country Status (3)

Country Link
EP (1) EP3616095A4 (en)
JP (1) JP2020522828A (en)
WO (1) WO2018201121A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7225551B2 (en) * 2018-03-23 2023-02-21 富士フイルムビジネスイノベーション株式会社 Biting detection device and program
WO2020219802A1 (en) 2019-04-24 2020-10-29 The Research Foundation For The State University Of New York System and method for tracking human behavior real-time with single magnetometer sensor and magnets
KR102021441B1 (en) * 2019-05-17 2019-11-04 정태웅 Method and monitoring camera for detecting intrusion in real time based image using artificial intelligence
US20230386253A1 (en) * 2020-10-08 2023-11-30 Nec Corporation Image processing device, image processing method, and program
CN117423197B (en) * 2023-12-18 2024-02-20 广东省华视安技术有限公司 Intelligent security software monitoring method and system

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7634662B2 (en) * 2002-11-21 2009-12-15 Monroe David A Method for incorporating facial recognition technology in a multimedia surveillance system
US7369685B2 (en) * 2002-04-05 2008-05-06 Identix Corporation Vision-based operating method and system
US20060136743A1 (en) * 2002-12-31 2006-06-22 Polcha Andrew J System and method for performing security access control based on modified biometric data
JP4789511B2 (en) * 2004-06-04 2011-10-12 キヤノン株式会社 Status monitoring device and status monitoring system
US7956890B2 (en) 2004-09-17 2011-06-07 Proximex Corporation Adaptive multi-modal integrated biometric identification detection and surveillance systems
US20070036395A1 (en) * 2005-08-15 2007-02-15 Okun Sheri L Reverse identity profiling system with alert function
US7437755B2 (en) * 2005-10-26 2008-10-14 Cisco Technology, Inc. Unified network and physical premises access control server
US7636033B2 (en) * 2006-04-05 2009-12-22 Larry Golden Multi sensor detection, stall to stop and lock disabling system
JP2007317116A (en) * 2006-05-29 2007-12-06 Honda Motor Co Ltd Vehicle having facial authentication system mounted thereon
JP2009088789A (en) * 2007-09-28 2009-04-23 Toshiba Corp Camera monitoring system
JP2009252215A (en) * 2008-04-11 2009-10-29 Sogo Keibi Hosho Co Ltd Security device and information display method
EP2442286B1 (en) 2009-06-11 2018-09-26 Fujitsu Limited Suspicious person detection device, suspicious person detection method, and suspicious person detection program
US9741227B1 (en) * 2011-07-12 2017-08-22 Cerner Innovation, Inc. Method and process for determining whether an individual suffers a fall requiring assistance
JP2014007685A (en) * 2012-06-27 2014-01-16 Panasonic Corp Nursing care system
WO2014182898A1 (en) * 2013-05-09 2014-11-13 Siemens Aktiengesellschaft User interface for effective video surveillance
WO2015093330A1 (en) * 2013-12-17 2015-06-25 シャープ株式会社 Recognition data transmission device
US9697656B2 (en) * 2014-08-19 2017-07-04 Sensormatic Electronics, LLC Method and system for access control proximity location
JP6474126B2 (en) * 2015-02-27 2019-02-27 Kddi株式会社 Object tracking method, apparatus and program
US20170032092A1 (en) * 2016-06-16 2017-02-02 Benjamin Franklin Mink Real Time Multispecialty Telehealth Interactive Patient Wellness Portal (IPWP)

Also Published As

Publication number Publication date
WO2018201121A1 (en) 2018-11-01
EP3616095A4 (en) 2020-12-30
JP2020522828A (en) 2020-07-30

Similar Documents

Publication Publication Date Title
US11120559B2 (en) Computer vision based monitoring system and method
US11875656B2 (en) Virtual enhancement of security monitoring
EP3616095A1 (en) Computer vision based monitoring system and method
KR100838099B1 (en) Automatic system for monitoring independent person requiring occasional assistance
Dahmen et al. Smart secure homes: a survey of smart home technologies that sense, assess, and respond to security threats
US6968294B2 (en) Automatic system for monitoring person requiring care and his/her caretaker
JPWO2015093330A1 (en) Recognition data transmission device, recognition data recording device, and recognition data recording method
JP7375770B2 (en) Information processing device, information processing method, and program
Guerrero-Ulloa et al. IoT-based system to help care for dependent elderly
TWI713368B (en) Image device and method for detecting a thief
Cao et al. An event-driven context model in elderly health monitoring
EP3992987A1 (en) System and method for continously sharing behavioral states of a creature
US20230046710A1 (en) Extracting information about people from sensor signals
US11521384B1 (en) Monitoring system integration with augmented reality devices
CN117730524A (en) Control method and system
JP2024518439A (en) Surveillance system and method for recognizing determined person activity - Patents.com
Dai Implicit Human Computer Interaction: Helping People in Their Working and Living Spaces

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20191126

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20201202

RIC1 Information provided on ipc code assigned before grant

Ipc: H04W 12/06 20090101ALI20201126BHEP

Ipc: G08B 13/196 20060101AFI20201126BHEP

Ipc: G06K 9/00 20060101ALI20201126BHEP

Ipc: H04N 7/18 20060101ALI20201126BHEP

Ipc: H04L 9/32 20060101ALI20201126BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20220412

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20220822